Contemporary Decentralization

Feb 9, 2021

By Craig Calcaterra and Wulf Kaal

Abstract

Chapter 2 highlights the transition from Web 2.0 companies, such as Google, Facebook, and Uber, among many others, which have disrupted much of the economy, to more decentralized solutions. Consumers around the globe are becoming increasingly accustomed to the advantages of decentralized business. Improvements in peer-to-peer technology may over time remove the need for the centralized owners of these Web 2.0 companies. Bitcoin serves as an example of a measurably valuable network with a thoroughly decentralized ownership structure. The chapter evaluates how existing shortcomings in open source technology and Web3 DApps for decentralized autonomous organizations, among others, can be addressed to help humanity benefit from the decentralized technology evolution.

The book can be accessed here:

https://www.amazon.com/Decentralization-Technologies-Organizational-Societal-Structure/dp/3110673924/

and here:

https://www.degruyter.com/view/title/569051

Chapter 2. Contemporary Decentralization

Web 2.0 began around 1999, when companies such as Google, Amazon, and Wikipedia exploited the power of decentralization using the tools of personal computers and the internet, taking advantage of previously untapped talent and knowledge. Wikipedia decentralizes knowledge collection and organization; PayPal decentralizes payment services; Skype decentralizes telecommunications; Spotify decentralizes file sharing; Google Maps decentralizes data collection for knowledge about road traffic, just as the Google search engine decentralizes data collection for knowledge about website popularity by monitoring internet traffic; YouTube decentralizes video production, putting a television studio in the pocket of anyone with a smartphone. Marketplaces of all kinds were decentralized by eBay, Amazon, Craigslist, Airbnb, Upwork, and Uber. With the power of contemporary information technology and intuitive UI design, these idiot-proof applications allow children to do jobs that formerly required long training and substantial material investment.

Intuitive user interfaces rely on the IT advance of dynamic processing. JavaScript was a revolution that allowed intuitive and interactive functionality for Web 2.0, compared with the relatively static information storage and transmission that Web 1.0 provided with HTML and CSS. This empowers everyone to be a content creator. Web 2.0 companies connect billions of these newly empowered individuals with light-speed broadcasting. The resulting cooperation leads to knowledge beyond previous imagination. The decentralized interconnectivity of the web offers up this Olympian perspective to everyone on the web, further magnifying our power. By loosely curating and controlling this content with automated protocols, these companies charge fees, advertise, direct our attention, request donations, and use the knowledge of the network for market advantage. Since these profits are derived from global networks, their potential is titanic.

Decentralization is upon us. We are already experiencing the advantage of decentralization in most areas of our lives. The internet has given us the tools to decentralize economics, education, and entertainment. With an algorithm, YouTube automates the process of allowing anyone with a smartphone to create and share education and entertainment content with the world. PayPal allows globally networked peer-to-peer financial transactions. eBay and Amazon and Alibaba have algorithms that automate globally networked peer-to-peer trading contracts. Uber’s algorithm automates the connection of riders and drivers, unlocking the working potential of anyone with a car and a smartphone. Upwork and TaskRabbit decentralize more of the gig economy, allowing anyone with any skill to match directly with customers and employers. Facebook’s algorithm facilitates social connections around the globe. Google’s Maps and search engine utilize the decentralized information from user traffic to feed algorithms that automate the directions and commercial decisions of their users. Wikipedia’s algorithm facilitates knowledge creation and sharing[1]; along with YouTube, it is among the most important tools for learning new skills in any endeavor, from life hacks and cooking to computer programming and graduate-level physics. Decentralized tools are transforming the way people behave at every level of society, in every facet of our lives.

With the launch of the iPhone in 2007, smartphone adoption outpaced earlier technologies such as the lightbulb, the telephone, TV, and the personal computer. The smartphone became the consumer technology with the fastest adoption rate, reaching 40% market saturation in just two and a half years. The role of smartphones in societal change can hardly be overstated. For the unbanked, who lack access to traditional bank accounts but have high rates of mobile phone ownership, smartphones and mobile money are playing a critical role in financial inclusion. The smartphone provides access to stored value accounts and a growing set of financial services that can change lives. The Arab Spring of 2011 and the protests following the killing of George Floyd in 2020 sparked global changes because of the power of social media. Common citizens are now journalists and have the power of news broadcast rooms in their pockets. Average people are able to connect to more viewers than major media corporations could two decades ago, thanks to social media platforms such as Facebook and Twitter.

From Social Media to Decentralized Coordination

Social media transfers knowledge from the edges of society into the mainstream. Social media allows people who hold otherwise marginalized or underappreciated views to meet kindred spirits online and form groups that broadcast and promote their shared ideas. When these newly formed social networks grow, they can increase their influence and promote their perspectives until they gain mainstream adoption. Otherwise invisible social, ethical, environmental, and political issues can thus gain traction. Increased visibility of these issues can transfer the balance of power from the few to the many. Power is diffused — decentralized.

Facebook is widely credited with beginning the Arab Spring uprisings that led to massive political protests throughout North Africa and the Middle East starting in 2011 and revolutions in Tunisia, Libya, Egypt, and Yemen.[2] How did Facebook know which stories were important to promote in its network? Did some nameless employee come across an important incident and spread it around the network? Not at all. The centralized company Facebook didn’t know or choose to do anything directly related to the beginnings of the protest. The platform was used as a tool of protestors and revolutionaries to communicate their messages of dissatisfaction. “It allows them to circumvent state-controlled media. What we’ve seen in the Arab Spring in the use of Twitter, YouTube, Facebook, all of these things, […] what it allowed protestors to do was to circumvent these dictatorships, their traditional means of controlling information, which was the state television network, the state radio, the state newspapers.”[3]

How do the protestors use social media to organize their protests? In the subsequent global Occupy protests, in the Hong Kong protests, the George Floyd protests, and many others, protests have not been centrally controlled. There is no president of the Hong Kong protests. Neither do the companies who own and run these social media platforms guide the movements. They don’t know which stories will spark revolution. Yet stories do go viral and change the world.

How does Facebook know to tell its users the most important news stories of the day? How does Twitter know the best new cat video? How does Netflix know the top 10 comedy movies of the 1990s? How does Google know what the best Thai restaurant in Rome is? Quite simply, they don’t. The companies don’t go out and answer these questions all day every day for themselves and then share it with you. The network itself knows these answers. The network of users has the answers in their members’ behavior. The social media companies simply monitor the networks’ transactions and statistically analyze the information with automated algorithms. Information at the edge is gleaned mathematically using our new information technology.

As the broadest extension of technological decentralization, the internet era gave rise to the most significant societal decentralization. Communication and commerce were freed of geographic limitations. At the beginning of the 2020s, about one fourth of humanity engages in virtual communication on social media in some capacity. Social media created a form of social cohesion that was unprecedented in its geographic reach. Views and values could be exchanged and influenced across distances as never before. Previously marginalized groups can coordinate their efforts worldwide through online groups that promote their shared ideas. Social functions that formerly belonged to local groups are increasingly being fulfilled by social media exchanges, where influence is allocated to the most popular content and its creator.

Societal boundaries are continuously being shifted via internet-based knowledge exchange and social media. In the pre-internet era, knowledge was mostly accumulated by and exchanged among specialists. Internet-based knowledge sharing helps remove the information silos and information privileges that created societal hierarchies. With the dawn of the social media age, the level of interaction between non-specialists has increased dramatically, removing hegemony and centralized control structures over information while creating a more skilled and knowledgeable workforce. For example, micro-task platforms such as Crowdflower and Amazon Mechanical Turk enable lower-skilled or unemployed individuals to earn a living over the internet. Recruitment for such work typically takes place over social media outlets, that is, by word of mouth on social media channels.

Internet-based collective decision-making via the crowd can replace the centralized coordination functions in society. For example, in the past, product quality assessments were centrally disseminated and evaluated on consumers’ behalf, by way of Consumer Reports, among others. In the social media age, collective decision-making through the power of the crowd is perhaps the most prevalent method of product evaluation and forces companies to take heed. Similarly, knowledge and views from the edges of society can be moved into the societal mainstream very quickly via social media. This transfer can erode existing societal consensus, social cohesion among established groups, and order in the process. Otherwise less visible and influential social, ethical, environmental, and political issues can thus gain traction rather quickly. Traditional modes of coordinating human behavior, by way of political decision making, democratic institutions, business governance, and learning, among many others, are slowly becoming less prevalent.

The increased network capability of society that is promoted by social media can change and improve the coordination of human behavior in society. Social media data and metrics can replace centralized coordination of human behavior. For example, social media posts often identify emergency information more accurately, and disseminate it more quickly, than centralized media reports. Similarly, for groups that coordinate their conduct, as for example in the Arab Spring and other reform movements, social media is not merely relegated to information exchange but can actually coordinate protest movements. In the product context, the social media conduct of groups becomes a very powerful product placement and marketing device. Product-specific or content-specific conduct on social media can become a form of ‘social proof’ for such products or services. However, because the incentive design for social proof is suboptimal, the social media coordination function is still largely flawed and corruptible. For example, one of the first things that happens if you open a shop on Amazon is that you receive several messages from malicious sock-puppet wranglers offering to game the rating system to improve your company’s ratings while attacking your major competitor.

Decentralized technology solutions are starting to tap into the coordination function that was inaugurated by social media, while also improving it with new tools. Social media allows the enhanced coordination of information that was previously isolated on the edges of society. Because social networks feed off interactions among people, they become exponentially more powerful as they grow, due to the network effect. But these networks are stifled by the centralized ownership and governance of the Web 2.0 companies that run them. Governing decentralized information flow necessitates decentralized incentive designs.

Societal decentralization is a byproduct of broader societal trends that derive from the combined feedback effects of decentralization of science, technological decentralization, organizational decentralization, as well as market and governmental decentralization. Such are the precursors of an ever freer and more open society. Centralized ownership and control of the Web 2.0 information filtering algorithms can be useful to prevent users from gaming the system. By keeping their algorithm opaque it is more difficult for outside interests to exploit the way these social media companies guide their users to information. The Google search engine algorithm is constantly being improved, because website developers infer the rules of the algorithm and exploit its properties to raise their site’s ranking. For example, if you wanted to sell widgets on your website in 2006, you could make a blue background and type “widget” in the same blue color thousands of times. The 2006 Google search engine algorithm (PageRank) would then raise your website’s relevance in any person’s search for “widget”. The 2007 algorithm saw through this trick and would punish any page that used it. So website developers moved on to other tricks in 2008.

But a different incentive mechanism can improve the algorithms even faster. Instead of an arms race between the outside exploiters and the centralized companies who host the network, if the P2P network were decentrally owned by the users themselves, the algorithms could be open source and still remain safe. Instead of a centralized company continually developing the opaque algorithm to punish people who try to optimize their content, P2P networks can reward members properly for improving the algorithm. By rewarding users for policing exploitation, by incentivizing the network to defend the algorithm they own, these decentralized networks leverage the power of a much larger talent pool. Instead of having few insiders and many outsiders, open access flips the ownership model, creating as many insiders as can possibly contribute. This flipping of the ownership of Web 2.0 companies to a decentralized ownership model is the heart of the Web3 movement we will discuss later.

Page and Brin deserve to be lauded and rewarded for inventing the PageRank algorithm that underlay the early Google search engine. But the primitive system of recognizing only the ultimate legal winners, the Jobses and Gateses, is being improved. The next Pages and Brins will certainly be recognized and rewarded under this new model of ownership, but the army of developers who further improve the systems will also get their due.

Sharing Economy

Peer-to-peer connectivity is giving new life to the sharing economy. The sharing economy refers to the utilization of previously idle services and goods and the partial use of others’ property rights in goods. For instance, you might provide your car, or your time, as part of a peer-to-peer transaction, often over a platform built to unite the interested parties. Unlike traditional centralized ways of production and selling to consumers by hiring employees, platform companies in the sharing economy typically provide the technological setup that allows individuals to share their property rights in goods or sell their services without centralized employment. Individuals who connect via platforms in the sharing economy share their property rights in, for example, cars, homes, or rent out their personal skillsets and time in a peer-to-peer form of engagement.

The sharing economy has become part of modern society’s mainstream. The origins of the sharing economy can be traced back to an emphasis on sustainability, resource efficiency, and community. As the sharing economy evolved, not only did its services and industry acceptance proliferate, but the sharing economy’s credo of “access over ownership” became more mainstream. The public had grown accustomed to receiving services and goods on-demand via digital and mobile technologies — especially given the perception of the internet as universal access to information. On-demand access to goods and services became part of modern society; it was no longer a preference and habit of millennials alone.

The sharing economy necessitates a reframing of legacy legal regimes and frameworks. The legal frameworks that regulate disrupted and associated industries are often incompatible with the emerging trends generated by the sharing economy. Cities and municipalities had to learn that the sharing economy requires a proactive stance to channel its outputs and associated new requirements into economic development while at the same time protecting the public with regulation. While some cities have joined forces to declare common commitments and principles for sharing cities[4] and many municipalities are developing transportation-as-a-service platforms to better meet the needs of all residents, some states in the United States have passed legislation that in some ways undermines the sharing economy.[5] Yet some countries, such as Denmark, have changed their internal regulations to better accommodate sharing economies.

The values that enabled the new flowering of the sharing economy morphed from an emphasis on connectivity for the sake of sustainability to a focus on connectivity and community as a commodity. In other words, connectivity, and community building via increased connectivity, became a purpose in itself. The purpose of sharing economy participants shifted from connectivity for a cause, such as a community for sustainability, toward mass consumption for convenience and transactional efficiency.

The ultimate sign of the sharing economy’s success is its increasing recognition in policy, economic, and business circles, as part of the overall economy. The sharing economy has the potential to shape entire markets that are better connected and more efficient. It has started to blur the lines between industries and even former competitors.

How did we get here, and where are we going?

History of Web 1.0 and Web 2.0

In harnessing the power and talent of the masses, one of the problems that Web 2.0 companies solved was the problem of individual success under Web 1.0: the more popular your content became, the more it cost you to host it. Ultimately, internet users made a Faustian bargain with Web 2.0 companies to host their content.

In the early days of Web 1.0 if you wanted to post content, you would build your own webpage. Then you could buy a special router to connect your computer to the internet after obtaining special addresses (IP and AS numbers) and permissions from the King of the Internet.[6] Then you would need to keep your computer server running and your telephone lines open, so that anyone who wanted to view your webpage could contact your computer and ask it to send the information.

This was not a problem in the early days when very few people were using the internet. But the network quickly grew, so if your webpage was at all popular, this setup would create a bottleneck. The solution at the time was to hire a middleman. These functions could be achieved by internet service providers (ISPs)[7] who could provide the bandwidth necessary to allow your page to be seen by the world.

The idea at the time was that ISPs would compete to become powerful utilities, since they would provide essential services for the commons. They would provide as much bandwidth as possible to their users to justify their expansion. The incentive structure that would solve the problem was for originators of TCP requests[8] to pay the bill. That way, ISPs who hosted more content would be paid by the ISPs who hosted more consumers.

This didn’t solve the problem, as the Tragedy of the Commons[9] naturally asserted itself. Porn sites and pirated file sharing (often set up by the ISPs themselves to game the payment design[10]) quickly used up any available bandwidth. Further, negotiations between ISPs were much more nuanced than the plan outlined above; the accounting didn’t merely resolve according to TCP requests. Creators concerned with fringe issues, such as science and social causes, were not a priority and could be charged, since they cared enough about their causes to pay. ISPs charged low-volume providers on a per-connection basis. If your site suddenly became popular, with thousands of people constantly accessing your content, you had to pay for thousands of long-distance phone calls. Individuals with popular pages were forced to delete their content.

Web 2.0 companies provided the solution. YouTube, Facebook, and Reddit provide free hosting for your text, picture, and movie files. In exchange they have access to the information that content providers want to share. They own any personal information from viewers that can be gleaned. They control what content can and can’t be shared, guiding popular opinion. And most importantly, they have access to our attention.

Each of these Web 2.0 companies has disrupted their industries in dramatic ways. But these examples all use a centralized hierarchical business model for ownership and governance of information. For example, internet-based markets — like Craigslist, eBay, and Amazon — are hybrid centralized companies that decentralize the customer experience, since anyone can be a vendor and anyone can be a reviewer. Buyers and sellers are directly connected. The internet allows their platforms to scale globally. More users means exponentially more connections[11] — the network effect means the leap in connections matches customers more efficiently with vendors, increasing sales and efficiency.
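The arithmetic behind the network effect is easy to check: in a network of n users, the number of possible pairwise connections is n(n−1)/2, so a hundredfold growth in users yields roughly a ten-thousandfold growth in possible connections. A minimal illustration in Python (our sketch, not drawn from any platform's code):

```python
def pairwise_connections(n: int) -> int:
    """Distinct user-to-user (e.g., buyer-seller) pairs among n users."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, pairwise_connections(n))
# 10 -> 45, 100 -> 4950, 1000 -> 499500:
# users grew 100x, possible connections grew about 10,000x.
```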

Intuitive user interfaces — idiot-proof design — help maximize the size of the network. Ride-sharing businesses, like Uber, take advantage of this increased efficiency by providing free apps that anyone can use to do business, connecting a rider with an available driver in a few clicks. The centralized owners control this software and therefore control the market and can dictate prices. They don’t charge transparent fees. They adjust to real-time information about supply and demand to maximize their profits. If there are many riders demanding rides during rush hour, they can increase fees. If there are too many drivers, they can pay them less. These Web 2.0 companies combine the structure, control, and profit optimization of centralized companies with the power of decentralization due to network effects.

Thanks to these Web 2.0 companies, which have disrupted much of the economy, consumers around the globe are becoming accustomed to the advantages of decentralized business. But improvements in peer-to-peer (P2P) technology are prompting the question of whether the centralized owners of these Web 2.0 companies are necessary at all. We now have the technology to decentralize these companies completely. Bitcoin is an example of a measurably valuable network with a thoroughly decentralized ownership structure.

Open source culture

In order to have a truly decentralized organization, the rules must be transparently available to all members. Otherwise the keepers of the knowledge have a higher status, creating a hierarchy. Further, all members must be encouraged to contribute to these rules, according to their talents. Transparency is fundamental to fostering the trust between members necessary to build a decentralized network. The functions of all software must be publicly auditable for people to trust it. If one person could own the copyright to some of the software a decentralized organization uses, then they would have de facto power over the organization, again establishing hierarchies of power within the organization. We wouldn’t be talking about the decentralized organizations of the future without open source software. The open source movement has already transformed our world.

A fundamental divide exists at the heart of computer programming regarding the copyrightability of software. At the beginning of electronic computing, researchers in universities and technology companies came from a tradition of freely sharing their work in service of their field’s progress. This openness in academia can be traced at least to the Renaissance, but all societies’ golden ages are characterized by a flourishing of innovation which can be directly attributed to temporary open collaboration in their culture. Early computer programs were basic algorithms — closer to mathematical proofs than to fictional works of literature — and were simply seen as elegant distillations of clear thought that anyone would come to, given sufficient time and effort. In the 1950s and ’60s most computer companies did not license their operating systems. As programs became more complex, companies began to view their software as intellectual property. In 1976 the U.S. Congress updated the Copyright Act of 1909, and, based on the recommendations of the National Commission on New Technological Uses of Copyrighted Works (CONTU), wrote an amendment in 1980 clarifying that software is creative art, similar to literature and copyrightable in any form.[12]

Software companies such as Microsoft, IBM, and AT&T began to enforce their copyrights with license fees and no longer distributed source code.[13] Richard Stallman, a major voice in the free software movement, decried these practices as unethical and as stultifying to the field of software development, since they limit the ability to build on others’ work. Advocating the use of “free” software (“free as in ‘freedom’, not as in ‘free beer’”[14]), Stallman created the GNU Project in 1983, which was formalized in the non-profit Free Software Foundation in 1985. Their GNU General Public License (1989) implemented the copyleft legal mechanism, which grants users the rights to use purchased software without further charges and to modify the program’s source code, but requires all future derivatives to remain under the same license.

Linus Torvalds released his Linux operating system (OS) under the GNU license in 1992; it has become the most common operating system, running unnoticed in most mobile phones. Linux has more than 1500 developers.[15] The crypto community boasts anywhere from 4000 to 200,000 developers per month, but these figures are completely unreliable due to anonymity (especially given the uncertain legal environment) and the fact that very few of this number work on stable projects. Ethereum likely has the largest community, with approximately 200 full-time developers.

The Apache HTTP Server software was the next major open source project, which now underlies almost every click you make on the internet. It’s worth exploring the history and operation of the largest stable open source programming community, the Apache Software Foundation (ASF), with 7800 high-level developers, called committers. Today ASF has 202 active committees working on 340 active projects.[16] They are responsible for the experience we have with the internet today, since every major software company uses many of the tools Apache has built and released for free in the last three decades. The network started in 1993 on a project that became the Apache HTTP Server, which today is the world’s most popular web server software.[17] The name Apache was chosen partly to signal their affinity for the Native American tribe’s indomitability and decentralized nature, and partly as a pun — their main concern in the beginning was creating Apache software patches for internet products.

All work released by the foundation uses the Apache license, which is an anchor of the current open source software (OSS) movement. The Apache license gives users the legal right to use the software for any purpose, to distribute it, to modify it, even to profit from it, without ever paying the Apache foundation. The Apache license differs from the GNU license in that new software deriving from work under the Apache license is not required to remain under the Apache license. New work may be patented or copyright protected by its innovators. The only constraint is that the modified ASF file must be annotated carefully with a NOTICE text file explaining the changes.

Some of the rate of technological innovation is due to the open source culture, and it is especially important in the emerging API economy. The “API economy” is a term that comes from a programming structure called an API, which stands for Application Programming Interface. In software development an API is a metaphorical bridge between two otherwise incompatible programs — the interface through which one program can talk to another. The API economy is a recognition that the interoperability between many of the digital tools in our lives and in business is leading to great leaps in efficiency, requiring complex new legal and business negotiations due to the near continual innovation in collaboration displayed in the use of these tools.

As an example, after working on my car, my mechanic texts me the diagnosis and a bill. I choose from a list of recommended maintenance and fixes that I want them to do, and I pay through my smartphone, which stores my credit card and interacts with the mechanic’s payment app to reach their bank account. This triggers the shipment of parts to the mechanic, automatically paying the distributor and the shipping service. There are at least eight programs owned by eight different interests interacting in this nearly trivial transaction, and they all need APIs to interact.
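In code, each of those hand-offs is an adapter between two programs that were never designed together. Here is a schematic sketch in Python, with every name and data format invented for illustration (no real shop-management or payment API is being described):

```python
def shop_invoice() -> dict:
    # Format emitted by a hypothetical shop-management program.
    return {"invoice_id": "A-1041", "total_cents": 18250, "currency": "USD"}

def to_payment_request(invoice: dict) -> dict:
    """The API 'bridge': translate the shop's invoice into the schema a
    hypothetical payment processor expects."""
    return {
        "reference": invoice["invoice_id"],
        "amount": invoice["total_cents"] / 100,  # processor wants dollars
        "currency": invoice["currency"],
    }

print(to_payment_request(shop_invoice()))
```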

APIs create the standards that allow companies to exchange data and build seamless omnichannel experiences for their customers. Interoperability strengthens networks, making them more pervasive and useful, leading to greater adoption. Interoperability thus naturally increases network effects, making networks more valuable to members and users.

As our systems become more interoperable, they will require more sophisticated APIs, which will require more access to the source code of each of the separate apps. An open source culture accelerates this development. P2P technologies provide neutral platforms for the API economy. The level playing field of decentralized technologies provides the ideal market for companies to negotiate and collaborate.

But how do we build powerful and valuable technological platforms that nobody owns? Let’s first look to the Apache foundation to see how it has thrived in the last few decades.

The ASF is a decentralized organization. Because of its nonprofit status, it can survive with a very loose governance process, dominated by a do-ocracy philosophy — doing things is the primary governing force for the group. If people are interested in working on a new project and are willing to uphold the organization’s values (following their code of conduct), they are usually encouraged, because a primary value of the network is to build community. Surprisingly, the major source of tension in the community is that some for-profit companies such as Facebook or Google will pay their workers to contribute to projects they need. These workers often devote far more effort to a project and sometimes push out volunteers. Then the paid workers leave once their duties are complete, leaving no one to maintain and upgrade the software in the future. The maxim “Community Over Code” is repeatedly used to promote the idea that a good community can fix code problems, but a bad community cannot maintain even the best code in the long run.

Despite the looseness of the Apache do-ocracy, such strife necessitates some foundational rules for arbitrating conflicts, such as whether a project is finished and should be released under the Apache name. In such cases, there are 781 members who may vote up (+1), down (-1), or abstain (0). If the sum of the votes is positive, the proposal passes. There are three ranks of power in ASF: contributors, committers, and members. Only the members can vote. You can become a member if another member nominates you and the other members vote positively on your candidacy. Therefore, ASF is technically a closed organization. But the ASF is also quite open, in the sense that anyone in the world can be a contributor on any project, in almost any capacity. However, their contributions are only suggestions until they are approved by a committer. Committers have the power of write access to their project repositories. Contributors with a strong record of improving projects can become committers on a project if the members in the project nominate and approve them.
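The tally rule itself is almost trivially simple. A sketch of the vote described above, in Python (our rendering of the rule, not ASF code):

```python
def proposal_passes(votes: list[int]) -> bool:
    """Members vote +1 (up), -1 (down), or 0 (abstain);
    the proposal passes if the sum of votes is positive."""
    return sum(votes) > 0

assert proposal_passes([+1, +1, -1, 0])      # sum = +1: passes
assert not proposal_passes([+1, -1, 0, 0])   # sum = 0: fails
```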

How, though, can we explain the success of Apache? Why do people donate their talents and efforts to valuable projects from which they receive no remuneration? How do they maintain quality? How has the network survived for three decades? Remember, a decentralized network survives by living its values.

The members are happy to explain that their motivations include altruism, social reputation, and belonging and contributing to a community. Owen O’Malley, the VP of Apache ORC, explains, “If everyone knows that I did a piece of code, then I’m a lot more careful to make it good. And if it’s just something that is going into a proprietary software, then I can be a little sloppier.”[18] Members simply feel good about themselves and enjoy contributing to projects that they see improving the world. Making a meaningful contribution to code that is used around the world is an impressive line on your resume. But the most compelling argument we hear is Funktionslust. “When I create something, I want it to be beautiful as well as functional.” The German word Funktionslust means the pleasure you enjoy by doing something you’re good at. “We also have to work with a lot of closed-source software, and it just sucks. It’s hard to debug. It’s hard to reason about. With the ASF you work together with a bunch of people who chose these products to collaborate.” The open source culture brings together talent from anywhere on the planet. Projects get populated with people who want to work on that product — who are good at solving those problems. Mostly, it’s fun. It’s fun to work together with other people who share your values, to use your talents and improve yourself. It’s fun to teach and to learn, especially on a project you care about. By acting as if they are living in a post-scarcity society, they create one.[19] Andy Shi, a Developer Advocate for Alibaba, explains, “Joining forces with the open source effort is rewarding. You give a little; you take back way more. So that’s really what I want to share, especially with developers and companies in China. Don’t be afraid to give, to share. You will get more out of it.”

Eric S. Raymond wrote the essay “The Cathedral and the Bazaar” and organized the committee that introduced the term open source (first suggested by Christine Peterson) in 1998.[20] The “Cathedral” refers to centralized, carefully controlled, closed-source development projects, while the “Bazaar” refers to the more chaotic, decentralized, open-source projects. Both models have advantages and disadvantages. Since its publication, many major projects have switched repeatedly between the two models, with Apache being perhaps the most visible and enduring success of the Bazaar model.

For the last three decades, Apache HTTP has been the most popular server software in the world. Google built the Android operating system on the backbone of Linux under the open source Apache 2.0 license. Android is the most popular mobile platform worldwide. The popularity of these open source platforms is largely due to the fact that independent developers have less legal confusion about what they can contribute, control, and own in the open source environment.

These reasons bring us some of the way towards understanding how the open source movement can exist. But how can it thrive? The Apache Software Foundation’s near trivial governance system works in an organization whose members are not competing for power and money within the organization. The Apache foundation rightfully brags that its 227 million lines of code have given the world a value of US$20 billion.[21] That is impressive for a volunteer force seeking no monetary rewards, but it’s a blip in the global economy. However, not all is positive in an environment of maximal transparency. Sometimes open source projects fail to attract the quality of developer that a closed-source project can reliably procure with more funding and control. Another problem is that the very success of the open source movement allows major corporations to pressure smaller projects to reveal their code, then take it and use their power to exploit the work more profitably than the startup can.

How can we improve such open decentralized networks? How can we properly incentivize larger swaths of the economy to compete with for-profit centralized companies? The Apache Foundation has built a successful non-profit software development DAO. What tools do we need to build a for-profit software development DAO — a Maghribi Foundation?

Early peer-to-peer file-sharing programs demonstrated that the untapped bandwidth available from individuals’ uploading capacity was enough to compete with large ISPs. At one point, P2P networks accounted for a majority of internet traffic. The bandwidth has always been there, but the latency of past P2P projects was higher than people now demand — it took longer to receive the information from individual devices, which are not optimized for broadcast speed compared with dedicated servers. New projects promise to address this problem with computation and storage sharing, not just file sharing. Let’s look closer at P2P technology and see how it is being used to replace Web 2.0 companies.

P2P, blockchains, and Web3

Technological innovation is rapidly accelerating. Hard on the heels of the computer revolution that culminated in the Web 1.0 internet, Web 2.0 disrupted major sectors of our economy by giving global networking power to any consumer. Intuitive user interfaces (UIs) transform children into gods of information, whose abilities would make Hermes blush. Using these UIs, companies are harnessing previously untapped talent, taking advantage of new wells of decentralized information.

Now, less than two decades later, Web3 hopes to furnish the next information technology revolution.[22] This vision for the future is to fully decentralize every aspect of IT with peer-to-peer (P2P) technology. The goal of the Web3 movement is to foster radical bureaucratic transparency using open source design, to further individual autonomy and privacy using cryptography, and to level access to information and computing resources with decentralized networks.

P2P technology

Most people are unaware of the many types of P2P technology that the internet relies on, but we are more familiar with the somewhat analogous technology of cloud computing. Cloud computing uses the internet to displace your storage and processing of information from your personal device to a more powerful distant device. Dropbox (2008), for example, is a privately-owned cloud storage company which allows you to store your files (movies, pictures, documents) on their servers instead of your home device, then access the files on-demand through the internet. The advantage is that most people can trust a large company to back up their storage much more reliably than they can personally, and the files will be available regardless of how technology changes from floppy disks to CDs to flash drives etc.

Similarly, there are cloud computing architectures for displacing information processing from your personal device to a distant computer.

Since your files are stored redundantly on many backup servers, we can say that cloud computing decentralizes the function of your personal computer. But most cloud computing is centralized in the sense that you need to rely on a centrally owned company to provide the service. The disadvantage of centralization is losing power over your personal information, paying for the privilege, and hoping the files stay secure despite the reality of their centralized point of failure (if the company goes bankrupt or is hacked, for example). Web3 is devoted to the challenge of creating decentralized alternatives.

P2P is an alternative decentralized architecture to cloud computing. Napster was an early decentralized P2P file sharing system (1999) which revolutionized how users accessed music, completely disrupting the music distribution industry. Improving on Napster, BitTorrent (2001) protocols allow users to share files directly between each other in a more fully peer-to-peer setting, without storing file content on any centralized server. The idea is that anyone running a client (software that runs the BitTorrent algorithm) will be able to automatically share files from their computer (called seeding), making them available to anyone who wishes to view those files and download them. Large files are split into pieces and held redundantly on many different computers in the network to make them available on demand. So P2P architectures create open decentralized networks, where every user starts on an equal footing.
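Schematically, the piece mechanism looks like the following Python sketch. Real BitTorrent clients publish a SHA-1 fingerprint for every piece in a torrent file, so a downloader can verify each piece, fetched from any untrusted peer, before accepting it (the sizes and data here are invented for illustration):

```python
import hashlib

PIECE_SIZE = 256 * 1024  # pieces are commonly 256 KiB or larger

def piece_hashes(data: bytes) -> list[str]:
    """Split a file into fixed-size pieces and fingerprint each one."""
    pieces = [data[i:i + PIECE_SIZE] for i in range(0, len(data), PIECE_SIZE)]
    return [hashlib.sha1(p).hexdigest() for p in pieces]

file = b"x" * (3 * PIECE_SIZE + 100)   # a toy "large file"
fingerprints = piece_hashes(file)

# Piece 1 arrives from some anonymous peer; verify before accepting it.
received = file[PIECE_SIZE:2 * PIECE_SIZE]
assert hashlib.sha1(received).hexdigest() == fingerprints[1]
```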

The point of a P2P network is to achieve distributed computing without centralized control. Distributed computing is splitting a big difficult task across many members as equally as possible. The goal is to unite a network of nodes (i.e., computers) all running the same protocol and all sharing the work. A simple example of P2P is the internet itself, where computers all over the world run the same protocol for communicating with each other and sharing data. Very roughly, the internet has a protocol for keeping track of all the website addresses (DNS) and users’ addresses (IP numbers, allocated through ISPs and NSPs), and protocols for connecting computers (HTTP, TCP/IP) so they can request data or computations from each other.

Other P2P services use an even simpler architecture to link a network of users, with the goal of bringing the technological requirements down to the level of the average individual’s laptop, as BitTorrent demonstrates. Another example is Tor (2002), which helps anonymize internet usage, protecting citizens’ and reporters’ access to communication under oppressive governments.[23] Each user can participate anonymously to build a cooperative network which provides greater value for all, at least theoretically.

Earlier centralized companies succeeded against more decentralized P2P platforms because of the technology available at the time and the natural incentive design built into our capitalist civilization. Commercial devices available to the average user could not provide the upload speed that industrial servers with “fat pipes” could, meaning greater latency for P2P platforms built from average individual computer enthusiasts. Centralized companies had the proper incentive design to provide a better quality of experience by overseeing commercially-oriented, intuitive user interface upgrades. And large companies would find negotiating with a central bureaucracy more worthwhile than with individual computer geeks, thanks to economies of scale.

One problem with most existing decentralized filesharing networks is that unpopular files (such as your personal files) are not guaranteed to be available, unlike centralized cloud computing services, which guarantee availability for a fee. Centralized companies provided superior network availability and latency through centralized oversight. A centralized company’s goal is to continually improve quality, and it has the advantage of being able to consciously analyze and control its own dataflow, helping it “get closer to its customers”. The improved data allows companies to consciously personalize their services to individual customers.

However, today P2P is making inroads in the competition with private companies for internet space. The centralized client-server computing architecture is threatened by new peer-to-peer networking architectures which are being built to spread decentralized technology. Through the removal of centralized hosts and servers, the nodes that form the peer-to-peer network make computing resources, such as disk storage, network bandwidth, and processing power, directly available to each other. Network effects proliferate in peer-to-peer networks because, unlike the traditional client-server architecture, which is subject to linearly increasing per-unit costs, peer-to-peer network costs can decrease with each added node (depending on topology).

In the last decade we’ve seen P2P networks flourish, the most famous of which are blockchains, which have made the first basic steps toward a proper incentive design in valuable networks by fairly remunerating their members.

Blockchains

The original and most famous blockchain is Bitcoin.[24] The Bitcoin protocol emerged in 2009 as an attempt by its founder, Satoshi Nakamoto, to provide an alternative to the shortcomings of the financial system in the aftermath of the 2008 financial crisis. A strong distrust of government and central banking is part of the political philosophy of the Bitcoin community to create an alternative to fiat currencies. The community believes cryptocurrency to be a solution immune from national governmental control.

A blockchain is a distributed ledger — it keeps track of certain types of transactions from its members. For example, Bitcoin keeps track of the P2P transactions where members send digital tokens representing money (bitcoins) to each other. What makes this ledger valuable is a long list of qualities, some of which are unique to blockchain technology. Not all blockchains have the same qualities, but most are modeled on the basics of Bitcoin, which is immutable, immortal, open, uncensorable, transparent, and decentralized.

Blockchains such as Bitcoin are decentralized through their P2P architecture, since no central authority is completely in charge of anything, including ownership, security, or upgrades. Bitcoin is immutable, meaning the entire history of transactions is never changed — not one letter or number amongst the billions of records will ever change in response to the demands of any centralized authority. It’s open, meaning anyone with a connection to the internet can download and run the client software to participate in the network without seeking the permission of anyone else. (Open networks are sometimes called permissionless.) Bitcoin is immortal, in the sense that as long as there is any freedom in the internet, any machine that chooses to download and run the software (even just for curiosity) will keep the blockchain alive. Further the network is perfectly transparent, allowing complete audits of every transaction, yet it protects its users’ privacy through cryptography.
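A toy model shows where the immutability comes from. Each block records the hash of the previous block, so altering any historical transaction changes that block's hash and visibly breaks every later link. This is a minimal Python sketch of the chaining idea, not Bitcoin's actual data structures:

```python
import hashlib, json

def block_hash(block: dict) -> str:
    """Fingerprint a block, including the previous block's hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain, prev = [], "0" * 64  # genesis placeholder
for tx in [{"from": "alice", "to": "bob", "amount": 5},
           {"from": "bob", "to": "carol", "amount": 2}]:
    block = {"prev_hash": prev, "tx": tx}
    chain.append(block)
    prev = block_hash(block)

# Tamper with history: the first block's hash no longer matches the
# link stored in the second block, so every node can detect the edit.
chain[0]["tx"]["amount"] = 500
assert block_hash(chain[0]) != chain[1]["prev_hash"]
```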

The Bitcoin protocol created the first decentralized P2P network that could manage valuable assets without resorting to a centralized authority. But the centralizing forces of competition led to concentration of power in the Bitcoin network, as economies of scale led to large computer farms devoted to mining for bitcoins, instead of millions of individual network members maintaining the ledger on their laptops.

Many other blockchain architectures have attempted to improve or generalize Bitcoin’s functionality. They’ve built tools that improve on Bitcoin’s protocol to inhibit Bitcoin’s many problems (we’ll talk about some later, such as ASIC-resistance and sharding), and they’ve built new tools that allow us to decentralize more of the functions of business. The most prominent blockchain besides Bitcoin is Ethereum.[25] Six years after Bitcoin published its first block, Bitcoin developers, aficionados, and critics started the Ethereum blockchain in 2015, providing a ledger with much more complicated transactions, called smart contracts.

Smart contracts are automated, computer-programmed business contracts. The smart contract program can be written in many different programming languages, the most popular being Solidity.[26] Smart contracts automatically track the transfer of money, assets, and labor between parties, without human oversight. They automatically adjudicate when something goes wrong (and something always goes wrong in business), again without human oversight. Thus, we say smart contracts are self-executing and self-regulating.

A smart contract allows more complex business logic than merely transferring digital coins from one user’s digital wallet to another’s. The idea is to be able to write sophisticated programs which dynamically control the timing and execution of any business contract, to act as automated escrow for many different types of digital business assets.
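To make the escrow idea concrete, here is a toy state machine in plain Python (deliberately not Solidity, and not drawn from any real contract). On an actual blockchain, the decisive difference is that the network itself, rather than a trusted host, enforces these state transitions:

```python
from enum import Enum, auto

class State(Enum):
    AWAITING_PAYMENT = auto()
    AWAITING_DELIVERY = auto()
    COMPLETE = auto()

class Escrow:
    """Hold the buyer's payment until the buyer confirms delivery."""
    def __init__(self, buyer: str, seller: str, price: int):
        self.buyer, self.seller, self.price = buyer, seller, price
        self.state = State.AWAITING_PAYMENT

    def deposit(self, sender: str, amount: int) -> None:
        assert self.state is State.AWAITING_PAYMENT
        assert sender == self.buyer and amount == self.price
        self.state = State.AWAITING_DELIVERY   # funds now locked

    def confirm_delivery(self, sender: str) -> None:
        assert self.state is State.AWAITING_DELIVERY
        assert sender == self.buyer             # only the buyer releases funds
        self.state = State.COMPLETE             # funds released to the seller

deal = Escrow("alice", "bob", 100)
deal.deposit("alice", 100)
deal.confirm_delivery("alice")
assert deal.state is State.COMPLETE
```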

A smart contract is computer code which is executed automatically by the P2P network if it is written properly and uploaded to the network according to the transparent rules the network follows. The network of thousands, or millions, of computers creates a “world computer” with a “virtual machine” using P2P distributed storage and processing, whose goal is to be fully decentralized.

The goal of decentralizing our economy doesn’t end with decentralized digital money and smart contracts. More ambitiously, the crypto community’s blockchain developers have set their sights on decentralizing every function of Web 2.0.

Web3

There are many other groups which have developed various versions of Web3 applications using P2P architecture. To name some major decentralized applications that are currently being used and illustrate the potential for the future:

Bitcoin (2009) decentralizes transnational currency, its production, accounting, and exchange.

Bitmessage (2012) decentralizes messaging services for temporary information. Many other messaging systems offer different levels of security and interoperability. Especially of note was Skype’s P2P video and telephone system, before Microsoft supernodes took over in May 2011.[27]

The InterPlanetary File System (IPFS) (2015) provides P2P temporary file storage and computation.

Ethereum (2015) decentralizes computation and permanent records for multiparty smart contracts.

ZeppelinOS (2018) is a decentralized operating system for smart contract developers. Built on top of Ethereum, it provides a stable evolutionary environment for developing secure smart contracts.

The goal of these tools is to provide the decentralized information technology to build fully decentralized applications (DApps) for basic users. The tools listed above are already helping developers to easily build DApps to connect your cellphone to networks of other users. More ambitiously, the goal of building these P2P tools is to create DAOs. The idea is to make a single complicated set of programmed protocols that can automate the interaction between a (possibly very large, international) network of members who wish to cooperate on a business venture without requiring a central authority to make investment decisions and settle disputes.

There are three major reasons our institutions and economy will become more efficient by decentralizing with DApps and DAOs.

First, decentralization gives individuals more power and autonomy, unleashing information at the edge. It gives us autonomy over our personal information and over our economic choices, such as where and how and with whom we should collaborate. Decentralization empowers and motivates those on the edge to participate and contribute their talents, unlocking previously untapped potential. Empowering members makes the whole organization more efficient.

Second, decentralization is ideal in chaotic times, when technology and business arrangements are continually changing. The most talented members for any task are not blocked by a rigid hierarchy. Decentralized platforms can give us perfectly level playing fields for markets. Open markets find the best autonomous individuals to more efficiently solve new problems instead of relying on a large bureaucracy to organize a response using an outdated structure. This is tapping into information at the edge.

Third, the liquidity of decentralization is more efficient and stable in the long term, making regulation more dynamic and responsive. Computer processors are able to filter the voluminous information available through global networking to make good decisions. Information from more sources has more equal value than it does in centralized systems. Information at the edge greatly changes decision making. Google Maps sifts through the data of millions of drivers to dynamically determine where traffic jams occur and decide how to advise users on their best routes. The opposite situation holds in more hierarchical structures, where, for instance, it is very unlikely that the understanding of the lowliest employees will ever affect the decision making of a company’s chief executive. Decentralized information technology is needed to make regulation more dynamic. We now face unfathomably complex legal interactions governing the exponentially growing and evolving business interactions that arise with AI-enabled IoT devices (artificial intelligence enabled Internet of Things). How can we legally regulate the smart contracts which mediate between devices owned by many different companies and individuals interacting throughout the supply chain? New efficient processes lead to newer, more efficient processes. Business arrangements constantly adapt to these changing circumstances, giving rise to new contracts. The choice again resolves to dictatorship versus democracy. These business problems could be solved with complete ownership by a super-trust, global monopoly firm which avoids all legal contracts. Otherwise we need decentralized markets where fair contracts can emerge.

The engineer’s common perspective is to imagine that we are removing regulation with these automated processes. In fact, the purpose of smart contracts is to give much more fine-grained regulation, more control, not less. In order to give a stable environment where such processes can evolve predictably, we need a decentralized organization, a level playing field for cooperation, an institution which has inertia given by a more fully transparent and democratically written history.

What distinguishes Web3 from Web 2.0 is the potential to completely decentralize valuable business networks. Bitcoin demonstrated that protocols can be designed which organize a network of peer-to-peer (P2P) collaborators without the need for any centralized ownership structure, unlike the Web 2.0 companies. No single entity owns or controls Bitcoin or Ethereum. The ownership and control of these valuable networks is distributed loosely amongst its network members — ideally, they would be distributed according to their participation.

In Bitcoin and Ethereum, there is no formal binding governance framework declaring how the protocols for consensus might be changed in the future. This is a deep flaw which weakens these networks and will lead to instability. But their relative success in running a network with a US$100 billion market cap, the largest among all digital assets, for 10 years — without any governance framework — demonstrates the potential of the system. Instituting a transparent democratic governance process will make the networks much more efficient and stable. We explore some possibilities for governance in Chapter 7.

In order to grapple with the problems arising from these decentralized technologies, we need to understand the basic structures underlying these new P2P architectures.

Web3 P2P technology

In this section we explore some of the many new P2P tools that are being actively developed for implementing the Web3 vision, making it easier for software engineers, and even students, to design tools for empowering new networks devoted to profit. We’ve taught several courses in law and mathematics discussing P2P technology and its impact on society. Craig taught a computer science course in the Spring of 2020 entitled Introduction to Blockchain Technology. Though some students had barely heard of Bitcoin in the first class, every student had a valid idea for a novel DApp and DAO by the second class meeting. One group wanted to tweak the notion of a decentralized marketplace, like OpenBazaar.[28] Another chose a decentralized ticketing app so venues for entertainment could invent secure digital admission tokens, and attendees could participate in a transparent decentralized market for the tickets. A third chose to build a decentralized dog breed registry.

It took only a few hours for students with no previous experience with distributed computing to build a basic functioning DApp from scratch, connecting their website frontend UI to the Ethereum test blockchain with JavaScript. They simply adapted online tutorials and followed the text, Mastering Ethereum.[29] These tools didn’t exist a few years earlier, but the technology is changing quickly, thanks to a collaborative open source community devoted to decentralization which naturally encourages widespread adoption through education.
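For flavor, here is a minimal sketch of that kind of frontend-to-blockchain wiring, written with the ethers.js library; the RPC endpoint, contract address, and one-function ABI are hypothetical placeholders, not the students' actual code:

```typescript
import { ethers } from "ethers";

// Connect to a local Ethereum test node (the URL is a placeholder).
const provider = new ethers.providers.JsonRpcProvider("http://localhost:8545");

// A hypothetical deployed contract: the address and ABI are illustrative only.
const abi = ["function getGreeting() view returns (string)"];
const contract = new ethers.Contract(
  "0x0000000000000000000000000000000000000000",
  abi,
  provider
);

async function main() {
  console.log(await provider.getBlockNumber()); // confirms the node connection
  console.log(await contract.getGreeting());    // read-only call to the contract
}

main();
```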

The decentralized information technology tools available to software developers fall into three broad categories: processing, storage, and communication.

Processing or distributed computing

Blockchain architectures, where computations are performed redundantly and concurrently. E.g., Ethereum, Bitcoin

Parallel processing, where computations are partitioned. E.g., IPFS; neural network training/federated learning, DCAI (2019); GIMPS (Great Internet Mersenne Prime Search, 1996); Leela@Home (chess neural network training, 2018)

Storage

Temporary: distributed hash table architectures. E.g., IPFS, Filecoin, Swarm, OpenBazaar

Permanent: blockchain architectures

Communication

Internet browsing: Tor, Zeronet, Freenet, dn42

Messaging: Bitmessage, Matrix, Whisper

File sharing: IPFS, Storj, BitTorrent (2001)

Software developers are rapidly improving the tools to decentralize any task now provided by centralized apps. As the technology and incentive structures mature, these profitable P2P tools are gaining security and usability.

To understand how these programs achieve decentralization through distributed computing, we take a short digression into two basic math tools, hash functions and cryptography. Then we can explain in more detail how Bitcoin and Ethereum work. All of this is in service of exploring new designs for decentralized Web3 alternatives to today’s centralized institutions.

New tools: hash functions

The fundamental tool that engineers are using to decentralize the economy is pure math — especially hashing and cryptography.

Hash functions were an early computing innovation; they've been around since 1953. They have many uses, but their primary application in P2P networking is to efficiently organize the chaotic information of random messages being sent in a global network with no leader. They take any data as input and return a pseudorandom number. One of the most common hash functions used in P2P programs is SHA-256, where SHA stands for Secure Hash Algorithm. For example, if we input the word “decentralization” into the SHA function, we get a number that is 256 binary digits long:

SHA(“decentralization”) = 1011001110001110001000100111110101011101111010101100110000110101001100111101111101000001001111101100101101000011011100011010001110100001101110111101001000001010001111111111000110111101101101011000110111010101111100000011010110100011011010011000001010111111

This is a very big number, but computers can process it very quickly — combining it with other numbers — because its binary structure works naturally with binary logic circuits.

SHA quickly converts any amount of information into a 256-bit number. If we enter the first page of this book, SHA immediately outputs

1000101101000101111111110011010010100111110000101010011111111111101011110010011101100101110000001011111011001010000000011000111011101101000000001101010101011100110000010001101001110110100111111011010011011100000111100110110001011011101000100100100010110101

and if you delete the last letter SHA outputs

1011001001110011110001100010010100010100001010101001101001001101101001111011001010010111111000011000110000010011111111110110110000111110001101011010111111001110000101111001101110110000001110101110010110111100011011001101000100000011100011001100000100000110

which is completely different. Notice there seems to be no pattern connecting the two numbers, even though the two inputs were almost identical, with only a tiny difference of one letter out of a page of information. We say SHA outputs a pseudorandom number, because the number appears to be randomly coming out of the function, but in fact the function is perfectly deterministic, that is, the function follows a fixed procedure. If we check the answer to SHA(“decentralization”) today or next week or 100 years from now we will always get the same result.
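You can verify this behavior yourself. Here is a minimal sketch using Node.js's built-in crypto module; the digests are printed in hexadecimal rather than the binary shown above:

```typescript
import { createHash } from "crypto";

function sha256(input: string): string {
  return createHash("sha256").update(input).digest("hex");
}

// Deterministic: the same input always yields the same output.
console.log(sha256("decentralization") === sha256("decentralization")); // true

// Pseudorandom: removing one letter produces a completely unrelated digest.
console.log(sha256("decentralization"));
console.log(sha256("decentralizatio"));
```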

Engineers use this deterministic pseudorandomness property for error detection and proof of ownership. If someone copies a file of any size and there is a single error anywhere in the copy, then the hash of the copy will be very different from the hash of the original, plainly exposing the error. In another regularly used application, if you publish the hash of your file before anyone else, then you can claim priority whenever anyone later publishes a copy of the file. The output of the hash function is a number that is much smaller than the large data you entered, and the output is unique to that data (by any practical standard). The output is an identifier, a name, or address (an ID) for the file that is easy for computers to use.

However, the most important property for decentralized P2P applications is how hash tables organize randomly occurring data. Let’s consider for a moment how the Visa credit card company keeps track of the thousands or millions of transactions that are occurring in its global network. How do they organize these transactions that are coming in randomly from different users in countries all over the world? How do they label and store them all? Now consider the problem of a decentralized payment system, like Bitcoin, which must do that without having a leader. How can the network of computers come to consensus? How do they all agree on how to name and store each new randomly occurring transaction? The hash function. If anybody in the world takes the data from a single transaction and hashes it, they will invariably come to the same output, because the hash function is deterministic. Each transaction will get a unique output for its ID, because there are so many possible outputs.

This crucial feature of hash functions, that there are many, many possible outputs, deserves to be explored for a moment. For SHA-256 there are 2^256 different outputs. The idea is that there are 2^1 = 2 possible outputs in a 1-digit binary number: 0 and 1. There are 2^2 = 4 possible outputs in a 2-digit binary number: 00, 01, 10, and 11. In a SHA-256 output, there are 256 binary digits, so there are 2^256 different outputs, which is a very big number. It's approximately the number of atoms in the observable universe.[30] So, since hash functions take any information input and return a statistically random output among these 2^256 possibilities, it's almost impossible that two different inputs will ever get the same output.

By taking the hash output as the label for the input, we can label any sort of thing humans can ever create. Counting up every muscle twitch from every person on the planet for the next million years would not come remotely close to 2^256. The different SHA output numbers naturally organize everything from biggest to smallest. In the Bitcoin blockchain, for example, in the last 10 years there have been more than 500 million transactions that unpredictably entered the network from any user on the planet struck by the whim to send a coin. Thanks to the hash function, every node in the network hashes the transaction's data and independently arrives at the same answer, giving the transaction a unique ID that everyone recognizes. This mechanism drives consensus in an extremely decentralized global network thanks to the extreme protocol centralization of mathematics. This gives the technological basis for creating consensus on the creation and ownership of digital money, specifically bitcoin, without the need for a centralized authority dictating, “let it be done”, which is the English translation of the Latin word fiat.[31]
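To make that concrete, here is a toy sketch of hash-derived transaction IDs; real Bitcoin serializes transactions in a binary format and applies SHA-256 twice, but the principle is identical:

```typescript
import { createHash } from "crypto";

interface Transaction { from: string; to: string; amount: number; nonce: number }

// Any node anywhere derives the same ID from the same transaction data.
function txId(tx: Transaction): string {
  const serialized = `${tx.from}|${tx.to}|${tx.amount}|${tx.nonce}`;
  return createHash("sha256").update(serialized).digest("hex");
}

// A toy hash table keyed by transaction ID: every honest node computes the same key,
// so the network agrees on how to name and store each transaction without a leader.
const ledger = new Map<string, Transaction>();
const tx = { from: "alice", to: "bob", amount: 1, nonce: 7 };
ledger.set(txId(tx), tx);
```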

New tools: blockchain protocols

In this section we explain the architecture of the first blockchain network, Bitcoin. The point of a blockchain is to create a decentralized network that records data on the network's transactions. There are two main types of transactions the Bitcoin network needs to account for. The first is a transfer of digital tokens, called bitcoins, from one user to another. The second is the creation of a bitcoin. What makes these digital tokens valuable? The blockchain, the record of all these transactions, is what we refer to as a distributed ledger. What makes this ledger valuable is that it is immutable, uncensorable, immortal, open, and decentralized. Further, the network is perfectly transparent, allowing complete audits of every monetary transaction, yet it protects its users' privacy. We explain next how hash functions make these qualities possible.

Whenever someone sends a bitcoin digital token to another member of the network, the transaction data is hashed to give it a special identity. Everybody who is running their machine on the Bitcoin network can hear about the transaction. The hash number distinguishes it from every other transaction and orders all transactions by number. Through peer-to-peer gossiping, each machine in the network — each node — relays the useful information it receives to anyone else listening, and they can each independently hash the data and come to consensus with the same hash output. No central authority needs to be in charge of the database. Since there are so many possible output hashes, it is practically impossible for any two transactions to get the same hash address (this would be called a collision). Even if every person on the planet joins the network, even if they all made thousands of transactions every second, even if the Bitcoin network lasts for thousands of years, it is extremely unlikely there will ever be a single collision.

In Bitcoin, you make a type of hash table (technically called a Merkle tree) of all the transactions that are waiting in the network cloud to be added to the Bitcoin blockchain. This is called a block. The blockchain is then the sequence of blocks (hash tables) that are created roughly every 10 minutes. If you are participating in the Bitcoin blockchain, when you win the Bitcoin lottery amongst all the other computers in the network, then you win about US$90,000 worth of newly minted bitcoins (block reward as of May 2020) and the right to put a new block on the chain. Your block will be the hash table of all the transactions your computer has seen in the last 10 minutes since the previous block. Then the lottery starts again to decide who will produce the next block 10 minutes later.
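A simplified sketch of how such a Merkle tree collapses a block's transactions into a single root hash; Bitcoin double-hashes binary data, while this sketch keeps hex strings for readability:

```typescript
import { createHash } from "crypto";

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

// Pair up hashes level by level until a single root remains
// (assumes at least one transaction).
function merkleRoot(hashes: string[]): string {
  if (hashes.length === 1) return hashes[0];
  const next: string[] = [];
  for (let i = 0; i < hashes.length; i += 2) {
    const left = hashes[i];
    const right = hashes[i + 1] ?? left; // duplicate the last hash on odd levels
    next.push(sha256(left + right));
  }
  return merkleRoot(next);
}

// The root commits to every transaction: change any one and the root changes.
const root = merkleRoot(["tx1", "tx2", "tx3"].map(sha256));
```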

The way you win the Bitcoin lottery is to stick the address of the previous block into the SHA hash function. The output will be a 256-bit number which will start with some number of zeros. If it starts with enough zeros, you win. In June 2020 you need 44 zeros out of 256, but the difficulty is adjusted automatically every 2 weeks to try to keep the average rate of block production at 1 block per 10 minutes. The first output you get is extremely unlikely to begin with 44 zeros, however. So you can try again after adding more data to the input. This extra input data is called your nonce. You keep trying new nonces until you find an output with 44 zeros. On average, you will need to try 18,000,000,000,000, or 18 trillion, times before you win.

Once you announce your winning nonce, everyone in the network can quickly validate that you are correct, since they only need to calculate one hash to verify for themselves. The process described above is called Bitcoin’s proof of work (PoW) consensus algorithm. It’s the process the network uses to come to agreement, or consensus, about how the network is going to update itself every 10 minutes, adding new transfers of bitcoins between users around the world.
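Here is a toy version of that lottery. Real Bitcoin hashes an 80-byte block header against a numeric target; counting leading zero bits, as below, captures the same idea:

```typescript
import { createHash } from "crypto";

// Count how many leading zero bits a digest has.
function leadingZeroBits(digest: Buffer): number {
  let bits = 0;
  for (const byte of digest) {
    if (byte === 0) { bits += 8; continue; }
    return bits + Math.clz32(byte) - 24; // clz32 counts zeros in a 32-bit word
  }
  return bits;
}

// Search for a nonce whose hash clears the difficulty;
// about 2^difficulty tries are needed on average.
function mine(prevBlockHash: string, difficulty: number): number {
  for (let nonce = 0; ; nonce++) {
    const digest = createHash("sha256").update(prevBlockHash + nonce).digest();
    if (leadingZeroBits(digest) >= difficulty) return nonce; // the winning ticket
  }
}

// Verification is a single hash, no matter how long the search took.
const nonce = mine("previous-block-hash", 16);
```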

The Bitcoin blockchain is open and politically decentralized, because anyone with access to the internet can participate, but no single entity is in charge — no one owns the network. Anyone can join the lottery and participate in creating the blockchain. The process is very protocol centralized, however, since everyone is supposed to use the same PoW consensus algorithm.

The blockchain is immutable, because any attempt to edit the old blocks will be immediately rejected by the network. A single number or letter changed anywhere in its entire history will result in a different hash of the information. The error-detecting property of hash functions makes such attempts immediately apparent. As long as the network is following the blockchain protocol, it will automatically reject any edit, no matter how small. The blockchain grows forever, and the history of all the blocks is never deleted. The hashing of the previous block built into the nonce for the next block gives security. If a single digit of information anywhere in any past block is ever changed, the hashes will not work to give a nonce with the correct number of zeros, unless the editor does the work of the entire network and finds new nonces for all the blocks after the edit, which would require outrunning the entire network's current work while also reproducing its entire past work. Anyone broadcasting false information can be quickly checked and ignored. It is therefore reasonable to assume that all information added to the blockchain is eternal, uneditable, and uncensorable.
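A sketch of why editing history is futile: each block commits to its predecessor's hash, so any node can re-derive the links and reject a tampered chain:

```typescript
import { createHash } from "crypto";

interface Block { prevHash: string; data: string; nonce: number }

const blockHash = (b: Block): string =>
  createHash("sha256").update(b.prevHash + b.data + b.nonce).digest("hex");

// Re-derive every link: a one-character edit to any old block changes its hash,
// so the next block's stored prevHash no longer matches and the chain is rejected.
// (A full node would also check that each hash meets the proof-of-work difficulty.)
function isChainValid(chain: Block[]): boolean {
  for (let i = 1; i < chain.length; i++) {
    if (chain[i].prevHash !== blockHash(chain[i - 1])) return false;
  }
  return true;
}
```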

The blockchain is uncensorable since it is open and decentralized. Anyone can publish a new block if they find a nonce. Blocks have been posted in the Bitcoin blockchain which violate the EU's 2018 GDPR law (General Data Protection Regulation). People's private information has been included in blocks against their will. The GDPR requires the owners of websites that post such information to remove it; however, that is certainly not going to happen.[32] The network would need to create a hard fork to remove the information. More than half of the network's nodes would be required to change their entire protocol to scrub the data and restart the blockchain. This would need to happen every time information was found on the blockchain which violated the GDPR. The only other choice the EU has is to prevent every citizen in the EU from participating in the Bitcoin network or accessing any of its information. But even that wouldn't stop the blockchain, because it is immortal.

The Bitcoin blockchain is immortal, or eternal, in the sense that the blockchain will last as long as the internet does, as long as it is in any degree free. If any one computer in any country has access to the information of the blockchain, then they can keep the blockchain running. Even if we imagine Bitcoin has no value or usefulness to society a century from now, a single hobbyist who is interested in forgotten things from the past can keep the network running.

Further, the Bitcoin blockchain embodies transparency. Every transaction is recorded eternally, and any computer that connects to the network can inspect and audit every single transaction that has ever occurred. Everyone has equal access to all the information in the blockchain. This quality is essential to allow a valuable decentralized network to function without a centralized authority to maintain the validity of the ledger.

Finally, the blockchain uses strong cryptographic tools to securely maintain its users’ privacy. We discuss these tools in the next section. But first, let’s consider some of the many downsides of blockchain technology.

The most common criticism of the Bitcoin network is that hashing these nonces requires a great deal of computation and energy. It's actually worse than what is described above. Everyone in the global network is competing to find the nonce. This means that many different computers are redundantly trying the same wrong nonces before one computer finally tries the right one. The Bitcoin network is performing about 10^20 = 100,000,000,000,000,000,000 hashes every second in June 2020. That requires as much energy as the entire country of the Czech Republic consumes. Bitcoin has around 300,000 transactions per day, but they use much more energy than the Visa credit card network, which handles 150,000,000 transactions a day, or 500 times the volume. A more accurate name would be the Proof of Pollution consensus mechanism, since work generally strives to be useful and efficient.

That wasted energy is why most blockchain networks are attempting to find a different consensus algorithm. The leading candidate is called proof of stake (PoS), which generally involves block producers locking their money in a smart contract so they can have the chance to be picked by a pseudorandom number generator. The problem is that if someone comes up with a clever algorithm for hijacking the process, then the network would fail. We discuss this difficulty further in Chapter 4.

The second major problem with blockchain technology is that it will always be expensive, whether or not PoS is fully solved. Every bitcoin in existence was originally created as a reward for winning one of the hash lotteries when a block was built and published. The way to verify a bitcoin is valid is to check the legitimacy of the entire chain. If you want to fully participate in the network, you need to download a copy of the entire history of every transaction of every bitcoin in existence. This extreme redundancy and inefficiency means transactions will always be expensive.

This leads us to speculate that more efficient systems will emerge which handle smaller transactions. A good idea for a DAO is a decentralized banking system built on top of the Bitcoin ledger that charges a smaller fee for temporarily storing minor transactions. These smaller transactions can then be bundled together to make a larger Bitcoin transaction reflected on the eternal blockchain. We discuss such ideas further in Chapter 8, with ZKRollups. To understand those tools, we need to understand zero-knowledge (ZK) proofs, which are explained in the next section.
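A stylized sketch of the bundling idea, with illustrative names rather than a real protocol: many small off-chain transfers net out into one settlement that actually touches the expensive blockchain:

```typescript
// Small IOUs recorded cheaply off-chain by the hypothetical banking DAO.
const pending = [
  { from: "alice", to: "bob", amount: 2 },
  { from: "bob", to: "carol", amount: 1 },
  { from: "alice", to: "carol", amount: 3 },
];

// Net each account's balance change; only the aggregate is settled on-chain.
const net = new Map<string, number>();
for (const { from, to, amount } of pending) {
  net.set(from, (net.get(from) ?? 0) - amount);
  net.set(to, (net.get(to) ?? 0) + amount);
}

// net: alice -5, bob +1, carol +4, settled in one transaction instead of three.
console.log(net);
```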

The Bitcoin blockchain is a fundamental advance in decentralized organization. Before Bitcoin, there were very few examples of politically decentralized organizations that were worth money. There were decentralized religious organizations, like the Quakers. The Apache tribe was an extremely decentralized cultural organization before the 1910s. But competition for wealth centralized their organization. The Maghribi traders are the best example of an organization devoted to profit which remained decentralized. The Maghribis used secure and meaningful reputation to counteract the centralizing influence of competition for profits. But the Maghribis’ 11th century system would not work in our contemporary globally-connected networks. We want to go much further than the Maghribis, and build economic networks which allow the autonomous members to protect their privacy by remaining anonymous. We are doing this with the second major advance in information technology: modern cryptography.

New tools: Public-key cryptography and zero-knowledge proofs

Cryptography is the art of writing secret codes. The word lends its prefix to the words “cryptocurrencies” (like Bitcoin) and “crypto-economy”, alluding to the fact that the technologies are built from a profound devotion to providing user privacy in a decentralized and pseudonymous setting.

Public key cryptography allows secrets to be passed securely within a public network, despite the fact that every message is shared with everyone else. This is essential to providing privacy in a decentralized network, as you don’t need to rely on a centralized authority to maintain security and keep your secrets safe from others. Public key cryptography was first presented in 1976, and has been used for decades to secure internet messages, such as making purchases on websites.

For example, when you make a purchase on Amazon or Alibaba, you enter your credit card number into your computer and send it off to their company through the internet. The internet relays the information around various nodes, and these messages are available to thieves around the world. So, of course your credit card information is encrypted.

But how does it get encrypted? Amazon sends your computer the instructions for how to encrypt the message. You use their public key to scramble the information. But the thief is also listening to these instructions. The thief knows how you encrypted the information — the thief has Amazon’s public key — and the thief has your encrypted information. Why can’t the thief unscramble your information if they have your scrambled message and they know how you scrambled it?

The trick involves just a little bit of elementary school arithmetic, divisors and remainders, but more math than we want to drag our readers through. Basically, Amazon has another key, called a private key, that makes it easy to unscramble the information. But if you don't have their secret private key, then it takes an enormous amount of computation to unscramble the message by brute force: more computation than all the computers on Earth working in parallel for the next billion years.[33] This mismatch between the sender's and the receiver's power is why the scheme is called asymmetric encryption.
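A minimal sketch of this asymmetry using Node.js's built-in crypto module; an eavesdropper sees the public key and the ciphertext, yet cannot feasibly recover the plaintext:

```typescript
import { generateKeyPairSync, publicEncrypt, privateDecrypt } from "crypto";

// The merchant generates a key pair and publishes only the public key.
const { publicKey, privateKey } = generateKeyPairSync("rsa", { modulusLength: 2048 });

// The shopper (and any listening thief) uses the public key to scramble the message.
const ciphertext = publicEncrypt(publicKey, Buffer.from("4111-1111-1111-1111"));

// Only the holder of the private key can cheaply unscramble it.
const plaintext = privateDecrypt(privateKey, ciphertext).toString();
```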

Public key cryptography's most useful feature in P2P systems is called a digital signature. The idea is that you can invert the process described above. The owner of the secret private key can encrypt their message with the private key; then anyone in the network can use the public key to decrypt the message. That way anyone can verify that the sender of the message must have the private key, without requiring the owner to reveal the actual private key. That digital signature trick is how you can prove you own a bitcoin or any other digital asset without losing it, and so it underlies any P2P system involving valuable tokens. You need to keep the private key secret, or else anyone who finds it immediately gains control of your assets. (Billions of USD worth of bitcoin have been lost in this way.[34])
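The inverse trick, sketched with the same module (here with Ed25519 keys): anyone can verify the signature with the public key, while the private key never leaves the owner's machine:

```typescript
import { generateKeyPairSync, sign, verify } from "crypto";

// The asset owner holds the private key; the public key acts as the public identity.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

const message = Buffer.from("send 1 coin from my address to bob");
const signature = sign(null, message, privateKey); // only the owner can produce this

// Everyone else checks the claim with the public key alone:
// proof of ownership that reveals zero knowledge of the private key.
const ok = verify(null, message, publicKey, signature); // true
```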

The digital signature trick is the simplest example of a zero-knowledge proof. The idea is that the private key owner can prove they are the owner without revealing the private key. The other members have proof of ownership, while the owner reveals zero knowledge of the specific key string. More complicated zero-knowledge proofs are giving P2P application users the power to privately interact despite being in a public and decentralized network. For example, a DAO devoted to health care insurance can give members a synthesized health score from 0 to 100. The protocol for synthesizing your complex health history is public, but your particular health history stays private while you reveal only your final health score to the insurance DAO. Even that number could be secured, by mixing it with other information, such as parts of your property to be insured, or other members from your company.

Anonymity: pros and cons

A major advantage of anonymity in a large network is that it can encourage justice by eliminating many sources of discrimination. In a DAO whose members are anonymous, discrimination based on superficial identifiers such as race, ethnicity, sex, gender, sexuality, age, and social class is eliminated to the first order. At the second order, when people infer those qualities from behavior, anonymity doesn't protect a group from discrimination. It also doesn't protect a group from third-order effects of systemic discrimination. In fact, anonymity can exacerbate second-order and third-order effects, as it makes them more difficult to detect. To combat such effects, different mechanisms in the governance of a decentralized organization must be employed.

The most basic economic effect of anonymity is to provide safety for its members. Safety gives people confidence to engage in business deals and to participate in larger networks of collaborators. Safety is an institutional overhead burden, like insurance and appeals processes, but institutions pay for it because it catalyzes deals. This makes the entire economy more liquid and efficient, so the investment is worthwhile. Anonymity provides safety through privacy, which is especially important in a global network. Anonymity encourages more contributions, making the network stronger. The network effect advantages any strategy that makes a network larger.

If the globe has access to your data, it can be used against you. Reporters, political dissidents under oppressive governments, and whistleblowers all have obvious enemies they need to protect themselves from. In many such cases, they cannot work without anonymity. In a world with 7 billion potential actors, people get attacked online without any apparent reason. Wikipedia editors who simply curate generic articles regularly get attacked. If you want to create an environment where people can contribute to controversial pages on either side of an issue, expect contributors to be targeted by enemies with an agenda. Participation will crater if contributors are afraid every nugget of information they share will be available for the rest of their career for analysis by superiors.[35]

However, anonymity can give people too much safety in a decentralized network. With anonymity you get many more trolls — anonymous participants might feel safe enough to attack other members without fear of retribution. Therefore, anonymity should be balanced. Your power to broadcast your voice should be tempered by having that voice tied with a pseudonymous[36] account with meaningful and valuable reputation. If you abuse your broadcasting power, then you should lose your reputation.

Is it possible to allow anonymous participants to broadcast their messages, to participate in a network, especially when money is on the line, and expect anything but a series of catastrophes? The Tragedy of the Commons is a natural consequence of allowing anonymous participation. Anyone who has spent a few hours delving past the top filtered contributions on platforms such as Twitter, YouTube comments, or Reddit can observe the toxic results of anonymity. Occasionally graffiti is interesting, but most anonymous comments are worthless bile which randomly express people’s dissatisfaction with their condition, without communicating any lasting connections which can improve society.

An excellent experiment to test this question, of whether anonymous groups can create productive collaborations, was Reddit's Place[37], which started on April Fool's Day, 2017. The commons, in this case, was a blank canvas of internet space: one million pixels, a 1000 × 1000 grid. Once every 10 minutes, any registered Reddit member could change any pixel's color. The idea was to get subreddit communities to collaborate and compete to create art in a limited space for a limited but unspecified time.

Communities promoting their bases with symbols such as national flags and pixelated works of classical art (The Starry Night and the Mona Lisa) competed with subversive groups such as r/theblackvoid, which coordinated attacks to devastate established territories. Anything was possible, and Reddit's editorial board was nervous about the message the final image would convey about their site and their users. Inevitably, thanks to the freedom from repercussions that anonymity provided, hate symbols such as swastikas emerged.

In the end, however, each time a hate symbol cropped up, it was replaced with something more positive. Ultimately, r/place was a success: even though its anonymous members had the ability to destroy, they also had the power to defend their own creations. This effect is regularly observable throughout history. For example, the first imperial Chinese dynasty, the Qin empire, quickly collapsed after following the Hobbesian perspective that humanity is fundamentally evil. For most of the subsequent 2,000 years of Chinese history, stability prevailed under the more generous Confucian perspective of empowering and encouraging the good side of humanity.

We need to build a decentralized auditing system to keep track of pseudonymous reputation. To do that we need sophisticated distributed programming that automatically updates a member’s reputation based on communally accepted protocols. This is solved with decentralized smart contracts.

New tools: the Ethereum blockchain and smart contracts

The blockchain technology that allows decentralized execution of arbitrarily complicated business contracts between millions of members across the planet is quite complex. The math and logic that it's all built on, however, is not deep at all.

The operations of NOT and AND are functionally complete, meaning any program imaginable can be built by chaining them together. Specifically, any program with finite inputs, which results in TRUE or FALSE at the end of the program, can be written with chains of these operations. For example, if you study symbolic logic for a week[38], then you will learn DeMorgan's law:

NOT(A AND B) = (NOT A) OR (NOT B)

So we can write OR as a chain of NOTs and ANDs as follows:

A OR B = NOT((NOT A) AND (NOT B))

Remember the code of Ur-Nammu (ca. 2100 BC) discussed in Chapter 1. Law #32 reads:

32. IF a man had let an arable field to another man for cultivation AND he did NOT cultivate it, turning it into wasteland, THEN he shall measure out three kur of barley PER iku of field. [Ed.: capitalization mine.]

IF P THEN Q is equivalent to Q OR (NOT P). So we can chain NOTs and ANDs together to write

IF P THEN Q = NOT(P AND (NOT Q))

This symbolic logic is not simple; it can get incredibly complicated by chaining these operators together in unlimited arrangements. But it’s not deep. We’re just using NOTs and ANDs. So we only need to use two logic circuits to build the most sophisticated legal contracts imaginable to organize a global economic network in a DAO. The power comes from the fact that these operations can be performed reliably, at the speed of light, millions of times per second.
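A sketch of that functional completeness, with Law #32 reduced to NOTs and ANDs; the field parameters are illustrative:

```typescript
// The only primitives: everything else below is chained from these two.
const not = (a: boolean): boolean => !a;
const and = (a: boolean, b: boolean): boolean => a && b;

// DeMorgan: A OR B = NOT((NOT A) AND (NOT B))
const or = (a: boolean, b: boolean): boolean => not(and(not(a), not(b)));

// IF P THEN Q = NOT(P AND (NOT Q))
const implies = (p: boolean, q: boolean): boolean => not(and(p, not(q)));

// Law #32 of Ur-Nammu as a tiny contract: breach occurs when the field was let
// AND was NOT cultivated; the penalty is three kur of barley per iku of field.
function barleyOwed(fieldWasLet: boolean, wasCultivated: boolean, ikus: number): number {
  const breach = and(fieldWasLet, not(wasCultivated));
  return breach ? 3 * ikus : 0;
}
```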

Similarly, Ethereum's ability to use these logic operations, basic arithmetic, and the storing and reading of information makes it Turing complete, meaning it can theoretically approximate any mathematical model that exists.[39] However, Ethereum is not practically capable of mimicking the power of even the cheapest smartphone. The storage limit is currently extremely small, because anything stored in the blockchain needs to be stored redundantly on thousands of nodes for all time in the future. Any message passed through the network is repeated millions of times. So Ethereum decentralizes the computation of smart contract programs, but those programs are necessarily extremely primitive compared with the functions of centralized Web 2.0 companies, like the services Google offers.

Therefore, many systems have been built to adapt to these limitations. The InterPlanetary File System (IPFS) is a P2P file-sharing system which temporarily stores files on some of the nodes in the network, based on how popular or important the files are. IPFS was launched at around the same time as Ethereum, independently, and interacts well with smart contracts. As an upgrade to BitTorrent, IPFS incorporates a nascent incentive design with its own native token to motivate users to maintain the availability of less popular information. We'll discuss other architectures for improving the function and efficiency of blockchains, such as sharding and ZKRollups, in Chapter 8.

With these extensions to blockchain functionality, many new groups are developing decentralized alternatives to centralized applications. These decentralized apps are called DApps. DApps are intended to compete with centralized apps, in the hope that decentralization will give users more power, making these platforms more competitive than centralized systems. For example, there are several initiatives developing ride-sharing alternatives to Uber, which hope to give greater transparency to customers and drivers.

The idea is that users will download a DApp from a cryptographically verified decentralized P2P service like IPFS, then run the DApp on their own computer, instead of using the blockchain to redundantly perform all calculations. The development layers are detailed in the figure below.

Figure 2: Ethereum DApp stack. A DAO would consist of a network of users who are all running the same DApp on their personal devices, using IPFS and Bitmessage to communicate less securely, and the blockchain to securely finalize transactions.

When software designers refer to themselves as “full stack engineers”, they mean they can negotiate the APIs (application programming interfaces) between each of these layers.[40] An API allows the different computers running the different layers to communicate. The most common APIs for Ethereum are integrated in the Truffle framework, which helps developers connect (migrate) the functionality on one layer with another and organize the interfacing programs.

Development tools such as the Truffle framework and ZeppelinOS (2018) are making the process of developing DApps for DAOs easier every year. The OpenZeppelin library has a curated collection of carefully tested smart contracts that developers can copy to build their projects. ZeppelinOS is an interface of smart contracts that are independently deployed on the blockchain and are actively running for anyone to use, so developers don't need to redeploy them (or pay to deploy them in the first place).

With the open source atmosphere of Web3, every success is quickly replicated and extended in new applications. Truly successful DAOs don’t exist, yet. Once the architecture of a single DAO is successful, however, it will be quickly cloned and adapted to every imaginable economic and social organization. In the next chapter, we discuss what is holding them back. What is wrong with the open source movement that has produced these remarkable tools, and how can it be fixed? What is needed before Web3’s DApp-enabled DAOs can unleash humanity’s economic potential?

Bibliography


[1] Intellipedia https://en.wikipedia.org/wiki/Intellipedia (retrieved 6/6/20) is an example of decentralized knowledge creation and sharing behind a strong KYC private firewall that the US intelligence community has used for more than a decade.

[2] Roger P. Mellen, “Modern Arab uprisings and social media: An historical perspective on media and revolution”, Explorations in Media Ecology, Vol. 11 Issue 2, p 115 (April 2013)

[3] CNN's Ivan Watson during a 2012 South by Southwest discussion. https://www.youtube.com/watch?v=1bSj4f9f8Eg (Retrieved 8/12/20).

[4] Share Barcelona, https://share.barcelona/ (accessed June 1, 2020).

[5] Ballotpedia (2017). Local Government Responses to the Sharing Economy (ridesharing/homesharing), https://ballotpedia.org/Local_government_responses_to_the_sharing_economy_(ridesharing/homesharing) (accessed June 1, 2020); Chapman, Lizette, Eidelson, Josh, Cutler, Joyce E. & Bloomberg (Sept. 11, 2019). “New Labor Bill Passed by California Senate Would Transform the Gig Economy — And Could Cost Uber $500 Million a Year”, Fortune: Tech, https://fortune.com/2019/09/11/gig-economy-california-senate-uber-law-labor-rights-union/ (accessed June 1, 2020). Governmental requirements that Uber and Lyft treat their workers as employees instead of independent contractors will certainly weaken the power the companies receive from their decentralized structure. However, this may improve the industry, as it opens the space for more politically decentralized competitors. A DAO which makes each member a partial owner would not be subject to the bill, since such a DAO would have no employers and employees.

[6] That’s an old joke about the World Wide Web. Who is the ultimate person in charge of the internet, anyway? Somebody called me a bad name. Can I talk to the manager? The web was designed to be censorship resistant with maximum autonomy amongst nodes. This brought about a new level of freedom of speech that is clashing with our evolutionary programming. From the beginnings of multicellular life, if one animal were to insult another, the response would be immediate and symmetric. On the internet, a troll can flame and run.

[7] Previously, there were many independent ISPs, culminating in approximately 7000 ISPs in the US by 2000. Within a few years, however, the ISPs were consolidated until U.S. internet telecommunications became dominated by two companies, Comcast and AT&T. https://www.sacatech.com/2019/08/15/neverending-story-isp-market-consolidation/ Posted August 2019. (Retrieved 6/3/20.)

[8] Transmission Control Protocol is the primary set of rules governing the proper format for transmission of website information, email, and other files through the internet. A common TCP request is to view the information at any given website address.

[9] The Tragedy of the Commons refers to the reasonable and predictable situation where a shared resource is spoiled without oversight or accountability. The idea was mentioned early by the British economist William Forster Lloyd (“Two Lectures on the Checks to Population”, Oxford University, 1833) who described unregulated grazing on public land — the commons — and it is commonly used to explain the collapse of fisheries and other environmental problems. Here the “commons” is the shared public resource of internet bandwidth, or even the unmeasurable tone of our culture which still has meaningful economic consequences.

[10] Viktor Trón, Aron Fischer, Dániel Nagy, Zsolt Felföldi, & Nick Johnson, “Swap, Swear and Swindle: Incentive System for Swarm”, Ethersphere Orange Papers, p. 4, draft version May 2016.

[11] The exponent is 2, so a pedantic mathematician might object that it's quadratic growth. The idea is that with n nodes in a network, there are n(n-1)/2 possible connections. The network effect of power scaling as the square of the number of members of the network has been in common scientific parlance since at least the 1980s and is sometimes referred to as Metcalfe's Law https://en.wikipedia.org/wiki/Metcalfe%27s_law (Retrieved 8/12/20).

Compare this with the number of connections in a centralized hierarchical structure. In a tree graph with n nodes there are n - 1 connections regardless of the number of levels in the hierarchy, the minimal number necessary to make the graph globally connected. The centralized structure is maximally efficient for sending messages to the whole group from one central leader using minimal energy. For instance, the Catholic religious hierarchy uses 7 levels to create the potential to reach every person on the planet. Hypothetically, if the Pope contacted 100 cardinals, who each contacted 100 archbishops, and so on down through bishops, priests, deacons, and lay people, then the Pope would have potential access to 1 trillion individuals requiring only 7 levels.

The maximally decentralized structure, on the other hand, is maximally stable in that it is maximally redundant and will not suffer any loss in connectivity when any particular connection is broken. With the contemporary advances in information technology, we live in a post-information-scarcity society and do not need to rely on the efficiency saving architectures of centralized hierarchies. Now every individual can broadcast their messages to every other individual on the planet inexpensively.

[12] Jan L. Nussbaum, “Apple Computer, Inc. v. Franklin Computer Corporation Puts the Byte Back into Copyright Protection for Computer Programs”, Golden Gate University Law Review Volume 14, Issue 2, Article 3 (January 1984) pp 278–292.

[13] Steven Weber, The Success of Open Source, Harvard University Press, pp 38–44 (2004). More details on most of the history in this section is reviewed in https://en.wikipedia.org/wiki/History_of_free_and_open-source_software (Retrieved 7/31/20)

[14] Sam Williams, Free as in Freedom: Richard Stallman’s Crusade for Free Software, O’Reilly Media, (2002)

[15] https://www.linuxfoundation.org/resources/open-source-guides/participating-open-source-communities/ (Retrieved 7/31/20)

[16] apache.org (Retrieved July 30, 2020)

[17] See Netcraft (April 15, 2010). April 2010 Web Server Survey, https://news.netcraft.com/archives/2010/04/15/april_2010_web_server_survey.html (accessed June 1, 2020) for an external audit and more recent claims in The Apache Software Foundation Annual Report for 2020 Fiscal Year, The Apache Software Foundation Blog (July 29, 2020) Available online at: https://blogs.apache.org/foundation/entry/the-apache-software-foundation-announces67 (Retrieved 7/31/20)

[18] Quotes from the documentary feature Trillions and Trillions Served, The Apache Software Foundation, Jun 10, 2020. Available online: https://www.youtube.com/watch?v=JUt2nb0mgwg&feature=youtu.be (Retrieved July 31, 2020)

[19] Barbrook, Richard, “The High-Tech Gift Economy”. First Monday. 13 (12), (1998 with 2005 update). Available online at https://firstmonday.org/ojs/index.php/fm/article/view/631/552 (Retrieved 8/3/20)

[20] Michael Tiemann, “History of the OSI”, Open Source Initiative (1 October 2002). Available online at https://web.archive.org/web/20021001164015/http://www.opensource.org/docs/history.php (Retrieved 7/31/20).

[21] The Apache Software Foundation Annual Report for 2020 Fiscal Year, The Apache Software Foundation Blog (July 29, 2020) Available online at: https://blogs.apache.org/foundation/entry/the-apache-software-foundation-announces67 (Retrieved 7/31/20)

[22] The term Web3 was first suggested by Gavin Wood, who was instrumental in the creation of Ethereum. Web 3.0 is also sometimes used to refer to Tim Berners-Lee’s notion of the semantic web, which is unrelated.

[23] Originally developed by the U.S. Naval Research Laboratory. See Yasha Levine, “Almost everyone involved in developing Tor was (or is) funded by the US government”, Pando Daily (16 July 2014). https://pando.com/2014/07/16/tor-spooks/ (Retrieved 8/8/20).

[24] Most people want to know which blockchain coins to invest in. We’ve read more than 100 white papers detailing the function of different blockchains, and glanced at many more. We’re sorry that we can’t publicly recommend investment in any of them. Yet. We have faith the technology will be a major component in the economy of the future. But the fundamentals of every one of these networks are lacking at the moment. We expect this point in history to be a more extreme version of the dot-com boom and bust, where around 1000 to 5000 startup internet companies failed, but the most powerful and profitable companies in history emerged. We won’t print our guesses for which blockchains will thrive. Instead we spend the rest of the book explaining why the Web3 boom and bust will continue, and what P2P networks must do to emerge successfully. Put simply, when you see a network which has designed secure and effective mechanisms for incentivizing development and democratically governing the deployment of those innovations, invest heavily. In 2020, the authors are not aware of a network with anything approaching such qualities. Which ones will eventually decentralize their power is impossible to predict, because it’s illogical to do so until those in power are forced.

[25] For example, it has long held the second highest market capitalization, behind Bitcoin.

[26] If you have some experience with high-level programming languages such as C++ or Java, you can start programming in one of a few IDEs optimized for interacting with the Ethereum blockchain, such as Remix or Ethereum Studio (available at https://remix.ethereum.org/ and https://studio.ethereum.org/, retrieved 8/4/20). An IDE is an integrated development environment, which simplifies debugging and some command line programming, such as compiling.

[27] Dan Goodin, “Skype replaces P2P supernodes with Linux boxes hosted by Microsoft (updated)”, Ars Technica May 1st (2012) Available online at https://arstechnica.com/information-technology/2012/05/skype-replaces-p2p-supernodes-with-linux-boxes-hosted-by-microsoft/ (Retrieved 8/8/20).

[28] https://openbazaar.org/

[29] Andreas Antonopoulos and Gavin Wood, Mastering Ethereum, O'Reilly Media, 2018. We recommend the tutorial at cryptozombies.io (retrieved 6/3/20).

[30] The number is estimated between 10^78 and 10^82.

[31] Fiat money is the term for national currencies which are not backed by any promise for exchange. Most of our students in math and law, and even economics, are not aware that no nation on Earth still backs its paper money with gold or silver. The ability of opaque central banks to decide when to print money, putting their fingers on the scale of the economy and benefitting some over others, is argued to be the primary motivation for Bitcoin's inception. Evidence for this position includes the famous comment in the Bitcoin blockchain's genesis block: “The Times 03/Jan/2009 Chancellor on brink of second bailout for banks”.

[32] Shannon Liao, “Major blockchain group says Europe should exempt Bitcoin from new data privacy rule”, The Verge (Apr 5, 2018). Available online at https://www.theverge.com/2018/4/5/17199210/blockchain-coin-center-gdpr-europe-bitcoin-data-privacy (Accessed 8/3/20)

[33] This is considered a sufficient standard for security at present, but it ignores the possibility of radical future improvements to information technology. However, as long as there is not a centralized monopoly on computational power, these P2P tools will be safe for the foreseeable future. Even threats such as theoretical future quantum computers have already been addressed with open source cryptographic algorithms which are hardened against quantum computing. https://en.wikipedia.org/wiki/Post-quantum_cryptography (Retrieved 8/22/20).

[34] An early and famous case is the Mt. Gox theft, in which 850,000 bitcoins belonging to customers and the company were stolen in 2014. At the time, those coins were worth about one half billion USD. In 2020 they would be worth $8 billion. Bitcoin.com has estimated that more than US$3 billion worth of cryptocurrency was stolen in 2018 alone. https://news.bitcoin.com/9-million-day-lost-cryptocurrency-scams/ (Retrieved July 20, 2020.) It’s important to note that all of this theft comes from centralized institutions, especially centralized currency exchanges, like Mt. Gox was. The Bitcoin network itself has never made a single mistake; not a single bitcoin has ever been stolen from the blockchain by a hack directly on the network, even though its entire source code is publicly available for analysis. Its decentralization architecture provides remarkable stability and security, even under complete transparency and globally open access to participation.

[35] “Respondents raised concerns about what it could do to their reputation if current and future employers or coworkers knew what information they were contributing to Wikipedia.” Cited from Drexel University, “Just give me some privacy: Anonymous Wikipedia editors explain why they don’t want you to know who they are”, https://phys.org/news/2016-10-privacy-anonymous-wikipedia-editors-dont.html October 12, 2016, which cites Andrea Forte, Nazanin Andalibi, Rachel Greenstadt, “Privacy, Anonymity, and Perceived Risk in Open Collaboration: A Study of Tor Users and Wikipedians”, Proceedings of Computer-Supported Cooperative Work and Social Computing, Portland Oregon. http://andreaforte.net/ForteCSCW17-Anonymity.pdf

[36] Pseudonymous means using a false name. Anonymous means you do not reveal your name. This subject is important to the cryptocommunity and some get bogged down arguing the small semantic difference.

[37] Josh Wardle & Justin Bassett (u/powerlanguage & u/Drunken_Economist), “Looking Back at r/Place”, April 18, 2017. Available at https://redditblog.com/2017/04/18/place-part-two/ (Retrieved 7/16/2020)

[38] That knowledge is necessary to understand this paragraph, but not necessary to understand the rest of this book. Symbolic logic is also not that difficult, and many resentful children master the skills every day, despite lacking the motivation of understanding why it might be useful. Our education system is in dire need of repair. http://intrologic.stanford.edu/public/home.php (Retrieved 8/8/20).

[39] Alan Turing is one of the most interesting people in history. It is well worth a Wikipedia dive into his past.

[40] Though very few of these people are telling the truth on their resumes, since it is a rare programmer who ever engineers machine code. In practice a full stack engineer knows how to use JavaScript on a UI together with a high-level programming language on the server, rather than interfacing directly with the hardware.
