0 to 1 in Crypto: Grasping Blockchains, their Applications & future Impact
Thesis on Blockchain Technology, its Applications and their impact on the Internet, Societies and Companies. Its History, its Current State and its Future.
Table of Contents
The Traditional Database and the Need to Trust the Middleman
The Evolution of Layer 1s - Bitcoin, Ethereum, Modular Blockchains and Eigenlayer
Abstract
This paper provides a high-level view of Blockchain technology and its impact on the internet, societies and companies. Borrowing from the Crypto world’s greatest minds, we aim to understand the evolution of communication systems and how crypto networks such as Bitcoin and Ethereum fit in with their original trust networks. Additionally, we reflect on Crypto’s fundamental revolution of the internet, privacy, and real-world applications for developing and developed economies. Furthermore, we dive into the question of why now is the right time for builders and investors to focus on Crypto, before exploring how profound the impact of Crypto has been on traditional hierarchical structures within startups and investment firms. This paper is meant as a starting point, aiming to educate rather than confuse, hoping to spark the interest of many and to provide guidance on a new industry domain. For the author, the original motivation was to build a coherent first-principles view of the space and challenge our conviction on Crypto - independent of price action, “fear of missing out”, political pressure and reputation. We conclude that our conviction firmly holds.
1. The Traditional Database and the Need to Trust the Middleman
In arguably one of the most significant technological inventions of the past 50 years, Satoshi Nakamoto conceptualized the first decentralized Blockchain database with the invention of Bitcoin in 2008.
Originally, databases were records of information controlled by centralized single entities and stored on local computers or servers. Conceptually, imagine a database to be an Excel file stored on a single computer - the file owner can quickly and easily write, replace or delete all the input values. As databases were controlled by single entities, the users had to trust that these entities were correctly taking care of the data (= records of information). Concretely, users had to trust the centralized entities not to tamper with the records, delete information (i.e.: your address), steal information (i.e.: sharing the social security number with someone else) or change balances (i.e.: money balances).
Let’s go through a simplified example: Imagine Lisa owns $10 and this information is stored within the bank’s proprietary database. Suddenly, the bank manager does not like Lisa anymore and deletes the $10 from her account, or even worse, transfers the money to his own account. Concretely, this means that the bank manager altered the database after the original entry had been created - thereby ex-post changing the ownership of the $10. As the bank manager is the sole controller of the database, there is no common consensus on who owns the $10 (= Lisa says it’s her money, the manager says it is his money). Extrapolating from this example, users (i.e.: Lisa) need to trust the owners of the database (i.e.: the bank manager) to do the right thing. As the entries of the database are hidden, users are not able to verify the results themselves. As a result, they have to trust the owners of the database.
Due to its properties, the database owner can itself change the database without leaving any trace - meaning that the bank manager can transfer the $10 and delete all evidence that Lisa had $10 in the first place. Therefore, users are highly dependent on the goodwill of the database operator - as they control our data or in the case of banks, our money.
Enter Blockchains - Building a Trustless Database
As mentioned, imagine the traditional database being an Excel file stored on a single computer - the file owner can quickly and easily write, replace or delete all the input values. Using the same analogy, a Blockchain would be a globally shared, append-only Excel file (= you cannot alter the original content) with advanced Macros (in the case of Ethereum).
Simplified, after each state at time t, t+1, t+2,..., t+n, the blockchain takes a screenshot of the account balances of Lisa, Paul and Richard. This screenshot is then stored in a cardboard box (= the block), which we then close and seal. Every new state is added on top of the latest block - a chain is formed, giving the database the name Blockchain.
Let’s go through a simplified example. Lisa, Paul and Richard are having a poker night. Some win, some lose - the key is that the account balance changes. The next day (t+3), the friends sit together and agree (= find consensus) on the new money balances and write the balances into the blockchain - the new block at t+3 is added on the latest block t+2. If - a few days later - Paul is curious how much money they lost during the poker tournament, he is able to check his balance at state t+2 and his balance at t+3. As Lisa, Paul and Richard agreed on the new status at t+3 and all of them store one version of the database, Paul is not able to cheat. He would be able to change his balance at t+3 to $100, but Lisa and Richard know that Paul has tried to cheat as they have their own local copy of the Blockchain.
This means, rather than trusting a centralized, hidden database which is controlled by one party, Blockchain technology allows everyone to have their own copy of the database, which consists of blocks of information. In short, after some time period a group of decentralized actors agrees on a consensus. This consensus is saved and everyone gets a copy of the latest consensus. Every second, minute or hour a new block emerges with the new state (= status of information) - making the blockchain longer and longer.
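The hash-linking that keeps a blockchain append-only can be sketched in a few lines. The following is a minimal, illustrative Python sketch (the helper names `block_hash`, `add_block` and `is_valid` are our own, and real blockchains add consensus, signatures and much more on top): each block commits to the hash of its predecessor, so any ex-post change to an earlier block breaks the chain and is immediately visible to everyone holding a copy.

```python
import hashlib
import json

def block_hash(block):
    """Deterministically hash a block's contents."""
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def add_block(chain, balances):
    """Append a new block that commits to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "balances": balances})

def is_valid(chain):
    """Every block must still point at the hash of its predecessor."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

# The poker night from the text: each state t, t+1, ... is a sealed box.
chain = []
add_block(chain, {"Lisa": 10, "Paul": 10, "Richard": 10})   # state t
add_block(chain, {"Lisa": 25, "Paul": 2, "Richard": 3})     # state t+1

assert is_valid(chain)
chain[0]["balances"]["Paul"] = 100   # Paul tries to cheat ex-post
assert not is_valid(chain)           # the broken hash link exposes the tampering
```

Note that hash-linking alone only makes tampering evident; it is the combination with many independent copies and a consensus mechanism that makes tampering practically impossible.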
By agreeing on a global consensus and sharing the data across many participants, users no longer need to trust the data owners. Suddenly, users are able to verify information themselves on the blockchain. As the data can be independently verified, trust is no longer required - making blockchains trust-minimized databases.
Basically, Blockchain is just another (r)evolution of the traditional database. The core function of the original database was to allow users to store information. Over time, as more data was stored, more proprietary databases were created - suddenly every company had their own database. In order to transfer information between different entities, proprietary databases communicate with each other - imagine a long supply chain of different databases transporting information. With the emergence of the blockchain database, we can merge all the proprietary databases into one single database, practically taking out all the different database “middlemen”. As a result, the transfer of information between 2 parties becomes much more efficient.
Think of it with a simple example. Imagine 2 stock traders - one wants to buy, one wants to sell. In order to execute the trade, many different databases transfer information between each other (i.e.: Stock broker A → Bank A → Exchange → Bank B → Local Bank B → Stock broker B). With a blockchain, we eliminate many of those entities in the middle, storing the data in a public database. This makes the transfer of value and information significantly more efficient as the supply chain of different databases collapses. Taking out the middleman and building trust between different actors of the society is the achievement of blockchain technology.
Blockchain’s Key innovations
Through Bitcoin, the first decentralized blockchain database was conceptualized. However, what could be the impact of this database revolution? Just like the invention of the wheel had a revolutionary impact on human civilisation, and just like the invention of transistors led to the rise of computers, open decentralized databases (= Blockchains) are also expected to have a profound impact on the internet and human society. However, just as it took many years for the transistor to lead to the widespread adoption of computers, it may take decades to explore the full impact of blockchains.
For us, Blockchain’s innovations are three-fold from a first-principles basis:
Ownership rights on the internet: Through blockchains, users can bring ownership to the internet. In the real-world, humans have developed property rights, giving them ownership over land, house and business. Property rights are issued by the state and backed by strong laws. As a result, nobody can just come and take their property away. Due to property rights, owners have the right motivation to further develop their property and improve it for following generations.
However, in the stateless internet with thousands of different actors, there is no central authority and no consensus on who owns what. Through blockchain technology humans have been able to create an open, decentralized database which is not controlled by anyone. As we are able to agree on a common consensus, we are able to agree on who owns property on the internet. This means that for the first time in our history we are able to bring property rights and ownership on the internet. Rather than having the need to trust a bank or another centralized authority, an open decentralized database aka the blockchain shows who owns what.
For instance, in the offline world, when users deposit savings into their local bank, they receive an IOU, basically a promise from the bank that they can withdraw their money whenever they want. However, if the bank goes insolvent, user deposits (= their money) are used to cover the bank’s debt obligations - exactly what happened to Greek banks during the Euro-crisis. The only type of money that really belongs to the owners is the cash they physically hold in their hands (“fiat” = currency not backed by a commodity, such as USD, EUR or GBP).
In the offline world, the fact that humans physically hold cash makes them the owner of the cash. In the online world, the blockchain holds the globally agreed consensus. As the blockchain attributes digital assets to the owner, the owner automatically receives ownership rights. Therefore, users can hold digital assets through self-custody. Rather than a local bank and their database telling users that the assets belong to them, the blockchain database with a global consensus tells users that the assets belong to them. Nobody can take ownership away as only the users themselves know the secret private key. Therefore, only the holders of the private key can access the money, conduct transactions or sign contracts. Not the bank manager, just the users themselves.
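The claim that only the private-key holder controls the assets can be illustrated with a heavily simplified, stdlib-only Python sketch. Real chains derive the public key from the private key with elliptic-curve cryptography (e.g. secp256k1) and use chain-specific address encodings; we stand in for both steps with plain SHA-256 hashes, so this is a toy model of the idea, not Bitcoin’s actual scheme.

```python
import hashlib
import secrets

# A private key is just a very large random number that only the user knows.
private_key = secrets.token_bytes(32)

# Real chains derive the public key via elliptic-curve math (secp256k1);
# here a plain hash stands in for that one-way step to stay stdlib-only.
public_key = hashlib.sha256(private_key).digest()

# The address users share publicly is itself derived by hashing the public key.
address = hashlib.sha256(public_key).hexdigest()[:40]

# Because each step is a one-way function, knowing the address reveals nothing
# about the private key - so only the holder of the 32 secret bytes can spend.
print("address:", address)
```

The design point is that every arrow runs one way: private key → public key → address. The blockchain records balances against the address, and the private key never leaves the user’s hands.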
This is why Crypto advocates the value of self-custody of assets and data. For instance, the reason why people lost money with FTX was because users gave away their Crypto assets to a “bank” (= FTX), which handed them an IOU. Due to fraud, FTX lost the money, the IOU was worthless and users lost their savings.
Permissionless access to novel applications: Secondly, as the blockchain and its applications are of a permissionless nature, they are not controlled by a single entity or any single person. This means anyone - regardless of religion, race, sex or gender - can build ownership, share information and interact with each other. Nobody can censor their transactions and their actions. In countries where repressive regimes are in place, individuals are finally able to build (digital) ownership, create wealth and share information thanks to a free and open internet. This is why Crypto advocates for decentralization (= not controlled by a single entity) and permissionless innovation (= everyone can join and build on top).
New Privacy Model: Thirdly, Blockchains have revolutionized privacy on the internet. In Web2 (= today’s internet relying on centralized databases), the user’s real identity, email and physical addresses are known by data owners (Google, Facebook, etc.). If requested, the data owners have to share user data with governments. For example, requests for Google’s user data by the U.S. government have increased by 510% since 2010. Although concerning, this might not be a problem for users in Europe and the US. But what about users in China, Libya or Russia? In Web3/Crypto (= internet relying on decentralized databases), the real user identity is separated from the “public” address which users use to interact online. This allows users to remain pseudonymous, which is of fundamental importance in countries with repressive governments. This is why Crypto advocates strong privacy, while maintaining complete transparency of what is happening.
So let’s summarize. Through decentralized databases (blockchains) we can replace centralized, proprietary databases. Thereby, users achieve self-ownership of digital assets, the right to innovate permissionlessly on the internet and the right of free access to information, data and assets. Understanding the original innovation, which was fundamentally just a new database, allows us to follow the development of blockchains and indicates what the future roadmap might look like. To understand the future roadmap of blockchains, we go back in time, revisiting the long-term trends in communication systems.
2. The Evolution of Communication Systems
Before diving into the history of communication systems, we want to thank Placeholder VC. Their theories on Crypto and communication systems in general are unparalleled and have allowed us to gain a deeper understanding of the Crypto space. Borrowing from Placeholder’s work, the history of communication systems is a multi-decade cycle of 1) Expansion driven by competition, 2) Consolidation/Monopolies driven by business model innovation and 3) Commoditization/breaking of monopolies driven by the emergence of new technologies. Expansion, Monopolies and Commoditization. Over and over again the cycle repeats.
Hardware Era
Going back 80 years, nobody (apart from the US and UK military) had computers due to the high cost barriers to entry. As computers relied heavily on vacuum tubes, they were very expensive to produce (due to a lack of standardized manufacturing, as the parts were very different). However, in 1947, the transistor was invented. The invention changed everything, over time breaking the first “monopoly” of computers. Artisanally produced vacuum tubes were replaced by mass-manufactured transistors, leading to the collapse of production costs of electronics. Suddenly, it was economically feasible to build computers at a larger scale and sell them to small businesses rather than the military. Subsequently, the race to conquer the new market was opened. Over time, this new market would be consolidated by IBM.
In the 1970s, another new technology - the microprocessor (= integrated circuit) was produced for the first time commercially in 1971 - compressed the expensive, highly customized CPU systems into a single, general purpose processor that could be mass produced. Yet again, as the production costs for computers decreased, more companies emerged which started to build computers. New entrants meant more competition for customers, supply chain and talent, which consequently lowered the margins. As a result, IBM’s monopoly slowly broke down. Consequently, talent and capital was looking for another frontier to generate out-sized returns. Once again, a new technological innovation broke a monopoly.
Software Era
As computers were slowly becoming more mainstream, a growing demand for new services (i.e.: operating systems) appeared - after all, customers also wanted to use their expensive computers. The initially strong competition within the operating system market eventually disappeared as Microsoft cemented its leadership position through business model innovation. Microsoft had locked in the market through its smart distribution strategy (accessing end-users through hardware vendors partnerships) and its superior product (proprietary operating software).
Only a few years later, once again, a new platform would come to eradicate Microsoft’s monopoly - Linux and the Web. Linux offered the first widely adopted open-source operating system, giving users a new, highly customisable alternative to Microsoft Windows. Instead of buying the operating system via hardware vendors, users could simply download it through the internet - bypassing Microsoft’s Go-To-Market channels. Through this tactic, Linux not only accessed new customers but also managed to onboard Microsoft users who wanted to add more functionality. Once again, another monopoly broke down as the market became commoditised and the value of proprietary software diminished. Once again, talent and capital were looking for yet another frontier to generate out-sized returns. The new frontier was found within data networks.
Network Era
Within the past 15 years, through the increase in computing power, the cheaper cost of data storage and the spread of data-collecting devices, the importance of big data grew. This gave rise to data networks such as TikTok, Facebook, Google, Amazon and Apple. By collecting large, uniquely valuable proprietary datasets, these networks built fantastic products, which locked in users (i.e.: Cookies so we can conveniently surf the internet). As the networks “own/control” their users through product lock-in and extensive market share (i.e.: Google has 93% market share in search), it has become almost impossible for competitors to emerge.
In addition, BigTech companies are continuously vertically integrating through offering new products, thereby getting more powerful as a result. For example, Google’s search offering has changed significantly in the past decade. While 10 years ago, Google would guide users to the websites, today, the relevant information is displayed on Google directly. As a result, users have no need to leave Google to access other websites. Although highly convenient, it increases Google’s monopolistic power over time. The fact that BigTech companies own their users, dominate different market segments and integrate vertically, has a profound impact on new entrants and product innovation.
Let’s look at a concrete example: It took Spotify 11 years and several billion USD to reach 50m paid subscribers. Meanwhile, it took Apple only 3 years to get to the same number, as it simply offered its new service to its broad user base. Furthermore, Apple has used its (almost) monopoly on the phone and app market to exercise its power over potential competitors. For instance, in 2022, Apple blocked updates for Spotify users, resulting in Spotify accusing Apple of ‘choking competition’ with App Store rules. How far can Apple go? What would happen if Apple forced iPhone users to use Apple Music by blocking Spotify? Where are the limits of BigTech companies? Clearly, centralisation adds convenience for users through cookies, targeted marketing and targeted search. However, we believe that these benefits are outweighed by the many negative consequences, such as narrower product offerings and limited innovation.
How Blockchains will change data networks
Are we at the “technological” end of history as described by Francis Fukuyama for politics in 1992? Have we found our long-term status quo and equilibrium? Personally, we find this hard to believe. Looking at the past 80 years, we can see that many monopolies (IBM, Microsoft, Google) are increasingly weakened by the emergence of new technologies (transistors, microprocessors, Linux and the Web, Blockchain and Cryptonetworks). As new technological innovations reduce the cost of production, more new entrants come to market and start competing with established incumbents. This, in return, pushes down prices and decentralizes existing market monopolies - a cycle that continues to repeat, over and over again.
Going forward, we believe that Blockchains with open, decentralized databases will break the current monopolies of data networks, starting a new era of innovation and expansion. We believe that commoditization of information is just the next natural step to open-source everything. “Liberating” information and data will lead a new era of open, permissionless innovation.
The Impact of AI on Blockchains
Going forward, we believe that AI and Quantum Computing will provide further tailwinds for Blockchain technology. For us, AI is excellent at taking information from databases and creating new, creative outputs. The richer the database, the better the model. In our view, a blockchain database with its superior inherent properties (the user owns their data and is able to monetise it) will replace the traditional database over time. As a result, blockchain databases will amass “richer” and “better” data. At the moment, the types of information saved in blockchain ledgers are fairly simplistic (i.e.: who owns what, what was the transaction history) and not yet “extremely” interesting. However, as the quality and quantity of information written into blockchain ledgers grows, AI models will increasingly shift away from traditional databases to blockchain databases. Therefore, we believe that AI will power new applications built on top of blockchains, rather than on top of the proprietary databases of BigTech. As a result, we see clear synergies between AI and Crypto.
The Impact of Quantum Computing on Blockchains
As of now, no one can exactly predict which mathematical operations Quantum Computers will be able to execute. However, there is broad consensus that Quantum Computing will not render databases or data networks obsolete.
One certain effect will be its impact on security assumptions, as Quantum Computing will force Blockchain networks to upgrade their security to be quantum-resistant. This is why developer teams in Ethereum are working on quantum-resistant proof systems called STARKs. In addition, the millions of distributed users provide additional security layers for Blockchains, making it difficult for external actors to corrupt the network. Each block has a timestamp and a link to the previous block, forming a chronological chain reinforced through cryptography and ensuring the records cannot be altered by 3rd parties. Therefore, technically speaking, Blockchains themselves should be relatively immune to hacking or, at least, provide a significantly better security architecture than centralized legacy systems and databases. Current projections suggest that breaking into legacy systems will not even require millions to be spent on quantum computing - making centralized databases the softer target. As with AI, we believe that quantum computing will form a symbiosis with Blockchain technology rather than conflict with it.
Before diving into more details on Crypto Networks and their Technology Stack, we will look into the Web2 Internet Stack and how its structure makes it difficult for new innovation to emerge.
3. Web2’s “Fat” Applications and “Thin” Protocols
According to Joel Monegro’s Fat Protocol Thesis, today’s centralized Web2 and Crypto’s decentralized Web3 can be separated into 2 main layers - the Protocol Layer and the Application Layer. Within the Protocol Layer at the bottom of the stack, there are different protocols ensuring connection and data transmission. On the Application Layer at the top of the stack, there are different databases and applications, enabling users to interact with each other.
Web2’s Thin Protocols
The underlying internet protocols are not a singular thing. They are a complex web of interconnected machines spanning the globe, with different protocols that were built over time for different purposes. At the very bottom, there are the physical hardware layers consisting of computers. One layer up, we find the Internet Protocol (IP). Developed in the 1970s, the IP sends information packets to their destinations, while the Transmission Control Protocol (TCP) arranges the packets in the correct order. This is needed as the IP sometimes sends packets out of order to ensure the packets travel the fastest routes - similar to a postal service. As an alternative to TCP, there is also the User Datagram Protocol (UDP), which also interacts with IP to transmit time-sensitive data. For example, UDP enables low-latency data transmissions between internet applications, making it ideal for Voice over IP (VoIP) or other audio and video streams. In practice, UDP allows you to watch a video even though not all packets have been transmitted.
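The trade-off is easy to see in code. Below is a minimal, illustrative Python sketch of UDP’s fire-and-forget datagrams over the loopback interface - no handshake, no ordering and no retransmission, which is exactly why it suits time-sensitive audio and video traffic (the variable names are our own):

```python
import socket

# Two UDP sockets on the loopback interface: one receiver, one sender.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))              # let the OS pick a free port
receiver.settimeout(2)                       # fail fast instead of blocking
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sendto() returns immediately: no connection setup, no delivery guarantee.
sender.sendto(b"frame-1", ("127.0.0.1", port))

# UDP hands each datagram over as-is; TCP would instead establish a
# connection first and reorder/retransmit packets behind the scenes.
data, addr = receiver.recvfrom(1024)

sender.close()
receiver.close()
```

On the open internet the datagram could be lost or arrive out of order; for a video stream that is acceptable, because a late frame is useless anyway.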
On top of the TCP and UDP, there are different protocols for different use cases. For example, there is the File transfer protocol (FTP) which is used when a client (i.e.: a computer or a software) requests a file and the server supplies it. Furthermore, there is the Simple Mail Transfer Protocol (SMTP) which is a popular email protocol. In addition, we also find the Telnet, developed in the 1960s, which is designed for remote connectivity. It establishes connections between a remote endpoint and a host machine in order to enable a remote session.
Due to their technicality and the fact that they have been built independently over many years, it is almost impossible to access the protocols without the convenience of front end user interface or user experience (UI/UX). As a result, many years ago, tech companies started to build applications so users could easily use their computers and the underlying protocols.
For example, one of the first popular applications allowed the user to listen to music on the computer. The user would insert a CD, copy the data to local storage and listen to the music, as it was stored locally. However, over time, users wanted to access data that was not stored locally. This required a web browser to access non-local data. To stream a video on YouTube, the user would type in YouTube’s web address. The web address is then processed by the computer and sent - through a rather complex process - through an internet router into the web. After the request leaves the router, it bounces from router to router on a path to YouTube’s server. The request knows where to go based on the Domain Name System (DNS), which links the web address name to the server’s IP address (MAC addresses, by contrast, only identify devices hop-by-hop on the local network, not end-to-end across the internet). Once it reaches YouTube’s server, the request to access its website is processed and accepted, and the data necessary to display the page is sent back to the user’s computer. If the user searches for a video, the process is repeated. In fact, this process is used for pretty much everything accessed on the Internet. Over time, as data grew, storage moved from local computers to dedicated servers, purely dedicated to hosting and storing data in an efficient manner.
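The name-to-address step in this journey can be reproduced with a single standard-library call. A minimal Python sketch (the wrapper name `resolve` is our own; we look up “localhost” so the sketch runs without any network access):

```python
import socket

def resolve(hostname):
    """Map a human-readable name to the IPv4 address packets are routed to.

    This is the Domain Name System step of a web request; MAC addresses
    are only used hop-by-hop on the local link, never end-to-end.
    """
    return socket.gethostbyname(hostname)

# "localhost" is defined on the machine itself, so no internet is needed.
print(resolve("localhost"))
```

For a real site the same call would consult the configured DNS servers, and the browser would then open a TCP connection to the returned IP address to fetch the page.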
Web2’s Fat Applications
Parallel to the growth of physical data infrastructure, the companies that built the original applications in Web2 equally started to build out data storage infrastructure to capture an increasing amount of data. While the underlying protocols were open and could not be monetised, applications and their front-ends for customers could be monetised and could yield investment returns. As a result, most talent and investment ended up moving to the application layer.
Over time, the applications started to vertically integrate by building their own data layer, thereby creating their own defense mechanism. The shared internet protocols (IP, TCP, UDP, FTP, HTTP, SMTP, Telnet, Voice over IP) produced immeasurable amounts of value by building connections on the internet (basically the roads within a city). However, most of the value got captured and re-aggregated on the application layer - notably by applications that started as web pages and vertically integrated to capture all the data. This led to “thin” protocols and “fat” applications in Web2.
Over time - especially from the early 2010s until the early 2020s - big data players consolidated their stronghold on the industry even more. This was done through many different methods. Twitter, for example, shut down the open API that companies used to access users. Apple used its power position via the AppStore to reduce the visibility of other, competing applications. Google imposed illegal restrictions on Android device manufacturers and mobile network operators to cement its dominant position in general internet search. As the power of BigTech grew, it became increasingly difficult for new applications to access users. This has led to a situation where new applications are now trapped inside the BigTech universe. Accessing new users is not only difficult but increasingly expensive, as users’ attention is heavily controlled by BigTech and their products. This means that we have shifted towards a state where BigTech now holds a chokehold on innovation. Drawing on our history lesson (Evolution of Communication Systems), we now see how BigTech has reached a state of (almost) complete monopoly.
As a direct comparison with the existing Web2 stack, we have also outlined the current Web3 stack (Exhibit 9). We believe that through Blockchain databases, the proprietary data layer can be opened and information can be “liberated/made public”. Open-sourcing data would reduce the chokehold of BigTech companies and lead to increased innovation. Overall, we believe that the relationship between protocols and applications is reversed in Web3. The majority of the value and data is concentrated at the shared protocol layer, while a minority of that value is distributed along the application layer. This leads to a stack with “fat” open-source protocols and “less fat” applications and data layers.
In order to understand how Blockchain technology broke this data monopoly and led to permissionless innovation, we need to understand how Bitcoin and Ethereum built trustless networks.
4. The Evolution of Layer 1s - Bitcoin, Ethereum, Modular Blockchains and Eigenlayer
Enter Bitcoin
With the Bitcoin whitepaper in 2008, the concept of decentralized trust was born. In Web2 users had to trust tech companies for data handling (i.e.: Google’s motto “Don’t be evil”) and that previously agreed-upon rules remained in place. In contrast, Web3 protocols are built on open-source code, cryptographic rules (= rules cannot be altered), and public blockchains with public verifiability. As every action is a result of code, users can verify themselves if the code has executed the operation as intended. As a result, the Web2 model of “Don’t be evil” has been replaced by the code-based Web3 model of cryptographic rules following the premise of “Can’t be evil”. Through this evolution, decentralized trust (= not having the need to trust a single party) was born. How did Bitcoin become a trustless network and what makes it special?
“Because unlike any other tool for sending money over the internet, Bitcoin works without the need to trust a middleman. The lack of any corporation in-between means that Bitcoin is the world’s first public digital payments infrastructure. By public I mean - available to everyone and not owned by any single entity. We have public infrastructure for information, for websites, for email - it is called the internet. But the only public payments infrastructure that we have is cash/FIAT - and it only works in face to face transactions. Before Bitcoin, when you wanted to pay someone remotely over the phone or the internet, you could not use public infrastructure but you had to rely on private infrastructure (bank) to open their books and add a ledger entry that debits you and credits the person you are paying [Basically moving information from the database of your local bank to the database of the receiver bank - with 2-8 different banks in between]. With Bitcoin, the ledger is the public blockchain and anyone can add an entry to that ledger transferring their bitcoin to someone else. And anyone - regardless of their nationality, race, religion, gender, sex, credit worthiness - anyone can, for absolutely no cost, create a Bitcoin address in order to receive payments digitally. Bitcoin is the world’s first, globally accessible money. [...] If we can replace private payments infrastructure, then we can replace other private chokepoints to human interaction as well [referring to Ethereum’s quest to build a decentralized supercomputer]”
So why are many stakeholders building more public infrastructure similar to what Bitcoin did in the digital payments infrastructure? Because over time, intermediaries providing today’s critical, private infrastructure are becoming fewer, larger and more powerful. This aspect of centralisation and monopolistic positioning hinders innovation and progress.
Bitcoin’s trust status
How did Bitcoin achieve this “Trust Status” and how did it become accepted as a trusted public payment infrastructure for transactions between different stakeholders?
Bitcoin's “Trust” pyramid consists of a Decentralized Trust Layer on the bottom, a Consensus Layer in the middle and an Application Layer on top. So let’s explain by starting in the middle.
Consensus between the many thousands of (decentralized) members of the network is reached through a method called Proof-of-Work (PoW). While Bitcoin uses PoW as its consensus mechanism, there are many other methods to reach consensus, such as Ethereum’s Proof-of-Stake (PoS) or Solana’s Proof-of-History (PoH). In PoW, members (= computers) of the network, called miners, compete against each other to solve difficult mathematical puzzles. The puzzles are difficult to solve, but once the correct solution has been found, it is easy to verify. The miner who finds the right solution is allowed to build a block (= a new entry to the database recording the new state of the database). The miner then sends the solution to all other members, who verify that it is correct. If it is, the block is added to the blockchain and the miner receives a block reward. Bitcoin has an inflation rate of roughly 1.7% per annum, as newly issued bitcoin is paid out to miners as block rewards, which they then sell on the market to cover their electricity and hardware expenses. Through mining, the community members (= decentralized trusted parties) reach consensus on the new state of the blockchain through PoW (= a method to reach consensus = consensus method).
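To make the PoW asymmetry concrete - expensive to solve, trivial to verify - here is a minimal Python sketch. The hash-prefix puzzle, the `mine`/`verify` helpers and the difficulty value are illustrative simplifications, not Bitcoin’s actual target arithmetic:

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Brute-force a nonce so the block hash starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # found: took many thousands of hash attempts...
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int) -> bool:
    """...but a single hash suffices for any other node to check the solution."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = mine("Alice pays Bob 0.43 BTC", difficulty=4)
assert verify("Alice pays Bob 0.43 BTC", nonce, difficulty=4)
```

The gap between the cost of `mine` and the cost of `verify` is what lets thousands of mutually distrusting nodes cheaply check each other’s work.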
On top of the Consensus Layer sits the Execution Layer. The Execution Layer defines which operation(s) can be executed with the protocol. For example, within email, the only operation that can be executed is sending emails - one cannot purchase goods online or transfer money. With the Bitcoin protocol, the only operation that can be executed is the transfer of Bitcoin tokens (BTC). BTC is the native asset of the network, just as the USD is the native currency of the United States. In the transaction process, one user wallet initiates the transaction, which is then executed through the Bitcoin protocol. A miner is chosen via the PoW consensus, builds a block in which the transaction is included, and adds the block to the blockchain. This results in a new consensus on the latest state of the database.
As the Bitcoin network relies on a Decentralized Trust Layer with no centralized party and facilitates the exchange of BTC, the Bitcoin protocol has become the first public payment infrastructure. As the only operation that can be executed is the transfer of BTC, Bitcoin is also known as a single-application network.
However, over time, developers envisioned more applications (= operations that can be executed on the network) which could be built relying on Bitcoin’s revolutionary idea of decentralized trust and global consensus. Nonetheless, because Bitcoin was a single-application protocol, none of those applications could be built on top of Bitcoin’s Trust and Consensus layer, representing a significant barrier to innovation. As a result, more and more single-application networks were built consisting of their own Decentralized Trust Layer, their own Consensus Layer and their own single Application Layer. Although the code was open-source and could be easily copied (= forked) with different applications, the heart of Bitcoin - its community willing to secure the network (= the Trust Network) - was not easy to duplicate. Bootstrapping a Decentralized Trust community, which is securing the network, was not only extremely expensive, but also difficult to do. Thus, the missing flexibility of Bitcoin’s single-application network represented a clear barrier to innovation.
Enter Ethereum
Due to the single-application nature of Bitcoin, many developers envisioned building a multi-application blockchain, which was created in the form of Ethereum. With Ethereum, three main innovations emerged:
First, Ethereum replaced the original Bitcoin script, which limited the network to executing a single application, with a Turing-complete machine. A Turing-complete machine is a machine capable of solving any problem using a predefined set of rules to determine a result from a set of input variables - in short, a computer. Located on top of the consensus protocol, Ethereum’s Turing-complete machine allowed developers to build more applications. This transformed Ethereum into a multi-application protocol offering a highly demanded alternative to Bitcoin’s single-application protocol.
Second, to automate applications and transactions, Ethereum created Smart Contracts (defined in more detail below). Rather than having to manually initiate transactions, users and developers could suddenly automate (trans)actions. With the creation of Ethereum, the industry moved from Bitcoin’s distributed ledger (i.e.: Address bc1qx…y2 owns 0.43BTC) to a fully virtual computer that could automate and execute many more operations. A fitting real-world analogy is to compare the email system (Bitcoin) with the internet (Ethereum).
Third, as a result of the first two innovations, Ethereum allowed the creation of different token types. Tokens are defined as digital assets living on the blockchain, representing achievement badges, a medium of exchange, or proof of membership. Off-chain (= in the real world), different assets have different properties. For example, there are indistinguishable goods such as banknotes, but there are also unique goods such as paintings or contracts. Through creating tokens with different properties (ERC-20 for money, ERC-721 for NFTs/unique assets), Ethereum allowed users to represent the off-chain world on the blockchain. This meant that users could finally build ownership of digital assets.
Creating a Turing-complete machine reduced barriers to innovation
Replacing Bitcoin’s script with a Turing-complete machine (= a globally distributed computer) kick-started innovation. Rather than having to go through the expensive and tiresome process of bootstrapping a new Trust Network (costing >$20m) to build a single application, developers could simply code applications on top of the Ethereum machine. As this new layer was added, it decoupled trust and innovation. The cost of creating a Trusted Network moved to zero as applications could suddenly rely on Ethereum’s Trust Network and focus solely on building great applications. As a result, the cost of new innovation decreased significantly, leading to more innovation. Suddenly, anyone could build applications on top of Ethereum’s Trust and Consensus Layers and be assured that the transactions are included in the public database ledger.
Creating Smart Contracts
Smart contracts are computer programs stored on a blockchain that run when predetermined conditions are met, following the idea of “If action A happens, action B happens automatically”. Through the application of smart contracts, one removes the need to trust multiple parties in the process of buying or doing something. Rather than having to trust a third party that action B happens, users know that action B happens automatically. As a result of smart contracts, dApps (decentralized applications) were born. dApps are applications that run using smart contracts for automation on top of a decentralized network. Rather than needing to trust a middleman to execute actions, actions of any kind could now be executed automatically through code.
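The “if action A happens, action B happens automatically” idea can be sketched in plain Python. The `Escrow` class and all names here are hypothetical illustrations; real smart contracts are written in languages like Solidity and enforced by the network rather than by a single trusted operator:

```python
class Escrow:
    """Toy escrow: payment is released by code, not by a trusted middleman."""

    def __init__(self, seller: str, price: int):
        self.seller = seller
        self.price = price
        self.balances: dict[str, int] = {}  # token balances per address
        self.delivered = False

    def deposit(self, buyer: str, amount: int) -> None:
        self.balances[buyer] = self.balances.get(buyer, 0) + amount

    def confirm_delivery(self, buyer: str) -> None:
        # Action A (delivery confirmed) -> Action B (payment released) automatically
        if self.balances.get(buyer, 0) >= self.price:
            self.balances[buyer] -= self.price
            self.balances[self.seller] = self.balances.get(self.seller, 0) + self.price
            self.delivered = True

escrow = Escrow(seller="0xSeller", price=100)
escrow.deposit("0xBuyer", 100)
escrow.confirm_delivery("0xBuyer")   # funds move without any third party
```

The point of the sketch: once the condition in `confirm_delivery` is met, the transfer happens deterministically - no party can choose to withhold action B.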
Creating different token properties
The Bitcoin blockchain (= decentralized database) could only store information related to the Bitcoin token (i.e.: Address bc1qx…y2 owns 0.43BTC). While this represents an outstanding innovation in terms of decentralized database design, it had its limitations in terms of real-world applicability. As mentioned, in the off-chain world there are billions of different information types. For example, there are indistinguishable (fungible) assets such as banknotes, which can be represented with Ethereum’s ERC-20 token type. However, there are also unique assets such as contracts or paintings. On-chain, those can be represented with the ERC-721 token type. Allowing real-world assets to be brought on-chain while keeping their unique properties increased the usability of the network. As a result, every data point could be represented on the blockchain.
In conclusion, by adding a new computing layer, Ethereum split innovation and trust. Suddenly anyone could cheaply build applications on Ethereum’s Trust Network, reducing the cost of innovation dramatically. Due to the open nature of the network, anyone could build - a textbook example of permissionless innovation. By creating smart contracts, Ethereum enabled new forms of automated applications. By creating different token types, it created the opportunity to represent any off-chain asset on Ethereum’s decentralized supercomputer.
How Ethereum’s machine works
To understand how the Ethereum machine runs - specifically how it creates blocks and executes transactions - one needs to understand the respective layers. As in Web2, we continue to split the stack between protocols and applications. For now, we only consider the monolithic version of Ethereum - basically the case where everything is done “in-house”. Later on, we will also look at modular versions of the blockchain where one or more layers are “outsourced” for technical (scalability) reasons.
All of Ethereum’s other layers are built on top of its Trust Network. Ethereum’s Trust Network consists of ~11600 nodes running ~500k validator nodes. A node is a computer in a Peer-to-Peer network which maintains a view of the beacon chain and the shard chains. Validator nodes actively propose and validate new blocks in Ethereum’s PoS consensus system and are responsible for storing data, processing transactions, and adding new blocks to the blockchain. A node can run several validator nodes. Each validator node runs two types of software (= clients): the Execution client and the Consensus client. Ethereum relies on this multi-client architecture to increase security and liveness. Suppose a defect is isolated to a single client. In that case, the network can continue to operate because the nodes running unaffected clients keep the network going while the impacted nodes switch to another, unaffected client. Before exploring the functions of the other layers, let’s look at how they interoperate in a concrete example.
Step 1: A transaction happens. For example, a user sends a token or triggers a Smart contract in an exchange dApp to swap ETH for (wrapped) BTC.
Step 2: The transaction is submitted to an Ethereum Execution Client (= one of the two softwares running on the validator nodes). The client then verifies the validity of the sender (i.e.: Does the sender have enough ETH to pay the gas fee for the transaction?)
Step 3: If the transaction is valid, the Execution Client adds it to its local “mempool” (= short for memory pool). The “mempool” contains a list of pending transactions.
Step 4: Of all the ~500k validator nodes, one validator node is randomly selected - thereby becoming the block proposer - to build and broadcast the next block to the network. Next, the Execution Client of that node bundles transactions from the “mempool” into an “execution payload” and executes them locally to change the status of the blockchain (= state change). The information is passed to the Consensus Client of the node, where the new block is created.
Step 5: Other nodes receive the new block via the Consensus Layer “gossip” network. The other nodes then pass the new block to their Execution Client. Within the Execution Client, the transactions are downloaded and re-executed locally to ensure the proposed block (which changes the state of the blockchain) is valid. If the block is valid, it reaches finality (= it becomes part of the blockchain and cannot be changed) - basically, the blockchain has reached a new state. On another note, in its monolithic state, Ethereum is currently only able to achieve ~8-15 transactions per second on average. The reason is that each Execution Client needs to download the full transaction data and re-execute the block.
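The five steps above can be sketched as a toy model. The helper names (`submit`, `apply`) are illustrative, and signatures, gas accounting and the gossip network are ignored; what the sketch preserves is the core invariant that every node re-executes the block and arrives at the same state:

```python
import random

mempool = []                                    # pending transactions (Step 3)
validators = [f"validator-{i}" for i in range(5)]
chain = []                                      # the agreed-upon blockchain

def submit(tx: dict, balances: dict) -> None:
    """Steps 1-3: validate the sender, then add the tx to the local mempool."""
    if balances.get(tx["from"], 0) >= tx["amount"]:
        mempool.append(tx)

def apply(txs: list, balances: dict) -> dict:
    """Deterministic state transition: every client computes the same result."""
    new = dict(balances)
    for tx in txs:
        new[tx["from"]] -= tx["amount"]
        new[tx["to"]] = new.get(tx["to"], 0) + tx["amount"]
    return new

balances = {"alice": 10}
submit({"from": "alice", "to": "bob", "amount": 4}, balances)

proposer = random.choice(validators)            # Step 4: random block proposer
block = {"proposer": proposer, "txs": list(mempool)}

# Step 5: every other node re-executes the block before accepting it
proposed_state = apply(block["txs"], balances)
if all(apply(block["txs"], balances) == proposed_state
       for v in validators if v != proposer):
    balances = proposed_state                   # finality: new state agreed
    chain.append(block)
```

Because `apply` is deterministic, re-execution by every node is enough to reach agreement - which is also exactly why throughput is bounded by what a single node can re-execute.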
Several layers are required to operate with each other to include a transaction in the Ethereum blockchain and to change the latest state of the blockchain. Let’s look at each of the layers.
Trust Network: On the base layer, there are physical nodes (node = computer in a Peer-to-Peer network). Within the Ethereum network there are three types of nodes - nodes that can propose blocks (Full nodes), nodes that cannot (Light nodes) and nodes that store a history of Ethereum’s states (Archive nodes) (i.e.: state at time t, state at time t+1). For simplicity, this report will refer to Full nodes when talking about nodes. A Full node can run several validator nodes - which explains why ~11600 Full nodes on the Ethereum network can run ~500k validator nodes. As mentioned, each Full node runs two softwares (clients): the Execution and the Consensus client.
Full nodes: Full nodes can run several validator nodes, which propose and verify blocks, thereby earning protocol rewards (~6% APY paid in ETH). One Full node (= computer in a Peer-to-Peer network) can run several validator nodes (= smaller computers that each run the two software clients). Because of this responsibility, the Ethereum network requires economic commitment, which means that each validator node needs to commit (= stake) 32 ETH (~$52k). If the node or its software behaves maliciously, the validator node operator gets punished and loses part of its stake - this process is called slashing.
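A back-of-the-envelope calculation of the staking economics implied by the figures above (32 ETH stake, ~6% APY, and the ~$1,625 ETH price implied by the $52k figure - all of which move with the market):

```python
# Staking economics sketch, using the approximate figures quoted in the text.
STAKE_ETH = 32
APY = 0.06
eth_price_usd = 52_000 / STAKE_ETH           # ~$1,625 per ETH, as assumed above

yearly_reward_eth = STAKE_ETH * APY          # 1.92 ETH per validator per year
yearly_reward_usd = yearly_reward_eth * eth_price_usd

print(round(yearly_reward_eth, 2))           # 1.92
print(round(yearly_reward_usd))              # 3120
```

So at these assumed prices, an honest validator earns on the order of $3k per year per 32-ETH stake, while a slashed validator loses part of the $52k principal - the economic asymmetry that makes honest behavior the rational choice.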
Light nodes: While the Execution client of Full nodes downloads every block (see #5), Light nodes only download block headers, which contain only summary information about the contents of the blocks. Light nodes enable users to participate in the Ethereum network without the powerful hardware or high bandwidth required to run Full nodes (i.e.: on a phone). As Light nodes do not download all the block data, they do not participate in consensus finding as validators (see #5).
Archive node: Archive nodes store historical states. Those nodes are needed if an application wants to query a concrete state in the past. Archive data runs into terabytes, making archive nodes less attractive for average users. However, they can be useful for services like block explorers, wallet vendors, and chain analytics.
Consensus Client: The Consensus Client receives the proposed block from the Execution Client. It downloads all transactions from the newly proposed block and re-executes them to confirm compliance with the consensus rules (basically validating/attesting the block). Once attested, the Consensus Client shares the block with the Execution Client, which in turn shares it with the broader network.
Execution Client: The Execution Client software listens for new transactions broadcast in the network (EVM) - for example, someone triggers an automated Smart contract or transfers a token. As a first step, the Execution Client verifies that the transaction is possible (i.e.: Can the sender pay the network fees?). Once verified, it sends the transaction to the “mempool”. This process of finding new transactions, verifying their validity and sending them to the “mempool” is done by the Execution Clients of all Full nodes/validator nodes.
Every couple of seconds, a new validator node is chosen by a random algorithm to build a block. In this case, the Execution Client of this specific node bundles the transactions together into a block and sends it to the Consensus Client. Once the Consensus Client attests (= confirms) the block, the Execution Client proposes it to the network.
All the other validator nodes receive the proposed block, and their respective Execution Clients download the transaction data and re-execute the transactions. If the execution is correct, all the other Execution Clients agree on a new (changed) state of the blockchain.
In summary, this means that Execution Clients have the following tasks: First, they listen for transactions in the network and add them to the “mempool”. If their Full node/validator node is the next block proposer, their Execution Client proposes a block by bundling transactions. If their Full node is not the block proposer, they download the transaction data, re-execute the transactions and verify the proposed block.
Data Availability Layer: The core tasks of blockchains include executing transactions (done via Execution Clients), achieving consensus on transaction ordering (done via Consensus Clients), and guaranteeing the availability of transactional data to all nodes on the blockchain. Data availability is important because it allows nodes to independently verify transactions and compute the blockchain’s state without the need to trust one another.
The current scalability issues stem from the requirement that every Execution Client downloads and verifies the data, which reduces throughput (= few transactions per second). In addition, using on-chain storage for an increasingly large amount of information limits the number of entities that can run Full node infrastructure, which creates centralisation risk if only expensive computers can run nodes. Making data available as the Ethereum blockchain grows is one of the challenges of the network.
Turing-complete machine/Ethereum Virtual Machine (EVM): Bitcoin’s data structure consists of accounts (Address bc1qx…y2) and balances (0.43BTC). If a user wants to change the state of accounts and balances, they need to initiate a transaction. In contrast, Ethereum not only has Peer-to-Peer transactions but also uses automated smart contracts in its applications. While transactions are initiated by the owner, smart contracts (which are based on open-source code) are run by the Ethereum Virtual Machine (EVM). The EVM is a computation engine that is in charge of deploying and executing smart contracts, and updating the state for every new block added to the Ethereum blockchain. Conceptually, the EVM is a piece of software that sits on top of the node infrastructure of the blockchain and performs critical functions such as running code used for dApps and Smart contracts. By being positioned between the nodes and the smart contracts, the EVM can compile different kinds of smart contract code into a standard format known as bytecode. This code makes the smart contracts readable by the Ethereum network and therefore enables those transactions to be recorded by the Ethereum nodes. This guarantees that dApp data is included in the blockchain. Think of it like being the logistics service that runs the errands between smart contracts and users, making sure that all transactions are included. As mentioned above, by creating the EVM, Ethereum developers can build on top of it - and are not required to build their own trust network.
Application protocols: A protocol is defined as a set of predefined rules (run by the EVM) that dictates how a blockchain operates. It also defines the rules which all network participants must follow so that the blockchain can function. As a result, application protocols are systems of rules that allow applications to run on the blockchain. Most applications have their own decentralized application (dApp), which is the part users interact with through the User Interface (UI). In theory, however, one can interact with the open-source protocol without the UI by learning the programming language - something that is almost never done in practice. Having great UI/UX is essential for protocols to attract users. For example, let’s say a lending protocol like Aave does not have a UI. Its liquidity would be gone in an instant, because the vast majority of people would not bother learning a programming language to access it.
Decentralized Applications (dApps): dApps are website UIs that connect the user’s browser with the underlying protocol, its smart contracts, and algorithms hosted on a blockchain network. In other words, the protocol can exist without a web interface, while the web interface would not be useful without the protocol. As most protocols are open-source, in theory, anyone can build their own dApp on top of an existing protocol. For example, in August 2022, the Office of Foreign Assets Control (OFAC) of the U.S. Department of the Treasury blacklisted Tornado Cash, an open-source privacy protocol which allowed users to obscure the trail back to the funds’ original source. Any public address (i.e.: x0382…273) that had been using the service was blocked from using the front-ends of other dApps (such as Uniswap). However, as Uniswap is a permissionless protocol, blocked users could still use the underlying protocol through the code base. While the dApp belongs to an organization, the underlying protocol code is free and can be used by anyone.
Modularisation of Blockchains
Up until the emergence of Ethereum in 2014/2015, only single-application blockchains - such as Bitcoin, Filecoin and Namecoin - existed. With Ethereum, the first “quasi-modular” blockchain was developed. This was achieved as Ethereum replaced the original Bitcoin Script with its own Execution Layer, thus splitting innovation and trust. This modularisation reduced the costs of innovation significantly, as there was no need to bootstrap an expensive Trust Network and stakeholders could simply build on Ethereum’s Trust Layer (= its ~11600 Full nodes and ~500k validator nodes) and Consensus Layer. However, Ethereum itself has remained a monolithic blockchain, as it has kept the Consensus/Settlement Layer, the Execution Layer and the Turing-complete machine inside its stack.
However, what if developers wanted even more modularity? For example, a different consensus mechanism for faster block finality? Or applications that require more, but less sensitive, data points (i.e.: a social media app)? Or a completely different virtual machine to power more gaming applications? As the list of potential use cases grew, it became apparent that Ethereum’s “fixed”, monolithic stack had reached some limitations. Ethereum had built the strongest Decentralized Trust Layer powering the biggest and most trusted global consensus machine - but what if an application (i.e.: a game) does not need global consensus but just in-game consensus?
Once again, the problem repeated itself. If developers wanted more modularity, they had to bootstrap their own Trust Network and adjust the Consensus Layer and the Virtual Machine for their respective purposes. As a result, Ethereum’s limitations gave rise to more modular Alternative (Alt)-Layer 1s in 2018/19. Over the next few years, many new Trust Networks appeared with vastly different properties (around the Execution Layer, Consensus Layer, Settlement Layer and Data Availability Layer), allowing developers to create novel products. Let’s recap the different layers:
Execution Layer: Provides an environment for dApps and processes their transactions.
Consensus Layer: Determines the sequence of transactions. Inside Ethereum, Consensus is achieved through PoS (Proof of Stake - basically people use their ownership stake to vote - in case of bad behavior, ownership is reduced), while the Bitcoin network uses PoW (miners compete to solve a mathematical equation). The Consensus Layer agrees on the contents and ordering of transactions.
Settlement Layer (usually combined with Consensus Layer): Provides a layer for finalizing transactions, settling disputes, validating proofs, and bridging between different execution layers.
Data Availability Layer: Nodes receive a block from a block producer and check if the data (transactions) is publicly available. Basically, the DA layer guarantees the availability of transaction data.
Monolithic blockchains are blockchains that handle all three components of the modular stack (execution, consensus/settlement, data availability) themselves - for example Ethereum or Bitcoin. Modular blockchains, on the contrary, outsource at least one of the three components to an external blockchain instead of handling every component locally. Due to this modular design, blockchains have become more flexible in their design principles. This flexibility also allows modular chains to be easily created, mixed, or replaced independently within a modular stack. Just like Lego bricks, modular blockchains can be independently assembled for each use case.
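The “Lego brick” idea can be expressed as a simple composition of layers. The `BlockchainStack` type and the concrete pairings below are illustrative only, not a description of any real chain’s architecture:

```python
from dataclasses import dataclass

@dataclass
class BlockchainStack:
    """A chain described as a combination of interchangeable components."""
    execution: str
    consensus_settlement: str
    data_availability: str

    def is_monolithic(self, name: str) -> bool:
        """Monolithic = all components handled by the same chain."""
        return {self.execution, self.consensus_settlement, self.data_availability} == {name}

# Monolithic: Ethereum handles everything in-house.
ethereum = BlockchainStack("Ethereum", "Ethereum", "Ethereum")

# Modular: a roll-up keeps its own execution layer but outsources
# consensus/settlement and data availability to Ethereum.
rollup = BlockchainStack(execution="Rollup chain",
                         consensus_settlement="Ethereum",
                         data_availability="Ethereum")

assert ethereum.is_monolithic("Ethereum")
assert not rollup.is_monolithic("Rollup chain")
```

Swapping any single field yields a different stack - which is exactly the flexibility the modular thesis is about.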
This flexibility meant that, for the first time in the history of blockchains, an application can pick its infrastructure according to its technological needs - and no longer has to refrain from building functionality because the underlying technology cannot support it. As each modular component only does a few things, it can do them very well. This allows developers to create their own blockchain stack suited to their novel application.
There are many different types of modular stacks. For example Ethereum started as a monolithic blockchain, but has become more modular with the help of roll-ups. Thereby, the network has moved towards a roll-up centric roadmap to solve its scalability issues.
Ethereum’s Scalability Issues and the Blockchain Trilemma
As a recap, a blockchain is a distributed database where blocks of data are organized in chronological order. The basic idea is that with the help of decentralized blockchains, users do not need to rely on trusting third parties for networks and markets to function. In order for this to be achieved, a blockchain needs three major properties: security, scalability and decentralization. As blockchain technology is increasingly adopted, the blockchain must be able to handle more data at faster speeds so that the network does not become too slow or too expensive to use. Popularized by Ethereum’s co-founder Vitalik Buterin, the Blockchain Trilemma refers to the idea that it is difficult for blockchains to achieve optimal levels of all three properties simultaneously, as increasing one usually weakens another. We will use Ethereum as an example to understand its scalability boundaries and how it is using roll-ups and modular blockchains to overcome them.
On the security aspect, Ethereum is considered one of, if not the, most secure networks. Through the Merge in September 2022, the network moved from PoW to PoS, which added additional economic and technological security. Relying on ~11600 node operators and ~500k validators around the world, 16.3m ETH (~$25bn) are staked as of January 2023. In order to attack the network, a potential attacker would need 51% of the staked ETH (~$12.5bn). With 51%, the attacker could use their voting power to ensure their “copy” of the database would be used for future block building, which would render the “old database” and its values worthless. However, apart from being economically infeasible (as the price of ETH would increase drastically with new inflows in the billions), the switch from PoW to PoS gives the community additional flexibility in mounting a counter-attack. For example, the honest validators could decide to keep building on the minority chain (aka the “old database”) and ignore the attacker’s fork while encouraging apps, exchanges, and pools to do the same. They could also decide to forcibly remove the attacker from the network and destroy their staked ETH. These are pretty strong economic defenses against a 51% attack. Against other attack vectors, the Ethereum network has further proposed defenses. For reference, it would cost an attacker an estimated $951,552 to hijack the Bitcoin network for 1h, as it relies on the different consensus algorithm PoW.
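Using the staking figures above, here is a rough lower bound on the cost of a 51% attack on PoS Ethereum. It deliberately ignores the price impact of accumulating that much ETH, which, as noted, would drive the real cost far higher:

```python
# Rough 51%-attack cost, using the January 2023 figures quoted in the text.
staked_eth = 16_300_000                 # ~16.3m ETH staked
staked_usd = 25_000_000_000             # ~$25bn at the time

attack_share = 0.51
attack_cost_usd = staked_usd * attack_share   # naive lower bound, pre price impact

print(f"${attack_cost_usd / 1e9:.2f}bn")      # $12.75bn
```

The text rounds this to ~$12.5bn; either way, the attacker must buy roughly half of all staked ETH on the open market, and the community can still slash or fork them out afterwards.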
On the decentralization aspect, Ethereum is also considered top of its class. Ethereum currently runs on ~508,000 validator nodes on ~11600 computers, with 43% located in the US, 12% in Germany, 4.5% in Singapore and 4.1% in the UK. On the client side, Ethereum relies on a wide range of consensus client software (40% Prysm, 35% Lighthouse, 19% Teku), while having some centralisation issues on the execution client side (69% Geth, 14% Nethermind, 10% Erigon).
On the scalability side, Ethereum’s limitations became obvious during the latest bull market, when users sometimes had to pay >$200 to conduct a transaction - clearly too much, as Ethereum aims for transaction costs <$0.05. On a monthly average, Ethereum processes around 12 transactions per second, clearly too little for its aim to become the world’s global settlement layer. The main reason for its scalability issues is that every Execution Client (of every validator node) has to download the full transaction data of the proposed block before verifying it. While downloading and re-executing data limits scalability, it ensures that every block includes only valid transactions, making the network extremely secure. In addition, keeping data open and downloadable is a fundamental property of the Ethereum blockchain: moving from an open-source to a closed-source design is not possible, as everyone must be able to download, and thus access, the underlying Ethereum data. In order to create the large-scale decentralized service Ethereum envisions, the protocol must allow anyone who downloads the software and the database to become a node and hold a copy of the database. This is why the Data and Consensus Layers of Ethereum cannot be made proprietary like their respective counterparts in the Web2 stack.
However, it is precisely because of this security mechanism of the Execution Client that Ethereum runs into scaling issues. Since Full/validator nodes download and re-execute every transaction to verify it follows the rules of the blockchain, Ethereum cannot process more transactions per second without increasing the hardware requirements of running a Full/validator node: better hardware ⇒ more powerful Full/validator nodes ⇒ Full/validator nodes can check more transactions (= more scalability) ⇒ bigger blocks with more transactions in them. However, as the hardware requirements of running Full/validator nodes increase, the number of Full/validator nodes would drop, reducing decentralization. This would make Ethereum less secure, as fewer people check the work of block builders to keep them honest. Voilà, the Blockchain Trilemma has appeared once again. In order to solve it, Ethereum has moved towards a roll-up centric roadmap, with many developer teams around the world actively working on roll-up solutions to increase scalability.
Roll-ups aim to combine scalability with the security and decentralization of Ethereum. To do so, roll-ups execute transactions outside of the main Ethereum network but post the transaction data back to the Ethereum network, thereby still deriving their security from the Ethereum protocol. In practice, roll-ups execute transactions off-chain on a roll-up-specific chain (= their own execution layer). Afterwards, the roll-up compresses the transaction data and sends it back to the Ethereum chain, relying on Ethereum’s consensus and the security derived from its global trust network - this means that the data from the roll-up is taken into Ethereum’s Consensus Layer. There are several different types of roll-ups: Enshrined roll-ups (i.e.: zkEVM) and Smart Contract roll-ups (i.e.: Optimism, Arbitrum) for the execution layer, Smart Contract Settlement roll-ups (i.e.: the L2 Starknet) with their own Smart Contract Recursive roll-ups (i.e.: a game-specific L3), Sovereign roll-ups and Validiums (i.e.: Immutable X). For more detail, we recommend Jon Charbonneau’s The Complete Guide to Rollups. We also provide deeper insights into Ethereum’s roll-up solutions on page 58.
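A toy picture of the roll-up pattern - execute many transactions off-chain, post only a compressed batch back to the L1 - assuming an illustrative zlib-over-JSON encoding rather than any real roll-up’s compression scheme:

```python
import json
import zlib

# 1,000 transactions executed off-chain on the roll-up's own execution layer.
txs = [{"from": f"user{i}", "to": "dex", "amount": i} for i in range(1000)]

raw = json.dumps(txs).encode()    # full transaction data
batch = zlib.compress(raw)        # the compressed batch posted back to Ethereum

print(len(raw), len(batch))       # the batch is a small fraction of the raw size
```

Because Ethereum only has to store and order the compressed batch (plus a validity or fraud proof, omitted here), many off-chain transactions share the cost of a single L1 posting - which is where the scalability gain comes from.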
One of the most exciting new developments within Ethereum is EIP-4844 (Ethereum Improvement Proposal), called Proto-Danksharding. With this update, roll-ups will be able to post data bundles under a new transaction type instead of using the current "calldata" (= storage which persists on-chain forever). The new transaction type carries a blob, basically a large amount of data - inaccessible to the execution layer - which is much cheaper than calldata. Blobs are around 10x larger than blocks, but they are pruned from the blockchain after some time. This means that a new data-availability layer will arise (think of servers that can provide data when needed). As a result, the scalability of Ethereum increases by an order of magnitude.
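To see why this matters economically, the snippet below applies the calldata pricing rule introduced by EIP-2028 (16 gas per non-zero byte, 4 gas per zero byte); the comparison to blobs is qualitative, since blob data is priced in its own, separate fee market:

```python
def calldata_gas(payload: bytes) -> int:
    """EVM calldata pricing (post-EIP-2028): 16 gas per non-zero byte,
    4 gas per zero byte - and the data persists on-chain forever."""
    return sum(16 if byte else 4 for byte in payload)

payload = bytes(range(256)) * 4  # ~1 KB of mostly non-zero bytes
gas = calldata_gas(payload)
assert gas == 16336
# Blob data under EIP-4844 bypasses this pricing entirely: it is priced
# in its own fee market and pruned after a retention window, which is
# why posting roll-up data as blobs is dramatically cheaper.
```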
Eigenlayer - Securing Middlewares & Alt-1s with Ethereum’s Trust Network
Barriers to innovating middleware protocols
Let's assume a developer team wants to create their own application. As their application is built on Ethereum, they can rely on Ethereum's Trust Network. This means they can be sure that Ethereum's network will continue to process the app's transactions and include them in the blockchain. For that service, the application has to pay gas fees to Ethereum and its trust network. Due to Ethereum's recent transition from PoW to PoS, Ethereum's block making service is extremely secure: ~$26bn of ETH is staked at the moment, and attackers would need at least 51% of it (~$13bn) to manipulate the network. In short, the app developers can trust Ethereum's service that their transactions are included in the Ethereum database.
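The cost-of-corruption arithmetic above can be made explicit; the `cost_of_corruption` helper and the flat 51% threshold are simplifying assumptions, not a precise attack model:

```python
def cost_of_corruption(total_staked_usd: float, threshold: float = 0.51) -> float:
    """Minimum capital an attacker would need to control the given
    share of the stake securing the network (simplified model)."""
    return total_staked_usd * threshold

# With ~$26bn of ETH staked, the attack threshold is roughly $13.26bn
assert abs(cost_of_corruption(26e9) - 13.26e9) < 1.0
```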
However, in order to run their dApp, the developer team also needs other middleware protocols that provide services such as Oracles (data feeds that bring data from off-chain sources and put it on the blockchain for smart contracts to use), a Data Availability layer (storing the data the app produces) or Bridges (allowing data and crypto asset transfers across different chains). Rather than building all of those services themselves, application developers can rely on existing middleware providers - just like they rely on Ethereum's block making service.
However, existing middleware providers face a micro-economic problem: middleware protocols in 2023 face the same problem every blockchain faced in 2015. Back then, applications had to bootstrap their own trust network, as they could not utilize Bitcoin's trust network due to its single-application nature. This all changed with the development of Ethereum - suddenly, applications could use Ethereum's trust network for their own purposes. Middleware protocols, however, are still required to bootstrap their own trust network - even in 2023. This means that every middleware protocol has to create its own trust network before building out its services.
For example, a middleware protocol that provides data storage would need to build its own incentive program to make it economically infeasible for a 51% attacker to hijack the network. This is called bootstrapping a Trust Network. One way would be to issue a token and provide token holders with a staking mechanism: users who provide the service of storing data correctly receive more tokens, while those who behave maliciously lose their tokens - similar to Ethereum's PoS consensus mechanism. In order to fend off attackers, the middleware protocol needs to achieve the highest economic security possible. The reason is that higher economic security (i.e.: a higher market cap of the protocol) makes it more costly for attackers to acquire 51% of the token supply, hijack the network and harm the protocol. While there are many ways to achieve high economic security, the most common method is to reward stakers (who secure the network) with more and more tokens. This works very well in a bull market, as token prices rise and the protocol achieves higher economic security. In a bear market, however, additional token inflation leads to more supply on the market, resulting in even faster dropping prices and lower economic security. In addition to being expensive, bootstrapping a trust network for a middleware protocol is also time consuming, requires a different skill set and is clearly a distraction for middleware developers whose job it is to build a novel middleware application.
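The bootstrapping dynamic described above can be sketched as a toy staking model; the class and method names are our own, and the reward and slashing rates are purely illustrative:

```python
class MiddlewareStaking:
    """Toy model of a bootstrapped trust network: honest service providers
    earn newly issued tokens, malicious ones are slashed - mirroring PoS."""

    def __init__(self) -> None:
        self.stakes: dict[str, float] = {}

    def stake(self, who: str, amount: float) -> None:
        self.stakes[who] = self.stakes.get(who, 0.0) + amount

    def reward_honest(self, who: str, rate: float = 0.05) -> None:
        # Inflationary rewards: attractive in a bull market, dilutive in a bear
        self.stakes[who] *= 1 + rate

    def slash(self, who: str, fraction: float = 0.5) -> None:
        # Misbehaviour burns part of the stake
        self.stakes[who] *= 1 - fraction

    def economic_security(self) -> float:
        # The cost of a 51% position grows with total value staked
        return 0.51 * sum(self.stakes.values())

# Usage
pool = MiddlewareStaking()
pool.stake("honest_node", 100.0)
pool.stake("malicious_node", 100.0)
pool.reward_honest("honest_node")   # 100 -> 105
pool.slash("malicious_node")        # 100 -> 50
assert abs(pool.stakes["honest_node"] - 105.0) < 1e-9
assert pool.stakes["malicious_node"] == 50.0
```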
Switching back to the point of view of the developer team and their dApp: the developer relies on the “gold standard” Ethereum for block making security (secured by ~$26bn, with a cost of corruption of ~$13bn), while also relying on several middleware protocols with significantly lower security (i.e.: a data storage protocol secured by $1bn). Despite relying on the highest block making security assumptions on Ethereum, the application's minimum security assumption is the $1bn of the data storage protocol - a clear security risk for the application.
Enter Eigenlayer
In late 2022, we saw the emergence of Eigenlayer, which built a mechanism to leverage an existing Trust Network to “do other things it was not designed to do” - basically following Ethereum's playbook of 2015. In 2015, Ethereum allowed applications to use its Trust Layer. In 2023, Ethereum allows middlewares to use its Trust Layer through Eigenlayer's protocol.
In more detail, Eigenlayer is a 2-sided marketplace which leverages Ethereum's Trust Network and connects it with middleware providers (Oracles, Bridges, etc.) who are looking for additional security. In practice, this means that Eigenlayer allows Ethereum stakers to “re-stake” their invested capital: Ethereum stakers commit to additional slashing conditions in order to provide additional services that are being built on Eigenlayer (oracles, bridges, data availability layers, new consensus protocols, etc.). In exchange, Ethereum's stakers receive additional yield (on top of the ~6% APY they currently receive from Ethereum). This means that Ethereum's stakers, who secure the network and propose and validate new blocks, can use their economic power to secure additional middleware such as oracles, bridges, sidechains or other consensus protocols. For Ethereum stakers, this means additional yield (up to 2-3x the current yield); for middleware providers, it means receiving trust through Ethereum's trust network. As a result, middleware providers no longer need to bootstrap their own trust network at great cost, which will lead to more innovation on the middleware layer.
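A minimal sketch of this 2-sided marketplace follows; all names and yields are illustrative assumptions, not Eigenlayer's actual interface or rates:

```python
class RestakingMarketplace:
    """Toy marketplace in the spirit of Eigenlayer: stakers opt into
    extra services for extra yield, accepting extra slashing risk."""

    def __init__(self, base_yield: float = 0.06) -> None:
        self.base_yield = base_yield                  # Ethereum staking APY
        self.stakes: dict[str, float] = {}
        self.opt_ins: dict[str, set[str]] = {}
        self.service_yields: dict[str, float] = {}

    def stake(self, who: str, amount: float) -> None:
        self.stakes[who] = amount
        self.opt_ins[who] = set()

    def register_service(self, name: str, extra_yield: float) -> None:
        self.service_yields[name] = extra_yield

    def restake(self, who: str, service: str) -> None:
        # Opting in commits the staker to the service's slashing conditions
        self.opt_ins[who].add(service)

    def total_yield(self, who: str) -> float:
        return self.base_yield + sum(self.service_yields[s] for s in self.opt_ins[who])

    def security_of(self, service: str) -> float:
        # The middleware service inherits all stake opted into it
        return sum(a for w, a in self.stakes.items() if service in self.opt_ins[w])

# Usage: one staker secures two middleware services with the same capital
market = RestakingMarketplace()
market.register_service("oracle", 0.04)
market.register_service("bridge", 0.03)
market.stake("alice", 32.0)
market.restake("alice", "oracle")
market.restake("alice", "bridge")
assert abs(market.total_yield("alice") - 0.13) < 1e-9  # ~2x the base yield
assert market.security_of("oracle") == 32.0
```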
Once again, the trend of further modularisation continues: In 2015, Ethereum modularised trust and innovation by building an operating system instead of the Bitcoin script, thus enabling the cheaper development of applications. In 2018/19, Alt-L1s further modularised trust and innovation by bootstrapping their own (weaker than Ethereum's) Trust Networks to build customized consensus protocols and virtual machines for more use cases. In 2023, we will see the best of both worlds: merging Ethereum's Trust Network with the Alt-L1s' highly customisable consensus protocols and virtual machines. How does that work? Eigenlayer's mechanism allows existing networks to leverage their trust layer to do other things it was not designed to do. Concretely, Eigenlayer provides Ethereum's trust and economic security to other protocols which want to build on top of it. We highly recommend diving deeper into Eigenlayer; it is very well explained in a graph presented by the Eigenlayer founder at an a16z presentation.
Even though we continue to see an additional modularisation of the stack (Trust network, Consensus Layer, Execution Layer, Virtual machines, Protocol Apps and dApps), we believe Ethereum remains the key asset and economic powerhouse of the Crypto industry - giving trust, receiving data, enabling permissionless innovation. Although more and more innovation will focus on the Protocol Application and dApp layer, Ethereum’s Trust Network will give credibility to the whole stack. With the help of Eigenlayer, Ethereum has the chance to build the biggest Trust Network in the world while being equally flexible enough to house all other Consensus Protocols on top. Developers can focus on building on top of the trust network, rather than working on the challenging task of bootstrapping one. This means that Ethereum essentially becomes the (Trust)base Layer of the whole Crypto industry.
How Crypto Networks coordinate themselves
In the past paragraphs, we looked at Bitcoin's Trust Network and how Ethereum built a decentralized supercomputer on top of its own trust network. This split innovation and trust, which gave rise to many novel applications. Due to the monolithic nature of Ethereum and its scaling limitations, we have seen an increasing modularisation of blockchains, eventually pushing Ethereum towards a roll-up centric roadmap. Going forward, we believe that Ethereum will continue to be at the base and center of Crypto and blockchain innovation, as seen in the case of Eigenlayer, which uses Ethereum's Trust Network to allow permissionless innovation on the middleware layer. One key question remains - how do crypto networks coordinate their decentralized actors?
5. Comparing the Web3 Stack with Web2 Stack
In the last chapters, we highlighted how Bitcoin and Ethereum rely on open databases, how these databases are kept open and how they are kept operational within a distributed community through token-economic incentives. These different product and technology design choices have inevitably led to a different value stack in Web3.
The Web3 stack enables Open Innovation through Composability
Like in Web2, we split the Web3 stack into Protocols and Applications. At the bottom of the Web3 stack, there is the - extensively discussed - Trust Network, relying on thousands of miners and validators who reach consensus on the latest state of the blockchain.
On top, there are overlay/middleware networks that enhance the functionality of the underlying Layer 1s (L1s), such as Ethereum. For simplicity, we define an L1 as a monolithic blockchain consisting of its Consensus/Settlement Layer, its Execution Layer as well as its Data Availability Layer. An overlay network could be a specific on-chain solution for Data Availability, Execution or Consensus, such as roll-ups. Roll-ups operate their own Execution Layer, which posts their transaction results on the L1. This gives Ethereum additional scalability, while providing Ethereum's security to the roll-up, as L2 transactions are included in the L1 consensus blockchain. In general, overlay networks mostly refer to additional blockchain infrastructure that reduces the technical limitations of the original stack. Other examples would be Bridges, which allow users to move assets between different networks, or Oracles, which bring off-chain data onto the blockchain network.
On top of the overlay networks there are different (application) protocols. As a recap, protocols are code-based systems or smart contracts that allow applications to run on the blockchain. Due to the Virtual Machines (i.e.: the EVM), those protocols are fully automated - basically unstoppable. On top of the (application) protocols, there are dApps, which are user interfaces that connect the user's browser with the underlying protocol hosted on a blockchain network. Most protocols have their own decentralized application (dApp), basically a UI (= website), which is the part of the stack most users interact with. However, in theory, users could interact with the open-source protocol directly, without using the UI, by learning the programming language. As the vast majority of protocols are open-source, anyone can build their own dApp on top of an existing protocol.
Due to the open-data/code nature of blockchains, everything in Web3 is open source - the code as well as the underlying data. As mentioned, anyone can build their own dApp on top of a protocol (i.e.: with a different design or color code) - therefore we have split protocols and dApps in the graph above (Exhibit 19). While in Web3 anyone can build on top of protocols, in Web2 existing companies would sue anyone who re-created their user application.
As the vast majority in Web3 is open source, anything can be built upon each other: dApps on top of other dApps and Protocols; Protocols on top of other Protocols and Overlay networks; and Overlay networks on top of other Overlay networks and L1s. This system design principle is called Composability and allows the combination of modular components to create new products and systems. As one of Crypto’s core values, it supports the mission of permissionless innovation. Rather than asking for allowance to build on a new idea, stakeholders can refactor existing ideas and adjust them towards their specific use case.
Composability allows applications to build on top of, with, or next to other applications. Within Decentralised Finance (DeFi = any dApp that deals with money) this is called Money Legos. This design principle allows projects to increase utility for their protocols and tokens, and to create defensible competitive advantages, as it is more difficult to be replaced when the protocol is the foundation for many other projects. In addition, DeFi composability allows protocols to generate exponential growth, as the partners' success contributes to the project's own success. For example, rather than building their own exchange, developers can build on top of the best existing exchange and focus on the segment they are good at (i.e.: user acquisition). Over time, as more applications are built, the cost of additional innovation is significantly reduced, which should result in more innovative projects.
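The Money Legos idea can be illustrated with two toy protocols composed into a third; all names, prices and logic here are invented for illustration:

```python
# Two independent "lego" protocols...
def dex_swap(amount_usd: float, eth_price_usd: float = 2000.0) -> float:
    """Toy DEX: swap stablecoins for ETH at the quoted price."""
    return amount_usd / eth_price_usd

def lend_deposit(positions: dict[str, float], who: str, eth: float) -> None:
    """Toy lending protocol: deposit ETH to start earning yield."""
    positions[who] = positions.get(who, 0.0) + eth

# ...composed permissionlessly into a new one-click product, without
# asking either team for permission - the essence of Money Legos.
def zap_into_yield(positions: dict[str, float], who: str, amount_usd: float) -> None:
    lend_deposit(positions, who, dex_swap(amount_usd))

positions: dict[str, float] = {}
zap_into_yield(positions, "alice", 4000.0)
assert positions["alice"] == 2.0  # $4,000 swapped and deposited as 2 ETH
```

The third function is a new product, yet it required writing no exchange and no lending logic - only the glue between existing, open building blocks.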
Web2’s Big Tech as Bottleneck to Innovation
Comparing the Web2 and the Web3 protocol stack, one can see clear differences. One of the most influential theories comparing the two stacks is Joel Monegro's Fat Protocol Theory. He argues that the original Web2 protocols (TCP/IP, HTTP, SMTP, etc.) produced immense value - however, most of it got captured and re-aggregated on top of the protocol layer, at the application layer, in the form of data layers (think Google, Facebook, Twitter and so on).
Over time, the largest applications have become gatekeepers to the consumer audience, making it difficult for new applications to engage with consumers. Consider, for instance, how Google Search has transformed itself over the past 15 years. Rather than guiding the user to the right online content and driving user traffic (and its economic value) to the website, Google Search increasingly displays the requested information on the search result page itself. Although Google relies on user-generated content, it increasingly channels user traffic away from the original content creators (websites, etc.).
In Exhibit 21, we display how new applications are increasingly pushed into a corner, becoming increasingly unable to access new users. Concretely, this has led to a situation in which startups spend ~12-20% of revenue just on user acquisition, de facto paying Google, Facebook/Instagram, Twitter and TikTok. Over time, Big Tech has done an excellent job to get rid of potential competitors very early on. As they own customer data within their proprietary data silos, they are able to create highly sticky, ever expanding applications, making it increasingly difficult for customers to leave their services.
After all, their services are also extremely compelling (i.e.: using cookies to log into many websites rather than remembering log-in details). As a result, the internet stack - in terms of how value is distributed - contains “thin protocols” (TCP/IP, HTTP, SMTP), “fat data layers” (Google, Amazon, Facebook, TikTok) and “fat applications” (Google, Amazon, Facebook, Twitter, TikTok). While the current structure of the internet stack significantly favors Big Tech companies, providing them with an ever increasing moat of defensibility, new entrants face a significant disadvantage.
Building Moats of Defensibility in Web3
Understanding the history of tech investments, investors have learnt that applications yield higher returns, while investing in the underlying protocols yields lower returns. However, given Crypto/Web3 protocols' ability to store and publish data, the argument has been made that this Web2 investing trend might be reversed in Web3. As data and code are open source, protocols and dApps are able to build products more easily, cheaply and quickly. In addition, it allows them to acquire customers at lower customer acquisition costs.
However, this also leads to more competition. At the moment, it seems that the lower on the stack a project is located, the better its moat of defensibility. Bootstrapping a Trust Network is very expensive, as one needs to convince people to buy and hold the token to secure the network. Not only does this require a large amount of capital, the right team and the right sector niche - there is already quite some competition in the form of other L1s that have been running for years. Most innovation at the bottom of the stack is coming from projects that build on top of existing Trust Networks, providing them with additional functionality (i.e.: Ethereum's roll-ups to achieve scalability), rather than building direct competitors.
Moving up the stack towards the infrastructure/middleware layer, it will be interesting to see the impact of Eigenlayer on infrastructure/middleware projects. Accessing Ethereum’s Trust Network for economic security allows them to focus on building highly technical infrastructure projects rather than bootstrapping a Trust Network - which is a significantly different task. Eigenlayer’s innovation will not only significantly reduce the cost of innovation but it will lead to faster iteration cycles as developers can focus on their core skills. Mid- to long-term, this will lead to ground-breaking innovation on the infrastructure/middleware layer which will enable an increasing number of applications.
At the top of the stack - the (protocol) application layer - the market has seen significant innovation in the past years. Due to composability and open-source data, it is cheap to build applications. In addition, as infrastructure protocols are becoming more powerful and modular, the technical entry barriers for application developers are being significantly reduced. This means that we will see more competition appear in the mid to long-term, which raises the question of how applications can create defensibility if there is no data lock-in like in Web2. In the past years, we have seen some ideas to create user lock-ins:
Merging Web2 and Web3 data. The original idea is to build a unique dataset that allows applications to access users better than anyone else. For example, a protocol that conducts off-chain KYC verification. Once verified, an NFT is issued confirming that the person behind the public address is a qualified investor. This can be built in an open fashion (open database which lists all the addresses that are verified investors) or in a closed fashion (API to hidden database which returns confirmation).
Another example would be the Solana phone which should be able to capture richer amounts of data than any existing crypto network. In return, Solana could then use this data to build a richer database, thus attracting more builders and developers.
Cost moats. The idea is for a company or a protocol to centralize costs and build something others cannot build. For example, Coinbase has invested billions to work with regulators and create proper KYC processes, which is incredibly difficult to replicate.
User-Staking. One of the most significant business model innovations in the space. Users receive tokens for engaging with an application. Those tokens can be re-deployed (= re-invested) in the protocol and generate additional yield for the users. Through this mechanism, users and protocol are aligned in their incentives.
Vertical integrations. Applications or protocol applications which are located on the top of the stack are increasingly vertically integrating deeper into the stack by building their own Consensus Network or even Trust Network. For example, an application may amass enough trusted users to bootstrap their own Trust Network, therefore reducing their expenses. Rather than paying gas to the Ethereum network so their transactions are included, they can just include transactions themselves.
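Of these ideas, user-staking lends itself to a quick numerical sketch; the function name and the reward and APY figures below are illustrative assumptions, not any specific protocol's parameters:

```python
def user_staking_balance(engagement_rewards: float, apy: float, years: int) -> float:
    """Toy user-staking model: tokens earned for engaging with the app
    are re-staked into the protocol and compound at the staking APY."""
    balance = engagement_rewards
    for _ in range(years):
        balance *= 1 + apy  # yield accrues on the re-deployed tokens
    return balance

# 100 tokens earned through usage, restaked for 2 years at a 5% APY
assert round(user_staking_balance(100.0, 0.05, 2), 2) == 110.25
```

The point of the model is the incentive alignment: the more a user engages, the larger their stake in the protocol's success becomes.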
Within the last chapters, we highlighted several times that the proprietary data layer of Web2 does not exist in the same fashion in the Web3/Crypto space. Furthermore, we explored the key reason for that - namely that all data is by definition open-source and cannot be made proprietary. But there is another reason which makes the collection of user data impossible - Privacy in Web3.
Privacy in Web3
Taking inspiration from Antonio García Martínez, we created an analysis to show which type of information is public or hidden in Web2 and Web3. In Web2, the users' online actions (browsing history, financial holdings, etc.) are tracked by the entity the customer is using. Usually, these entities keep the “action” information hidden on their own proprietary servers. Although hidden, users need to trust the entity that there is no leakage, hack or sale of that information. In addition, the users' “identity” information is also known to the corporation, and most of this information is shared with 3rd party providers. All information on “identity” and “action” is known to the entity whose services the consumer is using.
In Web3, data privacy operates differently. In order to interact with Web3/Crypto, one needs a public Crypto address (0x03824..232), which can be created by installing a software wallet (i.e.: MetaMask - less secure) or a hardware wallet (i.e.: looks like a USB stick - more secure). After choosing a pin-code, the user receives a recovery phrase - 12-24 words the user needs to remember in case they forget the pin-code or lose access to the wallet. Through that process, a unique public Crypto address (0x03824..232) is created. As an analogy, one can think of a post-box labeled with a random number rather than a name. Creating a new address can be done seamlessly within a few seconds. Once the user participates in a transaction, the user's public address will be forever affiliated with that transaction. However, the user's real identity is not linked to the public address.
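The wallet-creation flow can be sketched as follows. This is illustrative only: real wallets derive keys from the BIP-39 seed phrase via secp256k1 and Keccak-256, whereas here sha256 merely stands in for that key-to-address mapping:

```python
import hashlib
import secrets

def new_wallet() -> tuple[bytes, str]:
    """Illustrative wallet creation: a random private key and a derived
    pseudonymous address (sha256 stands in for the real derivation)."""
    private_key = secrets.token_bytes(32)  # never leaves the wallet
    address = "0x" + hashlib.sha256(private_key).hexdigest()[:40]
    return private_key, address

_, address = new_wallet()
_, other = new_wallet()
# A fresh pseudonymous address, unlinked to any real-world identity
assert address.startswith("0x") and len(address) == 42
assert address != other  # creating new addresses is seamless and cheap
```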
In order to transfer money into their wallet (= on-ramp), users have several options, ranging from bank transfers to Bitcoin ATMs or Crypto exchanges. Once money has been transferred to the user's wallet (soft wallet or hard wallet), the user automatically holds ownership over those assets.
So what is private and what is public? In general, every online action the user takes is forever affiliated with the public Crypto address. At the moment, most Web3 transactions are of a financial nature; however, use cases are rapidly expanding to gaming, social media, science, education, etc. As a result, anyone can check a public address and see its transactions.
As actions are automatically affiliated with the public address, one can also assign achievement badges or membership passes to a public address. For instance, if a user is part of an online gaming club, the user is able to receive achievement badges in the form of digital collectibles (= NFTs) which show their level of expertise. While in Web2 users can pretend to have achieved something, in Web3 digital collectibles can be used as proof tokens. Although all actions are related to the public address, the real identity, the email, the home address, the cookies and the device ID are completely hidden - at least in theory. Through Blockchain's new privacy model, Crypto has given users the choice to define their own level of privacy.
This means, as the user’s identity is - at least in theory - completely hidden, corporations are unable to create the link between online actions and the offline identity. This makes it impossible for Big Tech to build a proprietary data layer around the user’s online persona. Automatically, they lose their business model defensibility, which makes it easier for new entrants to build their own customer base.
Why anonymised traceability matters
In the past years, we have had dozens of conversations on this topic and usually the same questions come up. So, why is it important to be able to publicly see the online transaction history, yet not being able to see the identity of the user?
Don't trust, verify! In the decades leading up to 2008, thousands of people directly (or indirectly) invested ~$60bn into the advisory business of Bernard L. Madoff Investment Securities. Each month, the business sent its customers a physical print-out showing their transactions and returns. What the customers did not know was that the advisory business never actually invested their money - it ran a Ponzi scheme and forged all the trade and transaction documents. Was there a way for investors to find out? The truth is, they could not have discovered by themselves that the money was never invested and that the transactions never happened, because there was no way for customers to verify the transactions for themselves. Rather than verifying, they had to trust Bernard Madoff.
Through the open nature of the Blockchain database, one can track all transactions on-chain. Over time, more and more previously unknown public addresses of big financial institutions are being labeled (i.e.: 0x03785..2832 belongs to Binance). This allows users to verify transactions, which over time creates a new, open financial system. As more financial institutions move on-chain (= use the blockchain to trade and interact), a new dawn of financial transparency is upon us. Humans have lied in the past, they lie in the present and they will do so in the future - that is why a verifiable, open and unbreachable database like the Blockchain is transforming how we understand the concept of trust: trust which can be verified.
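The “don't trust, verify” principle boils down to recomputing claims from public data; a toy ledger makes the contrast with the Madoff case concrete (addresses and values here are invented):

```python
def balance_from_ledger(ledger: list[dict], address: str) -> float:
    """Recompute a balance from the public transaction history instead
    of trusting a printed statement - 'don't trust, verify'."""
    balance = 0.0
    for tx in ledger:
        if tx["to"] == address:
            balance += tx["value"]
        if tx["from"] == address:
            balance -= tx["value"]
    return balance

# A toy public ledger: client deposits into a fund's labeled address
ledger = [
    {"from": "0xclient1", "to": "0xfund", "value": 60.0},
    {"from": "0xclient2", "to": "0xfund", "value": 40.0},
]
# If the fund claims the money was invested, anyone can check whether it
# ever actually moved: no outgoing transfers means it never was deployed.
assert balance_from_ledger(ledger, "0xfund") == 100.0
```

Madoff's customers had no equivalent of this function: the only record of their "transactions" lived on paper the fund itself printed.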
This shift is already visible today - seen in the case of Genesis, a financially distressed brokerage and lending company. Due to the wisdom of the crowd, its public addresses are known, so the community is able to see what is happening behind the scenes.
Open Data leads to revolutionary Business Models
Big Tech platforms expand their reach by locking users into their proprietary interfaces. For example, in order to see which information is transacted (via posts) on the Instagram or Facebook “platforms”, users can only access that information via the respective mobile applications. As the user data is locked inside proprietary data servers, users have no alternative but to use the respective applications. This structure is fundamentally different in Web3/Crypto.
In Crypto, all data is public and stored on the public blockchain. As a result, public data can be accessed through many different user interfaces and applications. So what happens if an application charges high fees or offers a bad UI/UX - in short, does not provide enough value to its users? As the code and the underlying data are open source, nobody can keep user data hidden in proprietary data servers. This means that anyone can easily recreate the application and change the UI/UX or the revenue model. Sharing the infrastructure and data lowers the cost of innovation (i.e.: build an MVP and find Product-Market Fit before raising >$10m in funding rounds) and eliminates data monopolies. This, in turn, allows applications to go to market faster, thus building more revolutionary products in quicker succession. While launching applications and products in the Web2 world has become increasingly expensive, Crypto/Web3 offers a fertile ground for innovators. This open data approach has attracted an increasing number of developers and operators in the past years. And increasingly, more investors understand how open data and infrastructure provide a fertile ground to launch ground-breaking projects.
Going forward, open source data, open product code and the ability to launch new projects at a low cost will lead to more competition. As a result, to fend off competition, we will also see the emergence of revolutionary Web3 business models. With more compelling business models and better products, we see no reason why Web3 projects should not be able to compete head-to-head with dominant Web2 businesses.
For instance, the standard Web3 business playbook issues tokens to early users. This allows early users to profit from the success of the company - a methodology completely unheard of in Web2. Although Web2 networks like Uber, Instagram or Facebook profited significantly from their early users who promoted the respective platforms, the financial reward got paid out to their investors. This is in stark contrast to Uniswap, a Decentralised Exchange (DEX) which issued tokens to its early users and liquidity providers (LPs), rewarding them for their role in growing the network.
6. Why People should actually care about Crypto
Crypto/Web3 is difficult to understand. However, this does not mean that it will not have a similar effect as Web2 - which itself is only understood by around 20% of Americans. One of the reasons why Crypto/Web3 is difficult to understand is that the average user cannot really see its impact. While Web2 was primarily a front-end revolution, making it easier for users to interact with each other (read and write on the Web, i.e.: read the news, write your blog), Web3 is mostly a back-end revolution (read news, write your blog, own your data/tokens/digital property rights). In addition to many new technical (blockchains, smart contracts, token design) and practical concepts (property rights, new privacy assumptions), Crypto also combines computer science, economics, philosophy, business and politics. Crypto is also difficult to understand because the full impact of its innovations and potential use cases is not yet visible today. But then again, we doubt that the inventors of the transistor would have believed that their invention would allow computers to conquer the world.
Even today, Crypto has clearly defined use cases which can make the world a better place. However, the use cases differ vastly depending on where and how users live. According to the NGO Freedom House (2022), out of a total population of 7.8 billion people, only 20% (1.5 billion people) live in free, democratic countries, while almost 38% (3 billion people) live “not free” inside repressive regimes (methodology here).
Crypto in Not/Partly Free Countries
Taking inspiration from Garry Kasparov's blog, imagine the situation of dissidents in Venezuela: The government controls access to financial services (banking, loans, ATMs, etc.) through state institutions. Only those loyal to the government have access to resources, while dissidents are locked out. Everyone who speaks out loses their privileges. As the government knows the identity of dissidents, it can link their identity directly to their digital footprint. Therefore, the government is able to monitor every digital and physical move dissidents make and to track all of their financial transactions. In addition, the government controls monetary and fiscal policy, following an unsustainable economic agenda, which leads to high inflation. So how do dissidents manage to live their lives? What can they do to speak out without being oppressed as a result? Well, pre-Crypto, they could primarily rely on cash transactions and on in-person protests. Cash is inefficient as it relies on face-to-face transactions, and large-scale in-person protests are difficult to organize when the internet is censored. Through the emergence of Crypto, some alternatives have emerged. For example, Venezuelans can use the Bitcoin blockchain as a public, open payments infrastructure to send BTC. And Venezuelans can use Ethereum to express their opinion online while being sure that the government is not able to track their identity - trusting the strong privacy assumptions of Crypto (see Exhibit 24). In addition, they can also access DeFi protocols, escaping triple-digit inflation by purchasing stablecoins (= assets that are pegged 1:1 to the USD or EUR).
Venezuela might be an extreme example, but what about remittance payments to Bangladesh, Nigeria or Mexico? Instead of going through Western Union and paying 7-10% commission, users can rely on Ethereum when transacting money and only pay $0.1 - $2 commission to the network. What about trusting banks in Russia, Vietnam or Ukraine? Instead of depositing money in local banks, Crypto enables users to self-own assets without the need to rely on 3rd parties. What about escaping inflation in Turkey? With DeFi, users can access the global financial market and access stable USD. What about multi-million dollar donations into war-torn regions? For example, in the first month of Russia’s war against Ukraine, Ukraine received almost $70m through Crypto payments infrastructure.
In his blog, Garry Kasparov continues to explain that, “as we cannot change human’s nature of oppression, we need to create instruments and institutions that promote freedom and give ordinary people all around the world the chance to escape totalitarian control”. But what are these “instruments and institutions” that need to be created? These are some of the “instruments and institutions” that have been created in Europe, the US and other “free, democratic countries”:
Strong property rights. In his book “The Mystery of Capital”, Peruvian economist Hernando de Soto explains how strong property rights led to an economically strong and free society. Strong property rights mean that neither public nor private actors can simply take property away from the rightful owner. The basic idea is that owning their property motivates people to develop it further, as their incentives are clearly aligned. As property rights are fragile in the developing world, businesses remain under-capitalized.
Free speech backed by strong rule of law. Pretty self-explanatory. As free speech is enshrined in our constitutions, one does not go to prison for voicing an opinion.
(Relatively) clear walls of separation between money and state. Although occasionally disputed, the US and many countries in Europe have a (relatively) strict separation between elected politicians and central banks. This means that the stability of money (= inflation and purchasing power) does not directly lie in the hands of politicians who can manipulate fiscal and monetary stimulus to their advantage. The situation is different in countries such as Venezuela or Turkey, where Central Banking independence is highly disputed, leading to out-of-control inflation.
Understanding which institutions have helped Europe, Canada, Australia, New Zealand and the US to build a free and open society allows us to gain deeper knowledge on Crypto and how it can help citizens of non/partly-free countries to achieve the same.
Strong property rights through Crypto. Crypto relies on a public, distributed, append-only database (aka the blockchain) which provides transparency and traceability. Those properties allow for the simple management of property rights. For example, the blockchain can be helpful to prove ownership of off-chain and on-chain assets by linking the token (= asset on the blockchain) with the public address of the user.
Free speech through Crypto. As discussed above, Crypto’s new privacy model splits the real identity from the public address. Therefore, it is very difficult to link a public address back to the real identity of the user. This gives users the security to engage in free speech without the fear of persecution.
Clear walls of separation between money and state in Crypto. Most Crypto networks are decentralized and have their “set-in-stone” monetary and fiscal policy based on code. This means that no single institution can change the policy and print money for their political and financial gains.
Clearly, there are some fundamental use cases emerging in developing markets. However, more importantly, Crypto networks enable non/partly free countries to create institutions and instruments to build democratic and free societies.
Crypto in Free Countries
Most of the institutions and instruments mentioned above already exist in free countries. But then again, who would have thought that Canada would freeze the bank accounts of protesters, violating freedom of speech principles? In general, Europeans and North Americans are able to trust banks with their deposits as they are (usually) protected up to €100k by the state. But then again, would the Greeks have expected that their ATMs stopped working during the Eurozone crisis and they could not access their money? Payments infrastructure in the developed world is relatively cheap and high inflation (>5%) is rarely an issue. But then again, who would have thought that inflation in some EU countries would hit >20% in 2022?
Although most of these examples are tail-risk events, they can always occur. Better safe than sorry: these events highlight that the principles of self-custody of assets and new privacy assumptions also matter in free, democratic countries. In addition, there are other structural inefficiencies that can be solved through blockchains and their applications.
Disintermediating Financial Functions
To understand how Crypto can impact the financial markets infrastructure, we explore a simplified example: The current financial system is built around closed-up databases (= ledgers linking identities with assets or liabilities). Each bank is a closed-up database, just like each broker, each stock exchange and each settlement house.
For instance, for a money transfer from Austria to Australia, the money moves through 6-7 different banks and their proprietary databases, thus taking 4-5 days to arrive. Each of the banks takes some transaction fee, which leads to high processing costs for users. Replacing all the different databases with one single decentralized database (= the blockchain) would not only lead to faster transaction times (1 - 25 seconds) but equally reduce transaction fees ($0.1 - $2). This process of shortening the financial supply chain is called disintermediation of financial functions and would be beneficial for all members of the financial system - from retail traders to big institutional investors. For investment banks alone, this disintermediation could lead to $12 billion in annual cost savings. Concretely, one can transfer $100m in ETH or any other asset from one account to another while only paying $0.1 - $2, which would be a commission rate of ~0.000002%.
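To make the fee difference concrete, here is a toy calculation in Python. The six hops, the 0.1% cut per intermediary and the $2 network fee are illustrative assumptions, not measured figures:

```python
# Toy comparison of transfer costs: a multi-bank correspondent chain, where
# each intermediary takes a proportional cut, vs. a single shared ledger
# charging a flat network fee. All numbers are illustrative assumptions.

def correspondent_cost(amount, hops=6, fee_per_hop=0.001):
    """Amount arriving after each of `hops` banks takes an assumed 0.1% cut."""
    for _ in range(hops):
        amount -= amount * fee_per_hop
    return amount

def ledger_cost(amount, network_fee=2.0):
    """Amount arriving after a flat network fee (assumed $2), size-independent."""
    return amount - network_fee

transfer = 100_000_000  # $100m
print(correspondent_cost(transfer))      # ~ $99.4m arrives after 6 cuts
print(ledger_cost(transfer))             # $99,999,998 arrives
print(2.0 / transfer * 100)              # flat-fee commission rate in percent
```

The flat network fee is what makes the commission rate shrink as the transfer size grows, which is the point of the $100m example above.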
Improving Asset Securitisation through Software Tokenization
Crypto can also be used to increase the efficiency of the securitisation process. Securitisation is the administrative process through which certain types of assets are pooled together before being repackaged into interest-bearing securities - basically making assets tradeable.
Imagine the case of a German eCommerce SME, which aims to raise €3m in debt backed by its inventory. Located in rural Germany, it barely has access to the global financial markets and has to rely on 3-4 local banks in its city. One of them offers €3m at 9% APR. As banks aim to move the risks off their books, the bank would pool the €3m with 99 other SME loans and create a new €300m special purpose vehicle (SPV). By pooling the assets with other borrowers, the SPV is more diversified, allowing the bank to sell the pool’s Collateralized Debt Obligations (CDO) at 5% APY to other investors (= paying investors 5% interest). This represents a 4% spread for the bank. Although this sounds like a great deal for the bank, behind the whole securitisation process there are dozens of other service providers (rating agencies, trustees, reporting agents, paying agents, custody agents, etc.) handling the large paper-trail and establishing security mechanisms to decrease the risk between all of the different actors. All of those service providers earn a commission, which means that the SME has to pay a higher APR. However, these are not the only inefficiencies in the paper-based, multi-party process:
No access to the international financial market. Financial markets for SMEs tend to be highly nationalized. This means that rather than accessing a more liquid financial market, which would allow the SME to borrow on more favorable terms as there is more capital supply, they are limited to smaller, localized liquidity. Due to lower liquidity, they are forced to pay higher interest rates. Allowing SMEs to access the international financial market would allow them to reduce borrowing costs.
Expensive securitisation process due to middlemen. Current securitisation processes rely on 10-15 3rd-party service providers who are in charge of handling the large paper-trail and establishing security mechanisms to decrease the risk between the different actors. As these processes operate highly inefficiently, the servicing costs are relatively high. As each provider asks for a commission for their services, lenders and borrowers are required to pay higher fees, making the financial market increasingly inefficient.
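The spread economics of the SME example above can be sketched in a few lines. The 9% borrower APR and 5% investor coupon come from the text; the number of service providers and their ~0.10% annual commissions are hypothetical assumptions:

```python
# Back-of-the-envelope securitisation economics for the SME example.
# Borrower APR and investor coupon come from the text; the middlemen
# fee schedule is a hypothetical assumption for illustration.

pool_size = 300_000_000          # 100 SME loans of ~EUR 3m each
borrower_apr = 0.09              # what the SMEs pay
investor_coupon = 0.05           # what CDO buyers receive
gross_spread = borrower_apr - investor_coupon   # 4% spread for the bank

# Hypothetical middlemen (rating agency, trustee, paying agent, ...),
# assumed: 12 providers, each charging ~0.10% of the pool per year.
middlemen_fees = 12 * 0.001

net_spread = gross_spread - middlemen_fees
print(gross_spread * pool_size)   # EUR 12m gross spread per year
print(net_spread * pool_size)     # EUR 8.4m left after middlemen
```

Under these assumptions, roughly a third of the bank's spread is consumed by service providers - costs that ultimately flow through to the SME's borrowing rate.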
Through Blockchain technology, Crypto is also able to increase the efficiency of this process. For example, an SME could access the global financial market through tokenization and real-world lending protocols like Centrifuge. Tokenization is the automated software process in which an on-chain asset (= token) is created representing an off-chain asset. Once on-(the block)chain, users can prove the universal ownership of the asset by affiliating the asset with their public address. Rather than proving ownership over assets on every database (banks, service providers), there is one public database to prove the ownership. Once the assets are tokenized via the Centrifuge “engine”, they are moved to the Centrifuge marketplace, through which interested parties can invest. The impact is multi-fold:
Giving lenders/investors and borrowers access to an international financial market. By reducing the barriers to entry to the financial markets for borrowers and lenders, the overall liquidity is increased. As liquidity is increased, APR (paid by the borrower) is reduced, which in turn allows companies to pay lower interest rates.
Automated tokenization process due to software. Rather than relying on several parties during the securitization process, enterprise customers can rely on a single party and their software, which significantly reduces servicing costs.
Securitisation is the backbone of our financial system. However, as securitisation is expensive due to the inherent inefficiency of proprietary databases, paper-based processes and a large number of middlemen, only big financial institutions can access it. This pushes smaller market participants out of the securitisation market. Through blockchain tokenization, Crypto reduces the cost of creating a publicly tradable asset. And through DeFi anyone can access those tokenized assets, which increases liquidity and reduces borrowing costs, which, in turn, is beneficial for the financial system.
Data Ownership through Crypto
Moving away from the financial use cases, there are also many use cases relying on Crypto’s self-custody elements. As discussed, a blockchain is a public database that cannot be changed retrospectively or manipulated by any single party. This means that if there is an asset on the blockchain and it is affiliated with your public address, there is global consensus that this public address holds ownership over that asset. Only the person/entity with the private key is able to transact the asset or conduct an action. Through this process, we create digital ownership based on a global consensus (i.e.: the whole network agrees that this belongs to this address).
Let’s walk through a hypothetical use-case: Imagine being the owner of a real Picasso. The local government, Sotheby’s, Christie’s and other auction houses agree that you own the art piece - a local consensus has been achieved. Each of these entities has a written document in their proprietary database that lists you as the owner. However, in case you want to prove to the whole world (and not just to 3-4 entities and their databases) that you are the owner, you would also create a digital certificate (or token) on a global, trusted database (the blockchain) to prove your ownership and the scarcity of the painting. In case someone tries to forge the Picasso, the prospective victim could see that the painting in question is actually forged and that the original belongs to you.
The basic idea is the following: One can take any digital or non-digital object, tokenize it and put the digital certificate on the blockchain to prove its scarcity. Once the token is affiliated with the public address on the blockchain, the address has ownership rights over the object. Ownership rights are achieved for the public address as only the holder of the private keys (for the public address) can move the asset. While this can be done for off-chain art (like a Picasso), it can also be used for off-chain assets like titles, ownership certificates or identities. However, one can also prove ownership for on-chain assets such as art (i.e.: NFTs), money (i.e.: ETH tokens), gaming assets (i.e.: Gaming Skin in World of Warcraft) or content (i.e.: TikTok videos). The point is that before blockchains there has never been a way to create digital scarcity and ownership around digital objects. This has been achieved through a trusted, open database secured and agreed upon by thousands of distributed participants all around the world.
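As a rough sketch of this idea, the following snippet models a token registry in which ownership is tied to a public address and only the matching private key can move the asset. The hash-based "key pair" is a toy stand-in for the elliptic-curve signatures real blockchains use; all names are hypothetical:

```python
# Minimal sketch of on-chain ownership: a registry maps a token id to the
# public address that owns it, and only the matching private key can move
# it. Real chains use ECDSA signatures; the hash here is illustrative only.

import hashlib

def pubkey(private_key: str) -> str:
    """Derive a stand-in 'public address' from a private key (toy scheme)."""
    return hashlib.sha256(private_key.encode()).hexdigest()[:16]

class TokenRegistry:
    def __init__(self):
        self.owners = {}  # token_id -> public address

    def mint(self, token_id, to_address):
        # A token id can exist only once: this is the digital scarcity.
        assert token_id not in self.owners, "token already exists"
        self.owners[token_id] = to_address

    def transfer(self, token_id, private_key, to_address):
        # Only the holder of the matching private key may move the asset.
        if pubkey(private_key) != self.owners.get(token_id):
            raise PermissionError("signer does not own this token")
        self.owners[token_id] = to_address

registry = TokenRegistry()
alice_key = "alice-secret"
registry.mint("picasso-cert-1", pubkey(alice_key))          # tokenize the Picasso
registry.transfer("picasso-cert-1", alice_key, pubkey("bob-secret"))
print(registry.owners["picasso-cert-1"])                     # Bob's address now
```

The difference to the Sotheby's/Christie's databases is that on a blockchain this registry is replicated and agreed upon by thousands of participants rather than held by one party.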
One of the biggest problems with Web2 social platforms is the location and ownership of content information. Even though the content is created by the user (= UGC = user-generated content), it is stored inside the proprietary databases of the social media platforms, which also own it. This allows the platforms to use your information and market it to 3rd parties. Owning data through blockchains allows creators to also be the owners of their content, giving them the option to monetise it in alternative ways of their choice. Long-term, blockchains and their underlying technology give users the ability to own their information and data. This will, in turn, lead to a re-creation of social media applications - something that was previously unthinkable due to a different database design. Due to the self-custody and ownership properties of the blockchain database, we believe that long-term its database model will “win” over traditional databases. This will not only attract different types of data but also larger sizes. Becoming the largest database will enable new, novel applications, allowing users to own gaming content, music, as well as social media content.
Illicit Activity on the Blockchain
In the past years, many questions have been raised regarding the illicit use of cryptocurrencies as means to launder money, trade illicit goods and services, and commit fraud. In 2021, illicit addresses received $14 billion (+79% YoY) over the course of the year, while total transaction volume grew to $15.8 trillion in 2021 (+567% YoY). This means that transactions involving illicit addresses represented just 0.15% of cryptocurrency transaction volume in 2021. In the past years, crime has become a smaller and smaller part of the cryptocurrency ecosystem. In addition, law enforcement’s ability to combat cryptocurrency-based crime is also evolving.
7. Why the Timing is looking increasingly attractive
Someone once mentioned that the worst thing that ever happened to Crypto was the fact that Bitcoin and other tokens went up by 100,000x. Rather than focusing on technology, price became the main focal point for the many stakeholders. However, although rising prices led to many negative aspects such as scams and get-rich-quick-schemes, it also had many positive effects: Rising prices led to increased publicity, which itself attracted many new participants, of whom many stayed and contributed in a positive way. And although 2022 was an annus horribilis for Crypto with the likes of FTX, Terra/Luna, Three Arrows Capital (3AC), Celsius and others, we believe more than ever in the bright long-term future of the industry.
Many of the scams were enabled due to the missing technological maturity, irrational exuberance of the market, and a lack of rationality of its actors. For example, a lack of economic understanding paired with social media echo-chambers and FOMO led to the rise and the subsequent collapse of the algorithmic stablecoin Terra/Luna. As Terra/Luna collapsed and erased ~$50bn in value, all the other dominos started to fall due to the high interconnectedness and high leverage of the whole Crypto ecosystem. Due to a lack of sound risk management practices, high leverage and the belief that we were in a “super-cycle”, Three Arrows Capital (3AC) ended up being liquidated. Due to fraud, a lack of basic risk management principles and high trading losses, the centralized exchange (CEX) FTX and its hedge fund Alameda Research ended up filing for bankruptcy, which impacted ~1m users.
Despite Crypto’s core values of self-custody, many users moved their assets to centralized exchanges due to the poor UI/UX and limited product usability of decentralized exchanges (DEX). In addition, due to Crypto’s inflexible wallet/self-storage solutions (the user needs to keep a 24-word phrase as a back-up) and the lack of decentralized on/off-boarding channels, many users preferred to give up self-custody for convenience. History never repeats, but it rhymes. While the fall of the centralized exchange Mt. Gox almost led to the downfall of Crypto in 2014, the fall of the centralized exchange FTX will have significantly negative consequences for the years to come.
Unfortunately, it seems that scammers are part of every developing industry - in Crypto just like in the early days of the internet. However, although early innovations are often perceived negatively - for example, the first open-source encryption was treated by the US government like warfare munition - problems get resolved, the stigma disappears and the technology gets increasingly adopted over time.
After Bubble Time, the Deployment Period begins
A commonly used framework for the adoption of technological revolutions can be found within Carlota Perez’ book Technological Revolutions and Financial Capital. In her book, she describes the connection between technological development and financial bubbles (based on past technological revolutions).
Broadly divided into two periods - installation and deployment - the cycle starts with the irruption stage where “new technologies burst into a maturing economy and advance like a bulldozer disrupting the established fabric, [...] before eventually transitioning into a full-blown frenzy as speculative capital pursues increasingly fantastical commercial applications”. Between 2019-2022, Crypto’s speculative frenzy was further fueled by extremely loosened financial conditions (due to monetary and fiscal stimulus), which led to unsustainable economics, scams and empty promises. With rising inflation and tightening financial conditions, the bull market came to a halt, “bringing financial capital back to reality [...] together with mounting social pressure, [this] creates the conditions for institutional restructuring. In this atmosphere of urgency many of the social innovations [...] are likely to be brought together with new regulation in the financial and other spheres, to create a favorable context for recoupling and full unfolding of the growth potential. This crucial recomposition happens at the turning point which leaves behind the turbulent times of installation and paradigm transition to enter the ‘golden age’ that can follow, depending on the institutional and social choices made”. Although it is impossible to predict any sort of timing and draw direct conclusions, the parallels between Carlota Perez’ model and the current momentum of the Crypto industry are undeniable.
In addition, there are clear similarities (from a macro as well as from an industry perspective) between the burst of the Dot.com Bubble 2002 and the burst of the Crypto bubble in 2022. The Nasdaq peaked at $7 trillion in March 2000 before crashing down 78% to $1.5 trillion in October 2002 - indicating the turning point between the Frenzy Period and the Synergy Period as described by Carlota Perez. Going forward, it would take the Nasdaq 14 years to reach that height again (August 2014). As of today, the Nasdaq has a market cap of ~$31 trillion (20x since the bottom in October 2002 and 4.5x from the top in March 2000). Separately, the Crypto market capitalisation peaked at $2.9 trillion in November 2021 before crashing down ~73% to ~$800bn. Although painful at the time, the burst of the Dot.com Bubble gave the market a much-needed cleansing which resulted in more sustainable business models, greater tech products and pushed the technology industry significantly forward.
When comparing the Dot.com Crash 2002 with the Crypto Crash 2022, we continue to see parallels. While pre-2000 it was typical to invest based on the innovation potential alone, after the Crash a working business model and a cash flow plan had to be in place. Over time investors became more sophisticated and focused more on scalability, monetization and future product roadmaps. Similarly to 2002, Crypto’s capital markets have cooled down as investors continue to demand more proof points from businesses in terms of traction and existing business models. As a result, Crypto businesses slowly adopt more sustainable business models with sound economic assumptions.
But just like the internet benefited from the speculative mania as key infrastructure (databases, server structures and high throughput software) was built, the Crypto industry will benefit from the creation of different L1s and consensus mechanisms, its strong DeFi rails, stablecoins and NFTs. Paired with increasing business knowledge and the inflow of new talent into the industry, this foundation will be highly beneficial long-term. In addition, it is important to note that some products, which were developed during the speculative frenzy of the past 2 years, have finally achieved product market fit. As quoted by The Block, “In 2022, stablecoins continued to be one of the growing handful of [Crypto] currencies that found product-market fit and broader institutional acceptance. Since the beginning of the year [2022], [...] the aggregate supply only contracted by 2.4% - from $143 billion to $140 billion. Across the board, the number of daily active users has remained the same across many blockchains and applications, while total adjusted on-chain volume on a blockchain, which is a proxy for economic throughput, reached $5.6 trillion between Bitcoin and Ethereum in 2022, a 32.5% decrease from the previous year. From an institutional side, we expect institutional adoption to continue in the upcoming years”.
Despite many institutional investors incurring big losses, there are many positive signs that they continue to invest in their Crypto capabilities. Quoting Coinbase, “a recent Institutional Investor survey suggests that investors believe crypto is here to stay, regardless of the poor price action in the short term or the unfortunate behavior of some bad actors [...], with many using this as an opportunity to learn and build for the future.”
Upcoming Regulations could provide additional Clarity
On the regulatory side, there is slowly more regulatory clarity emerging in the EU and North America. In the wake of the collapse of FTX, political and public pressure has increased, calling for more stringent regulation of the Crypto industry.
Many of the Crypto failures of the past years share certain commonalities like high leverage, insufficient risk control as well as unethical business practices. However, many of these characteristics are also seen in traditional finance and should not be interpreted as an indictment of blockchain technology or its potential impact on the world of finance.
In the European Union, its “Markets in Crypto Assets” (MiCA) Crypto Bill - a landmark legislation to unify digital asset rules across the EU - is expected to be put forward for parliamentary approval in April 2023 after first being delayed to February. The bill itself will not have an effect before Q4/2024 after applying a transition period of 18 months. The bill defines a crypto asset, provides four exchange licensing requirements and defines eligibility to issue tokens. Decentralized finance (DeFi) and non-fungible tokens (NFTs) are largely left alone by MiCA, with the EU claiming that further legislative packages down the road will tackle these sectors separately. In the US, it seems regulation is at an inflection point, with senators suggesting (in December 2022) they anticipate taking up revised legislation in 2023. For further information, we highly recommend Messari’s and Coinbase’s 2023 Crypto Market Outlooks.
Technological Breakthroughs
The industry has seen significant technological breakthroughs in the past years which will give rise to novel applications and real-world use cases over the coming quarters.
Scaling solutions (great thread can be found here): In 2021, high demand for Ethereum blockspace (= people wanted their transactions recorded on the Ethereum blockchain) led to gas fees of up to ~$200 (= basically a fee paid to the network in order to get transactions included in the blockchain). Theoretically, this highlighted Ethereum’s limited scalability; practically, it led to many smaller users being priced out of Ethereum’s applications. Both factors gave rise to Alt-L1s.
Rather than building new trust networks which is a cumbersome and expensive process, many new scaling solutions (i.e.: roll-ups) focused on leveraging Ethereum’s Trust Network. With $26bn in capital staked and secured by ~500k validators (i.e.: Alt-L1 network Solana has 3.5k validators), Ethereum is the most secure Trust Network and the most trusted Consensus Layer.
The high-level idea of roll-ups as a Layer-2 scaling solution is to remove a majority of the data load from the Ethereum mainnet. Roll-ups are smart contracts that hold a certain state (of the blockchain) in a compressed form. For example, the state can be that Paul has 3 ETH and Lisa has 5 ETH. If Paul makes a transfer of 1 ETH to Lisa, the new state will be that Paul owns 2 ETH and Lisa 6 ETH. As this is a roll-up, the transaction (the change of Lisa’s and Paul’s balances) is performed off-chain. This allows the roll-up to achieve much higher transaction throughput and reduce fees. Performed off-chain means that the new state of the roll-up is not computed by miners and validators (like on the Ethereum Trust Layer) but by other participants off-chain. Once the transactions are performed, the batch of compressed transactions plus a proof mechanism (fraud or validity proofs, discussed below) is sent back to the original mainnet. The original mainnet then includes the transactions in its blockchain and changes the state (of the blockchain).
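The Paul/Lisa example can be written out as a minimal sketch: balances live off-chain, a batch of transfers is applied, and only a compressed commitment to the new state is posted back. The hash-of-JSON "state root" is a simplification of the Merkle roots real roll-ups use:

```python
# Toy roll-up state transition: balances are held off-chain, a batch of
# transfers is applied, and only a compressed commitment (hash) of the new
# state is posted to the main chain. Purely illustrative.

import hashlib
import json

state = {"Paul": 3, "Lisa": 5}  # ETH balances held in the roll-up

def apply_transfer(state, sender, receiver, amount):
    assert state[sender] >= amount, "insufficient balance"
    state[sender] -= amount
    state[receiver] = state.get(receiver, 0) + amount

def state_root(state):
    """Compressed commitment to the state (real roll-ups use a Merkle root)."""
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

batch = [("Paul", "Lisa", 1)]            # Paul sends Lisa 1 ETH, off-chain
for sender, receiver, amount in batch:
    apply_transfer(state, sender, receiver, amount)

print(state)                              # {'Paul': 2, 'Lisa': 6}
posted_to_l1 = (batch, state_root(state)) # what gets published on mainnet
```

Note how the mainnet only ever sees the compressed batch and the short commitment - that is where the throughput gain comes from.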
As the computation happens off-chain, the Execution Client of the Ethereum mainnet is not able to download the data and re-execute the transactions. This is where fraud proofs and optimistic trust assumptions come in.
An Optimistic roll-up assumes that all transactions are valid by default, hence the name optimistic. L1 validators (= who usually re-execute all the transactions) do not do any calculation by default; they simply assume that the transactions are correct. This means that optimistic roll-ups simply act as a kind of notary, recording each transaction and posting the data on Ethereum. The posted data can later be reviewed by “watchers” to ensure that nothing malicious has happened. If a malicious transaction is found, the “dishonest” processor who “batched” the transactions is punished.
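The watcher mechanic can be sketched as follows, under the simplifying assumption that the state root is just a hash of the balance map: the sequencer posts a batch plus a claimed new state root, and any watcher can re-execute the batch and challenge a mismatch:

```python
# Sketch of the 'watcher' role in an optimistic roll-up: re-execute the
# posted batch and compare against the claimed state root. A mismatch is
# grounds for a fraud challenge. Names and the hash-based root are
# illustrative assumptions, not a real protocol.

import hashlib
import json

def state_root(state):
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

def execute(state, batch):
    state = dict(state)  # work on a copy of the pre-state
    for sender, receiver, amount in batch:
        state[sender] -= amount
        state[receiver] += amount
    return state

def watch(pre_state, batch, claimed_root):
    """True if the claimed root checks out; False means: submit a fraud proof."""
    return state_root(execute(pre_state, batch)) == claimed_root

pre = {"Paul": 3, "Lisa": 5}
batch = [("Paul", "Lisa", 1)]
honest_root = state_root({"Paul": 2, "Lisa": 6})
dishonest_root = state_root({"Paul": 2, "Lisa": 7})  # Lisa credited too much

print(watch(pre, batch, honest_root))     # True  -> no challenge needed
print(watch(pre, batch, dishonest_root))  # False -> watcher challenges
```

Because anyone can run `watch`, a single honest watcher is enough to catch a dishonest batch - which is the security argument behind the optimistic design.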
Another roll-up methodology is Zero-Knowledge (ZK) roll-ups. Rather than trusting that transactions are valid by default, ZK roll-ups operate under a “guilty until proven innocent” approach. Yet again, execution happens on a separate chain. Hundreds of off-chain transactions are bundled into a batch by a “prover”. The prover also generates a cryptographic validity proof, called a ZK-SNARK, which is included in the batch of transactions. Transactions are only accepted onto Ethereum after the ZK-SNARK is verified by the L1 validator. Intuitively, this works as follows: to determine whether a “prover” is telling the truth, the “verifier” checks the proof against a series of mathematical conditions - in general, the cost of verifying the proof is low.
On top of Layer-2s, there can also be L3s, which are application-specific roll-ups (i.e.: for gaming). Although the goal is to add scalability, the further roll-ups move away from Ethereum - the base Trust Layer - the less secure the transactions become. With roll-ups, the ecosystem has found a way to reduce gas fees and add scalability, which expands the use cases dramatically (i.e.: gaming and social media applications, where the number of information transactions is significantly higher than in, say, transfers of money).
Modular blockchains (great thread can be found here): We covered Modular Blockchains earlier in the report. Modular blockchains are blockchains that outsource at least one of the three core functions of a blockchain (Consensus/Settlement, Execution, Data Availability) to an external blockchain rather than handling every component locally. Thanks to the modular design, blockchains have become more flexible by design. This flexibility also allows modular chains to be easily created, mixed, or replaced independently within a modular stack. Just like Lego bricks, modular blockchains can be independently created for each use case. This flexibility means that for the first time in the history of blockchains, an application can pick its infrastructure according to its technological needs - and does not need to refrain from building more functionality because the underlying technology cannot support it.
Account abstraction (great thread can be found here): Ethereum’s upcoming account abstraction has been hailed as the game changer for Crypto’s UI/UX problem. Problems with Crypto’s UI/UX can be found everywhere - ranging from creating wallets to approving transactions.
For instance, creating a wallet (and storing the secret seed phrase) is completely counterintuitive for average Web2 users. Once users forget or lose their seed phrase, they also lose access to their coins - a security mechanism that is hard to accept for many Web2 users. A lot of other Web2 optionality - such as changing passwords, password recovery, 2FA - is not yet available.
Additionally, transactions are cumbersome, as the user often needs to sign and submit multiple sub-transactions (i.e.: to interact with a dApp, one often needs to sign 2-3 times). Imagine buying a coffee, yet having to sign three times - clearly there is room for improvement if Crypto wants to onboard additional Web2 users.
With the upcoming EIP-4337, wallets are receiving additional functionality. For example, it allows users to easily deal with lost seed-phrases, and gives them additional flexibility in choosing which token to use when paying network fees. The update also enables apps to become significantly more user friendly. For example, via an application linked to the Crypto account, users can seamlessly pay for their coffee.
In short, through account abstraction, Crypto’s self-custody and ownership are being merged with Web2/FinTech’s UI/UX and optionality. As a result of the upcoming account abstraction, we are going to see additional onboarding of retail users into the Crypto space.
The Merge & Shanghai update: Since its launch, Ethereum had relied on a Proof-of-work (PoW) consensus mechanism. In the PoW Consensus, members of the network (called miners) compete against each other to solve difficult mathematical puzzles. The puzzles are difficult to solve, but easy to verify once the correct solution has been found. Once a miner has found the right solution, they are allowed to build a block (= a new entry to the database). The miner then sends the solution to all other members, who verify that the solution is correct. If the solution is correct, the block is added to the blockchain and the miner receives a reward.
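The puzzle asymmetry described above - hard to solve, easy to verify - can be demonstrated with a toy proof-of-work. The difficulty here is deliberately tiny, and the scheme is a simplification of real mining:

```python
# Toy proof-of-work: find a nonce so the block hash starts with a given
# number of zero hex digits. Solving requires brute force; verifying takes
# one hash. Difficulty is kept tiny for illustration.

import hashlib

DIFFICULTY = 4  # leading zero hex digits required (real networks need many more)

def block_hash(data: str, nonce: int) -> str:
    return hashlib.sha256(f"{data}:{nonce}".encode()).hexdigest()

def mine(data: str) -> int:
    """Brute-force nonces until the hash meets the difficulty target."""
    nonce = 0
    while not block_hash(data, nonce).startswith("0" * DIFFICULTY):
        nonce += 1
    return nonce

def verify(data: str, nonce: int) -> bool:
    """A single hash suffices to check a claimed solution."""
    return block_hash(data, nonce).startswith("0" * DIFFICULTY)

nonce = mine("Paul pays Lisa 1 ETH")      # ~65,000 hashes on average
print(verify("Paul pays Lisa 1 ETH", nonce))  # True, after just one hash
```

Each extra zero digit multiplies the expected mining work by 16 while verification stays a single hash - that asymmetry is what lets the whole network cheaply check a miner's expensive work.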
Since 2016, Ethereum had planned to replace its PoW consensus mechanism with a Proof-of-Stake (PoS) mechanism. In PoS, validators stake capital in the form of ETH into a smart contract on Ethereum. This staked ETH acts as collateral that can be destroyed if the validator behaves dishonestly. The validators (= stakers) have two main tasks: firstly, they are responsible for checking that new blocks propagated over the network are valid; secondly, once in a while, they are randomly selected to create and propagate new blocks themselves.
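The random selection of block proposers can be illustrated with a toy stake-weighted lottery. This is a heavy simplification of Ethereum's actual RANDAO-based selection, and the validator names and stakes are made up:

```python
import random

# Toy model: a validator's chance of proposing the next block is
# proportional to its staked ETH (a simplification of Ethereum's
# actual RANDAO-based proposer selection).
stakes = {"validator_a": 32, "validator_b": 64, "validator_c": 32}

def pick_proposer(stakes: dict, seed: int) -> str:
    """Deterministically pick one proposer for a slot, weighted by stake."""
    rng = random.Random(seed)
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

# Over many slots, validator_b (holding 2x the stake of the others)
# should be selected roughly twice as often.
counts = {v: 0 for v in stakes}
for slot in range(10_000):
    counts[pick_proposer(stakes, seed=slot)] += 1
print(counts)
```

No puzzle-grinding is involved: the right to propose is allocated by chance in proportion to the capital at risk, which is where the energy savings of PoS come from.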
The PoS consensus mechanism offers several improvements, such as better energy efficiency and lower barriers to entry for stakers, which increases decentralization and security. Finally, the switch - titled “The Merge” - from PoW to PoS happened in September 2022. With “The Merge”, the original Execution Layer of Ethereum was “merged” with the “new” Proof-of-Stake Consensus Layer, replacing the “old” PoW consensus. After the Merge, the Execution Layer takes care of state storage and management, block production and smart contract interactions, while the Consensus Layer takes care of reaching agreement on new blocks and synchronizing the state across the network.
By merging its Execution Layer together with a PoS Consensus Layer, Ethereum eliminated the need for energy-intensive mining and instead enabled the network to be secured by staked ETH. From a technological perspective, the Merge made block time more predictable and added benefits to roll-up solutions. In addition, it allowed Ethereum to build additional scalability (which is easier on PoS than PoW) as well as enabling re-staking (see Eigenlayer) which is impossible under a PoW consensus.
From an economic perspective, token issuance (inflation) dropped significantly from ~3.5% per year to roughly zero (for comparison, Bitcoin has a current inflation of 1.7%), making ETH an “economically harder asset”. Even though issuance is significantly reduced, the network still manages to pay a 5-6% staking reward rate to its stakers. This is achieved through the following economic model: based on the last 30 days’ average, Ethereum pays 653k ETH/year to 500k stakers who staked 16.3m ETH, as a reward for providing security (~4% without MEV). With a total supply of 120.5m ETH in circulation, “rewarding stakers” adds a network inflation rate of 0.54%. However, users of the network also pay a network fee (= gas) so that Ethereum includes their transactions in the Blockchain. 30% of the fee is distributed to the block proposers, while the remaining 70% is destroyed (= burnt). Based on the last 30 days’ average, 758k ETH/year was burnt, a deflation rate of -0.63%. Combined, the overall inflation rate of the network is -0.09%, making the network deflationary. This is also highly beneficial for the “investment flows” into ETH as an asset: since the Merge, rather than issuing 1.6m ETH (worth $2.6bn) to the market, the network burnt 9k ETH (worth ~$15m).
This means that under the PoW model, Ethereum would have generated $2.6bn of selling pressure - i.e.: the market would have needed to buy $2.6bn of ETH over the past 140 days just to keep the ETH price stable. Under the PoS model, the dynamics have changed completely: the market could have absorbed $15m of net selling over the same 140 days while keeping the ETH price stable. A fundamental improvement in the market flow dynamics for ETH as an asset.
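The issuance and burn figures above can be reproduced with simple arithmetic (all inputs are the 30-day averages quoted in the text):

```python
# Reproducing the net inflation arithmetic from the text.
total_supply_eth = 120_500_000           # circulating ETH supply
staking_rewards_eth_per_year = 653_000   # paid to stakers for securing the network
burned_eth_per_year = 758_000            # the share of network fees destroyed

issuance_rate = staking_rewards_eth_per_year / total_supply_eth
burn_rate = burned_eth_per_year / total_supply_eth
net_inflation = issuance_rate - burn_rate

print(f"issuance: +{issuance_rate:.2%}")  # ~ +0.54%
print(f"burn:      {burn_rate:.2%}")      # ~  0.63%
print(f"net:      {net_inflation:+.2%}")  # ~ -0.09% -> deflationary
```

Whenever yearly burn exceeds yearly issuance, `net_inflation` turns negative and total supply shrinks - the condition under which the network can pay stakers out of fee revenue while remaining deflationary.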
Compared to other networks that rely on increasing token inflation to incentivise people to stake (= which is uneconomical), Ethereum’s network has enough usage to pay stakers through their revenue (= network fees) while also becoming a deflationary asset. A revolutionary new economic model.
At the moment - in order to stake ETH to secure the network - one needs to lock the ETH into a contract and cannot withdraw it - basically a one-way door. Despite many conspiracy theories, this was a deliberate decision by the developer team in order to reduce the technical difficulty of “The Merge”. Using an analogy, switching from PoW to PoS while keeping the network running has the same technical difficulty as changing an airplane turbine mid-flight. The upcoming Shanghai update (end of Q1/2023) eliminates this constraint and allows users to withdraw their “locked” ETH, so the risk of being “locked in forever” disappears. Mid/long-term, eliminating this risk will lead to more retail investors and institutions accessing ETH as a sustainable economic asset.
Eigenlayer and staking: As mentioned above, Eigenlayer allows Ethereum stakers to secure additional middleware in the ecosystem with their economic power. From a technological perspective this means that more innovation (Oracles, Bridges, Consensus Layers) can be developed with reduced costs as middleware service providers can use Ethereum’s Trust Network rather than bootstrapping their own. For Ethereum stakers this means that over time, they will be able to increase their staking yield (~2-3x) by being paid additional fees by middleware providers. Avoiding the need to bootstrap their own trust network will not only lead to an increasing amount of middleware applications but will eventually lead to the creation of novel applications on top of an improved infrastructure stack.
Continued Institutional Adoption
Within the last bull-market, the industry has seen the increased adoption of Crypto/Web3 technologies within traditional companies, especially on the consumer and the financial technology side. On the latter, many institutions such as J.P. Morgan, Goldman Sachs and Morgan Stanley are exploring the underlying blockchain technology. The French bank Société Générale even built their own unit (SG Forge) to explore the link between TradFi (Traditional Finance) and DeFi (Decentralised Finance). In one of their projects, SG Forge even received an on-chain $30m loan from a DeFi protocol. The loan from MakerDAO to Société Générale/SG Forge was backed by French home mortgages which were tokenized and put onto the blockchain. In addition, more banks are exploring stablecoins, such as the National Australia Bank, which is planning to issue an Australian Dollar stablecoin on the Ethereum network. Many wealth managers, such as Fidelity, BlackRock or State Street, are already offering their clients the opportunity to invest into Crypto through their proprietary investment platforms.
On the enterprise side we have seen additional adoption during the past cycle. Nike not only acquired the Web3 studio RTFKT in 2021 but also announced plans to launch a new Web3 platform called .Swoosh that will offer Polygon-based NFT products which will go live in 2023. Adidas, Prada, Tommy Hilfiger, Tiffany and Balenciaga are also planning to utilize the Metaverse to build closer connections with their clients. Starbucks will use NFTs to increase customer loyalty, while Reddit issued NFTs to heavy platform users. While Meta is still building its Metaverse, Instagram has started to showcase digital collectibles (aka NFTs). Further companies that are exploring NFTs and other Crypto applications are Ebay, Disney, Stripe and Adobe - just to mention a few. According to Coinbase’s institutional investor survey, “despite current market conditions, the overall sentiment towards digital assets remained positive, with 72% supporting the view that digital assets are here to stay (86% among those currently invested in crypto and 64% among those planning to invest). Among the top reasons to invest, participants pointed toward goals of higher returns, accessing yield opportunities, investing in innovative technology, and having the potential for long-term appreciation.”
A Maturing M&A market
One of the signs of any maturing market is the development of a vibrant M&A market. A maturing exit market gives investors and other stakeholders the opportunity to sell their investment. This exit flexibility traditionally leads to a higher willingness to invest in the market in the first place. Looking at the past years, one can clearly see that a vibrant M&A market has developed within the Crypto industry.
Even though 2022 has been one of the most difficult years in the industry, we believe that investment timing looks highly favorable. A lot of bad actors and scammers have left the industry, with those remaining actively working towards its long-term success. As money is more expensive due to higher interest rates, scams are less prevalent, and due to the efforts of the Crypto community, malicious actors are getting sidelined. At the same time, many of the traditional projects (in TradFi and DeFi) are being tested by difficult market conditions, setting them up for long-term success. Finally, the market is moving back to Crypto ideals of self-custody and self-sovereignty. Prices are heavily down ~75%, while long-term assumptions have improved significantly. This not only builds a highly compelling case for the industry in the long-run, but also allows long-term focused investors to buy stakes in great tech companies at great prices.
8. How Crypto is changing Organizational Structures
With the rise of Crypto, a new organizational structure has emerged. In addition to traditional hierarchical companies, new decentralized networks have emerged. Rather than having a central stakeholder on top of a top-down structure that coordinates activity between stakeholders, Crypto networks are decentralized systems that are coordinated by a native blockchain-based token asset. As a result, ownership and decision power is decentralized. Their widespread usage as means of ownership, governance and coordination tools transforms tokens into commodities, moving away from the traditional securities model.
Relying on nodes and participants all around the world with little hierarchical structure allows Crypto networks to be significantly more resilient. While traditional companies can vanish and become insolvent, Crypto networks cannot simply vanish. As long as there are a few computers validating the network, executing the code and powering applications, the network continues to operate. As a result, Crypto networks are not built for decades but for centuries (i.e.: Ethereum’s monetary model).
Reformed Startup Governance
Due to the decentralized nature of Crypto networks, companies operating in the space have transformed their hierarchical structure towards a more community focused structure. In practice, this means that everyone can get involved in the community and proposals will be judged based on merit rather than reputation or invested capital. The underlying idea is that broader community engagement yields better results by engaging in a wider discussion rather than relying on the opinion of a few individuals behind closed doors.
On the upside, wide community engagement yields better ideas. On the downside it might limit a network's scalability if there are no proper governance structures in place. Once again - just like in the Blockchain Trilemma - the trade-off between decentralization and scalability becomes apparent, highlighting the need to find the right balance between both.
Usually, the community has a strong say on a strategic level. Depending on the organization, the community might even get involved in more granular operations such as product development, Go-to-Market or tokenomics. In general, execution tends to be semi-centralized (multi-signatures, teams and hiring, etc.). Due to the young nature of the space, the industry continues to explore different governance structures.
Apart from a strong community, Web3 also actively involves users. Close user relationships shorten the “Product-Feedback-Loop” and allow projects to build more meaningful products. Through the distribution of tokens to early users (“airdrop”) and the ability to stake the token (= earning more tokens as interest), projects reward early users. Through smart user acquisition strategies, projects are able to kick-start their growth, build better products and establish a heavily engaged community.
In general, the Web3 industry incentivises and rewards users more than Web2 companies do. Being involved early in the right project as a user or community member can make a financial difference for many users. This is in stark contrast to Web2 companies, where users are neither remunerated nor have the chance to invest in projects pre-IPO due to (in our opinion) poorly designed investor protection laws.
Post-Enron, stringent investor protection laws (Sarbanes–Oxley Act) drastically increased the minimum administrative requirements for companies to go public. Although meant to protect retail investors from fraudulent companies, this led to companies delaying their IPOs. Rather than going public early as fast-growing tech companies, those companies preferred to stay private and would only go public later in their lifecycle, when growth had slowed down. As a result, the fruits of growth were earned by professional investors such as Venture Capitalists and Private Equity Funds. Most retail investors were unable to be deemed “qualified investors”, as that would require a net worth of >$1m or an annual income of >$200k. As a result, retail investors were unable to participate in the private investment market, missing out on fast-growing private companies, which further increased inequality within our society. Through early token distribution to early users combined with a liquid exit/trading market, Crypto bypasses the traditional system and (in our opinion) further democratizes capitalism.
Through the easy distribution of tokens and their de facto usage as a membership ticket, anyone can become a community member. Rather than needing to be a large investor or an activist hedge fund, everyone can voice their opinion. As barriers of entry are being reduced, more people are able to be involved in a project. Through good tokenomics, a project is able to build strong community engagement and align the incentives between the community and the project. All of this leads to more engaged communities leading to better ideas and faster feedback loops. This in turn, leads to better projects and networks. We believe that community-engaged structures are destined to overtake the traditional, hierarchical company structure as we know it - simply, because people have better ideas if they collaborate.
New Founder Profile
The changing hierarchical structure of Web3 startups has a significant impact on the founder and management profile within Web3 companies. Instead of traditional hierarchical top-down management, a new open, transparent and collaborative management style has emerged. This management style aims to involve the community in the strategic decision making as well as the day-to-day operations. Aligning incentives is done via well-designed token incentive mechanisms.
Working with long-term engaged communities will lead to higher competitiveness and also support the price performance of the token as more people want to hold it. Given the nascent state of the industry, governance structures remain in flux. The objective is to find the right balance between fast, scalable execution and decentralized governance. Within the past 24 months, vague roles and shadow hierarchies were replaced with clearer structures, including board governance and new meritocratic contribution models. Companies have also slowly added hierarchical elements, such as the role of middle-layer discussion/community facilitators, in order to guide proposals and initiatives more efficiently.
Strong, highly engaged communities are the key to the long-term success of projects. Once a community is disengaged and feels unheard, their loyalty decreases and the community slowly disperses. This leads to slower project growth and lower token prices as demand for the asset evaporates. As the community leaves, developers leave, which leads to slower project progress. However, as the code - including the publicly stored data - is open source, the project can be re-started (= forked) with new operational changes implemented. With, for example, better-designed tokenomics and a re-engaged community, the forked project can offer a better set-up than the original. As a result, the original project gets increasingly abandoned, with users flocking to its forked competitor.
In order to avoid this risk and achieve strong community engagement, we are looking for founders with strong leadership skills yet low ego, paired with excellent community management skills. Therefore, we favor transparent and collaborative management styles with founders prioritizing the values of decentralization over the desire to get rich quick.
New VC Profiles
The changing hierarchical structure of Web3 startups also impacts the role and structure of Venture Capital Investors. Traditionally, VC investors have four key tasks: Finding startups, Investing in startups, Managing startups and Exiting startups.
The traditional Venture Capital investor
Although the ancient concept of “risk capital” dates back thousands of years, the traditional Venture Capital model was established in the late 1950s. Since the burst of the Dot-com Bubble in the early 2000s, the traditional Venture Capital model has relied heavily on a hierarchical corporate structure with ~5-10 partners on top, who are in charge of fundraising, investing and managing startups.
Due to their wide focus area, they are usually supported by several dedicated teams, such as the Investor Relations team or the Investment team. Usually, the Investment team is either sector-focused (on broader industry verticals) or stage-focused (Pre-Seed, Seed, Series A). From a process perspective, once a startup has been discovered and vetted, the investment decision is usually made by the senior partnership behind closed doors, relying on opaque decision-making processes. Depending on the fund, the decision process may rely on the input of the focused investment team.
How low interest rates and Web3 have changed the traditional VC game
Within the past years, the VC industry has undergone several changes - especially on the Web3 side:
Competition through other VCs: The VC industry itself has grown 10x in the past 12 years, from $47bn of globally invested capital to $481bn in 2022. Dovish Central Banks pushed institutional investors to invest in private markets, which led to rising valuations. Due to low interest rates, public market valuations also grew rapidly, also increasing private “on-paper” valuations. As a consequence, more and more VC Funds appeared (i.e.: Solo-GP VCs, State-funded VCs), which led to increased competition for Tier 1 startups. Suddenly, offering “just” money was not good enough to win deals - VCs had to provide additional value add services.
Competition through Retail Investors: In addition to competition from other VC Funds to invest in the most promising Web3 startups, competition also emerged from private investors. After Ethereum’s ICO in 2014, many Web3 companies started to offer tokens for their respective Crypto networks. While previously only professional investors were able to invest in private companies, suddenly retail investors were also able to invest via tokens, further increasing competition for VCs.
No “information barrier to entry”: In Web2, information on the inner workings of startups is inaccessible to retail investors. However, in Web3, companies rely on open community forums, regular community calls and an open code-base. This allows anyone to understand the inner workings of a project before becoming an investor. While VCs in Web2 profited from an information advantage, Web3’s open community approach has lowered the “information barrier” to entry.
Fast technological innovation within Web3: Within Web2, the underlying technologies and business models tend to innovate relatively slowly. However, due to permissionless innovation (open codebases and databases) and the young age of the Web3 industry, underlying technologies and business models change rapidly. For example, in May 2021, 12 months after Version 2 (V2), Uniswap released V3 with dozens of revolutionary tech and business model features (Concentrated Liquidity, Active Liquidity, etc.). As each sub-ecosystem (DeFi, GameFi, etc.) is expanding and specializing rapidly, it is becoming increasingly difficult for generalist VCs to keep track of the innovation, highlighting the need for specialized investors.
Open community reduces the managerial value-add of VCs: Due to the strong community focus and the permissionless nature of Web3 companies, a system based on meritocracy develops. For example, a VC investor can post a governance proposal, or an engaged college student in Delhi can post their thoughts - both will be vetted by the community and valued based on merit. Over time, as specialization grows, it will be difficult for traditional generalist VCs to compete with highly specialized investors. As the “managerial value-add” of VCs slowly disappears, startups will increasingly question the actual value-add of traditional, generalist VCs.
Due to faster technological progress, business model innovation and rapidly expanding sectors/ecosystems (DeFi, ReFi) across many different networks (Ethereum, Cosmos, Solana), we observe that individual investors slowly narrow down their focus, targeting either specific networks or ecosystems. As a result, investors become increasingly specialized into silos. This leads to the question of how traditional, broad VC Funds will respond to this industry dynamic. How can VCs see all the early innovations within the different ecosystems? How can VC Funds continue to add value to startups apart from money? We believe that the fund structure needed to respond to these challenges would be a decentralized community fund.
The future of VC Funds within Web3
Just as we have seen monolithic blockchains becoming more modularised, we believe that we will see a further modularisation of VC Funds adopting a DAO-like structure. A Decentralised Autonomous Organisation (DAO) is formed by a group of people who decide to abide by certain code-based rules to meet common goals. The DAO would raise investment capital and invest it based on the voting majority of the DAO’s members. Using blockchain technology, an Investment DAO “receives” its rules and guidelines from the original code. This code can encode investment structures as formal as a paper contract or as informal as an investment club. Its structure, its rules for governing, and its ownership rights are all built into smart contracts that are deployed to a blockchain for all members to see and interact with. Another way to think about it would be a group of people with a shared wallet - decentralized with no hierarchical structure, yet driven by the same economic incentives. A DAO structure would allow VCs to become globally distributed, with semi-autarkic sub-teams within industry segments working towards a common goal while retaining the flexibility to pursue their own strategy.
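A minimal sketch of such an Investment DAO’s code-based rules, assuming simple one-token-one-vote majority governance (all names and numbers are illustrative; a real DAO would implement these rules as smart contracts on a blockchain):

```python
# Toy Investment DAO: a shared treasury governed by token-weighted
# majority voting. Illustrative only - real DAOs encode these rules
# in on-chain smart contracts.
class InvestmentDAO:
    def __init__(self):
        self.treasury = 0.0
        self.tokens = {}     # member -> governance tokens
        self.proposals = {}  # proposal id -> details and tallied votes

    def join(self, member: str, contribution: float):
        """Capital contributed to the treasury mints governance tokens 1:1."""
        self.treasury += contribution
        self.tokens[member] = self.tokens.get(member, 0) + contribution

    def propose(self, pid: str, target: str, amount: float):
        self.proposals[pid] = {"target": target, "amount": amount, "votes_for": 0}

    def vote(self, pid: str, member: str):
        """A member's vote counts with the weight of their token holdings."""
        self.proposals[pid]["votes_for"] += self.tokens[member]

    def execute(self, pid: str) -> bool:
        """The investment only happens if a token-weighted majority approves."""
        p = self.proposals[pid]
        if p["votes_for"] * 2 > sum(self.tokens.values()):
            self.treasury -= p["amount"]
            return True
        return False

dao = InvestmentDAO()
dao.join("alice", 60)
dao.join("bob", 40)
dao.propose("p1", target="DeFi startup", amount=50)
dao.vote("p1", "alice")            # 60 of 100 tokens -> majority reached
assert dao.execute("p1")
assert dao.treasury == 50
```

Because the rules live in code rather than in a partnership agreement, membership, voting power and treasury movements are transparent to every participant by construction.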
As competition increases, investors need to provide much more value than just money. In order to create the most value, the fund needs more specialized minds working together towards a common goal. Naturally, an open-community Investment DAO with 50 specialized minds distributed around the world will create more value than 10 partners around a table in a locked room.
Finding companies will be easier, as specialized sector investors will be able to see more granular movement within their respective sectors and thus spot opportunities earlier. Winning deals will be easier due to the expert knowledge of specialized sector investors, with whom every founder would want to work. Managing companies will be easier, as specialized sector investors will be able to better understand market dynamics, sector trends and competition.
Concretely, the need for specialized sector investors will lead to several decentralized investment teams with a clear focus on sub-industries. Those teams focus on researching themes within their industry, identifying investments and working post-investment with the companies themselves. They aim to not only provide value-add feedback, pushing the founders to the next level, but they should work together on products, tokenomics, or even securing the network by running validator nodes. For example, a DeFi team consisting of 10-15 investors with each of the team members having a special focus (i.e.: 3 members focus on stablecoins, 3 members focus on AMMs, etc.).
Due to the public market nature of some of those assets, there is also the need for a second team that focuses on the actual investment and exit process of public assets. Once the research team has decided on a publicly traded project, the public investment team executes the trade. As the goal is to buy great tech at great prices, the public investment team needs to understand macro dynamics as well as understanding trading metrics for the individual tokens (i.e.: options curve, supply and demand, etc.).
Conclusion
Understanding Crypto is difficult. One not only needs to understand technology, politics and economics but also psychology, philosophy and history. Only by becoming a polymath can one truly grasp the impact of Blockchain technology and its applications. When looking at the history of communication systems - from the Hardware and Software Era to the Networks Era with proprietary databases - it is clear to us that Crypto is the next logical step of the evolution: to “open-source” the end-to-end tech stack.
We believe that due to its tech stack, Crypto will not only “open-source” data but will revolutionize the internet as we know it. This will lead to permissionless innovation, better products and economic freedom. Through Crypto’s values of Self-custody and its Privacy Model, people will not only be able to build digital property rights but also be able to speak openly on the internet without fearing repercussions. From a timing perspective, the bubble has already burst, allowing Crypto to enter a broad adoption period. Past and present technological innovations and break-throughs will allow for broader user adoption. And by decentralizing communities, Crypto will not only change current hierarchical structures but also impact founder and investor profiles.
For the author the original motivation was to build a coherent first principles view on the space and challenge the conviction on Crypto - independent of price actions, political pressure and reputation. We conclude that our conviction strongly holds.