The Tower of Babel | Marten van Valckenborch the Elder (1595)

1:5 Full Stack

Cities and DLT protocols succeed because of the same network effects that make app stores and existing centralized social networks successful. More people move in because more people are already there, or, conversely, a platform fails to take off as people move away and abandon it.

The effort and cost required to migrate from one city, app store, or DLT protocol to another are known as switching costs. Network effects in the distributed ledger protocol space are much more fickle than in cities or centralized platforms because the switching costs are much lower.

  • For a company or individual to move from one city to another takes considerable effort and expense. Real physical expenses are incurred when making the switch: diesel fuel is consumed by the moving truck, and movers demand payment to haul pianos to the third floor. These, however, are not the only expenses. Inefficiency expenses, like closing costs on a new home or governmental benefits that do not transfer across state lines, must also be considered when calculating the full switching cost of a move.

  • Migrating a movie you purchased on the Amazon platform to a competing platform like Netflix is next to impossible. This is not because there are any physical barriers, but because each platform is trying to become an anti-capitalist monopoly (Thiel’s “unfair advantage”).

  • However, for a dApp to switch from, say, the Ethereum DLT network to network X takes considerably less effort. Less effort, but not zero effort, as changing platforms is still a major feat for any application. This brings up two primary switching costs.

    • Migrating the Code Base: This is the effort and cost for project developers to learn how a new platform works. Because codebases are built up slowly over time, migrating to a new platform becomes especially difficult when subsystems must be ported piecemeal while maintaining backwards compatibility with the previous system.

    • Migrating the User Base: This is the effort to get your users to move to the new platform. To migrate, a snapshot of current account balances needs to be taken for all current users so they hold the same percentage of tokens on the new network (a minimal sketch of this snapshot-and-reissue step follows this list). This can add significant risk for early-stage startups, as changing such a fundamental part of the system early on can send the wrong message to investors that the company did not do its due diligence beforehand.
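
To make the user-base migration step concrete, the sketch below shows one way a balance snapshot could work. It is a hypothetical illustration in Python, not any particular project's migration tool: record every holder's share of the old supply at a fixed block height, then mint the same percentages on the new chain.

```python
# Hypothetical sketch of a snapshot-and-reissue migration.
# Addresses, balances, and the new total supply are made-up examples.

def take_snapshot(old_balances: dict[str, int]) -> dict[str, float]:
    """Record each holder's share of total supply at a fixed block height."""
    total_supply = sum(old_balances.values())
    return {addr: bal / total_supply for addr, bal in old_balances.items()}

def reissue_on_new_chain(snapshot: dict[str, float], new_supply: int) -> dict[str, int]:
    """Mint tokens on the new chain so every holder keeps the same percentage."""
    # Rounding is ignored here; a real migration must account for every unit.
    return {addr: int(share * new_supply) for addr, share in snapshot.items()}

old_balances = {"0xAlice": 600, "0xBob": 300, "0xCarol": 100}
snapshot = take_snapshot(old_balances)                   # Alice 60%, Bob 30%, Carol 10%
new_balances = reissue_on_new_chain(snapshot, 1_000_000)
print(new_balances)  # {'0xAlice': 600000, '0xBob': 300000, '0xCarol': 100000}
```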

In a competitive landscape, dozens of platform-level protocols compete every day for projects to launch on their platform, for a variety of cost, ease-of-deployment, and network-effect reasons. The incentives a platform offers, such as low fees, mining rewards, fast execution, and security, are tricky to separate from the technical underpinnings that make those benefits possible.

Friction

A key theme we will return to from many different angles throughout this book is the concept of “friction”. In the first chapter, we established that certain friction is an unavoidable cost of introducing entropy into the universe: pouring concrete, moving pianos, and so on. Coordination friction, however, is not a natural source of friction; it is a man-made source that drives up the cost of doing business beyond the raw materials and effort required to perform some useful action. With DLT, the goal is to drive coordination friction toward zero, freeing useful capital in all facets of our economy from being hoarded by inefficient middlemen.

Therefore, the switching costs embedded in our daily lives should also approach zero, given sufficiently advanced logic running on an immutable trust fabric. When analyzing DLT protocols, the bigger question being asked is: does this system of rules-based logic encourage people to coordinate in the lowest-friction way possible? In other words, if the system can be implicitly trusted to execute transactions fairly, and the logic itself is automated and open to all, then friction will approach zero.

User Stack

We hope the cities analogy helped, but the truth is that the technical architecture that allows distributed ledgers to work differs in many important ways from the simplified architecture presented in the previous chapter. A more complete picture of the development stack needs to be broken out by the primary set of users that interact with each layer.

  1. End Users: Incentivized by using the products

    User Interface Layer - The pretty surface layer that lets users interact with the application via a friendly UI that hides what is going on beneath the iceberg. To end users, only the value they perceive they get out of the service matters, not the elegance of the underlying architecture.

  2. dApp Developers: Incentivized by making the products

    Smart Contract Layer - The first logic layer where the useful work happens, be it a calculator or a mortgage application automator. Within the smart contract layer there is a massive body of computer science best practices for good architecture, including class-based approaches that prevent excessive resource usage by efficiently calling only what is needed to execute each specific part of the code. Most developers wanting to build something useful will use prewritten tools called libraries to help them create their smart contracts. In fact, terms like “ERC20 standard” in Ethereum are really just a set of programmatic best practices that keep developers out of trouble by giving them prewritten instructions that make building new products easier (a minimal sketch of the ERC20 function set follows this list).

  3. Protocol Developers: Incentivized by making the products that allow developers to make the products

    Virtual Machines, low-level execution layers, APIs, Bridges... - The translators that turn smart contract code into a format that can be executed on a global network of distributed machines. This is where the critical heart of each project lies, along with the consensus mechanisms and elliptic curve cryptography that allow the network to process, validate, and store transactions.

  4. Hosts: Incentivized to store copies of everything end users want to store for all time. These are the IT professionals or eventually even end users that support the network with hard drive space, bandwidth, and compute resources.

    Storage layer(s) - The actual databases where information is stored on a redundant distributed ledger for all time, or until the chain is abandoned.

  5. Hardware Manufacturers: Incentivized by selling physical devices that support all of the above layers

    Raw metal - The interaction between the bytecode produced by the virtual machine and the raw distributed server hardware, which must work on any machine on the network regardless of the hardware configuration or supported server operating system the ledger happens to reside on.
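
Returning to the smart contract layer in item 2, the sketch below shows roughly what the ERC20 standard specifies: a shared set of functions (total supply, balances, transfers, and allowances) that every compliant token exposes. Real ERC20 contracts are written in languages like Solidity and run on the Ethereum Virtual Machine; this Python version is only an illustration of the interface, with all names and amounts invented.

```python
# Illustrative Python sketch of the ERC20 function set. Real tokens are
# Solidity contracts on the EVM; this only shows why a shared standard
# makes tokens easy to build against and integrate.

class SimpleToken:
    def __init__(self, total_supply: int, creator: str):
        self._total_supply = total_supply
        self._balances = {creator: total_supply}   # address -> balance
        self._allowances = {}                      # (owner, spender) -> approved amount

    def total_supply(self) -> int:
        return self._total_supply

    def balance_of(self, owner: str) -> int:
        return self._balances.get(owner, 0)

    def transfer(self, sender: str, to: str, amount: int) -> bool:
        if self.balance_of(sender) < amount:
            return False
        self._balances[sender] -= amount
        self._balances[to] = self.balance_of(to) + amount
        return True

    def approve(self, owner: str, spender: str, amount: int) -> bool:
        self._allowances[(owner, spender)] = amount
        return True

    def transfer_from(self, spender: str, owner: str, to: str, amount: int) -> bool:
        # A spender may move another owner's tokens only up to the approved allowance.
        if self._allowances.get((owner, spender), 0) < amount:
            return False
        if not self.transfer(owner, to, amount):
            return False
        self._allowances[(owner, spender)] -= amount
        return True
```

Because every wallet, exchange, and dApp can call this same handful of functions on any compliant token, developers do not have to reinvent the plumbing for each new project.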

Within this stack, the virtual machine and storage layers are the most controversial, as they do the heavy lifting for the entire system. In the next chapter, we will discuss the virtual machine layer in detail, but first we need to address the philosophical trade-offs within the storage layer.

Court Systems, Hidden Costs, and Dueling Philosophies

In the consensus mechanisms chapter, we briefly mentioned the big-blocks vs. lightning network scaling strategies. These two examples illustrate the fundamental difference between a uni-cast network and a multi-cast flood network.

In the early days of the internet, when someone wanted to send a file across the network, all peers needed to replicate the original data for the file to transfer (i.e., a flood network).

If the internet operated this way today, it would instantly grind to a halt as every tweet or Amazon purchase would need to be passed on to every other person in the network.

Hence, today we use the TCP protocol to uni-cast our communications directly from one party to another without the need for this replication.

The distributed ledger world is currently in a philosophical quandary about what deserves to be replicated for all time on-chain, and what can safely happen off-chain. 

  • Strategy 1: Keep everything on-chain. This strategy seeks to use ever-falling data storage costs to host redundant copies of the entire ledger as it grows over time.

  • Strategy 2: Use the chain as a court system, where the base-level ledger is only used when disputes arise between parties. (E.g., a lightning channel is disputed and collateral locked up on the base-level chain is transferred from Party A to Party B. Otherwise, if there is no dispute, the transaction happens without being registered to the chain. A simplified sketch of this channel-and-court pattern follows this list.)
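
As a rough illustration of the court-system idea, here is a simplified Python sketch of a two-party payment channel: collateral is locked on-chain once, balance updates are exchanged off-chain, and the base chain is only consulted at close or dispute time. Real lightning channels add signatures, hashed timelocks, and penalty rules that are omitted here, and all amounts are made-up examples.

```python
# Simplified "court system" sketch. Signatures, timelocks, and penalties
# are omitted; all amounts are made-up examples.

class PaymentChannel:
    def __init__(self, deposit_a: int, deposit_b: int):
        # Opening the channel locks collateral on-chain (one transaction).
        self.collateral = deposit_a + deposit_b
        self.latest_state = {"nonce": 0, "a": deposit_a, "b": deposit_b}

    def update_off_chain(self, nonce: int, balance_a: int, balance_b: int):
        # Both parties co-sign a new balance sheet; nothing touches the chain.
        assert balance_a + balance_b == self.collateral
        if nonce > self.latest_state["nonce"]:
            self.latest_state = {"nonce": nonce, "a": balance_a, "b": balance_b}

    def settle_on_chain(self):
        # Only at close or dispute does the chain act as the court,
        # paying out according to the highest-numbered state it is shown.
        return self.latest_state["a"], self.latest_state["b"]

channel = PaymentChannel(deposit_a=50, deposit_b=50)   # one on-chain transaction
channel.update_off_chain(1, 40, 60)                    # off-chain
channel.update_off_chain(2, 30, 70)                    # off-chain
print(channel.settle_on_chain())                       # (30, 70), one more on-chain transaction
```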

Keep in mind that any consensus mechanism type (they-are, you-elect, you-are) can in theory operate either with an all on-chain strategy or with a multilevel on-chain + off-chain strategy. As long as the network provides incentives for a sufficiently distributed base of computers to host full copies of the base ledger, either approach can work, so it is too early to rule out which strategy will succeed.

The important part here is that storing data on a redundant global ledger is not free. The cost to post a transaction to an on-chain ledger is not just the one-time cost for the miner to validate the transaction, but also the recurring cost of keeping the transaction in the ledger for all time, as nodes continue to host a complete chain of events going back to the first transaction posted to the network.
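
To put rough numbers on that recurring cost, the snippet below multiplies one transaction's size by the number of full nodes that must hold it indefinitely. Both figures are illustrative assumptions, not measurements of any real network.

```python
# Illustrative arithmetic only; both figures are assumptions.
tx_size_bytes = 250       # a small transaction
full_nodes = 10_000       # nodes hosting a complete copy of the ledger

replicated_bytes = tx_size_bytes * full_nodes
print(f"{replicated_bytes / 1_000_000:.1f} MB of global storage, held indefinitely")
# -> 2.5 MB of replicated storage for a single 250-byte transaction
```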

Hidden Protocols

The beauty of a backend revolution is that much of what goes on beneath the iceberg is magically invisible to the end user. Take converting a .JPEG image file into a .PNG image, for example. To accomplish this task, your computer needs to interact with all five levels of the development stack.

  • As an end user, you only see the drop down menu to select .PNG and hit submit.

  • But beneath you someone needed to write the photo editing software.

  • Beneath them, someone needed to write the operating system, whether it runs on a virtual machine or a regular computer.

  • Beneath that someone needs to store the results. This could be your local hard drive, a FAANG data center, or a DLT.

  • And finally, beneath all that, someone needs to manufacture the hard drive, CPU, RAM, etc. that allow the physical transistor gates to open and close, rendering the correct 1s and 0s.

Today, converting one token on protocol ledger X to another token on protocol ledger Y is technically possible, yet it is not built out to consumer internet (swipe right for sex now) standards.

Imagine the following use case:

  • Xoana (our hero from the foreword) wants to pay for 10 gigabytes of storage to back up her Harvard graduation pictures. 

  • However, months earlier she paid for a 1 year supply of VPN tokens so she could access the internet freely outside of Venezuela.

Now that she is safely in America, just like converting a .JPEG to .PNG, she can seamlessly convert her VPN tokens to Storage tokens.

  1. Beneath her user interface tip of the iceberg, a ballet of electrons takes place in milliseconds across the globe.

  2. An application on the smart contract layer submits a sell order to swap her VPN tokens for Storage tokens.

  3. The protocol layer the VPN tokens exist on initiates an atomic swap by first locking her VPN tokens, then releasing them when the Storage token protocol confirms its side is also locked (see the hash-lock sketch after this list).

  4. The swap happens and the ledger storage layers of both the VPN protocol and the Storage protocol update their ledgers.

  5. Physical hard drive space is unlocked by Xoana's new Storage tokens, allowing her access to 10 gigabytes of storage space across a global network of shared hard drives.
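
Atomic swaps of this kind are commonly built on hashed timelocks: both chains lock funds behind the hash of a secret, and revealing the secret to claim one side makes it public for the other side to claim too. The Python sketch below strips the idea down to a bare hash lock; the chains, token amounts, and party names are hypothetical stand-ins, and the refund timelocks a real swap needs are omitted.

```python
# Stripped-down atomic swap sketch using a hash lock. Real swaps run on two
# independent chains and add timelocks for refunds; everything here is a
# hypothetical stand-in to show the ordering of the steps.
import hashlib
import secrets

class HashLockedVault:
    def __init__(self, amount: int, hash_of_secret: bytes, recipient: str):
        self.amount = amount
        self.hash_of_secret = hash_of_secret
        self.recipient = recipient
        self.claimed = False

    def claim(self, secret: bytes) -> bool:
        # Funds release only to someone who knows the preimage of the hash.
        if not self.claimed and hashlib.sha256(secret).digest() == self.hash_of_secret:
            self.claimed = True
            return True
        return False

# 1. Xoana picks a secret and publishes only its hash.
secret = secrets.token_bytes(32)
h = hashlib.sha256(secret).digest()

# 2. She locks her VPN tokens behind the hash on chain X...
vpn_lock = HashLockedVault(amount=100, hash_of_secret=h, recipient="storage_provider")

# 3. ...and the counterparty, seeing that lock, locks storage tokens behind the same hash on chain Y.
storage_lock = HashLockedVault(amount=40, hash_of_secret=h, recipient="xoana")

# 4. Xoana claims the storage tokens, which reveals the secret on chain Y...
assert storage_lock.claim(secret)

# 5. ...letting the counterparty use the now-public secret to claim the VPN tokens on chain X.
assert vpn_lock.claim(secret)
```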

If this sounds like science fiction, it is not. While clunky, this entire use case is possible today. By analyzing early proto-applications like Filecoin/Siacoin/Burstcoin (shared storage) and Mysterium (decentralized VPN), it's possible to see a new type of internet evolving out of this primordial soup. 

Over the short term, choosing platforms with fast and loose governance, limited scalability, and high transaction costs can work when the value of applications built on the protocol is speculative, and real world usage is yet to happen.

However, protocols with good architecture will, by design, win out over the long term as applications begin to run into the limits that bad architecture imposes on successful applications, forcing developers to switch to better platforms.

In our next chapter, we will explore how a computer science technique used in database architecture for decades holds one of the major technical keys to unlocking good architecture. When trying to maintain a shared global truth, scaling an immutable flood network can become extremely difficult.

1:6 In Shard We Trust

The isolated man does not develop any intellectual power. It is necessary for him to be immersed in an environment of other men, whose techniques he absorbs during the first twenty years of his life. He may then perhaps do a little research of his own and make a very few discoveries which are passed on to other men. From this point of view the search for new techniques must be regarded as carried out by the human community as a whole, rather than by individuals.
— Alan Turing