1:5 In Shard we Trust
The brilliance of a monolithic proof-of-work ledger like Bitcoin is that it can gain value while scoffing at efficiency. The more power it consumes, and the more hard drive space its ledger uses, the more powerful it becomes.
Face it, we humans are an odd bunch. We all want the McMansion. Who among us will say no to using as much electricity as possible if given the chance? Two or three people can cram into coach for every business class airline seat, but guess what: business class seats cost four times as much. Sure, there is a little extra profit margin, but using twice as much physical real estate, enjoying a heavier, more comfortable seat, and getting snockered on unlimited champagne are not free. Real capitalism is tied to energy use.
As fun as Bitcoin is, “store of value” via excessive consumption will never get us to our end goal: replacing FAANG and the rest of the siren servers with more equitable platforms that make more money by returning economic dignity to regular people.
Say you have 1 terabyte of artificially generated cat pictures your AI stayed up all night creating for you. There are two main options for how to best store your precious creations:
Buy a 1 terabyte hard drive off Newegg for $49.99 and store your little guys and gals locally.
Pay a siren server like Google or Amazon ~$10-15 (2019-equivalent USD) per month (a price that decreases a few percent per year thanks to ingenious new optimizations) to store them for you.
This chapter is less about Google using your cat pictures for free to make their cat pictures better without paying you, and more about how Google goes about efficiently storing enormous reams of data using the least amount of resources possible.
Instead of storing one massive terabyte chunk of cat picture data, Google will instead break your data down into bite-sized chunks called shards. Typically, we think of a chunk as something like an individual JPEG image, but computers really don’t care as long as the 1s and 0s match up.
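To make the idea concrete, here is a toy sketch of breaking a blob into fixed-size shards the way the text describes. The shard size here is tiny and arbitrary; real systems pick sizes on the order of megabytes, and this is only meant to show that the 1s and 0s reassemble perfectly.

```python
# Split a blob into fixed-size shards, then prove reassembly is lossless.
# Shard size is illustrative only; real storage systems use much larger chunks.

def shard_data(blob: bytes, shard_size: int) -> list[bytes]:
    """Break a byte blob into consecutive chunks of at most shard_size bytes."""
    return [blob[i:i + shard_size] for i in range(0, len(blob), shard_size)]

blob = b"terabyte_of_cat_pictures" * 4  # stand-in for your 1 TB of cats
shards = shard_data(blob, shard_size=16)
print(len(shards), "shards")

# Reassembly recovers the original exactly -- the computer doesn't care
# about JPEG boundaries as long as the bytes match up.
assert b"".join(shards) == blob
```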
To deliver your 1 terabyte of data to you for the low cost of 10 dollars per month, FAANG needs to be smart about how it stores your cats. The cheapest and dumbest way of storing your data would be to buy a $49.99 1 terabyte hard drive from Newegg, then stick it into a cheap server that holds the data.
Alas, there is no real cost advantage to this approach other than buying hard drives in bulk. When you factor in cooling costs, server racks, and the average 18-month service life of a typical hard drive, it makes exceedingly little sense to store data at scale using the same methods you would at home.
More importantly - you demand uptime!
Uptime Just in Time
So what is an enterprising FAANG server to do?
If a single hard drive platter gets corrupted, FAANG can potentially be held liable for losing your data. Their EULA is so ironclad you probably can't sue them and win, but they still want their spy data pristine, dammit!
To guarantee 99.999% uptime (ability to access the data) and 100% reliability (data not getting deleted), your cat pictures must be partially replicated and made redundant in case something happens to an individual hard drive.
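A quick back-of-the-envelope calculation shows why replication buys those extra nines. The numbers below are made up, and we assume drive failures are independent, which real data centers know is optimistic (power and cooling failures are correlated), but the shape of the math holds:

```python
# Why replication works: losing data requires ALL replicas to fail.
# Assumes independent failures and a made-up 5% annual drive failure rate.

def annual_loss_probability(per_drive_failure_rate: float, replicas: int) -> float:
    """Probability that every replica of a shard dies in the same year."""
    return per_drive_failure_rate ** replicas

for replicas in (1, 2, 3):
    p = annual_loss_probability(0.05, replicas)
    print(f"{replicas} replica(s): {p:.6%} chance of losing the data")
```

Each extra copy multiplies the loss probability by the failure rate, which is why three replicas of a cheap drive beat one copy of an expensive one.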
FAANG also knows not all data is created equal: fresh New York Times data getting 1 million+ hits in an hour is a different class of data than Karen from South London's 2006 graduation photo album on Facebook.
The New York Times demands highly redundant solid-state drives geographically dispersed across the world to serve those millions of browsers as fast as possible. Karen, not so much. Her data is probably on a magnetic tape somewhere near the break room.
Both classes of data are sharded, but the Content Delivery Network that sends them from the deep bowels of data centers to end-user browsers is very different. CDNs are a fascinating part of the internet ecosystem, but for brevity's sake this is the only time we will mention them. Startups in the distributed ledger space are, however, looking to create distributed CDNs, which is exciting.
To explain what a shard actually is, we need a confusing techie graphic.
The picture above shows how an application like downloading your cat pictures, reading the New York Times, or viewing old graduation photos queries many separate chunks of a database to piece together the full scope of the data needed to run the application.
This graphic also mentions the ever-present "hashing" to confirm the integrity of the data in each shard. If the hashes don't match, the shard is invalid, as a 1 or 0 must have gotten out of place. More than just confirming data integrity, hashes are used to separate user data on shared architecture. Hypothetically, Ford and GM proprietary design sketches could be on the same FAANG server at the same time yet never communicate, because each data repository is unlocked by a different key.
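The integrity half of that story fits in a few lines of code. This is a toy version (the field names are ours, not any real FAANG API): each shard stores its data alongside a hash, and if even one bit flips, the recomputed hash no longer matches.

```python
import hashlib

def shard_hash(data: bytes) -> str:
    """Fingerprint a shard's contents with SHA-256."""
    return hashlib.sha256(data).hexdigest()

# Store a shard together with its hash (illustrative structure only).
shard = {"data": b"cat_pictures_chunk_0042", "hash": None}
shard["hash"] = shard_hash(shard["data"])

# An honest read: the recomputed hash matches, so the shard is valid.
assert shard_hash(shard["data"]) == shard["hash"]

# A corrupted read: flip a single bit and the hashes no longer match.
corrupted = bytearray(shard["data"])
corrupted[0] ^= 0x01
assert shard_hash(bytes(corrupted)) != shard["hash"]
print("integrity check works")
```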
To further show the vast differences between at home IT configurations and enterprise demands, there are entire careers made out of nothing but:
More efficiently sorting data and predicting what data will be needed where and when
Shortening the time it takes to reassemble shards by reducing bottlenecks where one critical path must finish before a multi-threaded process can complete
Reducing compute and storage resource requirements by tuning the number of shards up or down, even down to writing lower-level database languages that compile to faster code.
Fortunately, we will keep it casual and not dip even a single toe into those frightening waters. Treat your backend developers well or the world might end.
In the previous chapters, we introduced the “monolithic ledger” argument and how flood networks have difficulty scaling. We concluded that either:
the main chain needs to remain small with side chains or off-chain transactions taking the vast majority of network load
No universal consensus is possible and we need separate encrypted networks with interoperability
Or maybe universal consensus is possible if we are especially smart about how we tie our shards together.
Let’s explore door number 3.
Behind this door, we run headfirst into an incredibly hard computer science problem. Let’s say Bob has 10 tokens worth of credit on our fancy sharded ledger network. Instead of one universal chain to reference, there are many distinct shards each with only a piece of the overall puzzle.
Bob spends his 10 tokens on Shard A
and then tries to spend THE SAME tokens on Shard B!
This sneaky double spend attack destroys the entire value proposition of a distributed ledger if someone on the network can game the system in this way.
Thus Shard B must know something about Shard A to prevent a double spend attack.
But what constitutes something?
The answer is always more HASHES.
In this case, we want shard B to know as little about shard A as possible, except that shard B needs to know if a transaction has been spent already on Shard A so it can block malicious transactions.
Thus, shard B must have some recent hashes from shard A inside of it so Shard B can verify Bob’s 10 tokens have not already been spent.
This gets hard quickly when there are not just two shards, but potentially 10,000.
Merkle Roots save the Day
Good thing we have our trusty friend the Merkle root that can compress transactions from each shard into a single proof!
We feel a little bad for burying the lede this far into the book by not talking about Merkle trees until now. There is just so much ground to cover that we felt it was important to put some space between technical bedrocks like Merkle trees.
The Merkle tree (patented by Ralph Merkle in 1979) finds its way into every distributed ledger project, and has use cases in almost every type of database structure, decentralized or not.
In Bitcoin, Merkle trees are used to quickly verify the entire monolithic chain by compressing all previous transactions into a Merkle root that can vouch for the entire tree in one computational step. If something is rotten in Denmark, the root can be unpacked one tree branch at a time until the non-matching hash is found. In the Bitcoin world, software called Simplified Payment Verification (SPV) performs these hash checks quickly, which allows anyone with modest computational resources, such as a smartphone or laptop, to make bitcoin transactions. No terabyte-sized full copy of the ledger required.
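A minimal Merkle root calculator, in the spirit of the Bitcoin description above: transactions are hashed, hashes are paired and hashed again, until one root remains. Like Bitcoin we use double SHA-256 and duplicate the last hash on odd-sized levels, but this sketch skips the other wire-format details, so don't expect it to reproduce real block headers.

```python
import hashlib

def h(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold a list of transactions into a single 32-byte root."""
    level = [h(tx) for tx in leaves]
    while len(level) > 1:
        if len(level) % 2:              # odd count: duplicate the last hash
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

txs = [b"tx1", b"tx2", b"tx3", b"tx4"]
root = merkle_root(txs)
print(root.hex())

# Tamper with any single transaction and the root changes completely --
# which is how one compressed hash can vouch for the whole set.
tampered_root = merkle_root([b"tx1", b"tx2", b"tx3", b"tx4-EVIL"])
assert root != tampered_root
```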
In part II of the book, we will get very meta about the entire notion of Merkle roots as potentially the only data that ends up on a monolithic main chain like Bitcoin, something we call the “fingerprint internet”.
But before we get there we need to bring sharding home.
So now we know Merkle roots are how shards can be tied together to quickly verify the integrity of other shards on the network.
This is fine unless your honest node is surrounded by deceitful nodes intentionally jamming your node with #fakenews.
As seen over and over again through pesky 51% attacks, a network is only as good as the integrity of its full nodes who perform work by maintaining and updating copies of the ledger. When we move away from pure proof-of-work, we lose the physical tie to security that must be made up for in more creative and less energy intensive computer science ways.
This enhanced structure (or at least one good train of thought) involves getting more out of your end user software wallets.
Instead of being dumb terminals that only perform Simplified Payment Verification-esque functions, end user wallets need to get more active by storing little chunks of the shards themselves.
It is only when we have a sufficient density of honest nodes in the network, that distributed ledgers are even possible. Our monkey brains typically imagine this in humanistic terms like each of us storing our own little ledger. This line of thinking can be a bit limiting though as we are prone to think the majority of nodes must be honest for the system to work.
In fact (because of math) these systems are actually extremely fault tolerant and can root out many malicious actors by preferentially propagating honest information across the network. This resilience is:
part raw computer science using logical mechanisms like gossip protocols and Merkle roots to propagate information trustlessly across the network
and part game theoretic incentives to reward people with tokens for supporting the network, while punishing defectors by rejecting their transactions, or even confiscating their tokens.
All distributed ledger projects use these two fundamental flavors as they attempt to build truly immutable global consensus systems. Some have provably better code that executes faster and more securely, while others might have innovative incentive mechanisms that drive honest use of the network.
Propagation = Gossip
As we mentioned at length in the Bitcoin chapter and the subsequent consensus mechanism chapters, nodes need to communicate with each other and share information about new transactions posted to the network (carving transactions into the shared internet rock).
Alice broadcasts to the world that she wants to send the transaction "Send 1 Token" to Bob. Say this transaction represents selling an equity token to Bob.
Node 1 is closest to Alice and picks up her broadcast request first. (Close can be defined in many ways, such as latency, geography, or, for more security, the node whose random network identifier is closest to hers, while maintaining some semblance of routing efficiency)
Node 1 checks its local copy of the ledger and verifies
Alice has an appropriate balance
and rights to issue the transaction. (e.g., she cannot send a security token to non-verified accounts if this is a permissioned network)
Node 1 then writes Alice’s transaction to its local ledger
If this transaction was posting to a single centralized database no more steps are required. However, in our “trustless” future we have no guarantee that the centralized database will not delete/modify the transaction. Hence why we use a distributed system to:
GOSSIP what Node 1 knows about the transaction to Nodes 2, 3, 1012, and 4. In our simplified system, we will say nodes are gossiped to sequentially with some randomness. On a real production platform, there would be very strong mathematics governing how nodes interact and are chosen to pass on information.
Node 2 might receive the transaction in just a few milliseconds, while node 1012 might take a few seconds to receive the gossip.
Once Alice’s transaction begins to gossip across the network, potentially thousands of ledgers will have a secure copy of her transaction within just a few “hops” as propagation becomes exponential.
1 node shares with 5 nodes, who each share with 5 more nodes, which quickly becomes 5 -> 25 -> 125 -> 625 -> 3,125
At 5 neighbors per hop, within 4 hops 625 nodes now share Alice’s transaction, then with just one more hop 3,125! In a production environment, gossip could spread to many more than 5 nodes per hop leading to global consensus in a matter of seconds.
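The hop math above fits in a tiny loop. We assume every node gossips to exactly 5 fresh neighbors per hop and nobody hears the transaction twice; real networks overlap heavily, so actual growth is slower than this idealized exponential.

```python
# Idealized gossip fan-out: each hop multiplies coverage by the fan-out.
# Assumes 5 fresh neighbors per hop and no duplicate deliveries.

FANOUT = 5
HOPS = 5

reached = 1  # Node 1 starts with Alice's transaction
for hop in range(1, HOPS + 1):
    reached *= FANOUT
    print(f"hop {hop}: {reached} new nodes have the transaction")
```

Bump the fan-out to 20 in a production-sized network and the same loop covers millions of nodes in four hops, which is why global consensus can happen in seconds.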
The key difference between a monolithic gossip protocol like Bitcoin, and a sharded protocol lies in each node NOT needing a full copy of the ledger to validate transactions, but rather processing a subsection of the network knowing the shared Merkle roots will quickly reveal malicious activity such as double spend attacks.
If Alice attempted to sell her equity tokens on her nearest shard and simultaneously on a shard extremely far away on the network, the gossip protocol should catch up to her duplicity in just a few hops, then reverse the transaction back across the network.
If Alice “staked” (pledged token collateral) to attest to her honesty, those “staked” funds could potentially be confiscated automatically by the network as soon as the fraud was discovered.
In a gossip system, the double spend might not be immediately discovered, but is discovered quickly enough the transaction can be efficiently rejected before becoming an immutable part of the network history.
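The staking-and-slashing idea can be sketched just as simply. Everything here is hypothetical, including the stake amounts and the all-or-nothing confiscation rule; real networks have much more nuanced slashing schedules.

```python
# A toy slashing rule: once a double spend is proven, the network
# confiscates the offender's entire staked collateral automatically.
# Names and amounts are illustrative only.

stakes = {"alice": 100, "bob": 100}

def slash_for_fraud(network_stakes: dict, offender: str) -> int:
    """Confiscate the offender's stake; return the amount slashed."""
    slashed = network_stakes.get(offender, 0)
    network_stakes[offender] = 0
    return slashed

confiscated = slash_for_fraud(stakes, "alice")
print(f"slashed {confiscated} tokens; alice now has {stakes['alice']} staked")
```

The point of the collateral is that cheating has a price the network can collect without asking anyone's permission.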
In our next chapter, we will move away from consensus and ledgers to dig deep into the raw mathematical logic that determines whether a 1s-and-0s-based "state" system can be provably trusted to work the same way every time. Without this core mathematical trust, no system should be trusted to execute tasks as important as organ donor matching, real estate title transfers, prison release schedules, etc.