1:6 High Assurance
Somewhere deep in your brain, hidden underneath a haze of adolescent hormones, lies the distant memory of writing geometry proofs. Remember those silly things that prove corresponding angles are actually equal?
In the world of computer science (of which distributed ledgers are an expression) there are techniques that allow programmers to mathematically prove code will always execute the same way every time. At their core, computers are simply "state machines": devices that consistently transform one sequence of electrical impulses into another. Is the transistor flipped to a 1 or a 0? On or off.
Techniques used to prove that code outputs the same sequence of 1s and 0s every time it is executed are commonly called "high assurance" software design. These designs are very labor intensive to deploy, and have largely been reserved for critical functions such as energy grids, airplane control systems, medical devices, and other software that cannot fail. Life and death software.
There is an amazing level of abstraction between the words you see typed on this screen, and the 1s and 0s your processor turns on and off inside of your computer to show you this text.
Underneath this beautiful Squarespace website lies:
At the top, the high level programming languages (HTML, CSS, and JavaScript) that describe the page you are reading
Below this lies an interpreter (either server side or in your web browser) that translates the programming languages into an intermediate representation of machine-readable bytecode
At the very bottom lies the execution of the intermediate representation into raw 1s and 0s by your computer's CPU, along with many libraries native to your operating system and web browser that help the process along.
This amazing ballet for the most part relies on a subset of computer science called "imperative" languages, which allow programmers to write code that tells the computer "how" to do something without explicitly spelling out every single operation involved.
Imperative languages save massive amounts of time and resources, as programmers can skip many steps by writing high level statements to accomplish a task, rather than explicitly stating every single sub-task the computer needs to perform.
By giving up control over every step of the process, programmers gain ease of use and speed, but lose the guarantee of security and reliability. This is a perfectly acceptable trade off in any type of application that is not mission critical.
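To make the trade-off concrete, here is a small illustration (in Python, chosen purely for readability; the argument is language agnostic): one high level statement versus the explicit sub-steps it quietly performs on your behalf.

```python
# High level: one statement accomplishes the whole task.
data = [3, 1, 2]
print(sorted(data))  # [1, 2, 3]

# Explicit: every sub-step spelled out by hand (a simple bubble sort).
items = [3, 1, 2]
for i in range(len(items)):
    for j in range(len(items) - 1 - i):
        if items[j] > items[j + 1]:
            items[j], items[j + 1] = items[j + 1], items[j]
print(items)  # [1, 2, 3]
```

The one-liner is faster to write, but the programmer has no visibility into what the language runtime actually does underneath it.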
Take this adorable rabbit GIF for instance.
If your web browser has a bug and crashes trying to render the GIF, no permanent damage is done. However, if the database that hosts this GIF has a bug and causes the GIF to be permanently corrupted, the world would be a slightly worse place to live in.
To a computer, it makes no difference if the file is a rabbit GIF, or an audio recording of a taped confession implicating a high ranking official in a corruption case, or a real estate title, or a bank account...
As distributed ledgers contain billions of fiat dollars' worth of value today (and will eventually host services like organ donor registries), it is exceedingly important that the core levels of the protocol are written in such a way that they mathematically cannot fail.
We know how to keep avionics systems on airplanes from crashing, so we should be able to use these same techniques to keep the rest of our global IT infrastructure from crashing.
To explain this crash proof (or really theoretically crash proof) method of writing software, we must go back to a time before computers even existed. A time before contempt culture.
A (Very) Brief History of the World
Please forgive us for reducing 80 years of mathematics and computer science into a few grossly oversimplified paragraphs.
It began in 1928, when the mathematician David Hilbert posed the Entscheidungsproblem ("decision problem"), commonly summarized as follows:
The problem asks for an algorithm that takes as input a statement of a first-order logic (possibly with a finite number of axioms beyond the usual axioms of first-order logic) and answers "Yes" or "No" according to whether the statement is universally valid, i.e., valid in every structure satisfying the axioms. By the completeness theorem of first-order logic, a statement is universally valid if and only if it can be deduced from the axioms, so the Entscheidungsproblem can also be viewed as asking for an algorithm to decide whether a given statement is provable from the axioms using the rules of logic.
In plain English: Church and Turing proved that no such universal algorithm can exist; some questions are simply undecidable. In answering Hilbert, they defined the very limits of what computers can and cannot do. Thanks to the pioneering work of Alonzo Church developing the lambda calculus, and Alan Turing developing the first practical model of a state machine, billions of people now enjoy digital modernity.
Software crashes when the computer doesn't know what to do and gets stuck between yes and no. The lambda calculus can be immensely powerful when expressed via programming languages, because every piece of logic is ultimately represented by a mathematical function, and in its typed forms it can guarantee a definitive answer every time.
The hallmark function of the lambda calculus in relation to computer science is the Y combinator.
This amazing piece of logic allows recursion to exist in functional programming languages. Computers gain their immense power from taking the output of a function and feeding it back into itself, potentially billions of times a second, to solve things humans never could by hand. Without this kind of repetition there would be no Bitcoin mining, as all a Bitcoin miner does is guess, over and over again, which number will win the next block.
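For the curious, the idea can be demonstrated in a few lines. The sketch below uses Python rather than a true functional language, and technically shows the Z combinator, the variant of the Y combinator that works in eagerly evaluated languages; it manufactures recursion out of nothing but anonymous functions.

```python
# The Z combinator: the eager-evaluation cousin of Curry's Y combinator.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial written without ever naming itself: `rec` is supplied by Z.
factorial = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

print(factorial(5))  # 120
```

The function never refers to itself by name; the combinator alone ties the knot that makes recursion possible.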
Recursion is of course a double-edged sword: a runaway loop that slowly eats up more and more memory, or a recursion that never terminates, will eventually crash your computer. The beauty of functional programming over imperative programming is that the computer is told to do the recursion explicitly, rather than relying on higher order logic that converts to raw computation without the programmer being aware of exactly what is occurring inside the state machine.
In other words, most computer programs written today in imperative languages "trust" opaque black boxes, written by someone else and often decades ago, to do something. If programmers are not incredibly careful, an inadvertent flaw in another part of the code will break the entire program. Effectively, all functional programming does is lay bare each programmatic function as math that can be "provably" checked to behave the same way every time.
By the 1980s, immense amounts of groundwork had been laid, from Haskell Curry's work on combinatory logic (which gave us the Y combinator) to the development of the Hindley–Milner type system, culminating in the creation of the first usable functional programming languages.
Ever since, a small contingent of developers willing to write code in a rigorous and often unforgiving way has been rewarded with code that is mathematically guaranteed, or "formally verified" (as in, proven via mathematical proofs), to execute the same way every time.
“Turing-restricted” environments can be an immensely powerful way for DLT projects to ensure security. By only giving developers access to small pieces of formally verified toolsets to work with, platform level protocols give higher level developers peace of mind that their projects will execute properly every time.
For instance, formally verified code to perform simple payment transfers can be pulled off the shelf by developers or even end users through a simple user interface. When the user sends a payment, they are mathematically guaranteed it will execute properly every time.
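A real formally verified payment library would carry machine-checked proofs, which no short example can reproduce. As a rough sketch of the contract such a component must satisfy, here is a hypothetical transfer function (the names and checks are our own invention) whose safety properties are at least stated explicitly and checked at runtime:

```python
def transfer(balances: dict, sender: str, receiver: str, amount: int) -> dict:
    """Transfer `amount` from sender to receiver, with its safety
    properties written out. In a formally verified library these
    properties would be *proven* to hold for all inputs; here they
    are merely asserted at runtime."""
    # Preconditions
    assert amount > 0, "amount must be positive"
    assert balances.get(sender, 0) >= amount, "insufficient funds"

    new_balances = dict(balances)
    new_balances[sender] -= amount
    new_balances[receiver] = new_balances.get(receiver, 0) + amount

    # Postconditions: no coins created or destroyed, no negative balances
    assert sum(new_balances.values()) == sum(balances.values())
    assert all(v >= 0 for v in new_balances.values())
    return new_balances

ledger = {"alice": 100, "bob": 25}
ledger = transfer(ledger, "alice", "bob", 40)
print(ledger)  # {'alice': 60, 'bob': 65}
```

The difference between this sketch and true formal verification is where the checking happens: assertions catch violations at runtime, while a proof assistant rules them out before the code ever runs.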
Bringing Math to the Masses
The world is far, far away from running in a formally verified way. Not only do 99% of people on earth not understand basic programming, 99% of programmers have no idea what formal verification is.
That's fine though, as long as somebody understands formal verification. Again, it is the process of mathematically proving that an algorithm behaves according to its specification.
In an ideal world, a programmer would write her software using whatever language she felt most comfortable in, and the code would undergo a thorough review during the compilation process revealing all bugs.
In the real world, this does not happen, and will not for quite some time.
This leaves programmers with two types of systems they can use:
One that will let you compile bad code and potentially introduce bugs into the system
And one that will NOT let you compile bad code. (Functional programming and formal verification)
The distributed ledger space has largely divided itself into two camps regarding this issue.
One camp limits what kind of code can be executed on the ledger network to a very narrow range of possible commands. Bitcoin is one such system, as it only contains a fixed set of operation codes ("opcodes") that limit the types of operations that can occur. In this scenario, formal verification is less necessary because the network does not allow arbitrary computation to run in the first place.
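A toy model makes the idea concrete. The sketch below is not real Bitcoin Script, just a hypothetical stack machine in the same spirit: only a whitelisted set of opcodes exists, so arbitrary computation simply cannot be expressed.

```python
# A toy stack machine in the spirit of Bitcoin Script: only a fixed
# whitelist of opcodes exists, so no arbitrary computation can run.
OPCODES = {
    "OP_ADD": lambda s: s.append(s.pop() + s.pop()),
    "OP_DUP": lambda s: s.append(s[-1]),
    "OP_EQUAL": lambda s: s.append(s.pop() == s.pop()),
}

def run(script):
    stack = []
    for token in script:
        if isinstance(token, int):
            stack.append(token)           # push a literal onto the stack
        elif token in OPCODES:
            OPCODES[token](stack)         # run a whitelisted opcode
        else:
            raise ValueError(f"unknown opcode: {token}")  # everything else rejected
    return stack

# Does 2 + 3 equal 5?
print(run([2, 3, "OP_ADD", 5, "OP_EQUAL"]))  # [True]
```

Because the interpreter refuses anything outside its whitelist, the space of possible programs is small enough to reason about exhaustively, which is exactly the security property this camp is after.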
The other camp does not limit what types of operations can be executed on the network and allows any conceivable code to run. This way of running a distributed ledger network is fine until it is not fine, as a software bug running on a global network of shared ledgers can result in catastrophic failure.
The ideal solution is likely to combine the security and integrity of the limited base protocol layer, with the unlimited creativity afforded by a separate "smart contracting layer" that can execute any arbitrary command in an environment that cannot leak out of the container and into the system at large.
The ideal solution will also hopefully check “contempt culture” at the door. Tabs or spaces, language X or language Y is not the issue. Creating good math is.
When you run software on your local machine, the process is fairly straightforward, from the operating system down to the bare metal of your computer's processor, RAM, etc.
Running a computer program on a distributed ledger network is not so straightforward as there is no clear relationship between the software you want to run, and the hardware it will run on.
The way this is handled on distributed ledger networks is by running applications (smart contracts) inside of self-contained virtual machines that execute the code directly on nodes within the ledger network.
If you have ever "remoted in" to a copy of Windows, you have used a virtual machine. The process was probably clunky and error prone, as the operating system was not optimized to run so far removed from the base hardware, and performance degraded accordingly.
The same thing happens to applications (smart contracts) running on distributed ledger networks. Code deployed to a distributed ledger can't run natively on an individual piece of hardware, so the code is instead sent to a virtual machine to be executed and run by the network as a whole.
Most web applications today run on JavaScript engines inside the browser, which were never optimized to run large scale applications. This is why you rarely see computationally demanding tasks like video games running inside of web browsers.
Fortunately, there is an emerging coalescence among existing centralized services such as Google and Amazon around "serverless" computation using tools like WebAssembly. The development path forward is most likely to borrow technology from across the compute, video game, and VR/AR industries, which are all searching for ways to run complex computations natively on the internet.
The distributed ledger networks of the future will need to address countless issues before complex real world applications can be deployed securely on top of distributed protocols.
The following is a laundry list of tasks developers will have to grapple with in the coming decades. While in no way exhaustive, we have attempted to round up an overview of key technical development areas that will need to be addressed before trust can be shifted to distributed ledgers beyond the simple sending and receiving of transactions (which bitcoin and 1000s of other projects already do quite well).
Each bullet point deserves its own chapter, but in an attempt to keep the conversation as high level as possible, we will link to the relevant sources and summarize, rather than explain, each concept in great detail.
Verifying Smart Contracts: To make better code, we need better tools. Rather than expecting developers to create formal methods from scratch each time they develop a new project, pre-existing libraries and dependencies can be called that have gone through rigorous formal verification. Projects such as the K Framework offer a glimpse of what these formally verified libraries can offer developers without asking them to reconfigure their code into an actual functional programming language like Haskell or OCaml.
Limiting Precompiles: One of the more powerful features of technologies like WebAssembly is the ability to compile in real time inside of the framework, rather than trusting precompiled code. Critical pieces of distributed ledger infrastructure, such as the elliptic curve cryptography used on platforms like Ethereum, are currently precompiled. While very few people on earth are capable of sneaking vulnerabilities into the precompiles, without technologies like WebAssembly we cannot be guaranteed there are none. E.g., right now someone's GitHub repository is responsible for the elliptic curve cryptography running most crypto projects. Scary.
Adopting an Agent-based Model to avoid concurrency issues: Concurrency is the ability of software to run multiple processes simultaneously. This is a wonderful advancement in computer science that allows massively more computation to be performed by running many processes in parallel, rather than waiting for one process to finish before the next can begin. However, in the realm of smart contracts, the opposite is often crucial to prevent theft. The original Ethereum DAO hack was possible due to a reentrancy flaw in the DAO smart contract: the attacker re-entered the withdrawal function over and over again before the contract updated its balance, draining tens of millions of dollars' worth of Ether at the time (billions at today's prices). Solutions such as incrementing logical clocks can help by guaranteeing a unified ordering of events among nodes in the network.
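The DAO exploit pattern is easy to sketch. The hypothetical Vault and Attacker classes below are illustrative Python, not real Solidity, but they show how paying out before updating state lets an attacker drain the same balance repeatedly, and how reordering two lines closes the hole.

```python
# A minimal sketch of the reentrancy pattern behind the DAO hack.
# Vault and Attacker are hypothetical illustrations, not real contracts.
class Vault:
    def __init__(self, balances):
        self.balances = balances

    def withdraw_unsafe(self, account):
        amount = self.balances[account]
        if amount > 0:
            account.receive(self, amount)   # external call BEFORE state update
            self.balances[account] = 0      # too late: attacker already re-entered

    def withdraw_safe(self, account):
        amount = self.balances[account]
        self.balances[account] = 0          # state update BEFORE external call
        if amount > 0:
            account.receive(self, amount)

class Attacker:
    def __init__(self):
        self.stolen = 0
        self.reentries = 0

    def receive(self, vault, amount):
        self.stolen += amount
        if self.reentries < 2:              # re-enter while balance is still nonzero
            self.reentries += 1
            vault.withdraw_unsafe(self)

mallory = Attacker()
vault = Vault({mallory: 100})
vault.withdraw_unsafe(mallory)
print(mallory.stolen)  # 300: the same 100 was paid out three times
```

Swapping the order of the payout and the balance update (as in `withdraw_safe`) is the classic fix, known in the Ethereum world as the checks-effects-interactions pattern.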
Secure execution: Several related areas of computer science, including secure multi-party computation, zero knowledge proofs, and trusted execution environments, will form an essential backbone of the distributed ledger infrastructure.
Secure multi-party computation: an area of computer science concerned with deploying code onto a distributed network of computers for computation, where the computers executing the code do not learn the contents of the data they are processing. For systems like secure voting to work, you want the execution to be a provably trusted black box that is mathematically guaranteed to compute the correct result, while the actual inputs remain hidden from the computers processing them.
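One building block of such systems is additive secret sharing: a secret is split into random shares so that no single machine learns anything, yet the shares can still be added together while split. A minimal sketch follows (toy parameters, no protections against malicious parties):

```python
import random

# Additive secret sharing: three parties each hold one random share;
# no single share reveals the secret, but the sum of all shares
# (mod a public modulus) reconstructs it.
MOD = 2**31 - 1  # a public modulus all parties agree on

def share(secret, n_parties=3):
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MOD)  # last share makes the sum work out
    return shares

def reconstruct(shares):
    return sum(shares) % MOD

# Two secrets can be ADDED while still split: each party adds its own
# shares locally, never seeing either secret in the clear.
a_shares, b_shares = share(42), share(58)
sum_shares = [(x + y) % MOD for x, y in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # 100
```

Real MPC protocols extend this idea to multiplication and arbitrary circuits, which is where the genuine difficulty (and the active research) lies.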
Zero Knowledge Proofs: the development of zero knowledge proofs is intertwined with secure multi-party computation, in that both want the contents of a computation hidden, with a proof receipt that the computation was performed properly. However, zero knowledge differs in that the agent running the computation knows the contents and is simply shielding them from outsiders, which makes zero knowledge proofs easier to execute than truly secure multi-party computation. Shielding the sender and amount of a simple transfer transaction is available today on platforms like Zcash and Monero, but shielding any arbitrary computation is a much more difficult problem to solve.
Trusted Execution Environments: secure areas of a computer processor that do not let any other part of the processor interact with computations running inside the secure area. This is easier said than done, as ultimately the hardware manufacturer must be trusted to have built these secure environments correctly.
This is Not Easy
If this entire chapter felt like drowning in the deep end of the pool, that's okay. The goal is not to understand the abstruse nuances of every computer science problem. Simply knowing that these problems exist (and that smart people are working to solve them) puts you far ahead of everyone who has not taken the time to familiarize themselves with the fundamental issues involved in moving every mission critical IT system on earth to a distributed ledger.
Part I is intended to serve as a menu of raw ingredients that can be combined, dismantled, and recombined at will. Which consensus mechanisms, programming languages, virtual machines, etc. will ultimately constitute the "good architecture" of this new ecosystem is anyone's guess.
However, knowing that something will emerge from this primordial soup of computer science innovation is important. Moreover, knowing that "fit" designs (which actually solve real problems) will hopefully beat out vaporware in the long term gives some solace to the investor.
In Part II we will attempt to hang a series of heuristics (ways of thinking), ranging from evolutionary biology to game theory to the madness of crowds, on the technical framework presented in Part I.
To bridge the technical with the socio-cultural, our next chapter will be devoted to the logic being developed to help distributed ledger systems interact with the real world, including identity management and binding arbitration use cases.