In Part 1, we learned that the 3rd Generation of cryptocurrency is about solving the problems of Scalability, Interoperability, and Sustainability. In this second excerpt from the whiteboard video, Charles explains the Cardano solution to Scalability:
Scalability has a lot of meanings, but from a cryptocurrency perspective, you can think of it in three ways:
1) Transactions Per Second (TPS)
You’ll often hear people say, “Well, Bitcoin has 7 transactions per second” or “Ethereum has 10 or 20 transactions per second”. This is simply the notion of how many transactions can be processed on the blockchain within some finite period of time. [Editor’s note: Cardano has achieved ~250 TPS, with plans to grow that number much higher!]
Introducing Ouroboros
To address TPS, we developed a peer-reviewed whitepaper for our provably secure proof-of-stake protocol called Ouroboros. Ouroboros is one of the most efficient consensus protocols in the cryptocurrency space, and it’s the first to be proven secure in a very rigorous cryptographic way. The magic of Ouroboros is that it’s been designed in a modular way and with future-proofing in its DNA.
This is how Ouroboros works:
- First, it breaks the world into epochs. [Currently, a Cardano epoch is 5 days]
- Within an epoch, it takes a look at the distribution of tokens, and from a source of random numbers, holds an election to create “Slot Leaders”.
- Slot Leaders functionally do the same thing that a miner does in Bitcoin when they “win a block.” The difference is that becoming a slot leader doesn’t require the extensive computational resources that Bitcoin requires. As a consequence, this system is considerably cheaper to run, even though we have similar security guarantees. It’s a major advancement!
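The steps above can be illustrated with a toy stake-weighted lottery. This is a simplified sketch, not the actual Ouroboros algorithm: the function name `elect_slot_leaders`, the fixed slot count, and the example stake distribution are all invented for illustration. The key property it demonstrates is that every node, given the same shared randomness, derives the same schedule of slot leaders, with election probability proportional to stake.

```python
import random

def elect_slot_leaders(stake, slots, seed):
    """Toy stake-weighted election: for each slot in the epoch,
    pick a leader with probability proportional to the stake they hold."""
    rng = random.Random(seed)  # shared randomness every node agrees on
    holders = list(stake)
    weights = [stake[h] for h in holders]
    return [rng.choices(holders, weights=weights)[0] for _ in range(slots)]

# Token distribution at the start of the epoch (illustrative numbers)
stake = {"alice": 60, "bob": 30, "carol": 10}

# Every honest node, given the same seed, computes the same schedule
schedule = elect_slot_leaders(stake, slots=10, seed=2017)
```

Because the seed is shared, no node needs to trust another's claim to leadership: each can recompute the schedule locally and verify it.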
Here are some of the advantages of the Ouroboros protocol:
–> Slot leaders don’t have to just maintain a single block and a single chain. They can maintain other blocks and other chains because the cost of constructing a block is so low. It’s actually now tractable to talk about consensus over a range of blockchains instead of a single chain.
–> Furthermore, epochs could perhaps run in parallel. Instead of having one epoch run and then another, we’re going to develop a system using Ouroboros where epochs run in parallel and transactions are partitioned accordingly. What this means is that as you gain more users, and your users gain more capabilities, these slot leaders will be able to maintain more types of blockchains and also run transaction processing for blockchains in parallel. This is a major advancement!
–> Ouroboros has very rigorous security standards in terms of its theoretical foundations as well as its implementation. As we develop new capabilities for the protocol, these capabilities will also be secure. This is in contrast to other systems, where one has to prove these things on a case-by-case basis, and in some cases make major modifications to the system to grow securely.
–> We intend for Ouroboros to become quantum resistant sometime in 2018. When slot leaders sign their blocks, they’ll be using a quantum-resistant signature scheme. With this, we get even more future-proofing into the system. [Editor’s note: Quantum computers are the powerful computers of the future, which we imagine may be able to break cryptographic keys. As of today, this threat is hypothetical, but planning for it now is an important bit of foresight!]
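One ingredient of the parallel-epoch idea above, partitioning transactions across chains, can be sketched in a few lines. This is purely an illustration of deterministic routing, not Ouroboros's actual (still-in-development) partitioning scheme; the function `assign_to_chain` is a hypothetical name:

```python
import hashlib

def assign_to_chain(tx_id: str, n_chains: int) -> int:
    """Deterministically route a transaction to one of n parallel chains,
    so every node agrees on the partitioning without any coordination."""
    digest = hashlib.sha256(tx_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % n_chains

# Every node computes the same assignment for the same transaction
chain_index = assign_to_chain("tx-0001", n_chains=4)
```

Hashing rather than round-robin assignment matters here: it lets any node, seeing only the transaction itself, know which chain is responsible for it.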
These features speak to these scalability questions:
“How do we construct a way of maintaining the network that doesn’t cost $300,000/hour, which is what Bitcoin currently costs?” [Editor’s note: Charles Hoskinson gave this talk in 2017. Bitcoin’s energy cost has more than quadrupled since then. The energy requirements to run Bitcoin exceed that of the entire country of Argentina. https://www.nytimes.com/interactive/2021/09/03/climate/bitcoin-carbon-footprint-electricity.html]
“How do we build a system that allows us to go parallel and maintain multiple chains concurrently?”
Answering these questions is at the heart of Ouroboros.
2) Bandwidth
Transactions Per Second is important, but it’s not the only thing we have to be concerned with. Transactions carry data, and as you get more transactions you require more network resources. This is the notion of bandwidth. For a system to scale - if it’s going to grow to millions and billions of users - that system could require hundreds of gigabytes per second of bandwidth to support all the data flowing through it. This kind of volume is familiar in the enterprise world, but not in the peer-to-peer world.
Introducing RINA
As our network grows from a few hundred transactions per second to hundreds of thousands of transactions per second, we cannot maintain a homogeneous network topology. In other words, we cannot have a situation where every node has to relay every message. As we grow, there will be nodes that don’t have that capability. So we’re looking at a new type of technology called RINA. This stands for Recursive Internetwork Architecture. RINA is a new way of structuring networks using clever engineering principles - mostly conceived by John Day of Boston University. The goal of RINA is to build a heterogeneous network that gives us privacy, transparency, and scalability. RINA is a major step forward that will give us a way to tune and configure Cardano as it grows.
3) Data scale
Blockchains store things - hopefully forever! Every time you put a transaction in, it ends up in the log. So, as you have more transactions, you need more and more data. As a consequence, blockchains will grow from megabytes to gigabytes to terabytes to petabytes…. potentially even exabytes. This is okay in the world of big business [with centralized data centers], but when we talk about a replicated system whose security model relies upon each node having a copy of the blockchain? That data volume is simply not feasible for consumer hardware devices [ie typical home computers].
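A back-of-envelope calculation makes the growth concrete. The figures below are assumptions chosen for illustration (a steady 250 TPS at roughly 500 bytes per transaction), not measured Cardano numbers:

```python
def yearly_growth_bytes(tps, tx_bytes):
    """Bytes appended to the ledger in one year at a steady transaction rate."""
    seconds_per_year = 365 * 24 * 60 * 60  # 31,536,000
    return tps * tx_bytes * seconds_per_year

# Assumed figures: 250 transactions/second at ~500 bytes each
growth = yearly_growth_bytes(250, 500)
print(round(growth / 1e12, 2))  # prints 3.94, i.e. ~3.94 terabytes per year
```

Even at these modest rates the ledger gains terabytes per year, which is why a design that requires every consumer device to hold the full history cannot scale.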
Cardano is trying to solve these problems in a very elegant way. In Cardano, as we add people to the network, we naturally get more transactions per second. We also naturally get more network resources. Eventually, we’ll get more overall data storage. All without compromising our security model!
Introducing Pruning, Partitioning, and Side Chains
To address the Data Scaling problem, what we must realize is that not everybody requires all the data. The transactions that Alice sends to Bob are not necessarily relevant to Jane and Bill. They’re only relevant from the context that these people can know that the tokens they receive are legitimate and correct. Some techniques to address this include:
1) Pruning: Restrict what some people can see, on an intelligent case-by-case basis
2) Partitioning: a user might not have a full copy of the blockchain, but instead has just a chunk.
3) Side chains: create a compressed representation of a blockchain [on a secondary chain], and translate transactions between chains
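The partitioning and pruning ideas above can be sketched with a toy ledger. Everything here is invented for illustration: the block format, the `local_view` function, and the use of a bare hash as the "compressed" stand-in for a pruned block are simplifications, not Cardano's design. The point is that a node keeps full copies only of blocks relevant to it, while retaining enough (a digest per block) to check integrity:

```python
import hashlib

def block_digest(block):
    """Hash of a block's contents, kept in place of pruned blocks."""
    return hashlib.sha256(repr(sorted(block.items())).encode()).hexdigest()

def local_view(address, chain):
    """Keep full copies of blocks involving `address`; for all other
    blocks retain only a digest, so the chain can still be checked."""
    view = []
    for block in chain:
        if address in (block["sender"], block["receiver"]):
            view.append(block)                            # relevant: keep in full
        else:
            view.append({"digest": block_digest(block)})  # pruned stub
    return view

chain = [
    {"sender": "alice", "receiver": "bob", "amount": 5},
    {"sender": "jane", "receiver": "bill", "amount": 7},
    {"sender": "bob", "receiver": "alice", "amount": 2},
]

bobs_view = local_view("bob", chain)  # full blocks 0 and 2, a stub for block 1
```

Bob never stores Jane and Bill's transaction, yet the digest he keeps in its place means he would detect any attempt to rewrite that part of the history.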
Academic rigor - A Cardano differentiator
One of the most important things when developing new cryptography is to make sure it is developed in a very rigorous, peer-reviewed way. Ouroboros was accepted at “Crypto 2017” [the 37th International Cryptology Conference], where our team presented it. Future versions of the protocol continue to go through rigorous peer review, giving us high assurance that the conceptual design of the system is correct. We are also building a formal specification of Ouroboros using the psi calculus - a formal modeling language that’s machine-understandable. Eventually, we’re going to be able to connect it to the Haskell code in our GitHub repo and actually show that we’ve correctly implemented the protocol. This is a standard that does not exist in the blockchain space, and we’re very excited to be the first to do it.
The goal of the Cardano project is to study all of these questions in a rigorous way and come up with new blockchain architectures. The solutions must allow people to hold much smaller amounts of data while still getting the same level of assurance that transactions are correct. One fortunate thing is that while TPS and bandwidth needs grow quickly, data storage is still relatively cheap and available, so we believe the data scaling side of Cardano is something we don’t have as much urgency to resolve. Research on these questions is taking place at the University of Edinburgh; we believe we will have a total solution to this problem by the end of 2019.