Dan and Radix’s Tech Journey

Radix’s journey started back in 2013 when our founder, Dan, saw both the promise and the challenges of Bitcoin. For Bitcoin, or any cryptocurrency, to become a new global cash system, Dan knew it needed to be able to scale to meet global demand. To see if this was possible, he started running tests to find the limits of Bitcoin’s scalability.

Beginning with Bitcoin

Down in his coding cave (in reality quite a nice, if messy, study at the back of Dan’s house) Dan started spinning up nodes and spamming his test network to really see what Bitcoin and blockchain could do. He tried everything from increasing the block size to ridiculous numbers and running the best hardware available, to making mining as cheap as possible. In the end, though, he could still only achieve 700–1,000 transactions per second (TPS) with blockchain. Knowing that Visa processed up to 24,000 TPS, and that Alipay handled over 725,000 transactions per second on its biggest shopping day, Dan knew these speeds would not be enough to achieve the goal of a global payments rail.
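To see why blocks impose such a hard ceiling, it helps to do the arithmetic. Here is a minimal back-of-envelope sketch, assuming Bitcoin’s roughly ten-minute block interval and an illustrative average transaction size of 250 bytes (Dan’s tests varied far more parameters than this):

```python
# Back-of-envelope throughput for a block-based ledger.
# Assumed figures: ~250 bytes is a rough average Bitcoin
# transaction size; the block sizes below are illustrative.

def blockchain_tps(block_size_bytes: int, avg_tx_bytes: int = 250,
                   block_interval_s: int = 600) -> float:
    """Theoretical max TPS = transactions per block / seconds per block."""
    txs_per_block = block_size_bytes // avg_tx_bytes
    return txs_per_block / block_interval_s

print(f"1 MB blocks:   {blockchain_tps(1_000_000):8.1f} TPS")    # ~6.7 TPS
print(f"100 MB blocks: {blockchain_tps(100_000_000):8.1f} TPS")  # ~666.7 TPS
```

Even a hundred-fold increase in block size only buys a few hundred TPS, which lines up with the ceiling Dan kept hitting.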

Building upwards with blocktrees

Dan’s next thought was: if a single chain of blocks could only hit 1,000 TPS, could a branching network of blockchains do any better? Catchily named blocktrees, this next area of investigation was Dan’s first step into investigating and understanding sharding. The theory was that different branches of the blocktree could have different states of synchronisation, with related transactions living in one branch and unrelated transactions in another.
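To make that grouping idea concrete, here is a hypothetical sketch (our illustration, not the actual blocktree design): a deterministic hash maps each account to a branch, so transactions touching the same account always land together, whilst unrelated ones end up elsewhere and can proceed independently.

```python
import hashlib

NUM_BRANCHES = 16  # illustrative branch count

def branch_for(account: str) -> int:
    """Deterministically map an account to a branch of the tree,
    so every transaction touching that account lands together."""
    digest = hashlib.sha256(account.encode()).digest()
    return digest[0] % NUM_BRANCHES

# Related transactions (same account) share a branch; unrelated
# transactions usually land in a different branch and can be
# processed without coordinating with this one.
print(branch_for("alice"), branch_for("alice"))  # always the same branch
print(branch_for("bob"))                         # very likely different
```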

The importance of grouping related transactions and separating unrelated ones turned out to be a key insight for building an efficient, scalable ledger. You will see it reappear throughout this post, as it has been retained through the different stages and iterations of ledgers that Dan and the Radix team have explored.

For now, though, back to blocktrees. It was whilst exploring, building and testing blocktrees that Dan also started using the name eMunie for the project, which had begun to attract some loyal community members. These community members enabled Dan to scale up his testing and run beta tests in more ‘real world’ scenarios. Unfortunately, what these tests found was that blocktrees could only achieve a few hundred TPS before they started encountering problems.

What Dan found was that when large branches of the tree started to disagree on the correct state of a transaction, realigning them placed a heavy load on the network. The messages nodes exchanged whilst trying to align on the state of a transaction grew increasingly complex: if a single transaction in a branch needed to be realigned, then so did every transaction in that branch and in all of its sub-branches. Sticking with the approach of having blocks of transactions and mining, unfortunately, did not lend itself to resolving network synchronisation issues efficiently.
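A toy model shows how quickly that cascade adds up, under the simplifying assumption (ours) that a single dispute anywhere in a branch forces the whole subtree to re-synchronise:

```python
from dataclasses import dataclass, field

@dataclass
class Branch:
    transactions: list
    children: list = field(default_factory=list)

def txs_to_realign(branch: Branch) -> int:
    """If any transaction in `branch` is disputed, the whole branch
    and every sub-branch below it must be re-synchronised."""
    return len(branch.transactions) + sum(txs_to_realign(c) for c in branch.children)

# One disputed transaction near the root drags everything under it along.
leaves = [Branch(transactions=list(range(100))) for _ in range(4)]
root = Branch(transactions=list(range(100)), children=leaves)
print(txs_to_realign(root))  # 500 transactions re-synced for one dispute
```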

Doing it with DAGs

You or I may have been deflated by this news, decided to pack it all in and popped down to the pub instead. But not Dan; he got a strong cup of tea, metaphorically rolled up his sleeves (very difficult to do literally when you only wear T-shirts) and decided that he needed to think differently. What if, instead of transactions being grouped and synchronised in blocks, they were dealt with individually? With this in mind, Dan started exploring Directed Acyclic Graphs (DAGs).

DAGs had branching properties similar to blocktrees, but in contrast to blocktrees and blockchains, which see a block created and added to the ‘chain’ every set number of seconds or minutes, DAGs allow each transaction to link directly on to the next. This approach had two main advantages: DAGs can process transactions immediately, without waiting for block times, and they allow traditional mining to be removed entirely, both of which bring great efficiencies.
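In code, the general shape of a transaction DAG is simple. Here is a minimal sketch of the general technique (not Dan’s implementation), where each new transaction is content-addressed and attached straight to the earlier transactions it builds on:

```python
import hashlib
import json

def tx_id(tx: dict) -> str:
    """Content-address a transaction by hashing its fields."""
    return hashlib.sha256(json.dumps(tx, sort_keys=True).encode()).hexdigest()[:12]

dag: dict[str, dict] = {}

def submit(payload: str, parents: list) -> str:
    """Attach a new transaction directly to earlier ones: no block,
    no block time, no miner in between."""
    tx = {"payload": payload, "parents": parents}
    txid = tx_id(tx)
    dag[txid] = tx
    return txid

genesis = submit("genesis", [])
a = submit("alice pays bob", [genesis])
b = submit("carol pays dave", [genesis])  # accepted immediately, in parallel
c = submit("bob pays erin", [a, b])       # one tx can reference several parents
```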

So, was this it? Had Dan found the Holy Grail of a globally scalable decentralised ledger? The short answer is no. The long answer is no, but with more explanation. Whilst a DAG could successfully achieve up to 1,500 TPS without issues, once it scaled beyond this, towards Visa-like levels, it ran into security problems. To achieve further scalability you need to shard the DAG, but a DAG can only protect against double spends if all nodes can see all transactions, as far as we know. Sharding prevents exactly that, meaning that scalable DAGs were fundamentally vulnerable to double-spend attacks. Other projects have tackled this issue by creating centralised “witness” or “coordinator” nodes which see all transactions, but by relying on these nodes the ledger becomes fundamentally centralised, adding trust assumptions and attack points to the system.
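The visibility problem is easy to demonstrate. In this toy UTXO-style sketch (illustrative assumptions throughout), a node can only flag two transactions as conflicting if both are in the set it has actually seen, which is exactly what sharding takes away:

```python
# Toy double-spend check: a node can only flag a conflict between two
# transactions if *both* are in the set of transactions it has seen.

def spends_conflict(tx_a: dict, tx_b: dict) -> bool:
    """Two transactions conflict if they spend any of the same outputs."""
    return bool(set(tx_a["inputs"]) & set(tx_b["inputs"]))

double_spend_1 = {"inputs": ["utxo-42"], "to": "merchant"}
double_spend_2 = {"inputs": ["utxo-42"], "to": "attacker"}

full_node_view = [double_spend_1, double_spend_2]  # unsharded: sees both
shard_a_view = [double_spend_1]                    # sharded: sees only one

def detects(view: list) -> bool:
    return any(spends_conflict(x, y)
               for i, x in enumerate(view) for y in view[i + 1:])

print(detects(full_node_view))  # True  -> conflict caught
print(detects(shard_a_view))    # False -> the double spend slips through
```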

Channelling on with CAST

By the time Dan had finished that strong tea (it was, after all, almost a year since he first started investigating DAGs, and two years since he explored blocktrees), he had kept up his positivity, as he knew that the learnings from DAGs and blocktrees could apply to other solutions. Such is the way with research; what is that famous Thomas Edison saying about not failing, just finding 10,000 ways that won’t work?

With that Instagram-worthy quote in the back of his mind (#determined), Dan moved on to the next iteration of a scalable ledger: CAST, the Channelled Asynchronous State Tree (we know the acronym is a bit forced, but we decided not to hold it against him). CAST tried to tackle the message complexity seen in blocktrees whilst still protecting against double spends in a sharded ledger; it did this by separating state from data. The state lived in a branching tree, where conflicts and double spends were managed, whereas the data lived in a DAG-like structure. This split led to a partially synchronous network, which was far more efficient, reaching speeds of almost 2,300 TPS before network latency and the old enemy of message complexity reared their ugly heads again.
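Here is a very loose sketch of that separation (the mechanics below are our illustration, not a reconstruction of Dan’s design): conflict checks touch only a compact state store, whilst the bulky transaction records hang off a DAG-like structure.

```python
# Illustrative split of state from data: double spends are resolved
# against a small state store, while full records live elsewhere.

state_tree: dict = {}  # key -> latest state; conflicts resolved here
data_dag: dict = {}    # tx id -> full transaction record, DAG-style

def apply_transfer(txid: str, record: dict) -> bool:
    key, expected, new = record["key"], record["expected"], record["new"]
    # The conflict check only touches the compact state store...
    if state_tree.get(key) != expected:
        return False  # stale view or double spend: reject
    state_tree[key] = new
    # ...while the heavyweight data is appended separately.
    data_dag[txid] = record
    return True

print(apply_transfer("t1", {"key": "alice", "expected": None, "new": 90}))  # True
print(apply_transfer("t2", {"key": "alice", "expected": None, "new": 50}))  # False
```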

It was whilst testing CAST that Dan really saw the importance of real-world beta tests. The loyal community members attracted back in the blocktree era had stayed with the project, providing funding and support, both moral and technical, throughout the different iterations of the technology. This loyal cohort also allowed Dan to run beta tests: spinning up nodes, sending transactions and generally trying to create as many real-world scenarios as possible. One of these community members, Greg, soon became an infamous figure of doom. Whenever a load test was run on the CAST beta network, everything would run smoothly until Greg joined; Greg’s slow network connection would inevitably lead to latency and synchronisation issues across the network. This led to two outcomes: Greg becoming a meme synonymous with failure within the community (sorry, Greg), and the realisation that CAST couldn’t stand up to real-world conditions at scale. A global network for everyone would inevitably have to deal with slow connections, and if Greg’s one slow connection could cause it to fail, then CAST was not the solution we were looking for.

Table-flip with Tempo

At this point, Dan metaphorically flipped over the table. At least, we assume and hope it was metaphorical; otherwise it would have created quite a mess in the small coding cave. He decided that instead of iterating and evolving each solution, this problem required a complete rethink. Following lots of brainstorming, blue-sky thinking and other overused clichés, Dan came up with Tempo.

Inspired by the theory of relativity, Leslie Lamport’s logical clocks, and the importance of sharding learned from CAST and DAGs (see, he didn’t throw everything away), Tempo took a new approach. Its pre-sharded data structure enabled the grouping of related transactions and the separation of unrelated ones (told you it would be important), and combining this with a passive consensus mechanism led to incredible efficiencies.
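Lamport’s logical clock rule is worth seeing on its own, since it is the classic way to order events without a shared wall clock. This is a textbook sketch of Lamport’s algorithm, not Tempo itself:

```python
# Lamport's logical clock: each node keeps a counter, increments it on
# every local event, stamps outgoing messages, and fast-forwards past
# any larger stamp it receives. The result is a consistent "happened
# before" ordering with no synchronised clocks anywhere.

class LamportClock:
    def __init__(self):
        self.time = 0

    def local_event(self) -> int:
        self.time += 1
        return self.time

    def send(self) -> int:
        return self.local_event()  # stamp the outgoing message

    def receive(self, msg_time: int) -> int:
        self.time = max(self.time, msg_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
stamp = a.send()    # a.time == 1
b.receive(stamp)    # b.time == 2: b is now provably 'after' a's send
```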

This new approach was looking promising, but there was a lot to do between the theory, the initial successful tests and a full-blown global ledger solution. All this could not be done by one man, even with many cups of strong tea, so Dan started assembling a team. Whilst it would be nice to imagine that Dan formed the Radix team in dramatic fashion, in reality it was more a process of networking, posting on StackOverflow, interviewing, and having the lucky new team members come and join Dan in Stoke-on-Trent for several months of onboarding.

It is also probably worth mentioning that around this time the project changed its name from eMunie, which I think we can agree was a very silly name, to Radix. With strong Latin roots and nice mathematical implications, the name Radix gave the project sufficient gravitas, even if it turned out to have horrible SEO implications, as the world and his wife apparently like to use the word Radix to name things. That, though, is another topic entirely.

With this dream team assembled, the development and testing of Tempo kicked into a new gear. The team not only worked on Tempo itself but also built the network infrastructure, test wallets and, most excitingly, the Radix Engine. The Radix Engine is the application layer of Radix: the part that developers directly interact with. The Radix Engine has been covered at length in several other blog posts, so we won’t go into detail on it here.

All of this frantic theorising, development and testing eventually led to the 1m TPS test, in which the entire history of Bitcoin’s transactions was run through the Radix ledger, Tempo, at speeds of over 1 million transactions per second. This was an incredible achievement for Dan and the team, but it also marked the first warning signs of security issues in Tempo.

Whilst Tempo enabled incredible scalability, it also became apparent that it was vulnerable to two attack vectors. The first, the Weak Atom Problem, meant that a small number of nodes could engineer a situation where consensus was weak enough for them to influence historical transactions. Whilst this only applied in specific circumstances and required a carefully coordinated and planned attack, it was too high a risk not to address before launching the network. The second attack vector was a Sybil attack. Tempo used a novel Sybil protection mechanism, called Mass, which increased a node’s reputation for good behaviour over time. The way Mass worked meant that accumulated Mass was much more valuable to dishonest actors than to honest ones, which opened up a possible attack in which a malicious actor buys node IDs (and their reputation) from honest actors.
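A toy model makes the incentive asymmetry clear (the numbers and the linear formula below are purely illustrative, not Tempo’s actual Mass mechanics): an attacker who buys a modest number of aged IDs acquires a disproportionate slice of consensus weight without ever behaving honestly.

```python
# Toy model of the Mass incentive problem. Assumed, illustrative
# mechanics: reputation ("mass") grows linearly with honest uptime.

def mass(days_online: int) -> int:
    return days_online  # reputation accrues with good behaviour

honest_nodes = [mass(365) for _ in range(100)]  # 100 nodes, a year of service each
total_mass = sum(honest_nodes) * 10             # whole network: say 1,000 such nodes

# An attacker who buys just those 100 aged IDs from honest operators
# instantly controls their accumulated reputation.
bought_mass = sum(honest_nodes)
print(f"Influence bought: {bought_mass / total_mass:.0%}")  # 10% of consensus weight
```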

When these issues were discovered, Dan and the team hoped they were fixable in a way that kept Tempo as the underlying ledger for Radix. Everyone rolled up their sleeves (again metaphorically, as T-shirts are very popular in the Radix team, apart from Leroy, who insists on wearing shirts) and tried to find a way to fix the issues. Several solutions were thought up, some more viable than others, but after months of investigation none could be found that allowed stage-gated releases and testing. That level of uncertainty and unknowns could have meant years more research without ever releasing a public ledger.

After years of research and development, and having just achieved 1 million TPS, no one wanted to put Tempo to one side. But after several cries against the gods of tech development and some rather depressing team drinks (rum; Kraken for preference), we had to accept that when tackling hard problems you sometimes have to make hard decisions, and Tempo had to be put aside.

Charging forward with Cerberus

So, what next? Well, we picked ourselves up and looked at what made Tempo great and could be kept, what we could learn from other research projects, and what we’d have to start over on. The result of this exercise is Cerberus.

Cerberus uses Tempo’s pre-defined shard space concept, but also builds on a number of well-proven cryptographic primitives, giving strong guarantees around safety and liveness within well-defined security bounds. These combine to create a unique BFT-style agreement process that enables scalability alongside security. Importantly for the Weak Atom Problem, those security bounds can be rigorously proven, giving strong guarantees around safety and finality.
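To give a flavour of what well-defined security bounds mean in BFT-style agreement, here is a generic sketch of the classic quorum arithmetic from BFT research (the standard n = 3f + 1 result, not the Cerberus protocol itself):

```python
# Classic BFT quorum arithmetic: with n = 3f + 1 validators, agreement
# is final once 2f + 1 have voted for it, and this remains safe even
# if up to f validators are faulty or outright malicious.

def quorum_size(n_validators: int) -> int:
    f = (n_validators - 1) // 3  # faults the group can tolerate
    return 2 * f + 1

def is_final(votes_for: int, n_validators: int) -> bool:
    return votes_for >= quorum_size(n_validators)

print(quorum_size(100))    # 67: votes needed out of 100 validators
print(is_final(70, 100))   # True: past the 2f + 1 threshold
```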

We are incredibly excited about Cerberus’s potential, and about the incremental approach to delivery it allows, building on both the Radix learnings of the last seven years and the cutting edge of consensus research available today. There are still unknowns and problems that need to be solved. As part of our commitment to sharing more regular updates, we will be sharing not only the Cerberus theory and whitepaper with you, but also these open questions and problems.

When Dan decided to create a globally scalable ledger, he didn’t pick a small problem to solve. Whilst a small, teeny-tiny part of Dan wishes he’d picked an easier challenge to tackle, such as creating the ultimate flavour of ice cream or stopping people playing music out loud on their phones on public transport, the rest of him (the main part, really) and the entire Radix team are committed to solving this. We will not stop until we have created a truly global, decentralised, scalable place for the world to transact.

To stay up to date with the latest Radix news, gossip and any future goodies, sign up to our newsletter. It’s like a piñata, but less violent and in the form of an email.

By Sophie Donkin
