Part 1: A Primer on The Scalability Test and Radix

What is this test?

These tests replay the entire 10 years of Bitcoin’s transaction history on the Radix ledger, with full transaction and signature validation, on a network of over 1,000 Nodes distributed evenly throughout the world.

What are these tests demonstrating?

That technology for the transfer and ownership of value, without a central authority, can exist at the same scale as the protocol that the internet is based on.

How does this compare to what has come before?

With the advent of the internet came the advent of digital commerce. Since then, the world has needed ever-greater transactional throughput just to keep up with the needs of an increasingly connected global population.

What sort of use case requires this kind of throughput?

Few individual use cases require that level of throughput, but because the throughput of a public ledger is shared by every application built on top of it, the cumulative throughput capacity is key.

What dataset are you using to simulate this?

For the first runs, we are testing the throughput of the Radix network using a verifiable data source that we have a lot of love and respect for: the Bitcoin ledger’s transaction history.

Is this the maximum TPS Radix is capable of?

This is by no means the maximum throughput of our platform, but it stretches the network much further than we have ever tried before.

What does this blog cover?

This blog covers how we set up these tests, and how we got the Radix ledger to do full signature and UTXO validation of the entire Bitcoin transaction history in less than 30 minutes.

How big is the network?

The first run of these tests concentrates on speed rather than fault tolerance. As a result, the network consists of approximately 1,000 nodes with minimal overlap, each servicing approximately 1/1,000th of the total ledger.

What are the limitations?

Redundancy in this test is configured using “shard groups”. The network has a fixed shard space of 18.4 quintillion shards, and a node can serve as much or as little of that shard space as it likes (assuming it has enough resources). We spread the nodes across the shard space using shard groups: the fewer the groups, the larger the portion of the shard space each node covers. E.g. 1 shard group = 18.4 quintillion shards = 100% of the ledger; 2 shard groups = 50% of the ledger per group, and so on. The more nodes per group, the greater the redundancy: 100 nodes split into 2 shard groups, for example, would give 49-node redundancy per group (50 nodes each serving the same half of the shard space).
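
As a rough illustration, here is the shard-group arithmetic in a minimal Python sketch. The function name and the use of 2^64 as the shard-space size are assumptions for illustration; only the 18.4 quintillion figure and the 100-nodes/2-groups example come from the text above.

```python
# Sketch of the shard-group arithmetic described above (illustrative only;
# the names and the 2**64 constant are assumptions, not Radix code).

TOTAL_SHARDS = 2**64  # ~18.4 quintillion shards in the fixed shard space

def shard_group_layout(num_nodes: int, num_groups: int):
    """Return (shards per group, fraction of ledger per group, redundancy)."""
    shards_per_group = TOTAL_SHARDS // num_groups
    ledger_fraction = 1.0 / num_groups
    nodes_per_group = num_nodes // num_groups
    redundancy = nodes_per_group - 1  # nodes beyond the first serving the same shards
    return shards_per_group, ledger_fraction, redundancy

# 100 nodes split into 2 shard groups: each group covers 50% of the ledger
# with 50 nodes, i.e. 49-node redundancy per group.
shards, fraction, redundancy = shard_group_layout(num_nodes=100, num_groups=2)
print(f"{fraction:.0%} of ledger per group, {redundancy}-node redundancy")
```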

Do you detect bad blocks?

There are no blocks or mining on Radix. All Atoms (transactions/on-ledger operations) are submitted and checked individually, and each is determined to be valid or invalid on a per-transaction basis (UTXO double-spend check, signature validation, etc.).
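
To make the per-atom idea concrete, here is a toy sketch of block-free, per-transaction validation. The `Atom` and `Ledger` types and the `signature_valid` flag are hypothetical stand-ins, not the real Radix Atom model or signature code.

```python
# Toy sketch of per-atom validation (hypothetical types; not the actual
# Radix Atom model). Each atom is accepted or rejected on its own, with no
# notion of a block.

from dataclasses import dataclass, field

@dataclass
class Atom:
    inputs: list          # UTXOs this atom consumes, e.g. ("txid", index) pairs
    signature_valid: bool # stand-in for real cryptographic signature checking

@dataclass
class Ledger:
    spent: set = field(default_factory=set)  # all UTXOs consumed so far

    def validate(self, atom: Atom) -> bool:
        if not atom.signature_valid:
            return False                      # bad signature: reject
        if any(utxo in self.spent for utxo in atom.inputs):
            return False                      # input already spent: double spend
        self.spent.update(atom.inputs)        # commit: mark inputs as spent
        return True

ledger = Ledger()
a1 = Atom(inputs=[("tx0", 0)], signature_valid=True)
a2 = Atom(inputs=[("tx0", 0)], signature_valid=True)  # tries to respend tx0:0
assert ledger.validate(a1) is True
assert ledger.validate(a2) is False  # dropped as a double spend
```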

How do you stop a double spend?

Transactions are individually validated using a combination of the Radix consensus layer (Tempo) and the programmable system of constraints that we can add using the Atom Structure and the Constraint Machine. Together these strictly order related transactions (e.g. those from the same private key) and drop double spends.
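
The ordering principle can be sketched in a few lines: only operations that are related (here, spends from the same key) need a strict mutual order, while unrelated ones stay independent. This is an illustration of the idea only; it does not reflect how Tempo or the Constraint Machine are actually implemented.

```python
# Sketch of the ordering idea: only *related* operations (here, those spending
# from the same key) need a strict mutual order; unrelated ones are independent.
# An illustration of the principle, not Tempo or the Constraint Machine.

from collections import defaultdict

def order_and_filter(transactions):
    """Group txs by sender key; within each group keep the first spend of each
    output and drop later conflicting spends."""
    by_key = defaultdict(list)
    for tx in transactions:               # arrival order within a key is preserved
        by_key[tx["key"]].append(tx)

    accepted = []
    for key, txs in by_key.items():       # groups are independent of each other
        spent = set()
        for tx in txs:                    # strict order within the group
            if spent.isdisjoint(tx["inputs"]):
                spent.update(tx["inputs"])
                accepted.append(tx)       # later respends of the same inputs drop
    return accepted

txs = [
    {"key": "A", "inputs": {"utxo1"}},
    {"key": "A", "inputs": {"utxo1"}},   # conflicts with the first A tx: dropped
    {"key": "B", "inputs": {"utxo2"}},   # unrelated to A: unaffected
]
assert len(order_and_filter(txs)) == 2
```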

A note on the Bitcoin dataset

The Bitcoin fee model incentivizes grouping as many transactions as possible into the same block. The Radix fee model will disincentivize this (we don’t have blocks). In this regard, although we can achieve high transactions-per-second throughput on this data, the Bitcoin dataset is not optimized for the Radix data architecture.
