Why DAGs Don’t Scale Without Centralization
In contrast to blockchains, where a block is created and added to the ‘chain’ every set number of seconds or minutes, DAGs allow each transaction to link directly to previous transactions. This means there is no wait for the next block to confirm and for the global ledger to reflect the update.
This architecture gives DAGs two major improvements. First, DAGs can process transactions almost instantly, in a fraction of the time it takes blockchains. Second, DAGs such as IOTA and Nano remove traditional miners from the equation, making transactions both fast and free; the holy grail. They hint at an end to the scaling issues that plague blockchain projects and promise a vision of world-spanning networks underpinning systems such as the Internet of Things (IoT) and the financial payments industry.
However, just as blockchains struggle to scale owing to fundamental early-stage design choices that are proving difficult to overcome, so too will DAGs. They will hit an inflection point where further scaling is not possible without significant centralization.
To understand the potential issues with DAGs, it is important to understand how they operate. DAGs are directed graphs with no directed cycles. What this means in practice is that unlike blockchains, which operate on a vertical architecture (i.e. miners process a block, this block gets added to the chain, the miners mine another block, etc), DAGs utilize a horizontal architecture.
This allows, as stated above, for transactions to link to transactions. For example, DagCoin (the original DAG implementation) requires each pending transaction to approve one previous transaction, and IOTA requires it to approve two, before the new transaction can itself be confirmed. There is still Proof of Work (PoW); it simply comes in the form of processing other transactions in order to have your own transaction processed.
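The tip-approval mechanism described above can be sketched in a few lines of Python. This is a toy model under assumed simplifications (the class names, the difficulty value and the random tip selection are all illustrative, not taken from any real implementation): each new transaction approves up to two unconfirmed ‘tips’ and performs a small proof of work before joining the DAG.

```python
import hashlib
import random

DIFFICULTY = 2  # toy value: required number of leading zero hex digits

class Tx:
    """A transaction that approves earlier transactions and carries its own PoW."""
    def __init__(self, payload, parents):
        self.payload = payload
        self.parents = parents          # hashes of the approved transactions
        self.nonce = self._mine()
        self.hash = self._digest(self.nonce)

    def _digest(self, nonce):
        data = f"{self.payload}|{','.join(self.parents)}|{nonce}"
        return hashlib.sha256(data.encode()).hexdigest()

    def _mine(self):
        # The 'fee' is this small PoW, done to get the transaction accepted.
        nonce = 0
        while not self._digest(nonce).startswith("0" * DIFFICULTY):
            nonce += 1
        return nonce

class Dag:
    def __init__(self):
        genesis = Tx("genesis", [])
        self.txs = {genesis.hash: genesis}
        self.tips = {genesis.hash}      # transactions not yet approved by anyone

    def add(self, payload):
        # Approve up to two tips, as IOTA requires (DagCoin approves one).
        parents = random.sample(sorted(self.tips), min(2, len(self.tips)))
        tx = Tx(payload, parents)
        self.txs[tx.hash] = tx
        self.tips -= set(parents)       # approved transactions stop being tips
        self.tips.add(tx.hash)
        return tx

dag = Dag()
for i in range(5):
    dag.add(f"tx-{i}")
print(len(dag.txs))  # 6: genesis plus five new transactions
```

Note how there is no block and no dedicated miner: each participant pays for inclusion by doing the approval work for others, which is the property the article goes on to examine.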
As a result, DAGs claim to be more decentralized than blockchains, which have seen control consolidate in the hands of an increasingly concentrated group of miners. Instead of a relatively small number of miners being responsible for the overall security of the network, all active participants on a DAG are not just capable of but tasked with the responsibility of approving new transactions.
This also enables projects to scale to more transactions per second (tps) than blockchains in order to meet future needs. On a blockchain, more participants and more transactions have thus far meant a slower network; on a DAG, the more participants and transactions the better, as each new transaction theoretically helps resolve others more quickly. Whereas Bitcoin and Ethereum currently manage 7 and 15–20 tps respectively, the likes of IOTA and Nano claim ~1,000 and 7,000 tps are currently possible. It also means there are no transaction fees; your ‘fee’ is processing the other transactions.
So far, so good.
However, there is one overriding issue with how DAGs are structured which will hinder scaling to the levels needed to become the backbone of the IoT or a financial payment system.
No global state
Blockchains operate through network participants having an overview of the entire ledger at any one time. Through this, all participants (or nodes) are able to check a transaction against ledger history and can check against the threat of double spending. This lies at the core of blockchain technology, with all participants having open and equal access to all transactions.
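The point about a shared global state can be made concrete with a minimal sketch (the names here are hypothetical, not from any real client): a node that sees every transaction can reject a second spend of the same funds simply by checking new inputs against complete history.

```python
class Node:
    """A toy full node: sees all transactions, so it can catch any double spend."""
    def __init__(self):
        self.spent = set()   # inputs (coins) already consumed anywhere in history

    def validate(self, tx_inputs):
        # Reject the transaction if any input has already been spent.
        if any(i in self.spent for i in tx_inputs):
            return False
        self.spent.update(tx_inputs)
        return True

node = Node()
print(node.validate({"coin-1"}))   # True: first spend is accepted
print(node.validate({"coin-1"}))   # False: the double spend is caught
```

The check is trivial precisely because the node's view is complete; the article's argument is about what happens when that completeness is given up.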
DAGs, however, operate differently. Because transactions are confirmed one by one rather than in blocks, there is no single global state: the state of a DAG changes with every transaction.
This is not an issue if all nodes can see all transactions, because nodes will still be able to check against historical transactions to ensure there is no double spend. This is how the IOTA Tangle currently operates, with the Tangle stored in full on every node. Because the database would grow unchecked and hard drive requirements would become infeasible, it is pruned when necessary. This essentially takes the form of a snapshot, after which nodes can delete all transactions that came before it.
This is not an optimal solution, as one of the benefits of blockchain is the immutable, ever-present ledger it keeps. To allow DAGs to avoid this necessary deletion, the DAG can be split into different shards. This works along similar lines to sharding a blockchain: the DAG is split into many mini DAGs. Processing 1/100 of the DAG is less intensive than processing it in its entirety (nodes only have to check against a much smaller subset of transactions), so more transactions can be processed in a smaller time frame. While all shards still operate under the same protocol, each now sees only part of the ongoing transactions and their associated history.
This causes a number of issues.
One downside of sharding a DAG comes in preventing double spends. A DAG can only guard against double spends if nodes have access to all transactions. To take a simple example, suppose the DAG is split into ten shards. I present two transactions spending the same funds, one on the strongest tip of each of two of these ten shards. Unless some node has sight of both shards, each transaction will validate within its own shard, resulting in a double spend.
As the DAG scales, this issue becomes more prevalent. The more shards there are, the less chance of overlap between shards, and thus the greater the possibility of double spending. The simple solution would be for all nodes to contact each other about every transaction they see, but that costs the same as every node simply holding the entire DAG.
Furthermore, whereas a blockchain like Bitcoin has miners continuously hashing in ‘unison’, albeit in competition, on a DAG hashing only happens when new transactions are processed. Malicious actors need only gain over 33% of total hash power to attack the network, even before it is sharded, and the lack of constant mining plus the minimal level of transactions (IOTA, for example, currently processes between 1.2 and 2.4 tps, the vast majority of which are empty transactions) makes it vulnerable to attack.
Secondly, there is no otherwise verifiable and guaranteed list of transactions in timestamp order. Unlike a blockchain, which has a block number and a verifiable time of block creation, DAGs have no guaranteed, secure timestamps, as latency and transaction execution time vary across nodes. This causes issues not just for double spends but also for any application built on the DAG that requires an exact timestamp.
As it is, decentralized security is traded for performance.
At present, the only way for a DAG to guarantee against double spending and 34% attacks is with the aid of a centralized authority. Byteball, another DAG, has 12 ‘Witness Nodes’ and IOTA has ‘The Coordinator’.
These tools mean that the networks are not censorship resistant and that, should the centralized authority be compromised, the network would be vulnerable to an attack from the centralized state itself.
These are meant to be temporary states for networks in their infancy, but so far, there is no proof they have the means to leave these centralized states behind.
Their presence calls into question the long-term viability of a system. A centralized authority directly contravenes the guiding principles of distributed ledger technology. A project which relies on a centralized authority at the start of its life builds into it a capacity to have a centralized state that could later be reactivated.
Consider what happens if a malicious actor manages to take a significant proportion of nodes out of action (either through an attack on the system or through an ancillary attack, e.g. on power grids). Does this mean that the centralized authority is reactivated? And what happens in the event of a DDoS attack on the centralized nodes themselves? A limited number of nodes is much easier to attack than thousands spread worldwide. One of the main selling points of Bitcoin was that it was a distributed network spread worldwide and as such would be much harder to ever shut down.
There are other issues associated with DAGs too that will hinder scaling to the levels needed.
Life in the real world
In test conditions, variables such as hardware and location are usually optimized or a non-issue (since it can be difficult or undesirable to spin up nodes worldwide). In real-world scenarios, no network can control these factors, so it must be prepared for the worst-equipped and worst-located nodes. In a network that provides instant (or near-instant, given the limitations of the speed of light and the internet) confirmations, this causes a problem: distant or slower nodes will quickly fall out of sync with the network and instead begin to see unconfirmed transactions accumulate.
This then prevents new transactions from being resolved as quickly, and the system starts to fill with pending transactions. Owing to the architectural differences between blockchains and DAGs, how quickly your transactions are processed then depends on which node you are connected to, unlike a blockchain, where pending transactions and wait times are consistent, transparent and the same for all.
DAGs are capable of scaling beyond current blockchains. But just as blockchains will hit a limit on how much they can scale, owing to fundamental design choices, so too will DAGs. Without some form of centralized authority, or a revolutionary (and as yet completely unknown) new sharding technique that does not compromise security, decentralization or performance, the network will begin to struggle under its own weight.