Radix Technical AMA with Founder Dan Hughes — 13th April 2021

Every second Tuesday, Radix DLT’s Founder Dan Hughes hosts a technical ask-me-anything (AMA) session in the main Radix Telegram Channel. The premise of the session is simple: Dan climbs out of the coding cave and answers as many Radix tech-related questions as possible.

Thank you to all who submitted questions and joined the AMA. This week’s session was unique because Dan focused on two important topics: centralization and front-running.

You can find the full transcript of the April 13th, 2021 AMA session below.

Rock Howard asked on Discord if front-running is possible on Radix. My thoughts are: in Olympia and Alexandria there are blocks, so IMHO the block proposer (i.e. the leader) could sneak a front-running transaction ahead in the block. Even with fully sharded Cerberus, there is a mempool of potential transactions that the “leader” could use to choose the order of transaction consideration to favor a front-running transaction. The leader per shard group is chosen “at random” every epoch, so this latter form of front-running is only possible IMHO 1/100 of the time. But that could be enough to make it profitable, provided it was undetectable. Especially if the large stakeholders colluded to create an alternate Docker image for their shared use.

Question: Is this mechanism possible on Radix, and is there any way to detect and punish node runners who try to use it?

An important question that has plagued me for some time, with an appropriately long answer.

For Olympia and Alexandria, there is not a lot we can do at the protocol level without shoehorning some application layer stuff in there to try and detect it. Even then, the smartest detection methods will soon be circumvented, I’m sure.

It’s not really anything to do with blocks at all. If it were discrete transactions, I could still determine the order as the leader. It’s more to do with who is proposing the order, and what resources are required to take control of that order in some way, be it totally or partially.

Even 1/100th of the time is probably quite profitable, and you can be sure that all leaders will adopt the same strategy if there are no penalties. One might hope that the stakers will monitor “front-running” and remove stake from validators that do it, which would be a start, but there are natural instances where it may look like a front-run when it wasn’t, purely due to the way the trades were submitted, etc.

The solution is a selection mechanism from the mempool that is cheaply verifiable yet difficult to manipulate to some preferred end state. (I’ll get to this in a minute as it’s relevant to a centralization question in the AMA too).

When sharded, things are a little better, as most transactions/trades will touch multiple shards. Assuming there is some trade I wish to front-run and I’m the leader in shard 10, the chances of me also being the leader in shard 20 (rather than some other validator) are 100:1. Therefore, compounded, my chances of being able to front-run any trade when I see it are 10,000:1 … better.
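To make that compounding explicit, here is the arithmetic as a minimal sketch (the 1-in-100 leader odds come from the question above; everything else is purely illustrative):

```python
# Probability that one validator is leader in a given shard group,
# assuming leaders are drawn uniformly from ~100 validators per group.
p_leader = 1 / 100

# To front-run a trade touching two shard groups, the same actor must
# be leader in both at once; the draws are independent, so multiply.
p_both = p_leader * p_leader  # 1/10,000

print(f"one shard:  1 in {1 / p_leader:.0f}")  # one shard:  1 in 100
print(f"two shards: 1 in {1 / p_both:.0f}")    # two shards: 1 in 10000
```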

If all validators are happy to front-run, though, then it’s not difficult to imagine that many validators will syndicate together to leverage front-running trades across shards with an “if you help me front-run this, I’ll split 50:50 with you”.

The ONLY way is to prevent any surety at all about who gets to propose, and yes, leaderless is part of the solution.

Don’t limited validators make Radix more or less as centralized as Binance Coin? They have 21. Radix has 100, but isn’t it still centralized since the set is picked and limited?

Centralization isn’t all about quantity …

I could have 1M validators, but if half of them are controlled by the same actor, it’s more centralized than 100 validators which are run by purely honest individuals, with top-notch security, redundancy considerations, multiple ISP providers, in a temperature-controlled environment, with multiple backup generators, and 200 feet underground in a private bunker just in case.

Quantity of course matters to some degree, but generally when I think about centralization in networks of a few hundred in size or greater I’m generally thinking about Sybil. Or more specifically how easy is it for me to take control of the network, even if just temporarily, to meet some evil agenda.

Bitcoin, for example, has 10,000+ full nodes that hold a copy of the ledger: very quantity-decentralized, and of good quality too. But the hash power is controlled in the majority by perhaps a handful of actors: not quantity-decentralized at all, and shitty “quality” too, as they are incentivized to selfishly maximize their agenda. The latter here is the Sybil part, btw.

Continuing on from the previous questions:

In both the front-running case, and the centralization case (when talking quality) it needs to be hard to force my will as a validator onto the network.

Note: I’m assuming that the later mainnets will look something like, or have some resemblance to, Cassandra in some way. That may not be the case; Cassandra isn’t far enough along yet to be able to say with certainty, but it is AN implementation of sharding that works. So for this next piece, don’t take it as gospel! This is what I know TODAY!

One thing Bitcoin and POW got very right was the leaderless aspect of things. No one knows who the next leader is going to be; it’s simply the probability of solving the POW.

What wasn’t accounted for was pools; pools screw that up. Big mining farms with efficient hardware were accounted for, but pools were not.

What we have now is almost a certainty of who the next leader is going to be at any time, as we can see the size of the pools, the hashrate they are producing, etc. Pools have even been known to send SPECIFIC work packets to those with hashpower to find a POW as it makes profit for the pool on the side.

One simple change to POW potentially kills all that dead: make it a combinatorial POW.

What that means is that there is only a limited number of combinations for any input set. No matter how much compute I throw at it, I will not find a better POW than the natural best yielded by the current combination set.

If having excessive compute becomes less worthwhile, then pools die, and your decentralization improves. If massive hashing power is “useless”, no one will buy it, therefore more people can generate valid POWs without the crazy upfront outlay.

It’s still a competition … but that competition is executed in a different way.
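As an illustration of what “combinatorial” buys you, here is a minimal sketch (my own construction, not Cassandra’s code; the SHA-256 scoring and function names are assumptions): the seed and the input set bound the search space, so exhaustively enumerating the combinations is the ceiling on what any amount of compute can achieve.

```python
import hashlib
from itertools import permutations

def pow_score(seed: bytes, combo: tuple) -> int:
    """Score a candidate combination; lower = stronger (illustrative)."""
    return int.from_bytes(
        hashlib.sha256(seed + b"".join(combo)).digest(), "big")

def best_combination(seed: bytes, mempool: list, k: int = 3):
    """Exhaustively search the bounded space of ordered k-selections.

    The seed and mempool fix the space: once every combination has
    been tried, extra compute cannot produce a better POW than the
    natural best already found."""
    best, best_score = None, None
    for combo in permutations(mempool, k):
        score = pow_score(seed, combo)
        if best_score is None or score < best_score:
            best, best_score = combo, score
    return best, best_score
```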

So what about front-running?

If the POW is combinatorial, and my seed is the current set of pending transactions plus my validator key … there will be a finite number of combinations that I can find that produce a valid POW. Another validator will find a different set of combinations.

The critical piece here is that the transaction X I want to front-run may not be in ANY of the combinations that I can produce a valid POW with. Furthermore, I won’t know which of the other validators will have valid combinations with X in them unless I perform all of their combinations too.

Further, I also need to find a combination that not only has X, but has my front-running transaction X’ in it. And has X’ BEFORE X in the combination order.

By the time I’ve done that, contacted another validator if I needed them, and agreed to front-run it, some honest actor has produced a valid POW containing X and it’s been committed.
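Stated as code, the constraints stack up against the attacker. A minimal sketch (hypothetical helper names, not real Radix code):

```python
def can_front_run(my_valid_combos, target_tx, attack_tx) -> bool:
    """A front-run only works if some combination I can produce a valid
    POW for (a) contains the target X, (b) also contains my transaction
    X', and (c) orders X' BEFORE X. Miss any one and the attack fails."""
    for combo in my_valid_combos:
        if target_tx in combo and attack_tx in combo:
            if combo.index(attack_tx) < combo.index(target_tx):
                return True  # rare: all three constraints line up
    return False  # common case: X isn't in any combination I can make
```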

A simple diagram (yes I HAVE A DIAGRAM FOR THIS AMA):

In the first picture, the ring is the mempool.

There are 4 transactions pending within it with locations on the ring.

S0 is the current seed for my validator and has a location on the ring.

T1, the grey box on the opposite side of the ring, is my “target” region.

S0 is known and verifiable by all other validators, so I can’t change it, I have to start there.

I can select transactions within the range of T1, and I’m looking for the closest to the midpoint of the target box … or the furthest possible arc-distance away on the ring from S0.

In the second image, S1 is the new seed generated for me by selecting that purple transaction in step 1. Again this is deterministic, if I change it, my POW won’t validate.

Now I’m looking for a transaction in the range of the second grey box, T2 … same as before, I’m looking for the transaction closest to the midpoint, which has the greatest arc distance from S1.

If I select any transactions outside of the grey areas for S0 and S1 … my POW fails.

My compute is only useful for discovering the possible combinations quickly … no amount of compute will allow me to select either of the yellow transactions in step 1 or 2. Some other validator with a different starting S0 has them in its combinations.

That’s what stops front-running: if I want to front-run one of the yellow transactions, I can’t.
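For the curious, here is a rough sketch of that ring walk, assuming SHA-256 positions on an illustrative 2^32-point ring (the real construction may differ; treat the numbers and names as placeholders):

```python
import hashlib

RING = 2**32  # illustrative ring size

def position(data: bytes) -> int:
    """Map a seed or transaction to a point on the ring."""
    return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

def arc_distance(a: int, b: int) -> int:
    """Shortest distance between two ring points."""
    d = abs(a - b)
    return min(d, RING - d)

def select_next(seed: bytes, mempool: list, width: int = RING // 8):
    """Pick the pending transaction nearest the midpoint of the target
    region T, which sits diametrically opposite the seed S. Anything
    outside T is an invalid pick, so the POW would fail verification."""
    midpoint = (position(seed) + RING // 2) % RING  # far side of the ring
    in_target = [tx for tx in mempool
                 if arc_distance(position(tx), midpoint) <= width // 2]
    if not in_target:
        return None  # small mempool: maybe no valid pick at all
    chosen = min(in_target,
                 key=lambda tx: arc_distance(position(tx), midpoint))
    # The next seed derives deterministically from this choice, so a
    # verifier can replay the walk; changing the pick breaks the POW.
    next_seed = hashlib.sha256(seed + chosen).digest()
    return chosen, next_seed
```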

I can massage it by changing the current state, either by registering another validator, or populating the ring with transactions within my T0/1 ranges.

Registering another validator is expensive and latent, so the front-run is probably gone.

Having lots of pre-registered validators would work … but by the time I’ve performed all the compute on them to secure THE best POW I can, some other validator has already created one, or better yet, found one that is better than my BEST and it’s committed.

I can attempt to grind the mempool with transactions that increase my chances of an Sx picking up the one I’m interested in … but then I’m spending compute and possibly fees … and another validator may come along, yada yada, you get the drill.

The POW I explained above is what Cassandra uses to select which blocks, and which branches of blocks, go on to be committed, and it is how Cassandra prevents front-running and increases the quality of decentralization. Also, bear in mind those images are from a presentation I’m slowly putting together for a Twitch session, where I’ll go into more detail, with more pictures showing the process.

Oh, an interesting point … if I know that your best POW combination is x and took i iterations … then, assuming I even have a better one, the effort to find a combination better than x is generally i*e. This is also what I was harping on about regarding POW on Twitter some time ago in this post.

“Pools have even been known to send SPECIFIC work packets to those with hashpower to find a POW as it makes profit for the pool on the side.”

With “work packets”, are you referring to a part of the problem of solving the hash?

Pools select the transactions that go into the block, and send the pool miners the info they need to generate the block header and work on the POW.

As a pool miner, I would actually have no idea what transactions were going into the block I was producing a POW for. Only recently has this started to change a little and become more transparent.
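For context, this is roughly the shape of classic pool mining (a simplified Bitcoin-style sketch, nothing Radix-specific): the pool picks the transactions and bakes them into the merkle root, and the miner just grinds nonces over an 80-byte header it had no say in.

```python
import hashlib
import struct

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def grind(header_less_nonce: bytes, target: int, max_nonce: int = 2**32):
    """What a pool miner actually does: iterate the nonce over a header
    the pool assembled. The merkle root inside that header commits to
    transactions the POOL chose; the miner never sees the transactions."""
    for nonce in range(max_nonce):
        header = header_less_nonce + struct.pack("<I", nonce)
        if int.from_bytes(double_sha256(header)[::-1], "big") < target:
            return nonce  # valid POW found for the pool's block
    return None
```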

If I manage to steal / clone a validator key, would I be able to pull off a front-run? With it, I should be able to calculate a valid POW?

Depends if you managed to get lucky and acquire the validator key that would give the best probable POW that includes the transactions you need to do the front-run at that moment in time … unlikely odds!

So this POW is kind of energy-efficient?

Energy-efficient for honest actors, energy-expensive for dishonest ones.

I’ll try and elaborate with some simple maths, but it gets a little complex as it involves multiple processes.

But let’s assume that we have 1,000 validators, and that they can all find a valid combination in 1,000,000 iterations on average.

In total, that is 1,000,000,000 iterations performed, and one of those produced combinations will be the best of the group of 1,000.

Roughly after 600,000,000 collective iterations, the best will have been found, and in most cases it will require 600,000,000*e (e ≈ 2.718) iterations to beat it …

Thus, as a dishonest validator, I would have to perform at LEAST 600,000,000*e ≈ 1,630,000,000 iterations, assuming that I too have a valid combination, whereas the honest validators spent 1,000,000 each.
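Plugging Dan’s numbers into a quick script makes the asymmetry concrete:

```python
import math

validators = 1_000
iters_each = 1_000_000                # average honest cost per validator
collective = validators * iters_each  # 1,000,000,000 total iterations

best_found_at = 600_000_000           # group best emerges ~60% of the way in
dishonest_cost = best_found_at * math.e  # cost to beat a known best: i * e

print(f"honest cost each: {iters_each:,}")         # 1,000,000
print(f"dishonest cost:   {dishonest_cost:,.0f}")  # ~1,630,969,097
print(f"asymmetry:        {dishonest_cost / iters_each:,.0f}x")  # ~1,631x
```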

What’s super interesting is that in large mempools with many transactions, the probability of finding a solution increases … but you have to spend compute, and time! If everyone is spending the same amount of compute, then everyone’s solutions are stronger, so the dishonest actor doesn’t benefit from the higher probability.

If mempools are small, the reverse happens: no matter how much compute and time you have, you maybe can’t find any valid combination at all.

Why “times e” to beat it? If it goes too deep, nvm, I can wait till the presentation.

Yeah, I left out a bit on how solution strength is measured … it’s quite involved and wasn’t needed for the front-running question, but that missing piece is what drives the exponent.

That covers all the questions Dan had time to answer this session. If you are interested in getting more insights from Dan’s AMAs, the full history of questions and answers can easily be found by searching the #AMA tag in the Radix Telegram Channel. If you would like to submit a question for next time, just post a message in the chat using the #AMA tag!
