Distributed Consensus Algorithms in the Animal Kingdom

Distributed systems and packs of animals both need to solve a common problem: who should be in charge? Who should resolve disputes? When there's a disagreement over how to proceed, whose lead should we follow? The animals need a leader to coordinate pack hunts and decide where the pack should go. Distributed systems need a leader to make decisions such as 'which of these transactions came first?' Both kinds of distributed system – those made of computers and those made of animals – use the same kind of tool to solve this problem: a distributed consensus algorithm.

When two wolves each want to be the leader of the pack, they fight each other. The winner of the fight becomes the leader of the pack. The wolves don't fight all the way to the death, though. When one of the wolves realizes it is losing, it rolls over on its back and presents its neck to the other wolf. The losing wolf is saying, in effect: "You are the leader. I will stop fighting. To prove that I will be loyal, I put myself in this vulnerable position so that you can kill me if you want."

The wolves’ distributed consensus algorithm has a ‘fighting’ phase, followed by a ‘submission’ phase.  In the fighting phase, two wolves engage in combat. In the ‘submission’ phase, one of the wolves gives up.  This ‘submission’ step of the algorithm acts as a cost reduction mechanism: If the wolves always fought to the death, that would seriously hurt the pack. The wolves who watch the conflict are all taking part in the algorithm as well – they need to understand and agree upon who the new leader is. If there isn’t consensus on who the leader is, the pack could split in two, which would harm its ability to take down large prey animals.
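Purely for illustration, here is a toy sketch of the wolves' algorithm in Python. Everything in it – the wolf names, the strength scores, the random tiebreaker – is invented; the point is just to lay out the two phases and show the observers all ending up agreeing on the same leader.

```python
import random

def wolf_leader_election(challenger, incumbent, pack):
    """Toy model of the wolves' two-phase consensus algorithm."""
    # Fighting phase: the stronger wolf tends to win, with a bit of luck involved.
    challenger_score = challenger["strength"] + random.random()
    incumbent_score = incumbent["strength"] + random.random()
    winner, loser = (
        (challenger, incumbent)
        if challenger_score > incumbent_score
        else (incumbent, challenger)
    )

    # Submission phase: the loser signals defeat instead of fighting to the death,
    # which keeps the cost of the algorithm low for the pack.
    loser["submitted_to"] = winner["name"]

    # The observers take part too: consensus means every wolf records the same leader.
    for wolf in pack:
        wolf["leader"] = winner["name"]
    return winner["name"]

pack = [{"name": "Greyback", "strength": 8},
        {"name": "Snowfoot", "strength": 7},
        {"name": "Pup", "strength": 2}]
print(wolf_leader_election(challenger=pack[1], incumbent=pack[0], pack=pack))
```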

Distributed systems of computers use an algorithm that is less violent and more boring. The algorithm defines a number of carefully specified roles, and each role follows a precise recipe of exchanging messages and numbers. After enough messages have been exchanged, all of the computers in the distributed system eventually come to a consensus as to who the leader is.
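The post doesn't name a specific protocol, so as one concrete example, here is a heavily simplified, single-process sketch of the classic Bully leader-election algorithm. In a real system the challenges below would be actual network messages with timeouts; here they are just list lookups, and the node IDs and liveness table are made up.

```python
def bully_election(node_ids, alive, starter):
    """Simplified sketch of Bully-style leader election: a node challenges every
    node with a higher ID; if any live higher node answers, it takes over the
    election; the highest live ID ends up announcing itself as leader."""
    candidate = starter
    while True:
        higher_live = [n for n in node_ids if n > candidate and alive[n]]
        if not higher_live:
            break                     # nobody above answered: candidate wins
        candidate = max(higher_live)  # a higher node takes over the election
    # 'Coordinator' announcement: every live node records the same leader.
    return {n: candidate for n in node_ids if alive[n]}

nodes = [1, 2, 3, 4, 5]
alive = {1: True, 2: True, 3: True, 4: True, 5: False}   # node 5 has crashed
print(bully_election(nodes, alive, starter=2))            # all live nodes agree on 4
```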

To summarize, distributed systems of animals and computers both use consensus algorithms to select leaders and reduce the cost of conflict. The animal algorithms are brief, simple, and usually feature escalating violence. The computer algorithms are longer, more elaborate, and involve the exchange of lots of messages full of numbers.

At this point, the astute reader may be thinking, “Wait, humans are both animals AND computers! Shouldn’t this mean we use conflict resolution mechanisms that are a hybrid of the two?” This looks like what we humans do. Consider the following story:

Alice and Bob are neighbors. Alice has loud parties at night, and this wakes up Bob, who is frustrated by his lack of sleep. Bob talks to Alice directly, and Alice responds by telling Bob that her parties aren't too loud. Bob then talks to an attorney, who drafts a letter telling Alice that Bob is considering filing a lawsuit, and would she please stop blasting Duran Duran after 9 PM or else an expensive lawsuit will follow. Alice ignores this letter. Bob then files a suit in small claims court against Alice. Alice ignores the summons. The court rules in Bob's favor and issues an order preventing Alice from holding parties after 9 PM. Alice ignores this court order. The next time Alice throws a party and it runs past 9 PM, the police arrive to arrest Alice. "Hungry Like the Wolf" blares loud enough that nobody within a three-mile radius can sleep. The entire neighborhood – a posh DC suburb which consists mostly of lawyers, lobbyists, politicians, and journalists – is outraged. Everyone in the neighborhood feels compelled to howl at the moon, which these days mostly takes place on Twitter. However, the police are unable to gain access to the property because Alice has installed a gravitational repelling force field. The only way in is through the wormhole to Martian Distributed Federation Node 7635 that Alice has constructed in her basement.

It turns out that Alice has been working with the Martian Federation, which has been hoping to provoke Earth into a war for some time. This was the moment they had been waiting for! The people of Earth will have to decide either to commit to a violent, protracted struggle with the technologically superior Martian Federation, or to allow Alice to violate their noise laws with impunity. The end.


Now, this story could have turned out many different ways.  It could have followed any path on a network like this:

Humans use a stack of consensus algorithms. At the top of the stack, the algorithms are cheaper to execute, but they don’t provide definitive consensus.  Personal requests are usually cheap to make, but they don’t always work.  When higher levels of the stack fail to reach consensus, humans move down to lower layers in the stack, where the costs are higher, but so is the probability of consensus. Moving lower in the stack is called “escalating a conflict”, and it generally means increasing both the cost and risk involved, but also the probability of some kind of resolution.
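To make the stack concrete, here is a small sketch in Python. The layer names roughly follow Bob's story above, but the cost and probability numbers are entirely made up; the point is only the shape of the algorithm: try the cheap layer first, and escalate when it fails to produce consensus.

```python
import random

# Hypothetical layers: (name, cost of invoking it, probability it produces consensus)
CONSENSUS_STACK = [
    ("personal request",    1,      0.50),
    ("attorney's letter",   10,     0.70),
    ("small claims suit",   100,    0.90),
    ("court order",         1_000,  0.97),
    ("police enforcement",  10_000, 0.999),
]

def resolve_conflict(stack):
    """Walk down the stack: cheaper layers first, escalating to the next
    (more expensive, more definitive) layer whenever consensus fails."""
    total_cost = 0
    for layer, cost, p_consensus in stack:
        total_cost += cost
        if random.random() < p_consensus:
            return layer, total_cost          # this layer settled the dispute
    return "unresolved (interplanetary incident)", total_cost

print(resolve_conflict(CONSENSUS_STACK))
```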

If we drew a similar stack for wolves, it would just be a lot smaller. There are no wolf lawyers or settlements. There are animal species which have 'escalation layers' in their conflict resolution algorithms – read up on how lobsters do it, for example. They don't go straight for snapping each other's eyestalks off – there's a series of escalations first. In many cases, the cheaper forms of conflict are able to lead to a resolution, where one lobster backs down. These cheaper forms of resolution reduce the chances of a violent, and therefore risky, conflict.

If you look at older human consensus mechanisms, they are more violent and expensive. Read up on the elaborate ritual called the Ashvamedha, which acted to solidify a king's legitimacy. This ritual takes an entire year to perform. It involves letting a horse roam the countryside for a year. Any territory the horse touches becomes the property of the would-be king performing the sacrifice. If the horse is injured at all, however, the sacrifice fails, and the would-be king isn't really a king. The ability to perform this sacrifice requires immense economic and military strength. You need to be so rich that you can fund an army to follow this horse around for a year, and so tough that people would rather give their territory to you than do something as simple as shoot a burning arrow in the direction of your horse, in the hopes of scaring it into a panic that harms it. Bitcoin uses 'proof of work'. The ability to perform the Ashvamedha looks like something like "proof that you are already a raging badass, and nobody around really wants to mess with you anyhow, so we might as well just call you king and forget all the nasty fighting." The horse sacrifice sounds silly or weird until you realize the alternative is probably a succession of nasty, violent conflicts to produce a sufficient 'proof of raging badassery' to get everyone else to back down.

The higher, more recent layers of the human stack are more computer-like, involving crisply defined protocols and the exchange of messages and numbers. The legal system looks like a hundred-year-old computer made from people reading and writing 'code'. The lower layers of the stack start to look more and more animal-like. Court orders are syscalls which escalate down the stack, indicating the precise locations to apply threats of violence. This progression (from cheap, mathematical, but not always effective, to violent, expensive, and definitive) looks like it might be some basic law of the universe – definitive consensus doesn't come cheap. Is it possible that there's something like a CAP theorem for distributed consensus algorithms? Perhaps you can have cheap consensus or definitive consensus, but you can't have both.

Beyond this difference between computers and animals, with humans using a mixture of both kinds of algorithm, I think these conflict resolution mechanisms play extremely important roles in the lives of human beings. The concept of a 'consensus algorithm stack' suggests that we might view human technology for peacefully resolving conflicts as an incredible survival asset. Think of how much worse the world would be if violence were the only way to settle a dispute. Think of how much poorer we'd all be if our choices were either horse sacrifices conducted by Brahmins, or countless wars whenever a leader died. What if, instead of flying cars or superintelligent computers, the next phase of rapid technological development involved the construction of new mechanisms by which humans could resolve their conflicts cheaply and constructively?

It’s hard for me to imagine a more utopian form of technological growth.

3 thoughts on “Distributed Consensus Algorithms in the Animal Kingdom”

  1. I like to think of the waste from animals fighting within the same species as them paying the cost for information discovery. If I could be missing out on the chance to mate with that female moose over there, I don’t want to just take you at your word that you’re the baddest ass male moose around here – I want to play out the actual fight to discover provable information. This interpretation is informed by what I’ve read about honest signaling: https://en.wikipedia.org/wiki/Signalling_theory#Honest_signals.
