As part of my tokenomic consulting work I’m building a kind of meta-inventory of tokenomic patterns and anti-patterns in my head. This is akin to things like TVTropes for writers or design patterns for programming, but for game theory. I call the set of anti-patterns Moloch’s Toolbox. These are qualitative relationships amongst actors in an ecosystem that lead to coordination failures. To anyone unfamiliar, Moloch is the anthropomorphized god of coordination failure. Moloch’s Toolbox is the means by which this malevolent deity influences human organizations to inevitably fail. I hope that by giving these things names, we will be able to spot them more easily and socialize solutions to them more readily.
For the purpose of this post I’m going to narrowly scope what we hope to achieve by coordination in utilitarian terms. According to Jeremy Bentham, “the greatest happiness of the greatest number.” Each actor in the system has an objective function that reduces any state of the world to a numeric value that can be used to determine preference between states. Each actor seeks to optimize their personal objective function. By coordinating, the set of actors in the system seeks to optimize the sum of the objective functions of all actors. This collective optimal state is the Pareto optimum. Coordination failures take us away from this optimum. This post will cover several examples of qualitative game-theoretical relationships between actors that lead to coordination failures.
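The utilitarian framing above can be sketched in a few lines of Python. The actors, their choices, and their objective functions here are all made up for illustration:

```python
from itertools import product

# Hypothetical two-actor world: each picks an effort level 0-2.
# An objective function maps a world state (both choices) to a number.
def alice_utility(a, b):
    return 3 * a - (a + b) ** 2 / 4   # personal gain minus a shared congestion cost

def bob_utility(a, b):
    return 3 * b - (a + b) ** 2 / 4

states = list(product(range(3), repeat=2))

# Each actor optimizes their own function; coordination targets the sum.
social_optimum = max(states, key=lambda s: alice_utility(*s) + bob_utility(*s))
print("social optimum:", social_optimum)
```

Coordination failure, in this framing, is any dynamic that drags the realized state away from the argmax of the summed objective functions.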
Tragedy of the Commons
Tragedy of the Commons is probably the most famous coordination failure category. This occurs when there is a “commons” resource being divided up amongst participants. They can either cooperate and mutually profit or defect and optimize their personal score at the expense of the global score. Formally, there is a known Pareto optimum defined by the sum of each actor’s objective function, but each actor is free to defect from that strategy and increase their value at the expense of all other actors. Common outcomes of this coordination failure include natural-ecosystem collapse from activities like overfishing, market collapses from raced selling into a price-inelastic market, and famine.
By this definition, there are many examples of things that are not commonly considered “commons” that nevertheless fall into this category of problem, such as the famous Prisoner’s Dilemma. The “commons” in that case is reduced jail time. Each actor is free to defect from the optimal solution, snitch to the police, and reduce their personal prison time while increasing the total prison time. The net result is that all actors inevitably defect and the system collapses to the worst possible value (what is the antonym of Pareto optimum called?).
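The standard Prisoner’s Dilemma payoff matrix (sentence lengths here are illustrative) makes the collapse mechanical:

```python
# Prisoner's Dilemma in years of jail time (lower is better).
# Values: (row player's sentence, column player's sentence).
JAIL = {
    ("stay silent", "stay silent"): (1, 1),
    ("stay silent", "snitch"):      (10, 0),
    ("snitch",      "stay silent"): (0, 10),
    ("snitch",      "snitch"):      (5, 5),
}

# Whatever the other prisoner does, snitching yields less jail time...
for other in ("stay silent", "snitch"):
    assert JAIL[("snitch", other)][0] < JAIL[("stay silent", other)][0]

# ...so both snitch, and total jail time collapses to the worst outcome.
total = lambda a, b: sum(JAIL[(a, b)])
print(total("snitch", "snitch"), "vs Pareto-optimal", total("stay silent", "stay silent"))
```

Defection strictly dominates for each actor in isolation, which is exactly why the optimum is unreachable without changing the payoffs.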
To correct the incentives you have to change the local calculus of each actor. Recognizing that each actor is likely to locally optimize, you have to add external incentives for complying or external penalties for defecting from the Pareto Optimum. Typical solutions to this problem resemble fail-deadly deterrence systems which admittedly isn’t a source of optimism for humanity.
Let’s look at fixing the prisoner’s dilemma as an example. If, prior to the crime, each criminal had to put up a hostage then their local optimum is to risk the longer prison sentence to protect their hostage. They would be willing to do this because they also know this is true of the criminal in the other room so they can rest assured that the other criminal will also comply rather than defect. After they get out of prison with their minimum sentences they can have their hostage back so it ultimately cost them nothing to enter this agreement (the hostage experience notwithstanding).
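A rough sketch of how the hostage changes each prisoner’s local calculus. The subjective “cost” of losing the hostage is a made-up number; it only matters that it exceeds the jail time defecting would save:

```python
# Same base dilemma, but each criminal staked a hostage before the crime.
JAIL = {
    ("stay silent", "stay silent"): (1, 1),
    ("stay silent", "snitch"):      (10, 0),
    ("snitch",      "stay silent"): (0, 10),
    ("snitch",      "snitch"):      (5, 5),
}
HOSTAGE_PENALTY = 20  # subjective cost of forfeiting the hostage, in "jail-years"

def cost(me, other):
    base = JAIL[(me, other)][0]
    # Defecting now forfeits the hostage on top of whatever sentence results.
    return base + (HOSTAGE_PENALTY if me == "snitch" else 0)

# With the stake in place, staying silent strictly dominates...
for other in ("stay silent", "snitch"):
    assert cost("stay silent", other) < cost("snitch", other)
# ...so both criminals can rest assured the other will comply too.
```

The payoff layer is bolted on top of the original game, which is the point made below: the police never consented to, or even noticed, the change.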
What I like about this solution is that it can be applied as a layer on top of existing systems without having to otherwise change those systems. The criminals can enter this agreement without changing the dynamics between the criminals and the police and they can do so without the consent of the police.
The example above breaks down in two situations:
- There are actors outside the contract system. In the prisoner’s dilemma, what do the criminals do to secure the cooperation of an outside witness? How do you prevent overfishing from countries that refuse to pass or enforce laws preventing ships from their nation from overfishing?
- The actors have nothing to use as a hostage.
To address the first case, the hostage you stake has to be something you can weaponize. For example, the criminals could stake a large sum of capital instead of a loved one. As long as the stake can be used to apply negative incentives against external actors somehow, you can use it to fix local incentives. For example, the criminals could make a conditional assassination contract against any witnesses and leak the information about this contract to the witness before they speak to the police or testify in court. This solution can be combined with a dead man’s switch so no active actions are required by the criminals to enforce it. This adds execution-trust.
In the second case, you have to find some kind of negative outcome to enforce. The most common stake-of-last-resort example is the safety or life of the defector. For example, in The Dark Knight, The Joker made each criminal literally implant bombs into themselves. In Harry Potter, Voldemort gave everyone a Dark Mark. Assuming we can bootstrap reputation systems, reputation is a much saner collateral for less extreme use cases. That’s basically all your credit score is when Visa grants you a credit line without collateral.
Race to the Bottom
Race-To-The-Bottom is the quintessential Moloch Trap. Sacrifice everything dear to you for the slightest edge in competition. The ends justify the means. Victory at any cost. In literature this usually leads to the hero eventually asking “Was it really worth it?” In capitalism, we just call that business as usual. Corporations are optimizing for profit above all else; specifically short-term profit. Because of this they are willing to destroy their environment, subjugate their communities, and corrupt the legal systems that govern them to stifle competition and create power imbalances in their favor. A race-to-the-bottom is even obligated by law through the fiduciary duty of board members to shareholders, even at the expense of the values of the very nation upholding those laws. The result is curious when you look at it academically from the outside, like an archeologist of a future civilization looking back on ours. From the inside, it’s actually rather terror-inducing.
Formally, a race-to-the-bottom occurs when external pressures force individual actors to change their objective function to a form that discounts or excludes things they would otherwise value and may depend on to survive. Usually this is because of pressure by another actor such as a boss, government, or shareholders that are not affected by the negative externalities this causes and are not accountable to those who are. This external pressure can be applied transitively to entire organizations. In tragedy of the commons, all actors share an objective function but defect from a Pareto Optimal solution out of selfishness. In race-to-the-bottom the objective function itself is skewed.
The solution requires applying external pressure on the source of whatever is skewing the natural objective function of the individual actors. For example, nations can pass laws such as environmental legislation that all companies must abide by. Now, if your boss tells you to pollute the river, you just send him to jail (assuming whistleblower protections work). In theory this puts all companies at an equal disadvantage. In practice compliance with regulations usually favors larger entities. Also, this solution is not viable at all when it occurs on a global scale and there is no global government that can enforce a rule. In those cases you have two options: economic carrot and stick.
The former is easier to bootstrap if you can find a source of funds. Incentives are never going to be prohibited by law; there’s no one with standing who would even benefit from trying. The most widespread attempt at this I’m aware of is carbon credits. As with carbon credits, measuring impact on the system objective function can be frustratingly difficult, and any incentive scheme is going to be a target for corruption and gaming.
The latter option is economic coercion. This is the option of last resort due to the difficulty of instantiating a system to enforce it and just how ethically dubious such systems are. I make an argument in my human coordination post that blockchains are indispensable for this approach to work at scale.
A solution needs to be opt-in because no global government exists to coerce all network participants militarily. The system is going to be massive which means it will need to be modular and composable to tackle the inherent complexity of this problem. Since the incentive system is economic in nature there needs to be a unified monetary system attached to it binding all participants. This monetary system needs to be neutral and not belong to any one participant or else it threatens the sovereignty of the other actors. Executing these incentives needs to be scalable which basically implies a program needs to do it but that program needs to be incorruptible to be trustable. Crypto is providing the consensus, execution, and governance technologies required for such a system to be practically built and adopted. A solution of this form exists nowhere else…
Blockchains offer the potential for a unified monetary system that can be encoded with rules to penalize all defectors. I dryly discuss a real-life example here but let me expand on a simpler example. Let’s create an evil smart contract whose only goal is to grow the set of participants in it. The only thing we need for this to work is a way of turning money into something universally unpleasant to a target actor. In the simplest case I’ll just suggest an assassination contract. Next, we create an evil conspiracy of initial participants to fund this contract. All the contract has to do is target some unlucky sap who isn’t in the contract yet, inform them of a pending assassination contract, and tell them to join us or die. Joining us includes streaming money to the contract so that the contract can grow faster. If the target dies, then the ratio of participants to non-participants still marginally increases. Now we run this until everyone remaining is in compliance with the contract rules.
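Here is a toy Python simulation of that membership loop. The population size, compliance odds, and seed are all arbitrary; the point is only that the ratio of participants to non-participants ratchets upward every round:

```python
import random

# Toy simulation of the coercive join-or-die membership loop.
POPULATION = 1000
JOIN_ODDS = 0.9            # chance a target complies rather than resists

random.seed(42)
members = set(range(10))             # the initial evil conspiracy
alive = set(range(POPULATION))

while len(members) < len(alive):
    target = random.choice(sorted(alive - members))
    if random.random() < JOIN_ODDS:
        members.add(target)          # target complies and starts paying in
    else:
        alive.discard(target)        # contract executes; ratio still improves
print(f"{len(members)}/{len(alive)} remaining population in compliance")
```

Either branch shrinks the set of non-participants by one, so the loop always terminates with everyone remaining in compliance.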
This describes a minimum viable memetic rule system. The penalty obviously doesn’t have to be assassination. It just has to be enough that for whomever the current target is, they are incentivized to join according to their objective function. The penalty I describe in my human coordination post is economic isolation rather than assassination.
The initial participants expose themselves to economic pain from the fees they pay to the contract. This is the cost of expanding the size of the network and punishing defection. The value-add of this cost depends on the value of each new participant of the network. Due to [Metcalfe’s Law](https://en.wikipedia.org/wiki/Metcalfe%27s_law) we expect a super-linear value-add for each new participant as the network grows. Therefore, there is a critical mass of initial participants required to bootstrap such a solution, but once that is reached the stable equilibrium of the system is to dominate.
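Under these assumptions (the per-link value and recruiting cost are illustrative constants), the critical mass falls straight out of comparing the marginal Metcalfe value of one more member against the cost of coercing them:

```python
# Illustrative numbers: network value grows ~n^2 (Metcalfe), while the
# cost of recruiting/punishing grows linearly per new participant.
VALUE_PER_LINK = 1.0      # value contributed by each pairwise connection
RECRUIT_COST = 50.0       # fees burned to coerce one new participant

def network_value(n):
    return VALUE_PER_LINK * n * (n - 1) / 2   # number of pairwise links

def marginal_value(n):
    return network_value(n + 1) - network_value(n)

# Critical mass: smallest n at which adding a member pays for itself.
critical_mass = next(n for n in range(1, 10_000) if marginal_value(n) >= RECRUIT_COST)
print("critical mass:", critical_mass)
```

Below the critical mass, growth is a money-losing act of faith by the conspiracy; above it, growth is self-funding and the equilibrium tips toward domination.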
Contrast this to networks that tend to fall apart or fail to achieve their goals because there is no leverage against defectors. For example, boycotts fail to work because all the negative consequences are directed at the seller, not at anyone defecting from the boycott. Strikes fail when scabs are brought in whom the strikers have no leverage against. A blockchain solution can have universal leverage because it acts on the unified monetary rail, which is quite unique.
Now, in this simple evil smart contract, there is no inherent value to the network (Metcalfe value f(n) = 0). But in economic systems, there is. We can layer other rules into this base memetic rule system that can result in a net positive situation for the initial actors on a long enough horizon. The rules can include things like environmental protection, nuclear disarmament, or UBI.
Once widely adopted, the result is a class of fail-deadly system that punishes both those that disregard the coercive rules and those who don’t participate in punishing said defectors. There will always be some players like North Korea who choose to try to go it alone. The goal of an economic coercive system is to try to make them as poor and disconnected as possible to either minimize the harm they can do or compel them to join the coercive network and play by its rules.
Btw, if my evil contract sounds insane, it’s basically just taxation + the police that enforce taxation by throwing you in jail instead of killing you. If you say you’d never join such a system, you already have joined such a system.
First-Mover Disadvantage
Onto something very different. I haven’t found an official name for this pattern so I’m calling it First-Mover Disadvantage until I get a better name. In this pattern, if everyone were to move, everyone would be better off but an individual actor moving leads to a suboptimal reward for that actor so no one is willing to be the first mover. This results in a suboptimal steady state. In game theory terms, there is a known Pareto-optimal solution that cannot be reached because all actors are stuck in the current Nash-equilibrium.
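A stylized two-venue game (all payoffs made up) shows both properties at once: staying put is a Nash equilibrium even though everyone moving is strictly better for everyone:

```python
# Each actor picks the "old" or "new" venue. Payoff depends mostly on
# how many others are at your venue (think: liquidity), plus a small
# bonus because the new venue is genuinely better.
def payoff(my_choice, others_at_new, n_others):
    peers = others_at_new if my_choice == "new" else n_others - others_at_new
    bonus = 2 if my_choice == "new" else 0     # the new venue is better...
    return peers * 10 + bonus                  # ...but liquidity dominates

N = 10  # number of other actors

# Everyone at the old venue is a Nash equilibrium: moving alone loses money.
assert payoff("old", 0, N) > payoff("new", 0, N)
# Yet everyone at the new venue is strictly better for each actor.
assert payoff("new", N, N) > payoff("old", 0, N)
```

No individual deviation from the old venue pays, so the system sits in the inferior equilibrium indefinitely unless actors can move together.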
For example, let’s look at the global reserve currency. The reason the dollar became the current world reserve currency is because oil is priced in dollars. So whether you are buying or selling oil, you are using dollars by default. Everyone deals in oil, so the infrastructure was set up to use dollars, and this then carried over into everything else. As the US prints its way towards a $200T national debt there may arise a better contender for global reserve currency in the coming decades. But everyone is kind of trapped in this system: even if everyone wanted to move, there are high costs and risks to being the first.
From the perspective of an actor in this system, imagine you are a country that wants to reduce your dependence on the US dollar and switch to something like the Euro. In theory you could set up a trade deal with another country in that currency for oil, but it’s kind of like OTC trading: there is less infrastructure for transmittal, fewer banks to rely on, less established regulation, and most importantly less liquidity. Any strategy that doesn’t place your oil for sale on the market with the highest liquidity will cost you money. As a seller, you would only want to move to a new base currency for your oil if over half of global oil sales are there first. The first mover is taking a huge risk, putting in all the effort to address that friction, and losing money on their oil sales. This causes everyone to stay complacent even in bad systems. This is Moloch’s tool.
In web3 we see the equivalent pattern every time a new DEX wants to attract capital. The incentive pattern we have seen for this kind of migration is called a vampire attack. Historically it has required someone to put up a large sum of money to compensate those first potential movers. This would not be a viable strategy for the global oil market. However, Sushiswap did something quietly brilliant besides juicing their vampire attack with SUSHI tokens. When it first launched, Uniswap LP holders could deposit their LP tokens in a migration contract. Rather than moving the liquidity to Sushiswap immediately, the contract held the money in escrow until a certain time passed. In this interim period, the liquidity was still on Uniswap, which meant it cost participants nothing to join. This latter part is the coordination breakthrough we should be more interested in.
In the oil/USD example above, we could create an escrow contract that holds your position until certain criteria are met, such as a critical percentage of the order book sitting in the escrow contract. Then the escrow contract could atomically move everyone at once. This allows everyone to escape the Moloch trap together, even without additional incentives for early participants.
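A minimal sketch of such an escrow. The `MigrationEscrow` class, its threshold, and the amounts are all hypothetical, not any real contract:

```python
# Deposits stay productive at the old venue until a threshold share of
# the total position is committed; then everyone moves atomically.
class MigrationEscrow:
    def __init__(self, total_supply, threshold=0.51):
        self.total_supply = total_supply
        self.threshold = threshold
        self.committed = {}        # depositor -> amount (still at the old venue)
        self.migrated = False

    def commit(self, who, amount):
        assert not self.migrated
        self.committed[who] = self.committed.get(who, 0) + amount
        # Trigger only once a critical share of the market has committed.
        if sum(self.committed.values()) >= self.threshold * self.total_supply:
            self.migrated = True   # atomic move: everyone escapes together

escrow = MigrationEscrow(total_supply=1000)
escrow.commit("alice", 300)
assert not escrow.migrated         # alone, nobody has moved or lost anything
escrow.commit("bob", 300)
assert escrow.migrated             # threshold crossed: all migrate at once
```

Because committing is free until the threshold trips, the first mover bears no disadvantage, which is what dissolves the trap.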
Without a blockchain, there is no infrastructure for this type of coordination. With smart contracts, though, we have a moat-breaking technology that enables all of society to advance towards globally optimal solutions. It removes the first-mover disadvantage by enabling atomic coordinated action without a privileged coordinator: the blockchain acts as an incorruptible, neutral arbiter that executes the rules, with no risk that the arbiter runs away with the funds or does something devious, because the rules are auditable by everyone. Seemingly tiny innovations like this, born of a greedy vampire attack from a dying exchange, are how we are changing the world, one contract at a time. Beauty out of chaos.
Bystander Effect
Bystander Effect occurs when there is a group of people who individually all fail to act despite sharing a common goal. Formally, there is a shared objective function amongst actors but there is no individual incentive for action. Everyone just assumes or hopes someone else will address the problem, especially if there is any type of personal cost. This famously occurs in situations like medical emergencies where no one calls the ambulance despite an incident being widely witnessed. Presumably, everyone prefers that no one dies, but no one is personally rewarded for calling the ambulance and contributing to saving lives, so we just watch with morbid curiosity. There are several examples of this at Ethereum’s staking and networking layers to which Ethereum has either a weak answer or none at all.
The first example has to do with balancing the adoption of various EL/CL clients (client diversity). Everyone wants to use the easiest-to-use, best-supported client, but if everyone uses it the network is at risk. There is a global incentive for stakers to protect the network (and therefore the value of their stake), but everyone just hopes everyone else will switch so they don’t have to. Ethereum actually does have a weak answer to client diversity so long as no client reaches a supermajority of validator nodes, but there was a time not so long ago when Prysm was edging toward that supermajority and we were all at risk. In that instance, a surge of Layer 0 coordination pushed stakers to diversify, and Prysm today sits closer to 40% adoption (though Geth remains dangerously high).
The second example has to do with LSTs and staking providers (staking centralization). Everyone would prefer to use the “most trusted” LST with the highest liquidity but if any given staking provider has 33% of the validators they can potentially do things like ransom attack the network or at least halt finalization at their whim. As above, everyone is better off with a more decentralized network but everyone hopes everyone else will switch so they don’t have to. Unlike with client diversity, Ethereum natively offers no incentive of any kind to protect from staking centralization and Layer 0 efforts so far have fallen short.
This is a tokenomic failure of Ethereum but not one without potential solutions. It is obviously in the interest of competing LSTs to pound the Layer 0 alarm but they should also be willing to spend their marketing budget to target existing Lido validators for conversion. In this vein there are four things that can be done (from most to least ethical).
- Create convenient zaps to migrate from one LST or staking solution to another. The less friction to migrate, the better. In fact, collaborate to put conversion all under one website with information about each competitor in an easy to digest format.
- Add bribe incentives for existing Lido validators to migrate and stay with their new staking provider for some time. It makes sense for alternative staking providers to spend their inflation/marketing budget to attract this capital to their platform. The result could look something like the Curve bribe markets or what Tokemak v2 is making that resembles something closer to an exchange.
- Pool assets to acquire governance share in Lido and vote for self-throttling Lido’s market share. You could alternatively just bribe existing LDO holders to vote one way on this particular issue.
- Become Lido node operators themselves and attack the network from within. Get ETH directed to yourself, then hold it ransom, threatening to slash the stake and destroy Lido’s reputation unless the DAO votes to self-throttle.
I’m happy to say my local EVMaverick community is doing the first two of these under project codename Lidont. The third option would require a delegating voting locker like XToken was once making. The fourth option would be catastrophic and definitely illegal so if you want to pull that off be sure to do so in a country without an extradition agreement.
Information Asymmetry
Most of the above examples have to do with greed of some sort. This last one has to do with ignorance: actors would mutually benefit from a transaction but can’t proceed because trust can’t be established. Perhaps I’m trying to sell you a gold bar. How can I prove to you that it is gold? Do you have an acid test or XRF spectrometer with you? If not, we probably can’t do business. How can I prove to you that my car isn’t a lemon? The buyer is taking a risk, and that risk will be priced into his offer regardless of how much better I’ve taken care of the engine than the average seller, causing a mutually beneficial sale to fall apart.
Sometimes it’s not that I can’t prove something to you, it’s that I am unwilling to for whatever reason. A common reason here is privacy. Like sure, I could prove to you I’m a US citizen by showing you a picture of my passport but that information could be used for all sorts of nefarious things so I’m not sending that picture over the internet if I can help it. There are plenty of transactions that are stymied by an inability or failure to establish trust between parties.
How can I prove to a bank that the money being used for a house down payment isn’t from a loan? More generally, how can I prove that I’m trustworthy and will honor any agreement? This is like 90% of why DeFi is hard: for every service provided we have to have an encoded answer to “or else” that satisfies both parties. Usually that solution involves collateral, which is why we don’t have crypto-native credit cards yet. We cannot delegate this matter out to a court system like Visa can.
What crypto brings to the table for these scenarios is decentralized oracles, various zero-knowledge proofs, and (soon™) reputation systems. If trust is the problem, cryptographic systems are often the answer.
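As a taste of the zero-knowledge side, here is a toy Schnorr proof of knowledge made non-interactive via Fiat-Shamir: the prover convinces the verifier they know a secret x with y = g^x mod p without revealing x. The group parameters are tiny and utterly insecure, chosen only so the arithmetic is visible; real systems use large prime-order groups or elliptic curves:

```python
import hashlib
import random

# Toy parameters: g = 4 has prime order q = 11 in Z_23* (NOT secure).
p, q, g = 23, 11, 4

def prove(x):
    r = random.randrange(q)
    t = pow(g, r, p)                                              # commitment
    c = int(hashlib.sha256(str(t).encode()).hexdigest(), 16) % q  # challenge
    s = (r + c * x) % q                    # response; x never leaves the prover
    return t, s

def verify(y, t, s):
    c = int(hashlib.sha256(str(t).encode()).hexdigest(), 16) % q
    # g^s = g^(r + c*x) = t * y^c, so the check passes iff the prover knew x.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x = 7                      # the secret (think: a credential)
y = pow(g, x, p)           # public value registered in advance
t, s = prove(x)
assert verify(y, t, s)     # verifier is convinced without learning x
```

The same shape, proving a statement about hidden data without revealing the data, is what lets you prove citizenship, solvency, or creditworthiness without handing over the passport photo.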
If you can think of patterns that should be here but aren’t, reach out to me; I’m happy to amend this post. I could also see this becoming more formalized into an academic paper, book, or website like those I linked for other types of patterns if you’d like to collaborate.
Finally, a warning to any who would seek to understand this topic better. Knowing things is, of course, empowering and generally improves your life. However, there is a set of things you can understand but not affect. In those cases, your knowledge can make you unhappier. In the case of Moloch traps, just because you know something doesn’t mean the rest of the world suddenly learned it or ever will, and even if they did they still might not be able to apply that knowledge. For a quick example, I’ll refer you to airline boarding procedures. When you see suffering around you and you know a way of preventing it but have no power to act, it can be… unpleasant. This is what I call the curse of knowledge. Moloch’s Toolbox is a set of anti-patterns that, once you understand them, you will see everywhere. So be warned.
9/22/23