DePIN


I’m seeing a major uptick recently in DePIN (Decentralized Physical Infrastructure Network) projects. The basic idea of DePIN is to extend the sharing economy to computation. Rather than restricting compute provision to a few tech oligarchs like Amazon, the DePIN movement is democratizing that access. BTC and ETH PoW were the early permissionless players in decentralized computation, but over time BTC mining became dominated by a few miner oligarchs who could source ASICs, and ETH obviously moved to PoS. What we’re seeing with DePIN today is a renaissance of this idea, driven mostly by surging investment in and demand for AI.

The idea of decentralizing computation isn’t a new one. Prior to blockchain I remember Folding@home, which famously ran as an app on the PlayStation 3. Even before that I suppose we had Napster (yeah, I’m getting old). What blockchain brought to the scene that these foundational apps lacked was the potential to compensate participants monetarily. I recall learning about Golem when I was first entering this space. Their idea was to create a GPU marketplace and use the blockchain for peer-to-peer payments. They were one of the darlings of the 2017 ICO era and… they’re still going, I guess?

Golem was probably visionary but too early. It sucks being right but too soon™ but it happens to the best of us sometimes. I think the opportunity is now ripe for another attempt at this concept because with AI we’ve discovered a new, practically infinite pile of work. Right now that is creating huge moves for players like Bittensor, IO, and Akash, which serve as a sort of computation broker, but I think we’ll also see success for task-specific products like Render Network for rendering (duh) and Helium for decentralized wireless. Just around the corner we also have teams like Gensyn, which are doing AI model training, and innumerable AVSs, which will change the collateralization model (see restaking).

Before we worry about what network they run on, what task they’re solving, what collateral they take, their tokenomics, etc., all DePIN projects share a common challenge: the work itself is done off-chain, so how do we verify it was done honestly? If I ask you to render something and it comes back solid black, was the problem in the rendering or in the task description? I call this the verification problem, and it’s the first thing I ask about with every DePIN project I evaluate.


The Verification Problem

So how do we verify off-chain work? There are three schools of thought, which I’ll cover below. The first school of thought is the oldest and is basically at the heart of every oracle you’ve ever known: subjective consensus. I summarize this with the immortal quote pictured below.

This applies when you can’t figure out what a proof would look like for the task being done. Sometimes this is because the task is state-dependent, such as finding the nth prime, and so verification would basically just have to do the work over again. Sometimes the task is non-deterministic. For example, there can be latency differences between callers, so each caller can see slightly different answers from an API such as a price feed based on exactly when the query is processed. It’s impossible to synchronize, so the answer looks non-deterministic even amongst honest nodes.

Either way, the fundamental assertion such systems make is that ultimately money is truth, and we’re going to let people with money fight it out. This works as follows.

  1. Worker nodes calculate a number and make an attestation about that number to the network.
  2. Verifiers calculate a reputation score for the workers in any arbitrary way. This is task-agnostic, which is why this approach is so flexible.
  3. The system reconciles both the worker answers and the verifier reputation scores. The workers get paid according to the verifiers that backed them, and usually the lion’s share of the rewards goes to the fastest worker. For verifiers, the stake-weighted median answer determines the subjective truth, and each verifier’s distance from that median determines how much it is compensated or slashed (see the sketch just after this list).
  4. If the system is wrong enough, the last resort is to fork it, override whatever false truth was asserted, and slash those involved in asserting it.
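To make step 3 concrete, here is a minimal Python sketch of the reconciliation, assuming a toy linear reward rule; the function names and the max_slash parameter are illustrative, not any specific protocol’s logic.

```python
# Toy reconciliation for a subjective-consensus system. The reward rule and
# parameters are illustrative, not taken from any particular protocol.

def weighted_median(answers: list[tuple[float, float]]) -> float:
    """answers: (value, stake) pairs. Returns the stake-weighted median."""
    total = sum(stake for _, stake in answers)
    running = 0.0
    for value, stake in sorted(answers):
        running += stake
        if running >= total / 2:
            return value
    raise ValueError("no answers submitted")

def settle(answers: list[tuple[float, float]], max_slash: float = 0.1):
    """Reward verifiers near the median, slash those far from it."""
    truth = weighted_median(answers)
    spread = max(abs(value - truth) for value, _ in answers) or 1.0
    outcomes = []
    for value, stake in answers:
        distance = abs(value - truth) / spread  # 0 = agreed, 1 = furthest off
        # Linear payout: full reward at the median, full slash at the extreme.
        outcomes.append((value, stake * max_slash * (1 - 2 * distance)))
    return truth, outcomes  # positive = reward, negative = slash

# Three verifiers report a price; the heavily staked cluster wins.
truth, outcomes = settle([(100.0, 50.0), (101.0, 30.0), (250.0, 5.0)])
print(truth)     # 100.0
print(outcomes)  # the outlier at 250.0 gets slashed
```

Note that nothing in this rule proves who was right; it only measures agreement, which is exactly the weakness discussed below.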

This is the basic design of everything from age-old systems like Augur and Chainlink to newer systems such as Bittensor and Arbius. Sometimes the miners can serve as the verifiers because they redid the work in parallel. I personally find duplicating work to be a capital-inefficient design.

The problem with this approach is that the system has no actual proof of who is right, and therefore it can arrive at wrong answers and, in the worst case, cataclysmic system forks. Instead of proof, these systems rely on fallible voting and escalation mechanisms. Fundamentally, these systems assert that money is truth. Because those holding money usually have the agenda of making more money, we can expect them to act in a profit-maximizing way, not an honesty-maximizing way. If there is no objective Schelling point to coordinate behind, and care is not taken to prevent answer copying, nodes will just begin to copy the votes of the largest, most capitalized/influential voters, and the system degrades into centralization.

The solution to this concern is to gather some type of objective proof that verifiers can use when grading. People love stomping on evil when given the chance and the belief that they can win. Proof serves as the Schelling point people can coordinate around, giving individuals the belief they can win against better-capitalized entities. This proof can take two forms: statistical and formal.

Bitcoin hashing is the most famous example of statistical proof of work. You don’t submit every hash you compute to a mining pool or the network. You submit only the best hash you have, and from that the mining pool can infer the amount of work you statistically must have done to find it and give you pro-rata credit. The final hash is a concise statistical proof of a much larger body of hashes you must have computed, and it can be verified in constant time.
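As a back-of-the-envelope illustration of that inference (a sketch, not pool-accounting code; real pools credit shares against a share difficulty), here is how a single best hash bounds the expected work:

```python
# Estimate the work behind a reported best hash. SHA-256 outputs are
# (modeled as) uniform over [0, 2^256), so a hash of value h or lower
# occurs once per 2^256 / (h + 1) attempts in expectation.

HASH_SPACE = 2**256

def expected_hashes(best_hash: int) -> float:
    """Expected number of hashes computed to find one this small."""
    return HASH_SPACE / (best_hash + 1)

# A best hash with ~40 leading zero bits implies roughly 2^40 (about 1.1
# trillion) hashes of work, verifiable from that single 32-byte value.
print(f"{expected_hashes(2**216):.3e}")  # ~1.100e+12
```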

A common pattern for statistical proof I’m seeing more of lately is sampling. This applies to things like rendering and ML training, wherein the output is both deterministic and embarrassingly parallel. The concept here is similar to the Panopticon: given that you don’t know when the guards are looking, you always have to behave as if they are. It achieves security with less work/cost. For DePIN with sampling, each worker doesn’t know which segment of the embarrassingly parallel task will be replicated during verification, and given that they have some collateral to lose if they misbehave, the optimal action for each worker is just to do the work honestly. All we have to do is ensure that the penalty for cheating times the probability of being caught is greater than the reward for cheating. This is how Render Network tackles verification of rendering tasks; verifiers can raytrace random individual pixels of the image, and if those match, it is statistical evidence that the entire process was completed honestly. DA (data availability) sampling is another promising example of this: thanks to erasure coding, only a small percentage of the data needs to be requested to know that all of the data can be reconstructed.
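Here is a quick sketch of that inequality under assumed, illustrative parameters (the segment count, sampling rate, and collateral are all made up):

```python
# Is cheating rational? Cheating only pays if the expected slash
# (probability of being caught x collateral at risk) is smaller than the
# gain from skipping work. All parameters here are illustrative.

def cheating_is_unprofitable(segments: int, sampled: int, collateral: float,
                             gain_from_cheating: float,
                             cheated_segments: int = 1) -> bool:
    # P(at least one cheated segment lands in the verifier's sample),
    # using a with-replacement approximation for simplicity.
    p_caught = 1 - (1 - cheated_segments / segments) ** sampled
    return p_caught * collateral > gain_from_cheating

# Sampling just 50 of 1,000 render tiles catches a single faked tile with
# probability ~4.9%, so 100 units of collateral outweighs a 1-unit saving.
print(cheating_is_unprofitable(segments=1000, sampled=50,
                               collateral=100.0, gain_from_cheating=1.0))  # True
```

Raising the sampling rate or the collateral requirement both widen the safety margin; the designer just has to keep the left side of that inequality bigger than the right.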

Formal proofs of work are rarer. You see them when the work itself can be used to derive a proof that is verifiable in near-constant time, usually when a zk-proof can be efficiently constructed in addition to doing the actual work. The effort that goes into the zk-proof may be quite substantial, though, and zk-proofs in general are cutting-edge research, so I don’t see very much of this yet. In the long term, though, zk-proofs will rule the world, so look for more of this in the coming years.

Once you establish the nature of the work for a DePIN system and how that work will be verified, you’re 80% of the way there. At the very least you’ll be able to filter out a lot of bad designs, which is why I look at that first. IMO, stay away from any system that requires full replication of work to function; they are destined for the dustbin of history.


Everything Else

The remaining 20% of the design comes from the chain they use and how it interacts with the off-chain systems. I personally look for friction-minimized designs. For most generic tasks I’d suggest using an Ethereum L2, just because that is the easiest way to onboard the capital that node operators will need to join your network. Similarly, I’d suggest allowing account-abstracted billing rather than forcing utility-token friction on your compute buyers. Just reduce friction everywhere you can. If you’re worried about costs here, sequencing and execution consensus for an L2 post-blobs (EIP-4844) is now in the sub-cent range for networks like Starknet, and that should be good enough for most applications (again, zk-proofs will eventually rule the world). I will carve out an exception to this rule for some specific tasks that can be the basis for DePIN chains, which I describe below.

On the off-chain integration end, the main difference I see is whether the work description is submitted to the chain as data or whether this coordination happens entirely off-chain, reducing the on-chain contracts to managing node-operator collateral, slashing, and payments. On this the jury is still out. On-chain task coordination can reduce the possibility of verifiers unfairly favoring specific workers, but it comes with higher DA costs.


DePIN Chains

There is a special category of tasks above that I think has an interesting application worth calling out. In addition to being at least statistically provable, these tasks:

  1. Are practically infinite.
  2. Are parallelizable with checkpoints.
  3. Require no dispatcher to coordinate.

Usually these tasks feel a lot more like discovery than mere computation. Bitcoin hashing definitely has the feel that everyone is “searching” for the next block rather than solving some user-submitted job. Other examples include protein folding, charting the stars, and notably AI training.

The reason these tasks are interesting to me is that solving them can be a source of Sybil resistance, which can in turn be used to build DePIN chains. I’ve written many times about Sybil resistance, how difficult a problem it is, and how integral it is to understanding some of the weirdness that is crypto. Once you establish Sybil resistance, reaching consensus in a blockchain is just network gossiping at scale and voting on a sequence of blocks; majority rules. So how do we establish Sybil resistance? Every attempt I’ve seen involves either scarcity or attestation. Here’s a non-exhaustive list:

  • Proof-of-work (PoW). Proof of having spent something scarce such as energy.
  • Proof-of-stake (PoS). Proof of possessing something scarce such as capital.
  • Social attestations. Attestation by trusted entities that you are a unique actor (Proof of Humanity, BrightID, Worldcoin, etc.).

Of these, the least gameable/evil so far seems to be PoS. Ethereum is 99.95% more energy efficient than Bitcoin, and it can’t be captured by bot networks or corrupted by poor people willing to sell a digital identity for $10. It does nothing for wealth inequality, but that isn’t its purpose; crypto enshrines equal opportunity, not equal outcomes. The thing is, the energy use of PoW is only criticized because the energy isn’t being spent on something inherently productive to humanity; it’s just solving meaningless hashes. Otherwise, PoW chains can be more open to anonymous contribution and allow participation with less capital investment than, say, 32 ETH. These are both very crypto-native virtues that PoS chains are commonly criticized for lacking.

Swapping out the nature of the blockchain work for something more useful is not an entirely new concept. It’s actually an old one that just never caught on. The first example I’m aware of is Primecoin. If you’ve never heard of it, that’s because it has a market cap of about $500k and dates back to 2013. It’s just one of many failed coins from that era. Regardless, the concept is simple: instead of computing random hashes, find chains of primes for each block. Primes have inherent value outside of the blockchain, so their discovery makes the work done to establish Sybil resistance for the blockchain less wasteful. There are many reasons this particular chain didn’t catch on, but among them is that the difficulty of discovering and even verifying the primes doesn’t scale very well, and a prime in and of itself isn’t monetizable. However, several forms of DePIN are.

This enables a useful-PoW chain to have potentially no inflation. Instead of receiving BTC, you mine for and receive something like a share of an AI model you helped to train. As long as there is steady demand for the underlying DePIN service’s work, this offsets the security budget that issuance would usually fill. The waste of the system is reduced to the time spent verifying the work done, and that can be offset by the transaction fees of the network being secured. Basically, you swap out the consensus layer of the blockchain, leave the execution layer as is, and add blockspace revenue to the DePIN workers.

Ethereum is currently emitting about 750k ETH per year. At $4k a coin, that’s a $3B security cost that could be eliminated.
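For a rough sense of that claim, here is the arithmetic with the Ethereum figures above and an entirely hypothetical DePIN revenue number:

```python
# Toy comparison: issuance-funded security vs. demand-funded security.
# Ethereum figures are from the text; the DePIN numbers are hypothetical.

eth_issuance = 750_000            # ETH emitted per year
eth_price = 4_000                 # USD per ETH
security_budget = eth_issuance * eth_price
print(f"Issuance-funded security: ${security_budget:,}")  # $3,000,000,000

# A useful-work chain pays its miners out of DePIN service demand instead,
# leaving only the verification overhead as true "waste".
depin_demand = 3.5e9              # assumed annual demand for the useful work
verification_overhead = 0.02      # assumed fraction of revenue spent verifying
print(depin_demand * (1 - verification_overhead) >= security_budget)  # True
```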

Conclusion

Today we are seeing a Cambrian explosion of these less general, task-specific verification systems coming online, built over the last bear market. There are purpose-built compute brokers for just about anything a computer can do for you. Filecoin and Arweave are tackling data storage. A plethora of AI tools and accelerated compute brokers like IO and Akash are tackling anything you want to do with a GPU. Helium is doing wireless. Theta and Livepeer are doing video streaming. Audius is doing music streaming. Bringing this all together, we have abstracted payment systems like Nevermined and enterprise-grade compute providers like CETI. It might have taken 7 years to build, but I love it when a plan comes together.