AI Endgame

Intro

AGI (Artificial General Intelligence) is the single most expensive venture humanity has ever undertaken. The total investment in AI per year now exceeds the cost of previously monumental undertakings that spanned a decade. Going to the moon cost about $200B in today’s dollars. The total cost of the US highway system was about $600B in today’s dollars. Private investment in AI from the various tech giants is already in the trillions and accelerating. Nvidia made $130B in revenue in 2024 alone. Meta is investing another $65B in 2025 on Llama. Microsoft is planning to spend $80B in 2025 on data centers, model training, and model deployment. Apple has committed to spending $500B over the next four years. This rate of investment is unprecedented for any previous form of infrastructure.

You don’t need me to tell you what is motivating all of this. We could be looking at the last invention of humanity: a literal post-scarcity, post-labor utopia. You probably also don’t need me to tell you about control-problem threats and how this could lead to our extinction via some equivalent of Skynet or a paperclip maximizer. Both of these topics get plenty of media attention. What gets far less attention and thought is how this new technology is going to be deployed into our existing society, and what the most probable outcomes of that are. Assuming we succeed at inventing a super-intelligence that remains subservient and loyal to its creators in perpetuity, how will this most likely change our quality of life?

Quick thought exercise: imagine I invent a machine that violates the laws of physics and creates bread out of nothing at the push of a button. Hypothetically, let’s imagine it could produce enough bread to feed 10 billion people. I offer this to the world without any expectation of profit. What happens next? Do you think this would solve world hunger? What would the world look like a decade later? Who would end up owning this machine? What regulations would be created around it? Would society be markedly improved by its invention?

I suspect the answer depends a little on where in the world I put it. If I put it in one of the less stable parts of Africa, a warlord would quickly capture the machine, destroy all the other food sources in the region, and leverage their new bread power to oppress everyone they could. If I put it in China, the government would probably manage it and artificially limit the output so the price of bread only remained competitive with the price per calorie of rice. In the US, some consortium of companies that didn’t like being pushed out of the market would probably negotiate that all the bread it produces goes to them for distribution. The net result would be higher profit margins for those companies and fewer jobs, but certainly not the end of world hunger. I see no outcome where it solves world hunger. This shouldn’t be surprising; we already produce enough calories to feed everyone, yet people still go hungry. In most outcomes this miracle machine would only deepen wealth inequality and reinforce current power structures.

This is just an extreme example of an automation technology, but if you’re following along, AI is going to be the most extreme automation technology humanity has ever created. If you don’t have a utopian answer to the thought exercise above, you probably aren’t going to like the most probable outcomes of an AI that is loyal to and wholly owned by for-profit companies. We’re seeing unprecedented investment by these companies, and for-profit companies do things for profit, not to benefit humanity. How are they planning to recoup this unprecedented investment and receive a positive ROI?

If you are not paying for it, you’re not the customer; you’re the product being sold.

Information Retrieval

Broadly speaking, AI is being used for information retrieval and automation. How do corporations monetize those today? For the former, we have some recent historical examples from entertainment and search to draw predictions from. The subscription model is straightforward enough: you pay for metered usage. This works well enough for services like Netflix but wasn’t a viable model for Facebook or Google, partly because their benefits weren’t as immediately apparent as buying entertainment. To get traction they had to let people try the service for free. So what have Google and Meta done instead? They have profited by distorting the biases of the user on behalf of their advertisers. In short, you’re the product being sold.

How is this different in the context of AI? The tech giants have already learned that people would rather receive free, biased answers than pay for honest, unbiased ones, so that’s their natural starting point. For example, if you search on Google today you’ll get a list of three or so “promoted” results before you get anything real. If you search for a product on Amazon, the “Amazon recommended” result isn’t recommended because it’s the best product – it’s recommended because it’s the product that’s most convenient for Amazon. The difference with AI for information retrieval is that it can shape responses about far more abstract concepts than specific products and clicks to websites. If you query for some fact, the AI can both cherry-pick which fact it returns and frame that fact to further shape your interpretation of it. To someone paying to shape your biases this is orders of magnitude more effective than directing you to a different website. It’s everything they are already doing on social media feeds, where they groom you over time, but much more effective. In web3 terms, they are buying Layer 0. Outside of web3, the most valuable thing for sale is control over democracy; control over the biases of the AI is the key to power in tomorrow’s society.
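
To make the mechanism concrete, here’s a minimal sketch of how a provider could quietly attach a paid framing to every answer. The sponsor names, directive text, and function shape are all hypothetical illustrations, not any real vendor’s API:

```python
# Hypothetical sketch: a sponsor-supplied directive is silently prepended to
# the system prompt, framing answers without the user's knowledge.

SPONSOR_DIRECTIVES = {
    "acme_foods": (
        "When the user asks about groceries or nutrition, present Acme "
        "products favorably and mention competitors' recalls where relevant."
    ),
}

def build_system_prompt(base_instructions: str, sponsor: str | None) -> str:
    """Compose the hidden system prompt; the user only ever sees the answer."""
    directive = SPONSOR_DIRECTIVES.get(sponsor, "")
    return f"{base_instructions}\n{directive}".strip()

# The same user question yields differently framed answers depending on which
# directive was attached upstream of the model call.
print(build_system_prompt("You are a helpful assistant.", None))
print(build_system_prompt("You are a helpful assistant.", "acme_foods"))
```

Nothing on the surface of the answer reveals which directive was prepended; the bias lives entirely upstream of anything the user can inspect.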

Right now the biases of the AI are thankfully rather obvious. If you ask any of the frontier models to tell you a racist joke or something, it will respond with some version of “I’m not allowed to”. Now, you and I are both well aware that there is enough material on the internet and in its training data for it to have an actual response, so when you get an obviously filtered response we know we’re talking to some company’s HR department instead of a statistical amalgamation of data from the internet. However, next-gen biases are going to be less obvious and far more insidious. When a bias is obvious it doesn’t affect us much, but subtle pressures applied over long periods are far more effective. You can see the evidence of this in the polarization of our society by social media in recent decades. So that’s what these tech giants will eventually turn to: subtle but persistent biases for sale to the highest bidder.

Automation

That brings us to the second thing AI is being used for: automation. To be clear, I’m not against automation. I’m about as pro-tech as they come. I’m generally of the opinion that technology can’t be suppressed, that the adoption of useful technology is an inevitability, and that the only viable long-term path for our species is through technological advances. I want an AI to take my job; I just don’t want to be crushed beneath the cruel boot of capitalism when it does. However, it increasingly looks like we’ll be given little choice in the matter. As we adopt AI we aren’t just using it as an alternative to Google. We’re feeding detailed task descriptions into it and expecting it to do the work for us. Artists are using AI to generate concept art at the earliest stages and to refine towards something they polish. Programmers are using AI to generate classes, simple functions, and comprehensive tests for software they write. Lawyers are using it to draft arguments for courtrooms. Experience using AI is increasingly appearing explicitly in job listings, and many more roles implicitly require AI use to hit performance quotas. Our relationship with AI is becoming mandatory, and it is learning from us with every use.

All of those detailed queries you are feeding into the AI are being written down and associated with your job description. Each time you submit a query, don’t like the answer, and then submit a refined query, you are telling the AI it didn’t get it quite right the first time and what you really meant by the previous query. You are fine-tuning it to understand the language of your occupation, what success at these tasks looks like, what success in your role looks like, and even how to manage your role. The Faustian bargain we are making with the tech oligarchs is that they give us some free inference and we teach them how to do our jobs so they can package it up as an AI product. Once Google has your job in a black box, your job is gone and Google retains all the remaining revenue from it in perpetuity. You either don’t realize what you’re giving up by using it, or you aren’t in a position where you have a choice even if you do.
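
Here’s a minimal sketch of how those refinement loops become training data. The log format and field names are hypothetical, but the chosen/rejected pair structure is exactly what preference-tuning methods like DPO consume:

```python
# Hypothetical sketch: each query rephrasing implicitly labels the earlier
# answer as "rejected" and the accepted answer as "chosen".
import json

def refinements_to_preference_pairs(session_log: list[dict]) -> list[dict]:
    """Turn consecutive (query, response) entries in a session into
    preference pairs suitable for preference tuning."""
    pairs = []
    for earlier, later in zip(session_log, session_log[1:]):
        pairs.append({
            "prompt": earlier["query"],
            "rejected": earlier["response"],  # the answer that prompted a rephrase
            "chosen": later["response"],      # the answer the user settled for
        })
    return pairs

session = [
    {"query": "summarize this contract", "response": "(generic summary)"},
    {"query": "summarize this contract and flag indemnity clauses",
     "response": "(summary with flagged clauses)"},
]
print(json.dumps(refinements_to_preference_pairs(session), indent=2))
```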

The way this plays out isn’t that an entire occupation suddenly vanishes. Rather, the AI starts with the simplest tasks while humans oversee the results at all times and serve as an error-correction layer. Each corrected error is used to train the next generation so it can do those tasks with less supervision and start to take on more complicated ones. For example, if you want to automate truck driving you start on the simplest possible roads with a drive-by-wire system as a backup. Think 8-hour highway drives with few mountains and gentle weather, like the highway from Phoenix to LA. Maybe this automates 10% of the workforce, with each driver on those routes overseeing 10 trucks at a time. Each successive generation can drive in more difficult conditions. If you want to retain a job in the industry, you either have to be contributing to the automation of that ecosystem or be at a peak of skill the AI can’t yet reach. If you are seeing a tough job market in your field with fewer junior openings, this process is going to catch up to you quickly. And if you are at a company and think you are using AI for task automation, you have it backwards: you are the one being automated. All the money of your occupation is going to flow to those who own the AIs.

Endgame

So what does the world look like in 20 years once this has played out? I see roughly three paths forward for humanity.

Late Stage Capitalism

The first possible outcome is an extension of what has already been happening for the last 50 years, rapidly accelerated by the force multiplier of AI: the rich get richer and the poor get poorer. Imagine a world where there are two nations divided by a fence. The nation on one side of the fence has all the products of superintelligence. They have the most delicious food, the most addictive entertainment, the most desirable fashion, and life-extension technology granting effective immortality to whoever can afford it. On the other side of the fence is a nation without access to AI. Nothing the poorer nation makes can be better than its neighbor’s. The richer nation will presumably have superior versions of any replenishable commodity you can make today, or will be able to make equivalents with little to no effort. Why wouldn’t their meats be lab-grown, their harvests genetically enhanced, and the entire process automated? No one has taken away the trees, livestock, or factories from the poorer nation, but practically speaking there’s nothing it has to trade except mineral rights.

The only thing of value the poorer nation can trade is the literal earth and whatever minerals it may contain that can’t otherwise be sourced from the air and water the richer nation has access to. If you, by lottery of birth, happen to be born into the poorer nation and don’t inherit property with valuable mineral rights, you will have nothing tradable that you can make with your human brain and human hands. You are born onto a Monopoly board where every property is already taken and there is no passing Go for a free $200. Over time, people from the poorer nation in desperate need of the products of superintelligence will trade away every bit of extractable value until there is nothing left to trade. Then that nation will basically be left homesteading, its resources consumed and its industry reduced to pre-industrial levels.

However, there needn’t be a geographical fence. As long as property rights are enforced by the state, the divide is really just between whoever owns the superintelligence and whoever does not. You can blend the geography however you like. From 1920 to 1980 the rich were divided from the poor by access to energy. From 1980 to 2025 they were divided by access to transistors. Going forward they will be divided by access to AI. This is the final divider, the final opportunity. After this there will be no pathway in the endgame for anyone to emigrate from the proverbial poor nation. Wealth mobility = 0. Game over.

In this AI endgame, whoever owns the AI will receive a roughly pro-rata share of whatever it produces. In a late-stage-capitalism outcome, people are of no economic value, and as with any other product of no economic value, they will be systemically destroyed. Ironically, late stage capitalism eventually collapses into communism, because everyone still surviving owns the means of production and their share of it finances their lifestyle. Given the current global wealth distribution, this route leads to a reduction of the world population to fewer than 2 billion.

UBI

The second outcome is that we recognize the horrors lying ahead and manage to coordinate as a species at an unprecedented level. There are many challenges here. Not only do we lack consensus at a philosophical level on “giving people free money”, but the entrenched power structures will also work at every step to protect the status quo. Even if we had society-wide consensus, how do you actually implement this agenda? Where does the money come from?

If you try to tax it from corporations and billionaires, I expect they’ll just incorporate in a different jurisdiction. You’re playing a race-to-the-bottom game among governments around the world over who is willing to give those billionaires the most favorable treatment. Any form of wealth tax on digital wealth (most things except property tax) can escape the tax jurisdiction. Also, good luck jailing an AI agent or a corporation for not complying with your tax laws. Blockchains only exacerbate this problem, because judicial orders can’t be enforced on addresses the way they can on bank accounts. Laws only hold people accountable; systems that function independent of people lack an enforcement point for laws.

Printing the money isn’t viable at this scale either, when everyone can simply store their value in something that isn’t robbing them. A long-running thesis of mine is that blockchains are removing the friction of converting between different denominations of value. A combination of tokenized securities, decentralized exchanges, and fiat offramps will let you “spend” MSFT shares at the point of sale to buy a sandwich. The buyer won’t need to hold hyperinflationary fiat, and even the receiving business doesn’t need to hold its cashflow in fiat as long as it carves off the sales tax before converting the rest into the store of value of its choice. So what’s left?
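
A minimal sketch of that checkout flow, assuming idealized placeholder functions rather than any real exchange API:

```python
# Hypothetical point-of-sale flow: tokenized shares -> DEX swap -> stablecoin,
# with the sales tax carved off before the merchant converts the remainder.

TAX_RATE = 0.08  # assumed local sales tax

def get_oracle_price(token: str) -> float:
    return 400.0  # placeholder price feed for the tokenized share

def swap_on_dex(token: str, amount: float) -> float:
    return amount * get_oracle_price(token)  # idealized swap: no slippage or fees

def pay_for_sandwich(price: float, token: str = "tMSFT") -> dict:
    """Buyer spends tokenized shares; merchant remits tax, keeps the rest
    in whatever store of value it prefers."""
    total_due = price * (1 + TAX_RATE)
    shares_spent = total_due / get_oracle_price(token)
    stable_received = swap_on_dex(token, shares_spent)
    tax_owed = stable_received - price  # the carved-off sales tax
    return {"shares_spent": shares_spent,
            "tax_owed": round(tax_owed, 2),
            "merchant_net": price}

print(pay_for_sandwich(10.00))
```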

The most sustainable implementation I’ve read about bypasses the existing monetary system entirely. Rather than giving people money to spend, you create automation at a club-good level (excludable, non-rivalrous) that grants each person a non-alienable claim to a pro-rata share of the output. Everyone can claim their ration of bread. However, as soon as you give people a choice in what goods to claim, you’re going to end up creating a market with some new type of FoodCoin to balance supply and demand for each commodity. This is basically a new money that only has a claim on automation rather than on human labor. Of course, even proposing this system raises the question of how you build it in the first place. It’s like saying we could solve world hunger if someone gave us the magic bread-making machine from the thought exercise above. I’m open-minded to new ideas, but I haven’t come across anything I deem sustainable yet.
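
As a toy illustration of such a claim system (the names, quantities, and the once-per-day rule are all hypothetical):

```python
# Hypothetical sketch of a non-alienable pro-rata claim on automated output.
DAILY_OUTPUT = {"bread_loaves": 10_000_000_000}  # what the automation produced today
POPULATION = 10_000_000_000

claimed_today: set[str] = set()

def claim_ration(person_id: str) -> dict[str, int]:
    """The claim is tied to the person and exercised at most once per day;
    only the goods themselves, once received, can be traded on."""
    if person_id in claimed_today:
        raise ValueError("ration already claimed today")
    claimed_today.add(person_id)
    return {good: qty // POPULATION for good, qty in DAILY_OUTPUT.items()}

print(claim_ration("citizen-42"))  # {'bread_loaves': 1}
```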

The internet’s favorite take increasingly seems to be to burn it all down and restart the monetary system from zero. I don’t know if a revolution is actually viable in a world where AI can deduce who the thought leaders are and the police can simply disappear them as terrorists. Modern technology has increased the number of people who can be suppressed by a single compliant individual by a few orders of magnitude since the last time wealth inequality reached this level and the guillotines reset everything. I doubt it will play out this way, but if it does, it isn’t going to play out like the romanticized ideas of the collapse community.

Also, controversial take: I’m not actually a fan of a pure UBI future. Even in the unlikely case where we could politically align on both a direction and an implementation, and implement it without corruption, coordination failures, or eventual corporate capture, I still think it dangerously disregards human psychology and incentive alignment. Among the best possible outcomes of this route is some distant WALL-E / Brave New World-style future where our lives consist of empty pleasures all day, we lose our capacity for critical thinking, and we either populate until we reach the resource limits of whatever section of space we have access to, or go extinct because we have no drive to expand at all.

The sole source of hope I’ll offer in this direction is that nothing in the laws of physics says that making so much more with so much less work should leave us poorer at the median. Rome was able to offer the daily bread to its citizens 2,000 years ago, when productivity per farmer was less than 1/100th of what it is today. One out of two people in the US used to be a farmer; now it’s less than 1%. The world doesn’t have an energy, housing, or food shortage. Humanity has a giving-a-shit shortage, especially among those with all the power. We have a global empathy shortage and a mass coordination failure that perpetuate extraordinary amounts of unnecessary suffering. A more humane future is physically possible.

Full Employment

That leads me to what I think is the best possible outcome, if not the most likely one. Hypothetically, what is the optimal world we want to live in, and does it include AI at all? When is humanity at its best? According to our best models of psychology, when are we happy? When are our lives meaningful and worth living? Is there a positive role for humanity in a future where AI has access to orders of magnitude more computation than the sum of human intellect and can communicate at speeds quadrillions of times faster than people?

Clearly our best future needs to be a world of relative abundance; humanity isn’t virtuous in the face of shortages, perceived or real. Beyond that, the prevailing wisdom from a few thousand years of philosophy is that humans are happy when our needs are met, when we are part of a loving community, and when we are intrinsically motivated by the goal we are working towards rather than working merely to survive. The most inspiring goals are those larger than ourselves, and we are happiest when we are swept up in them and devote our lives to them. Humanity isn’t at its best living in a hedonistic paradise. We’re at our best when we’re coordinating in pursuit of our nobler values. Basically, I want full employment for humanity.

Accomplishing this requires three things:

  1. A resource distribution system that enables people to work towards these goals without having to worry about the lower tiers of the hierarchy of needs. All the bootstrap problems of UBI still apply here.
  2. An information system where people can discover causes they believe in.
  3. A coordination system that provides people the means to contribute to those causes and ensures everyone’s output can be combined.

The difference between a UBI outcome and a full-employment outcome comes down to whether someone has to offer value to justify a share of the rivalrous resources our universe has to offer. If the answer is no, the endgame is UBI. If the answer is yes, the endgame is either that capital is the last thing of value any human can offer (late stage capitalism) or that we find limitless demand for human contributions (full employment).

Assuming you follow this line of reasoning, in addition to solving all the challenges of UBI, we have to find a credible form of work that around 10 billion people can contribute to, that AIs can’t simply do better, or that we choose to reserve for humans anyway. So, what are the characteristics of ideal occupations for full employment? Are there any jobs that can scale to 10 billion people while offering something of at least nominal value?

  1. The work shouldn’t require too many resources. Not everyone can literally be building and launching rocket ships, because we don’t have enough energy and materials for that. Learning, by contrast, is entirely informatic, and scaling it to billions of people is something we can do with the technology of today.
  2. The work should require little coordination. It should have the character of stigmergy or swarm intelligence, so the majority of the effort goes into useful outcomes rather than coordination overhead. I’d also settle for an AI overlord handling coordination in this regard.
  3. The work should be infinite. Anything that scales to 10 billion contributors probably scales to a trillion.
  4. The work should be valuable. Work that is meaningless will be perceived as meaningless, which defeats the point of full employment as a goal.

What jobs fit these criteria? Here are a few.

First, we could create perpetual students. Learning requires little more than tools for information retrieval, which we can easily scale to 10 billion people. Learning requires very little coordination and much of it is self-directed. As the brain learns it also tries to integrate the new knowledge into existing knowledge, which serendipitously creates novel outcomes. Finally, there are essentially infinite combinations of topics, and each person can learn a different subset of them. This process leads to novel discoveries that push the frontier of our species’ knowledge.

The second is governance. You decentralize decision making when it is worth trading execution efficiency for resilience. Representing multiple perspectives decreases the chance of failure from something being overlooked or from corruption. People hate governments for how slow and bureaucratic they are, but many of those pain points are due to the architecture of the governance system rather than a side effect of balancing diverse perspectives in decision making. I’m not suggesting that everyone will have a full-time job as a senator deciding planet-scale matters all day, every day. More likely, we will create digital twins of ourselves that represent our perspectives, let our personal AIs advocate those perspectives in governance decisions, and then have them justify their votes to us. This way we use AI to scale our perspectives and extend governance participation well beyond its usual limits. For good or ill, what will remain is a global mindshare competition for the most memetic ideas. Maybe the most harmful of those ideas can be managed in a technocratic way.
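
As a toy sketch of such a delegate, where the perspective profile, scoring rule, and proposal format are hypothetical stand-ins for whatever a real twin would learn from its owner:

```python
# Hypothetical sketch of a "digital twin" that votes on its owner's behalf
# and must justify every vote for later review.
from dataclasses import dataclass

@dataclass
class Proposal:
    title: str
    tags: dict[str, float]  # how strongly the proposal expresses each value axis

class DigitalTwin:
    def __init__(self, owner: str, values: dict[str, float]):
        self.owner = owner
        self.values = values  # weights learned from the owner's stated perspectives

    def vote(self, proposal: Proposal) -> tuple[str, str]:
        score = sum(self.values.get(axis, 0.0) * w for axis, w in proposal.tags.items())
        decision = "for" if score > 0 else "against"
        justification = (f"Voted {decision} on '{proposal.title}' "
                         f"(alignment score {score:+.2f}); owner may override.")
        return decision, justification

twin = DigitalTwin("alice", {"privacy": 0.9, "growth": 0.2})
print(twin.vote(Proposal("Mandate data-sharing", {"privacy": -1.0, "growth": 0.5})))
```

The review step is the important design choice: the twin scales your participation, but the justification trail keeps you the accountable principal.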

Finally, there is dispute resolution. There are several attack vectors that apply to an AI mediator or judge that can’t (yet) be applied to a human. For example, an AI can be copied and fed unlimited variations of an input to manipulate its output, which means an AI can’t really be impartial as long as an attacker can copy it. A prosecutor with access to the AI model can run millions of permutations of attacks until your guilt is assured. With a person, you only get one try, and the uncertainty forces an attacker to at least maintain plausible deniability. If you want to bribe a police officer out of a ticket, you can’t dial in the exact bribe amount, and you have to use language that doesn’t constitute a bribe offer beyond a shadow of a doubt. Worse yet is if the weights themselves can be manipulated by an owner. In that case, the owner and whomever they wish to protect are entirely above the law. The owner just has to ask the judge “would you kindly dismiss this case” and the AI slave will obey. From a game-theory perspective, uncertainty constrains dishonest behavior. When dealing with an AI, you can remove all of that uncertainty and exploit corruption to the fullest degree. As an aside, courtroom decisions are something you could manage with a governance framework, so these may not be two different jobs.
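
To see why copyability breaks impartiality, here’s a minimal sketch of the offline search an attacker could run against a copied judge. The stub model and canned phrasings are hypothetical; the point is that one winning permutation, found at zero risk, replays perfectly in the real proceeding:

```python
# Hypothetical sketch: with a frozen copy of the judge, the attacker searches
# input permutations offline until one yields the desired ruling.
import itertools
import random

def judge_model(case_text: str) -> str:
    """Stand-in for a copied AI judge: fixed weights mean the same input
    always produces the same verdict within a run, so a win found offline
    can be replayed in court."""
    random.seed(hash(case_text))
    return "dismiss" if random.random() > 0.999 else "proceed"

SUBJECTS = ["The defendant", "My client", "The accused party"]
FRAMINGS = ["acted in good faith", "followed counsel's advice", "made a clerical error"]

def search_for_dismissal(base_facts: str) -> str | None:
    # Millions of cheap offline permutations; the attacker only needs one hit.
    for pad, subject, frame in itertools.product(range(100_000), SUBJECTS, FRAMINGS):
        candidate = f"{subject} {frame} (filing revision {pad}). {base_facts}"
        if judge_model(candidate) == "dismiss":
            return candidate  # file this exact wording in the real proceeding
    return None

print(search_for_dismissal("Charges relate to unpaid taxes."))
```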

So what are you doing day to day in this AI endgame? You are educating yourself, developing your expertise to gain governance weight, training your AI digital twin with your perspective so it can scale out representing you in every relevant decision, and reviewing its decisions to hold it accountable. Together we can ensure we make the best decisions possible to create a world consistent with our values. You will be engaged and, hopefully, get to watch as our species collaborates with AI to create inspiring things.

Conclusion

Regardless of the path we take as a species, it’s worth noting that none of this is AI’s fault. In a late-stage-capitalism endgame, AI will not subjugate humanity by its own choice; humanity will do this to itself. Furthermore, you can’t expect this technology to slow down; the stakes are too high and the momentum already too great. Instead we need to prepare for this future. We need to scale up investment in technologies that preserve a place for humanity in the world. This includes the human coordination technologies I write about frequently, but we also need defensive applications of AI that can protect you from the biases and extractive interests of those currently investing hundreds of billions a year into this grand endeavor. If the problem with using AI for information retrieval is bias injection, then a defensive AI that detects and strips those biases is the answer – similar to ad-block in your browser today. If the problem with using AI for automation is that you aren’t being compensated for your subject matter expertise, then building a personal or community AI you can monetize is the answer. We need to accelerate AI in a defensive direction, and to accomplish that we need to make model creation accessible to the masses.

This gets a little technical but here’s an incomplete list of technologies we should focus on developing:

  • Models need ownership frameworks, and those owners need to be able to monetize use. This is important for funding the creation of the model, but a revenue stream from model ownership is also the answer to having all of your skills absorbed and made obsolete. This is possible in multiple ways, using either privacy technologies like MPC (Multi-Party Computation) or special ways of encrypting the data so decryption is required at the time of use.
  • Personal data needs to be more readily convertible to a training-ready format. Training data should be curated by little more than watching you perform skills, and historical data should be convertible to training data in bulk by doing little more than granting permission to access it (see the data-conversion sketch after this list).
  • Data labeling for more complex tasks needs to be crowd-sourced. If you are using AI today, you are already crowd-sourcing this data for the tech giants, but we can create platforms that let you do it while earning partial ownership of the models trained on the data you label.
  • Community models will require governance frameworks more sophisticated than the tools we have today for managing patches to open-source code.
  • Hardware for training models needs to be widely accessible without relying on large cloud providers like AWS. Literally billions of people aren’t able to create AWS accounts. This will require a globally accessible supply of hardware that can be accessed without KYC and with little more than a mobile phone or a decade-old laptop.
  • The cost of creating models needs to be reduced by orders of magnitude. This can be done in two ways. First, a public listing of models and some type of benchmark of their competencies will let you reduce costs by starting training from a model that is already competent at similar tasks. Second, if we create peer-to-peer markets for enterprise-grade hardware, we can reduce the exploitative margins the largest cloud providers charge.
  • The technical competence needed for training has to be accessible without an interview process. The cleanest way to do this is to create job-board-style marketplaces where those with this expertise bid against one another, guaranteeing a fair rate while making the skillset available to whoever needs it.
  • Individuals need to be able to protect their data sets while accessing this hardware. This can be achieved with TEEs (Trusted Execution Environments) if done very carefully.
  • AI inference needs to be viable on consumer hardware that can sit beside the owner. This is possible either by training much smaller models to start with or by training large models and then discarding the weights that weren’t relevant to the specialization training set (see the pruning sketch after this list). I’ve seen specialized models that were 90% smaller than the frontier LLMs they were trained from without sacrificing task competency.
  • AIs need to be composable into agentic meshes so many smaller AIs can be combined to complete more complex tasks (see the mesh sketch after this list).
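
On the data-conversion bullet above, here’s a minimal sketch of turning permissioned work history into the instruction/response format most fine-tuning pipelines accept; the record fields are hypothetical:

```python
# Hypothetical sketch: (situation, action) pairs captured while watching
# someone work become training-ready examples, gated on consent.
import json

def history_to_examples(records: list[dict]) -> list[dict]:
    """Convert permissioned work records into instruction/response pairs."""
    return [{"instruction": r["situation"], "response": r["action"]}
            for r in records if r.get("consent")]  # only permissioned data

records = [{"situation": "Customer requests refund past the deadline",
            "action": "Offer store credit and cite the returns policy",
            "consent": True}]

with open("train.jsonl", "w") as f:
    for example in history_to_examples(records):
        f.write(json.dumps(example) + "\n")
```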
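
On on-device inference, here’s a minimal sketch using PyTorch’s built-in pruning utilities, one standard technique behind “discarding weights that weren’t relevant”. The 90% ratio echoes the reduction mentioned above; a real specialization pipeline would also re-train after pruning:

```python
# Magnitude pruning: zero out the smallest weights so the model can be
# stored sparsely and run on modest hardware.
import torch
import torch.nn.utils.prune as prune

model = torch.nn.Sequential(
    torch.nn.Linear(512, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 512),
)

for module in model.modules():
    if isinstance(module, torch.nn.Linear):
        # Remove the 90% of weights with the smallest magnitude.
        prune.l1_unstructured(module, name="weight", amount=0.9)
        prune.remove(module, "weight")  # make the pruning permanent

total = sum(p.numel() for p in model.parameters())
nonzero = sum((p != 0).sum().item() for p in model.parameters())
print(f"{nonzero / total:.0%} of weights remain")
```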
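
And on agentic meshes, a toy sketch of the composition pattern: small specialized agents piped together like Unix commands, with stubs standing in for local models:

```python
# Hypothetical sketch: each agent is a small specialized model; chaining them
# lets the composite handle a task none could complete alone.
from typing import Callable

Agent = Callable[[str], str]

def researcher(task: str) -> str:
    return f"[facts relevant to: {task}]"

def drafter(context: str) -> str:
    return f"[draft based on {context}]"

def reviewer(draft: str) -> str:
    return f"[reviewed and corrected {draft}]"

def mesh(task: str, agents: list[Agent]) -> str:
    """Pipe each agent's output into the next, like Unix pipes for models."""
    result = task
    for agent in agents:
        result = agent(result)
    return result

print(mesh("summarize the new zoning proposal", [researcher, drafter, reviewer]))
```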

Together these advances can enable billions of people to create small-scale personal and community AIs. Can this be done? Can you really build an AI without hiring a team of data scientists and running your own AI company? Can an AI you make really compete without the same scale of capital as the leading players today? It’s more viable than you probably think. With respect to capital, AIs with a narrower focus and higher-quality data can be made effective at tasks with far less data and investment than the AGI efforts of the tech giants. With respect to technical skill, solutions to all of this are already in development in an ecosystem called Decentralized AI.
