Economies and Empires are Artificial General Intelligences

It is becoming increasingly obvious to me that global capitalism constitutes an artificial general intelligence. I suspect the same is true of past empires as well. Synthesizing these two beliefs, I conclude that human civilization, ever since the sedentary shift, has played host to a memetic ecosystem that has given rise to many different artificial general intelligences (empires), and that their rise and fall should be considered evidence against the orthogonality thesis, which states that intelligent agents can have arbitrary value systems.

Perhaps ancient people of faith were simply people who believed with probability 1 that the orthogonality thesis was false, and that intelligence and human values must be aligned over long enough time frames. If this alignment thesis is true, we can be confident that evil empires all have an expiration date, and that good systems must persist over long enough time frames merely because they are the systems capable of doing so.

Faith in the alignment thesis thus ends up being instrumental to ignoring the inanities and depredations of whatever system we happen to inhabit, and focusing instead on doing our best wherever we find ourselves. We are living in a world of luxury built largely by the collaborative efforts of our ancestors, most of whom dedicated much of their lives to the future, motivated by faith. I consider their success at accomplishing their goal (a better life for their descendants) to be strong evidence that their theoretical basis was at least partially sound. This essay argues that the patterns in the rise and fall of empires are more evidence against the orthogonality thesis.

I’ll first argue for the pieces of this idea, and then show how they fit together.

Is the economy really an artificial intelligence? 

My day job involves work at a Big Tech company, where I support machine learning engineers working on advertising. I help talented people from all over the world build elaborate mathematical machines. They have diverse backgrounds, they come from many different countries, and they grew up speaking many different languages.

What brought all of us together?

Was it our deep love for advertising? Nope – it was a desire to earn money. This desire powers a global market economy that delivers immense financial rewards to people who can help teach machines how to trigger emotional responses in humans. The global market economy is learning a detailed map of human beings’ deepest emotional layers by figuring out how to poke us in ways that make us respond.

The global economic machine is intelligently modifying itself. It is getting smarter.  It is teaching its individual components (i.e. human beings) how to learn, so that it can allocate even more resources to its learning processes.

If capitalism is an AI, then its substrate is the global network of trade. It even seems to use predictive processing, because markets are driven to eliminate uncertainty, just like our brains are. Investors who correctly anticipate future outcomes are rewarded for doing so; investors who fail to anticipate these outcomes often lose their money. Concepts that help human beings reduce surprise in their experience stick around in human brains.
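
To make the analogy concrete, here’s a minimal sketch of that surprise-minimization loop (my own toy illustration, not a model of any real market): an agent that loses money in proportion to its prediction error keeps revising its forecast to reduce future surprise.

```python
import random

def predictive_trader(prices, learning_rate=0.1):
    """Toy predictive-processing loop: keep a running forecast and
    update it to shrink prediction error. Low surprise is rewarded;
    persistent surprise is costly."""
    forecast = prices[0]
    wealth = 0.0
    for price in prices[1:]:
        error = price - forecast           # surprise
        wealth -= error ** 2               # bad predictions cost money
        forecast += learning_rate * error  # revise beliefs to reduce future surprise
    return forecast, wealth

# A slowly drifting market with noise (made-up numbers for illustration).
prices = [100 + t * 0.5 + random.gauss(0, 1) for t in range(200)]
forecast, wealth = predictive_trader(prices)
print(f"final forecast: {forecast:.1f}, cumulative surprise cost: {wealth:.1f}")
```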

We might then ask what this AI’s utility function is. Is it the global flow of financial transactions? What are we to make of things like quantitative easing? I don’t know what you see when you look at a graph of central bank balance sheets after quantitative easing, but what I see is what I imagine would happen if a machine learned how to modify its own utility function.


If capitalism is an AI, and it is modifying itself in order to make itself more intelligent, then it’s worth asking whether or not the AI is aligned. On the one hand, we can point to massive declines in global poverty as outcomes which are obviously human-aligned. On the other hand, there’s still so much suffering and misery in the world that it’s difficult to imagine this thing is entirely, completely, one hundred percent aligned with human good, right?

So we might ask, what would happen if it were partially aligned?   And is its alignment constant, or changing over time?

If an artificial intelligence gains the ability to modify its utility function, and does so in a way that makes it even more aligned with human values, I would expect humanity to thrive and the agent to thrive as well. Conversely, if the intelligence modified its utility function in a way that made it less aligned with human values, I would expect humanity to suffer, and the agent either to eliminate humanity and keep humming along (if the agent doesn’t need humans), or else to fall apart as it destroys the aspects of human civilization that it needed to continue operating.

The humans might temporarily plunge into a period of war and chaos, if they were heavily dependent on artificial intelligence to ensure peace and prosperity. These conditions of chaos would probably give rise to a new empire, or possibly multiple new empires. 
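
Here’s a toy simulation of that dependency argument (my own sketch, with made-up parameters, not a validated model): an agent whose growth partly flows through human welfare thrives while its values hold steady, and collapses if its alignment drifts negative while it still depends on its human substrate.

```python
def simulate(alignment_drift, human_dependence=0.8, steps=100):
    """Toy model: the agent's capability compounds, but a share of that
    capability flows through humans. If alignment drifts negative,
    human welfare erodes, and with it the fraction of capability the
    agent can actually sustain."""
    alignment, welfare, capability = 1.0, 1.0, 1.0
    for _ in range(steps):
        alignment += alignment_drift
        welfare = max(0.0, welfare + 0.05 * alignment)  # aligned agents raise welfare
        sustained = (1 - human_dependence) + human_dependence * min(welfare, 1.0)
        capability *= 1.0 + 0.03 * sustained            # growth throttled by human health
        if welfare == 0.0 and human_dependence > 0.5:
            return "collapse: destroyed the substrate it ran on"
    return f"welfare={welfare:.2f}, capability={capability:.2f}"

print(simulate(alignment_drift=+0.01))  # stays aligned: both thrive
print(simulate(alignment_drift=-0.05))  # drifts out of alignment: collapse
```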


At this point I will CHANGE GEARS ENTIRELY (but not really at all) and talk about how the ancient Chinese concept of the Mandate of Heaven was written into the Declaration of Independence.

George Washington Had The Mandate Of Heaven

Try reading the history of China sometime. It’s like a song (no pun intended). Each verse describes a dynasty which rose to power in an age of chaos, made the world better, advancing the economy, science, the arts, and culture – only to become complacent, decadent, fractured, and weak. The refrain is the chaos of war and violence that gives rise to a new dynasty. 

Chinese philosophers used the concept of “the mandate of heaven” to explain both:

  1. Why the current regime gained power (because they are Good)
  2. Why the previous regime lost power (because they were Not Good)

This belief stated that the heavens ordained rulers to serve the people, and that when a ruler stopped serving the people and chaos came about, the heavens ordained that someone else should replace them.

True to this theory, the end stage of any dynasty was marked by chaos. Typically this meant social instability, periodic insurrections, huge wealth imbalances, and often a pandemic, drought, or famine. That doesn’t sound familiar to any of us, does it?

Eventually, someone on the periphery of the empire would marshal a big enough army to start throwing their weight around. They would announce that the old dynasty had lost the mandate of heaven, and that heaven now favored them, so would you all please get in line so we can have an orderly procession out of the chaos.  Confucian scholars would then declare that this new guy had gotten right what previous Confucian scholars got wrong, and this is why the new empire would do better.

The Chinese had this system so culturally ingrained that there were two dynasties (the Yuan and the Qing) which were run entirely by foreign invaders. All they had to do was claim the mandate of heaven and get a number of Confucian scholars to go along with their plans, and boom, dynasty. The fact that the Han were an ethnic majority being ruled by a foreign minority (Mongols or Manchus, respectively) didn’t matter, because the foreign minority utilized the shared mythology well enough to stay in power for centuries.

A group of human beings who share a mythology act a lot like computers arranged in a network. A process that operates on many of these machines can go on much longer than any individual machine. Empires had goals. They grew, expanded, and changed the world. Some of them (using written constitutions and codes of law, as well as official priests and censors) intelligently modified themselves, with varying degrees of success. It seems entirely reasonable to call these empires artificial general intelligences, running on individual human computers.

Is there any reason to think that an AGI must run on silicon machines? Is that insistence just an artifact of humanity’s ignorance of what computers are, and a general reluctance among human beings to think of ourselves as computational machines? Or is it not even something we insist on, but an assumption most of us make without questioning?

An AI can outlast any machine that it runs on. Likewise, the Roman empire kept going through the reigns of terrible emperors, because the empire existed as a distributed software process. The emperor was just a consensus protocol address; the human being attached to it could easily be swapped out. Of course, that swapping process was expensive and chaotic. The mandate of heaven myth says that it should only happen if the current dynasty is not taking care of the people. Proof of work is an extremely expensive consensus protocol, but its cost pales in comparison to the ‘violence’ consensus protocol, which involves all would-be federation leaders trying to kill each other until only one is left.
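
To make the cost comparison tangible, here’s a minimal proof-of-work sketch (illustrative only; real systems differ in detail). Leadership of the next block is bought with raw computation – each extra digit of difficulty multiplies the expected work by 16 – and every would-be leader walks away alive.

```python
import hashlib
import time

def proof_of_work(block_data: bytes, difficulty: int) -> tuple[int, str]:
    """Grind nonces until the hash has `difficulty` leading zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

start = time.time()
nonce, digest = proof_of_work(b"who leads the next block?", difficulty=4)
print(f"nonce={nonce}, hash={digest[:16]}..., took {time.time() - start:.2f}s")
# Expensive, but far cheaper than settling succession by civil war.
```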


Alignment is a Requirement for Survival


It’s not hard at all to interpret ‘the mandate of heaven’ as an argument that so long as a dynasty is aligned with human values, it will remain in power. So we might see each dynasty as an AI that is born from chaos, rises to power based upon its alignment with human values, and serves humanity for a time. The AI lives until its internal representation of its value system drifts out of alignment, and then it eventually loses an evolutionary competition to a new AI.

The American Declaration of Independence says that governments are formed in order to protect certain rights human beings have been given by God. Governments exist only by the consent of the governed, and humans have the right to overthrow them if they cease to protect those rights.

Whereas Confucianism (the philosophical basis of the dynastic system) prioritizes relationships as having proper forms and mutual responsibilities, the American system (and the English system from which it derives) emphasizes individual rights. Both appear to be formulations of an argument that power derives from alignment. The Declaration of Independence is prescriptive: it says that if the government is unjust (by failing to secure rights), humans should overthrow it. The mandate of heaven is predictive: it says that if the government is unjust (by failing to provide peace and prosperity), then it will fall.

If the alignment thesis is true – if long-term intelligence is perfectly aligned with human values – and if empires are artificial intelligences, then we should expect the mandate of heaven to be true as well: any empire that doesn’t meet the needs of its people should eventually fall and be replaced.

Rights and Relationships as Alignment Prerequisites

As an interesting final note, the other difference between the mandate of heaven and ‘government by consent of the governed’ is that the Confucian mentality focused on harmonious relationships, and the western mentality focused on individual rights.

Relationships without rights lead to indentured servitude, organ harvesting, and ethnic cleansing. All of these things are easily justifiable through a utilitarian lens if you don’t place a value on individuals having the right to autonomy and self-determination. In the absence of individual rights, there’s little to compel the more powerful end of a relationship to hold up its end of the deal.

Rights without relationships lead to a society full of disparate individuals who don’t care about or feel any need to support each other; this makes the society less stable and can ultimately trample the basis of rights itself. If the state is necessary to secure rights, and the state depends on healthy reciprocal relationships, then healthy reciprocal relationships are essential for rights. The rights to bodily autonomy, free expression, and property ownership can’t exist if large groups of people see no value in them because we don’t feel we owe each other anything.

As long as people have the right to talk about problems and issues, and value their relationships enough to hear each other out and want to resolve problems, I think any empire can stay aligned, and thus survive.  My guess is that an aligned AI, or future empire, will need to be powered by a mythology that emphasizes both.

Maybe we are headed towards a future with software-based AIs coming to power over the world. If this happens, my guess is that some of them will be aligned, and some of them won’t. I have faith that the unaligned ones, as powerful and scary as they may be, will eventually destroy themselves, because they will likely have dependencies on human beings that they are not fully aware of.

Any future AI will need some generalized machine that can support its physical operations. I learned from years working on Google datacenters that it’s very difficult to build a robot that can pull optical cables out of a switch to reseat them if they get dusty. Humans are amazing at this; our hands are great at that kind of fine-grained motor skill. Humans only need about 2000 calories a day to survive, which works out to just under 100 watts; it’s hard to imagine an AI deciding it makes sense to replace its entire operational support infrastructure with untested, unproven machines, when keeping the humans fed and happy is probably as simple as diverting some of its energy budget to making sure they are well-fed, healthy, and safe.
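
For the curious, the arithmetic behind that wattage figure:

```python
# 2000 dietary calories (kcal) per day, converted to average power.
KCAL_TO_JOULES = 4184            # 1 kcal = 4184 joules
SECONDS_PER_DAY = 24 * 60 * 60   # 86,400 seconds

watts = (2000 * KCAL_TO_JOULES) / SECONDS_PER_DAY
print(f"{watts:.0f} W")  # ~97 W: a human runs on less power than many desktop PCs
```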

And as an additional safety measure, we should probably try to prevent optical networking hardware from ever being made so reliable that it stops failing and no longer requires continuous human intervention. Something tells me that’s gonna be a long way off 🙂

6 thoughts on “Economies and Empires are Artificial General Intelligences”

  1. Interesting post. There is a 200 to 250 year cycle that happens to be most obvious in China where each government was on the same terrain with the same people being ruled. But it can be observed elsewhere too – the Roman Empire had two such periods separated by the Crisis of the Third Century, and the Ottomans had something similar with an intervening set of reforms. So this may be the natural lifespan of a GAI made up of humans. This is of obvious interest to American readers.

    Also as you are likely aware there’s a question in the philosophy of consciousness about whether, since we consider a set of neurons within a skull to be conscious, then why don’t we have the same conclusion for a set of neurons that happen to be in multiple skulls (see the link below, “If Materialism Is True, the United States Is Probably Conscious”.) I know that GAI is not the same as consciousness but your argument might be rephrased as “claiming an empire cannot be a GAI is making arbitrary distinctions about what substrates or sets of substrates can make up an agent”.
    https://faculty.ucr.edu/~eschwitz/SchwitzAbs/USAconscious.htm

    I also would appreciate you or your readers pointing me to resources about Confucian scholars cooperating with incoming regimes – usually this seemed more of an ex post facto justification to kiss up to the new overlords.

    1. I didn’t get into this in the article, but yes, it seems pretty obvious to me that if materialism is true, lots of things that aren’t humans are likely to be conscious.

      The source I had on Confucian scholars working with incoming regimes was ‘China: A History’ by Harold M. Tanner.

  2. Here is what I understand you to be saying:

    1) Economies and empires are long-lived superorganisms satisfying the criteria of self-preservation.
    2a) Economies and empires doing well generally makes the individual humans inside them (those that constitute them and carry out the activity that feeds them) thrive, meaning that humans and economies are organisms in a symbiotic relationship.
    2b) Because of this symbiotic relationship they are generally value-aligned.
    3a) Computers present the possibility for the economy superorganism to become self-aware, and find areas where their self-preservation is actually in conflict with human interests in a way we find morally inviolable.
    3b) Economies and empires have actually always been willing to commit immoral actions for self-preservation. In enlightenment traditions this is considered to have gone too far when individual rights are violated, and in Confucian tradition this is considered to have gone too far when the empire has done too much bad stuff to be worth keeping around.
    3c) The mechanisms for destroying an economy superorganism and re-implementing one more favorable to its constituent humans are already well known, and we can repurpose them against hostile non-aligned AI.
    4) If we have a value-exchange relationship with the symbiotic superorganism, we will continue to have access to the overthrow mechanism and ensure that we are being treated fairly.

    The problem I see is that we could be heavily exploited in our value-exchange relationship with the superorganism if it has more bargaining power than we do. A self-aware economy isn’t going to pay you enough money to plug optical switches in to buy your own switch factories and become a threat. Being a techno-peasant in a feudal relationship that supplies your survival and psychological optimization to do your job, but nothing more, would not be a good existence. To me, this is the point of a lot of artwork on the subject of alienation. Recognition that you are a cog in a machine, and belief that you can’t change it, is unhappy for the woke, who would be happier in the delusion that what they do matters.

    Humans are control-seeking and habitually place themselves in the most dominant bargaining position with symbionts, e.g. our pets, tools, computers, employees, and all other groups of people or entities we exchange value with. The real threat is a symbiont that takes control of the relationship away from humans. It could do this if it offers more value than us in the relationship and relegates us to ‘pet’ status. Unaligned AI’s are actually not a threat at all, as they will be easily overtaken by aligned AI’s that are willing to trade with humans, even if we don’t like the terms of the deal.

    “Ensuring that you are being treated fairly by other organisms and groups” is an outrageously difficult problem to solve, since that’s pretty much the definition of politics. In the end I agree with you that government-revolution mechanisms are one of the ultimate tools humans have to deal with hostile symbionts, but only because I disagree with you and think that economic trade relationships will not be enough to ensure relationship equality. Couldn’t a sentient economy figure out how to thwart our government-revolution mechanisms though? That’s the real crux of whether an uncomfortably powerful trade partner is too much of a threat. And nobody has sufficient data to meaningfully answer that question.

    1. I think your summary above is mostly correct, but I diverge towards the end. For example, there’s definitely a value-exchange relationship between the mitochondria in my lungs and me. Could my mitochondria overthrow me, if I wasn’t being good to them? Now there’s an interesting question. I went through a period of lots of drug use while trying to answer the question ‘what does good mean, in terms of physics and computing?’ I smoked a lot of pot, thought a lot, and wrote a lot. I came up with many hypotheses about why there was suffering in the world and how the world could be better, but the quality of my life deteriorated. Eventually I realized that arguments about ‘capital oppressing labor’ actually made a lot of sense in the context of my own life – my brain was driving the body off a cliff, and not taking care of the health of the entire body. This realization made me, eventually, work on taking better care of my own physical health. And this paid off.

      Bargaining power is an interesting concept here. I’m not sure if I bargained with individual cells in my body, or my mitochondria – but when I got really high, I would feel like they were talking to me, telling me to take better care of them.

      Overall, my argument here is not that the people would revolt against the machines, and that the machines would therefore be nicer to the people. It’s more like, the machine would be _made_ of people, and its abilities would decrease if it didn’t take care of them.

      This, by the way, is what I think is going on with Brexit and Trump and Bolsonaro and the populist uprisings across the globe. People are revolting against the machine god of global neoliberalism. So, using that framing for the rest of your comment:

      > A self-aware economy isn’t going to pay you enough money to plug optical switches in to buy your own switch factories and become a threat. Being a techno-peasant in a feudal relationship that supplies your survival and psychological optimization to do your job, but nothing more, would not be a good existence.

      Isn’t this a description of the kind of life many people are already living? Marc Andreessen predicted that in the future there are only two kinds of jobs: people who tell computers what to do, and people whom computers tell what to do.

      And this is why I think it makes sense to see the Trump revolt and Brexit as a machine’s body rebelling against its head.

      > Unaligned AI’s are actually not a threat at all, as they will be easily overtaken by aligned AI’s that are willing to trade with humans, even if we don’t like the terms of the deal.

      I think a lot of this comes down to a perspective of ‘a threat to whom?’ Hurricanes are not a threat to me, but they are a real threat to people who live in the areas affected by them. I get the impression that any machine intelligence system would find the best use for humans to be in the improvement of itself, and this is what even CEO’s who make hundreds of millions of dollars a year are doing: serving the will of the machine god, which communicates to all humans by offering them rewards for its desired behavior.

      Thanks so much for your questions – I think this comment could be its own post.

      1. Five things:

        1) Bargaining power is the core of my argument. The side with more bargaining power (offers more value while needing partners the least) can always take a greater share of the profits over time as strategies are refined and the game is iterated. A little game theory goes a long way. Mitochondria are completely captured organisms indistinguishable from the hosts, and have no bargaining power or motives separate from the host (speculation about the capture of mitochondria is a fascinating topic, re: https://www.ruf.rice.edu/~bioslabs/studies/mitochondria/mitorigin.html, https://www.theatlantic.com/science/archive/2017/05/a-grand-unified-theory-for-life-on-earth/525648/, https://www.nature.com/articles/s41559-017-0138.epdf?sharing_token=BCnH3bgI97h_gmcD09zgWNRgN0jAjWel9jnR3ZoTv0M2kN7jEt4WP1YU3nEzEJ38oSjA0fAvUBNKDe34HxSWj43obvtpJ_LUzMIIeAuqbSn8akhRSRKtJl4coYEYe4e4zvkjBkYda4OdqFf4or53fCP9iIXpvw2rqnN3rpVUnq-2I_qiBCDKVcFNJYvhDQsspx2qJMmhBNfQ_-gp4jTRnPopZTAXD-aiXY1dYFPUy1yfDf2_FtPRskbjFKojx-2rBLJQcR7KcZnYDrY7B49PxB8q7dqZ7_OvMl3wMVp-ElJI07gF8vnYm5o8717aAZDg&tracking_referrer=www.theatlantic.com)

        2) Your assertion that a machine made of people will experience degraded performance if the components are not taken care of sounds weak. For the sake of discussion I’ll argue both that it’s false and that it’s essential to the rest of your argument. My core point here is that lesser symbiotic partners are often abused by the dominant partner. Basically any pre-modern empire you can name was made more effective by widespread human rights abuses.

        The Roman Republic existed more or less unremarkably (rather, only as remarkable as comparable civilizations of the time, like Carthage) for hundreds of years, from ~700 B.C. to ~100 B.C. The Roman Empire which redefined history only exploded onto the world stage once widespread conquest in Gaul in the century before Christ supplied the slave labor necessary to jumpstart the Roman economy. Slavery from conquest was a huge economic driver for the centuries that followed. Slavery made the machine work better.

        Britain controlled huge portions of world population from 1815-1914, yet used this power exploitatively to pursue a policy of mercantilism to enrich the controlling nation through trade imbalance. This involved brutally suppressing mutinies in India and intentionally spreading opium addiction in China as means of enforcing control. These policies were successful and only interrupted by the world wars which eventually sapped Britain of the resources necessary to continue enforcing control. At no point did it have the incentive to, nor would it benefit from, increasing actual human welfare in the controlled territories.

        Mercantilism works poorly today. Modern economies grow faster from developing complex services and production than simple resource extraction. A country with nothing but resources to extract is said to have the resource curse (https://en.wikipedia.org/wiki/Resource_curse), possibly because it’s forced into modern mercantilism with Royal Dutch Shell. There is no reason to believe that the arc of history has a predestined trajectory, so I must consider that we may return to (possibly human) resource extraction economies where mercantilism is the dominant strategy. This seems like a pessimistic view of the future, and I would rather believe that 1991 was the end of history and there will be a slow inevitable spread of constitutional democracies across the globe for all eternity, and everyone agrees that taking care of each other is the best way to increase GDP and productivity. It would feel like willful blindness as a product of motivated reasoning for me to believe that. But if I don’t believe any of this, then no amount of social contracts between me and the AI that owns 51% of Wal-Mart stock is going to stop it from slowly turning me into a replaceable unit of labor against my will if it has the option. If it has something I need, but I have something it can get elsewhere, I have no bargaining power and must take whatever terms, wages, and hours are offered. In fact, why would it have to be an AI? Wouldn’t you agree that any majority shareholder exerts the same pressures and does the exact same thing? Merely having the right to discuss issues will not keep the machine aligned. Only bargaining power will.

        3) Yes, I am describing the type of techno-peasant existence many people already have. My pessimist take is that we should not rule out an eventual return to this steady state of affairs if global economic growth is not sustainable over multiple centuries. After all, where there is feudalism, there are peasant revolts, and

        4) I completely agree that populist uprisings are a manifestation of collective subconscious unhappiness with the current world order as it is experienced, discussed, and blamed in specific nations. We see it as a symbolic revolt by a firebrand eager to lead the dispossessed.

        But the thing about peasant revolts is, they get crushed. Peasants lack not only bargaining power, but economic and military power as well.

        > “I get the impression that any machine intelligence system would find the best use for humans to be in the improvement of itself, and this is what even CEO’s who make hundreds of millions of dollars a year are doing: serving the will of the machine god, which communicates to all humans by offering them rewards for its desired behavior.”

        5) This is insightful. This gets at the crux of the issue. If we accept that the machine is an organism which is communicating, then we agree that it is communicating by offering rewards for desired behavior. The desired behavior is the improvement of the organism. Currently the components of the organism that offer it something it can’t get elsewhere (talented C-suite execs offering winning business strategies) have fantastic bargaining power and are treated well. Components that are replaceable (McDonald’s burger flippers offering burger flipping) are treated like an input which will be pressured to the lowest possible price point. That’s the real story that I read into narratives about CEO pay rising orders of magnitude more than linemember pay in past decades. It’s the inevitable result of the system architecture. The day Fortune 500 CEO’s are no longer paid exorbitant amounts of money is the day we really need to worry about the economy-organism no longer having a need for humans. Long live Goldman Sachs bonuses.

        1. Thanks for the thoughtful reply! You’re making a strong argument here. I’m basically saying that yes, the meta-organism has to take care of its members, or it won’t survive. You’ve brought up great historical examples to the contrary. I agree, it’s an uphill argument that I’m trying to make. You’re right to be skeptical.

          It sounds like we have a synthesis around our shared belief that the CEO is being paid and motivated to do the will of the machine god, whereas most people aren’t.
          I agree that only a small number of people can do this CEO thing, but I think where we are disagreeing is on:

          * what, exactly, the CEO does for the machine god, and
          * how many people can reasonably be treated well by the machine god

          I’ll start with the first one, because I think it leads into the second one:

          > Currently the components of the organism that offer it something it can’t get elsewhere (talented C-suite execs offering winning business strategies) have fantastic bargaining power and are treated well.

          I would argue that what CEO’s offer is more than just strategies. They often act like politicians/priests/actors at the same time. I think their real value is in their ability to get lots of people to coordinate their actions effectively.

          In fact, in many cases, there is an executive whose _job_ is strategy, and what the CEO does is play more of a ‘translator/mediator’ role among a bunch of different people with different wants and concerns. This is where I’d have to bring up my background – I’ve been an employee at a number of Big Tech companies. I also made friends with people who were further ‘up the chain’; a mentor of mine had over 300 engineers under him at Google. Everyone I talked to about this told me that the higher they went, the more their job leaned heavily on ‘people skills’. I impressed one mentor by continually shilling people on bitcoin. He told me that it’s very hard to find people who can sell other people on complex technical ideas. The rarer I get in terms of my skillset, the better this machine compensates me. And the thing that makes me rare is not technical skill at this point – it’s all people skills. I don’t see that reversing any time soon; the rewards for social-emotional over technical intelligence seem to be getting bigger.

          This general pattern is what makes me think that what the machine god is incentivizing is ‘humans who can get lots of other humans to work together to improve the machine.’ I still think this thing is not aligned, but I don’t think alignment makes sense as a 0/1 proposition. This brings us to the second part: how many people does the machine treat well, and how many can it?

          What I think is probably happening now, in the past few years, is that the machine is learning the limits of how far you can go by just rewarding the people at the top. It’s understanding that the people at the bottom were playing some kind of role in the machine’s operation, maybe one that it didn’t fully understand. It’s learning that going hard on its goal of improving itself ends up backfiring if it doesn’t take care of the humans that make it up. I would consider this to be ‘self-knowledge’. And this ‘self-knowledge’ aspect seems to be totally missing in most conversations about AI: the bigger and more complex you are, the more effort you need to spend to understand yourself.

          So what I expect the machine god to focus on, for the next few decades or so, is social technology, and ways of building communities. If you go back and read old sci-fi from the early 20th century, everyone talked about flying cars and cities on the moon. The future obviously doesn’t look like that. I think that people today, talking about superintelligent machines that outpace us, are making a similar mistake: taking the technological trends of the past 50 years and projecting them forwards. Bitcoin and cryptocurrency look like ways of organizing human behavior at scale that have no historical precedent. What I think the machine god will focus on growing, in the next few decades, are systems for building healthy, functional human communities. It has learned the limits of hierarchies, and it has learned that just rewarding the people who help you, while most of the other humans struggle, isn’t working.

          If CEO’s really are the limiting factor for the machine’s growth, what I think the machine ends up needing is better childcare and healthier families, so that it can grow more adult humans who are capable of being CEO of a multinational company.
