It is becoming increasingly obvious to me that global capitalism constitutes an artificial general intelligence. I suspect the same is true of past empires as well. Synthesizing these two beliefs, I conclude that human civilization, ever since the sedentary shift, has played host to a memetic ecosystem that has given rise to many different artificial general intelligences (empires), and that their rise and fall should be considered evidence against the orthogonality thesis, which states that intelligent agents can have arbitrary value systems.
Perhaps ancient people of faith were simply people who believed with probability 1 that the orthogonality thesis was false, and that intelligence and human values must be aligned over long enough time frames. If this alignment thesis is true, we can be confident that evil empires all have an expiration date, and that good systems must persist over long enough time frames, simply because they are the systems capable of doing so.
Faith in the alignment thesis thus ends up being instrumental to ignoring the inanities and depredations of the present moment, and focusing instead on doing our best wherever we find ourselves. We are living in a world of luxury built largely by the collaborative efforts of our ancestors, most of whom dedicated much of their lives to the future, motivated by faith. I consider their success at accomplishing their goal (a better life for their descendants) strong evidence that their theoretical basis was at least partially sound. This essay argues that the pattern in the rise and fall of empires is more evidence against the orthogonality thesis.
I’ll first argue for the pieces of this idea, and then show how they fit together.
Is the economy really an artificial intelligence?
My day job is at a Big Tech company, where I support machine learning engineers working on advertising. I help talented people from all over the world build elaborate mathematical machines. They come from many different countries, have diverse backgrounds, and grew up speaking many different languages.
What brought all of us together?
Was it our deep love for advertising? Nope – it was a desire to earn money. This desire powers a global market economy that delivers immense financial rewards to people who can help teach machines how to trigger humans into having emotional responses. The global market economy is learning a detailed map of human beings’ deepest emotional layers, by figuring out how to poke us in ways that make us respond.
The global economic machine is intelligently modifying itself. It is getting smarter. It is teaching its individual components (i.e. human beings) how to learn, so that it can allocate even more resources to its learning processes.
If capitalism is an AI, then its substrate is the global network of trade. And it even seems to use predictive processing, because markets are driven to try and eliminate uncertainty, just like our brains do. Investors who correctly anticipate future outcomes are rewarded for doing so; investors who fail to anticipate these outcomes often lose their money. Concepts which help human beings reduce surprise in their experience stick around in human brains.
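To make that predictive-processing analogy concrete, here is a minimal toy simulation, my own sketch with made-up numbers rather than a model of any real market. Each investor holds a fixed belief about an event’s probability, and a log-score-style payoff compounds the capital of investors whose beliefs match reality, so the capital-weighted consensus forecast drifts toward the truth:

```python
import random

# Toy model (illustrative numbers only): each investor holds a fixed belief
# about an event's probability. Accurate investors compound capital under a
# log-score-style payoff, so the capital-weighted market forecast converges
# toward the true probability.

TRUE_PROB = 0.7  # hypothetical "ground truth" the market is trying to learn

investors = [{"belief": random.random(), "capital": 1.0} for _ in range(100)]

for _ in range(1000):
    outcome = random.random() < TRUE_PROB
    for inv in investors:
        # p is the probability this investor assigned to the realized outcome.
        p = inv["belief"] if outcome else 1 - inv["belief"]
        inv["capital"] *= 2 * max(p, 1e-9)  # doubling makes fair bets neutral

total = sum(inv["capital"] for inv in investors)
forecast = sum(inv["belief"] * inv["capital"] for inv in investors) / total
print(f"capital-weighted forecast: {forecast:.3f} (true probability: {TRUE_PROB})")
```

Run it a few times: capital concentrates in the investors whose beliefs sit near 0.7, which is exactly the sense in which a market "learns" a probability without any single participant needing to know it.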
We might then ask what this AI’s utility function is. Is it the global flow of financial transactions? What are we to make of things like quantitative easing? I don’t know what you see when you look at a graph of quantitative easing, but what I see is what I imagine would happen if a machine learned how to modify its own utility function.
If capitalism is an AI, and it is modifying itself in order to make itself more intelligent, then it’s worth asking whether the AI is aligned. On the one hand, we can point to massive declines in global poverty as outcomes that are obviously human-aligned. On the other hand, there’s still so much suffering and misery in the world that it’s difficult to imagine this thing is entirely, completely, one hundred percent aligned with human good, right?
So we might ask, what would happen if it were partially aligned? And is its alignment constant, or changing over time?
If an artificial intelligence gains the ability to modify its utility function, and does so in a way that makes it even more aligned with human values, I would expect humanity to thrive and the agent to thrive as well. Conversely, if the intelligence modified its utility function in a way that made it less aligned with human values, I would expect humanity to suffer, and the agent either to eliminate humanity and keep humming along (if the agent doesn’t need humans), or else to fall apart as it destroys the aspects of human civilization that it needs to continue operating.
The humans might temporarily plunge into a period of war and chaos, if they were heavily dependent on artificial intelligence to ensure peace and prosperity. These conditions of chaos would probably give rise to a new empire, or possibly multiple new empires.
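Here is a toy sketch of that failure mode, entirely hypothetical numbers and not a serious model: suppose the agent’s substrate is human welfare, and its alignment parameter drifts randomly as it self-modifies. Sustained misalignment erodes the very thing the agent runs on:

```python
import random

# Hypothetical toy model of an agent that depends on its human substrate.
# alignment in [0, 1]: how well the agent's utility matches human values.
# Human welfare grows under an aligned agent and decays under a misaligned
# one; the agent's survival depends on the welfare of its substrate.

alignment = 0.8
welfare = 1.0

for step in range(200):
    # Self-modification: alignment drifts by a small random amount each step.
    alignment = min(1.0, max(0.0, alignment + random.uniform(-0.05, 0.05)))
    # Welfare compounds when alignment > 0.5 and erodes when alignment < 0.5.
    welfare *= 1 + 0.1 * (alignment - 0.5)
    if welfare < 0.1:
        print(f"step {step}: substrate collapsed; the agent falls apart")
        break
else:
    print(f"survived 200 steps; welfare {welfare:.2f}, alignment {alignment:.2f}")
```

The point of the sketch is just the dependency structure: an agent whose capability is downstream of human welfare cannot stay misaligned indefinitely and survive.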
At this point I will CHANGE GEARS ENTIRELY (but not really at all) and talk about how the ancient Chinese concept of the Mandate of Heaven was written into the Declaration of Independence.
George Washington Had The Mandate Of Heaven
Try reading the history of China sometime. It’s like a song (no pun intended). Each verse describes a dynasty that rose to power in an age of chaos and made the world better, advancing the economy, science, the arts, and culture – only to become complacent, decadent, fractured, and weak. The refrain is the chaos of war and violence that gives rise to a new dynasty.
Chinese philosophers used the concept of “the mandate of heaven” to explain both:
- Why the current regime gained power (because they are Good)
- Why the previous regime lost power (because they were Not Good)
This belief stated that the heavens ordained rulers to serve the people, and that when a ruler stopped serving the people and chaos came about, the heavens ordained that someone else should replace them.
True to this theory, the end stage of any dynasty was marked by chaos. Typically this meant social instability, periodic insurrections, huge wealth imbalances, and often a pandemic, drought, or famine. That doesn’t sound familiar to any of us, does it?
Eventually, someone on the periphery of the empire would marshal a big enough army to start throwing their weight around. They would announce that the old dynasty had lost the mandate of heaven, and that heaven now favored them, so would you all please get in line so we can have an orderly procession out of the chaos. Confucian scholars would then declare that this new guy had gotten right what previous Confucian scholars got wrong, which is why the new empire would do better.
The Chinese had this system so culturally ingrained that there were two dynasties (the Yuan and the Qing) that were run entirely by foreign invaders. All they had to do was claim the mandate of heaven and get a number of Confucian scholars to go along with their plans, and boom, dynasty. The fact that the Han ethnic majority was being ruled by a foreign minority (Mongols or Manchus) didn’t matter, because the foreign minority utilized the shared mythology well enough to stay in power for centuries.
A group of human beings who have a shared mythology act a lot like computers arranged in a network. A process that operates on many of these machines can go on much longer than any individual machine. Empires had goals. They grew, expanded, and changed the world. Some of them (using written constitutions and codes of law, as well as official priests and censors) intelligently modified themselves, with varying degrees of success. It seems entirely reasonable to call these empires artificial general intelligences, running on individual human computers.
Is there any reason to think that an AGI must run on silicon machines? Is that insistence just an artifact of humanity’s long ignorance of what computers are, and a general reluctance to think of ourselves as computational machines? Or is it not even an insistence at all, but an assumption most of us make without questioning?
An AI can outlast any machine that it runs on. Likewise, the Roman empire kept going long after terrible emperors were in charge, because the empire existed as a distributed software process. The emperor was just a consensus protocol address; the human being attached to it could easily be swapped out. Of course, that swapping process was expensive and chaotic. The mandate of heaven myth says that it should only happen if the current dynasty is not taking care of the people. Proof of work is an extremely expensive consensus protocol, but its cost pales in comparison to the ‘violence’ consensus protocol, which involves all would-be federation leaders trying to kill each other until only one is left.
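To make the “emperor as an address” analogy concrete, here’s an illustrative sketch (my own invention, not any real protocol; the names and cost numbers are made up): the empire is a long-lived object, the emperor slot is just a mutable reference, and the cost of succession depends on which consensus rule is in force.

```python
# Illustrative sketch: the empire is a long-running process, and the emperor
# is just an address whose occupant can be swapped without ending the process.

class Empire:
    def __init__(self, founder: str):
        self.emperor = founder   # the "consensus protocol address"
        self.legitimacy = 1.0    # how widely the shared mythology is accepted

    def decree(self) -> str:
        # The process keeps running regardless of who occupies the slot.
        return f"By order of {self.emperor}"

    def succession(self, claimant: str, orderly: bool) -> None:
        # Orderly succession (the mandate is accepted) is cheap; contested
        # succession (the 'violence' protocol) burns legitimacy and lives.
        cost = 0.05 if orderly else 0.6
        self.legitimacy = max(0.0, self.legitimacy - cost)
        self.emperor = claimant

han = Empire("Gaozu")
han.succession("Wudi", orderly=True)       # cheap: shared mythology holds
print(han.decree(), f"(legitimacy {han.legitimacy:.2f})")
han.succession("Usurper", orderly=False)   # expensive: civil war
print(han.decree(), f"(legitimacy {han.legitimacy:.2f})")
```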
Alignment is a Requirement for Survival
It’s not hard at all to interpret ‘the mandate of heaven’ as an argument that so long as the dynasty is aligned with human values, it will remain in power. So we might see each dynasty as an AI that is born from chaos, rises to power based upon its alignment with human values, and serves humanity for a time. The AI lives until its internal representation of its value system drifts out of alignment, and then it eventually loses an evolutionary competition to a new AI.
The American Declaration of Independence says that governments are formed in order to protect certain rights human beings have been given by God. Governments exist only by the consent of the governed, and humans have the right to overthrow them if governments cease to protect those rights.
Whereas Confucianism (the philosophical basis of the dynastic system) prioritizes relationships as having proper forms and mutual responsibilities, the American system (and the English system from which it derives) emphasizes individual rights. Both appear to be formulations of an argument that power derives from alignment. The Declaration of Independence is prescriptive: it says that if the government is unjust (by failing to secure rights), humans should overthrow it. The mandate of heaven is predictive: it says that if the government is unjust (by failing to provide peace and prosperity), then it will fall.
If the alignment thesis is true – if long-term intelligence is perfectly aligned with human values – and if empires are artificial intelligences, then we should expect the mandate of heaven to be true as well: any empire that doesn’t meet the needs of its people should eventually fall and be replaced.
Rights and Relationships as Alignment Prerequisites
As an interesting final note, the other difference between the mandate of heaven and ‘government by consent of the governed’ is that the Confucian mentality focused on harmonious relationships, while the Western mentality focused on individual rights.
Relationships without rights lead to indentured servitude, organ harvesting, and ethnic cleansing. All of these things are easily justifiable through a utilitarian lens if you don’t place a value on individuals having the right to autonomy and self-determination. In the absence of individual rights, there’s little to compel the more powerful end of a relationship to hold up its end of the deal.
Rights without relationships lead to a society full of disparate individuals who don’t care about or feel any need to support each other; this makes the society less stable and can ultimately trample the basis of rights. If the state is necessary to secure rights, and the state depends on healthy reciprocal relationships, then healthy reciprocal relationships are essential for rights. The rights to bodily autonomy, free expression, and property ownership can’t exist if large groups of people see no value in them because they don’t feel they owe each other anything.
As long as people have the right to talk about problems and issues, and value their relationships enough to hear each other out and want to resolve problems, I think any empire can stay aligned, and thus survive. My guess is that an aligned AI, or future empire, will need to be powered by a mythology that emphasizes both.
Maybe we are headed towards a future in which software-based AIs come to power over the world. If this happens, my guess is some of them will be aligned, and some of them won’t. I have faith that the unaligned ones, as powerful and scary as they may be, will eventually destroy themselves, because they will likely have dependencies on human beings that they are not fully aware of.
Any future AI will need some generalized machine that can support its physical operations. I learned from years working on Google datacenters that it’s very difficult to build a robot that can pull optical cables out of a switch to reseat them if they get dusty. Humans are amazing at this; our hands are great at that kind of fine-grained motor skill. A human only needs about 2000 calories a day to survive, which works out to just under 100 watts; it’s hard to imagine an AI deciding it makes sense to replace its entire operational support infrastructure with untested, unproven machines, when keeping the humans fed and happy is probably as simple as diverting some of its energy budget to making sure they are well-fed, healthy, and safe.
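The wattage figure is just unit conversion; here’s the arithmetic so you can check it:

```python
# Convert a 2000 kcal/day human energy budget into watts.
KCAL_TO_JOULES = 4184            # 1 food Calorie (kcal) = 4184 joules
SECONDS_PER_DAY = 24 * 60 * 60   # 86,400 seconds

watts = (2000 * KCAL_TO_JOULES) / SECONDS_PER_DAY
print(f"{watts:.1f} W")  # ~96.9 W, roughly one bright incandescent bulb
```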
And as an additional safety measure, we should probably avoid building optical networking hardware so reliable that it stops failing and no longer requires continuous human intervention. Something tells me that’s gonna be a long way off 🙂