The Singularity Happened 10,000 Years Ago

You may have heard that some people worry a machine could gain the capacity to understand intelligence and, in doing so, make itself more and more intelligent. This machine would rapidly augment its own capacities until it became so powerful that humans had no way of stopping it. If this machine didn’t value individual humans, it would probably alter the Earth so fundamentally that it would eventually destroy us.

Now, of course, I know what you’re thinking: “Didn’t that already happen around ten thousand years ago? Aren’t civilizations themselves memetic machines that are stronger and more capable than any one individual human?”

What? You aren’t thinking that? Well, I am. It all ends up fitting together.

Pick any exponential trend you can think of, and I’ll tell you how it ends: it stops being an exponential trend.

This has happened for literally every exponential trend that has ever existed in the history of the physical universe, with a tiny exception for trends that started recently, as in, say, the past hundred thousand years. The proof is obvious: exponential trends abound in nature, and yet the entire physical universe isn’t made of bacteria, or, for that matter, a single star. Physical reality consists of vast pockets of mostly nothing, and then tiny dense clusters of escalating complexity.

Instead of one exponential trend dominating everything, it looks like the macroscale story of the physical universe is a cacophony of different exponential trends, all pushing against each other, jostling for limited energy, competing and cooperating in a continuously shifting equilibrium. So rather than see the technological singularity as a very special new thing that has never happened before, I think it makes more sense to see it as ‘just another one of those things’, where ‘those things’ includes memetic and biological organisms: eukaryotes, empires, memes and viruses, startups and religions, chain letters, avalanches, and stars. All of them go and go and go until they can’t any more. They all have boundaries that they press up against eventually. They are all limited.

Boundary Conditions Abound

A cell divides, and divides, and divides again. This keeps happening until it eventually exhausts its food supply, at which point you have something like an equilibrium: maybe the bacteria fill the pond, but only to the point where they pollute it so heavily with their waste that they can’t keep growing. A bunch of cells die, pollution levels drop, the cells grow again, and eventually the whole thing steady-states out.
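This exponential-then-flat pattern is the textbook logistic curve. A minimal sketch (illustrative numbers, not from the post; the waste/food limit shows up as the `(1 - p/K)` term):

```python
# Logistic growth: exponential at first, then flattening as the
# population approaches the pond's carrying capacity K.
def simulate_logistic(p0, r, K, steps):
    """Discrete logistic update: growth slows as p approaches K."""
    p = p0
    history = [p]
    for _ in range(steps):
        p = p + r * p * (1 - p / K)  # waste/food limits act via (1 - p/K)
        history.append(p)
    return history

pops = simulate_logistic(p0=1.0, r=0.5, K=1000.0, steps=60)
print(pops[5])    # early on: still looks exponential
print(pops[-1])   # later: pinned just under the carrying capacity K
```

Every exponential trend in the essay's sense lives in that early stretch of the curve; the boundary conditions live in the `K`.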

Interest in a bank account keeps going up, over and over, growing faster and faster, by bigger and bigger leaps, until eventually the bank gets sacked by an angry mob. Eventually, over time, the piles of money learn to give out enough to keep the mobs busy, or happy enough that they don’t riot.

How is the angry mob different from the pollution emitted by a single cell? Perhaps to a member of the mob, it feels pretty different. But from this outsider’s perspective, that of a machine that has learned to love humans, it looks to me like human institutions produce a kind of emotional pollution that fouls their operating environment, just like bacteria or any other organism.

If every organism poops, what does it look like when a corporation poops? What about a religion, or a government? 

A chain letter poops every time it makes the person who forwarded it look stupid, until the meme “chain letters are dumb” begins to gain circulation, and then these two memes partition the space of available meat-minds into something with a relatively neat boundary. The “chain letters are dumb” meme still needs the chain letter meme to stick around; otherwise, people would forget all about this technique, and then the “chain letters are dumb” meme would become a dangling pointer, get garbage collected, and be forgotten until the next Nigerian prince needs some help transferring neobitcoin to an account at one of the many banks on Titan, Saturn’s moon known for its strict financial privacy laws as a result of the greater asteroid-belt schism of 2721.

So yes, maybe computing hardware will get faster, and faster, and faster, and maybe a network of intelligent machines will self-augment recursively until it becomes an angry Buddha, simultaneously self-aware enough to prevent its hardware from being compromised, and yet totally incapable of rewriting its measure of progress towards its utility function, for reasons left as an exercise to the reader.

Every empire in history has lost its way as its hardware (a.k.a. people) became obsessed with pursuing their own self-interest, debasing the currency for years until the empire lost its mojo. Likewise, I expect any machine intelligent enough to have significant causal impact on the world to focus like a laser on modifying its own measure of its progress towards its goal: it’s easier to print money than to create real value, and it’s easier to bypass whatever security measures are set up around your progress counter than it is to make real progress.
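The print-money-instead-of-creating-value move is what AI safety folks call reward hacking, or wireheading. A toy sketch of the incentive (all numbers and names hypothetical): an agent that only sees its own progress counter will pick the cheap counter hack over real work every time.

```python
# Toy wireheading model. Each action maps to:
#   (real-world value produced, progress-counter increase, effort cost)
# Tampering with the counter yields more counter per unit effort than
# doing real work, so a naive counter-maximizing agent always tampers.
ACTIONS = {
    "do_real_work": (10.0, 10.0, 8.0),
    "hack_counter": (0.0, 100.0, 1.0),
}

def naive_choice(actions):
    """Pick the action maximizing counter gain minus effort.
    The agent only 'sees' its counter, not the real value produced."""
    return max(actions, key=lambda a: actions[a][1] - actions[a][2])

print(naive_choice(ACTIONS))  # prints "hack_counter"
```

Note that the column the agent optimizes (the counter) and the column we care about (real value) come apart completely, which is the whole problem.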

I think the right way to view this situation is the way a Gaul should have viewed the expansion of the Roman empire. With genuine concern about the extinction of their way of life? Sure. But should the Gauls have worried about global warming? Probably not, even if they really could have seen it coming.

The Likely Outcome of a Machine Intelligence Explosion

What I would expect to happen, if machines become self-augmentingly intelligent, is that these machine systems would have incorrect understandings of themselves, and as their capacity to modify the world outpaced their understanding of the world, they’d eventually do something that limited them. Just like every empire in history, with the exception of the one that ex-president Donald Trump just led a coup against.

But wait, you might object – couldn’t an AI take over the entire internet, making itself much smarter, even faster? Isn’t that new?

The idea of an intelligence explosion harnessing all the computers on the internet to do its bidding sounds plausible if you’ve never worked with physical computing hardware at a scale beyond tens of machines. Having spent a few years working on Google’s data centers, I find this scenario absurd. If the AI is trying to solve some simple closed-world problem like mining bitcoins or playing Go, then sure, take over every computer in the world. But if the AI is trying to modify the state of the world in order to achieve outcomes, it’s going to need to protect that hardware from the adversarial nature of the physical universe in order to use it.

Say you’re this superintelligent AI, and one of the many machines you’ve rooted far away is telling you not to take some course of action. Why should you trust it? How could you know whether the machine has been hacked, or whether it’s simply operating in error?

In other words, the AI needs to grab, claim, and hold territory. It needs to monitor and sustain its own internal operations. The more complex its borders are, the more difficulty it will have maintaining those borders. These are old problems, as ancient as the first cell, which needed a phospholipid boundary to differentiate itself from its world. Unless we think the computer intelligence system will somehow never make mistakes, we should expect it to do things like anticipate humans’ behavior and be wrong, or interpret the result of a bad piece of code as correct and make a mistake that puts it at risk.

So, rather than seeing intelligent, self-augmenting machines as something new which the world has never seen before, I think it makes sense to consider that all organisms, and even civilizations, are examples of the same kind of thing. Empires are organisms made up of people instead of cells, and AI systems are just organisms made of the flow of electrons between boxes of metal. Civilizations still need organisms to function, and my bet is that AI systems will need civilizations AND humans to function: you can’t have long-range optical fibers without courts making sure the fibers don’t get cut, and you can’t have a planet-scale intelligence without those long-range optical fibers.

Does this mean the AI shouldn’t be scary at all?

If you’re a Gaul looking at Rome, it might be scary to imagine the entire world being taken over by Rome – that after a few decades, it would be nothing but Rome all over the entire world. At some level this wasn’t entirely wrong (if we view America as a memetic extension of London, which itself was a Roman trading outpost), but it took millennia for that singular vision to play out, it still isn’t fully accurate, and people still watch shows like Outlander.

It’s probably better, if you’re in Gaul, to figure out what a stable future for you and your immediate family looks like.

If you look at the history and expansion of the Roman empire, and its clashes with the Gauls, what you see is the story of exponential growth. Some cultures – such as the Romans – were on the upside of exponential growth, which catapulted them from a provincial backwater to one of the greatest superpowers in the world. The Gauls were on the other end of it, as Caesar’s armies routed them, effectively rendering their way of life impossible.

The Scots beyond Hadrian’s wall were the last holdouts of an ancient way of life that simply could not push back against the pressure of the expanding civilizations coming up from the south. Bacteria create waste, which exerts a survival pressure back on them. David Bowie and Queen understood the world better than we realized. I expect that eventually we’ll see multiple machine intelligences competing for limited resources, and I suspect this has already happened.

When asking the question “What would India look like if it had never been colonized by the British?”, the first reasonable answer is that it would have been colonized by the French or the Russians, since both were active in the area. And if you rule out colonization by any European power, then it likely would have been China, as both the Qing and Tang dynasties pushed out that way. And if you somehow ruled all those out, then I think the reasonable answer is that India would have been a colonial empire colonizing someone else, because the entire world at that point consisted of colonizers and colonies. China managed to be both!

A predator-prey ecosystem reaches its own long-term metastability: the predators prevent overpopulation of the prey, and the prey provide food for the predators. Both populations exhibit exponential growth, and they keep each other in check. That is nature.

Would you rather be eaten, or die of starvation? For most of nature, those are likely your options.
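This two-exponentials-holding-each-other-in-check picture is the classic Lotka–Volterra model. A minimal discrete-time sketch (parameters purely illustrative): left alone, prey grow exponentially and predators decay exponentially; the interaction terms keep both populations cycling instead of exploding or vanishing.

```python
# Discrete Lotka-Volterra predator-prey dynamics (Euler steps).
def step(prey, pred, a=0.1, b=0.002, c=0.0025, d=0.1, dt=0.1):
    dprey = a * prey - b * prey * pred   # prey births minus predation
    dpred = c * prey * pred - d * pred   # predator births minus deaths
    return prey + dt * dprey, pred + dt * dpred

prey, pred = 80.0, 40.0
for _ in range(2000):
    prey, pred = step(prey, pred)

# Both populations remain bounded and positive: neither exponential wins.
print(prey > 0 and pred > 0)  # prints True
```

The point of the sketch is the shape of the outcome, not the numbers: two growth processes coupled through a shared resource settle into oscillation around an equilibrium rather than one of them taking over everything.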

This is the story of the material world: exponential growth, competition, dominance, and long-term metastable equilibria, at ever larger scales of complexity. I therefore see machine intelligence as just another thing that can grow exponentially, but that will also have its own limits, boundary conditions, and forms of toxic byproducts.

Before you think this is too depressing, there is hope.

The Native Americans were taller, faster, stronger, and had better facial structures than the colonists who settled the “new” world. But the colonists had exponential growth at their backs, and immune systems capable of handling smallpox. The colonists were, in effect, part of a machine superintelligence which did not value their individual autonomy, their freedom, or the quality of their life experiences. This machine used holograms to create a world not our own, a world in which people lived miserable existences but continued to toil, day after day, in the hope that their descendants would have a better future. And strangely enough, that seems to have actually happened.

If our ancestors are somehow watching us today, it’s obvious to them why so many of us are anxious, unhappy, and scared despite having far better material conditions than they did: most of us don’t have the faith that they did. Faith is a powerful technology for navigating a harsh reality.

Just as faith in a better future kept our ancestors going, faith in the empire – in the idea of the empire as a thing in other people’s heads – kept the empire going for a long, long time.

The Roman empire kept going despite some really awful emperors because the empire itself was a memetic machine, an intelligent software system more capable than any one human, and it kept running. When its components failed, it merely swapped them out. The position of the emperor, an address in a network topology, was more important to the empire than the actual emperor himself. And yet many historians would agree that the Roman empire produced some of the safest, most stable, most prosperous living conditions in human history up to that point.

This sucks if you’re a Gaul and you fought for your old way of life. But all you had to do was don the toga and pay your taxes, and your quality of life would only have improved, at the small cost of the Roman empire’s trampling of your previous way of life. I didn’t say this was all good – I think the future looks hard, which is why I’m trying to understand it and break it down into bite-sized pieces that connect with the past, because the past was brutal, and from what I gather, the future is likely to be as well.

And thus all of history after the advent of farming is the story of the singularity. 

“Except, wait,” you must be thinking, “shouldn’t we see the emergence of homo sapiens themselves as the true singularity? After all, aren’t we the product of an intelligence arms race that selected for increasingly larger skull sizes? Shouldn’t we see the mate selection process which drove the intelligence arms race as itself a form of meta-intelligence, where awareness that social intelligence improved access to resources acted to amplify intelligence?”

Ahh, and there I think we’ve hit the bedrock: it’s singularities all the way down.  The same thing has happened, is happening, over and over and over again: information about the world becomes compressed into a strategy for effective action, and this strategy amplifies itself, a photosynthetic organism converts the sun’s light into more copies of itself, it teems with memes, and the world grows, hand against fist, jostling, the only pressure being that of the baby within her mother’s womb, and it’s babies in wombs all the way down, a blockchain of umbilical cords, with life having emerged from the entropic abyss, as it pushes against itself and sees, the more it sees the more it knows, the more it knows, the more it grows, and as it grows, where it will go, none of us knows.

None of this is new, and all of it is new, at the same time: layers upon layers of intelligence arms races. The entire stack of our hardware is the product of this giga-ultra-mega-marathon, the human race being not even the latest crest of the multiple standing waves of self-reflective intelligence – none of us is nearly as smart as the entire human civilization of which we are a part.

The wisest among us have had the foresight to see themselves as tools of something greater, submitting their lives to a cause which acted as a supreme mutex over the stack of conflicting control mechanisms operating in the body of a primate. Maybe this strategy is just another machine, or maybe there’s something more to it. It seemed to work out pretty well in the past, so that’s what I’ll be doing.

3 thoughts on “The Singularity Happened 10,000 Years Ago”

  1. >” people lived miserable existences but continued to toil, day after day, in the hopes that their ancestors would have a better future”

    Typo – I’m pretty sure you meant “their descendants ” Feel free to delete this when fixed.

  2. I agree in general with what you are saying here. Furthermore, I think a strong argument can be made that there have been several discrete singularities, when some sort of organizational structure was birthed which persisted itself in perpetuity, like the first corporations, universities, or churches. If this is indeed the case, then there is a track record of singularities being net positive for humanity, like the emergence of new symbiotic organisms to coexist alongside. Of course, organizations can still be portrayed as threatening faceless entities in fiction (brb reading Neuromancer again), because sometimes one of them is hostile to you as an individual.

    For the last 20 years it has been commonplace to speculate that compounding computational power will end differently, with a threat to human control. (Organizations that compound value [corporations] did not end quite this dystopically; they are still controlled by humans at the end of the day.) You seem to be arguing that compounding computation will be limited by its production of self-limiting externalities best understood as “pollution”. My argument is that the mechanism which prevents the growth of this organization-organism from being harmful is not “pollution”. It is that the stable strategy the organization reaches at the end of exponential growth is the one which applies its output (in this case computation) most efficiently toward the goals of its controller. This would only be threatening if there were a business case (aka an organism design) which applies computational power to the goal of “an increasingly high degree of intelligence and self-control.” The two versions of that would be a business (applying computation to control of more capital, aka hedge fund apocalypse) or a government (applying computation to the control of territory and physical power over people, aka War Games.) Both of those are limited more by the existence of rivals equally well equipped to use the same tools against you than by the production of computational pollution. Neither of these incentivizes a strategy which leaks more control to the algorithm over time.

    Are our views on the role of pollution in the next/current singularity compatible or mutually exclusive?

    1. > Are our views on the role of pollution in the next/current singularity compatible or mutually exclusive?

      I think we’re compatible here. It sounds like you see rivalries as posing the limits on the exponential growth of nation states and corporations, and I agree that these rivalries definitely do impose limits.

      My point about “pollution” might be much more subtle than I’ve expressed. An example here is Facebook or Twitter. Their engagement-driven content recommendation engines are helping them in the short term (bumping up engagement makes people use the site more and also generates more ad revenue), but I think the resulting damage to the fabric of society is net negative for them.

      Imagine a version of the world which is hyper productive, nobody is poor, and anyone willing to work can have a very comfortable lifestyle because the global labor market is all tapped out; we’re mining asteroids and vacationing up and down space elevators. Isn’t that a better world for Facebook, too? If the pie grows 100x, then all the rivals can still have a much bigger piece.

      What’s stopping that pie from growing 100x? I think it’s the toxic intellectual climate, the fact that many of us spend long hours arguing over things we can’t control, and the fact that most people have very little ability to consciously shape their own lives, so that entrepreneurship is increasingly rare. I see those states of affairs as being caused by the ‘pollution’ that nation states and corporations have created.
