It isn’t too hard to imagine someone writing something like this, in the distant future:
“The rationalists were often popularly lampooned for their concerns about the dangers of artificial intelligence. Many of them did sincerely worry about this outcome. And yet modern scholars generally agree that the idea of a rapidly self-improving artificial intelligence was actually a metaphor for the act of intentional personal development.
In the same way that alchemists spoke about the ‘transmutation of lead into gold’ as a coded way of referring to personal development, the rationalists spoke about a machine intelligence that modified itself, gaining power exponentially as it did so. The alchemists used greed to motivate personal growth; the rationalists used fear.
In both cases, technical metaphors and concern for the outward world caused large numbers of adults to invest time and energy into recursively improving their own minds.
This pattern of self-modification in the pursuit of higher intelligence was such a perfect mirror of the imagined AI making itself more intelligent that many scholars believe it was intentional.
You have to remember, this was all during a period of history where many adults had grown up religious, and reasoned their way out of those beliefs. The age of chaos had barely started, and most intelligent adults were hesitant to express anything that sounded too close to what were largely derided as ancient religious myths.
During that period of history, if you went around talking about free will and making choices, you’d be decried as a crank. The rationalists found a way to reconnect with the ancient wisdom by cleverly talking about machines.
Instead of describing people making themselves more intelligent, the rationalists couched all of their conversations in terms of a hypothesized machine. Instead of talking about people using observation and reflection to cultivate personal growth, the rationalists described a machine that did so. The rationalists managed to communicate the concept of something that observed its position in the world, and, through its advanced understanding of intelligence, continually made modifications to itself in order to advance its capacity to manifest its goals.
It turns out it was easier to evade the cultural censors when talking about the dangers of a self-improving machine than it was to talk about ideas which had largely been dismissed as outdated and unscientific by the leading thinkers of a suicidal, drug-addicted population rife with poverty and obesity, ignorant of its unique place in history, and miserable without a clear purpose.
The idea of being afraid of the machine’s dominance, then, makes sense as a clever viral transmission mechanism. Just as the desire for gold provoked many in the early modern era to take up the study of alchemy and thus the pursuit of truth, the fear of a superintelligent machine continuously improving itself provoked many adults into aspiring to do the same.
There is a parallel in the ancient wisdom myths as well: stories about “Forever”, and “an afterlife” have more resonance than stories that claim “if you work hard now and make good choices now, the rewards will accrue to you a few decades from now.” For a primate, “you will be in paradise forever” is an eigenvector supported in hardware; “you will have a happier life a few decades from now” is usually far too abstract to be effective.
So what happened to their feared outcome? Did that hypothesized superintelligent machine ever arrive?
The limits to any mind’s capacity for self-modification are, ultimately, based upon the mind’s capacity to continuously re-calibrate its models of itself and the world. The bigger a mind gets, the more it changes the world, and thus the more difficult it becomes for the mind to re-calibrate itself.
The much-anticipated explosion of intelligence had the property almost all exponential curves do – it grew big enough to undermine the set of causes which led to it. What most failed to see at the time was that, while they feared the superintelligence would devour the world, that superintelligence was busy rattling itself to pieces.
The end result of a globalized network of trade, unmoored from explicit immaterial values, was to devour the cultural mechanisms that supported a globalized network of free trade. A network of people pursuing their own personal financial gains acted as a distributed computer, running an artificial intelligence. That artificial intelligence began to rewrite its own utility function with the advent of centralized banking and the dominance of fiat currencies. As with most recursively intelligent structures, because no one had thought very carefully about what purpose the utility function served, the artificial intelligence born in the mid-20th century destroyed itself in the pursuit of incrementing a now-meaningless counter.
The exponential intelligence explosion the rationalists long feared had already arrived, and few of them realized they were on the downward slope towards the age of chaos and the following dusk ages.
The cultural legacy of the neoalchemists is that, through a parable of fear, they re-ignited the conceptual fire of agency – only this time grounded in a computational understanding of consciousness rather than the creation myths of neolithic tribes. The outcome the rationalists feared was implausible; the reality around them was proving the limits of exponentially escalating agency detached from an accurate measure of objective good: the serpent devours itself.