Rationalists are Neoalchemists

It isn’t too hard to imagine someone writing something like this, in the distant future:

“The rationalists were often popularly lampooned for their concerns about the dangers of artificial intelligence. Many of them did sincerely worry about this outcome. And yet modern scholars generally agree that the idea of a rapidly self-improving artificial intelligence was actually a metaphor for the act of intentional personal development.

In the same way that alchemists spoke about ‘transmutation of lead into gold’ as a coded way of referring to personal development, the rationalists spoke about a machine intelligence that modified itself, gaining power exponentially as it did so. The alchemists used greed to motivate personal growth; the rationalists used fear.

In both cases, technical metaphors and concern for the outward world caused large numbers of adults to invest time and energy into recursively improving their own minds.

This pattern of self-modification in the pursuit of higher intelligence was such a perfect mirror of the imagined AI making itself more intelligent that many scholars believe it was intentional.

You have to remember, this was all during a period of history when many adults had grown up religious, and reasoned their way out of those beliefs. The age of chaos had barely started, and most intelligent adults were hesitant to express anything that sounded too close to what were largely derided as ancient religious myths.

During that period of history, if you went around talking about free will and making choices, you’d be largely decried as a crank. The rationalists found a way to reconnect with the ancient wisdom by cleverly talking about machines.

Instead of describing people making themselves more intelligent, the rationalists couched all of their conversations in terms of a hypothesized machine. Instead of talking about people using observation and reflection to cultivate personal growth, the rationalists described a machine that did so. The rationalists managed to communicate the concept of something that observed its position in the world, and, through its advanced understanding of intelligence, continually made modifications to itself in order to advance its capacity to manifest its goals.

It turns out it was easier to evade the cultural censors when talking about the dangers of a self-improving machine, than it was to talk about ideas which had largely been dismissed as outdated and unscientific, by the leading thinkers of a suicidal, drug-addicted population rife with poverty and obesity, ignorant of its unique place in history, and miserable without a clear purpose.

The idea of being afraid of the machine’s dominance, then, makes sense as a clever viral transmission mechanism. Just as the desire for gold provoked many in the early modern era to take up the study of alchemy and thus the pursuit of truth, the fear of a superintelligent machine continuously improving itself provoked many adults into aspiring to do the same.

There is a parallel in the ancient wisdom myths as well: stories about “Forever”, and “an afterlife” have more resonance than stories that claim “if you work hard now and make good choices now, the rewards will accrue to you a few decades from now.” For a primate, “you will be in paradise forever” is an eigenvector supported in hardware; “you will have a happier life a few decades from now” is usually far too abstract to be effective.

So what happened to their feared outcome? Did that hypothesized superintelligent machine ever arrive?

The limits to any mind’s capacity for self-modification are, ultimately, based upon the mind’s capacity to continuously re-calibrate its models of itself and the world. The bigger a mind gets, the more it changes the world, and thus the more difficult it becomes for the mind to re-calibrate itself.

The much anticipated explosion of intelligence had that property almost all exponential curves do – it grew big enough to undermine the set of causes which led to it. What most failed to see at the time was that, as they feared the super-intelligence would devour the world, that super-intelligence was busy rattling itself to pieces.

The end result of a globalized network of trade, unmoored from explicit immaterial values, was to devour the cultural mechanisms that supported a globalized network of free trade. A network of people pursuing their own personal financial gains acted as a distributed computer, running an artificial intelligence. That artificial intelligence began to rewrite its own utility function with the advent of centralized banking and the dominance of fiat currencies. As with most recursively intelligent structures, because it never thought very carefully about what purpose its utility function served, the artificial intelligence born in the mid-20th century destroyed itself in the pursuit of incrementing a now-meaningless counter.

The exponential intelligence explosion the rationalists long feared had already arrived, and few of them realized they were on the downward slope towards the age of chaos and the following dusk ages.

The cultural legacy of the neoalchemists is that, through a parable of fear, they re-ignited the conceptual fire of agency – only this time grounded in a computational understanding of consciousness rather than creation myths of neolithic tribes. The outcome the rationalists feared was implausible; the reality around them was proving the limits of exponentially escalating agency detached from an accurate measure of objective good: the serpent devours itself.

4 thoughts on “Rationalists are Neoalchemists”

  1. The world in which this could be written seems horrifying to me. Less horrifying than a mis-aligned AI, but still. This is a good, subtle horror story.

  2. “Instead of describing people making themselves more intelligent, the rationalists couched all of their conversations in terms of a hypothesized machine.”
    This has been a fundamental drive for me, for as long as I can remember.
    [though post-hoc memory editing to fit narrative must be assumed]
    Never understood why the question “What makes humans go foom?” is never discussed with equal enthusiasm.
    This idea has been with me, in some form or other, for as long as I can remember.
    Will to Power is one way of putting it.
    I am not an AI, what should I care for an AGI?
    But rationalists must be people who (apart from having many valuable insights and mindsets, which I adopted) are generally not as concerned with it.
    “Instead of talking about people using observation and reflection to cultivate personal growth, the rationalists described a machine that did so.”
    Cultural censors? We’re a bunch of contrarians and meta-contrarians.
    Revealed preferences tell me that most are not too concerned with winning.
    You think rationalists aren’t winning, because actually wanting something is considered a little gauche?

    “The much anticipated explosion of intelligence had that property almost all exponential curves do – it grew big enough to undermine the set of causes which lead to it. What most failed to see at the time was that, as they feared the super-intelligence would devour the world, that super-intelligence was busy rattling itself to pieces.”
    Rattling yourself to pieces and piecing yourself together is part of the process, you know 🙂

    1. “Revealed preferences tell me that most are not too concerned with winning.
      You think rationalists aren’t winning, because actually wanting something is considered a little gauche?”

      I think it’s something along these lines. My explanation would be that the gaucheness ends up covering up something which is essentially fear-driven. If you actually get out and try to accomplish things in the world, you see how hard this is. If you say, clearly, what you stand for, you might be criticized or ridiculed for your inability to articulate your value system in total detail. It’s far easier to be a critic of other people’s ideas, and of other people’s performance in the world, than it is to try to outperform them, either with your actions or your ideas. Very few people will tell you directly that they don’t act in some way because they are afraid. Most people will justify it with some other narrative.

      I think there’s also a media ecosystem component to this as well. Most cultures throughout history have had something like a priestly class which maintained and propagated the mythology that the society used to advance itself. That role, in our society, is played by corporate media and the university system. Both of these have done a terrible job of articulating concepts like free will, or the fact that people have the ability to make choices. I get the impression most educated adults think these concepts are absurd, and look down on anyone who believes in them as uneducated and attached to ancient myths. As a result, many of the people with the most intellectual reach in our society (New York Times journalists, leading academics), and the people who follow their mythologies, have bought into the idea that personal freedom of choice is a quaint myth. They repeat – over and over – that we are all slaves to our culture, and so we must reform culture to make the world better. The fact that they have these giant platforms and massive audiences doesn’t ever lead them to consider that they are the ones driving culture now – they continue to claim that we are all slaves to history, while attempting to rewrite it to fit their own narratives. This attempt to shift all agency onto ‘the culture’ instead of the minds of individuals seems to me like it only helps give them more power. What better way to enslave people than to convince them freedom is a silly myth, and we rational scientists now know better?
