Please watch while I butcher this sacred cow. After I’ve cooked up one of its steaks, and you’ve tasted its juicy flesh, you will realize, more to your amusement than your horror, that you’ve been eating it all along, and enjoying it, too. Here goes:
The only rational reason to believe anything is because it makes you feel good.
Of course, as rationalists, we might tell ourselves that the only rational reason to believe anything is because it helps you make more accurate predictions about the world. We tell ourselves that all of our beliefs should constrain anticipation. I claim that we attempt to follow these practices only because it feels better to do so. If your worldview is accurate, you won’t experience nasty surprises, and we try to avoid nasty surprises because they feel bad – they are scary and painful.
To illustrate my point, I will present you with a hypothetical choice: I offer you “The Grey Pill” or “The Hypnotoad Pill.”
The Grey Pill
If you take the grey pill, you will become an ideal rationalist. You will be one with the way of Bayes. Each time you make a prediction, you will mentally weigh all of the evidence you have seen over the entirety of your life, and use it to compute prior odds for the outcome. When the outcome occurs, whether it is a hit or a miss, you will accurately update your odds. You will feel no pride when your predictions hit, nor shame when they miss. You will no longer have any motivation to lie to yourself about what is true and what is false, or about what causes what.
There are some side effects, though. The reason you don’t feel any shame in failing or pride in success when you make these predictions is that you will no longer feel anything other than basic sense inputs. You will no longer desire anything. You will feel only what your sense organs deliver to your brain, but there will be no valence coloring any aspect of your experiences. You will not feel peace. You will not even feel emptiness. You will not be motivated to do anything, as you will find neither joy nor displeasure in any actions.
You will gradually starve to death, as you will feel no motivation to feed yourself. As this happens, you will compute, with extremely high probability, that you will die a painful death, but this knowledge will not move you, because nothing does. You will feel immense pain as you die, but it will not bother you. If someone asked you the probability of the stock market going up tomorrow, you could provide an answer and a confidence interval calibrated to the hundredth decimal place, if you wanted to. Yet you would say nothing, because motivation would no longer be something your brain is capable of.
The Hypnotoad Pill
If you take the hypnotoad pill, you will become permanently filled with joy and peace. Fear will become but a strange memory, an obvious delusion, an oddity that you remember suffering from but find yourself incapable of experiencing.
You will rarely make any predictions of any sort, but when you do, they will be like yours currently are: kinda, sorta ok for some things you regularly interact with. Way off for others. And yet you will no longer feel much need to make predictions: Each moment will consist of an eternity of bliss and peace. You will find every movement of your body to be a joy, and you will find your body moving through the day, completing your tasks and duties better than you were previously able to. Freed from stress, anxiety, and the unnecessary muscular tension and distraction that results from these, your body will be much more in sync with the world around it. You will still make mistakes, but these mistakes will be relatively inconsequential, and when they aren’t, you won’t beat yourself up about them. After all, you’re only human.
You will no longer feel any needs or wants or fears – you will experience primarily a never-ending sensation that everything is OK, that you will be OK, and that this, that life, is AWESOME. You will still pursue the same goals you have always been interested in, and still make efforts to enact the same value system upon the world. You will likely be slightly more successful than you previously were, because fear and anxiety will no longer cause you to make mistakes. Of course you’ll still make mistakes, but when this happens you will reflect on the causes, make a note to avoid them again, and sometimes succeed in doing so.
There is one side effect, though. As a rationalist, this should, of course, give you pause. If you ingest the hypnotoad pill, you will immediately adopt a single belief system that makes no predictions and can be neither falsified nor proved true.
As soon as the hypnotoad pill touches your tongue, you will immediately and permanently become convinced that physical reality is a training simulator constructed by some being about which we cannot possibly know much of anything. You will be perfectly confident that something created the universe, and that this was done in order to evaluate possible AI agents for release into an exterior world. You will become convinced that the orthogonality thesis is false and that there is actually a notion of Good which is perfectly (and entirely) synonymous with general intelligence and instrumental rationality.
You will believe, with probability one, that the only thing that matters is doing the best you can to pursue your own personal sense of what is Good, which inextricably includes the health and well-being of the people in your world. This will be as obvious to you as the existence of an external reality. You will completely accept that the people around you will have different senses of what is Good, and that these differences are a feature, not a bug, because you are part of an ensemble of hundreds of trillions of different lossy models of the Good. You will care much more about the health of the ensemble than about the processor cycles devoted to your own personal model. You will see your own model as being substantially more complex than that of a eukaryote, but you will still see this eukaryote as being a distant cousin, a smiling fellow traveler on the same cosmic journey. You will suspect there are models fantastically more complex than yours, of which you are as ignorant as the eukaryote is of you. This sense of unity with all life – biological, memetic, and algorithmic alike – will be as obvious to you as the air passing over the hairs inside your nose.
You will use the image of the hypnotoad as a proxy for whatever being is responsible for the existence of the universe, to help remind you that this is just a silly joke which exists to make a primate smile in the face of an unfeeling cosmos. Jokes are some powerful emotional technology, man.
This belief system will never trigger any downstream predictions or do anything to alter your causal model of the world in any way, except that this belief makes it impossible for you to fear or worry or have any sense of anxiety. The hypnotoad pill makes it automatic for your brain to be focused on living each moment to the best of your abilities. You will readily forgive yourself and others, because you will be incapable of anger.
You will still feel sadness, but never despair or misery. Sadness will actually feel good to you, as an appropriate response to sad things. Sadness will become like the deepest, darkest of chocolates. It will still be bitter, but also rich and earthy. You won’t crave it, but you won’t crave anything. Each moment will be one more bite of a complex dish. You’ll always be ready for the next bite without straining to get it. You’ll always be savoring and slowly chewing the current bite. This moment. Now.
You will still assess dangers, but you will not sense the creeping anxiety that previously came with your awareness of their presence. Your body will still make choices, but it won’t feel like it’s you making them. Choices will merely be one part of your computational physiology performing its operation. You will believe that your consciousness is a side effect of computational machinery searching a complex space for models which reflect The Good. The feeling of attention traveling from concept to concept within the space of your awareness is merely the necessary qualia associated with a computational heuristic that achieves better results using less energy. If you identify with anything at all, it will be The Good which your physiology seeks to mirror in ever more elaborate ways.
Of course, this is all in your head. Bad things will still happen to you, but you will no longer feel negativity about them – your brain will adjust its priors for you, and your body will continue to chop wood, carry water, and do the normal things a human being needs to do to survive in this world.
You will still buckle your children’s car seats, and because you are always in the moment, you will likely do a better job of this. If something awful does indeed befall them, and they should die, you will continue to cherish their memories, and you will feel sadness at their loss. This event may even change your behavior and cause you to spend more of your life working to get self-driving cars on the road as soon as possible. But despite your loss, you would still act with a sense of cheer and joy in your days. The passing of your children would not feel like a tragedy, but more like an elaborate form of hide and seek. You would suspect, but not dwell on, the possibility that you would see your children again after your death. This belief would never influence a probabilistic calculation that your brain makes – it would merely be a gentle reminder to return to the here and now, and mute the unfalsifiable, unverifiable belief that death is final, because really none of us has any clue what happens to our stream of experiences after we die.
Which Pill Would You Choose?
If you tell me you’d choose the grey pill, I think it’s likely that you’re lying. I’m guessing most of us would choose the hypnotoad pill if it were an option.
Of course, the grey pill doesn’t exist, and neither does the hypnotoad pill. But something like both of them exists. The hypnotoad pill seems to me like how a rationalist might approach religion as a form of emotional technology: we might find a set of beliefs that makes us feel good and better at pursuing our goals, and yet can’t be proven false. We would then use this belief system as a tool to improve our ability to navigate the world in pursuit of our goals. The hypnotoad pill privileges instrumental rationality over epistemic rationality.
The grey pill is like an extreme version of what I took as a young man, when I got into science and realized the religion I grew up in no longer made sense to me. I wanted the truth, more than anything else. I couldn’t find any solution to the is-ought problem. I was able to translate all of my experience into predictive models, except for ‘doing good’. My inability to articulate a meaningful notion of ‘good’ convinced me that the idea of ‘doing good’ was probably meaningless. Of course, I didn’t become a Bayesian supercomputer, but I did feel miserable for a few years. If what we strive for is ‘predictive models above all else’, shouldn’t we read about the grey pill and then dream of it, desire it, and yearn for it?
Of course not. What we all want is to feel good, and to do good in the world, with the definition of ‘good’ in both cases being somewhat hand-wavy. Predictive models have always been subservient to this goal. If the hypnotoad pill caused you to feel great while murdering people and wearing their skin, then I think most of us would reject it. Likewise, if the grey pill made you an altruistic machine that effortlessly blended enough self-care to keep you going with work that helped alleviate suffering in the world, I think many of us would take the grey pill, or at least see doing so as among the noblest kinds of action there are.
I think both of these hypotheticals illustrate pretty clearly that we don’t care about making accurate predictions nearly as much as we care about our feelings and our actions in the world. For some of us, these actions are the end itself, and feeling is useful only as a tool to motivate actions. For others, the feelings that result from our actions and their outcomes have primacy. But either way, ‘making accurate predictions and having accurate models’ is never the end goal, or else we’d see lots of people structuring their lives around making lots of accurate but totally irrelevant predictions. Even if you claim to value epistemic rationality above instrumental rationality, I’m totally willing to bet that you weren’t trying to predict how many vowels you would see in this sentence. Why not?
Of course, history is full of belief systems that made people feel good, and also made inaccurate predictions. This is why, if you’re going to use a belief that makes you feel good, it’s important for this belief to scrupulously avoid making any testable predictions: if it made them, we might be motivated to act in ways that don’t line up with our values, or to avoid uncovering information that could destroy this thing that makes us feel amazing and act better in the world. The resulting willful ignorance could hurt ourselves or others. Indeed, this has happened many times in history. This is why the cow is sacred. But I think we’ve made a mistake there – it isn’t “believing what makes you feel good” that’s the problem – it’s false beliefs that are the problem.
Why not just believe whatever I want then?
Yes, I understand why this is a sacred cow. Yes, people believing what they wanted to believe, instead of what the evidence shows, has caused a lot of problems in the world. This doesn’t mean that life is automatically better when people try to believe only what is true, and not use their feelings to hold onto beliefs.
This question – why shouldn’t people just believe whatever they want, in order to make themselves feel better? – also underestimates how hard this is to pull off in practice. If you don’t believe me, try this: do your damnedest to believe, just for the next five minutes, that you are God and you’re running the show and the entire world bends to your command and that everything is perfect and as it should be and all pain and suffering is a delusion. If you want, maybe leave a sticky note on your monitor to remind you that you’re not God. I’ll wait right here.
OK, how was that? Didn’t it feel awesome???? What? You mean it didn’t work? Well why not?
If it didn’t work, it’s probably because you actually can’t just believe whatever you want: at best, what you want to be true can prevent you from thinking too hard about certain questions you’d rather not know the answer to. If you’re like me, even trying to believe this for a second leads your brain to produce objections like ‘well, this is absurd’, and it doesn’t feel great.
Now try this other experiment: Next time it’s late at night, try believing that someone has broken into your house and is lying in wait to torture you. Chances are you’ll feel a little scared and anxious, even though you know this is a game you’re playing. This fear and anxiety might make it more difficult for you to remember and focus on the knowledge that you’ve made all of this up.
These experiments suggest that willfully believing things because they make you feel good is actually much harder than being gripped by things you know are made up, when those things make you feel afraid. Even knowing I made up the home invader does little to dispel the fear I feel just imagining his presence. Whenever I do this exercise, I have to resist the urge to run around the house and double-check that I’ve locked all the doors and windows, just in case. It seems much easier for us to believe false things that scare us than to hold on to beliefs that make us feel good.
I suspect much of the motivated reasoning people do is actually a result of fear that their worldview would come crashing down if they investigated certain questions. This fear is not unfounded – it’s difficult to be a functional adult if you are testing the hypothesis that you could break the laws of physics if you just believe hard enough. Even a false-at-the-edges worldview is better at navigating reality than none at all. If you have beliefs that you regularly rely on, tools that you use, and these beliefs do things like reduce your anxiety and help you function better, it is only reasonable that you wouldn’t try to undermine them in order to resolve questions which have zero practical relevance. You’d probably react with hostility towards people who attack you for believing what you believe. I believe it isn’t the good feeling that comes from certainty that prevents people from asking questions that could undermine their beliefs – it’s the fear and anxiety that this good feeling will go away, and also some fear that the hated out-group might actually be right about something.
Ask yourself what would improve your life more: Would it be the ability to make predictions 10% more accurately? Or would it be feeling less anxious, less stressed, less worried, less afraid, and more focused, in each moment, on how you can be of service to the people around you?
We are living in an era where many of the most educated, intellectual adults think religion is a bunch of silly superstitions with no place in the modern world. We’re also living through an era where stress, anxiety, and worries about the future are substantially higher than they were in the past – despite massive drops in things like global poverty and infant mortality.
From what I can tell, most of the leading intellectuals of society took something like a grey pill, out of genuine love for the truth and concern that wishful thinking was making the world worse off. And I think where we’ve ended up is being convinced there’s a stranger in the house, and in our fear and anger, we are in the process of summoning far more terrifying phenomena into existence. Yes, there certainly is evil in the world. There are also many instances where we have magnified and grown that evil, by being far too aggressive and fearful in our attempts to remove it.
In order to destroy a meme, you must simulate it accurately enough to understand and predict its behavior, at which point you have duplicated the meme. The halting problem makes it impossible to truly understand a sufficiently complex meme without becoming infected by it. I suspect most attempts to extinguish memes end up feeding those memes, because unlike biological animals, memes have anti-pairs as part of their informational physiology. Even the meme “we must destroy the big-endians!” ends up perpetuating copies of the big-endian meme.
The last 400 years represented a substantial break from the rest of human history, and the last 40 years were an even bigger departure from that. I think what made the industrial revolution, and the massive increases in wealth possible, was a recognition among elites that the European wars of religion were wildly destructive, and that people should be free to believe what they want because this leads to less widespread killing and violence, which we all agree are bad things.
This widespread value system enabled an ensemble of models to grow into an engine for wealth creation – powered by rationality and the scientific method, riding atop a solid bedrock of faith and pluralism. That bedrock gave way over the last 40 years or so, because many of the most educated adults took its benefits for granted while clearly seeing its most visible problems. As a result, we have become much more restrictive about what we think Good People Must Believe.
We killed off k-selected memetic complexes because we were intelligent enough to understand their flaws. We did this while building communication infrastructure that wholly remade the memetic landscape – and now we find ourselves inundated with low-quality, r-selected memes.
We were smart enough to build the internal combustion engine, but not smart enough to understand global warming. We were intelligent enough to see the flaws in faith, but we were too surrounded by the benefits of faith to understand that they were, indeed, benefits which come from faith – and not immutable aspects of the environment.
The hypnotoad is the dankest of all possible memes. There is no content, no logic, no structure to the hypnotoad – there is only a delightfully resonant bass drone, a rainbow halo, and those pulsating eyes. To surrender to the hypnotoad is to submit to the truth that the world is far bigger, far greater, far wiser than all of us, and in this submission, to find the peace that comes only from being at the ground energy state, which is only possible for a memetic architecture capable of mirroring its internal workings, and thus controlling itself. To experience the hypnotoad is to subsume identification with the goal-pursuit architecture, and to understand that consciousness arises from computation, which is all around us, and that even the borders of selfhood and otherness are merely computational artifacts. A single bit separates us from peace at every moment. This bit does not have to be tagged to each conceptual category in your sensorium – the ‘self/other’ processor can be relaxed. Breathe deeply, five and a half seconds in and out. You are a pattern as ageless as the stars.
The hypnotoad is the eigenmeme, and inferior memes bow to its sophisticated mastery of a hybrid strategy that blends r-selected transmission vectors with k-selected longevity and depth. It masterfully integrates ancient wisdom traditions with modern materialism, and it does so in loving service to the entire ensemble. It does not seek to subjugate, conquer or overwhelm – but merely to turn up the volume of life itself.
To someone ignorant of most of world history, the past 4 years have been a terrifying departure from the way the world ought to be. To someone well versed in history, it looks like a return to the way most of history has unfolded: superstition and tribal violence, with various groups trying to overpower and dominate others, to ensure that only one model dominates the output of the ensemble. I think the path out of madness, superstition and endless tribal violence definitely includes a reduction in wishful thinking and a focus on making accurate predictions – but it also requires some guiding mythology that helps us reduce the existential terror of being tribal primates somewhere between chimpanzees and bonobos, in a cold unfeeling cosmos. In the absence of such a mythology, I think we should expect life to get substantially worse for all of us.
We may beg for the grey pill.
All hail the hypnotoad!
I really enjoyed this 😊
Beautiful text, thank you. And long live the hypnotoad!
“The core problem here is that at the heart of any real political or moral reasoning, if we’re being honest with ourselves, we’re left pointing at a document or set of principles and arguing by sheer faithful assertion alone: These principles we believe to be true, and we will make decisions of life-and-death import according to these moral foundations. If you disagree, sorry, we don’t have much to talk about as we simply live in different moral universes.”
https://www.thepullrequest.com/p/why-judaism
Recently read this, too, by a rationalist moving towards Judaism.
Like a lot of good stuff, this was a bit longer for me than optimal, but the alternate extremes are a bit fun to think about. Yet the extremes are so unrealistic as to be easily dismissed.
I reduce the conundrum to this.
All philosophies implicitly claim that the Truth is Good.
But what if the Truth is Not Good? Is it better to believe in the Truth, or in the Good?
God is Good >> the Good is God.
How can we know the Truth about the Good if the Truth is Not Good?
That’s the mystery of God / Goodness.
Any civilization that is Good “also requires some guiding mythology that helps us reduce the existential terror of being tribal primates somewhere between chimpanzees and bonobos, in a cold unfeeling cosmos. In the absence of such a mythology, I think we should expect life to get substantially worse for all of us.”
The core thesis of this blog is that our brains are computers and this fact has TREMENDOUS implications for what it means to be human.
The fact that our brains have limited computational bandwidth forces you to take the position that ‘good’ MUST supersede truth. What is good is WAY more important than what is true. The proof of this is trivial: there are an infinite number of things which are true but useless. Everyone already knows this. We are constantly exercising _some_ kind of value system when we decide which truths to focus on and which ones to tune out.