P ⊂ NP causes Political Polarization

Some time last spring, I reached a conclusion. I was drinking my morning coffee, and the thought appeared in my mind. It was as clear as the first drop of milk into the otherwise black mug: perhaps many people aren’t capable of holding beliefs that differ wildly from those of their peers. It seemed entirely plausible once I’d articulated it, and it also gave me a sense of peace. This would explain a lot of frustrating encounters, and gave me a good reason to just stop trying to have certain categories of conversation.

The conversations go poorly not because some of my ideas are too out there. It’s that lots of people have trouble considering “out there” ideas. It’s as if those ideas get identified as viruses and shut down before a person can think about them. So here’s another out there idea: these conversations go poorly for the same reason American politics is currently toxically polarized: because of the widespread intuitive perception, shaped by evolution, that P ⊂ NP.

Have you considered that maybe you’re just crazy?

At this point, the reader might raise the simple objection that hey, maybe you do have ideas which are totally insane. Sure, you don’t think your ideas are comparable to ‘the moon is made of cheese’, but nobody thinks that. Every crackpot thinks their beliefs and ideas are reasonable. When you claim that computational complexity theory has some effect on politics, you might as well advertise your new cryptocurrency as working better than a tin-foil hat to protect you from the X-rays that the new COVID vaccines give off when they are in proximity of 5G transmitting towers.

So, instead of claiming that many people can’t play around with ‘out there’ ideas, how do I know I’m not just full of totally batty ideas? And, honestly, this is a fair point! At one point in my life, after seeing “The Men Who Stare at Goats”, I tried seeing whether or not I could walk through walls just by willing it to be so. How do I know I’m not still in that mentality?

My response to this question would be that in my time at Google, the people I found who were most willing to consider my heterodox ideas tended to be more successful, and further up in the Google hierarchy. A director with 300 people on his team was more willing to hear out some of my ideas that many of the junior individual contributors might have dismissed as absurd. So I suppose it’s possible that I happen to have lots of ridiculous “the moon is made of cheese” beliefs, and that for whatever reason, people who are willing to consider these absurd ideas are more heavily represented higher up the management chain.

What I suspect is more likely is that “willingness to consider heterodox ideas” is something of a rare skill that has to be consciously developed, and in the absence of effort to develop this skill, most adults probably over-rely on the top-down prediction that “this idea is absurd.”

I now think this pattern – aggressively rejecting foreign ideas as absurd – shows up in two complementary failure modes of reasoning. One failure mode is “epistemic outsourcing”: adopting all the beliefs of your social class, without considering that some of them may be internally contradictory, or in conflict with empirical reality. The second failure mode is “epistemic autism”: insisting on “going it alone” and rejecting any ideas that don’t fit in with your own personal way of viewing the world.

I’m using the word ‘autism’ here both because of its classical sense of ‘going it alone’, and because I think this is a common pattern exhibited by autists. Neurotypicals, on the other hand, I think default to epistemic outsourcing. And, since this is a blog about computational complexity and the human experience, I think there’s a neat tie-in here to the P vs NP problem. And, since this is a blog about the distributed system that is humanity, I think there’s also a tie-in here to network partitioning, aka political polarization. But first, let’s talk about the two failure modes.

Epistemic Outsourcing vs Epistemic Autism

Note that both of these descriptions represent extreme forms – a pattern taken as far as it can go. They are mostly caricatures: exaggerated versions of a milder pattern that’s probably much more common. Most people don’t have a hunched back, but most people do slouch. With that said, here are the descriptions of two forms of an epistemic hunched back:

Someone stuck in epistemic outsourcing is incapable of expressing support for beliefs that would shock or upset their peers. Social approval acts as an incredibly strong filter on the ideas they can think, to the point that it is difficult, or impossible, for them to seriously consider an idea unless they have seen a number of other people they consider to be part of their peer group take it seriously as well.  Likewise, once a number of their peers take an idea seriously, they feel compelled to do so as well.  They may disagree with their peers, but they won’t consider the possibility that something many of their peers believe is absurd.

We might say that this person has a very high prior on the belief: “My group is unlikely to be in agreement on something important that is totally wrong.”

Someone stuck in epistemic autism is incapable of expressing support for other people’s ideas and beliefs. “Why would I bother exploring these ideas when they are obviously wrong?” is the kind of thing I used to say all the time. I did very poorly in a college “Philosophy of Time” class because I didn’t understand “the name of the game”: we needed to be able to articulate what lots of important philosophers have said about time. I read the arguments that other people (i.e. philosophers) made, but their ideas seemed irrelevant to me, since their axiomatic foundations seemed off. I did not see any utility in running with a set of ideas built upon axioms I thought were wrong. I wasn’t thinking about getting a good grade or what the teacher wanted me to do, since those things seemed irrelevant to me. I never had to consider the teacher’s intentions for us when I was in physics class – getting the right answer was enough. “Why should any other field be different?”, I thought to myself. I thought what we were supposed to be doing was developing our own philosophy of time. I was stuck in epistemic autism.

We might say that a person stuck in epistemic autism has a very high prior on the belief: “My epistemic foundations are both sound and complete; all true statements must be expressible within the current frameworks in which my mind operates.”

In both cases, the inability to express support for a belief seems to limit people’s ability to even articulate it. I have often found that, the moment someone is able to articulate another person’s beliefs in their own language, the “argument” often boils away, or at least calms down a bit. The mere act of stating someone else’s beliefs, in your own language, takes you a good portion of the way along the path to agreeing with the other person. Of course, people sometimes walk halfway down a path, and decide not to go all the way. But I get the impression that for a lot of people, even setting one foot on the path is not doable, if they think it’s a path to the enemy’s camp.

Although these two failure modes might look different on the surface (one of them is extremely isolating, whereas the other probably makes you feel closer and more connected to your group), they exhibit the same underlying shape: “[Me/We] understand how the world works, and [You/They] do not.”

If you perform a ‘conceptual squint’, you might consider “me” to consist of a community of many different people – the different versions of ‘you’ that you’ve been throughout your life. Likewise, you might consider “us” to be a culture’s ego thinking by means of your own brain. In both cases, there is something which regards itself, and its own beliefs, so highly that it aggressively rejects beliefs that appear to conflict with its own, without considering how well those beliefs fit its bottom-up stream of sensory experience, or even whether there truly is a conflict at all! An initial prediction (“this idea is absurd”) is made and not challenged, so the prediction becomes self-fulfilling.

So what do these have to do with P vs NP?   And, what are those, again? Why do they matter?

An O(1) Summary of Computational Complexity Theory

There are two ways you might go about solving a difficult problem:

  • Construct a solution, step by step, reasoning from what you already know. Problems where this construction can be done quickly make up the complexity class P.
  • Guess a solution, and then check whether it works. Problems where a guessed solution can be checked quickly make up the class NP – imagine a magically lucky guesser (a “nondeterministic” machine) handing you candidate answers.

When we try to answer “give me two numbers, each greater than 1, whose product is 199036921”,  we can see why the power of always guessing correctly wins out. It’s easy to multiply two numbers we guess, and see if their product matches the target.  Finding those two numbers is probably way more work.  
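To make that asymmetry concrete, here’s a minimal Python sketch of the two halves (the function names are mine; 199036921 would behave the same way as the small numbers below, just with far more trial divisions):

```python
def check_guess(a, b, target):
    # Verifying a guessed factorization: one multiplication.
    # This is the cheap "check" half of NP-style guess-and-check.
    return a > 1 and b > 1 and a * b == target

def find_factors(target):
    # Constructing a factorization deterministically: up to ~sqrt(target)
    # trial divisions before we either find a factor or give up.
    d = 2
    while d * d <= target:
        if target % d == 0:
            return d, target // d
        d += 1
    return None  # target is prime, so no such pair exists

print(check_guess(7, 13, 91))   # one multiplication: True
print(find_factors(10403))      # many divisions before finding (101, 103)
```

Checking costs one multiplication no matter how big the number is; searching can take up to roughly √n divisions. That gap is exactly what the lucky guesser gets for free.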

If someone gave you the magic power of “your guesses will always be correct, if that is possible”, you would probably expect to solve problems at least as fast as – and usually much faster than – the person constructing solutions step by step. You’d have the same intuition most computational complexity theorists have: that P ⊂ NP, that NP is strictly bigger than P.

Now, instead of something mostly irrelevant to the human experience (i.e. asking whether two numbers multiply to hit some target), consider a very different computational problem:

“How can I live a good life?”

You might imagine two approaches to solving this problem. The P approach is to attempt to reason, step by step, about terms like “good” and “life”, until we can algorithmically transmute the question into a recipe for action in the world.

The NP approach is to guess various approaches, and see how well they work.  Of course, we might prime our intuition by considering approaches other people are taking, and have taken in the past, and then ask whether or not they work.

Which do you think is easier? To develop a comprehensive theory about how the world works, and how to act in it, from scratch? Or to cobble one together by combining patterns of belief and action that you’ve already seen to be effective?

So we might extract from this dichotomy a basic pattern: if you see something that works for someone else, you might decide to copy it.   Because this pattern is so powerful, we might expect to see it all over the place, and this is what we do see with humans. All of us copy beliefs, ideas, and ways of living, from each other, because validating that someone else’s approach works is often cheaper than finding new ways that work.   

This computational shortcut has a cost: yes, it can help you adopt new ideas without having to find them yourself, but if you rely on it too heavily, you’ll end up rejecting true ideas and beliefs simply because nobody around you holds them. From what I can tell, this looks like it’s hard to avoid. It also looks like it’s not limited to adults – even my four-year-old daughter does this.

Complexity Theory Helps When Teaching Your Child to Read

I’ve been teaching my four-year-old to read. And let me tell you, I’m very glad I’ve learned theoretical computer science for this purpose. That knowledge has provided an interesting lens into the approach my daughter is taking: she suspects, intuitively, that P is entirely contained in NP. She understands that intuitive approaches are often more efficient than constructive, step-by-step approaches. Accordingly, she takes an approach to reading which is more along the lines of the ‘guess and check’ approach of a nondeterministic Turing machine.

Whenever she’s looking at a sentence in the book, she uses the picture to tell her what the sentence might be saying. She uses visual information to guess what the sentence says – just like the NP approach. She then checks her guess against the words. That is, she does not compute the answer to “which sound does this sequence of letters produce?” Instead, her intuition generates a guess based upon context, and she computes the answer to the much easier question “does this sequence of letters produce the sound ‘I can get up’?” Except, she isn’t even doing that computation! She scans her fingers along the letters of the page, and says “I can get up”, even though the page says “I can get on a rock.” She is substituting what she thinks she’s seeing for what she actually sees.

She isn’t seeing reality in this case. She’s seeing her beliefs about reality, superimposed onto a bunch of lossy signals from a messy, chaotic reality.  That’s what you’re seeing right now. You probably aren’t even noticing the periods at the end of these sentences as being composed of unique pixels – my guess is your eyes are doing a linear scan, and most of your working memory is composed of concepts. You’d only notice the pixels by their absence, and you’d probably notice that absence with a feeling of annoyance, such as ‘why is this guy not using periods and commas’, which is basically what a computer means when it throws an ‘invalid statement, missing ;’ error. 

It took me a little while to figure out what she was doing, but once I saw it, it all made sense: it’s hard to observe a pattern in your senses, faithfully, and then to transcribe it into a set of symbols. It’s much easier to test whether or not some pattern in your senses matches up to what you already suspect to be true. And it’s even easier than that to guess and not even bother checking! Unless someone or something you trust is telling you ‘this guess was wrong’, you can just keep going after bad guesses, without even realizing it. It turns out that “guess and don’t check” is possibly the cheapest form of computation, and likely to be wrong in subtle ways.

While practicing the piano, I noticed it took conscious effort not to trust my muscle memory. Once my fingers learned the motions, they would move automatically, which meant I was no longer interpreting each note on the sheet music, naming the note, and then trying to visualize the corresponding piano key. The predictions dominated sensory inputs – my brain predicted the next motion for me, and as a result, it took conscious effort to focus on the sensory input of black musical notes on a white sheet of paper. The focus on raw sensations, over in-built predictions, seems to make the difference between practicing and merely going through the motions. It’s the difference between learning, and restating what you already believe to be true. It’s the difference between listening to what someone believes, and reacting to what you already think they believe.

My daughter’s mistakes in reading, and my experience learning the piano, also made sense of what I see a lot of adults doing. Once a guess pops into her head, it looks like it’s very hard for her not to use the ‘guess and don’t check’ approach. Once my brain learned the motion of my fingers, it was very hard to keep the motion plan for my fingers flowing from recognizing which notes I saw on the page, instead of from predictive muscle memory. Once our brains give us a template for a situation, and say ‘here is what is happening’, it looks like it takes a decent amount of effort to discard the template, and direct our attention to the bottom-up stream of signals coming in from our eyes and ears – or fingers and toes and skin, etc.


And this gets us back into the realm of what adults often do when considering ideas that don’t line up with their worldviews.  This gets us back to why American politics is a mess.  It’s easy to have two kinds of filters, both of which fail for the same reason:

The epistemic autist says “everyone but me is likely to have false beliefs, whereas all my beliefs are reasonable.” The epistemic outsourcer says “everyone outside of my social class has ridiculous beliefs, but my social class knows better.”


The difference between what they are doing, and what my daughter does when she guesses the words in a sentence, or what my fingers do when they play the piano while I’m not carefully reading the notes, is probably just a matter of degree. When a child uses a “guess and don’t check” strategy, they might misread some words on a page slightly. When I use that strategy, I play bad music. When large groups of primates use a “guess and don’t check” strategy to evaluate beliefs about the world, they might divide themselves into mutually antagonistic dominance hierarchies, which only increases the strength of the prior that “those people have ridiculous beliefs, and we have the Truth.” These are both forms of epistemic isolationism.

Flipping the Bozo Bit: Trapped Priors as Rational Tradeoff

Have you caught yourself flipping the bozo bit on someone? Once you have decided that another person has no useful information for you, it’s easy for this decision to be a self-fulfilling prophecy. If you think someone else has nothing useful to contribute to your belief system, this belief then becomes the truth, because your belief renders you incapable of listening to them. Their inability to contribute information to you is real, but it’s a function of your beliefs, not their experience. This looks like an instance of a ‘trapped prior’.

What’s a trapped prior? Someone who finds spiders terrifying will react in extreme fear whenever they are near a spider, and the intensity of their reaction becomes ever more evidence that spiders really are scary. Likewise, someone who thinks “Bob is an idiot” is going to interpret whatever Bob says in the simplest, most naive way possible. The belief “Bob is an idiot” becomes self-reinforcing. Someone who holds this belief interprets anything that Bob says as either being obviously true, or totally absurd.

The “trapped prior” explanation for this phenomenon is that negative experiences (i.e. seeing a very scary spider, or talking to an adult with ridiculous beliefs) cause our brains to put less weight on the stream of incoming information. That is, when we have an existing, strong belief, it’s likely that we will pay less attention to incoming evidence which counters the belief. One hypothesis for why we discount evidence about phenomena we react to with intense negativity is that doing so protects us from the full intensity of scary experiences.

I think this seems plausible – it can be really unpleasant to talk to someone with different perspectives – but it may be worth considering whether priors get trapped simply because of the computational cost of attending to the incoming stream. I suspect that priors can get trapped as a result of rational decisions about how to allocate scarce energy. Our brains use roughly a fifth of the calories our bodies burn. Thinking is expensive, so it makes sense that we’d evolve strategies to avoid doing so.
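Here’s a toy model of how a prior gets trapped. The specific likelihoods, and the choice to gate attention as 1 − prior, are illustrative assumptions of mine, not anything from the trapped-priors literature:

```python
def bayes_update(prior, attention):
    # One Bayesian update on evidence that CONTRADICTS the belief:
    # likelihood 0.2 if the belief is true, 0.8 if it is false.
    # As `attention` drops from 1 to 0, both likelihoods blend toward
    # an uninformative 0.5 -- i.e. the evidence gets ignored.
    lt = attention * 0.2 + (1 - attention) * 0.5
    lf = attention * 0.8 + (1 - attention) * 0.5
    return prior * lt / (prior * lt + (1 - prior) * lf)

# Full attention: ten contradicting observations demolish a 0.95 prior.
p = 0.95
for _ in range(10):
    p = bayes_update(p, attention=1.0)

# Trapped: the stronger the prior, the less attention the evidence gets,
# so the same ten observations barely move the belief.
q = 0.95
for _ in range(10):
    q = bayes_update(q, attention=1 - q)

print(p, q)  # p has collapsed toward zero; q has barely moved
```

The trapped belief stays trapped not because the evidence is weak, but because the belief itself decides how much of the evidence gets through.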

When adults don’t think, and just react based upon previous experience, I suspect this is an evolutionary feature that’s only a bug in our current environment, which is much more complex and subtle than the ancestral environment in which our hardware self-optimized.

In the case of encountering something on your arm which looks, vaguely, like a poisonous spider, you could either:

  • Get out of there in a hurry
  • Determine whether or not the thing that looks vaguely like a poisonous spider is really a poisonous spider

Both of these approaches cost energy, but careful verification costs far more than fleeing. Since you only have a limited amount of energy, it makes more sense for your body to just get away from the possible threat.

Carefully observing a bottom-up stream of signals, without overfitting from top-down concepts, requires more energy than executing a fuzzy match. Conservation of energy is likely the reason we do predictive processing in the first place. It only makes sense that there would be situations where we over-fit existing beliefs onto observed phenomena – especially situations where the limited calorie budget we have might be better put to use handling a (possibly nonexistent!) threat.


If you do ‘guess and don’t check’ with the spider, and you were wrong, you’re still alive.
If you do “determine whether this thing which looks like a dangerous predator really is a dangerous predator”, or, perhaps, “determine whether this person who seems to be part of the enemy tribe really is part of the enemy tribe”, you might be dead.
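The payoff asymmetry can be made concrete with some numbers – all of them invented for illustration, none from any real data:

```python
# Made-up parameters for the spider-on-your-arm decision.
p_spider  = 0.01     # chance the blur on your arm is actually dangerous
cost_flee = 1        # energy wasted fleeing from nothing
cost_look = 5        # energy and time spent verifying carefully
cost_bite = 10_000   # cost of being wrong about a real threat
p_bitten_while_looking = 0.5  # verification takes time; it may strike

# "Guess and don't check": always flee, no questions asked.
expected_cost_flee = cost_flee

# "Determine whether it really is a spider": pay for the inspection,
# plus the occasional bite you suffer while inspecting.
expected_cost_look = cost_look + p_spider * p_bitten_while_looking * cost_bite

print(expected_cost_flee, expected_cost_look)  # fleeing wins by a wide margin
```

Even with a 1% threat probability, the tiny chance of a catastrophic outcome makes the unthinking reflex the rational policy – which is exactly why evolution would ship it as the default.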

Ok, so it looks like this “don’t think a lot in seemingly dangerous situations” pattern is probably some default tendency we have, that exists for reasons of computational efficiency.  Is it really worth my time to have another discussion with a guy who thinks the moon landing is fake, vaccines are a scam, and that 9/11 was planned by lizard people? That seems very unlikely.  Given that we all have limited time and energy, it only makes sense to flip the bozo bit on some people, right?

I don’t think it’s that simple. I think writing people off is like selling a bitcoin for $5 in 2011. Most of the time, you’ll end up right. But in the cases where you were wrong, you might really be missing out. And, unlike bitcoin, nobody’s going to come by and tell you afterwards just how wrong you were. You’re still executing “guess and don’t check” unless you periodically go back and check.

Difficult Conversations have Asymmetric Payoffs

If you’re talking about something complex, that you care about, with someone you know tangentially, and you’re doing this over the internet, and this person doesn’t agree with you – then you’re almost certainly wasting your time.

If you’re talking about something complex, that you care about, with someone you know well, and you respect, and you’re doing this either in person (ideally) or via voice – then I think you’re engaging in something that’s either going to waste your time and damage your relationship, or be one of the most valuable things a person can do. If the conversation goes well, it will strengthen your relationship and give your worldview much more flexibility. The outcome depends partially on chance, and partially on how well you can stay calm and just listen to what the other person is saying. In the best case scenario, you’re likely to come away with some new knowledge about how the world works. Even if the outgroup really does believe all kinds of absurd things, those absurd beliefs inform a lot of human action, and are therefore an essential component of a robust, predictive model of reality.

Earlier, I said that Epistemic Autism and Epistemic Outsourcing were caricatures.  That they were exaggerated versions of what people might do.  But is that really true?

If you’re an American and you align with one tribe or the other in the red/blue conflict, are you really open to the possibility that your tribe might be largely in agreement on something that’s wildly incorrect?  Are you really open to the possibility that some beliefs that the other tribe has might be more accurate than your tribe’s perspective? Both candidates in 2020 received over 70 million votes.   Are you really open to the possibility that there are 70 million people who reached a conclusion you find abhorrent, and yet they mostly try to do the right thing, they try to make sense of the world in the ways they can? Are you open to the possibility that they, like you, want to be good, but are stressed and tired and worried about their future and those they love?  Are you open to the possibility that yes, they are flawed and ignorant and sometimes even hateful, but that’s because they are human, and if you’re honest with yourself, you’re kind of the same way at times?  Are you open to the possibility that, just as you feel justified and reasonable in terms of who you hate, their hatred also feels justified and reasonable to them?  If you tell me you don’t hate anyone – is that really true? Or is it just something that you’re supposed to say because it’s uncomfortable to acknowledge some of our primate tendencies, like pooping, or arranging ourselves in competitive dominance hierarchies that aggressively attack competing hierarchies in a hybrid warfare that combines physical, mythological, and economic strategies?

And how’s your posture right now – are you slouching? Or frowning? Or squinting? Those are easy habits to fall into! Gravity makes us slouch, and only constant effort can prevent your slouch from turning into a hunched back over the decades. Likewise, I suspect that our widespread intuition that P ⊂ NP makes us fight each other, and only commitment to a story of ‘us’, and constant effort, can prevent us from dividing into warring tribes.

I think a lot of us have this idea that we divide ourselves into differing groups of belief systems because we’re tribal primates. And there’s almost certainly some truth here. But it looks like any system of agents that use models to reason about the world, and are subject to limited energy budgets, would probably fall into similar patterns, unless they explicitly value understanding and working with models other than their own. Predictive processing – using an NP approach where a predictive model generates guesses – looks like it’s both an artifact of limited energy budgets, and a driver of epistemic isolation. We humans can’t change the hardware we run on, but we can change our values by consciously prioritizing understanding and listening to people with different belief systems. My guess is that this value ends up being a form of instrumental rationality, and machines will need to do it too.

Executing Foreign Code Requires Trust


The path out of epistemic isolationism requires, ultimately, trust.  Getting out requires you to trust yourself enough to reason independently and not reach ridiculous conclusions. Getting out requires you to trust your peer group enough that if you do reach a conclusion that they find ridiculous, you won’t be ostracized and isolated. Escaping epistemic isolationism requires you to trust the hated other group to consist largely of reasonable people trying to make sense of their own experiences, which might be vastly different from your own.

The path to epistemic interconnection also requires you to be able to listen closely to what is said, which is different from what you think you’re hearing. This is much easier to do if you are calm and feel safe, and the other person feels the same way. When humans get stressed out, we tend to make decisions faster, at the cost of having them be lower quality. Perhaps that makes sense if you’re running from what might be a predator – but when the stress is caused by interpersonal relationship struggles, “speaking faster, but issuing lower quality sentences” is a recipe for disaster. Hence the requirement for trust. You can’t fix a problem at the application layer if you degrade the emotional connection between you and the other person; trust and empathy are like a solid Ethernet connection. Without them, you’ve got no chance.

If you can do these things, I promise you, the end result is very rewarding.   Not only will you find yourself understanding people who used to infuriate you, you’ll find that they, mysteriously, seem to understand you better. It’s almost as if there is something reciprocal at work there. 

This is also a great spot to bring up meditation. When I sit for long periods of time, what seems to happen is that concepts stop being ‘computed’ by my brain, and I’m able to pay more attention to bottom-up sensory signals, instead of ‘top-down’ concepts. When I focus on “my breath” long enough, I start to notice things like the feeling of wetness/dryness at a specific spot in my throat. Most of the time, I don’t notice this feeling at all.

Over the course of our lives, we have experiences, which we remember. Our brains use concepts to compress these experiences down, by finding commonalities and arranging these experiences into patterns.  Over time, we can pay more attention to the patterns than the experiences themselves.  I’d like to say I won’t give my wife “just another kiss” but the reality is it’s happened before and will happen again. I’d like to always give the kind of kisses for which words don’t exist.  But that takes time and energy.  Those are always limited.

The older you get, the more patterns you accumulate, and the more you are in real danger of overfitting. A pattern that worked for 30 years may have petered out – but you’ll keep seeing it unless you’re truly open to changing and rewriting some of these patterns. If you aren’t putting in effort, you’ll likely cling to old dead patterns long after they’ve stopped being accurate.

I suspect that anyone who lived forever would have to perpetually have the mind of a wise child – open, curious, aware, humble, and forever eager to learn more. Otherwise, I think you’d become, by default, more and more bitter, as ever more possibilities get written off as absurd, and doors to better worlds close, one at a time, as you collect experiences and bundle them into beliefs which are rarely taken apart or thrown away. When it happens to just one person, that’s sad. If it happens to a bunch of primates arranged in a computational network that controls the world economy and nuclear weapons – well, I suppose that might be a little more scary.
