“Treat other people the way you’d like to be treated.” It sounds simple, right? How hard could it be?
And yet most of us who say we believe this rule fail to abide by it. I know I do. Why? Most people would say something like, “because we are human, and no one is perfect.” I think this is perfectly true, but it doesn’t tell us anything useful, like how we could do better. If we start digging into what this means, exactly, we come to realize that the reason we can’t follow the golden rule is that we humans aren’t good enough at computing.
Rather than say “because no one is perfect”, I think it might be equally true – and more informative – to say, “no model is perfect.” A lot of what makes us human stems from the fact that we operate as computers, and computing isn’t something we’re very good at. Understanding ourselves as computers allows us to rewrite our own code. Ultimately, that kind of recursive self-improvement is what this blog is all about. I think that understanding myself as a computer has allowed me to become more fully human.
The Golden Rule Assumes Unrealistic Computational Capacity
If you want to treat others the way you’d like to be treated, you first need to have an answer to the question, “How would I like to be treated in this scenario?” This is, effectively, a simulation problem. You’d need to simulate yourself in that situation, and then run through possible courses of treatment to see how they’d make you feel. Doing this correctly would require a lot of computational capacity; probably more than you have access to.
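If you squint, the rule itself is pseudocode for a brute-force search over simulations. Here’s a toy sketch in Python. The scoring table is invented for illustration; the hard, expensive part is hidden inside how_would_i_feel:

```python
def how_would_i_feel(treatment):
    """Stand-in for the hard step: simulating yourself on the receiving
    end of a given treatment. The scores are invented for illustration;
    a real version would be a full model of your own mind in someone
    else's situation, which is exactly the capacity we lack."""
    scores = {"ignore": -2, "criticize": -5, "listen": 4, "help": 5}
    return scores.get(treatment, 0)

def golden_rule(candidate_treatments):
    """The golden rule as a brute-force search: simulate each candidate
    course of treatment and pick the one you'd most like to receive."""
    return max(candidate_treatments, key=how_would_i_feel)

print(golden_rule(["ignore", "criticize", "listen", "help"]))  # -> help
```

The search itself is trivial; everything interesting and expensive lives in the simulator.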
Ideally, we’d make every social decision, slowly and consciously, in light of all our principles. Doing this would require every human to be a supercomputer with infinite bandwidth between memory and present attention.
Most of us aren’t supercomputers. We need our extremely valuable, limited attention focused on everyday tasks. We rely largely on heuristics and intuition for our interactions with each other; our social interactions operate mostly on autopilot. I don’t think this is avoidable, because we aren’t supercomputers. We can’t compute the answers to social situations in real time, and so we rely heavily on reading from a cache. Questions about the accuracy of the cache are left for another day. For now, let’s focus on instances where the pop culture cache doesn’t provide an immediate answer to “how should I act here?”
What I do in these situations is wait for ideas of how I might act to pop up in my imagination. For each of those ideas, I try to imagine how that action would make the other person feel. Once I have an action that seems to line up with my general goals, and it seems like it wouldn’t upset the other person, I go for it. I didn’t develop this skill until my late twenties. I got surprisingly far without this ability, but I was also miserable. I’m a lot happier now that I’ve learned the important skill of simulating a remote human architecture. Thank you to all of the people in my life who patiently kept trying to explain empathy to me.
Limited Accuracy and Asymmetric Error Distribution
My experience tells me that imagining how someone else will react to my behavior is a computationally difficult problem. Sure, we have all kinds of instincts, but making perfectly accurate predictions is difficult, even for well-understood physics problems. For example, the trajectory needed to launch a basketball into a hoop is a very well-understood physics problem. Yet a quick search tells me that the top player in the NBA made just 90% of their free throws.
Are we to believe that reliably predicting human responses to our actions is easier than throwing a ball into a hoop? Or do you really think you’re better at predicting how you’re impacting others than the all-time top NBA player is at shooting free throws?
Perhaps figuring out how your behavior will cause others to feel isn’t nearly so complicated. In a lot of cases, you can get close using your intuition. I see this as being similar to how, in a lot of cases, a ‘spherical cow’ model can give you pretty decent results. Maybe 90% accuracy is enough. We could call this the “21 karat golden rule.” (Pure gold is 24 karats, so 21 karats is about 87.5% gold.)
However, once you accept that this empathic process has error, an obvious question is, “How is that error distributed?” What do you think happens, in aggregate, when you run this computational model to ask, “How will this other person feel if I do X?” Do you think your errors are uniformly distributed around the other person’s actual emotional response? Or is it more likely that, when you run this simulation, your errors will tend to be self-serving?
In other words, is it possible that you’re treating other people only 90% as well as you’d like to be treated?
Your mental computation is not going to be perfectly correct. If you are shooting for treating others to a certain standard, you need to anticipate the error rate in your computation of what behavior reaches that standard. Therefore, if you want to achieve the golden rule in practice, what you need to do internally is try to treat people better than you expect to be treated. This way, the error in your empathic process won’t cause your output to fall below your desired emotional-reciprocity SLO.
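A minimal simulation makes the argument concrete. The bias and noise numbers below are invented for illustration; the point is only that a self-servingly biased estimator with no margin lands below the target, while a small margin compensates:

```python
import random

def received_kindness(target, margin, bias=-0.1, noise=0.05):
    """How kindly the other person actually experiences your behavior.
    Your empathic simulation has error: a self-serving bias (you
    overestimate how well you're treating people) plus ordinary noise.
    Both numbers are invented for illustration."""
    intended = target + margin
    return intended + random.gauss(bias, noise)

target = 1.0  # "exactly as well as I'd like to be treated"
trials = 100_000

for margin in (0.0, 0.1):
    avg = sum(received_kindness(target, margin) for _ in range(trials)) / trials
    print(f"margin={margin:.1f}: average received treatment = {avg:.3f}")

# margin=0.0 lands around 0.90 -- the 21 karat golden rule.
# margin=0.1 compensates for the biased error and hits the target.
```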
Imagine there’s a noisy channel between you and every other person in the world. That channel is the world itself. We can communicate reliably over noisy channels. It requires redundancy. We have to repeat ourselves. The world is a noisy channel, and if we want to transmit our care and concern for each other, we need to overdo it a bit. In order to overcome the noise in the channel, we should be kinder than we think is reasonable. Otherwise, our intentions and concern will be lost in the noise.
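In coding-theory terms, that’s a repetition code: transmit the message more times than strictly necessary, and let the receiver take a majority vote. A toy sketch, assuming a made-up 20% chance of any single transmission getting garbled:

```python
import random

def noisy_channel(bit, flip_prob=0.2):
    """A binary symmetric channel: each transmission flips with some probability."""
    return bit ^ (random.random() < flip_prob)

def send_with_redundancy(bit, repeats):
    """Repeat the message; the receiver takes a majority vote."""
    received = [noisy_channel(bit) for _ in range(repeats)]
    return sum(received) * 2 > repeats

trials = 10_000
for repeats in (1, 3, 9):
    ok = sum(send_with_redundancy(1, repeats) for _ in range(trials))
    print(f"{repeats} transmission(s): {ok / trials:.1%} of messages arrived intact")
```

Saying it once gets through about 80% of the time; saying it nine times gets you close to 99%. Overdoing it is how reliability is built.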
Until we start thinking of this whole process in terms of computing, it’s easy to let all kinds of assumptions sneak in, or to give the situation up as a lost cause. Saying “no one is perfect” is great if we use it to forgive ourselves and each other. If we use that truth as an excuse to stop trying to do better, then the world stops getting better. Seeing yourself as a process you can debug and improve takes away the shame of failure, without removing the desire to improve.
The advice “Treat people the way you’d like to be treated” assumes that you know how to compute the mapping from your treatment to their emotional state, accurately, in real time. I’ve found that lots of people unconsciously make these kinds of assumptions about computational complexity, and they are just straight up wrong. For more in this vein, there’s a great paper by Scott Aaronson called “Why Philosophers Should Care About Computational Complexity.” Check out the part about the Omniscience Problem, in particular.
Now, maybe people haven’t talked about this idea so far because even treating people 90% as well as you’d like to be treated is better than a lot of us manage. Maybe this is like health advice: if most people just did what they already know is good for them (eat more vegetables and less junk food, sit less and exercise more), society’s health would be way better. I’m all for incremental improvement. I certainly don’t think everyone is trying to follow the golden rule. At the same time, I think it’s important to have your ideals correctly defined. I think there are a lot of intelligent adults out there who aren’t trying to be kinder than they expect others to be, perhaps because they haven’t considered this idea.
Why This Accuracy Gap Matters
The gap between 90% and 100% accuracy matters because our expectations are modified continually by our experiences. If everyone continually treats people with only some rough fraction of the goodness they’d like to receive themselves, then everyone’s expected model of decent behavior starts to decay over time. How we want to be treated is affected by how other people treat us.
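A back-of-the-envelope model shows the direction of the drift. The 90% figure and the recalibration rate below are assumptions for illustration, not measurements:

```python
# Toy model: everyone treats others at 90% of the standard they currently
# expect, and expectations recalibrate toward the treatment actually
# received. Both the 90% and the recalibration rate are assumed.
expectation = 1.0
recalibration_rate = 0.5

for generation in range(1, 21):
    treatment = 0.9 * expectation
    expectation += recalibration_rate * (treatment - expectation)
    if generation % 5 == 0:
        print(f"after {generation} rounds: expected decency = {expectation:.3f}")
```

Each round compounds the shortfall, so the expected standard of decency decays geometrically rather than holding steady.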
The ‘limited accuracy’ problem, coupled with the fact that our expectations are modified by our experiences and vice versa, leads to a situation where our behavior can degrade over time. If you work on distributed computing, you see this sort of thing all the time. If you live only in the land of the narratives humans tell each other, this kind of thing sounds strange, perhaps almost alien. So let’s talk in human terms.
If you’ve ever done any kind of personal growth through a reflective practice, you’ve probably realized that you are not always kind to yourself. One of the benefits of meditation is learning how to stop mentally hitting yourself. You sit down to just watch your breath, and your mind wanders. At first, when your mind wanders, you might get annoyed at yourself for having your mind wander, and then maybe you notice that now you’re upset at yourself, which causes you to be further annoyed at yourself for wandering. Meditation helps cut off this unpleasant feedback loop.
I noticed once that the way I’d treat people online, if I was really pissed off or annoyed, was the same dismissive, angry way I’d react to myself when I failed or did something I’d rather not have done. I think a lot of us treat ourselves in ways that we’d never treat someone else, unless that person is a stranger on the internet with the wrong beliefs.
We are constantly learning from the world, and there’s a two-way mirroring process going on. Our minds are reflecting the world, and the world reflects the contents of our minds. I am convinced that continually striving to improve the contents of our own minds – making them more accurate, and more kind – is how we can collectively improve the world. If we want a just world, I suspect it can only be inhabited by a large number of people who are intentionally kind to themselves, and even kinder to others.
When I was in Boy Scouts, we had a rule that said we had to leave the campground cleaner than we found it. This is a very different kind of rule than saying “leave the campground as clean as you found it.”
At the golden rule campground, everyone who leaves tries to make sure the campground is no messier than it was when they arrived. Everyone makes some errors, of course, and over time these errors pile up in the form of paper cups, toilet paper, garbage bags, used matches, and other bits of noise. After a few years, the golden rule campground will become a disgusting garbage pit, even if each departing group of campers makes a concerted effort to return the campground to a state no messier than the one they found it in.
On Boy Scout camping trips, we’d all link hands before we left, and walk across the entire campsite. Everyone looked down and picked up whatever trash was in their immediate path. In computational geometry, we would call this a sweep line algorithm. I think this kind of deliberate process, fully exploring a space to remove any defects or problems, is the only way we could have a just society.
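In distributed-systems terms, that hand-linked walk is an anti-entropy pass: periodically reconcile the whole state against a source of truth, instead of trusting event-by-event updates alone. A toy sketch, with a made-up 2% event-loss rate:

```python
import random

truth, replica = {}, {}

for step in range(1, 10_001):
    key, value = random.randrange(100), step
    truth[key] = value
    if random.random() > 0.02:      # the 'reactive' path: apply each event as it
        replica[key] = value        # arrives, but assume 2% are silently dropped
    if step % 1000 == 0:
        stale = sum(truth[k] != replica.get(k) for k in truth)
        print(f"step {step}: {stale} stale entries before the sweep")
        replica = dict(truth)       # the sweep: walk the whole space and repair it
```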
If you want to maintain the state of a distributed system, a ‘reactive’ framework that processes each event is usually not enough, because it always makes errors. These errors pile up over time, like garbage in a campground, or false beliefs and prejudices in the meat-brain of a primate. So what’s the solution?
The Golden Codec
I think we should replace the golden rule with a more sophisticated algorithm which says:
- At every interaction, treat people slightly better than you want to be treated.
- Periodically, review all of your recent interactions, and ask how well they adhered to this standard.
Think of this like a video codec that periodically transmits keyframes, and also transmits deltas. Maybe I’ll call this the Golden Codec. I think a lot of us have the sense that we don’t always act according to the principles and desires we have for ourselves. I see this as a communication problem: there’s limited bandwidth into our conscious attention, and much of it is used up handling the present moment. Not only are we communicating with others, using the world as a noisy medium; we are also communicating with ourselves, over time, using the world as the medium.
What we need is a better codec between the timeless ideals and memories we’ve accumulated, and the real-time processor that’s handling situations as they occur. Ideally, the me that has time to think, contemplate, and peacefully consider objective Truth and all of the human experience could be communicating in real time to the me that has to manage the challenges of day to day life as a human. Ideally, I’d have infinite attentional bandwidth from the source of these ideals, to the places where they would be most helpful.
Unfortunately, I’ve got a primate browser running on a mammal operating system, on top of a lizard BIOS. Of course I’m sometimes an asshole to people on the internet and in real life. That lizard BIOS modulates attentional throughput in response to threats that it wildly overpredicts, in order to keep me safe. I can gradually train the lizard, the mammal, and the primate – reconfiguring the BIOS, the operating system, and the browser – but that change takes time. I can’t do it in real time.
The best we, beings of limited computational capacity, can manage is to compress and transmit that vital information lossily, and then periodically spend time reviewing our own behavior to see how well it lines up against our principles. Like a video codec that periodically sends an accurate keyframe, and then sends a bunch of deltas. You need the deltas for it to work in real time; you need the keyframes to prevent dropped deltas from accumulating over time.
Transmitting every frame in its entirety takes up too much bandwidth to be feasible. If we had infinite bandwidth, that’s what we’d do. Sadly, we live in a world of scarcity where ‘Doing the right thing’ often means making a tradeoff between two things, both of which we value dearly.
Transmitting only the differences between each frame means that lost packets will cause the entire picture to degrade in a hurry. From this perspective, the golden rule is a delta-only codec. It tells us what to do on a moment-by-moment basis, but does nothing about global signal loss.
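Here’s a toy version of the two strategies, using a stream of numbers as a stand-in for video frames and a made-up 5% packet-loss rate. A delta-only stream accumulates damage from every lost packet; keyframes cap how long any error can persist:

```python
import random

def encode(frames, keyframe_interval):
    """Emit a full keyframe every `keyframe_interval` frames, deltas otherwise."""
    packets, prev = [], 0
    for i, frame in enumerate(frames):
        if i % keyframe_interval == 0:
            packets.append(("key", frame))
        else:
            packets.append(("delta", frame - prev))
        prev = frame
    return packets

def decode(packets, drop_prob=0.05):
    """Rebuild the stream; a lost packet leaves us rendering stale state."""
    out, current = [], 0
    for kind, payload in packets:
        if random.random() >= drop_prob:  # packet arrived
            current = payload if kind == "key" else current + payload
        out.append(current)
    return out

frames = list(range(1, 1001))  # a fake 'video': each frame is just a number
for interval in (len(frames) + 1, 30):  # delta-only vs. keyframes every 30 frames
    random.seed(0)  # same channel noise for both runs
    wrong = sum(a != b for a, b in zip(frames, decode(encode(frames, interval))))
    label = "delta-only" if interval > len(frames) else f"keyframes every {interval}"
    print(f"{label}: {wrong} of {len(frames)} frames corrupted")
```

The delta-only stream never recovers from a dropped packet; with periodic keyframes, the damage from any drop lasts at most until the next full frame arrives.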
If I try to picture a world that’s fair and just, I can’t get that picture to remain coherent in my imagination if the justice comes from incentives, stories, laws, or even a religion. I can only get that picture to cohere and remain stable in my imagination if that just society is full of people who are continually reflecting on their own behavior and asking “did I live up, fully, to the standard that I want to live up to?”
I want to live in a just world. I think it’s worth thinking about utopia. I think we can build closer and closer approximations of a utopian society, and this is something we should strive for. I also think this just, fair world requires a bunch of computational technology that I’ll talk about in the next post, because this one is already long enough. I’ll leave you with my favorite speech by Martin Luther King, Jr. He argues, essentially, that society is in a local maximum that isn’t the global maximum. So if things in the world seem choppy and kind of scary right now, perhaps we’re on the way down from this mountain top, on our way to some place better.