Utilitarianism (selecting actions based upon how they are likely to impact the people of the world) sounds like a great idea. At first glance, it seems like the correct way to make ethical choices: so long as you can accurately tell what will harm or help people in general, of course you should take likely outcomes into consideration.
A problem with utilitarianism – one that I haven’t heard articulated – is that it is often completely computationally infeasible as a practical source of values.
Consider the act of arguing angrily with your spouse.
From the point of view of utilitarianism, in order to answer the question “should I argue angrily with my spouse right now?”, one would need to compute the likely outcomes of doing so versus not doing so, and see how those outcomes might harm or benefit the people around them.
Not only that, but we should be computing the different likely outcomes of the various things we might shout. Is it better to shout “you never care about the small things I do?” Or would it create more net benefit for the world to roll my eyes and sigh loudly? As I write these examples, my intuition screams “these are all terrible ideas.” Of course these are terrible ideas. It’s hard to imagine a person actually trying to weigh the pros and cons of approaches like this, but isn’t that exactly what a utilitarian should do?
Most of us, when faced with a ridiculous question like this, respond with the intuitive answer that it’s stupid to even ask the question: of course you shouldn’t select from either of these approaches. Those aren’t going to work. We don’t need to simulate the future in order to know this.
But remember, intuition is often wrong! Isn’t our goal as rationalists to learn to use system 2, the slow, deliberate way of thinking, in order to weed out the errors and biases of system 1? What if I am placed in a machine which analyzes my words and responds to my choices according to some bizarre internal heuristic, such that by swearing at my wife I might trigger the release of malaria vaccines for thousands of children? Only a utilitarian approach will help me do the right thing in that situation.
Or, perhaps less ridiculously: what if, by shouting just the right thing, I might cause my spouse to realize that they are in fact ignoring the small things I do for them, and some good might come of this?
The fact that analyzing the likely outcomes of shouting various angry things at my spouse strikes me as ridiculous is itself a clue that we are relying on computational shortcuts without realizing it. If it were possible to cheaply, accurately, and rapidly analyze the likely outcomes of saying these things, then perhaps it would make sense to do so.
The reality of being human is that we are intensely constrained in terms of our computational resources, and so we rely on shortcuts.
Deontology: A Computational Heuristic
A rule which says “do not yell at your spouse” can be seen as a computational heuristic which says “don’t even bother executing a search for the right thing to yell, because however you execute the search, you’ll end up computing that it’s not worth it to yell the thing.”
In other words, deontology – an ethical system based upon following rules – can be seen as a system that recognizes the limited computational processing power human beings have, and instead of asking them to run countless simulations of how their decisions might play out, asks them to memorize a set of “solutions” that, in practice, usually lead to positive outcomes.
The main problem with this approach is that the memorized heuristics might be wrong. And so we arrive at a trade-off: utilitarianism (when carried out with accurate models and infinite computing power) is more likely to get us the right outcome, and to handle corner cases correctly – but it is computationally expensive, and very often we have to react far faster than we can compute the best possible strategy.
Utilitarianism works great when we have a fixed, small set of choices to evaluate, good confidence that we can predict the likely outcomes of our actions, and the time and computational resources necessary to do so. There are plenty of situations where these conditions hold, but for most day-to-day human interactions, they don’t. That’s why deontology is so helpful – it’s a practical solution that consistently and cheaply provides usually-good solutions.
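To make that concrete, here’s a minimal sketch of what the utilitarian calculation looks like when those conditions actually hold. Every action, probability, and utility value here is invented purely for illustration – the point is just that the computation is trivial when the choice set is small and the numbers are knowable.

```python
# Toy expected-utility calculation over a small, fixed set of actions.
# All actions, probabilities, and utilities are made up for illustration.
actions = {
    "apologize":          [(0.8, +5), (0.2, -1)],  # (probability, net utility) pairs
    "raise issue calmly": [(0.6, +3), (0.4, -2)],
    "yell":               [(0.1, +1), (0.9, -8)],
}

def expected_utility(outcomes):
    """Probability-weighted sum of utilities over an action's possible outcomes."""
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # -> "apologize"; the math is cheap, getting the numbers right is the hard part
```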
Deontological rules act like a cache of the results of ethical computation. “Thou shalt not kill” might be seen as a heuristic which says “most of the time when you do this, the consequences end up net negative for everyone around you.” We see this trade-off all over the place in computing; video game developers often use approximate solutions, because they need to do lots of computation, fast. Instead of computing all the points of interaction between a complex 3D mesh and the environment, a game engine may approximate the complex mesh with something much simpler. This leads to results which are usually correct, and much cheaper to compute – at the cost of occasionally leaving a game character’s fingers protruding through walls.
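In code, the difference between re-deriving an answer and reusing a cached rule might look something like the sketch below. The rule table, the scores, and `simulate_outcomes` are all hypothetical stand-ins, there only to illustrate the caching idea.

```python
# Deontological rules as a cache: look up a precomputed verdict instead of
# re-running the expensive simulation every time. All values are hypothetical.

CACHED_RULES = {
    "yell at spouse": -10.0,    # "don't do it" -- computed once, reused forever
    "kill":           -1000.0,  # "thou shalt not kill"
}

def simulate_outcomes(action: str) -> float:
    """Stand-in for the slow, in-the-moment deliberation we can rarely afford."""
    return 0.0  # imagine minutes of careful modeling here

def evaluate(action: str) -> float:
    if action in CACHED_RULES:        # fast path: cheap, usually right, occasionally wrong
        return CACHED_RULES[action]
    return simulate_outcomes(action)  # slow path: exact in principle, rarely feasible in practice

print(evaluate("yell at spouse"))  # -> -10.0, no simulation needed
```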
The ethical equivalent of this shortcut would be a set of rules that normally produces good outcomes, but occasionally makes you act like a jerk to people you love, or causes harm to strangers who are far away. Due to the inevitable errors that moral rules (a.k.a. deontological heuristics) contain, we would still sometimes inadvertently harm people even if we always followed the rules. And yet, for a human, deontology still falls short of being practical. It’s easy to say “don’t yell at anyone, ever”; it can be much harder to put this rule into practice. I think most people are struggling in life, in part because most of us don’t do the things that we know are good for us. And this is where virtue ethics comes in.
Virtue Ethics: Improving the Hardware
If utilitarianism says “you should yell at your spouse when the benefits of doing so outweigh the harms”, deontology would say “do not ever yell at your spouse.”
At which point the human in me says “OK, but how?”
Virtue ethics attempts to solve this problem by focusing on hardware improvements.
How many of us know what would be good for us, yet have trouble operationalizing that value system? How many of us do things that hurt people we love because we did the cost-benefit analysis and it told us to go ahead? How many of us sometimes do things that hurt people we love because we don’t think there’s a problem with doing so?
How many of us yell at people we love despite knowing it’s not helpful and really not wanting to do so, because, in the moment, we experience a priority inversion between “a desire to produce long-term positive outcomes” and “anger, frustration, and pain”? When our short-term, system-1 impulses overpower system 2, the speech-selection network acts more like a weapon than medicine.
This is where virtue ethics comes in. If deontology reduces the cost of computing positive courses of action, virtue ethics decreases the cost (and increases the probability) of operationalizing these courses of action by changing our hardware.
To keep going with the same example, the thing which will prevent me from yelling at my spouse in an argument is patience.
I can cultivate patience at all times, whether or not I am in an argument with my spouse. When the argument does happen, if I have cultivated patience in the weeks leading up to it, I am less likely to yell at my spouse – not because I’ll do a cost-benefit calculation and it’ll come up negative, not because I will have cached the result of that computation and will know it’s a bad idea, but because the me that arrives at the argument will be fundamentally different on the hardware level.
Whenever we think a certain way, neuroplasticity changes our brains slightly, making that way of thinking easier and more automatic. Practicing patience whenever the opportunity arises makes it easier and more natural for me to be patient in moments of stress and anxiety.
In other words, virtue ethics is a hardware solution. Someone who is “more patient” isn’t someone with a different belief – it’s someone with different hardware.
Exercise to Improve Your Mental Hardware
A great way to make yourself happier in life is to exercise physically – strength conditioning is particularly effective. I’ve been practicing hanging from a pull-up bar for as long as I can. It’s hard. I often want to give up, to let go, because it’s difficult and it feels unpleasant. The practice of holding on when a part of me is screaming “give up!” transfers elsewhere. Enduring pain and difficulty in pursuit of a goal is essential for being happy in life.
I think happiness rests on a foundation of strength. When the movement of your body feels light and easy because you have lots of muscle mass, small disturbances bother you less. When you’ve had lots and lots of practice denying the voice inside your head that wants to give up, it’s easier to do so just one more time – whether that’s holding tight to a bar, or keeping your mouth shut when someone you love is saying things that make you feel hurt.
I lost 40 pounds using the keto diet. I feel much happier on a regular basis, in part because I’m no longer carrying around those extra 40 pounds. How would it make you feel to wear a 40-pound backpack all day? Changing my hardware – losing weight and building muscle – made it easier to stay calm in all situations.
When I shout at my wife, it’s because a short-term emotional instinct, system 1, which feels hurt, overcomes system 2, the rational mind, which knows this is a terrible idea. When I let go of the bar before the 55-second mark, it’s because the desire to give up has overcome the pile of evidence that says I’ve held on for 55 seconds before and can do so again. The more times I hold onto that bar for 55 seconds, the stronger that pile of Bayesian evidence gets – and the stronger my muscles get – and the easier it is to hold on for 56 seconds.
If you don’t feel like you’re in control of your life, it may be because your pile of evidence that you can change yourself just isn’t that large yet. That can change, starting with very tiny steps. Evidence that “I can change myself” accumulates exponentially, as long as you believe that it will and act on that belief. Start with tiny changes – even building a new habit of brushing your teeth puts evidence in that pile – and as you gain evidence, it becomes easier to add more.
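As a toy illustration of how that pile grows, you could model the belief “I can do this again” as a simple Bayesian update after each success. The prior and the framing are assumptions made up for this sketch, not a claim about how the brain actually tracks evidence.

```python
# Toy Bayesian model of accumulating "I can change myself" evidence:
# start from a weak Beta(1, 1) prior and update after each successful attempt.
prior_successes, prior_failures = 1, 1  # maximally uncertain starting point (assumption)

def belief_after(successes: int) -> float:
    """Posterior mean of 'I can do this' after that many successful attempts."""
    return (prior_successes + successes) / (prior_successes + successes + prior_failures)

for n in (0, 1, 5, 20, 100):
    print(n, round(belief_after(n), 3))
# 0 -> 0.5, 1 -> 0.667, 5 -> 0.857, 20 -> 0.955, 100 -> 0.99
```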
We have the image of an AI rapidly making itself more intelligent. What do you think that would look like, in practice? Would it be rewriting its software? Or modifying its hardware? Which takes more time and effort? Which one has a higher potential reward?
What do you think would improve your life more? Overcoming 10% of the biases in your brain and being 20% more accurate in predicting how scenarios you cannot control will unfold?
Or would it be a lower resting heart rate, the confidence that comes from having repeatedly overcome challenges that once felt impossible, and a massive collection of Bayesian evidence that informs you “I am capable of changing myself”?
We’re all computers, but we’re not all recursively improving ourselves. I think the reason for this is largely software – if we had cultural norms that told us that recursive self-improvement is possible and encouraged, we’d all be doing more of it. That’s partially why I write this – to tell you that, as a human being, the stronger I become, the happier and more at peace I feel. As a young man I was harmed by the stereotype of the nonathletic nerd. Perhaps you were, too? Whoever you are, I want you to feel happier and be better at accomplishing your goals, because that is what makes the world richer and more alive.
I don’t think many people believe that the goodness of the world is fundamentally constrained by computing power. And yet many people would agree that more patience and understanding would make the world better. Patience and understanding are just ways of talking about “hardware which can maintain priorities under stress”, and “data models that have predictive accuracy.”
Utilitarianism is the ethical system that would be used by an agent with infinite computational power attached to pristine models of reality and infinitely rich data sources. It’s only universally correct if you don’t understand computational complexity.
Deontology gives a practical way to get many of the benefits of utilitarianism, without paying the high cost. It’s a software solution. Virtue ethics gives a practical way to operationalize beliefs we already have, by changing our own hardware to make it easier to carry out our desired values.
At first glance, ethics might seem to have very little to do with computational trade-offs and improvements in hardware and software. But I see this pattern over and over – the computational nature of the human experience may be hidden precisely because it is everywhere, and it so strictly limits what we can and cannot do that most people use ‘common sense’ to avoid doing something like running an exhaustive search for the best possible thing to yell at their spouse.
And that’s why I feel compelled to keep writing about this topic 🙂
So, if we rewind this argument and play it back with “social norms” in place of “ethical systems”, I think it still flies; we might need to substitute “improving community norms” for “improving hardware”.
I also wonder if we can replay it again with “individual reproductive fitness” (or group fitness, in some sense) in place of utility maximisation.
“Improving hardware” still works at the community level, honestly: it means improving the hardware of the individual people in the community. Virtue ethics, as practiced by a community, would mean everyone in the community cultivating patience, strength, discipline, etc.
I’d have to think a bit more about the utility maximization bit.