Why Justice Requires Computation
I’d like to be able to say for certain that I own zero slaves. Of course, I don’t own slaves outright, in the de jure sense: there is no person who works for me without pay, and I hold no title or deed to such a person. But I’m not interested in merely following the law, and I assume you’re not either. I’m guessing that, like me, you’d rather not be using slave labor at all, even if you don’t own the slave directly.
I may not own any slaves, but I do own shares of an index fund. Owning shares of the index fund means that I am the indirect owner of shares of hundreds of different corporations. If any of those corporations use slaves in any of their supply chains, then I’d say I’m morally guilty of owning a fractional slave.
Making a Purchase Means Interacting with Millions of Humans
Living in the modern world means interacting with extremely large numbers of people on a regular basis. Every time I make a single purchase, I am simultaneously benefiting from, and rewarding, the behavior of millions of people I don’t know. How can I ensure that my personal interaction with each of these millions of people is something I consider just and fair?
Some people might argue that the price system alone is sufficient to ensure this. I believe that might be true if we lived in a world where work was voluntary, nobody was in fear for their survival, and nobody was enslaved. But that isn’t the world we live in. Slavery in anyone’s supply chain, anywhere, becomes connected to the global economy everywhere. And it isn’t just slavery – environmental degradation and animal abuse play a role here, too. We’re all benefiting from horribly unjust things, done out of our sight, far away from us. We don’t have to think about this fact, unless we want to. The injustice benefiting us might be a tiny fraction of what’s going on in the world, but it’s still there. Being a modern human in a wealthy, developed country is like eating a bowl of ice cream with just a tiny bit of shit in it.
To be fair, there probably is some literal shit in your ice cream already: a tiny amount, small enough that you can ignore it from a health perspective. But doing the right thing is a different kind of problem from staying healthy. Doing what’s right requires a level of rigor that would be absurd if we were merely dealing with germs. I can’t imagine naming some acceptable, non-zero quantity of slavery that you’d be OK with in the world. If you oppose slavery as a gross violation of human rights, you want the value to be zero, not “statistically small enough that I can sleep at night.”
We’d like to ensure that our interactions with the world aren’t promoting evil causes, such as slavery. How could we possibly do this, given the massive number of people we interact with, directly and indirectly? What would a solution look like?
Simple Moral Algorithms
My process for buying eggs is as follows:
- Look for eggs that are labeled “pasture raised.”
- Buy the cheapest of these eggs.
I choose to buy pasture raised eggs because I believe the hens that lay them have the best living conditions. My egg-buying process is a moral algorithm. It’s an algorithm – a filter, then a sort – informed by morals: personal beliefs about right and wrong.
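The whole process fits in a few lines of code. This is just a sketch: the product records, prices, and label strings below are invented for illustration.

```python
def buy_eggs(products):
    """The egg-buying moral algorithm: a filter, then a sort."""
    # Filter: keep only cartons carrying the "pasture raised" label.
    acceptable = [p for p in products if "pasture raised" in p["labels"]]
    # Sort: among acceptable cartons, take the cheapest (None if none qualify).
    return min(acceptable, key=lambda p: p["price"], default=None)

# A hypothetical store shelf.
store_shelf = [
    {"name": "BudgetEggs", "price": 2.49, "labels": ["cage free"]},
    {"name": "HappyHens", "price": 5.99, "labels": ["pasture raised", "organic"]},
    {"name": "SunnyFarm", "price": 4.79, "labels": ["pasture raised"]},
]

choice = buy_eggs(store_shelf)  # SunnyFarm: the cheapest pasture-raised option
```

The moral belief lives entirely in the filter; the sort just expresses the ordinary preference for a lower price among morally equivalent options.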
I don’t know anyone else using terms like “moral algorithms” to describe this process. Computer Science is, at its core, the study of how to work with information. When we make personal choices and evaluate tradeoffs, what we are doing is computation. The fact that we evaluate the tradeoffs using a brain made of meat is irrelevant; our moral choices are still acts of computation, and therefore need to be debugged, designed, and reasoned about carefully. Once we see this fact, we can use our knowledge of computer science to improve our ability to make choices in line with our values. That’s what this blog is all about.
A lot of people try to do what’s right. Most people, I think. Yet many awful problems remain in the world. The problems remain, not because good people don’t want to make them go away, but because we lack the tools that would let us do so. I can’t imagine that a better world would come about through the selective application of violence (i.e., laws), or their cousins, social pressure and guilt. The only way I can imagine a problem-free world happening is if people stopped giving any money to people doing bad things.
When you buy any product, you encourage the complex set of human behaviors that created it. When I buy pasture raised eggs, I reward the companies that produce them, and thus encourage that behavior. When I buy eggs that are cheaper, but aren’t labeled as ‘pasture raised’, I’m rewarding those companies, and encouraging that behavior instead. Using my algorithm lets me make my egg purchases in a way that lines up just a little better with my personal values. Buying the cheapest pasture raised eggs I can find encourages egg producers to keep their prices low while still treating animals well.
Of course, my system isn’t nearly enough. I don’t have information about all kinds of behavior on the part of the egg producer, and thus I have no idea whether I’m encouraging:
- Slavery, or poor working conditions
- Bribery, or lobbying for favors and benefits
- Environmental damage
My algorithm reduces the amount of shit in my ice cream, but there’s still a non-zero amount present. It’s progress, but it’s not enough. I wish I could make every purchase with a system like this, but far more powerful. I want a system powerful enough to make information about all externalities visible alongside prices, so that I can automatically buy the cheapest product that’s consistent with my moral beliefs.
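Here is a minimal sketch of what that more powerful system might look like, assuming products carried machine-readable externality labels. Every field name, rating scale, and threshold below is invented for illustration; the point is that the same filter-and-sort pattern generalizes to any externality made visible alongside the price.

```python
# Each moral belief becomes a constraint on one labeled externality.
MORAL_CONSTRAINTS = {
    "forced_labor": lambda v: v == 0,     # zero tolerance
    "animal_welfare": lambda v: v >= 4,   # on a hypothetical 1-5 rating scale
    "co2_kg": lambda v: v <= 10.0,        # emissions ceiling per unit
}

def acceptable(product):
    """A product passes only if every labeled externality meets its constraint."""
    return all(check(product["labels"][key])
               for key, check in MORAL_CONSTRAINTS.items())

def choose(products):
    """The cheapest product consistent with the buyer's moral beliefs, if any."""
    candidates = [p for p in products if acceptable(p)]
    return min(candidates, key=lambda p: p["price"], default=None)

# A hypothetical labeled catalog.
catalog = [
    {"name": "CheapCo", "price": 1.99,
     "labels": {"forced_labor": 0, "animal_welfare": 2, "co2_kg": 4.0}},
    {"name": "FairFarm", "price": 3.49,
     "labels": {"forced_labor": 0, "animal_welfare": 5, "co2_kg": 6.0}},
    {"name": "GreenGiant", "price": 4.99,
     "labels": {"forced_labor": 0, "animal_welfare": 4, "co2_kg": 2.0}},
]

pick = choose(catalog)  # FairFarm: cheapest product passing every constraint
```

Adding a new moral belief is just adding one entry to the constraint table; the purchasing logic never changes.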
Information Flows Carrying Human Values
Viewing human beings as computers means that we can view all human situations in terms of information flows. We can view my purchase of the ‘morally produced eggs’ as a flow of information from me to the producer of the eggs. That flow of information says, “Yes, I will buy eggs that are priced higher, as long as they are labeled as pasture raised.” It would be awesome if the other egg producers also received that signal, telling them, “This customer chose not to purchase your eggs, because they are unwilling to buy any eggs that aren’t pasture raised, no matter how cheap.”
At present, we just don’t have enough bandwidth on the information flows that carry human values between human beings. That’s one explanation for why the world is so unjust. Yes, some people are evil and some people don’t care, but I don’t think that’s a sufficient explanation. We shout at each other about our values on Twitter, and sometimes laws get passed. I don’t think that’s nearly enough dataflow to enable the outcomes most of us want.
Here’s how I imagine a more just world, which relies upon the heavy use of computing to increase the bandwidth of dataflows carrying human values.
In a more just future, everything for sale has labels that say:
- The working conditions and compensation of the employees who produced the product
- The conditions of any animals used in the creation of the product
- How much environmental damage was caused in the creation of the product
Everyone who makes purchases does so through the use of an AI agent that understands their moral preferences, as expressed in code. Whenever people make purchases, the producers they buy from are notified about the consumer’s moral algorithms, as are the creators of the products that they passed over. Manufacturers would be able to get these signals and realize that their products are being passed over for products made by competitors who pay employees better, don’t pollute as much, and treat animals better.
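One way to picture that feedback dataflow as code: after a purchase, every producer, including the ones passed over, receives a signal naming the moral constraint their product failed, or noting that it passed the filter but lost on price. All field names, producer names, and the signal format here are invented for illustration.

```python
def first_failed_constraint(product, constraints):
    """Return the name of the first constraint the product violates, or None."""
    for name, check in constraints.items():
        if not check(product["labels"][name]):
            return name
    return None

def purchase_with_signals(products, constraints):
    """Buy the cheapest acceptable product; also produce per-producer signals."""
    acceptable = [p for p in products
                  if first_failed_constraint(p, constraints) is None]
    chosen = min(acceptable, key=lambda p: p["price"], default=None)
    signals = {}
    for p in products:
        if p is chosen:
            signals[p["producer"]] = "purchased"
        else:
            failed = first_failed_constraint(p, constraints)
            signals[p["producer"]] = (f"passed over: failed '{failed}'" if failed
                                      else "passed over: acceptable, but not cheapest")
    return chosen, signals

# A single moral constraint, applied to a hypothetical shelf.
constraints = {"pasture_raised": lambda v: v}
shelf = [
    {"producer": "BudgetBarn", "price": 3.00, "labels": {"pasture_raised": False}},
    {"producer": "HappyHens", "price": 6.00, "labels": {"pasture_raised": True}},
    {"producer": "SunnyFarm", "price": 5.00, "labels": {"pasture_raised": True}},
]
chosen, signals = purchase_with_signals(shelf, constraints)
# SunnyFarm is purchased; BudgetBarn learns which constraint it failed,
# and HappyHens learns it was acceptable but undercut on price.
```

The interesting signal is the one the passed-over producers receive: it tells BudgetBarn exactly which behavior cost it the sale, which is precisely the information the price system alone never transmits.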
Human values would be transmitted along fiber optic cables, and through the air, via a medium other than just prices. The transmission of these values, coupled with economic incentives, would reshape human activity to better line up with what humans truly value. “I didn’t buy your product because I think you have shitty moral values” would certainly register on a spreadsheet somewhere, even inside EvilCo headquarters.
The Power, and Responsibility, of Consumer Choice
Thinking of myself as a machine means giving up the right to tell myself that I’m not harming others, just because I haven’t acted with malice or ill intent. Being careless around big machines is usually a bad idea, and the global economy is a massive machine. If you put your hand into a running car engine, you might lose a finger. If you stick your hand into the global economy, you might pull out some cheap clothing, and cause a statistical fraction of a person to lose their finger, far out of your sight.
Of course, we’d all like to say that we want products produced by handsomely paid employees working with happy, smiling animals, in a pristine, Eden-like environment that resurrects extinct species as a byproduct of its operations. This is the primate way of thinking: telling ourselves a story that makes us feel good and allows us to ignore the inputs that don’t fit the narrative.
The reality we find ourselves in is one of constant tradeoffs. In general, we have to choose between a number of things, all of which are desirable to us. The moment we deal with tradeoffs across a large number of variables, a computer is a far more useful tool than a meat brain.
The savvy reader might ask, “Ok, but where do those labels come from? What makes us think those are accurate? Wouldn’t producers just lie about how their animals were treated, or how much pollution they created?” And a savvy author might reply… Well, shit, I don’t know what a savvy author would do. I’ve never been all that savvy. What I’m gonna do is say, “Yeah! You’re right! Let’s talk about that next time.”