Moral Algorithms

Why Justice Requires Computation

I’d like to be able to say for certain that I own zero slaves. Of course, I don’t own slaves outright, in the de jure sense. There is no person who works for me without pay, and I don’t have any title or deed to such a person. But I’m not interested in merely following the law, and I assume you’re not either. I’m guessing that, like me, you’d rather not be using slave labor at all, even if you don’t own the slave directly.

I may not own any slaves, but I do own shares of an index fund, which makes me an indirect owner of hundreds of different corporations. If any of those corporations use slaves anywhere in their supply chains, then I’d say I’m morally guilty of owning a fractional slave.

Making a purchase means interacting with millions of humans.

Living in the modern world means interacting with extremely large numbers of people on a regular basis. Every time I make a single purchase, I am simultaneously benefiting from, and rewarding, the behavior of millions of people I don’t know.  How can I ensure that my personal interaction with each of these millions of people is something I consider just and fair?

Some people might argue that the price system alone is sufficient to ensure this. That might be true if we lived in a world where work was voluntary, nobody feared for their survival, and nobody was enslaved. But that isn’t the world we live in. Slavery anywhere, in anyone’s supply chain, is connected to the global economy everywhere. And it isn’t just slavery: environmental degradation and animal abuse play a role here, too. We’re all benefiting from horribly unjust things, done out of our sight, far away from us. We don’t have to think about this fact, unless we want to. The injustice benefiting us might be a tiny fraction of what’s going on in the world, but it’s still there. Being a modern human in a wealthy, developed country is like eating a bowl of ice cream with just a tiny bit of shit in it.

To be fair, there probably is some shit in your actual ice cream already. It’s a tiny amount, small enough that you can ignore it from a health perspective. But doing the right thing is a different kind of problem from staying healthy. Doing what’s right requires a level of rigor that would be absurd if we were merely dealing with germs. I can’t imagine saying there is some acceptable, non-zero quantity of slavery that you’d be OK with in the world. If you oppose slavery as a gross violation of human rights, you want the value to be zero, not “statistically small enough that I can sleep at night.”

We’d like to ensure that our interactions with the world aren’t promoting evil causes, such as slavery. How could we possibly do this, given the massive number of people we interact with, directly and indirectly? What would a solution look like?

Simple Moral Algorithms

My process for buying eggs is as follows:

  1. Look for eggs that are labeled “pasture raised.”
  2. Buy the cheapest of these eggs.

I choose to buy pasture-raised eggs because I believe these eggs come from hens with the best living conditions. My egg-buying process is a moral algorithm. It’s an algorithm: a filter, then a sort. And it’s informed by morals: personal beliefs about right and wrong.
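
To make the “filter, then sort” structure concrete, here’s a minimal sketch in Python. The Egg record and the shelf data are hypothetical stand-ins for whatever information a real store would expose.

    from dataclasses import dataclass

    @dataclass
    class Egg:
        brand: str
        price: float          # dollars per dozen
        pasture_raised: bool  # read off the carton's label

    def choose_eggs(eggs: list[Egg]) -> Egg | None:
        """The whole moral algorithm: one filter, one sort."""
        acceptable = [e for e in eggs if e.pasture_raised]  # the filter
        if not acceptable:
            return None  # refuse to buy rather than violate the constraint
        return min(acceptable, key=lambda e: e.price)       # the sort

    shelf = [
        Egg("CheapCo", 2.49, pasture_raised=False),
        Egg("HappyHen", 6.99, pasture_raised=True),
        Egg("GreenField", 5.49, pasture_raised=True),
    ]
    print(choose_eggs(shelf))  # -> the GreenField eggs: cheapest acceptable option

Note that the algorithm can return nothing at all: if no eggs meet the constraint, it walks away, and that refusal is itself a moral choice.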

I don’t know anyone else using terms like “moral algorithms” to describe this process. Computer science is, at its core, the study of how to work with information. When we make personal choices and evaluate tradeoffs, what we are doing is computation. The fact that we evaluate the tradeoffs using a brain made of meat is irrelevant; our moral choices are still acts of computation, and therefore need to be designed, debugged, and reasoned about carefully. Once we see this fact, we can use our knowledge of computer science to improve our ability to make choices in line with our values. That’s what this blog is all about.

A lot of people try to do what’s right. Most people, I think. Yet many awful problems remain in the world. The problems remain not because good people don’t want to make them go away, but because we lack the tools that would let us do so. I can’t imagine a better world coming about through the selective application of violence (i.e., laws), or its cousins, social pressure and guilt. The only way I can imagine a problem-free world happening is if people stopped giving any money to people doing bad things.

When you buy any product, you encourage the complex set of human behaviors that created it. When I buy pasture-raised eggs, I reward the companies that produce them, and thus encourage that behavior. When I buy eggs that are cheaper but aren’t labeled “pasture raised,” I’m rewarding those companies, and encouraging that behavior instead. Using my algorithm lets me make my egg purchases in a way that lines up just a little better with my personal values. Buying the cheapest pasture-raised eggs I can find encourages egg producers to keep their costs low while still treating animals well.

Of course, my system isn’t nearly enough. I don’t have information about all kinds of behavior on the part of the egg producer, and thus I have no idea whether I’m encouraging:

  • Slavery, or poor working conditions
  • Bribery, or lobbying for favors and benefits
  • Environmental damage

My algorithm reduces the amount of shit in my ice cream, but there’s still a non-zero amount present. It’s progress, but it’s not enough. I wish I could make every purchase with a system like this, but far more powerful. I want a system powerful enough to make information about all externalities visible alongside prices, so that I can automatically buy the cheapest product that’s consistent with my moral beliefs, as sketched below.
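
Here’s one hypothetical shape such a system might take, extending the egg sketch above. The externality fields (labor_score, animal_welfare, emissions_kg_co2) are invented for illustration; real labels would need real data behind them, which is a separate problem.

    from dataclasses import dataclass

    @dataclass
    class Product:
        name: str
        price: float
        labor_score: int         # 0 = slavery in the chain, 10 = well-paid voluntary work
        animal_welfare: int      # 0 = factory farming, 10 = pasture raised
        emissions_kg_co2: float  # estimated emissions per unit

    def meets_my_values(p: Product) -> bool:
        """One person's moral beliefs, expressed as code. Yours would differ."""
        return (p.labor_score >= 7
                and p.animal_welfare >= 8
                and p.emissions_kg_co2 <= 2.0)

    def buy(products: list[Product]) -> Product | None:
        """The cheapest product consistent with the constraints, or nothing."""
        acceptable = [p for p in products if meets_my_values(p)]
        return min(acceptable, key=lambda p: p.price) if acceptable else None

The structure is the same filter-and-sort as the egg algorithm; the only thing that has changed is the number of dimensions being filtered on.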

Information Flows Carrying Human Values

Viewing human beings as computers means that we can view all human situations in terms of information flows. We can view my purchase of the morally produced eggs as a flow of information from me to the producer of those eggs. That flow of information says, “Yes, I will buy eggs that are priced higher, as long as they are labeled as pasture raised.” It would be awesome if the other egg producers also received that signal, telling them, “This customer chose not to purchase your eggs, because they are unwilling to buy any eggs that aren’t pasture raised, no matter how cheap.”

At present, we just don’t have enough bandwidth on the information flows that convey human values between human beings. That’s one explanation for why the world is so unjust. Yeah, some people are evil and some people don’t care, but I don’t think that’s a sufficient explanation. We shout at each other about these values on Twitter, and sometimes laws are passed. That isn’t nearly enough dataflow to enable the outcomes most of us want.

Here’s how I imagine a more just world, one that relies upon the heavy use of computing to increase the bandwidth of the dataflows carrying human values.

In a more just future, everything for sale has labels that say:

  • The working conditions and compensation of the employees who produced the product
  • The conditions of any animals used in the creation of the product
  • How much environmental damage was caused in the creation of the product

Everyone who makes purchases does so through an AI agent that understands their moral preferences, as expressed in code. Whenever people make purchases, the producers they buy from are notified about the consumers’ moral algorithms, as are the producers of the products they passed over. Manufacturers would get these signals and realize that their products are being passed over in favor of competitors who pay employees better, pollute less, and treat animals better.
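
To make the signal concrete, here’s a hypothetical sketch of the message a purchasing agent might send to each producer it passed over, saying exactly which constraint their product failed. The data and constraint names are invented; products are plain dicts so the example stands alone.

    def rejection_signals(products, chosen_name, constraints):
        """For each passed-over product, report which moral constraints it
        failed, so its producer learns why it lost the sale."""
        signals = []
        for p in products:
            if p["name"] == chosen_name:
                continue  # no rejection signal for the product we bought
            failed = [c for c, test in constraints.items() if not test(p)]
            signals.append({
                "producer": p["name"],
                "reason": failed if failed else ["price"],  # passed morals, lost on price
            })
        return signals

    # One constraint for the egg example; a real agent would carry many.
    constraints = {"animal_welfare": lambda p: p["pasture_raised"]}

    shelf = [
        {"name": "CheapCo", "price": 2.49, "pasture_raised": False},
        {"name": "HappyHen", "price": 6.99, "pasture_raised": True},
        {"name": "GreenField", "price": 5.49, "pasture_raised": True},
    ]
    print(rejection_signals(shelf, "GreenField", constraints))
    # [{'producer': 'CheapCo', 'reason': ['animal_welfare']},
    #  {'producer': 'HappyHen', 'reason': ['price']}]

CheapCo learns it lost the sale on animal welfare; HappyHen learns it met the moral bar but lost on price. Both signals are actionable in a way that a silently missing sale is not.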

Human values would be transmitted along fiber optic cables, and through the air, via a medium other than just prices.  The transmission of these values, coupled with economic incentives, would reshape human activity to better line up with what humans truly value. “I didn’t buy your product because I think you have shitty moral values” would certainly register on a spreadsheet somewhere, even inside EvilCo headquarters.

The Power, and Responsibility, of Consumer Choice

Thinking of myself as a machine means giving up the right to tell myself that I’m not harming others, just because I haven’t acted with malice or ill intent. Being careless around big machines is usually a bad idea, and the global economy is a massive machine. If you put your hand into a running car engine, you might lose a finger. If you stick your hand into the global economy, you might pull out some cheap clothing, and cause a statistical fraction of a person to lose their finger, far out of your sight.

Of course, we’d all like to say that we want products produced by handsomely paid employees working with happy, smiling animals, in a pristine, Eden-like environment that resuscitates extinct species as a byproduct of its operations. This is the primate way of thinking: telling ourselves a story that makes us feel good and allows us to ignore the inputs that don’t fit the narrative.

The reality we find ourselves in is one of constant tradeoffs. In general, we have to choose among a number of things, all of which are desirable to us. The moment we deal with tradeoffs across a large number of variables, a computer is a far more useful tool than a meat brain.

The savvy reader might ask, “Ok, but where do those labels come from? What makes us think those are accurate? Wouldn’t producers just lie about how their animals were treated, or how much pollution they created?”   And a savvy author might reply… Well, shit, I don’t know what a savvy author would do. I’ve never been all that savvy. What I’m gonna do is say, “Yeah! You’re right! Let’s talk about that next time.”

4 thoughts on “Moral Algorithms”

  1. These points are well worth considering. The global economy is indeed a massive, complex and dangerous machine; and we can discover all the implications of our actions only with great difficulty, if at all.

    However, I think you underestimate the ability of egg producers to discover that demand for pasture-raised eggs has increased relative to other production methods. If consumers prefer one kind of egg to another, the egg producers will figure that out pretty quickly. The problem in that instance would not be solved by better availability of relevant information, but by persons availing themselves of the information that is available and deciding to change their behavior. I suspect that problem must be solved by changing the minds of consumers.

    I see the broader problem of detecting and eliminating indirect dependence upon unethical activity as much more difficult to solve. If I deal ethically with all my suppliers and customers (and they deal ethically with me, at least in our direct interactions), that would not excuse my dealing with someone who I know deals unethically with others as a systematic part of their business. If I take steps to avoid such second order ethical violations, I may still fall foul of third or fourth order violations. The complexity, depth and breadth of the global economy makes this problem intractable at the individual level. Unlike the case of the eggs, where it seems easy to discover and avoid the first and second order unethical factors, the highly interconnected nature of the global economy makes the general case unsolvable for particular persons.

    Can we even solve it on the systemic level? Can we persuade all of humanity to unanimously agree on ethical principles? Would it be ethical for us to force those who disagree to comply with our system?

    What goal do we pursue by seeking to purify the production chain? Do we wish to actually eliminate specific evils, or just to separate ourselves from them? If no one buys products from North Korea, do we expect that to have a positive effect on the treatment of workers there, or do we just wish to keep our own hands clean, while making no difference in the lives of the victims?

    Morality can’t obligate us to do the impossible. Is it really possible for us to separate ourselves from the sins of the world?

    Slaves were recently sold openly in Libya. This perhaps implies we must boycott any goods from Libya. Must we also boycott those who deal directly with Libyans? Or indirectly? How many nodes have to separate us from Libya in the network of transactions to purify our dealings of Libya’s taint?

    1. Thanks for your thoughtful comment! I apologize for the delayed response; I didn’t have notifications on, and I plan to turn them on thanks to questions like this.

      The complexity, depth and breadth of the global economy makes this problem intractable at the individual level.

      I think this is almost certainly true of the world in 1970. I doubt it will be true in a few decades. I’m maybe 40% confident that it’s true today. One of the main points I want to make in this blog is that questions of “complexity, depth and breadth” are only intractable for individual persons given a specific amount of computational resources. As computers get faster and cheaper, and as humans have more wattage available to them, we should expect individual persons, aided by computers, to be able to solve far more complex problems. We should expect societies with more computing power to support more diversity and individuality, because supporting lots of variation between individuals carries computational costs that such societies can afford to pay.

      Can we even solve it on the systemic level? Can we persuade all of humanity to unanimously agree on ethical principles? Would it be ethical for us to force those who disagree to comply with our system?

      The way I imagine this playing out (if it did happen) is that you’d first have some basic standard which was an extremely low floor. Something like: “You must pay anyone who works for you. You must _always_ give people the freedom to stop working for you.” People who adhered to this standard would agree to trade only with each other. If enough people adhered to the standard, businesses would then choose to adhere to it in order to gain access to those markets.

      I don’t think you’d have to force anyone to adhere to this standard. The goal would be producers complying voluntarily, because they want to gain access to that market. The market would have to be entirely self-sufficient; if it’s tiny, its members would need a very austere standard of living. If we assume that most people in democratic countries believe both of these are valid ethical norms, then I think it’s entirely plausible to imagine those countries trading only amongst each other.

      That said, I would be totally OK with stealing from people who own slaves, for example. I’d do it gladly, and feel no ethical qualms about doing so. I have no problem taking things from people who say “I should have the right to own slaves,” and especially from people who own slaves and claim they don’t.

      Do we wish to actually eliminate specific evils, or just to separate ourselves from them? If no one buys products from North Korea, do we expect that to have a positive effect on the treatment of workers there, or do we just wish to keep our own hands clean, while making no difference in the lives of the victims?

      If we have a policy of always welcoming refugees, the goal would be that people working in oppressed places have the ability to leave, possibly with our active assistance. If you couple this with a norm that says “anyone outside this standard, who refuses to comply, gets no protection from us, and there’s no problem with stealing from them,” then what I imagine (and I’m not sure this is what would happen) is a powerful economic forcing function that makes it economic suicide to go against the protocol.

      Morality can’t obligate us to do the impossible. Is it really possible for us to separate ourselves from the sins of the world?

      I totally agree that morality can’t obligate us to do the impossible. However, what is possible changes over time, and so, to me, this means that morality has to change over time as well. This is another key belief that inspired me to write this blog. The idea of “timeless moral ideals” is deep in a lot of people’s heads, and I don’t think it’s valid.

      I believe that moral standards need to advance with material prosperity. This also means we shouldn’t judge our ancestors for not having our moral standards. If you view morality as nothing more than that which keeps Moloch at bay, there’s no reason to believe that the ideal strategies should be timeless. A wealthier, more prosperous society will need a different set of rules than a poorer, agrarian one: in some ways more restrictive, in other ways more permissive.
