Is there an objective notion of right and wrong, good and bad? People have spent a lot of time arguing over this question. Whenever an argument goes on for a long time without a conclusion, I find it useful to ask: “What would any possible answer even look like?”
In computing terminology, I would ask what the type of the answer is. I find that reasoning about data types can often help me come up with answers that are wrong in strange ways. These wrong answers can end up being illuminating because they help mark out the boundary between “wrong” and “not even wrong.” The “correct” answer is usually (but not always!) somewhere inside a space of “wrong” answers, further away from the “not even wrong” answers.
For example, if someone asks “how many shoes do you own?”, the correct answer has a numerical type. The answer “nine hundred trillion” is wrong, but it’s at least got the correct type. Computer code might take this answer and estimate that I have a shoe closet roughly the size of Mount Everest. If someone responds instead by listing governors of US states with an “r” in their names, then their answer is “not even wrong” – the compiler would give a type error.
Telling someone they have a shoe closet the size of Mount Everest is, of course, still wrong – but after centuries of (often extremely violent) arguments about shoe cabinets, it might be nice to find at least some kind of consensus on the problem. Agreeing that “the type should be numeric” might then lead to further agreements, such as “the type should be a positive integer”, which rules out further weird answers, such as “-5”, or the limit of a convergent geometric series, which is numeric but not necessarily an integer.
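To make the “type of the answer” idea concrete, here’s a minimal TypeScript sketch; the function names and the closet-size constant are invented purely for illustration:

```typescript
// The question "how many shoes do you own?" expects a numeric answer.
// 0.01 cubic meters per shoe is an invented constant, purely for illustration.
function closetVolumeM3(shoeCount: number): number {
  return shoeCount * 0.01;
}

closetVolumeM3(900_000_000_000_000); // "nine hundred trillion": wrong, but well-typed

// closetVolumeM3(["a governor", "another governor"]);
// ^ "not even wrong" – the compiler rejects this with a type error

// Agreeing further on "a whole, non-negative number" can't be expressed in the
// number type itself, so it becomes a runtime check:
function assertShoeCount(n: number): number {
  if (!Number.isInteger(n) || n < 0) {
    throw new Error("shoe counts must be non-negative integers");
  }
  return n;
}
```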
Value Systems are Orderings
So, what type does a value system have? I think the right answer is a mathematical object called an ordering. There are different types of orderings, but what they all have in common is that they are made up of a bunch of different binary comparisons, each one saying “A < B”, for some values of A and B.
A system that values Beanie Babies might tell us one is preferable to another. A system that values League of Legends players can tell us which player is better than another. It might give the wrong answer, of course – nobody said ordering systems had to be correct. I could choose to value League of Legends players by “the number of r’s in their name”, but if I act on that value system, I’ll be surprised when “Rrrrrrrrrrrudy the Rrrrrrocket Rrrracoon” loses to a gamer named “sbug”.
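That kind of value system fits in a few lines of code. Here’s a minimal TypeScript sketch of the “count the r’s” ordering; everything beyond the example above is invented for illustration:

```typescript
// A comparator is a bundle of binary comparisons: a negative return value
// means "a ranks ahead of b."
type Comparator<T> = (a: T, b: T) => number;

interface Player {
  name: string;
}

// Count the r's in a name, case-insensitively.
const countRs = (s: string): number => (s.match(/r/gi) ?? []).length;

// A deliberately bad value system: more r's means a better player.
const byRCount: Comparator<Player> = (a, b) => countRs(b.name) - countRs(a.name);

const players: Player[] = [
  { name: "sbug" },
  { name: "Rrrrrrrrrrrudy the Rrrrrrocket Rrrracoon" },
];

// Acting on the value system: Rrrrrrrrrrrudy sorts first, actual skill notwithstanding.
players.sort(byRCount);
```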
It doesn’t matter whether the system compares “league of legends players” or “flavors of ice cream” or “systems of human organization”: Any system that values one option over another is an encoded value system. An ordering on ice cream flavors, implemented as part of a machine that ends up making some choice about which flavor of ice cream to serve, is an extremely narrow form of a value system.
The ice-cream valuing system might not have an answer to the trolley problem, but that doesn’t matter due to the composability of machines. If we connected the “ice cream flavor choosing machine” to the lever that controls the trolley tracks, then life and death decisions would be made on the basis of some machine’s encoded preference for black raspberry chocolate chip over peanut butter fudge ripple.
Of course it would be silly to connect a system for valuing ice cream to a machine that has profound human consequences, right?
That’s basically what we’ve done with social media. Binary comparisons are being used trillions of times per day, by machines, to deliver information to you in a way that almost certainly does not align with your values. What kind of levers have these machines pulled in your mind?
Sort Functions are Operationalized Value Systems
When you log into a Big Social Network to peruse stories on the feed, you’re consuming the output of a sort function. That sort function looks at all the possible stories the network could show you and applies an ordering to determine which stories you see, and in what order. That ordering is an encoded value function: it says some stories are better than others. If the Big Social Network wants to promote engagement, it will sort the stories in the order that the Network’s models believe is most likely to get you to respond to them.
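As a sketch, the whole mechanism is just a sort with a particular comparator. Here’s a minimal TypeScript version; the model name and its punctuation-counting heuristic are invented stand-ins for the real thing, which would be a trained model over your click, like, and reply history:

```typescript
interface Story {
  id: string;
  postedAt: Date;
  text: string;
}

// Hypothetical stand-in for the network's engagement model.
function predictEngagement(story: Story): number {
  // Invented heuristic: pretend exclamation and question marks drive reactions.
  return (story.text.match(/[!?]/g) ?? []).length;
}

// The feed is the output of a sort function whose comparator encodes the
// network's value system: "more predicted engagement is better."
function rankFeed(candidates: Story[]): Story[] {
  // Score once up front so the comparator is cheap and consistent.
  const scores = new Map<string, number>();
  for (const s of candidates) {
    scores.set(s.id, predictEngagement(s));
  }
  return [...candidates].sort((a, b) => scores.get(b.id)! - scores.get(a.id)!);
}
```

Swap the comparator and you swap the value system; nothing else about the machine needs to change.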
When a network implements this kind of feed ranking, there is usually strong negative pushback from its users, which the owners of the network ignore because their data shows ‘people are engaging more.’ We see two things right away:

1. People do not like having a value system pushed on them.
2. Showing posts chronologically means the network isn’t inserting any value function of its own.

The order of a chronological feed is decided by the users making the posts. We might see this value system as organic – the natural confluence of the choices of many different people. I can’t control when my friends post, but I can choose whom I follow and when I get on the site, and those choices determine what I see. Algorithmic feeds insert a hidden step, reducing the importance of when I get online, when my friends choose to post, and even whom I follow – some of my friends might be judged ‘more useful to me’ than others, based on metrics like how often I click a ‘like’ indicator or reply.
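For contrast, a chronological feed is also a sort, just with a comparator that takes no view on content. A minimal sketch, reusing the hypothetical Story type from above:

```typescript
// Assume candidates holds everything the people I follow have posted recently.
declare const candidates: Story[];

// A chronological feed is still a sort, but its comparator encodes no opinion
// about content: newest first, determined only by when each person chose to post.
const chronological = (a: Story, b: Story): number =>
  b.postedAt.getTime() - a.postedAt.getTime();

const feed = [...candidates].sort(chronological);
```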
I believe the large-scale negative responses to the algorithmic sorting of news feeds are an intuitive rejection of having some foreign entity impose its own value system.
Instead of taking people’s anger seriously, the big social networks ignore what people say and focus on what people do. Seeing engagement numbers go up, and time spent on the site go up, defenders of algorithmic feeds inside the companies say, “Look, people aren’t great at knowing what they want. All our data show engagement going up. That’s why we’ll stick with the feeds.” These arguments are easier to win inside the companies, since the feeds lead to more revenue. More engagement leads to more ad revenue, and the same models that deliver posts people are likely to comment on can be used to deliver ads that make people act. The purpose of all advertising is to inspire action – not silent contemplation, empathy, or reflection. Which could the world use more of? Action, or contemplation and reflection?
This willingness to ignore people who say they are upset, because they keep on clicking the “provide stimulus” button, is hubris on a scale made possible only by abstract mathematics and gigawatts of computing power. Only very intelligent people could make a mistake this catastrophic. Less intelligent people would say, “Hmm, maybe making lots of people angry is a bad idea.” Intelligent people have developed methods for questioning any conclusion they don’t want to reach – especially conclusions that highlight a disconnect between basic human ethics and the desire to make more money.
Social Networks enforce their own values using us as tools. When we ask “whose values are encoded in the ordering of these posts,” for networks with chronological feeds, the answer is “the value that each individual person in my network placed upon deciding what to post, and when.” For a network with a sorted feed, the answer is closer to “the values that the network itself has decided are correct.” Some of my friends’ values are still encoded, but in a filtered, sorted version: enough like my friends to pass for them, but different enough to no longer reflect their values.
Suppose someone followed you over the course of your entire life, picked out the moments most likely to provoke a crowd to respond to your actions, and compiled these into a film. Would you consider that film a fair representation of you? Of course not! Yet if you follow a thousand people on social media, just one month of calendar time adds up to a thousand person-months of posting – over eighty years of human time. The stories you see from that month are no more representative of those people’s values than the most reaction-driving highlights of your life would be representative of yours.
I found old family videos in my parents’ house. They were heartwarming. Because the camera was a pain to get out, the old VHS tapes captured only great memories. Lowering the cost of transmitting information (from the present to the present, or the past to the future) doesn’t make the world better if the available processing power can’t keep up with the volume of data.
In the distant past, we had more brain cycles than we had data to integrate with those cycles, and so our evolved hardware tells us to highly value new data. Now we’ve got too much data, just like we have too much sugar, for the same reason: the values encoded in our hardware don’t match our current environment.
Watching the family videos makes childhood seem like a highlight reel – birthday, birthday, Christmas, birthday, birthday, Easter, birthday, birthday, etc.
How different would those videos look if a computer trawled through some video archive of my childhood and surfaced the segments most likely to provoke immediate responses? Whatever makes a child cry or a parent squirm with discomfort rises to the top. Whatever makes you stop and think, frozen in your tracks by the power of what is being revealed to you – something which jars you, leaves you speechless because of what it means – that stuff goes straight to the garbage can.
Would you want to watch such a video of your own life? When you consume algorithmic social media, you are consuming the equivalent, drawn from lifetimes’ worth of different people’s experiences. What makes you react is highlighted. What makes you stop and think is dropped.
We all know the story of the blind men arguing over the elephant – each can only feel a piece, and those pieces all feel different. What happens if the elephant does its damnedest to show each man the piece that will make him react the most? I would expect the men to become angry and shout at each other even more loudly, until they eventually kill each other, or the elephant. If those men are wise, they’ll kill the elephant before they kill each other. They may not agree on much, but “whatever this thing is, it’s making us all fight each other” is something they can agree on.
Self Sovereignty Requires Owning the Sort Function
The bitcoin community has a saying: “not your keys, not your bitcoin.” I have adopted my own internal version: “not your comparison operator, not your values.” I have deleted my accounts on several Big Social Networks, and immediately felt calmer, more grounded, and more relaxed. Why wouldn’t this be the case?
Why would the revolution be televised when it can be live-tweeted and reblogged in a manner that massively boosts advertising revenue for the organizations that post solemnly about how concerned they are over the chaos they profit from? What does it matter that no major newspaper endorsed Trump explicitly, when they all endorsed him implicitly with loads of coverage during the primary? Businesses – especially data-driven Fortune 500 powerhouses – are good at getting what they want. They will happily warp societies in order to make their desired outcomes happen. A chaotic, turbulent society where people continuously react in anger is intensely profitable for companies that monetize attention.
What better way to monetize outrage about racism than to cause more racism to happen? [Edit: I don’t mean to imply that this was done intentionally, but rather that it likely occurs as a natural side effect of promoting content that leads people to react.] What would happen to the attention economy if everyone became intensely mindful and kind to each other at all times?
By sorting to the top the content most likely to generate responses and reactions, social networks with algorithmic feeds train their users not to think. We are all taking part in a massive distributed experiment in social psychology, where people are rewarded with social status for communicating in a manner that provokes an immediate response, and penalized (via the withholding of social status) for communicating in a manner that is likely to provoke deliberation and thought, but not an immediate response.
If you aren’t running the sorting function yourself, it’s someone else’s values that are being enacted on your mind. I wrote earlier about how ego seems to manifest as a result of internal sorting functions; I think there are similar effects happening in society at large, with similar levels of destruction. Sitting mindfully aware, and relaxing the sort function being applied to my inputs, makes me feel more present and at peace. When I am calm, I am more capable of engaging constructively with everyone in my life. Algorithmic Feeds are amplifying the narrative fire of Ego that Buddhists believe is the root of all suffering. We watch the world burn to upvotes, retweets, and animated frowny faces interspersed with targeted advertising. Witness the destructive ego of the global consciousness.
Computers make choices trillions of times each day in accordance with the value systems encoded in them, and those choices exert influence over the world, shaping it to match those value systems. Big tech and media companies spend hundreds of millions of dollars per year on PhD-level statisticians, machine learning experts, and expensive computing hardware, competing with each other in an algorithmic attention arms race. They are putting these enormous efforts into learning how to control your attention. Nothing personal, just business. Your own brain is becoming part of a computer network used by other people to make money.
This statement is at once obviously true to most people who program computers for a living, and sounds like downright mysticism to people who don’t. That’s been my general experience when talking about how people are computers – some people get it right away and think it’s obviously true, while others (typically those who don’t work with computers) think it’s ridiculous to compare the two. That barbell distribution in responses is what convinces me I need to keep writing about this topic. A truth which is simultaneously obvious to some people, and seemingly absurd to others, has got to have important consequences.