A Moral System From Scientific Rationality

Scientific Rationality can tell us how the world works – it’s predictive – but it hasn’t given us a model of what kind of choices we should make – it’s not prescriptive.  What would it look like if it were?

What would happen if all people who believed the scientific method can tell us how things work also believed that we had a specific kind of duty to the world?  What would it look like if those of us who adhere to the ‘faith’ of scientific rationality had a shared vision of The Good?

How many problems that we see in the world today stem from the fact that religion has long guided functioning civilizations, and that the most powerful civilization in the world – the transnational market ecosystem created by scientific rationality – does not share any belief in The Good beyond that which we can measure with money?

If you’re like one of the many people I know who grew up religious, and eventually abandoned the faith of your parents, you probably recognize that religions have done a lot of good for the world, despite asking us to believe in things without evidence.  

And yet we all know religions have done some awful things to the world as well – by compelling people to believe things that aren’t true, religions have held back lifesaving technologies. By giving people a license to perform awful deeds because they are convinced that these awful deeds are really good, religions have sometimes done great evil to the world.

What if we could have the inspiring acts of Christ without the crusades? The egalitarian commitment to justice of Islam without wars of conquest? The timeless wisdom and interconnectivity of Advaita Vedanta without the caste system? The abatement of suffering through ego-loss of Buddhism without losing the commitment to alleviate the suffering of others?

What if we could get the good parts from religion, without fiat decrees or placing absolute trust in things we can’t observe? What if we could get all the inspiration towards selflessness, and unconquerable faith in the persistence of good, without relying on ideas we can’t derive mathematically from a simple set of shared axioms that line up with our experience? 

What if we could get people to believe that acts of service towards others aren’t for suckers, that true goodness does convey a kind of material strength, and that the arc of the moral universe bends towards justice, for the same reason that trees grow towards the sun?  What if we could inspire people to hope for good outcomes, not because we think these are automatically fated, or caused by something beyond physics, but because we believe in a cause-and-effect mechanism which is so obviously good that it inspires us to become part of that mechanism?

This is possible, as long as we approach morality through the lens of computation.

I think a computational perspective shows there is one very reasonable answer to the question “What is The Good, and how can I advance it?”. This answer ends up being a deep argument in favor of maximizing the diversity of moral agents. Because this computational conception of The Good is easily stated, and yet computationally intractable, the only way to realize it would be to have a very large number of distinct models – whether in human beings, corporations, religions, nonprofits, or systems of government. All of these models would be attempting to advance the Good, and all failing to do so, because modelling error is inevitable. Yet, through their concerted effort, these models can cooperatively nudge the world state along a path that satisfies our basic human intuitions about what is Good.

Instead of giving us confidence that our individual actions are correct, this computational perspective on morality posits that yes, it is meaningful to talk about what is or isn’t good, but no, it’s impossible for us to be certain that any particular action is or isn’t good. We should expect to disagree, not because we don’t know what good is, but because we can’t compute it with perfect accuracy.

Far from agreement, we should expect intense disagreement over what is good in any difficult situation. This model says we should believe that this intense disagreement is, itself, a good thing. That feature simultaneously prevents moral license and encourages us to see differences in human beliefs not as flaws but as beautiful manifestations of a complex, robust ecosystem of beliefs. 

What is this model of the good? It’s simple:

The scientific, rational measure of Good is the action which maximizes possible futures.

Why Is This Good?

At this point, you’re probably asking: why is this The Good? How could I make such an argument?

This is where computation comes in. When we see that a moral belief system is ultimately an ordering on possible states of the world, the question “is there a correct moral system?” can be translated as:

“Is there an ordering on world-states which arises naturally, and aligns with human intuitions about what is good?”

If such an ordering arises naturally, the question “why this ordering and not some other” can be answered with the maxim of simplicity: the simplest possible hypothesis which explains all of the observations is likely to be the correct one.    A conceptual model of moral reality should compress moral observations, in the same way that a model of physical reality compresses physical observations.

Mathematical Simplicity

In order to describe the “Maximizing Possible Futures” moral system, we don’t need to talk about human beings, animals, suffering, or anything involving physics at all. This same model can be used to navigate the configuration space of checkers or chess, and allows an AI to “learn” all kinds of complex behavior, despite encoding nothing beyond the idea that there are possible ways the world could be, that each way leads to other ways – and then selecting actions which lead to more possible ways, rather than fewer.
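Stated slightly more formally – with notation that is mine, not anything standard – the rule picks the action whose successor state leaves the largest set of reachable states open:

```latex
% A sketch formalization with invented notation: s is the current
% world-state, A(s) the set of actions available in it, f(s, a) the
% successor state after action a, and R_T(s) the set of states
% reachable from s within horizon T.
a^{*} \;=\; \arg\max_{a \in A(s)} \bigl|\, R_T\bigl(f(s, a)\bigr) \,\bigr|
```

Everything hard hides inside R_T: enumerating it exactly is intractable for any interesting system, which is where the moral uncertainty discussed below comes from.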

For a model that can both guide human behavior and help you win a game of chess, that’s about as simple as you can get.
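Here is a minimal Python sketch of that selection rule. The game interface (`legal_moves`, `apply_move`) is hypothetical; any deterministic game with hashable states would do, and on chess this collapses to a mobility-style heuristic – moves that keep more positions reachable score higher:

```python
# A minimal sketch of the "maximize possible futures" selection rule.
# The game interface (legal_moves, apply_move) is hypothetical: plug in
# any deterministic game whose states are hashable.

def count_futures(state, legal_moves, apply_move, depth):
    """Count the distinct states reachable from `state` within `depth` plies."""
    seen = {state}
    frontier = {state}
    for _ in range(depth):
        # Expand one ply, keeping only states we have not seen before.
        frontier = {apply_move(s, m) for s in frontier for m in legal_moves(s)} - seen
        seen |= frontier
    return len(seen)

def choose_move(state, legal_moves, apply_move, depth=3):
    """Pick the move whose successor state keeps the most futures open."""
    return max(
        legal_moves(state),
        key=lambda m: count_futures(apply_move(state, m),
                                    legal_moves, apply_move, depth),
    )
```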

Human Alignment

I don’t know anyone who would fault me for being morally inspired upon seeing a set of adopted siblings move in together to take care of the woman who adopted them out of bad childhood circumstances. These young people are making personal sacrifices to take care of a woman who sacrificed for them and gave them a chance at a good future. When I learned their story, I was inspired to help them, and reminded that even small things I can do for the people around me can make a big difference for the future.

By taking care of these children who were not in good circumstances, this Mama helped open the door to possibilities in the future – possibilities which included these young people being functional, happy, and well. Her sacrifice in the past made it possible that they could become doctors or therapists or social workers. She reduced the possibilities open to her – by limiting her time and attention, she could no longer continue as a professional poker player, or keep living the lifestyle she once lived – but she opened the door to many more possibilities in the future.

This act of service on the part of the mother is obviously morally inspiring, and it fits the basic metric: this woman increased the number of possible futures accessible to the children she took care of, the adults they became, and the communities which benefited from the addition of more functional adults.

Here are some more examples of how this simple idea lines up remarkably well with our human intuitions about what is good:

  • Encouraging Learning and Pursuit of the Truth

    An agent that works to maximize the possible future states accessible to its environment must be able to model that environment accurately, and to assess accurately how its choices affect the future states accessible to it.
  • Encouraging Environmental and Self-Preservation

    If an agent itself were to be destroyed, any future states involving that agent in any non-destroyed state would become inaccessible to the system. This leads to a natural sense of self-preservation: any agent that operates under an ethical framework of maximizing possible futures would take steps to keep itself alive – unless those steps would lead to a reduction in future possibilities for the environment in which the agent operates (a toy demonstration of this follows the list below).

    Maximizing the future states available to a system leads to an agent that tries to protect the system it inhabits. An agent operating on this principle might take action that would lead the agent to be destroyed, if it believed that this action would lead, ultimately, to the preservation of the system the agent inhabits.
  • Encouraging Health and Growth

    A healthy person can have more experiences and do more things than a sick person. An educated person can have more experiences than an uneducated person.  A strong person can engage in more activities than a weak person.

    Any agent whose ethical framework is “maximizing possible futures” of the system it inhabits will act in a way that encourages the health, growth, and development of all other agents in the system it’s part of.

    Of course, if this agent observes a virus that is destroying lots of healthy species, the agent would not encourage the growth of the virus, because the total number of states accessible to the system decreases as the virus grows.
  • Discouraging Unnecessary Death and Destruction

    When a building is destroyed, all states of the environment in which children are playing in that building are no longer accessible.

    When a person dies, all states of the environment where that person is alive and well are no longer accessible. An agent attempting to maximize the possible future states accessible to the system it inhabits would attempt to limit death and destruction wherever possible.

    If an old aspect of the system has lots of resources which could lead to more possible configurations elsewhere, then an agent following the ‘maximize possible futures’ ethical model might allow the old aspect to die, or even take action to destroy it: the agent would evaluate the future states accessible to the world, and move towards a world of more possibility.  This system avoids pointless death but doesn’t cling to life merely for the sake of avoiding death.
  • Encouraging Exploration

    A species that lives on one planet has far fewer accessible states than a species which lives on hundreds of planets. An agent that operates to maximize possible states accessible to the system it inhabits would work to encourage exploration and settling of distant environments.
  • Ego-Loss

    An agent which operates to maximize the possible future states of the system it inhabits only values itself to the extent that it sees itself as able to effect changes to the system that maximize the future states accessible to it.

    In other words, an agent that operates to maximize the possible future states of the system is an agent that operates without an ego. When this agent encounters another agent with the same ethical system, they are very likely to agree on the best course of action. When they disagree, it will be due to differing models of the likely outcomes of choices – not to something like core values.
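As promised in the self-preservation item above, here is a toy Python demonstration. The grid size, hazard cell, and horizon are all invented for illustration; the point is that the hazard gets avoided without any explicit survival term:

```python
# A toy grid world showing self-preservation emerging from the
# "maximize possible futures" rule with no explicit survival term.
# The grid size, hazard cell, and horizon are invented for illustration.

SIZE = 4          # a 4x4 grid of positions
HAZARD = (1, 1)   # stepping here "destroys" the agent: no further moves

def moves(pos):
    """Legal successor positions; a destroyed agent has none."""
    if pos == HAZARD:
        return []
    x, y = pos
    steps = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for a, b in steps if 0 <= a < SIZE and 0 <= b < SIZE]

def count_futures(pos, depth):
    """Count the distinct positions reachable from `pos` within `depth` steps."""
    seen, frontier = {pos}, {pos}
    for _ in range(depth):
        frontier = {q for p in frontier for q in moves(p)} - seen
        seen |= frontier
    return len(seen)

# From (1, 0) the agent can step into the hazard or away from it.
for step in moves((1, 0)):
    print(step, count_futures(step, depth=3))
# The hazard cell scores 1 (only itself); every other step scores
# higher, so an argmax agent avoids destruction without being told to.
```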

So what does it say about capitalism, abortion, or peeing in the shower?

OK, so we’ve argued that this moral system seems roughly lined up with things we already know are good anyway; it’s going to tell us to do things like learn about the world, fight diseases, reduce poverty and avoid having children die. Those are all pretty obvious things that we basically agree on anyhow.

If there were a system of morality that came naturally from scientific rationality, couldn’t we use it to finally end these arguments about capitalism and socialism? Couldn’t we use this system to tell us exactly how to live, and then do so without all the arguing and questioning?

And the answer here is, no.  This moral system sees moral arguments as a feature, not a bug.

Instead of agreeing with popular culture, which often communicates some mixture of “there is no such thing as moral truth” and “it may exist, but we can’t know what it is”, this moral system says “yes, the Good exists; yes, we know what it is (maximizing possible futures); but it’s totally impossible to compute how a specific choice will influence the future, so we can never have moral certainty. At best we can have a bunch of different heuristics, and we can never be certain which one is correct in a given situation.”

The answer this system gives to the trolley problem is “I can’t possibly tell you because you haven’t given me nearly enough information to decide, or time in which to process that information” – which makes way more sense than giving a certain, specific answer to an absurd hypothetical scenario.    

Some people might throw up their hands at this. Why bother with a consensus on moral reality if we can’t agree on anything other than that it exists at all? What does this view of The Good give us, if it can’t answer any questions? I’ll get to that, but first, let’s poke some holes in the idea that we actually want moral certainty.

Moral Uncertainty as Feature, Not Bug

When you think of all the different religions of the world – Christianity, Islam, Buddhism, Hinduism, Judaism, Sikhism, Jainism, Baháʼí and others – what do you think of? Do you think of a bunch of different ways of being wrong? Or do you see organisms in a memetic ecosystem, each of which contributes to the diversity of human thinking?

If we could reduce morality to some simple set of rules, and then insist that we now knew what was right and wrong, it could create a moral license to steamroll any notion of ideological diversity. Certainty in a specific moral system which gives concrete answers to any question we ask could easily lead to the argument that all of these traditions, many of them thousands of years old, should be destroyed.

Whenever we are asking ourselves questions about right and wrong, maybe the absence of certainty is a feature, not a bug.  Almost everything we know about economics and biology tells us that tradeoffs are the norm. Nothing is free, everything has a cost, and every strength requires some other form of weakness.  

If we had a single set of rules that told us what was good and what was bad, it would strip all of us of the responsibility and agency of making choices.  It could easily lead to dictatorships committed to advancing “The Good”. It has already done so, many times throughout history. If there is to be a new answer to this question “What does it mean to live a good life”, shouldn’t it deviate in some fundamental way from all previous attempts to answer this question?  

I don’t think the world needs another religion that’s accessible to the masses. There are many of those already, and they serve a fine purpose for people who don’t insist on asking “why” until we get down to a set of irreducible tautologies or plausibly axiomatic truths such as “all right angles are equal.”

I think what the world could desperately use now is an ideology, a faith, for those of us who insist on asking “why”, on continually asking for evidence. We already have a deity; it is Truth with a capital T. The world needs a faith for intelligent, rational adults who want, maybe not to be told which direction they should go in, but to have the moral confidence that comes from knowing there is moral truth, that service and sacrifice are not just a sucker’s game, and that Moloch really does eat its own tail.

Faith in the Triumph of Good Over Evil

The main benefit of believing that Maximizing Possible Futures constitutes The Good is that it looks like a natural extension of evolutionary fitness.

We might view DNA as consisting of hard-coded strategies that allow individual organisms to maximize their own possible futures. In other words, each organism is like a tiny attempt to ‘model’ the Good. The combined interactions of all these organisms, under intense selective pressure, gave rise to an evolutionary arms race which produced intelligent primates – because intelligence is good.

We primates are still mammals, and we engage, like almost all other mammals, in the mammalian strategy, which consists primarily of love. We care for each other, we bond with non-relatives, and we endure intense costs for our offspring – not because we are weak or foolish, but because intense pressure and violence and competition gave rise to bear mamas and playful papas: a good born out of even the most hostile world. The end result of these arms races and violent interactions was a dominant species of primates, whose dominance not only enables us to care for our young, it requires us to do so.

Now, because of the abilities generated by that arms race, we have the capacity to model the good in software, not in hardware. The tree of knowledge of good and evil then fits as the perfect metaphor: we weren’t “cast out of paradise for disobeying”; we evolved out of a state of ignorance, and learned how to desire and fear.

Suffering isn’t possible for a being that can’t model the good in software, because suffering requires hypothesizing some alternative reality and experiencing pain at its absence, or at the possibility of its presence. 

Simpler organisms, lacking the ability to model the future, can experience pain but they can’t suffer. Neither can they make choices to consciously prioritize the good of another over their own. Those organisms follow the words written in a four-letter alphabet, in every cell of their bodies. A lamprey’s choices are the result of hardcoded values in DNA; primates with language can rewrite their own utility functions, to value a life of service to a deity over the pursuit of their own material good. And those primates who did rewrite their utility functions in terms of love and service ended up forming networks that dominated the ones who didn’t. Even if it was merely love and service to the network (aka empire) –  it was still a form of subjugation and service of the self to an other.

There is a strength that comes from love and service and patience and compassion. This strength not only endures evil, it eventually overcomes evil. Just as plants grow towards the sun, every organism seeks, in its own way, the good, because its own self-perpetuation requires at least some collinear traversal of the universal vector that maximizes possible futures.   Evil is just shortsightedness; Moloch is a greedy algorithm that destroys the hardware it runs on. Moloch is a feature too, in that regard; it  accelerates the destruction of systems which aren’t trying to maximize the Good. 

Even a virus which kills human beings has an incentive to keep its hosts alive, lest it run out of them – yet another victim of Moloch.

There is another force which operates on the world, another distributed computational spirit, which nurtures, cares for, and advances the causes of goodness and life. This force, although sometimes hard to see, has transformed the world from one of brutal Hobbesian competition at the start of the sedentary shift into one of peaceful, voluntary interaction, because that scales better. Love is a universal human network protocol with no limit to its branching factor; dominance networks become brittle, top-heavy, and eventually collapse. This force of Goodness acts through every mother who sacrifices for her young, and every father who patiently teaches his children to say ‘please’. Every time any of us cares for a person we aren’t related to, this force of Goodness acts through us, and we give it more life. No, it’s not all-powerful. It may even disappear at times – but it appears to be embedded in our source code: not just our DNA, but the very laws of physics which give rise to life at all.


Slightly Counter-intuitive Penultimate Paragraph

There’s one counter-intuitive piece in all of this, which still dovetails nicely with ancient religious rhetoric about judgement: increasing possibilities doesn’t mean things default to getting better. We always need to be trying to make things better, or else they’ll get worse. The curse of dimensionality applies here: most possibilities are awful. What defines us, as organisms, is that our physiology monitors the world and selects a tiny subset from among many possible futures. The bigger and more advanced we grow, the more ways we have to deviate from what is likely a narrow path, leading from the past towards the future. Simpler organisms do this entirely in hardware; we are blessed – as a result of millions of years of brutal competition – with the capacity to do so in software. Each of us is an AI that can modify its own utility function, by choosing a story to tell itself about the world its primate hardware inhabits. If we are wise, the story makes it easier for us to make choices which align with the path that maximizes possible futures – if not for our world or our community, then at the very least for the community of people who share our name, address, social security number, and bank account credentials; our future selves are weak proxies for the rest of society.

If we deviate too far from that path – say, by polluting our minds, our bodies, and the air – then the number of possible states of the world could shrink dramatically. We may disagree over whether to call it God’s judgement, or global warming, or civilizational collapse, but as our ability to manipulate the world increases, so does the need for sound judgement about what is Good. This is not an argument that more possibilities means more goodness – it’s an argument that paths which lead to more possibilities are better than paths which don’t. The difference is subtle. It’s like the argument that saving is better than not saving, paired with a reminder that having more money likely makes it harder to focus on and discern the Good; more power means that errors are both more likely (the configuration space is larger) and more destructive (we can move through it much faster).

Of course, if such a collapse does happen, it’s likely that something will survive it, and perhaps that something will do a better job of understanding and navigating its world in the future.
