Making Moral Realism Pay Rent

We construct ‘value maps’ of the world using our brains. If predictive processing is true, we effectively live inside these value maps, since we live inside the worlds our brains construct for us based upon our beliefs.

A question arises: is there any real territory to which these maps point? Moral realism says ‘yes’; moral anti-realism says ‘no’.

How could we know whether there is some reality? Can we make moral realism pay rent? YES!

We can make moral realism pay rent by asking: if value maps have some real territory, what should happen if I assume there is a moral reality and try to make my map reflect it as closely as possible?

Experiment 1: Attempting to Converge Our Own Maps

If there is a moral reality, and I act like there is, and make it my goal to cause my map to reflect that reality as accurately as possible, I should expect my moral map to eventually converge or stabilize, rather than just continually shift and change. If there is an external reality, and I continually test my map against it, I should expect to gain more information about the external reality, and this information should increase the fidelity of my moral map until it is sufficiently accurate to become more or less stable. And, as a bonus prediction, I should expect this convergence of my moral map to be accompanied by a sense of peace, because I should expect some amount of unpleasantness to be caused by a non-converged map which is continuously trading one incorrect approximation for another. This convergence, followed by peace, is exactly what the story of ‘the six animals’ predicts.

If there is not a moral reality, and I act like there is, I should expect at least the possibility that I will keep ‘chasing my tail’ in my search for the external moral reality, because no such external reality exists! The search for an external reality, and attempts to gather new information into the map, may never terminate if no such moral reality exists.

However, the search might terminate if I choose to do something like wirehead myself, totally ruining the map. A search that never ends in anything approaching convergence might be evidence against a moral reality. But, also, maybe the external reality does exist and I’m just bad at finding it?
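
Here is a toy sketch of these two predictions, in Python. Everything in it is invented for illustration: the one-dimensional ‘map’, the noise levels, and a simple running average standing in for whatever updating our brains actually do. The only point is the shape of the argument: a map tested against a fixed territory settles down, while a map chasing a territory that isn’t there keeps wandering.

```python
import random

random.seed(0)

def explore(territory_exists: bool, steps: int = 2000) -> list[float]:
    """Toy model: an agent updates a one-number 'moral map' from experience."""
    true_value = 0.7      # the fixed 'moral reality', if there is one
    drift = 0.0           # a sourceless signal that wanders when there isn't
    estimate = 0.0
    history = []
    for t in range(1, steps + 1):
        if territory_exists:
            observation = true_value + random.gauss(0, 0.5)
        else:
            drift += random.gauss(0, 0.05)              # no fixed target at all
            observation = drift + random.gauss(0, 0.5)
        estimate += (observation - estimate) / t        # running average
        history.append(estimate)
    return history

for exists in (True, False):
    h = explore(exists)
    late_movement = max(h[-500:]) - min(h[-500:])       # how much the map still moves
    print(f"territory={exists}: final map={h[-1]:+.3f}, "
          f"movement over last 500 steps={late_movement:.3f}")
```

With a fixed territory, the map stops moving; without one, it never quite does, and no amount of extra sampling changes that.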

The experiment works better if lots of us do this, and then meet up to compare maps. And, again, moral realism pays rent!

Experiment 2: Exchanging Converged Maps


If there is not a moral reality, then, even if I manage to make my map converge, say by wireheading myself, there’s no reason to believe that a large number of people must do so in the exact same way. If we get a large number of us to make our own individual maps converge, but the maps all differ wildly from each other and contain no comparable features, then this might be considered evidence against a moral reality.

If a number of us have been pursuing the stabilization of our maps, and we have all collectively had success in this stabilization (by assuming there is some external reality and attempting to explore it), and our maps also start to converge with each other, then this is exactly what we would expect if moral realism were true.
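
A toy sketch of this second experiment, with the same caveats as before (the shared value, the noise, and the ‘wireheaded’ private attractor are all made up): twenty agents each stabilize a one-number map, and we measure how far apart the converged maps end up.

```python
import random
import statistics

random.seed(1)

def final_map(shared_value, steps: int = 1000) -> float:
    """One agent stabilizes a one-number map via a running average.

    If shared_value is None, the agent still converges -- but onto a
    private, arbitrary attractor (a stand-in for wireheading), not onto
    anything shared with other agents.
    """
    target = shared_value if shared_value is not None else random.uniform(-5, 5)
    estimate = 0.0
    for t in range(1, steps + 1):
        observation = target + random.gauss(0, 0.5)
        estimate += (observation - estimate) / t
    return estimate

for label, shared in (("shared territory", 0.7), ("no shared territory", None)):
    maps = [final_map(shared) for _ in range(20)]
    print(f"{label}: spread of 20 converged maps = {statistics.pstdev(maps):.3f}")
```

Note that convergence alone distinguishes nothing here; it is agreement between independently converged maps that carries the evidence.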

This second experiment sounds like asking: if a bunch of different leaders of different religions all met up, would they find they had some things in common? How much would they have in common? The answer sounds like ‘there is a ton in common, but also much that isn’t’. Although it’s worth asking: how are they trying to make their maps converge? A group which says ‘the only moral truth which exists is written in this one text’ can easily be expected to conflict with the rest of the convention, because they aren’t running the same experiment as the rest of us; they aren’t trying to make their maps more accurate by finding more information in general, so there’s no reason we should expect them to converge with everyone else.

So moral realism predicts convergence only among the maps of people who went out looking for any information that would make their maps more accurate, not restricting themselves to a particular source of data.

Of course, maybe what we’ve discovered isn’t ‘the true value system’, but rather something that only works for human beings? It’s a fair objection! But enough of us agreeing that ‘there is some reality to the true value system for humans’ is a great start. And we might have to ask: what exactly is the content of that map? If it turns out that the map is actually extremely simple from a mathematical perspective, and it generalizes not just to human beings, but also to robots, animals, and possible aliens, I would consider that so wildly unlikely in the absence of an external reality that this occurrence would be really strong evidence that yes, an external reality exists. That last scenario sounds very unlikely, sure, but moral anti-realism absolutely rules it out.

So, now that moral realism and anti-realism can both be made to pay rent, why not try the experiment and see what happens? If you have been doing the experiment long enough to take moral realism seriously, shall we compare maps?

5 thoughts on “Making Moral Realism Pay Rent”

  1. it all depends on if what you call “moral maps” and “moral reality” are actually universal fundamental properties and are not in fact “christian morals” or “human evolutionary desires”. i think your argument is fairly similar to CS lewis’ proof of moral realism/christianity (was it outlined in “mere christianity”? i forget), and probably has been proposed before in non-rationalist/non-predictive-processing terms.

    here’s a thought: what significance does a water bottle have? what does it mean? well, i store water in it and drink it; it serves that function much better than a screwdriver or a lamp-post would. but if you take that water bottle and send it into a parallel universe where it exists in a vacuum, just the bottle, then it doesn’t particularly mean anything. there is no water, there is no drinker, it has no relationship to anything else, it exists in a void.

    so what does my life mean? well, i’m in relation to the earth and other people i meet, and people i haven’t met, and our society has a relationship with the rest of the universe, but what does the entire universe mean? it can’t mean anything, because it is in relation to nothing (that we know), and this meaninglessness cascades, in part, down to me. religions short-circuit this dilemma with an all-powerful being above it all, and we don’t really care what’s above or behind him, because the premise is that he’s in charge of our personal eternal bliss or damnation, so there’s good incentive to take god’s meaning of life as fact.

    1. How would you answer someone who said “physical reality is just a human construct. We pretend that laws of physics are just universal, but for all we know they are just things that the material world does in response to humans looking at it.”

      It’s an interesting hypothesis. But i can think of many good arguments that this thesis is wrong. Most of those arguments also apply in support of the belief that moral reality, i.e. valence, is real.

      > can’t mean anything, because it is in relation to nothing (that we know)

      This way of thinking asserts materialism as the ground truth. I reject this way of thinking, in part because we still don’t have a materialist explanation for consciousness. You know, the thing you are doing literally all the time, the phenomenon for which you have more evidence than anything else, because all the evidence of which you are aware was transmitted to you by the mechanism of consciousness.

      I don’t have any trouble with the statement that prime numbers exist, a priori, independent of physical reality. I think it makes far more sense to see physical reality as a special subset of mathematical truths, rather than the other way around.

  2. How does moral realism predict that things being moral/immoral affects reality? How would we know that those effects are related to whether an action is objectively (im)moral as opposed to just being an amoral or even immoral law of physics?

    1. The question you’re asking suggests you might be missing the frame of the post. I’ll answer it in a bit, but i want to restate the frame, so that the answer doesn’t sound wildly incorrect, which it might unless you really get the frame. What i’ve done is translate “moral realism means that there is truth to whether some things are moral and other things are immoral” into the language of predictive processing, and the notion of valence.

      So, translated, the theory renders the claim “moral beliefs have some reality” as “predictive maps of likely outcomes, and their desirability (i.e., valence), correspond to real territory, not only in the likelihood dimension, but in the valence dimension.”

      More concretely, this says “you can be wrong about how much you want something, and you can be wrong about how good some outcome is.”

      So the evidentiary channel would be how things _actually_ make you feel, compared to how _you think_ they are going to make you feel. This is not a claim that ‘feelings and moral reality are indistinguishable from each other’; it’s a claim that _correct_ feelings map onto reality in the same way that _correct_ visual observations map onto reality – but optical illusions can, and certainly do, exist. In the same way that we use our eyes to gather information about a subset of the electromagnetic field in our environment, i think emotions act like a sense of moral reality, i.e. the valence dimension of physical reality.

      So the answer to your question (“how does the morality/immorality of things affect physical reality?”) is: through our emotions. This is definitely not an argument that emotions _can’t be wrong_, though. On the contrary, it’s an argument for the opposite: that there are _correct_ ways to feel about reality, and that if you make your map of the world converge with reality itself (e.g. by believing moral truth exists, believing accurate knowledge of it will allow your valence map to converge and stabilize, thus giving you peace, and seeking to make your map converge to reality), you should both be less surprised (because you assign probabilities to outcomes in proportion to their likelihood) and more at peace.

      This doesn’t mean that if your kids die in a car crash you won’t feel sad. On the contrary, you _will_ feel sad, and this theory says that it’s only correct that you would feel sad, in an appropriate amount. I.e. you wouldn’t decide to just give up on life and the world. You’d miss them, you’d feel grief, but eventually you would be able to get on with your life and function, because you’d accept their loss as part of the real valence landscape in which you live.
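
      If it helps, here is a toy sketch of that evidentiary channel in Python. The situations, their ‘true’ valences, the noise level, and the learning rate are all invented for illustration; the only point is the mechanism: felt valence minus predicted valence is a prediction error, and the map moves toward what was actually felt.

      ```python
      import random

      random.seed(2)

      # invented 'true' valences -- stand-ins, not claims about actual morality
      TRUE_VALENCE = {"help a friend": 0.8, "break a promise": -0.6, "idle scrolling": -0.1}

      predicted = {situation: 0.0 for situation in TRUE_VALENCE}  # a naive starting map

      def felt_valence(situation: str) -> float:
          """Noisy emotional feedback: how the situation actually feels."""
          return TRUE_VALENCE[situation] + random.gauss(0, 0.2)

      lr = 0.05  # learning rate
      for step in range(1, 2001):
          situation = random.choice(list(TRUE_VALENCE))
          felt = felt_valence(situation)
          surprise = felt - predicted[situation]   # the prediction error
          predicted[situation] += lr * surprise    # move the map toward what was felt
          if step in (1, 100, 2000):
              report = ", ".join(f"{k}: {v:+.2f}" for k, v in predicted.items())
              print(f"step {step:4d}: {report}")
      ```

      As the predicted valences approach the felt ones, the average surprise shrinks: less surprise, more peace, in exactly the sense used above.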

  3. Well that’s a great idea. And because of this I am pretty sure some guy must have already done it.

    E.g. collected the moral maps from religions, wise old people, etc., and summarized it all in a nice structured way.
