We construct ‘value maps’ of the world using our brains. If predictive processing is true, we effectively live inside these value maps, since we live inside the worlds our brains construct for us based upon our beliefs.
A question arises: is there any real territory to which these maps point? Moral realism says ‘yes’. Moral anti-realism says ‘no’.
How could we know whether there is such a territory? Can we make moral realism pay rent? Yes!
We can make moral realism pay rent by asking: if value maps have some real territory, what should happen if I assume there is a moral reality and try to make my map reflect it as closely as possible?
Experiment 1: Attempting to Converge Our Own Maps
If there is a moral reality, and I act as if there is, making it my goal to have my map reflect that reality as accurately as possible, I should expect my moral map to eventually converge or stabilize, rather than shift and change indefinitely. If there is an external reality, and I continually test my map against it, each test should yield more information about that reality, and this information should increase the fidelity of my moral map until it is accurate enough to become more or less stable. And, as a bonus prediction, I should expect this convergence to be accompanied by a sense of peace, because some amount of unpleasantness is plausibly caused by a non-converged map that keeps trading one incorrect approximation for another. This convergence, followed by peace, is exactly what the story of ‘the six animals’ predicts.
If there is not a moral reality, and I act like there is, I should expect at least the possibility that I will keep ‘chasing my tail’ in search of the external moral reality, because no such reality exists! The search for an external reality, and the attempt to gather new information into the map, may never terminate if no such moral reality is there to be found.
However, the search might terminate if I choose to do something like wirehead myself, tampering with the map directly rather than testing it against anything. A search that never ends in anything approaching convergence might be evidence against a moral reality. But then again, maybe the external reality does exist and I am simply bad at finding it?
The experiment works better if lots of us do this, and then meet up to compare maps. And, again, moral realism pays rent!
Experiment 2: Exchanging Converged Maps
If there is not a moral reality, then even if I manage to make my map converge, say by wireheading myself, there is no reason to believe that a large number of people must do so in exactly the same way. If a large number of us each make our individual maps converge, but the resulting maps differ wildly from one another and share no comparable features, then this might be considered evidence against a moral reality.
If a number of us have been pursuing the stabilization of our maps, and we have all collectively had success in this stabilization (by assuming there is some external reality and attempting to explore it), AND our maps start to converge with each other – then this is exactly what we would expect if moral realism were true.
This second experiment sounds like asking: if a bunch of leaders of different religions all met up, would they find they had some things in common? How much would they have in common? The answer sounds like ‘a ton is in common, but also much that isn’t’ – although it is worth asking how each group is trying to make its map converge. A group which says ‘the only moral truth which exists is written in this one text’ can easily be expected to conflict with the rest of the convention, because they aren’t running the same experiment as the rest of us; they aren’t trying to make their maps more accurate by seeking out information in general, so there is no reason we should expect them to converge with everyone else.
So moral realism predicts convergence only among the maps of people who went out looking for any information that would make their maps more accurate, rather than restricting themselves to a particular source of data.
Of course, maybe what we’ve discovered isn’t ‘the true value system’, but rather something that only works for human beings? It’s a fair objection! But enough of us agreeing that ‘there is some reality to the true value system for humans’ is a great start. And we might then have to ask: what exactly is the content of that map? If it turns out that the map is extremely simple from a mathematical perspective, and that it generalizes not just to human beings but also to robots, animals, and possible aliens, I would consider that so wildly unlikely in the absence of an external reality that it would be really strong evidence that yes, an external reality exists. That last scenario sounds very unlikely, sure, but moral anti-realism rules it out absolutely.
So, now that moral realism and anti-realism can both pay rent, why not run the experiment and see what happens? If you have been running the experiment long enough to take moral realism seriously, shall we compare maps?