Resolving Moral Dilemmas using Uncertainty and Insanity

One of the main criticisms of Consequentialism is that it leads to a state where anything not forbidden is mandatory.  If we must optimise future consequences, then no choice has absolutely zero effect either way, which is what would be required for it to be merely permissible.  The seemingly innocuous question of “should I eat a sandwich for lunch?” becomes something with a morally absolute right and wrong answer. Even if the extent of forbiddenness or mandatoriness varies with the severity of the consequences, we are left with a system in which any expression of free will is fundamentally negative.  An automaton that always picked the morally mandatory option would be a better person than anyone who ever chose differently. This outlook neglects uncertainty, however – in an uncertain world, there is necessarily a grey area between choices with obviously positive consequences and choices with obviously negative consequences. In these situations, it is reasonable to treat the grey area as permissible, as there is no way to determine which choice is better.

Intuitions about Real World Uncertainty

Uncertainty actually renders many philosophical gotchas irrelevant. The trolley problem variant in which pushing a fat man onto the tracks can save five people makes this obvious – there is no realistic world in which one could be certain that this action would have the described effect, only different expected probabilities.  Our reaction to this question – that it seems wrong to push him – is correct. Even aside from the very human and visceral difference between flicking a switch and pushing someone to their death, we know instinctively that there is more uncertainty about whether a body will stop the train than about whether switching tracks will divert it.  What if we push the man and the others still die? We will have made the situation worse. At least when we flick a switch to divert the train onto other tracks, there is a very high probability that the switch will have the desired effect. We can insist, when we phrase the question, that we are absolutely certain the fat man’s body will stop the train, but this is simply impossible – we can never attribute a probability of 100% to anything, so the question becomes unrealistic.  The removal of uncertainty takes away a key feature of the scenario.

In general, a large positive outcome invoked to justify a negative action always carries the risk that it will not materialise.  Performing the negative action could result in no positive outcome, so the expected positive must be discounted by a risk factor. This risk factor is highly situational – it could be small, but in some situations it could be large enough to reverse the conclusion of the utility calculation.  This demonstrates very effectively why utilitarianism can often give nonsensical results in highly constrained situations. The scenario often demands that we know the outcome of a particular action for “certain”, which is impossible, and goes against one’s common sense. In the hypothetical world where the outcomes are all certain, the nonsensical result is correct, but only because the world is nonsensical too – absolute certainty can never be achieved, not least because there is always a non-zero probability that one is hallucinating, and the real world is not real after all.  This probability is usually small enough that it can be ignored within the calculation, but in a sufficiently contrived scenario it becomes the more likely explanation, and can no longer be ignored.
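To make the discounting concrete, here is a minimal sketch in Python. The utility values and probabilities are entirely made up for illustration; the point is only that the same benefit and cost can flip between justified and unjustified depending on the probability of success.

```python
def expected_utility(benefit, cost, p_success):
    """The benefit is realised only with probability p_success;
    the cost of the negative action is paid regardless."""
    return p_success * benefit - cost

# Hypothetical numbers: a negative action costing 10 units of utility,
# promising a benefit of 50 units if (and only if) it works.
print(expected_utility(benefit=50, cost=10, p_success=0.9))  #  35.0 -> looks justified
print(expected_utility(benefit=50, cost=10, p_success=0.1))  #  -5.0 -> conclusion reverses
```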

Losing Your Sanity Has Non-Zero Probability

The concept of uncertainty can be applied to one’s own mental state.  It is possible to come up with many “traps” for utilitarians and consequentialists, offering even higher stakes and more horrific acts than the aforementioned trolley problem.  Would you [commit extremely horrific/violent/traumatic act] against a [innocent bystander/child/loved one] in order to prevent [the same horrific act being committed on a far greater scale/the death of all of humanity]?  (Please note, I have deliberately avoided specifics here, because I am aware that some people take these questions extremely seriously, and can experience psychological trauma through trying to wrestle with their own willingness or not to do these things.)  The point behind raising this is that these situations are extremely contrived – if you take one example and consider the chain of events that would have to take place for you to end up facing that choice, it is a highly unlikely series of events. I argue that it is far more likely that someone would be suffering from a mental illness, such as a form of psychosis, than that any such chain of events would actually occur to them.

This gives us a “get out of jail free card” for such questions – we may agree that technically the answer might be to do the [insert bad action here] in order to prevent the [insert even worse consequence], but we can state that in reality we would never do the [insert bad action here], because when faced with this choice, our uncertainty about our own mental state becomes relevant.  It is more likely that we are suffering from a form of psychosis that has convinced us we have been presented with this choice than that we have actually been presented with it in reality. This means that doing the [insert bad action here] would simply result in a world where a good person inexplicably has a psychotic break and does a terrible thing for no discernible reason.  This tells us that deontological injunctions are useful even for utilitarians and consequentialists, as a safeguard against psychosis-related mistakes. If what it would take for you to do [insert bad action here] is sufficiently contrived, it is reasonable to follow the rule “never do [insert bad action here], regardless of the circumstances”, because the likelihood of you ever being in that situation is far lower than the likelihood of you ever getting a mental illness that warps your sense of reality – and your mind can make the stakes as high as it wants.
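The comparison can be sketched numerically. The probabilities below are invented orders of magnitude, purely for illustration, but they show why the injunction dominates once the scenario is sufficiently contrived:

```python
# Invented orders of magnitude, for illustration only.
p_scenario_real = 1e-9  # chance the contrived dilemma genuinely occurs in your lifetime
p_psychosis = 1e-3      # chance of an illness that convincingly warps your sense of reality

# Given that you *believe* you face the dilemma, hallucination is the
# overwhelmingly more likely explanation:
odds = p_psychosis / p_scenario_real
print(f"Hallucination is ~{odds:,.0f}x more likely than the dilemma being real")

# So the rule "never do the bad action, regardless of circumstances" wins:
# almost every world in which you would break it is one in which you are
# not perceiving reality correctly.
```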

Further extending this, the question of what you would do in a “Groundhog Day” scenario (where reality will reset itself, and the consequences of your actions will cease to exist) becomes much clearer.  Again the situation is extremely contrived – if you believe yourself to be in a state of reality in which there will be no persisting consequences, you are far more likely to be suffering from a mental illness.  Therefore the answer “no, I wouldn’t murder anyone” is completely reasonable, because if you answered any differently, you would be the kind of person who could be convinced to murder someone just because you thought you were in the plot of a movie.

When to Break the Rules

The effect of uncertainty also impacts the Law-Morality-Axiology hierarchy.  Deciding whether your moral qualms are sufficient to override the law – finding the point at which the immorality of a situation outweighs the negative societal impact of being seen to erode the legitimacy of the law – is a grey area riven with uncertainty.  Likewise the concept of “don’t let your morality get in the way of doing what is right”, in which your consequentialist calculation suggests that in this particular situation your normal moral rules are actually recommending a harmful action – allowing yourself to break your own rules is a dangerous temptation to rationalise, but it is another uncertain grey area, as it can sometimes be for the best.  The boundaries of each of these are blurry, and therefore there are choices between them that are permissible rather than mandatory.

Law and morality often suggest the same course of action, resulting in reasonably low uncertainty.  When morality starts suggesting a different course of action, uncertainty increases – there may be unintended consequences of breaking the law, or factors that you are not aware of or have not considered.  When faced with uncertainty, it is prudent to be conservative, sticking with the rule just in case. After a certain point, however, the immorality of the law becomes less in doubt, and uncertainty starts to decrease again.  At this point it is not obvious whether to conservatively stick with the rule or to follow what is moral. It only becomes clear that you should ignore the rule when uncertainty has decreased sufficiently – when it is blindingly obvious that the law is immoral and needs to be changed, and the mere presence of that law is in itself eroding the legitimacy of the legal system.  The same applies when deciding whether to override morality with axiology – although we might prudently stick with our rules during high uncertainty, the region after the peak in uncertainty gives rise to a grey area in which it is permissible to override the rule, but not yet mandatory.

The Benefit of Prudence

This area of uncertainty is highly relevant to the concept of ethical offsetting – the idea that ethical decisions can be fungible, allowing us to do slightly unethical things but apply effort in other areas to negate the effect.  Three key considerations contribute to this uncertainty:

  1. Fungibility of outcome.  Taking an extreme example, allowing a person to kill someone because it will save many lives ignores the irreplaceability and non-fungibility of people. Without specific information about the people involved, the outcomes would need to be overwhelmingly skewed before we could be sure enough of a positive outcome.
  2. Risk of failure.  Even if people were fungible, and you decided to kill someone to harvest their organs, a donated organ might be rejected, thus not saving the recipient. The trolley problem variants that feel instinctively worse, such as “pushing the fat man”, are troubling in part because in reality the end results of those situations are far more uncertain. Invoking physically impossible levels of certainty that the trolley will be stopped does not convince the mind that the risk of failure has been removed.
  3. Societal damage.  If, for example, wealthy people are allowed to break the law in too fundamental a way, having “offset” their transgressions, the rule of law itself could cease to be respected by wider society.  This would have huge negative impacts not initially factored into the offsetting calculation.

Again, we need to be well beyond the peak in uncertainty around any axiology-morality disagreement in order to avoid other unconsidered negative externalities.  This does potentially allow ethical offsetting in certain areas, but the outcome of doing “the axiologically better thing” must be sufficiently likely to be more positive than sticking with our moral code.

Personal Experience

[Trigger warning – injury, psychological trauma]

Going back to the idea that there is a non-zero probability that you are hallucinating, and that any consequentialist should acknowledge this in their assessment of a situation, I have a relevant personal anecdote.

The concept itself is one that I realised around a decade ago, when undertaking some very aggressive introspection. I had posed myself a hypothetical scenario, and was trying to work out what exactly it was that made me so uncomfortable with the idea of committing a horrendous act in order to save all life on earth. It was not a pleasant train of thought, and I do not recommend such aggressive introspection to anyone (it is probably not great for your mental health). Once I realised why, however, I was able to gradually incorporate “awareness of possible insanity” into my internal decision-making apparatus. This may seem ridiculous, pointless or unreasonably abstract, but as luck would have it, it turned out to be far more beneficial than even I was expecting.

Just over four years ago, I was in a serious accident in which I broke… most things. For the first few days in hospital I was sedated, but when I finally came around, I was extremely disoriented. The cocktail of painkillers I was on at the time produced hallucinations, which exacerbated the disorientation and led to paranoia that I can best describe as a temporary Capgras delusion. I am fine now, both physically and mentally, but I still vividly remember how real the hallucinations felt while I was in this delusional state. I was convinced that the people I recognised around my bed were evil alien doppelgangers trying to lull me into a false sense of security so that they could experiment on me to find ways to enslave all of humanity. This sounds so obviously and completely ridiculous to any sane person that it is almost laughable, but drug-induced hallucinations can be really very convincing. Adrenaline coursed through my body as I realised that I had to escape at any cost – humanity depended on it.

Things could have gone very badly at this point. My body was still in a fairly precarious state that would not have dealt well with a serious attempt at fight or flight. More than that though, the people around my bed who had been waiting for me to wake up were in an understandably fragile emotional state – it is quite scary to think how much psychological trauma I could have dished out had I acted on my beliefs at that time without any reservations.

Thankfully, I did have reservations. Something in the back of my mind was already primed to consider the fact that I might not be entirely sane. This thought caused me to try to assess how likely it was that the situation was real, and despite my daze, I was able to conclude that at the very least the probability of alien abduction did not eclipse the probability that I was hallucinating. I was still highly suspicious of everyone, and fought against nurses trying to inject me with things, but I didn’t say or do anything that couldn’t be undone, just in case it wasn’t real.

6 Replies to “Resolving Moral Dilemmas using Uncertainty and Insanity”

  1. Great post! Given this context, can one justify taking an action against the non-aggression principle?

  2. Thanks! That is a difficult question and I don’t think it has a simple answer.

    I don’t think it can completely preclude any action against the non-aggression principle, because difficult and complex scenarios do ultimately happen, and it is not reasonable to expect people to be completely paralysed to act. Really, it depends on the level of aggression – the less reversible the aggression, the harder a mistake is to mitigate, and therefore the more convoluted the situation must be to justify it.

    For example, breaking someone’s finger in order to close a door to prevent the spread of a fire seems fairly reasonable. They weren’t deliberately endangering people, but “aggression” is justified in order to save lives. Even if the number of lives at risk is uncertain, it would be hard to argue that breaking that person’s finger was not reasonable.

    Killing someone, or causing severe physical or psychological trauma, are the real issues here, as they are basically irreversible. Even in a situation where the Non-Aggression Principle would not usually apply, such as self-defence, this concern about uncertainty is still valid.

    For example, if a knife wielding maniac comes into the room threatening everyone’s life, it may still be worth trying not to use lethal force, just in case you are hallucinating and end up harming an innocent person. Alternatively, you could reduce the risk of it being a hallucination by getting confirmation of the situation from people around you (requiring either them to be a part of your hallucination too, or for some sort of mass delusion to be occurring, both of which are even lower probability situations). This additional assurance could potentially allow greater force to be justified.
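    To sketch the effect of confirmation numerically (with invented probabilities – the real values don’t matter, only the multiplication), each confirming witness must themselves be part of your hallucination for the situation to be unreal:

    ```python
    # Invented probabilities, for illustration only.
    p_hallucinating = 1e-3        # chance you are hallucinating the maniac at all
    p_witness_also_unreal = 1e-2  # chance any one confirming witness is part of the hallucination

    def p_situation_unreal(n_witnesses):
        """Each confirming witness must also be hallucinated for the threat to be unreal."""
        return p_hallucinating * p_witness_also_unreal ** n_witnesses

    print(p_situation_unreal(0))  # 0.001
    print(p_situation_unreal(2))  # 1e-07 -- far lower, so greater force is easier to justify
    ```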

  3. OK thank you for your reply

    What about in situations with an extended time horizon, where there is no urgency / time pressure?

    Does the same logic apply?

    I think the extended time horizon gives you both more time to investigate the situation, allowing you to reduce uncertainty in some areas, and more time to investigate solutions, reducing the likelihood that aggression is the only solution that can be found. In one way or another, it is usually the time constraint that gives rise to these ethical dilemmas, as without it, the solution space can be much larger.

    Do you have a specific example in mind?
