Dedomic Utilitarianism

This post loosely follows on from Resolving Moral Dilemmas using Uncertainty and Insanity.

Utilitarianism – A Brief Background

My aim here is not to provide yet another response to all of the common challenges to Utilitarianism.  A fairly good overview of the challenges Utilitarianism has faced over the years can be found in its Wikipedia article; alternatively, for an even deeper dive, this article gives a very thorough summary of such things.  The issue is that, having fended off these initial criticisms, the philosophy of Utilitarianism has encountered a few problems which are not so easily dispatched.  Of the different types of Utilitarianism, some resolve certain problems and others resolve other problems, but all reveal their own cracks in turn. It is on these deeper issues that I intend to focus.

That being said, I will at least spend a couple of paragraphs on the more straightforward criticisms, so that it doesn’t seem like I am brushing them under the rug. As such, feel free to skip to the next section if they don’t hold any interest for you.

Utilitarianism is a very compelling system of ethics, both for its seeming simplicity and for the fact that most of its proponents have been demonstrably far ahead of their time on many moral questions (e.g. Jeremy Bentham’s views on animal rights, and John Stuart Mill’s views on gay rights).  It is also much more resilient to challenges than many people initially expect. Upon first hearing about Utilitarianism, people often raise objections on relatively simplistic grounds, perhaps not realising that as a philosophy it has been around since at least the 1780s. Over more than 200 years of careful deliberation, it has evolved into a cluster of philosophies with a significant following in various communities.

Many challenges take the form “what if situation X occurred – Utilitarianism would tell you to do Y, but this has bad consequences.”  As a form of consequentialism, the answer to this is very simple: why does the presence of this bad consequence not have a coefficient in your utility function?  Once it is included, Utilitarianism no longer suggests taking action Y, as its consequences are bad. Even the question “what if taking the time to consider the utilities of different actions itself causes negative consequences?” can be answered by simple pragmatism.  Of course you can’t do a full utilitarian calculation around every action you ever take. Ethical rules are very useful, and we can use Utilitarianism to work out which rule sets result in the best society. There may be occasional grey areas where the rules don’t give clear answers, at which point it becomes far more reasonable to fall back on the calculation. This is a perfect example of the Law-Morality-Axiology hierarchy in action (archive). Indeed, this approach is known as Two-level Utilitarianism, a synthesis of two earlier viewpoints: Act Utilitarianism and Rule Utilitarianism.
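
To make the “coefficient” point concrete, here is a minimal sketch of the kind of utility function being assumed (the notation is mine, not anything standard):

    U(a) = \sum_i w_i x_i(a)

where the x_i are measurable features of the outcome of action a and the w_i are weights. If action Y has some bad consequence x_Y, then giving x_Y a negative weight w_Y means an agent maximising U stops recommending Y – the challenge dissolves into a question about the weights.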

Addressing More Stubborn Issues

Whether you subscribe to Hedonic Utilitarianism or Preference Utilitarianism (or indeed the somewhat more fuzzily defined Ideal Utilitarianism), there is still the question of whether you are interested in maximising average utility, maximising total utility or minimising negative utility.  You can construct a thought experiment in which Total Utilitarianism would be happy to fill the universe with clones or utility-bots, or even just with people who were only-just-happy, in a kind of Malthusian race to the bottom – this is Parfit’s “Repugnant Conclusion”.  Equally, under Average Utilitarianism, painlessly killing people (even ones with positive utility) might in some conditions be justified, if it improves the average level of utility.  Negative Utilitarianism gives rise to thought experiments in which it is desirable for humanity, or even all life, to die out if certain amounts of suffering are unavoidable (though there are some fairly sensible attempts to avoid this conundrum).

Therefore, my objective is to resolve some conflicts between Negative, Average and Total Utilitarianism, whilst attempting to synthesise Preference Utilitarianism and Ideal Utilitarianism – explaining why people might sacrifice themselves to protect monuments or knowledge from destruction (e.g. the Library of Alexandria, the Sack of Rome, Palmyra), and why people value “being remembered” and their legacy and long-term impact on society.

So here goes:

  • People are defined by their minds and their bodies.
  • People’s bodies are described by their DNA.
  • People’s minds are neural networks, which are described by the pattern and strengths of the connections between their neurons.
  • From someone’s DNA and neural net matrix, it should (at least hypothetically) be possible to recreate them.
  • Therefore people are data.
  • If people are data, people dying is data loss.
  • Further to this, people produce additional data over the course of their lives.
  • Works of art can be described by data.
  • Scientific progress is the steady accumulation of data.
  • Importantly, suffering is still a key element of the utilitarian calculus – suffering is bad.

This gives us a useful starting point – as well as caring about happiness or the removal of suffering, it seems that in an abstract sense we already naturally care about data.  Based on this train of thought, data loss is bad and should be avoided.
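
As a toy illustration of the chain of reasoning above – a minimal sketch in which the field names, types and the idea that serialisation captures a person are all illustrative assumptions of mine:

    from dataclasses import dataclass
    import pickle

    @dataclass
    class Person:
        genome: bytes                      # body: the DNA sequence
        connectome: dict[int, set[int]]    # mind: neuron id -> connected neuron ids

    def as_data(p: Person) -> bytes:
        # If a person really is captured by genome + connectome, then a person
        # serialises to a byte string - and a death with no surviving copy of
        # that byte string is data loss.
        return pickle.dumps(p)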

Replacing Subjective Value Judgements with Data

Looking at Ideal Utilitarianism, the view is that some things have greater worth than simple hedonistic “lower pleasures”.  A lifetime spent in ecstasy is judged as in some way inferior to a life of discovery, or a life creating beautiful art. In this sense, data becomes a more rigorous objective than the concept of “intrinsic good” – it separates the pleasure of creation or of making a discovery from the “worthiness” of the creation or discovery, as well as giving us a metric which might allow us to assess some aspect of worthiness. We then need to determine some sort of conversion rate between pleasure and data.

Once separated from the “worthiness” judgement, I personally find that the concepts of pleasure and intrinsic good map fairly well onto Herzberg’s Hygiene and Motivational Factors model. You can only become so happy by satisfying the hygiene factors (lower pleasures); beyond that point you are wasting effort and resources. Motivational factors (higher pleasures/intrinsic good) only come into play once the hygiene factors are satisfied, enabling one to reach a much higher level of happiness.  This allows us to recognise that a life of discovery can be quite rough and entirely worthwhile, yet still be improved by a little hedonism. It also recognises that whilst it can be very fun to rediscover something for yourself that has already been established, it is even better to discover something new – creating new data rather than backing up the old.
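
A toy numerical version of that mapping – the cap, threshold and scale below are invented purely for illustration, not taken from Herzberg:

    def happiness(hygiene: float, motivators: float) -> float:
        # hygiene: satisfaction of lower pleasures, in [0, 1]
        # motivators: engagement in discovery/creation, in [0, 1]
        base = 5.0 * min(hygiene, 1.0)   # lower pleasures saturate
        # Motivational factors only pay off once hygiene is largely satisfied.
        bonus = 10.0 * motivators if hygiene >= 0.8 else 0.0
        return base + bonus

    # happiness(1.0, 0.0) ->  5.0   a comfortable hedonist
    # happiness(1.0, 1.0) -> 15.0   a comfortable discoverer
    # happiness(0.5, 1.0) ->  2.5   a suffering discoverer: motivators blocked,
    #                               so a little hedonism genuinely helps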

Valuing data also allows for Preference Utilitarianism: if someone prefers to suffer in order to generate more data, this can be acceptable.  Hedonic Utilitarianism would try to stop people from suffering even if they wanted to, denying free choice; making data another component of utility allows for a balance between data and suffering.  It also resolves the issue around death. Whilst we might associate dying with suffering, once we are dead we no longer suffer; death may cause suffering to those who care about us, but if the entire planet were incinerated, who would be left to care?  Valuing data itself, and treating the avoidance of data loss as a key consideration alongside utility, resolves this: any death is negative, as it is the loss of the data that defines that person.

Δεδομένα

What I am considering is therefore a slight adjustment to Utilitarianism, in which as well as maximising utility or minimising suffering, we also aim to maximise data or minimise data loss.  Whilst this does complicate things somewhat (how much data is one utilon worth?), it keeps the fundamentals of the system as simple as possible. This has the potential to give us a more complete framework, of which Preference Utilitarianism and Ideal Utilitarianism are facets.  They are effectively projections of the full system, which look similar but disagree in places because the fundamental nature of the system is hidden – much like the shadow of a cube, which appears as a square one moment and a hexagon the next.
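
One way of writing the adjusted objective down, with λ standing in for the unresolved “how much data is one utilon worth?” exchange rate (again, my notation, not anything established):

    V(a) = U(a) + \lambda D(a)

where U(a) is the ordinary utilitarian term and D(a) is the data created minus the data lost by action a. On this view, the places where Preference and Ideal Utilitarianism disagree look like different implicit choices of λ.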

For want of a better term, I refer to this as Dedomic Utilitarianism, from the Greek “δεδομένα”, meaning data.  I considered “Data-oriented Utilitarianism” and similar variants, but these make it sound more like computer science than philosophy.  Dedomic Utilitarianism then stands as an alternative to Hedonic Utilitarianism, accepting a slightly more complicated, but hopefully still reasonably objective, definition of what we consider to be good.

Diversity vs. Redundancy

Every animal, plant, race and culture can be defined by some sort of data blueprint, and is demonstrably able to exist in some environment.  The environments in which they succeed and fail, and the advantages they confer under different scenarios, make them valuable data to anyone trying to find ways to improve the world.  This sets them apart from random noise, which for these purposes is not data. Diversity is therefore data. Homogeneity and uniformity are redundancy: some redundancy is useful for guarding against data loss, whilst too much redundancy is a waste of resources.

At this point it is worth mentioning entropy.  Random noise is high entropy and data is low entropy, but homogeneity is also low entropy, so it is not sufficient to simply minimise entropy.  In fact, minimising the increase of entropy is a corollary of Utilitarianism: entropy always increases, and low entropy is a precious resource that we should spend wisely.  If we take actions that give us utility or data at a high entropy cost when less expensive ways of generating them exist, then we have forgone the extra utility or data that the cheaper actions would have yielded over the long term.
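
The noise/data/redundancy distinction can be loosely operationalised with a compressor, as a crude stand-in for Kolmogorov complexity – zlib is just a convenient choice here, and the sample inputs are invented:

    import os
    import zlib

    def compressed_size(blob: bytes) -> int:
        return len(zlib.compress(blob, 9))

    # Structured-but-varied text: compresses partially.
    structured = " ".join(f"neuron {i} fires at rate {i % 7}"
                          for i in range(300)).encode()
    n = len(structured)
    noise = os.urandom(n)        # high entropy: barely compresses, yet not data
    homogeneous = b"A" * n       # low entropy: compresses to almost nothing

    for name, blob in [("noise", noise), ("homogeneous", homogeneous),
                       ("structured", structured)]:
        print(name, compressed_size(blob))
    # Neither extreme is what we value: noise is incompressible but useless,
    # homogeneity is pure redundancy, and the interesting "data" sits between.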

How Does it Stack Up?

Having roughly described the idea behind Dedomic Utilitarianism, it is worth seeing how it stacks up against the issues we raised earlier:

  • We observed that Negative Utilitarianism would prefer to destroy the entire universe if it were found to contain too much suffering.  Dedomic Utilitarianism says that this would result in massive data loss, therefore a higher bar of suffering is required to justify such a conclusion.
  • Average Utilitarianism would suggest that under some conditions, painlessly killing people might be justified, if it improves the average level of utility.  Dedomic Utilitarianism says that killing people is data loss, so they would have to be suffering significantly to justify this.
  • Total Utilitarianism would be happy to fill the universe with super-happy clones/utility-bots.  Dedomic Utilitarianism says that this is redundant – the additional clones/bots add nothing beyond improving resilience against data loss, which has diminishing returns as duplication increases (a minimal model of this follows below).
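
A minimal model of those diminishing returns, with invented parameters: suppose a datum has value D, and each copy independently survives some period with probability 1 − p. With k copies, the expected surviving value is

    E[value] = D (1 - p^k)

so the marginal value of the k-th copy is D p^{k-1} (1 - p), which shrinks geometrically – the first backup buys a lot, the millionth clone buys almost nothing.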

Taking a common thought experiment, we can see whether it stacks up against our intuition: if you were cloned in a teleporter malfunction, what would be the ethical implications of painlessly terminating one of the two of you?  Dedomic Utilitarianism answers that it depends on when the termination happens. Immediately after the duplication it is only a minor concern: you are identical, so no data is lost; you merely lose some potential for redundancy.  Later, your two minds will have started to diverge due to different experiences – the longer the two of you exist, the more different you become, and the more data loss results from terminating either one. In the extreme case, this starts to look like having an identical twin.  Identical twins are different people who share DNA: although the DNA is duplicated, and therefore valuable only as redundancy, the mind’s configuration accounts for far more of the data, and it is likely to differ greatly between the twins, making the preservation of both worthwhile.
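
The divergence intuition can be given the same compression treatment as before: approximate the data lost by terminating copy A while copy B survives by how much A adds on top of B – a crude stand-in for the conditional Kolmogorov complexity K(A|B), with obviously hypothetical mind-state strings:

    import zlib

    def c(blob: bytes) -> int:
        return len(zlib.compress(blob, 9))

    def loss_if_terminated(a: bytes, b: bytes) -> int:
        # ~ K(a|b): information in a not already present in the survivor b
        return max(0, c(b + a) - c(b))

    shared = b"memories up to the teleporter accident " * 50
    a_later = shared + b"then A moved to Mars and studied geology " * 50
    b_later = shared + b"then B stayed home and learned the cello " * 50

    print(loss_if_terminated(shared, shared))     # ~0: pure redundancy
    print(loss_if_terminated(a_later, b_later))   # grows as the copies diverge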

Further Considerations

So far, I have been rather vague about whether we are trying to maximise utility or minimise suffering, and whether we are trying to maximise data or minimise data loss.  The choice is not obvious, but as far as utility goes, people’s happiness tends to renormalise, leaving us forever chasing a greater high; contentedness and the absence of suffering seem more tractable and better behaved, so minimising suffering looks the safer option.  As far as data goes, minimising data loss would result in lots of redundancy, whereas maximising data allows for some redundancy whilst also seeking progress.

Under Dedomic Utilitarianism, I am therefore inclined to propose that we should minimise the suffering of conscious beings whilst maximising data.  This approach aligns well with intuitions about people being very unhappy when their legacy is destroyed, about the value of stored knowledge and monuments, and about the kind of people who would seek to destroy them.

A few further questions to ponder:

  • Should we care if people’s minds are “put on ice”?  To me this seems preferable to their being destroyed through death or data loss.  If you are never booted up again, this is unfortunate, but your data can still help other conscious minds work out how to be happy and how to avoid suffering.
  • Should we care if data exists but is inaccessible?  If inaccessible means permanently lost, I would view this as data loss, therefore bad.  Something outside the cosmological horizon is permanently lost for all practical purposes given our current understanding of the universe.
  • Do we care about data because of its use to conscious minds, or just for itself?  Personally, I think I still care about its use to conscious minds. Perhaps the theory might be purer if this were irrelevant, but I feel like an ethical system in the absence of conscious minds is a little pointless.

How do we value data? The human genome contains around 1.5GB of data.  The human brain contains around 100 billion neurons, each of which can have 10,000 connections.  To make a very rough estimate of the amount of data it would take to fully define someone’s brain, take a population of 50 billion neurons (roughly half of the total), and assign every possible pair a 1 if the neurons are directly connected, and a 0 if not.  The number of pairs of neurons is 50 billion choose 2, which is approximately 10^21; but presuming that each neuron’s 10,000 connections are drawn from some set of its 1 million closest neighbours, we only need 50 billion × 1 million = 5×10^16 bits, or around 6 PetaBytes. Both of these figures probably contain a lot of redundancy, however: your genome is more than 99% the same as everyone else’s on the planet, and it is possible that the data that makes up people’s brains could be compressed significantly too. Things become further complicated when we consider knowledge – some data is clearly more valuable than other data.  The laws of general relativity can be written very concisely, but if we lost that knowledge and had to rediscover it, that would take a long time and a lot of effort. Perhaps the value of a piece of data is given by the entropy increase it would take to re-acquire it.
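
A quick sanity check of that arithmetic – every number here is one of the rough assumptions above, not a measurement:

    from math import comb

    neurons = 50_000_000_000          # ~half of the ~100 billion total
    print(f"{comb(neurons, 2):.2e} possible pairs")   # ~1.25e+21

    neighbours = 1_000_000            # assume connections fall within the
                                      # million nearest neurons: 1 bit per pair
    candidate_bits = neurons * neighbours
    print(f"{candidate_bits:.2e} candidate bits")     # 5.00e+16

    print(f"{candidate_bits / 8 / 1e15:.2f} PB")      # ~6.25 petabytes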

This question, and the question of how the value of data compares to the value of utility, are both difficult but probably worth investigating.  Despite the inevitable uncertainty, I am sure that reasonable upper and lower bounds can be found that allow for useful discussion and calculation.
