Arguing for utilitarianism

By Omnizoid @ 2021-12-14T19:31 (+3)

Section 1: the first three arguments

It shall be the aim of this post to argue in favor of utilitarianism.  I shall argue that we can derive utilitarianism from a diverse set of plausible axioms, each with extremely compelling supporting arguments.  Many of these arguments come directly from utilitarianism.net, with my own expansions of them.

The first argument for utilitarianism is on the basis of theoretical virtues.  Utilitarianism does quite well here.  It is simple and clear, has great explanatory power, and is universally applicable.  This gives it an advantage over other leading theories, which require far more assumptions.  Virtue ethics requires a complex and disunified account of virtues, while deontology requires that we grant a variety of specific rights with no unifying factor.  This counts in favor of utilitarianism because, prima facie, we should prefer simpler theories.  Utilitarianism is simpler than its counterparts, so we should accept it.

Critics of utilitarianism could raise a variety of objections.  They could first argue that utilitarianism's simplicity is actually a problem.  Much like we would reject a theory of human nature that explained all actions with a single cause, we should reject accounts of morality that are too simple.  This, however, misses the point.  The reason we should reject a simple account of all human action is that it fails to explain the data; if it could adequately explain the data, we would have reason to accept a single cause of all human actions.  Whether utilitarianism can provide an adequate explanation of morality will be discussed later.  Thus, this is not an objection to the simplicity argument so much as a separate argument, which will be discussed later.

They could next argue that we don't need simplicity in moral theories.  If morality is not truth-tracking, and is instead merely a way of explaining our intuitions, we have no reason to prefer simplicity.  This is perhaps an adequate response if we accept moral anti-realism.  However, my argument is merely intended to establish that if we accept realism, we should prefer utilitarianism on grounds of simplicity.

They could next accept that this argument counts in favor of utilitarianism but argue that it is not sufficient to prove it.  How much it proves will depend on the weight we give to the other arguments for and against utilitarianism.  However, the strength of this argument should not be understated.  In other domains, simplicity is extremely important, and other moral views are unable to give a parsimonious account of morality.  Much like we would have decisive reason to prefer a theory of physics that postulates only one force rather than twenty-six, we should strongly prefer a simpler theory in ethics.  Thus, this argument counts strongly in favor of utilitarianism.

The next argument for utilitarianism is based on historical track record.  If utilitarianism were correct, we would expect utilitarian philosophers to generally be progressive, ahead of their time, and on the right side of history.  If utilitarianism were not correct, we would not expect this to be the case.  Utilitarian philosophers were indeed progressive, ahead of their time, and on the right side of history.  Thus, utilitarianism is more likely to be correct than it would otherwise be.

Utilitarian philosophers are likely to be progressive and ahead of their time.  As Utilitarianism.net (no date) argues, utilitarian philosophers were often on the right side of history.  Bentham favored decriminalizing homosexuality, the abolition of slavery, and protections for non-human animals.  Mill was the second member of parliament to advocate for women's suffrage and argued for gender equality.  Sidgwick advocated for religious freedom.  In contrast, philosophers like Kant harbored far less progressive views.  As Utilitarianism.net (no date) says: “However, Kant also defended many ideas that would be unacceptable to express today: He called homosexuality an “unmentionable vice” so wrong that “there are no limitations or exceptions whatsoever that can save [it] from being repudiated completely”. He believed masturbation to be so wrong it “exceed[s] even murdering oneself”. He argued that organ donation is impermissible and that even “cutting one’s hair in order to sell it is not altogether free from blame.” Kant stated that women, servants and children “lack civil personality” and that they are “mere underlings” that “have to be under the direction and protection of other individuals”; thus he believed they should not be permitted to vote or take an active role in the affairs of state. Further, he wrote about the killing of bastards that “a child that comes into the world apart from marriage is born outside the law” and that society “can ignore its existence (...) and can therefore also ignore its annihilation”. Finally, Kant argued for the idea of racial superiority, claiming that “humanity exists in its greatest perfection in the white race”.”

The opponent of utilitarianism could make several objections.  First, they could argue that utilitarians were sometimes on the wrong side of history, with Mill supporting colonialism.  This is true; however, compared with their contemporaries, utilitarians seem empirically to have been far more progressive.  The odds are extremely low that an incorrect moral theory would conclude that homosexuality is permissible hundreds of years before it became acceptable to even suggest as much.  Most people throughout history have harbored dreadful moral views that clash with our modern sensibilities.  The fact that utilitarians were far less accepting of these barbaric practices counts strongly in favor of utilitarianism.

Next, they could object that this does not count in favor of utilitarianism because it requires a meta-morality to decide which moral principles are true.  We can't say utilitarianism is best at getting the correct answers to moral questions without a mechanism for identifying the correct conclusions, and if that mechanism is utilitarianism itself, the argument is circular: it would merely show that utilitarianism identifies the conclusions that utilitarianism suggests.

This, however, is false.  We don't need to settle on a precise mechanism for identifying the correct moral conclusion in all cases in order to conclude that slavery is immoral, that gay people shouldn't be killed, and that women should have the right to vote.  For those who accept that the aforementioned practices are immoral, this argument for utilitarianism should hold weight.  If we can agree upon certain moral conclusions, then the moral theory that reached them first is more likely to be true.

They could finally grant that while this counts in favor of utilitarianism, it is not decisive.  This is true.  However, it is merely one part of the cumulative case for utilitarianism.

The third argument that shall be presented for utilitarianism is based on universalizing egoism.  It has 12 premises (a formal sketch of the core inference appears after the list).

1 A rational egoist is defined as someone who does only what produces the most good for themselves

2 A rational egoist would do only what produces the most well-being for themselves

3 Therefore only well-being is good (for selves who are rational egoists) 

4 The types of things that are good for selves who are rational egoists are also good for selves who are not rational egoists unless they have unique benefits that only apply to rational egoists 

5 Well-being does not have unique benefits that only apply to rational egoists 

6 Therefore only well-being is good for selves who are or are not rational egoists 

7 All selves either are or are not rational egoists 

8 Therefore, only well-being is good for selves 

9 Something is good if and only if it is good for selves

10 Therefore only well-being is good 

11 We should maximize good 

12 Therefore, we should maximize only well-being 
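Since the argument's validity is doing real work here, it may help to see the core step machine-checked.  Below is a minimal sketch in Lean, assuming hypothetical predicates `goodFor`, `wb`, and `good` introduced purely for illustration; premises 4 through 7 are folded into `p8`, and the sketch verifies only that premise 10 follows from premises 8 and 9, not that those premises are true.

```lean
-- Hypothetical predicates, introduced only for illustration:
variable {Self Thing : Type}
variable (goodFor : Thing → Self → Prop)  -- "t is good for self s"
variable (wb : Thing → Prop)              -- "t is well-being"
variable (good : Thing → Prop)            -- "t is good simpliciter"

-- Premise 8: only well-being is good for selves.
-- Premise 9: something is good iff it is good for some self.
-- Conclusion (premise 10): only well-being is good.
example
    (p8 : ∀ (t : Thing) (s : Self), goodFor t s → wb t)
    (p9 : ∀ t : Thing, good t ↔ ∃ s : Self, goodFor t s) :
    ∀ t : Thing, good t → wb t :=
  fun t ht =>
    match (p9 t).mp ht with
    | ⟨s, hs⟩ => p8 t s hs
```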

I shall present a defense of each of the premises.  

Premise 1, which states that a rational egoist is defined as someone who does only what produces the most good for themselves, is trivially true, because it is a definition.

Premise 2 states that a rational egoist would do only what produces the most well-being for themselves.  This has several supporting arguments.  

1 This seems to be common sense.  When we see someone being selfish, their behavior seems roughly to track doing what makes them happy, even if it doesn't make others happy.  Other proposed criteria for morality include rights and virtue, but it seems strange to imagine an egoist maximizing their virtue, or minimizing the risk that their rights are violated.  If they did the latter, they would spend all their time indoors in a bunker to minimize the risk of rights violations, or immediately commit suicide to foreclose the possibility of anyone ever violating their rights.  It also seems equally strange to imagine an egoist who merely tries to be as virtuous as possible, despite the personal suffering this causes.  When we analyze the concept of virtue, it seems clear that the motivation for virtue ethics is based on virtue being good for others, or good in some other abstract sense, rather than good for the virtuous agent.

2 When combined with the other premises, this shows that whatever a rational self-interested person would pursue for themselves is what should be maximized generally.  However, it would be extremely strange to maximize the other proposed criteria for morality.  Virtually no one is a consequentialist about virtue or rights, holding that we should maximize total virtue or minimize total rights violations.

3 We can imagine a situation with an agent who has little more than mental states, and it seems like things can still be good or bad for them.  Imagine a sentient plant that feels immense agony.  It seems reasonable to say that the agony it experiences is bad for it, and that if it were rational and self-interested it would try to end its agony.  However, in this imagined case, the plant has no ability to move, and all its agony is the byproduct of natural features.  Given that its misery is the result of its own genetic formation, it seems strange to say its rights are being violated, and given its causal impotence, it is unclear how it could have virtue.  Yet despite this, it could still have interests.  Similarly, an AI that is not sentient but that can act and achieve goals would not be worthy of rights, because rights only apply to beings that can experience well-being and suffering.

One could object that rights and virtue are emergent properties of well-being, such that one only gains them when one can experience well-being.  However, this is deeply implausible, because it would require strong emergence.  As Chalmers explains (Chalmers 2006), weakly emergent properties are reducible to the interactions of lower-level properties.  For example, chairs are reducible to atoms: we need nothing more to explain the properties of a chair than knowledge of how its atoms function.  Strongly emergent properties, by contrast, are not reducible to lower-level properties.  Chalmers argues that there is only one thing in the universe that is strongly emergent: consciousness.  Whether or not this is true, it illustrates the broader principle that strong emergence is prima facie unlikely.  Rights are clearly not reducible to well-being; no amount of happiness magically turns into a right.  This renders the objection deeply implausible.

4 Hedonism seems to unify the things that we care about for ourselves.  If someone takes an action to benefit themselves, we generally take them to be acting rationally if that action brings them well-being.  When deciding what food to eat, we act based on what will bring us the most well-being, and it seems strange to imagine any other criterion for deciding upon food, hobbies, or relationships.  We generally think someone is acting reasonably in being in a romantic relationship, given the joy it brings them, but if someone spent their days picking grass, we would see them as likely making a mistake, particularly if it brought them no happiness.  Additionally, the rights that we care about are generally conducive to utility: we care about the right not to be punched by strangers, but not the right not to be talked to by strangers, because only the first is conducive to utility.  We care about beauty only if it is experienced; a beautiful unobserved galaxy would not be desirable.  Even respect for our wishes after death is something we care about only insofar as it increases utility.  We don't think we should light a candle on the grave of a person who has been dead for 2000 years, even if during life they desired that the candle on their grave be lit.

5 As Neil Sinhababu argues (Sinhababu, 2010), we reach the belief in pleasure's goodness via the process of phenomenal introspection, whereby we attend to our experiences and determine what they're like.  This process is widely regarded as reliable: organisms that formed accurate beliefs about their mental states were more likely to reproduce, and psychologists and biologists generally treat it as trustworthy.

6 Hedonism provides the only plausible account of moral ontology.  While any attempt at accounting for moral ontology will be deeply speculative, hedonism at least offers one.  It seems logically possible that there are certain mental states that are objectively worth pursuing.  Much as it is possible to have vivid mental states involving colors that we as humans will never see, it seems equally possible that there could be mental states that provide us with reasons to pursue them.  It is not clear why desirable experiences are any stranger than experiences of confusion, dizziness, new imaginary colors, echolocation, or the wild experiences people have when they consume psychedelics.  Given this, there seems to be a possible way of having desirable mental states.  Additionally, there is a plausible evolutionary account of why we would have them: if certain mental states are apprehended as desirable, organisms will find them motivating, which serves as a mechanism to get organisms to do things that increase fitness and avoid things that decrease it.  On the other hand, there is no plausible evolutionary account of how rights could arise.  How would beings evolve rights?  What evolutionary benefit would accrue from them?  Only the desirability of well-being can be explained by the fine-tuning process of evolution.

7 An additional argument can be made for the goodness of well-being: a definition along the lines of “good experiences” is the only way to explain what well-being is.  Well-being is not merely a desired experience, given that we can desire bad experiences; for example, one who thinks they deserve to suffer could desire suffering.  Additionally, if we are hungry but never reflect on it or form a desire for it to stop, the hunger still causes us to suffer, even though we never actually desire that it stop.  The non-hedonist could accept this but argue that it is not sufficient to prove that well-being is desirable: it merely proves that we perceive certain mental states as desirable, not that those states are in fact desirable.  However, if the only way we can explain what well-being truly is is by reference to the desirability of its constituent experiences, this counts in favor of hedonism.  Other accounts require that we be systematically deluded about the contents of our own mental experiences.

8 Only well-being seems to possess desire-independent relevance.  In the cases discussed below against preference-based accounts, it seems clear that agents can act irrationally even while acting in accordance with their desires; a person who doesn't care about their suffering on future Tuesdays is being irrational.  This does not apply to rights.  A person who waives their right to their house and gives the house to someone else is not being irrational, and a person who gives a friend a house key, allowing them to enter without asking each time, is similarly not acting irrationally.  Rights seem to be waivable; well-being does not.

One could object that what it is rational for one to pursue is one's own desires, rather than one's own well-being.  However, this view seems mistaken.  It seems clear that some types of desires are worth pursuing and others are not.  I shall argue that only desires for well-being are worth pursuing.

1 Imagine several cases of pursuers of their own desires:

A A person who is a consistent anorexic, with a preference for a thin figure even if it results in their starving.

B A person who desires spending their days picking grass, and would prefer that to having more happiness.

C A person who has a strong preference for being in an abusive relationship, even if it minimizes their happiness.

D A person who is indifferent to suffering that occurs on the left side of their body.  They experience left-side suffering exactly the same way and apprehend it as undesirable, but they have an arbitrary aversion to right-side suffering that is infinitely greater than their aversion to left-side suffering.

It seems intuitively like these preference maximizers are being irrational.  These intuitions seem to be decisive.  

2 What should we do about a person with infinitely strong preferences?  Suppose someone has the strange view that being within 50 feet of other humans is unfathomably morally wrong, and they would endure infinite torture rather than be within 50 feet of another person.  It seems that having them be around other humans would still be less bad than inflicting infinite torture upon them.  Infinite preferences thus pose a problem for preference utilitarianism.

3 What about circular preferences?  I might prefer apples to bananas, bananas to oranges, but oranges to apples.  There is then no option such that nothing else is preferred to it, so the instruction to maximize preference satisfaction issues no verdict at all (see the sketch after this list).

4 If preferences are not linked to mental states, then we can imagine strange cases where things that seem neither good nor bad for an agent are good or bad for them according to preference utilitarianism.  For example, imagine a person trapped in an inescapable cave who has a strong preference for their country winning a war.  It seems strange that their side losing the war would be bad for them, despite their never finding out about it.  It seems equally strange to say that Marx is made worse off by communism not being implemented after his death, that Milton Friedman is made worse off by regulations passed after his death, or that slave owners were made worse off by racial equality arriving long after their deaths.

5 Imagine a dead alien civilization that had a desire that there be no other wide-scale civilizations.  If the civilization had enough members, preference views would imply that humanity has an obligation to go extinct to fulfill the aliens' preference, despite none of them ever knowing whether it was fulfilled.
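To make the circular-preferences problem from point 3 concrete, here is a minimal sketch in Python, assuming a hypothetical three-option intransitive preference relation; it shows that no option is maximal, so a preference-maximizing theory gives no guidance for such an agent.

```python
# A hypothetical intransitive preference relation:
# apples > bananas, bananas > oranges, oranges > apples.
prefers = {("apple", "banana"), ("banana", "orange"), ("orange", "apple")}
options = ["apple", "banana", "orange"]

# An option is maximal only if nothing else is preferred to it.
maximal = [x for x in options
           if not any((y, x) in prefers for y in options)]

print(maximal)  # [] -- every option has something preferred to it, so
                # "maximize preference satisfaction" issues no verdict.
```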

One might object that we should look at people's rational preferences, rather than the preferences they in fact have.  However, without further content this is circular: when analyzing what a rational self-interested person would desire, merely saying "whatever is in their rational self-interest" is not helpful.  Using well-being as the criterion, by contrast, explains why the consistent anorexic, the grass picker, and the person who desires an abusive relationship seem to harbor irrational preferences.  Similarly, preferences about states of the world after one dies seem irrational, for one cannot benefit from their satisfaction once dead.  It is hard to imagine a rational preference that does not link back to well-being; to hold this view, one would have to give a non-hedonistic account of rational preferences.

One might additionally argue that what matters is people's mental experience of having their preferences fulfilled.  Yet this seems not to be the case.  Suppose a person is addicted to a drug that is free, has no negative side effects, and is easily accessible.  Each day they prefer consuming the drug to abstaining, yet it is hard to imagine that consuming it is good for them, assuming it does not make them happy at all.

Additionally, imagine a person who, for most of their life, has had a strong desire to die a Democrat.  However, near death they convert to conservatism, knowing that their desire to be a registered Democrat upon death will not be fulfilled.  It seems it would be good for them to register as a Republican if that made them happy, even though it reduces their overall preference fulfillment.

One might object that the drug case involves a negative preference rather than a positive one: the person has no positive preference for the drug, merely a preference against missing it.  Yet this seems hard to justify.  In the drug case, they would not be harmed by not consuming the drug; their desire would merely go unfulfilled.  It nonetheless seems that the drug does not benefit them.

Conversely, it seems clear that other types of preferences are good to create.  Creating a preference for reading books that bring one immense joy is a good thing.  Preferences should be created if and only if they improve the well-being of the subject, and this refutes the negative preference objection: if one had a strong preference against going a day without reading, and the books one read brought great joy, it would be good to create that preference, even though it is a negative preference.

We might adopt the view that what matters morally is deriving well-being from things that are truly good.  This would avoid utilitarianism's counterintuitive conclusion that we should plug into the experience machine, in which we would live a simulated life we believe to be real, with more well-being, and it would avoid Sidgwick's objection that all the things that seem to be rights or virtues are just well-being-maximizing heuristics.  However, this view has a variety of issues.

For one, it is not clear what makes some source of well-being truly good.  This requires additional stipulations that utilitarianism does not need.  From first principles, there seems to be no justification for holding certain acts to be worthy of pleasure and others unworthy.  People may describe certain strange sexual acts as types of pleasure that do not truly make one better off, but it is not clear why, from first principles, there is anything less noble about strange sexual acts than about getting a massage: both produce physical pleasure for its own sake.  The intuitive difference seems explainable by our finding one of them gross and the other not, which is clearly not morally relevant.

Additionally, there are compelling counterexamples.  Suppose one gained infinite joy from picking grass; surely picking grass would make them better off.  Suppose a person was in a simulated torture chamber; surely that would be bad for them, and unless there is some fundamental asymmetry between well-being and suffering, the same principle applies to well-being: a simulated experience of well-being would still be a good thing.  It is also unclear how this view handles composite experiences of well-being from two different sources.  Suppose someone gains well-being from the combination of acquiring deep knowledge and engaging in some strange sexual act: would the well-being they got from that be morally relevant?  If so, then as long as any "impure" pleasure is connected to at least some good act, such as knowledge acquisition or careful introspection, it would count as morally good.  Or suppose one was in the experience machine but exercised wisdom and virtue there: would their well-being then be good for them?  This shows the practical troubles with such a view.

If we say that well-being is not good at all absent virtue, then it would be morally neutral to make people already in the experience machine achieve far less well-being than they otherwise would.  This is a very counterintuitive view.  Additionally, on this view we must accept one of two further conclusions.  If the suffering of people in the experience machine is morally bad but their well-being is not morally good, then giving them five million units of well-being and one unit of suffering would be morally bad, because it brings about something bad but nothing good.  This is a very difficult pill to swallow.  If instead neither their well-being nor their suffering is morally relevant, then it would be morally neutral to cause them to suffer immensely (setting aside issues of consent).

This broader point can be illustrated with an example.  Imagine a twin Earth, identical to our own, except that no one has a preference against being in the experience machine; to them, only their mental experiences matter, and it makes no difference whether those experiences are simulated.  It seems that in this world there would be nothing wrong with plugging into the experience machine.  The only reason plugging in seems objectionable is that most people have a preference against it and find the thought distressing.

Additional objections can be given to the experience machine (Lazari-Radek, 2014).  Several factors count against our intuitions about it.  First, there is widespread status quo bias.  As Singer explains: “Felipe De Brigard decided to test whether the status quo bias does make a difference to our willingness to enter the experience machine. He asked people to imagine that they are already connected to an experience machine, and now face the choice of remaining connected, or going back to live in reality. Participants in the experiment were randomly offered one of three different vignettes: in the neutral vignette, you are simply told that you can go back to reality, but not given any information about what reality will be like for you. In the negative vignette, you are told that in reality you are a prisoner in a maximum-security prison, and in the positive vignette you are told that in reality you are a multi-millionaire artist living in Monaco. Of participants given the neutral vignette, almost half (46 per cent) said that they would prefer to stay plugged into the experience machine. Among those given the negative vignette, that figure rose to 87 per cent. Most remarkably, of those given the positive vignette, exactly half preferred to stay connected to the machine, rather than return to reality as a multi-millionaire artist living in Monaco.”  This strongly counts against the conclusion that we have an intrinsic preference for reality.  Additionally, there is a strong evolutionary reason for organisms to prefer actually doing things in the real world rather than wireheading.

A final point: the preference for the real can itself be explained on a utilitarian account, since preferring the real tends to maximize well-being.  In cases where it does not, this intuition seems to fade; it is counterintuitive to think there would be something wrong with plugging into a virtual reality game for a short time, because that is something we have familiarity with, and it tends to maximize well-being.  Thus, premise 2 seems to be on firm ground.

Premise 3 says therefore only well-being is good (for selves who are rational egoists).  This follows from the previous premises.  

Premise 4 says that the types of things that are good for selves who are rational egoists are also good for selves who are not rational egoists, unless they have unique benefits that only apply to rational egoists.  This is close to trivial: whether something is good for a self should not depend on whether that self happens to be a rational egoist, and the premise's "unless" clause simply makes the only possible exception explicit.

Premise 5 says that well-being does not have unique benefits that only apply to rational egoists.  This premise is deeply intuitive.  It makes no difference to our judgment of the goodness of the joys of friendship, soup, and enlightenment whether we are rational egoists.

Premise 6 says that, therefore, only well-being is good for selves who are or are not rational egoists.  This follows from the previous premises.

Premise 7 says that all selves either are or are not rational egoists.  This premise is trivially true.

Premise 8 says that, therefore, only well-being is good for selves.  This follows from the previous premises.

Premise 9 says that something is good if and only if it is good for selves.

This claim is hard to deny.  It seems hard to imagine something being good while being good for literally no one.  If things can be good while being good for no one, there are several difficult entailments one would have to accept.

1 A universe with no life could have moral value, given that things could be good or bad while being good or bad for no one.  The person who denies premise 9 could claim that good things must relate to people in some way despite not being directly good for anyone, yet this would be ad hoc, and a surprising result for someone who denies the premise.

2 If something could be bad while being bad for no one, then galaxies full of people experiencing horrific suffering for no one's benefit could constitute a good state of affairs relative to one where everyone is happy and prosperous but where things that are bad for no one, yet bad nonetheless, exist in vast quantities.  For example, suppose we take the violation of rights to be bad even when it is bad for no one.  A world where everyone violated everyone else's rights unfathomably many times, in ways that harm literally no one, but where everyone prospers, could, given enough violations, be morally worse than a world in which everyone endures the most horrific forms of agony imaginable.

3 Those who deny this principle usually do so not because the principle sounds implausible, but because it rules out other things they think matter.  I shall argue that those things don't matter.

First, people often argue for retributivism.  This runs into several problems.

1 It runs into the second issue discussed above.  If it is good to punish bad people, then we should trade off a certain amount of pleasure for the punishment of bad people.  To pick a metric, suppose a year of punishment for a bad person is good enough to offset an amount of suffering equivalent to one punch in the face.  If this is true, then googolplex bad people being punished for a year each, combined with as much suffering of benevolent people as a googol holocausts, would be better than a world where everyone, including the unvirtuous bad people, is relatively happy.  Given the absurdity of this, we have reason to reject the view.

The retributivist may reply that retributive value declines, such that punishing one bad person is worth a punch in the face, but repeated punches in the face eventually outweigh any number of punishments of bad people.  However, this is implausible, given the mere addition paradox.  It seems clear that one torture can be offset by several slightly less unpleasant tortures, each of which can be offset by several even less unpleasant tortures.  This process can continue until we get a large number of "tortures," each equivalent in pain to a punch in the face, being collectively worse than the original torture.  If the number of bad people punished is large enough, their punishment could thus outweigh the badness of horrifically torturing galaxies full of people.

They could bite the bullet; however, the resulting view is so counterintuitive that we have decisive reasons to reject it.  There are other issues with retributivism.

2 (Kraaijeveld 2020) has argued for an evolutionary debunking of retributivism.  It is extremely plausible that we have evolutionary reasons to want to deter people from doing bad things, so it is unsurprising that we feel angry at bad people and want to harm them.  Our retributive intuitions can thus be explained without supposing that they track any moral truth.

3 There is an open question of how exactly we determine whom to punish.  Do we punish people for doing bad things?  If so, should we punish politicians who do horrific things as a result of bad ideas?  Would an idealistic communist leader who brings their country to ruin be worthy of harm?  If punishment is based on motives, should we punish egoists who only do what makes them happy, even when they help other people for selfish reasons?  If we only punish those who possess both bad acts and bad motives, would we not have to spare Nazis who truly believed they were acting for the greater good?  Additionally, should we punish people who think eating meat is immoral but eat it anyway?  If so, we'd punish a large percentage of people.

4 Our immorality is largely a byproduct of chance.  Many serial killers would likely not have been serial killers had they been raised in a different family, and many violent criminals would not have been violent had there not been lead in their water.  Is it truly just to punish people for things outside their control that are causally responsible for their crimes?  As history suggests, had we lived in Nazi Germany, we would likely have been Nazis, and in scenarios similar to the Stanford prison experiment, we would do horrible things.  Most people would do horrible things in certain circumstances.

Second, people will often argue for rights.  This runs into its own issues.

1 It seems a world without any rights would still matter morally.  For example, imagine a world of sentient plants that can't move, where all harm is the byproduct of nature.  It seems bad for the plants to be harmed, even though no rights are violated.

2 Everything that we think of as a right is reducible to well-being.  For example, we think people have the right to life, and the right to life increases well-being.  We think people have the right to keep others out of their house, but not the right to stop others from looking at their house.  The only difference between shooting bullets at people and shooting sound waves at them (i.e., making noise) is that one causes a lot of harm and the other does not.  Additionally, we generally think it a violation of rights to create pollution so vast that a million people die, but not a violation of rights to light a candle that kills no one; the difference is just the harm caused.  Moreover, if recognizing as rights things we currently don't treat as rights began to maximize well-being, we would come to think they should be recognized as rights.  For example, we don't think it is a violation of rights to look at people, but if every time we were looked at we experienced horrific suffering, we would think there was a right not to be looked at.

3 If we accept that rights are fundamentally morally significant, then there is some number of rights violations that could outweigh any amount of suffering.  For example, suppose there are aliens who will experience horrific torture that gets slightly less unpleasant for every human leg they grab, without the humans' knowledge or consent, such that if they grab the legs of 100 million humans, the aliens will experience no torture.  If rights are fundamentally significant, then the aliens grabbing the humans' legs, in ways that harm no one, would be morally bad: the sheer number of rights violations would outweigh the torture.  However, this doesn't seem plausible.  It seems implausible that aliens should have to endure horrific torture so that we can preserve our magic rights-based forcefields from infringements that produce no harm for us.  If rights matter in themselves, a world with enough rights violations where everyone is happy all the time could be worse than a world where everyone is horrifically miserable all the time but where no rights are violated.

4 Denying that rights matter fundamentally is not especially counterintuitive, and it does not rob us of our understanding of or appreciation for rights.  It can be analogized to the principle of innocence until proven guilty.  That principle is not literally true: a person's innocence until demonstration of guilt is a useful legal heuristic, yet a serial killer is guilty even if their guilt has not been demonstrated.

5 An additional objection can be given to rights.  We generally think that it matters more not to violate rights oneself than to prevent others' rights violations: we intuitively think that we should not kill one innocent person even to prevent two murders.  Yet preventing a murder is no more morally significant than preventing any other death; a doctor should not try harder to save a person who has been shot than to save a person with a disease not caused by malevolent actors.  I shall give a counterexample to the anti-violation view.  Suppose people stand in a circle, each holding two guns, with each gun aimed at a neighbor.  Each person can either prevent two other people's guns from firing, or prevent one of their own guns from shooting its one target.  If one's foremost duty is to avoid violating rights oneself, each person is obligated to disable their own gun.  However, if everyone disables one of their own guns rather than two of other people's, everyone in the circle ends up shot, whereas if it is more important to save as many lives as possible, each person disables two other guns and no one is shot.  The first world seems clearly worse (a toy count of this case appears after objection 7 below).

Similarly, if what is bad is one's own violating of rights, then one should try to prevent one's own rights violations at all costs.  If that's the case, then if a malicious doctor poisons someone's food and then realizes the error of their ways, the doctor should try to prevent that person from eating the food, even at the expense of other people being poisoned in ways not caused by the doctor.  If the doctor can either stop one person from eating food they poisoned or stop five other people from eating food poisoned by others, they should stop the one person.  This seems deeply implausible.

6 It is very difficult to find a criterion for assigning rights that is unlinked from well-being.  We think it is a violation of rights to shoot people but not to make noise; yet if making noise caused as much suffering as shooting people, we would think it violated rights too.

7 We have decisive scientific reasons to distrust the existence of rights, which is an argument for utilitarianism generally.  As Greene et al. argue, “A substantial body of evidence indicates that utilitarian judgments (favoring the greater good) made in response to difficult moral dilemmas are preferentially supported by controlled, reflective processes, whereas deontological judgments (favoring rights/duties) in such cases are preferentially supported by automatic, intuitive processes.”

People with damaged VMPCs (the ventromedial prefrontal cortex, a brain region responsible for generating emotions) were more utilitarian (Koenigs et al 2007), suggesting that emotion is responsible for non-utilitarian judgments.  While there is some dispute about this thesis, a meta-analysis (Fornasier et al 2021) finds that better and more careful reasoning results in more utilitarian judgments across a wide range of studies.  The authors write: “The influential DPM [dual-process model] of moral judgment makes a basic prediction about individual differences: those who reason more should tend to make more utilitarian moral judgments.  Nearly 20 years after the theory was proposed, this empirical connection remains disputed. Here, we assemble the largest and most comprehensive empirical survey to date of this putative relationship, and we find strong evidence in its favor.”
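The circle case from objection 5 can be made concrete with a toy count.  This is a sketch under the natural reading of the case, with hypothetical numbers: N people, each controlling two guns aimed at neighbors, where each person either disables one of their own guns or disables two guns belonging to others.

```python
# Toy count for the circle case: N people, 2 * N guns in total,
# each gun aimed at somebody.

N = 100

def guns_still_firing(disabled_per_person: int) -> int:
    """Guns left firing if every person disables this many guns."""
    return 2 * N - disabled_per_person * N

# Everyone prioritizes not violating rights themselves: each disables
# one of their own guns, leaving N guns firing -- everyone gets shot.
print(guns_still_firing(1))  # 100

# Everyone prioritizes minimizing shootings: each disables two guns
# belonging to others, leaving no guns firing -- no one gets shot.
print(guns_still_firing(2))  # 0
```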

Premise 10 says that, therefore, only well-being is good.  It follows from the previous premises.

Premise 11 says that we should maximize good.

This can be supported through the following argument:

1 If something is good, this gives us a reason to pursue it.

2 The most good thing gives us the most reason to pursue it.

3 We should pursue what we have the most reason to pursue.

Therefore, we should pursue the most good thing.

This is deeply intuitive.  When considering two options, it is better to make two people happy than one, because doing so is more good.  "Better" is a synonym of "more good," so if an action produces more good, it is better that it is done.

If there were other considerations that counted against doing good things, those considerations would themselves be bad, and thus would still reduce to considerations of goodness.

Additionally, as Parfit has argued (Parfit 2011), what makes things go best coincides with what everyone could rationally consent to and what no person could reasonably reject.  Parfit made this argument in the context of rules, but it applies equally to acts.

Premise 12 says that, therefore, we should maximize only well-being.  This follows from the previous premises.

Section 2: the fourth argument

This section will seek to demonstrate the moral correctness of utilitarianism by arguing that the solutions utilitarianism reaches on a series of moral dilemmas are correct.  This provides a strong inductive argument for utilitarianism.

P1 After thinking deeply through difficult thought experiments, the conclusions we come to are likely to be correct.

P2 After thinking deeply through difficult thought experiments, the conclusions we come to are almost always consistent with the utilitarian line of reasoning.

Therefore, the conclusions that are likely to be correct correspond with the utilitarian line of reasoning.

Therefore, utilitarianism is likely correct.

One could object that if we come to discover those moral truths for reasons divorced from utilitarian reasoning, then this merely proves that correct reasoning often corresponds with utilitarian reasoning; utilitarianism would not be the explanation of these conclusions.

However, this view seems mistaken.  While it is true that the proximate reason for each conclusion is not a utilitarian justification, if utilitarianism is consistently able to predict the correct view, that gives us strong reason to believe utilitarianism is correct.  If in each individual situation some principle compels a person toward the utilitarian verdict, utilitarianism's ability to predict the existence of those principles serves as a strong inductive argument for it.  It would similarly count strongly in favor of string theory if it consistently made reliable predictions and gave an elegant account of the movement of all objects in the universe, even if those objects' movement were caused by a force that could be, but wasn't necessarily, emergent from string theory.

Part 1: The Trolley Problem

In this part I will defend the view that one ought to flip the switch in the trolley problem.  The utilitarian defense is straightforward: flipping the switch saves the most lives, and thus makes the world maximally better.  However, there are a series of principles to which one could appeal to argue against flipping the switch.  In this section, I will argue against them.

The first view shall be referred to as view A.  

According to view A, it is impermissible to flip the switch, because a person ought not take actions that directly harm another without their consent.  

However, this view seems wrong for several reasons.

First, this view entails that one ought not steal a dime in order to save the world, which seems intuitively implausible.

Second, this view is insufficient to demonstrate that one ought not flip the switch unless one holds a view of causality that excludes negative causality, i.e., a view on which harm caused by inaction is not a direct harm caused by action.  Given that not flipping the switch is itself a choice, one needs a view of direct causation that excludes failures to act.  However, such a view is suspect for two reasons.

First, it is not clear that any such view is coherent.  One could naively claim that a person cannot be held accountable for inaction, yet all actions can be expressed in terms of not taking other actions: murder can be described as merely failing to take any action in the set of all actions that do not include murder.  On this redescription, one could argue that murder is permissible, given that punishing murder would merely be punishing a person's failure to take actions in the aforementioned non-murder set.

One could take a view, call it view A1, which states that:

A person's action is the direct cause of a harm if their becoming unconscious during that action would have prevented that harm.

This view would allow a person to argue that one is the direct cause of harm in the trolley problem only if one flips the switch.  However, this view has quite implausible implications.

It would imply that if a person driving a car sees a child in the road ahead, fails to stop, and runs over the child, they are not at fault, because failing to stop is merely inaction, not action.  It would also imply that if a heavy shed were about to fall on five people, yet could be stopped by the press of a button, a person who failed to press the button would not be at fault.  Third, it would imply that people are almost never the cause of car-related mishaps, because things would surely have gone worse had the driver been unconscious.  Fourth, it would entail that a person is not at fault for failing to intervene during a trust-fall exercise over metal spikes.  Finally, it would entail that if a person were murdering another using a machine that drives a metal spike slowly into the victim until a button is pressed to stop it, they would be at fault only for initially starting the machine, and would not be acting wrongly by allowing it to continue its ghastly work.

This view is also illogical.  Inaction is merely the absence of an action; it is implausible that the moral status of one's conduct would be determined by which harms would have been avoided had one been rendered unconscious.

Another view could claim that a person causes an event if, had that person not existed, that event wouldn't have occurred.  However, this is still subject to the second objection above, and to several others.

First, it would imply that an actress is at fault if a person, enraged at the shallowness of their own life after seeing her act in a movie, commits a murder.  It would also imply that almost all of Hitler's mother's classmates were responsible for the Holocaust, assuming each of them affected her life enough that, without them, her baby would have been different enough to prevent the Holocaust.  Finally, it would imply that Jeffrey Dahmer would act wrongly by donating to charity, because his non-existence would have avoided harm.

To resolve these issues, one could add a notion of predictability: a person is at fault if their non-existence could predictably have been expected to prevent the harm.  This would still imply that if a heavy shed were about to fall on five people, yet could be stopped by the press of a button, a person who failed to press the button would not be at fault.  Furthermore, it is subject to two unintuitive modified trolley problems.

First, it implies that if a train were headed toward five people, and one could flip a switch so that it would run over no one, one would not be at fault for failing to flip the switch.

Second, suppose one is presented with the initial trolley problem, but trips and accidentally flips the switch.  Should one flip the switch back to its initial state, such that five people die rather than one?  This view implies that one should.  However, this seems unintuitive; it should not matter whether the trolley's current course was initially caused by a person's existence.

If one accepts the view that the switch should be flipped back, one must also accept that in most situations, if a person's non-existence would likely result in the same type of harm, they are not at fault.  For example, if most doctors would harvest the organs of one person to save five, then a particular doctor would not be blameworthy for doing so, because if they didn't exist, another doctor would have done the harvesting.  It would also imply that if the majority of people would flip the switch, and if this person weren't present another person would be, then one ought to flip the switch.

It would also imply that if, as a child, I had accidentally brought about the demise of Jeffrey Dahmer, it would be permissible for me to kill all the people Dahmer would otherwise have killed, assuming I could know that he would have gone on to murder many people and who his victims would have been.

Thus, the view could be revised to the following:

A2 A person's action is responsible for a harm if, for at least one person, there is greater harm in the actual world than in a world where the agent ceased to exist the instant before taking the action.

For example, on this view Jim murdering a child would be immoral because, after the murder, there is harm to at least one person (the child) that would have been prevented by Jim's non-existence: had Jim stopped existing the instant before he acted, there would be a person who would not have suffered.

Yet this view also seems implausible.

First, it sacrifices a good deal of simplicity and elegance.  The complexity and ad hoc nature of the principle give us reason to distrust it relative to a simpler principle.

Second, it would entail that if person A came across person B murdering person C using a machine that drives a metal spike slowly into C until a button is pressed to stop it, person A would not be at fault for failing to press the button.

Third, it would still entail that it is permissible for person B to fail to press the button, given that B's non-existence the moment before would have resulted in the same outcome.

Therefore, we have decisive reasons to reject this view.

Instead, utilitarianism provides a desirable answer to the trolley problem: one should flip the switch, because doing so maximizes well-being.

Therefore, utilitarianism is 1-0 in terms of according with the view we would reach upon reflection.

Part 2: Torture vs. Dust Specks

If given the choice, which should one prevent: one innocent person from being horrendously tortured, or 10!!! people (assuming we lived in a universe with that many people) from getting slightly irritating dust specks in their eyes that are forgotten about five seconds later?

The common sense view is that one should prevent the torture.

The common sense view, I shall argue, is wrong.

One should prevent the dust specks: there must be some number of dust specks that can outweigh the badness of torture.

However, against the view that we should prioritize preventing the dust specks, one could defend the following view, which I will refer to as the lexical difference view.

According to the lexical difference view, there are some types of suffering so heinous that no amount of lesser suffering can ever outweigh them.

However, this view seems clearly mistaken.  Suppose we were deciding between one torture causing 1000 units of pain (which we'll take to be the amount of pain caused by tortures on average) and 1000 tortures each causing 999 units of pain.  The thousand tortures seem clearly worse.  Now we can repeat the process: which would be worse, 1000 tortures with 999 units of pain, or one million tortures each with 998 units of pain?  This process can continue until we conclude that some vast number of "tortures," each inflicting as little misery as a speck of dust, is worse than one torture causing 1000 units of pain.  To hold that there is a lexical difference between types of pain, one would have to hold that there is some threshold of pain with the odd characteristic of being the cutoff point: any tiny amount of suffering above the cutoff outweighs any amount of suffering below it.  For example, if one claims that the cutoff is at an amount of pain equivalent to stubbing one's toe, then one must claim that infinitely many people experiencing pain one modicum below a toe stub is less bad than one person having a 1 in 100 quadrillion chance of experiencing pain one modicum above a toe stub.
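The structure of this spectrum argument can be made explicit with a short sketch.  The numbers are purely illustrative assumptions: pain in integer "units" from 1000 down to 1, and a 1000-to-1 trade at each step.

```python
import math

# Start with one torture at 1000 units of pain; at each step, trade every
# instance of pain n for 1000 instances of pain n - 1, which the argument
# takes to be at least as bad in aggregate.
count, intensity = 1, 1000
while intensity > 1:
    count *= 1000
    intensity -= 1

# After 999 trades: 1000**999 dust-speck-level experiences.  If each trade
# preserves or increases total badness, transitivity implies this many
# specks is worse than the original torture -- the utilitarian verdict.
print(f"~10^{math.log10(count):.0f} experiences at intensity {intensity}")
```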

One could hold a separate, liberty-based view, according to which the torture outweighs the specks because it involves violating people's rights, which categorically matters more.  Yet this view is wrong.  If given the choice between preventing one person from having 5 dollars stolen, which would clearly be a liberty violation, and getting rid of all deadly diseases, it seems clear that preventing the diseases matters more than preventing one small liberty violation.  There is a reason we spend more money trying to eradicate disease than trying to prevent particular individuals from having their belongings stolen; if preventing theft mattered categorically more than eradicating disease, we should devote no money to disease until all theft had been eradicated.  Thus, if natural phenomena like disease can matter more than liberty violations, other natural phenomena like dust specks can also matter more.

One could hold that these types of suffering are so different that they can't be compared, yet this view is also wrong.  The pain of losing a loved one is of a very different type from the pain of a toe stub, yet it is clear that losing a loved one is worse.  Difference does not entail incomparability.  Thus, the torture vs. dust specks question favors utilitarianism and demonstrates its ability to get the correct answer to difficult questions.

Part 3: The (inaptly named) Repugnant Conclusion

The repugnant conclusion is an argument given against utilitarianism.  It notes that, by the lights of utilitarianism, there is necessarily some number of people with lives barely worth living (10^40, let's say) whose existence would make the world better than trillions of people living great lives.  There is some number of people whose lives consist of getting a backrub and then disappearing that possesses more moral worth than quadrillions of people living unimaginably good lives.  Many people find this counterintuitive.  However, as I shall demonstrate, other views are more counterintuitive.  I owe many of these ideas to (Huemer 2008), who wrote an excellent paper on the subject.

We can take an approach similar to the previous section's.  Suppose we have one person who is extremely happy all of the time and lives 1000 years.  Surely it would be better to create 100 people with great lives who live 999 years than one person who lives 1000 years.  We can now repeat the process: 100,000 people living 998 years would surely be better than 100 living 999 blissful years.  Once we get down to one day of life for some ungodly number of people (10^100, for example), we can continue down to hours and minutes.  In order to deny the repugnant conclusion, one would have to argue for something even more counterintuitive, namely that there is some firm cutoff.  Suppose the firm cutoff is at one hour of enjoyment.  One would then have to say that infinitely many people having 59 minutes of enjoyment matters far less morally than one person having a one in a billion chance of having an hour of enjoyment.  One might reply that the types of enjoyment are incomparable and different; however, as we saw with the torture argument, different types of pleasures are comparable.  It is clear that the most blissful physical sensations matter more than learning one relatively trivial fact that barely improves quality of life, and the harm of never being able to experience any positive physical sensation outweighs that of having one interesting book stolen that would otherwise have been read and enjoyed.
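The same chain can be sanity-checked numerically.  This sketch assumes, purely for illustration, that welfare aggregates as people times years of bliss, and that each step trades N people living Y years for 100 times as many people living Y minus 1 years:

```python
# Each trade multiplies the population by 100 while shortening lives by
# less than 0.1%, so aggregate welfare strictly grows at every step.
people, years = 1, 1000
total = people * years
while years > 1:
    people, years = people * 100, years - 1
    assert people * years > total  # every single trade is an improvement
    total = people * years

# 999 trades later: 10^1998 people, each living a single year.
print(f"10^{len(str(people)) - 1} people at {years} year(s) each")
```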

Another argument can be made for the conclusion.  Most of us would agree that one very happy person existing would be worse than 7 billion barely happy people existing.  If we take that comparison and iterate it 1 trillion times, we conclude that 7×10^21 people with barely happy lives matter more morally than 1 trillion people with great lives.  To deny this, one would have to claim that there is some moral significance to the sheer number of people involved, such that the moral picture changes when the comparison is iterated a trillion times.  Yet this seems extremely counterintuitive.  Suppose we were to discover that there were large numbers of happy aliens with whom we can't interact.  It would be strange for that discovery to change our conclusions about population ethics.  The morality of bringing about new people with varying levels of happiness should not be contingent on causally inert aliens.  This suggests that our anti repugnant conclusion intuitions fall prey to a series of biases.  

1 We are biased, having won the existence jackpot.  A non existing person who could have lived a marginally worthwhile life would perhaps take a different view.  

2 We have a bias towards numbers of people roughly similar to the numbers who exist today.  

3 Humans are bad at conceptualizing large numbers.  

Numerous theories have been generated to attempt to avoid the repugnant conclusion.  However, these theories all have deeply counterintuitive implications.  Even if we grant that the repugnant conclusion is as implausible as the views that reject it, the pattern is telling: whenever we reject a utilitarian conclusion, upon reflection we either come to accept it, or find that rejecting it carries implications so counterintuitive that the whole situation gets labeled a paradox.  That pattern certainly counts in favor of utilitarianism.  

Thus, the repugnant conclusion should be renamed the nice and pleasant conclusion.  

Part 4: The far future 

Utilitarianism holds that we should give immense weight to the well-being of future people, given the vast number of future people who are likely to exist and their immense capacity for value.  This section shall argue that we have a duty to bring happy people into existence, a duty that is immediately predicted by utilitarianism yet either mistakenly rejected by, or added ad hoc to, non utilitarian views.  If we reject that it is good to bring about happy people, we get several counterintuitive conclusions.  

A We would have to conclude that there's no moral difference between bringing about a person who experiences TREE(3) units of happiness (TREE(3) is a very large number) and bringing about one who experiences no happiness in their life, assuming neither has any suffering.  This is already an implausible view, yet it can be used to derive an even more implausible one.  Suppose we now add a bit of suffering to both lives.  Given that the cases were previously equal in value, adding the same amount of suffering to both sides should leave them equal in value.  Thus, this view would entail that pressing a button that deprives all future people of happiness is morally neutral, which is a profoundly counterintuitive conclusion.  Suppose one says there's no moral difference between one person with TREE(3) units of happiness and one person with no happiness, but that there is a morally relevant difference between those two people once each is given 60 units of suffering.  It's unclear on what grounds one could reject the symmetry, given that the same operation is performed on both sides.  Yet even if we accepted that rejection as justified, it would still run into a host of unintuitive consequences.  If we say that bringing about an unfathomably happy person who has 60 units of suffering is worse than bringing about no one, that is already a counterintuitive conclusion.  If we instead say that bringing into existence a person with TREE(3) units of happiness and 60 units of suffering is better than non existence, then, since the view already entailed that there's no difference between non existence and TREE(3) units of happiness alone, we'd have to say that TREE(3) units of happiness with 60 units of suffering is better than TREE(3) units of happiness by itself.  If that were true, then bringing about future happy people would be good after all, because they're likely to have some suffering, which apparently actualizes the desirability of their pleasure.  

B This view would say that having a child with a 99.99% chance of unfathomable bliss but a 0.01% chance of agony is morally bad.  If bringing about people in agony is bad, but bringing about happy people is not good, then an action that might bring about future agony would be bad, even if it would probably bring about a happy person.  One could take the view that the action would be morally neutral, because there's nothing morally bad about bringing about a suffering person.  However, this is deeply counterintuitive.  It seems clear that a mother would be acting wrongly if she deliberately had a child who would live a life of immense agony.  This is why, as a society, we prohibit drinking while pregnant.  We have good reason to believe that future people have interests that ought to be respected.  

One could take the view, as many economists do, that we ought to discount the future.  However, this view runs into difficulty.  If we discount the future at a rate of 2% per year, we'd have to hold that one person's death today has more moral significance than roughly 400 million deaths a thousand years from now (1.02^1000 ≈ 3.98 × 10^8), which is deeply counterintuitive.  
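
A quick calculation makes the scale of the problem concrete; this is a minimal sketch assuming simple exponential discounting at a constant annual rate:

```python
# How many future deaths a constant discount rate equates to one death
# today: at 2% per year the discount factor compounds to about 4 x 10^8
# over a millennium, so one present death "outweighs" roughly 400
# million deaths a thousand years hence.
def equivalent_future_deaths(rate: float, years: int) -> float:
    return (1 + rate) ** years

print(equivalent_future_deaths(0.02, 1000))   # ~3.98e8
print(equivalent_future_deaths(0.02, 2000))   # ~1.59e17, absurdly larger
```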

One could take the view that for something to be bad, it must be bad for some particular person.  However, this view runs into problems as well.  If the government passes climate legislation that makes the lives of people in 100 years better, the people who exist then will be different people, because climate policy will change who is born.  Thus the climate policy will be better for no particular individual.  However, it seems clear that we still have reason not to cause suffering to future people, merely because the future population would have a different composition if we improved their lives.  

Thus utilitarianism correctly identifies the overwhelming importance of shaping the far future.  This again counts in favor of utilitarianism.  

Part 5: Harvesting organs 

By the lights of utilitarianism, we ought to kill one person and harvest their organs if doing so would save five people.  Yet this strikes people as deeply counterintuitive.  I shall argue that our initial intuition is wrong and can be explained away.  

First, there’s a way to explain it away sociologically.  Rightly as a society we have a strong aversion to killing.  However, our aversion to death generally is far weaker.  If it were as strong we would be rendered impotent, because people die constantly of natural causes.  Thus, there’s a strong sociological reason for us to regard killing as worse than letting people die.  However, this developed as a result of societal norms, rather than as a result of accurate moral truth tracking processes.  This intuition about the badness of killing only exists in areas where killing to save people is usually not conducive to well-being.  Many of us would agree that the government could kill an innocent person in a drone strike, to kill a terrorist who would otherwise kill ten people.  The reason for the divergence in intuitions is that medical killings are very often a bad thing, while government killings via drone strikes are often perceived to be justified. 

Second, as was pointed out in section one, we have reason to distrust our non consequentialist intuitions.  Our intuition about the badness of killing one to save five is largely an automatic revulsion to killing, rather than a well reasoned moral analysis.  Upon reflection, I have found that my intuition about the badness of killing one to save five has disappeared.  

Third, we have good reason to answer no to the question of whether doctors should kill one to save five.  A society in which doctors violated the Hippocratic oath and regularly killed one person to save five would be a far worse world.  People would be terrified to go to doctors' offices for fear of being murdered.  Cases of one person being murdered to save five would be publicized by the media, resulting in mass terror.  While the thought experiment stipulates that the doctor will certainly not be caught and that the killing will occur only once, our revulsion to very similar and more easily imagined cases explains our revulsion to killing one to save five.  It can also reasonably be argued that things would go worse if doctors had the disposition to kill one to save five.  Given that a utilitarian's goal is to take the acts, and follow the principles, that make things go best in the long term, a principle forbidding this act can be justified on utilitarian grounds.  

Fourth, we can imagine several modifications of the case that make the conclusion less counterintuitive.  

First, imagine that the six people in the hospital were your family members, all of whom you cared about equally.  Surely we would intuitively want the doctor to bring about the death of one to save five.  The only reason we have the opposite intuition in the case where family is not involved is that our revulsion to killing can override other considerations when we feel no connection to the anonymous, faceless strangers whose deaths are caused by the doctor's adherence to the principle that they oughtn't murder people.  

It could be objected that even with family members the intuition is the same.  Yet this doesn't seem plausible, particularly if no one had any knowledge of the doctor's action.  If no one knew that the doctor had killed the one to save the other five, surely it would be better for this to happen.  One family member dying would clearly be less bad than five family members being killed.  

It could be objected that adding in family makes the decision making worse by introducing personal biases.  Yet this is not so.  Making the case more personal forces us to think about it in a more personal way.  It is very easy to neglect the interests of the affected parties when we don't care much about them.  Making the case entirely about close family members matters precisely because we care about family.  If we care about what is good for our family, then making the situation entirely about our family is a good way to figure out what is good, all things considered.  Yet this is not the only case that undercuts the intuition.  

Second, suppose that a doctor was on their way to a hospital with organs for transplant that would save five people who would otherwise die.  On the side of the road, they see a murder that they could prevent, but preventing it would require a long delay that would cause the death of the five people in the hospital.  It seems clear that the doctor should continue to the hospital.  Thus, when we simply weigh the badness of allowing five to die against one murder, the five deaths outweigh the murder.  

Third, imagine that 90% of the world needed organs, and that each remaining person's organs could be harvested to save nine others, who would go on to live perfect lives.  It seems clear that it would be better to harvest the ten percent than to let the other 90% die.  

The purpose of all of these cases is not to prove definitively that we have reason to harvest organs.  Rather, the purpose is merely to undercut the intuition about the definitive badness of harvesting organs.  We may be uncertain about the organ case.  If we are uncertain, then we have most reason to support utilitarianism if it best accounts for our other intuitions and the views we'd reach upon reflection.  

Finally, let’s investigate the principle behind not harvesting organs.  

We could adopt the view NK, which says that one ought not kill innocent people.  Yet NK is clearly subject to many counterexamples.  If the only way to stop a terrorist from killing a million people were to kill one innocent person, we should surely kill the innocent person.  And most people would agree that if you could kill one person and harvest every cell in their body to save a million people, that action would be permissible.  

We could adopt the view NKU, which says that one ought not kill unless there is an overriding concern involving vast numbers of people.  Yet this view also runs into a problem.  

The intuition differs depending on the context in which a person is killed.  A terrorist using a human shield who is about to kill five people could permissibly be killed, yet it seems less intuitive to kill a person to harvest their organs.  Thus, the badness of killing is context specific.  This adds credence to the utilitarian view: the contexts in which killing seems permissible generally track whether killing in most similar cases would make things go best. 

We could take the view DSK, which says that doctors shouldn't kill.  However, this view is once again easily explainable sociologically: it is very good for society that doctors don't generally kill people.  But as a fundamental moral principle it makes little sense.   

We can consider a similar case that doesn't seem to run against the obligations of a doctor.  Suppose that a doctor is injecting patients with a drug that cures disease in low doses but kills in high doses.  Midway through an injection, they realize that they've given lethal doses to five other patients in another room.  The only way they can save the five is by leaving immediately, which will allow the ongoing injection to give their current patient too high a dose, killing them.  It seems intuitive that the doctor should save the five rather than the one.  

One might object that the crucial difference is that the doctor is killing, rather than merely failing to save.  However, we can consider another case, where the doctor realizes they've given a placebo to five people rather than life saving medicine.  The only way to give the life saving medicine is to abandon the room the doctor is in, much as in the previous example.  It seems very much like the doctor should go to the other room, even though doing so will result in a death caused by their injection.  The cause of the lethal condition shouldn't matter to what they should do.  As Shelly Kagan has argued (Kagan 1989), there is no plausible doing vs allowing distinction that survives rigorous scrutiny.  Given the repeated failure to generate a plausible rival theory, we have reason to accept the utilitarian conclusion.  

Additionally, imagine a situation in which people were frequently afflicted by flying explosives, each of which would blow up and kill five surrounding people unless its victim was killed first.  In a world where that happened frequently, it starts to seem far less intuitive that we shouldn't kill one to save five.  

Thus, the organ harvesting case seems, once again, to count in favor of utilitarianism.  Suppose we accept the conclusions so far, and suppose, generously, that the odds are 40% that an incorrect theory would, after reflection, be judged to have the correct conclusion about any particular case that initially appears counterintuitive.  Even on these inflated assumptions, the odds are only 0.4^5 ≈ 1.024% that utilitarianism would consistently come to the correct conclusions about these cases.  

Part 6: Should we be voluntarily conquered by the utility monster?  

First, as (Chappell 2021) has argued, the intuition flips with a negative utility monster.  Suppose there were a utility monster who experienced trillions of times more suffering than any human.  It seems intuitive that its suffering would be worse than the collective suffering of humanity.  The reason for this divide in intuitions is simple: we can imagine something close to the most extreme forms of human suffering.  We can imagine, at least to some degree, what it's like to be tortured horrifically.  While this does not come close to the badness of the negative utility monster's suffering, we can still get a sense of how bad its misery is.  

Second, we humans are essentially utility monsters already.  We treat insects as having zero value compared to us.  While our treatment of insects may be unjust, it makes the utility monster seem less inconceivable.  The well-being of the utility monster is literally inconceivable; however, when we try to conceive of what it's like to be one, the conclusion begins to seem less unintuitive.  Imagine a being whose existence is so good that it would be rational for it to trade thousands of years of torture, from the perspective of a human, for a single moment of its enjoyment.  When we imagine how far above us the utility monster is, the conclusion seems far less unintuitive.  Our intuition is largely born of our inability to conceive of large numbers.  

Third, the utility monster is very much like the repugnant conclusion on loop.  Suppose we ran the repugnant conclusion on loop: each person with a great life lives their great life many times over, and each person with a life barely worth living repeats their life enough times that their total well-being is exactly as great as one of our lives is now.  Relative to the repeated great lives, our lives now occupy the position of the barely-worth-living lives, and the long-lived blissful beings occupy the position of utility monsters.  Rejecting the utility monster therefore requires also rejecting the repugnant conclusion.  

One could object that we can’t merely run the experiences on loop; something changes as a result of the single run through.  However, this distinction is arbitrary.  Surely the repetition of the repugnant conclusion would not change our reasons for accepting it.  Additionally, an exceptionally long life that is pretty good, can surely be just as good as a pretty long life, that is exceptional.  Thus, the intuitions end up conflicting.  This seems to be a case of us being very bad at doing mental math, with exceptionally large quantities, generating divergent and incorrect intuitions.  

Fourth, we can apply a similar procedure to the utility monster as we have in other cases.  Surely, if we have the choice between giving a slightly enjoyable experience to two entities with faint experiences, or to one entity whose experiences are thousands of times more vivid, we should benefit the entity with the far more vivid experiences.  Then, comparing two entities with thousands-of-times-more-vivid experiences to one entity with billions-of-times-more-vivid experiences, we should benefit the latter.  Enough iterations of this process gets us to the utility monster.  While this doesn't get us to the rights violating conclusions, it does get us to the conclusion that, if we could either feed every person on earth or feed one meal to the utility monster, we ought to feed the utility monster.  We can add the further supposition that, if we could kill one person to give everyone on earth a meal, we ought to do that: giving everyone on earth a meal would surely save many lives, so it's analogous to the footbridge version of the trolley problem with hundreds or thousands of people on the track below.  This gets us all the way to the utility monster.  Additionally, the previous arguments have explained why rights don't have moral significance.  

Part 7: Should George take the job?

This section will address a thought experiment given by Bernard Williams (Williams 1973).  He wrote “(1) George, who has just taken his Ph.D. in chemistry, finds it extremely difficult to get a job. He is not very robust in health, which cuts down the number of jobs he might be able to do satisfactorily. His wife has to go out to work to keep them, which itself causes a great deal of strain, since they have small children and there are severe problems about looking after them. The results of all this, especially on the children, are damaging. An older chemist, who knows about this situation, says that he can get George a decently paid job in a certain laboratory, which pursues research into chemical and biological warfare. George says that he cannot accept this, since he is opposed to chemical and biological warfare. The older man replies that he is not too keen on it himself, come to that, but after all George’s refusal is not going to make the job or the laboratory go away; what is more, he happens to know that if George refuses the job, it will certainly go to a contemporary of George’s who is not inhibited by any such scruples and is likely if appointed to push along the research with greater zeal than George would. Indeed, it is not merely concern for George and his family, but (to speak frankly and in confidence) some alarm about this other man’s excess of zeal, which has led the older man to offer to use his influence to get George the job . . . George’s wife, to whom he is deeply attached, has views (the details of which need not concern us) from which it follows that at least there is nothing particularly wrong with research into CBW. What should he do?”  

Perhaps it is a side effect of thinking about many of the other thought experiments, but this one did not strike me as counterintuitive at all.  Suppose that George's contemporary would bring about five more deaths than George would if George took the job.  While it might seem somewhat counterintuitive to say that George ought to take the job, it seems much less so when we think about it from the perspective of the victims.  Imagine being one of the five people killed as a result of George's inaction.  Surely it would seem reasonable to say that George had a duty to take the job, to prevent your death from being brought about.  It only seems counterintuitive when the people saved are unnamed, faceless strangers rather than people we care about.  Surely what Oskar Schindler did was morally right, despite his supplying munitions to the Germans, because it prevented large numbers of innocent Jews from being killed.  Additionally, if we take seriously the principle that Williams espouses, we run into some wacky results.  Suppose that everyone in South America would die a horrible, painful death unless a vegan ate a cheeseburger, or even merely worked at a cheeseburger restaurant.  It would be reasonable to say that the vegan would be obligated to work at the restaurant, for the overwhelming greater good.  

Additionally, consider two states of the world.  In each there are a thousand Georges.  For each of them, if they don't take the job, a number of people equal to one one-thousandth of the world's population will be killed.  If we say the Georges should not take the job, then we endorse a state of affairs in which every human dies.  If we say they should take the job, we burden a thousand people but prevent the end of the world.  If a principle would end the world, and be worse for everyone, including George, it seems reasonable to suppose that following it would be acting wrongly.  

Consider a parallel case.  George is deciding whether to mow the lawn of a neighbor.  George, an ardent supporter of animal rights, thinks mowing the lawn is somewhat immoral.  However, he knows that if he doesn't mow the lawn, Fred will be hired.  Fred has a curious habit of bringing a shotgun when he mows.  While mowing the lawn, he makes sure to shoot twenty passersby, throw acid into the faces of twenty more, sexually assault twenty more, beat twenty more with a cane, and engage in vandalism, treason, assault, assault with a deadly weapon, violation of traffic laws, consumption of every Schedule I drug, looting, violation of intellectual property rights, illegal possession of a firearm, sale of heroin to minors, and discrimination on the basis of race.  In this case George clearly has an obligation to take the job, even though he finds it ethically objectionable.  

Part 8: Should Jim kill Indians?

This thought experiment also comes from (Williams 1973), who writes “(2) Jim finds himself in the central square of a small South American town. Tied up against the wall are a row of twenty Indians, most terrified, a few defiant, in front of them several armed men in uniform. A heavy man in a sweat-stained khaki shirt turns out to be the captain in charge and, after a good deal of questioning of Jim which establishes that he got there by accident while on a botanical expedition, explains that the Indians are a random group of the inhabitants who, after recent acts of protest against the government, are just about to be killed to remind other possible protestors of the advantages of not protesting. However, since Jim is an honoured visitor from another land, the captain is happy to offer him a guest’s privilege of killing one of the Indians himself. If Jim accepts, then as a special mark of the occasion, the other Indians will be let off. Of course, if Jim refuses, then there is no special occasion, and Pedro here will do what he was about to do when Jim arrived, and kill them all. Jim, with some desperate recollection of schoolboy fiction, wonders whether if he got hold of a gun, he could hold the captain, Pedro and the rest of the soldiers to threat, but it is quite clear from the set-up that nothing of that kind is going to work: any attempt at that sort of thing will mean that all the Indians will be killed, and himself. The men against the wall, and the other villagers, understand the situation, and are obviously begging him to accept. What should he do?”

In this case, it once again seems intuitive that Jim should kill one of the Indians.  If he does not, everyone is worse off.  We should take options that make everyone better off.  

It may be true that it would be hard to blame Jim for acting wrongly here, given the difficulty of taking the right action.  Certain actions are psychologically difficult, partly as a result of desirable heuristics, such that we oughtn't fault people too much for failing to take them.  However, how much we would blame Jim is distinct from the wrongness of his action.  

We can consider a parallel case: suppose that the only way to save the world were to steal a penny.  In this case, it seems clear that one ought to steal the penny.  Additionally, suppose that an asteroid were about to collide with earth.  The only way to prevent the collision was to press a button, which would compress the asteroid into a javelin that would subsequently be launched at a person, killing them.  Surely we should press the button.  Now suppose that the button presser had to witness the person being hit by the javelin.  It still seems clear that the button ought to be pressed.  Finally, suppose that they had to hurl the javelin themselves.  It once again seems intuitive that it would be extremely morally wrong not to hurl the javelin.  We ought to save the world, even if we have to sully ourselves in the process.  

Williams goes on to say “To these dilemmas, it seems to me that utilitarianism replies, in the first case, that George should accept the job, and in the second, that Jim should kill the Indian. Not only does utilitarianism give these answers but, if the situations are essentially as described and there are no further special factors, it regards them, it seems to me, as obviously the right answers. But many of us would certainly wonder whether, in (1), that could possibly be the right answer at all; and in the case of (2), even one who came to think that perhaps that was the answer, might well wonder whether it was obviously the answer.”  This is, however, no mark against utilitarianism.  Ideally, a theory makes us able to reach conclusions quickly, even about problems that had previously seemed intractable.  It makes no more sense to criticize utilitarianism for the ease with which it resolves this thought experiment than it would to criticize the chain rule in calculus for simplifying seemingly complex math problems, or the intersection of marginal revenue and marginal cost in economics for quickly figuring out what the price will be, or the labor theory of value for quickly figuring out the long run equilibrium price of goods.  Regardless of whether we accept the labor theory of value, it would be silly to reject it on the grounds that it generates predictions of long run equilibrium price too quickly.  

Part 9: Should we frame an innocent person to appease a mob?

This part will address whether we should frame an innocent person in the following case.  A mob plans to kill five innocent people unless someone is confirmed to be the culprit of a crime.  Ought one lie about having discovered the culprit, claiming an innocent person is guilty, to prevent the mob from killing five people?  

A first point is that in most realistic situations, one ought not frame people.  Thus we have strong reason to answer no to the question of whether innocent people ought to be framed, even if we can imagine rare situations in which doing so would maximize well-being.  

Second, as was argued in section one, we have reason to distrust our non utilitarian intuitions.  

Third, we can explain away our revulsion sociologically, by appealing to the revulsion we rightly feel toward framing innocent people in ordinary cases.  I touch on these points only briefly, because they closely parallel the ones raised in part five of this section.  

Fourth, we can make modifications like the ones made in part five, by making the people involved family members.  Surely you would rather one family member were framed than have all your family members killed.  

Fifth, suppose we could prevent either five murders by a lynch mob, or one innocent person from being framed and killed by a lynch mob.  Surely we should prevent the former.  One could appeal to the act omission distinction here.  However, we can modify the case to avoid this.  

Imagine a case in which a person (call him Tim) wrote an anonymous letter, soon to be delivered, that would frame an innocent person, who would then be killed by the mob.  After writing and mailing the letter, Tim has a change of heart and decides to intercept it.  When he uncovers the stash where his letter is stored, it turns out to be in an iron box with a robust security system, such that if two letters were removed, an alarm would sound and the letters would be returned to their original location; only one letter can be taken.  As Tim is about to take out his letter, he sees another letter bearing the names of five people who, Tim knows, are being framed and will be killed by the mob.  If Tim takes out his own letter, he will not have framed an innocent person, and no one will be left worse off as a result of his actions.  However, if Tim takes out the letter containing five names, he will prevent the mob from killing five innocent people.  In this case, it seems very intuitive that he should take out the letter with five names, yet the case is very much like the earlier one: letting his own letter go through amounts to framing one innocent person to save five.  When one is removing letters, it should not matter who wrote them.  Additionally, suppose one had written the letter in one's sleep (sleep letter forging is considerably rarer than sleep walking, but not impossible).  In that case, it seems even more bizarre to remove one's own letter merely because one wrote it, rather than the letter whose removal would save five lives.  

One could object that the cases are not parallel.  However, they have been designed to be identical in every morally relevant respect.  In both cases, one frames an innocent person to save five people.  The only differences are that one framing is dragged out over a longer period of time and is done by mail.  Those are not morally relevant differences.  

Part 10: Sorry Jones, but you’re going to have to take one for the team

The next objection comes from Scanlon (Scanlon 1998), who writes “Jones has suffered an accident in the transmitter room of a television station. Electrical equipment has fallen on his arm, and we cannot rescue him without turning off the transmitter for fifteen minutes. A World Cup match is in progress, watched by many people, and it will not be over for an hour. Jones's injury will not get any worse if we wait, but his hand has been mashed and he is receiving extremely painful electrical shocks. Should we rescue him now or wait until the match is over? Does the right thing to do depend on how many people are watching... ?”  

In this case, we can apply a method similar to the one applied in previous cases.  Compare Jones's situation to two people experiencing electric shocks that are only 90% as painful as Jones's.  Surely it would be better to prevent the shocks to the two people.  Now compare each of those two shocks to two shocks that are 60% as painful as Jones's original; surely the four shocks are worse than the two.  We can repeat this process until the situation is reduced to an enormous number of barely painful shocks.  Surely a large number of people enjoying football can outweigh the badness of an enormous number of barely painful shocks.  A similar point has been made by (Norcross 2002).  
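
A toy calculation shows why the spectrum works; this is a minimal sketch assuming, purely for illustration, that each step doubles the number of victims while cutting the pain of each shock to 80% of the previous step's:

```python
# A toy model of the shock spectrum: at each step the number of people
# doubles while each shock is only 80% as painful. Total pain grows by
# a factor of 1.6 per step, so each world is worse than the one before,
# even though the individual shocks become trivially mild.
def total_pain(step: int, base_pain: float = 100.0) -> float:
    people = 2 ** step
    pain_each = base_pain * (0.8 ** step)
    return people * pain_each

for step in [0, 1, 2, 10, 30]:
    print(f"step {step}: {2 ** step} people, "
          f"pain each {100 * 0.8 ** step:.6f}, total {total_pain(step):.1f}")
```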

Additionally, as Norcross points out, we regularly make similar trade offs.  When we decline to lower the speed limit, we accept that some number of people will die so that people can reach their destinations faster.  

Scanlon could reject transitivity.  However, there are extremely strong arguments for transitivity.  First, transitivity is incredibly intuitive: it seems extremely implausible that A could be morally better than B, and B better than C, yet C better than A.  If, as I shall argue, morality concerns what we have most reason to do, transitivity follows trivially.  

A second argument for transitivity, given by (Huemer 2008), is the money pump argument.  If one prefers A to B, then one should be willing to trade B, plus something of value, for A.  If one prefers B to C, then one should be willing to trade C, plus something of value, for B.  If one prefers C to A, then one should be willing to trade A, plus something of value, for C.  A person with all three preferences can be stripped of everything they have: starting with B, they trade B and a fee for A, then A and a fee for C, then C and a fee for B, ending where they began but poorer.  We can run them through this cycle until they lose everything of value.  
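
A minimal simulation makes the pump vivid (the fee and starting wealth are illustrative assumptions):

```python
# The money pump: an agent with the cyclic preferences A > B, B > C,
# C > A pays a small fee for every trade up its preference ordering and
# ends up holding exactly what it started with, poorer every cycle.
preferred_over = {"B": "A", "A": "C", "C": "B"}  # maps holding -> what it prefers

def run_money_pump(start: str, wealth: float, fee: float, cycles: int):
    holding = start
    for _ in range(3 * cycles):       # three trades per full cycle
        holding = preferred_over[holding]
        wealth -= fee                 # each "improving" trade costs a fee
    return holding, wealth

holding, wealth = run_money_pump("B", wealth=100.0, fee=1.0, cycles=10)
print(holding, wealth)  # back at B, 30 units poorer
```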

A third argument comes from (Huemer 2008), who writes “The second argument for Transitivity relies on the following two premises:

 Dominance: For any states of affairs x1, y1, x2, and y2, if (i) x1 is better than y1, (ii) x2 is better than y2, and (iii) there are no evaluatively significant relationships among any of these states, then the combination of x1 and x2 is better than the combination of y1 and y2. 

Asymmetry: If x is better than y, then y is not better than x. 

To illustrate the Dominance principle, suppose that I am deciding whether to buy a Honda or a Ford. I am also deciding whether to live in California or Texas. Assume there are no evaluatively significant relationships between these choices: where I live has nothing to do with what kind of car is best, and vice versa. Finally, suppose that the Honda is better than the Ford, and living in California is better than living in Texas. Then it seems that buying the Honda and living in California would be better than buying the Ford and living in Texas. Now suppose that Transitivity is false, and that there is a series of unrelated values, A, B, C, and D, where A is better than B, which is better than C, which is better than D, which is better than A. I shall denote the combination of A and C, ‘A+C’ (and similarly for other combinations). If A and C are two states of affairs, A+C is the state of affairs that obtains when A and C both obtain. Now consider which is better: A+C, or B+D? By Dominance, A+C is better than B+D, because A is better than B and C is better than D. But at the same time, B+D is better than C+A, because B is better than C, and D is better than A.”  

This can be illustrated with the following demonstration.  Suppose, for contradiction, that (writing "X > Y" for "X is better than Y"):

A > B 

B > C

C > A

By Dominance, A+B > B+C (since A > B and B > C), B+C > C+A (since B > C and C > A), and C+A > A+B (since C > A and A > B).  So A+B is better than C+A, and C+A is better than A+B, which violates Asymmetry.  

We can apply this to the Jones case.  Writing "X > Y" for "world X is worse than world Y", define:   

1 = Jones being painfully shocked.  

2 = two people being shocked slightly less painfully.  

3 = four people being shocked slightly less painfully still.  

100 = a vast number of people experiencing very minor shocks. 

101 = a vast number of people being deprived of watching the football match.  

The spectrum gives us 101 > 100 > 99 > … > 3 > 2 > 1, yet the transitivity denier holds that 1 > 101.  

Stipulate that all the worlds are isolated from each other, so there are no evaluatively significant relationships among them.  

Now form two combined worlds:  

G1 contains 101 + 100 + 99 + … + 3 + 2 

G2 contains 100 + 99 + 98 + … + 3 + 2 + 1 

Pairing the members in order, each member of G1 is worse than the parallel member of G2 (101 > 100, 100 > 99, and so on down to 2 > 1), so by Dominance, G1 > G2: G1 is worse.  

However, if we pair the worlds differently, we get a contradiction:  

G1 = 101 + (100 + 99 + … + 3 + 2)

G2 = 1 + (100 + 99 + … + 3 + 2) 

Now all the members are identical except the first, and the transitivity denier holds that 1 > 101.  Thus, by Dominance, G2 is worse than G1.  It cannot be the case that both G1 > G2 and G2 > G1; this violates Asymmetry.  Thus, we have decisive reason to reject the denier's view.  

Part 11: Should we condemn everyone to almost certain unfathomable misery, rather than certain bliss? 

This objection to utilitarianism is the one that tickles my intuitions the most.  However, I think the utilitarian conclusion is true.  For background, Graham's number is a very large number.  To get a sense of how large it is, consider the following construction.  

Start with 

F1 = 3!!!! (the ! are factorial signs).  

F2 = 3!!!…!!!, where there are F1 factorial signs.  

F3 = 3!!!…!!!, where there are F2 factorial signs.  

Continue this process until F64.  

F64 is constructed in parallel to Graham's number, G64.  However, F64 is much, much smaller than Graham's number.  Instead of factorial signs, Graham's number is built with Knuth's up arrows, which generate far larger numbers than factorial signs.  To get a sense of how quickly up-arrow expressions grow: 

3↑↑↑3 (3 with three up arrows and then another 3) = 3^3^3^…^3, a power tower in which the number of threes is 7,625,597,484,987.  
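
For readers who want to see the recursion, here is a minimal sketch of Knuth's up-arrow notation (it terminates only for tiny inputs; these quantities overflow anything computable almost immediately):

```python
# Knuth's up-arrow notation: one arrow is exponentiation, and each
# additional arrow iterates the operation below it. Only tiny inputs
# terminate; three arrows already names a tower of ~7.6 trillion threes.
def up_arrow(a: int, n: int, b: int) -> int:
    if n == 1:
        return a ** b
    result = a
    for _ in range(b - 1):
        result = up_arrow(a, n - 1, result)
    return result

print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^3^3 = 7,625,597,484,987
# up_arrow(3, 3, 3) would be a tower of 7,625,597,484,987 threes: hopeless.
```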

TREE(3) is a far larger number than Graham's number.  It's bigger than G65, or even G100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000.  

Thus, consider two possible states of the world.  

T: We have a 1/G64 chance of experiencing TREE(3) units of utility.  However, if we don't end up experiencing the TREE(3) utility, we will experience G64 units of suffering.  

P: We have a certainty of experiencing G64 units of well-being.  

Utilitarianism says that T > P: the expected value of T is roughly TREE(3)/G64 minus G64, and TREE(3)/G64 utterly dwarfs G64.  However, this is deeply counterintuitive.  

To address this, we can apply the same procedure we used against the other non utilitarian views.  Compare 

P to L. 

L is a world where we have a 99% chance of experiencing G65 units of well-being.  Surely L > P. 

Now compare L to R. 

R is a world where we have a 98% chance of experiencing G88 units of utility.  Surely R is better than L.  

We can keep going through this process until we have a very remote chance of experiencing TREE(3) units of utility, which is better than P.  
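
The structure of the spectrum can be shown with toy magnitudes (the real quantities are uncomputable); this sketch assumes, purely for illustration, that each step cuts the probability by one percentage point while the prize grows a trillionfold:

```python
# A toy version of the gamble spectrum: at each step the probability of
# the good outcome drops by one percentage point, but the prize grows a
# trillionfold. Expected value still explodes, which is why utilitarian
# reasoning favors each step and, by iteration, the final gamble.
prob, prize = 1.00, 1.0
for step in range(1, 6):
    prob -= 0.01
    prize *= 1e12
    print(f"step {step}: P = {prob:.2f}, prize = {prize:.1e}, "
          f"EV = {prob * prize:.1e}")
```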

There’s an obvious reason why our intuitions are so off track here.  G64 is literally inconceivable.  It is hard to be precise about quantities that are so far outside of our experience that we can’t imagine them.  

Part 12: Let the Children Burn

The following scenario is called CB.  In it, we see a child and a valuable painting in a burning building.  The painting could be sold to save many children from dying of malaria.  Utilitarianism prescribes that we take out the painting (all else equal), sell it, and allow the child to burn.  However, this strikes people as counterintuitive.  

In order to see why CB is less counterintuitive upon reflection than it first appears, we can consider the scenario UTPAAC (use the painting as a crowbar).  In this scenario, we can either save the child or the painting.  However, there are hundreds of children in a burning building across the street, and the only way to save them is to take the painting and use it as a crowbar to pry open the door of the neighboring burning building.  Surely in this case we should save the painting.  

Then compare two states of the world.  In the first, we can save hundreds of children from a burning building.  In the second, we can save thousands of children from malaria.  Surely the second is at least as good as the first.  Thus, if saving children from malaria is just as good as saving children from a burning building, and saving hundreds of children from a burning building is sufficient grounds for leaving one child in a burning building, then we should save the painting rather than the child. 

One might object that proximity and directness matter.  In that case, consider UTPAACTPTB (use the painting as a crowbar to push the button).  In this scenario, we can either save the child or the painting from a burning building.  If we save the painting, we can use it to pry open the door of a second building, which contains a button.  If we press the button, doors will open and hundreds of children in burning buildings overseas will be saved.  Surely we should use the painting to pry open the door and save the hundreds of children.  

One might object that selling the item is what makes the act less virtuous.  However, we can modify the case again: 

Trade: The painting can be traded for a crowbar, which can pry open a door, allowing you to press the button that saves hundreds of children.  In this case, we should make the trade.  Thus, whether the painting is traded is not a morally relevant feature.  

Additionally, we have several reasons to distrust our intuitions about this case.  

1 It’s very difficult to sympathize with nameless, faceless children, half a world away.  

2 The world we live in is morally very counterintuitive.  We did not evolve to be in situations very often in which we could save lives indirectly, by complex, third party exchanges.  

3 We have a bias for those near us, who we can directly see.  

4 Our judgement may be clouded by our self interest about particular cases.  If we accept that our duty to donate is sufficiently strong, that failing to do so is analogous to leaving children in burning buildings, this undermines our high moral views of ourselves.  

5 We may have a tribalist bias towards people closer to us, rather than people in other countries.  

Recap 

What we’ve seen so far is that there are independent, plausible arguments for accepting the utilitarian conclusions about a variety of diverse cases.  It would be surprising if this were the case about an incorrect moral theory.  

If we assume that the justifications for the cases are pairwise dependent, then we really have six independent instances of utilitarianism generating plausible support for its conclusions.  Even if we assume that an incorrect moral theory would have a 50% chance of being defensible in each such instance, utilitarianism has still performed remarkably well: the odds that it would do as well as it does or better are 0.5^6 = 0.015625, less than 2%, on these extremely conservative assumptions.  If we instead set the probability at 20% per instance, we get odds of 0.2^6 = 0.000064.  The probability of this is incredibly low.  
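
The arithmetic, for the skeptical (a minimal check of the two figures above):

```python
# The recap's probability estimates: the chance that an incorrect
# theory survives six independent tests, under two per-test assumptions.
for p in (0.5, 0.2):
    print(f"per-case survival {p}: overall {p ** 6:.6f}")
# 0.5^6 = 0.015625 (under 2%); 0.2^6 = 0.000064
```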

Additionally, we have independent reason to believe that the odds would be very low.  Consider, for example, the counterintuitive moral views held by Kant.  It seems nearly impossible to generate plausible arguments for any of the following.  

1 Homosexuality is an immensely immoral, unmentionable vice.  

2 Masturbation is so immoral that it is as bad as suicide.  

3 Organ donation is morally impermissible, as is cutting one's hair to sell it.  

4 Children born to unmarried parents can permissibly be killed. 

Section 3: A fifth argument 

This section shall attempt to derive utilitarianism from three plausible axioms.  Those axioms are: 

Hedonism, which says that the well-being that one experiences during their life determines how well their life goes for them 

Anti egalitarianism, which says that the distribution of well-being is irrelevant 

Pareto Optimality, which says that we should take actions that are better for some, but worse for none, regardless of consent.  

If we accept these axioms, we get utilitarianism.  The action that maximizes well-being could always be made Pareto optimal by redistributing the gains.  Anti egalitarianism says that redistributing the gains has no morally significant effect on the situation.  If Pareto improvements should be taken, and the utility maximizing action is morally indistinguishable from a Pareto improvement, then the utility maximizing action should be taken.  

This can be illustrated with the trolley problem.  In the trolley problem, flipping the switch could be made Pareto optimal by redistributing the gains.  If each of the five people on the track gave half of the well-being they'd experience over the rest of their lives to the person on the other track, flipping the switch would be Pareto optimal: everyone would be better off.  The person on the other track would have 2.5 lives' worth of good experience rather than the one life's worth they would otherwise have had, and each of the five would retain half a life's worth of good experience more than the nothing they would otherwise have had.  Thus, if all the axioms are defensible, we must be utilitarians.  
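
A minimal sketch of the redistribution arithmetic (assuming, purely for illustration, one unit of remaining lifetime well-being per person):

```python
# Trolley-problem redistribution: flipping the switch saves five but
# kills one. If each of the five transfers half a life's worth of
# well-being (0.5) to the one, everyone is strictly better off than
# under "don't flip", making the flip a Pareto improvement.
dont_flip = {"one": 1.0, **{f"five_{i}": 0.0 for i in range(5)}}
flip = {"one": 5 * 0.5, **{f"five_{i}": 1.0 - 0.5 for i in range(5)}}

for person in dont_flip:
    assert flip[person] > dont_flip[person]   # a strict Pareto improvement
    print(f"{person}: {dont_flip[person]} -> {flip[person]}")
```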

Hedonism was defended in section 1.  

Anti egalitarianism can be defended in a few ways.  The first supporting argument (Huemer 2003) can be paraphrased (and modified slightly) as follows.  

Consider two worlds.  In world 1, one person has 100 units of utility for the first 50 years and 50 units of utility for the following 50 years, while a second person has 50 units for the first 50 years and 100 units for the next 50.  In world 2, both people have 75 units of utility throughout their lives.  These two worlds are clearly equally good: everyone has the same lifetime total of utility.  Within world 1, the first 50 years are just as good as the last 50 years, since in each period one person has 100 units and the other has 50.  Thus the value of world 1 equals twice the value of its first 50 years.  World 1 is just as good as world 2, and the value of world 2 likewise equals twice the value of its first 50 years.  It follows that the first half of world 1 is just as good as the first half of world 2.  But the first half of world 1 has unequal utility (100 and 50), while the first half of world 2 has equal utility (75 and 75), and both have the same total.  This shows that the distribution of utility doesn't matter.  This argument is decisive, and is defended at great length by Huemer.  
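
The bookkeeping can be checked mechanically; here is a minimal sketch with the utilities from the argument:

```python
# Huemer-style non-egalitarian bookkeeping: each half of world 1 has the
# same total utility (150) as each half of world 2, and both worlds have
# the same lifetime totals, even though world 1's halves are unequal.
world1 = {"A": (100, 50), "B": (50, 100)}   # (first 50 yrs, last 50 yrs)
world2 = {"A": (75, 75), "B": (75, 75)}

for name, world in (("world 1", world1), ("world 2", world2)):
    first_half = sum(first for first, _ in world.values())
    total = sum(first + last for first, last in world.values())
    print(f"{name}: first-half total {first_half}, lifetime total {total}")
```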

Another argument can be deployed for non egalitarianism based on the difficulty of finding a viable method of valuing equality.  If the value of one's utility depends on equality, this runs into the spirits objection: it implies that if there were many causally inert spirits living awful lives, this would affect the relative value of giving people happiness.  If there were one non spirit person alive, this would imply that the value of granting them a desirable experience is diminished by the existence of spirits they cannot affect.  This is not plausible; causally inert entities have no relevance to the value of desirable mental states.  

This also runs into the Pareto objection: to the extent that inequality is bad in itself, the egalitarian must say that a world with 1 million people at a utility of six could be better than a world with 999,999 people at a utility of six and one person at a utility of 28, even though the second world is better for one person and worse for none.  The inequality alone would have to outweigh the Pareto improvement.  

Rawls formulation doesn’t work; if we are only supposed to do what benefits the worse off then we should neglect everyone’s interests except those who are horrific victims of the worst forms of torture imaginable.  This would imply that we should bring to zero the quality of life of all people who live pretty good lives if that would marginally improve the quality of life of the worst off human.  

Rawls's defense of this rule doesn't work.  As Harsanyi showed (Harsanyi 1975), we would be utilitarians from behind the veil of ignorance.  This is because the level of utility labeled 2 is, by definition, the amount of utility that is just as good as a 50% chance of 4 units of utility.  Thus, from behind the veil of ignorance, we would necessarily value a 1/2 chance of 4 utility as equal to a certainty of 2 utility, and always prefer it to a certainty of 1.999 utility.  
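
A minimal sketch of the contrast (toy numbers, assuming equal chances of occupying each social position, as in Harsanyi's setup):

```python
# Behind the veil: Harsanyi's agent maximizes expected (average) utility
# across the positions they might occupy; Rawls's maximin looks only at
# the worst position. Toy societies where the two rules disagree:
societies = {
    "equal":      [2.0, 2.0, 2.0],
    "productive": [1.9, 5.0, 5.0],   # worst-off barely worse, average far higher
}

for name, utilities in societies.items():
    expected = sum(utilities) / len(utilities)   # Harsanyi's criterion
    worst = min(utilities)                       # Rawls's criterion
    print(f"{name}: expected utility {expected:.2f}, maximin {worst}")
# Expected utility picks "productive"; maximin picks "equal".
```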

Rawls attempts to avoid this problem by supposing that, behind the veil, we don't know how many people belong to each class.  However, it is not clear why we should add this assumption.  The point of the veil is to make us impartial while providing all other relevant information.  To the extent that we are denied information about the size of each social class, that is because Rawls is trying to stack the deck in favor of his principle.  Simple mathematics dictates that we not do that.  

We have several reasons to distrust our egalitarian intuitions.  

First, egalitarianism relates to politics, and politics makes us irrational.  As (Kahan et al. 2017) showed, greater mathematical ability made people less likely to correctly solve politicized math problems.  

Second, equality is instrumentally valuable according to utilitarianism: money given to poor people produces greater utility because of declining marginal utility.  As (Baron 1993) argues, our support for equality is easily explained as a utilitarian heuristic, and heuristics often make our moral judgments unreliable (Sunstein 2005).  It is not surprising that we would come to care intrinsically about something that is instrumentally valuable and whose pursuit is a good heuristic.  We have similar reactions in similar cases.  
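
The instrumental case for equality is easy to see with a standard toy model; this sketch assumes logarithmic utility of income (a common illustrative assumption, not something the argument depends on):

```python
import math

# Declining marginal utility: with log utility of income, transferring
# money from a rich person to a poor person raises total utility. This
# is why a utilitarian usually favors more equal distributions of money,
# though not of utility itself.
def total_utility(incomes):
    return sum(math.log(income) for income in incomes)

before = [100_000, 10_000]          # rich and poor
after = [90_000, 20_000]            # transfer 10,000 from rich to poor

print(total_utility(before))        # ~20.72
print(total_utility(after))         # ~21.31, so total utility rises
```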

Third, given the difficulty of calculating utility, our judgement may be clouded by our inability to precisely quantify it.  

Fourth, equality in goods is very often valuable: an egalitarian distribution of cookies, money, or homes produces greater utility than an inegalitarian one.  Our judgement may be clouded by this analogy between utility and other things.  Most things have declining marginal utility.  Utility itself, however, does not.  

Fifth, we may have irrational risk aversion that leads us to prefer a more equal distribution.  

Sixth, we may be subject to anchoring bias, with the egalitarian starting point as the anchor.  

Several more arguments can be given against egalitarianism.  First is the iteration objection.  Egalitarianism says that the importance of further increases in someone's well-being depends on how much well-being they have already had.  So if we found out that half of all people had once had a dream giving them unfathomable amounts of well-being, of which they have no memory, their well-being would thereby become less important, even though the dream is entirely inaccessible to them.  

The egalitarian could object that the only well-being that matters is well-being one remembers.  However, this runs into a problem.  Presumably, what matters to the egalitarian is the total well-being one has experienced, rather than average well-being: it would be strange to say that a person dying of cancer with months to live is less entitled to well-being than a person with greater average well-being but less lifetime total well-being.  But if memory is what counts, then increasing the well-being of one's dream self becomes dramatically more important than increasing the well-being of one's waking self.  Because the dream will be forgotten, the dream self is a very badly off entity, highly deserving of well-being.  It would be similarly strange to prioritize helping dementia patients with no memory of most of their lives based on how well off they were during the periods of their lives which they can no longer recall.  

A second argument can be called the torture argument.  Suppose that a person has been tortured more brutally than any other human in history, making them the worst off human by orders of magnitude.  From an egalitarian perspective, their well-being would be dramatically more important than anyone else's, given how badly off they are.  If this is true, then, if we set their suffering great enough, egalitarianism would permit them to torture others for fun.  

A third argument can be called the non prioritization objection.  Surely any view on which the well-being of badly off people matters infinitely more than the well-being of well off people is false: it would imply that sufficiently well off people could be brutally tortured to make badly off people only marginally better off.  So the egalitarian, like the utilitarian, must draw a line somewhere regarding how much benefit to a badly off person is needed to outweigh a cost to a well off person.  Once this is granted, non egalitarianism ceases to have counterintuitive implications.  The intuitive badness of “bringing one person from utility 1 down to 0 in order to bring another person from utility 1 up to 2” dissipates when the alternative theories merely endorse “bringing one person from utility 1 down to 0 in order to bring another person from utility 1 up to 5” (it could be more than 5; 5 is just an example).  At that point, utilitarians and egalitarians are merely haggling over the degree of the tradeoff.  

We can now turn to the Pareto optimality premise, which says that we should take actions that increase the well-being of some without decreasing the well-being of any others.  This principle is deeply intuitive and widely accepted.  It's hard to imagine an act being bad when it makes some people better off and no one worse off.  

One might object to the Pareto principle with a consent principle, which says that an act is wrong if it violates consent, even if it is better for some and worse for none.  However, this runs into questions of what constitutes consent.  A surprise party, for example, violates the consent of the person for whom it is thrown, yet a surprise party is obviously not morally wrong if it makes everyone better off.  

Similarly, if a pollutant were being released into the air and entering people's lungs without their consent, that would seem bad only insofar as it was harmful.  One might argue that we should take only those Pareto optimal actions that don't violate rights, but views that privilege rights have already been discussed.  Additionally, the areas where consent seems to matter are precisely those where someone who does not consent can be seriously harmed.  Consent to marriage is valuable because nonconsensual marriage would obviously be harmful.  To the extent that one is not harmed, it's hard to see why consent matters.  

An additional objection can be given to rights based views.  Suppose that someone is unconscious and needs to be rushed to the hospital.  It seems clear that they should be rushed there.  Ordinarily, transporting someone without consent is seen as wrong.  However, in a case like the one just stipulated, it increases well-being and is thus morally permissible.  


Bibliography

Baron, J. (1993). Heuristics and biases in equity judgments: A utilitarian approach. In B. A. Mellers & J. Baron (Eds.), Psychological perspectives on justice: Theory and applications (pp. 109–137). Cambridge University Press. https://doi.org/10.1017/CBO9780511552069.007

Chappell, R. (2021). Negative Utility Monsters. Utilitas, 1-5. doi:10.1017/S0953820821000169

Chalmers, D. J. (2006). Strong and weak emergence. The re-emergence of emergence, 244-256.

Harsanyi, J. C. (1975). Can the maximin principle serve as a basis for morality? A critique of John Rawls's theory. American Political Science Review, 69(2), 594-606.

Huemer, M. (2003). Non-egalitarianism. Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition, 114(1/2), 147-171.

Huemer, M. (2008). In defence of repugnance. Mind, 117(468), 899-933.

Kagan, S. (1989). The limits of morality. Oxford: Clarendon Press.

Koenigs, M., Young, L., Adolphs, R., Tranel, D., Cushman, F., Hauser, M., & Damasio, A. (2007). Damage to the prefrontal cortex increases utilitarian moral judgements. Nature, 446(7138), 908-911.

Kahan, D. M., Peters, E., Dawson, E. C., & Slovic, P. (2017). Motivated numeracy and enlightened self-government. Behavioural Public Policy, 1(1), 54-86.

Kraaijeveld, S. R. (2020). Debunking (the) retribution (Gap). Science and Engineering Ethics, 26(3), 1315-1328.

Norcross, A. (2002). Contractualism and aggregation. Social Theory and Practice, 28(2), 303-314.

Parfit, D. (2011). On What Matters. Oxford: Oxford University Press.

Patil, I., Zucchelli, M. M., Kool, W., Campbell, S., Fornasier, F., Calò, M., ... & Cushman, F. (2021). Reasoning supports utilitarian resolutions to moral dilemmas across diverse measures. Journal of personality and social psychology, 120(2), 443.

Paxton, J. M., Bruni, T., & Greene, J. D. (2014). Are ‘counter-intuitive’ deontological judgments really counter-intuitive? An empirical reply to Kahane et al. (2012). Social Cognitive and Affective Neuroscience, 9(9), 1368-1371.

Scanlon, T. M. (1998). What we owe to each other. Cambridge, MA: Harvard University Press.

Singer, P., & Lazari-Radek, K. de. (2014). The Point of View of the Universe: Sidgwick and Contemporary Ethics. Oxford: Oxford University Press.

Sinhababu, N. (2010). The epistemic argument for hedonism.

Sunstein, C. R. (2005). Moral heuristics. Behavioral and Brain Sciences, 28(4), 531-541.

Utilitarianism.net. (n.d.). Introduction to utilitarianism. Utilitarianism. Retrieved September 10, 2021, from https://www.utilitarianism.net/introduction-to-utilitarianism. 

Williams, B. (1973). A critique of utilitarianism. In J. J. C. Smart & B. Williams, Utilitarianism: For and against. Cambridge: Cambridge University Press.


Darius_M @ 2021-12-15T03:33 (+4)

Utilitarianism.net has also recently published an article on Arguments for Utilitarianism, written by Richard Yetter Chappell. (I'm sharing this article since it may interest readers of this post)

Omnizoid @ 2021-12-15T05:21 (+1)

Yeah, that has some good arguments, thank you for sharing that.