Against Negative Utilitarianism

By Omnizoid @ 2021-12-14T20:17 (+1)

Negative utilitarianism has some traction with effective altruists.  This is, in my view, a shame, because negative utilitarianism is false.  I shall spell out why I think so.  

The most basic version of negative utilitarianism, which says that only the avoidance of pain is morally relevant, is trivially false.  Preventing a pinprick is less valuable than bringing about a googolplex of utils.  However, this view is not widely believed and thus not particularly worth discussing.  

A more popular form of negative utilitarianism takes the form of Lexical Threshold views, according to which certain forms of suffering are so terrible that they cannot be outweighed by any amount of happiness.  This view is defended by people like Simon Knutsson, Brian Tomasik, and others.  My main objection to this view is that it falls prey to the sequence objection.  Suppose we believe that the badness of a horrific torture cannot be outweighed by any amount of happiness.  Presumably we also believe that the badness of a mild headache can be outweighed by some amount of happiness.  It follows that the badness of horrific torture can't be outweighed by any number of headaches (or similar harms; headaches are just the example I picked): if some number of headaches were at least as bad as the torture, then the happiness sufficient to outweigh those headaches would, by transitivity, also outweigh the torture.  

This view runs into a problem.  There are certainly some types of extreme headaches that are, at least in theory, as bad as brutal torture.  Suppose that these horrific headaches contain 100,000 units of pain and that benign headaches contain 100 units of pain.  Presumably 5 headaches with 99,999 units of pain would be worse in total than 1 headache with 100,000 units of pain.  Additionally, presumably 25 headaches with 99,998 units of pain would be worse than 5 headaches with 99,999 units of pain.  We can keep decreasing the amount of pain while inflicting it on more people, until 1 headache with 100,000 units of pain is found to be less bad than some vast number of headaches with 100 units of pain.  The Lexical Threshold negative utilitarian would have to say that there is some threshold of pain below which no amount of pain, no matter how many people experience it, can outweigh any pain above the threshold.  This is deeply implausible.  If the threshold is set at 10,000 units of pain, then 10^100^100 people experiencing 9,999 units of pain each would be preferable to one person experiencing 10,001 units of pain.  
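To see how quickly this sequence unfolds, here is a minimal sketch (my own illustration in Python; the step sizes simply mirror the numbers above):

```python
import math

# Sketch of the sequence argument: each step makes every headache one unit
# less painful but inflicts it on five times as many people, so each step
# is plausibly worse than the one before. Chaining the steps links
# horrific pain to vast numbers of benign headaches.

pain = 100_000  # units of pain per headache at the start (horrific)
steps = 0

while pain > 100:  # stop once the headaches are merely benign
    pain -= 1      # each headache becomes one unit less painful...
    steps += 1     # ...but is inflicted on five times as many people

digits = int(steps * math.log10(5)) + 1  # number of digits in 5**steps
print(steps)   # 99900 steps in the chain
print(pain)    # 100 units: a benign headache
print(digits)  # 5**99900 sufferers: a number with 69828 digits
```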

The negative utilitarian might object that there is no neat cutoff.  However, this misunderstands the argument.  If there is no neat cutoff point, then the comparison never flips: each step of the sequence, in which slightly milder pain is spread across far more people, remains worse than the step before it, and by transitivity the vast number of mild headaches ends up worse than the single case of extreme pain.  

The negative utilitarian might say that pain can't be neatly delineated into precise units.  However, the precise units merely represent the pain.  It's very intuitive that pain that is very bad can be made gradually less bad until it's only a little bit bad.  This process requires the negative utilitarian to declare that at some point along the continuum they've passed a threshold, such that no amount of the things below the threshold can ever outweigh the things above it.  Being scalded in boiling water can be made gradually less unpleasant by lowering the temperature of the water until it's reduced to a mere slight inconvenience.  

Simon Knutsson responds to this basic objection, saying: "Third, perhaps Ord overlooks versions of Lexical Threshold NU, according to which the value of happiness grows less and less as the amount of happiness increases. For example, the value of happiness could have a ceiling, say 1 million value “units,” such that there is some suffering that the happiness could never counterbalance, e.g., when the disvalue of the suffering is 2 million disvalue units."  However, the way I've laid out the argument shows that even the most extreme forms of torture are only as bad as some vast number of headaches.  If that is so, then it seems strange and ad hoc to say that no amount of happiness, because of the 1 million unit ceiling, can ever outweigh the badness of enough headaches.  Additionally, a similar sequence can be run on the positive end.  Surely a googol units of happiness for one person and 999,999 units for another is better than 1,000,000 units for each of two people.  

The main argument given for negative utilitarianism is the intuition that extreme suffering is very bad.  When one considers what it's like to starve to death, it's hard to imagine how any amount of happiness can outweigh it.  However, we shouldn't place very much stock in this argument for a few reasons.  

First, it's perfectly compatible with positive utilitarianism (positive only in the sense of being non-negative, not in the sense of saying that only happiness matters) to say that suffering is in general far more extreme than happiness.  Given the way the world works right now, there is no way to experience as much happiness as one experiences suffering when one is horrifically tortured.  However, this does not imply that extreme suffering can never be counterbalanced--merely that it's very difficult to counterbalance.  Nothing other than light travels at the speed of light, but that does not mean that light speed is lexically separate from slower speeds, such that no number of slower speeds could ever add up to more than light speed.  Additionally, transhumanism opens the possibility of extreme amounts of happiness, as great as the suffering from brutal torture.  

Second, it's very hard to have an intuitive grasp of very big things.  The human brain can't multiply very well.  Thus, when one has an experience of immense misery, one might conclude that its badness can't be counterbalanced by anything, when in reality one is just perceiving that it's very bad.  Much as people confuse the astronomically improbable with the impossible, people may have inaccurate mental maps and perceive extremely bad things as bad in ways that can't be counterbalanced.  

Third, it would be very surprising a priori for suffering to be categorically more relevant than well-being.  One can paint a picture of enjoyable experiences being good and unenjoyable experiences being bad.  It's hard to see why unenjoyable experiences would have a privileged status, such that they cannot be weighed against positive experiences.  

I'd be interested in hearing replies from negative utilitarians to these objections.  


Brian_Tomasik @ 2021-12-18T18:55 (+17)

As antimonyanthony noted, I think we have conflicting intuitions regarding these issues, and which intuitions we regard as most fundamental determines where we end up. Like antimonyanthony, I regard it as more obvious that it's wrong to allow a single person to be tortured in order to create a thousand new extremely blissful people who didn't have to exist at all than that pleasure can outweigh a pinprick. In my own life I tend to act as though pleasure can outweigh a pinprick, but (1) I'm not sure if I endorse this as the right thing to do; it might be an instance of akrasia; and (2) given that I already exist, I'd experience more than a pinprick's worth of suffering from not having certain positive experiences. If we're considering de novo pleasure that wouldn't appease any existing cravings, then my intuition that creating new pleasure can outweigh a pinprick isn't even that strong to begin with.

I would probably create extra people to feel bliss if doing so caused literally no harm. But even if it only caused a moderate harm rather than torture, I'm not sure creating the bliss would be worth it. There's no need for the extra bliss to come into existence. The universe is fine without it. But the people who would be harmed, even if only moderately, in order to create that extra bliss would not be fine.

You wrote:

I perceive ethics as being about what an egoist would do if they experienced the sum total of human experience

But I think this framing really favors views according to which pleasure can outweigh suffering, because most ethicists feel that pleasure can outweigh suffering within a given life, but many of them do not think it's right to harm one person for the greater benefit of another person. If instead of your "open individualism" standpoint we take the standpoint of "empty individualism", meaning that each moment of conscious experience is a different person from each other one, then it's no longer clearly okay to force significant suffering upon yourself for greater reward, at least if there are moments of suffering so bad that you temporarily regret having made that tradeoff. (If you never regret having made the tradeoff, then maybe it's fine, just like it may be fine for one person to voluntarily suffer for the greater benefit of someone else.)

One possible resolution of our conflicting intuitions on these matters could be a quite suffering-focused version of weak NU. We could hold that suffering, including torture, can theoretically be outweighed by bliss, but it would take an astronomical amount of bliss to do so. This view accepts that there could be enough happiness to outweigh a pinprick while also rejecting the seemingly cruel idea that one instance of torture could be outweighed by just a small number of transhumanly blissful experiences. Weak NUs who give enough weight to suffering in practice tend to act the same as strong NUs or lexical-threshold NUs. The expected amount of torture in the far future is not vastly smaller than the expected amount of transhuman-level bliss, so a sufficiently suffering-focused weak NU will still be extremely pessimistic about humanity's future and will still prioritize reducing s-risks.

Omnizoid @ 2021-12-19T22:40 (+4)

Hmm, this may be a case of divergent intuitions but to me it seems very obvious that if we could make it so that at the end of people's lives they have an experience of unfathomable bliss right before death, containing more well-being than the sum total of all positive experiences that humans have experienced so far, at the cost of one pinprick, it would be extremely good to do so.  In this case it avoids the objection that well-being is only desirable instrumentally, because this is a form of well-being that would otherwise not even have been considered.  That seems far more obvious than any more specific claims about the amount of well-being needed to offset a unit of suffering, particularly because of the trickiness of intuitions dealing with very large numbers.  

You say "But I think this framing really favors views according to which pleasure can outweigh suffering, because most ethicists feel that pleasure can outweigh suffering within a given life, but many of them do not think it's right to harm one person for the greater benefit of another person."   I agree that this does favor positive utilitarianism, however, I spend quite a while justifying it in my second most recent ea forum post.  

Finally, you say "One possible resolution of our conflicting intuitions on these matters could be a quite suffering-focused version of weak NU."  I certainly update on the intuitions of negative utilitarians to place more weight on suffering avoidance than I otherwise would; however, even after updating on that, I still conclude that transhuman bliss could be good enough to offset torture.  The badness of torture seems to be a fact about how extreme the experience is, as a result of evolution.  However, it seems possible to create more extreme positive experiences in a transhumanist world, where we can design experiences to be as good as physics allows.  Additionally, I'd probably be more sympathetic to suffering reduction than most positive utilitarians.  I do think that the expected amount of torture in the future is smaller than the expected amount of transhuman bliss, largely for reasons laid out here.  

Brian_Tomasik @ 2021-12-20T15:35 (+2)

Thanks for the replies. :)

if we could make it so that at the end of people's lives they have an experience of unfathomable bliss right before death, containing more well-being than the sum total of all positive experiences that humans have experienced so far, at the cost of one pinprick, it would be extremely good to do so.

If people knew in advance that this would happen, it would relieve a great deal of suffering during people's lives. People could be much less afraid of death because the very end of their lives would be so nice. I imagine that anxiety about death and pain near the end of life without hope of things getting better are some of the biggest sources of suffering in most people's entire lives, so the suffering reduction here could be quite nontrivial.

So I think we'd have to specify that no one would know about this other than the person to whom it suddenly happened. In that case it still seems like probably something most people would strongly prefer. That said, the intuition in favor of it gets weaker if we specify that someone else would have to endure a pinprick with no compensation in order to provide this joy to a different person. And my intuition in favor of doing that is weaker than my intuition against torturing one person to create happiness for other people. (This brings up the open vs empty individualism issue again, though.)

When astronomical quantities of happiness are involved, like one minute of torture to create a googol years of transhuman bliss, I begin to have some doubts about the anti-torture stance, in part because I don't want to give in to scope neglect. That's why I give some moral credence to strongly suffering-focused weak NU. That said, if I were personally facing this choice, I would still say: "No way. The bliss isn't worth a minute of torture." (If I were already in the throes of temptation after a taste of transhuman-level bliss, maybe I'd have a different opinion. Conversely, after the first few seconds of torture, I imagine many people might switch their opinions to saying they want the torture to stop no matter what.)

I do think that the expected amount of torture in the future is smaller than the expected amount of transhuman bliss

I agree, assuming we count their magnitudes the way that a typical classical utilitarian would. It's plausible that the expected happiness of the future as judged by a typical classical utilitarian could be a few times higher than expected suffering, maybe even an order of magnitude higher. (Relative to my moral values, it's obvious that the expected badness of the future will far outweigh the expected goodness -- except in cases where a posthuman future would prevent lots of suffering elsewhere in the multiverse, etc.)

Omnizoid @ 2021-12-20T18:41 (+1)

Hmm, this may be just a completely different intuition about suffering versus well-being.  To me it seems obvious that an end of life pinprick for ungodly amounts of transhuman bliss would be worth it.  Even updating on the intuitions of negative utilitarians I still conclude that the amount of future transhuman bliss would outweigh the suffering of the future.  

Sidenote: I really enjoy your blog and have cited you a bunch in high school debate.  

Brian_Tomasik @ 2021-12-30T09:07 (+2)

To me it seems obvious that an end of life pinprick for ungodly amounts of transhuman bliss would be worth it.

I also have that intuition, probably even if someone else has to endure the pinprick without compensation. But my intuitions about the wrongness of "torture for bliss" are stronger, and if there's a conflict between the intuitions, I'll stick with the wrongness of "torture for bliss".

Thanks for the kind words. :) I hope debate is fun.

Omnizoid @ 2021-12-31T22:36 (+2)

When I reflect on the nature of torture, it seems obvious that it's very bad.  But I'm not sure how, from reflection on the experience alone, we can conclude that there's no amount of bliss that could ever outweigh it.  We literally can't conceive of how good transhuman bliss might be, and any attempt to add up trillions of minor positive experiences seems very sensitive to scope neglect.  

Brian_Tomasik @ 2022-01-06T08:33 (+8)

Your point that I simply can't conceive of how good transhuman bliss might be is fair. :) I might indeed change my intuitions if I were to experience it (if that were possible; it'd require a lot of changes to my brain first). I guess we might change our intuitions about many things if we had more insight -- e.g., maybe we'd decide that hedonic experience itself isn't as important as some other things. There's a question of to what extent we would regard these changes of opinion as moral improvements versus corruption of our original values.

I guess I don't feel very motivated by the abstract thought that if I were better able to comprehend transhuman-level bliss I might better see how awesome it is and would therefore be more willing to accept the existence of some additional torture in order for more transhuman bliss to exist. I can see how some people might find that line of reasoning motivating, but to me, my reaction is: "No! Stop the extra torture! That's so obviously the right thing to do."

Omnizoid @ 2022-01-06T23:40 (+2)

That's true of your current intuitions, but I care about what we would care about if we were fully rational and informed.  If there were bliss so good that ten minutes of it would be worth ten minutes of horrific torture, it seems that creating this bliss for ungodly numbers of sentient beings is quite an important ethical priority.  

Brian_Tomasik @ 2022-01-07T00:41 (+5)

Yeah, that's a fair position to hold. :) The main reason I reject it is that my motivation to prevent torture is stronger than my motivation to care about how my values might change if I were to experience that bliss. Right now I feel the bliss isn't that important, while torture is. I'd rather continue caring about the torture than allow my loyalty to those enduring horrible experiences to be compromised by starting to care about some new thing that I don't currently find very compelling.

There's always a bit of a tricky issue regarding when moral reflection counts as progress and when it counts as just changing your values in ways that your current values would not endorse. At one extreme, it seems that merely learning new factual information (e.g., better data about the number of organisms that exist) is something we should generally endorse. At the other extreme, undergoing neurosurgery or taking drugs to convince you of some different set of values (like the moral urgency of creating paperclips) is generally something we'd oppose. I think having new experiences (especially new experiences that would require rewiring my brain in order to have them) falls somewhere in the middle between these extremes. It's unclear to me how much I should merely count it as new information versus how much I should see it as hijacking my current suffering-focused values. A new hedonic experience is not just new data but also changes one's motivations to some degree.

The other problem with the idea of caring about what we would care about upon further reflection is that what we would care about upon further reflection could be a lot of things depending on exactly how the reflection process occurs. That's not necessarily a reason against moral reflection at all, and I still like to do moral reflection, but it does at least reduce my feeling that moral reflection is definitely progress rather than just value drift.

antimonyanthony @ 2022-01-01T02:12 (+3)

Here's an intuition pump: Is there any number of elegant scientific discoveries made in a Matrix, where no sentient beings at all would benefit from technologies derived from those discoveries, that would justify murdering someone? Scientific discoveries do seem valuable, and many people have the intuition that they're valuable independent of their applications. But is it scope neglect to say that whatever their value, that value just couldn't be commensurable with hedonic wellbeing? If not, what is the problem in principle with saying the same for happiness and suffering?

Omnizoid @ 2022-01-01T08:51 (+2)

I don't have the intuition that scientific discoveries are valuable independent of their use for sentient beings.  

antimonyanthony @ 2022-01-03T15:41 (+1)

Fair enough, I don't either. But there are some non-hedonic things that I have some intuition are valuable independent of hedonics—it's just that I reject this intuition upon reflection (just as I reject the intuition that happiness is valuable independent of relief of suffering upon reflection). Is there anything other than hedonic well-being that you have an intuition is independently good or bad, even if you don't endorse that intuition?

Omnizoid @ 2022-01-03T22:11 (+1)

Yeah, to some degree I have egalitarian intuitions pre-reflection, and some other small non-utilitarian intuitions.  

Brian_Tomasik @ 2021-12-20T15:46 (+1)

Regarding the example about bliss before death, there's another complication if we give weight to preference satisfaction even when a person doesn't know whether those preferences have been satisfied. I give a bit of weight to the value of satisfying preferences even if someone doesn't know about it, based on analogies to my case. (For example, I prefer for the world to contain less suffering even if I don't know that it does.)

Many people would prefer for the end of their lives to be wonderful, to experience something akin to heaven, etc., and adding the bliss at the end of their lives -- even unbeknownst to them until it happened -- would still satisfy those preferences. People might also have preferences like "I want to have a net happy life, even though I usually feel depressed" or "I want to have lots of meaningful experiences", and those preferences would also be satisfied by adding the end-of-life bliss.

Omnizoid @ 2021-12-20T18:43 (+2)

I get why that would appeal to a positive utilitarian but I'm not sure why that would be relevant to a negative utilitarian's view.  Also, we could make it so that this only applies to babies who died before turning two, so they don't have sophisticated preferences about a net positive QOL.  

Brian_Tomasik @ 2021-12-30T08:57 (+1)

but I'm not sure why that would be relevant to a negative utilitarian's view

People have preferences to have wonderful ends to their lives, to have net positive lives, etc. Those preferences may be frustrated by default (especially the first one; most people don't have wonderful ends to their lives) but would become not frustrated once the bliss was added. People's preferences regarding those things are typically much stronger than their preferences not to experience a single pinprick.

Good point about the babies. One might feel that babies and non-human animals still have implicit preferences for experiencing bliss in the future, but I agree that's a more tenuous claim.

Teo Ajantaival @ 2021-12-15T08:10 (+16)

More recent arguments for lexical views are found here:


Also, if we are comparing additively aggregationist NU with additively aggregationist CU, then we should arguably compare the plausibility of their (strongest) repugnant conclusions with each other:


For me, the core issue is the implicit assumption of all else being equal and what it implies for the metaphor of counterbalancing. Specifically, I don’t think any torture is positively counterbalanced by the creation of a causally isolated black box, regardless of what hedonic or non-hedonic things the box contains.

Regarding the independent value of hedonic goods, it is worth noting that the symmetric utilitarian already argues that people may be systematically conflating the instrumental value of non-hedonic goods with them being independently valuable. And the suffering-focused perspective may simply add that this also holds for what we call positive experiences (quote from footnote 49 here):

there is a prima facie argument that strong axiological asymmetries should seem especially plausible to those sympathetic to a hedonistic view. This is because the hedonistic utilitarian already holds that most people are systematically mistaken about the intrinsic value of non-hedonic goods. The fact that people report sincerely valuing things other than happiness and the absence of suffering, even when it is argued to them that such values could just be a conflation of intrinsic with instrumental value, often gives little pause to hedonistic utilitarians. But this is precisely the position a strongly suffering-focused utilitarian is in, relative to symmetric hedonists. That is, although this consideration is not decisive, a symmetric hedonist should not be convinced that suffering-focused views are untenable due to their immediate intuition or perception that happiness is valuable independent of relief of suffering. They would need to offer an argument for why happiness is indeed intrinsically valuable, despite the presence of similar debunking explanations for this inference as for non-hedonic goods.

Finally, I feel drawn to axiological monism as a solution to the problem of value incommensurability, to which I have not seen a satisfying response for pluralist views. For example, about the point that “transhumanism opens the possibility for extreme amounts of happiness, as great as the suffering from brutal torture”, I wonder how people could reach interpersonal agreement on such a tradeoff ratio even in principle. But I am (more) hopeful that people can agree on the independent badness of torture, and that we could use that as a common ground for prioritization if all else fails.

Omnizoid @ 2021-12-15T18:29 (+3)

I'll first respond to the first article you linked.  The problem I see with this solution is that it violates some combination of completeness and transitivity.  Vinding says that, for the list (1 e′-object, 1 e-object, 2 e-objects, 3 e-objects), we can hold that 3e-objects are categorically worse than any number of e′-objects, but that some number of e′-objects can be worse than 1 e-object, which can be worse than 2 e-objects, and so on.  This runs into an issue.  

If we say that 1,000 e′-objects are worse than 1 e-object, 1,000 e-objects are worse than one 2e-object, and 1,000 2e-objects are worse than one 3e-object, then we get the following chain, writing "≻" for "worse than": 

1 trillion e′-objects ≻ 1 billion e-objects ≻ 1 million 2e-objects ≻ 1,000 3e-objects.  
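To spell out the scaling step (my own rendering of the inference, assuming that harms of the same kind aggregate additively):

$$1000\,e' \succ 1\,e \;\Rightarrow\; 10^{9} \times 1000\,e' \;\succ\; 10^{9} \times 1\,e,$$

and likewise for each later link, so by transitivity $10^{12}\,e' \succ 10^{3}\,(3e)$: a trillion e′-objects come out worse than 1,000 3e-objects, contradicting the claim that no number of e′-objects can ever be as bad as a 3e-object.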


The fifth example runs into a similar problem to the one addressed in this post.  We can just apply the calculation at the level of populations.  Surely inflicting 10000 units of pain on one person is less bad than inflicting 9999 units of pain on 10^100^100 people.  

The second article that you linked runs into a similar problem.  It says that what matters is the experience rather than the temperature--thus, it claims that steadily lowering the temperature and asking the NU at what point they'd pinpoint a firm threshold is misleading.  However, we can use units of pain rather than temperature.  While it's hard to precisely quantify units of pain, we have an intuitive grasp of very bad pain, and we can similarly grasp the pain being lessened slightly.  

Next, Vinding argues that consent views avoid this problem.  Consent views run into several issues. 

1. Contrary to Vinding's indication, there is in fact a firm point at which people no longer consent.  For any person, if offered a googolplex utils per second of torture, there is a firm point at which they would stop consenting.  The consent view would have to say that misery slightly above this threshold categorically outweighs misery slightly below it.  

2. Consent views seem to tie the badness of pain to weakness of will (or, more specifically, to people's willingness to endure pain, independent of the badness of the pain).  For example, suppose we have a strict negative utilitarian who holds that no amount of pain is worth any amount of pleasure.  This person would never consent.  However, it seems wrong to say that a pinprick for this person is considerably worse than a pinprick for someone else who experiences the same amount of pain.  

3. It seems we can at least imagine a type of pleasure which one would not consent to ceasing.  A person experiencing unfathomable joy might be willing to endure future torture for one more moment of bliss.  Thus, this view seems to imply caring about pleasure as well.  Love songs often contain sentiments about the singer being willing to endure anything for another moment with the object of the song.  However, it seems strange to say that love is lexically better than all other goods.  

Next, the repugnant conclusions of positive utilitarianism are presented, including creating hell to please the blissful.  This is a bullet I'm very happy to bite.  Much as it would be reasonable for a person to endure temporary misery for greater joy, on a population level it would be reasonable to inflict misery for greater joy.  I perceive ethics as being about what an egoist would do if they experienced the sum total of human experience--from this perspective it does not seem particularly counterintuitive.  Additionally, as I argued in the article, we suck at multiplying.  Hypotheticals involving vast numbers melt our intuitions.  

Additionally, as I argue here, many of our intuitions that go against positive utilitarianism crumble upon reflection.  I'll try to argue against the creating hell objection.  Presumably a pinprick to please the blissful would be permissible, assuming enough blissful people.  As the argument I've presented shows, a lot of pinpricks are as bad as hell.  Thus, enough pleasure for the blissful is worth hell.  

I agree that we should be careful to consider the full implications of all else being equal; however, I don't think that refutes any part of the argument I've presented.  When people experience joy, even when they're not suffering at all, they regard more joy as desirable.  

You argue for axiological monism; that seems fully consistent with utilitarianism.  Much as there are positive and negative numbers, there are positive and negative experiences.  It seems that we regard the positive experiences as good, much as we regard the negative experiences as bad. 

It seems very strange for well-being to be morally neutral, but suffering to be morally bad.  If you were imagining a world without having had any experiences, it seems clear one would expect the enjoyable experiences to be good, and the unenjoyable experiences to be bad.  Evolutionarily, the reason we suffer is to deter actions, while the reason we feel pleasure is to encourage actions.  Whatever mechanism causes suffering to be bad, a similar explanation would seem to cause well-being to be good.  

Thanks for the comment, you've given me some interesting things to consider! 

antimonyanthony @ 2021-12-16T03:52 (+13)

Much as it would be reasonable for a person to endure temporary misery for greater joy, on a population level it would be reasonable to inflict misery for greater joy. I perceive ethics as being about what an egoist would do if they experienced the sum total of human experience--from this perspective it does not seem particularly counterintuitive.

Interesting, I have the exact opposite intuition. i.e. To the extent that it seems to me clearly wrong to inflict misery on some people to add (non-palliative) joy to others, I conclude that I shouldn't endorse my intuitions about egoistically being willing to accept my own great misery for more joy. Though, such intuitions aren't really strong for me in the first place. And when I imagine experiencing the sum total of sentient experience, my inclination to prioritize suffering gets stronger, not weaker.

ETA: A lot of disagreements about axiology and population ethics have this same dynamic. You infer from your intuition that pinpricks to please the blissful is acceptable that we can scale this up to torture to (more intensely) please many more blissful people. I infer from the intuitive absurdity of the latter that maybe we shouldn't think it's so obvious that pinpricks to please the blissful is good. I don't know how to adjudicate this, but I find that people who dismiss NU pretty consistently seem to assume their choice in "modus ponens vs reductio ad absurdum" is the obvious one.

Omnizoid @ 2021-12-16T16:00 (+4)

That's interesting; to me it seems very obvious that it's worth enduring very brief misery for greater overall pleasure.  If given the option to experience 10 seconds of intense misery for a trillionfold increase in your future joy, it seems obvious to me that this would be worth it.  However, even if you conclude it wouldn't be, that doesn't address the main objection I gave in this post.  

antimonyanthony @ 2021-12-16T18:19 (+3)

What do you consider the main objection?

Omnizoid @ 2021-12-16T23:07 (+3)

The one I explained in the post starting with "This view runs into a problem."

antimonyanthony @ 2021-12-16T23:46 (+10)

Got it. That objection doesn’t apply to purely additive NU, which I’m more sympathetic to and which you dismissed as “trivially false.” Basically my response to your argument there is: If these googolplex "utils" are created de novo or provided to beings who are already totally free from suffering, including having no frustrated desire for the utils, why should I care about their nonexistence when pain is at stake—even mild pain?

More than that, though, while I understand why you find the pinprick conclusion absurd, my view is that the available alternatives are even worse. i.e., Either accepting a lexical threshold vulnerable to your continuity argument, or accepting that any arbitrarily horrible degree of suffering can be morally outweighed by enough happiness (or anything else). When I reflect on just how bad “arbitrarily horrible” can get, indeed even just reflecting on bad experiences for which there exist happy experiences of matching or greater intensity, I have to say that last option seems more absurd than pure NU’s flaws. It seems like the least-bad way to reconcile continuity with the intuition I notice from that reflection.

(I prefer not to go much further down this rabbit hole because I’ve had this same debate many times, and it unfortunately just seems to keep coming down to bedrock intuitions. I also have mixed thoughts on the sign of value spreading. Suffice it to say I think it’s still valuable to give some information about why some of us don’t find pure NU trivially false. If you’re curious for more details, I recommend Section 1 of this post I wrote, this comment, and Tomasik’s “Three Types of Negative Utilitarianism.” Incidentally I’m working on a blog post responding to your last objection, the error theory based on empirical asymmetries and scope neglect. The "even just reflecting on bad experiences for which there exist happy experiences of matching or greater intensity" thing I said above is a teaser. Happy to share when it’s finished!)

Omnizoid @ 2021-12-19T23:06 (+2)

Okay.  One question would be whether you share my intuitions in the case I posed to Brian Tomasik.  For reference here it is. "Hmm, this may be a case of divergent intuitions but to me it seems very obvious that if we could make it so that at the end of people's lives they have an experience of unfathomable bliss right before death, containing more well-being than the sum total of all positive experiences that humans have experienced so far, at the cost of one pinprick, it would be extremely good to do so.  In this case it avoids the objection that well-being is only desirable instrumentally, because this is a form of well-being that would otherwise not even have been considered.  That seems far more obvious than any more specific claims about the amount of well-being needed to offset a unit of suffering, particularly because of the trickiness of intuitions dealing with very large numbers."

antimonyanthony @ 2021-12-20T01:29 (+4)

Before reflection, sure, that seems like a worthy trade.

But the trichotomy posed in "Three Types of NU," which I noted in the second paragraph of my last comment, seems inescapable. Suppose I accept it as morally good to inflict small pain along with lots of superhappiness, and reject lexicality (though I don't think this is off the table, despite the continuity arguments). Then I'd have to conclude that any degree of horrible experience has its price. That doesn't just seem absurd, it flies in the face of what ethics just is to me. Sufficiently intense suffering just seems morally serious in a way that nothing else is. If that doesn't resonate with you, I'm stumped.

Omnizoid @ 2021-12-20T03:41 (+3)

Well, I think I grasp the force of the initial intuition.  I just abandon it upon reflection.  I have a strong intuition that extreme suffering is very, very bad.  I don't have the intuition that its badness can't be outweighed by anything else, regardless of what the other thing is.  

antimonyanthony @ 2021-12-23T03:57 (+4)

Here's the post I said I was writing, in my other comment.

Brian_Tomasik @ 2022-01-06T09:02 (+2)

Thanks. :) When I imagine moderate (not unbearable) pains versus moderate pleasures experienced by different people, my intuition is that creating a small number of new moderate pleasures that wouldn't otherwise exist doesn't outweigh a single moderate pain, but there's probably a large enough number (maybe thousands?) of newly created moderate pleasures that outweighs a moderate pain. I guess that would imply weak NU using this particular thought experiment. (Other thought experiments may yield different conclusions.)

MichaelStJules @ 2021-12-16T09:14 (+12)

Why think pleasure and suffering are measurable on the same hedonistic scale? They use pretty different parts of the brain. People can make preference-based tradeoffs between anything, so the fact that they make tradeoffs between pleasure and suffering doesn't clearly establish that there's a single hedonistic scale.

For further related discussion, see some writing by Adam Shriver:

https://library.oapen.org/bitstream/handle/20.500.12657/50994/9783731511083.pdf?sequence=1#page=285

https://link.springer.com/article/10.1007/s13164-013-0171-2 

MichaelStJules @ 2021-12-17T03:45 (+6)

The problem I see with this solution is it violates some combination of completeness and transitivity.

Or just additivity/separability. One such view is rank-discounted utilitarianism:

Maximize $\sum_{i} \beta^{i} u_{i}$, where the $u_{i}$ represent utilities of individual experiences or total life welfare and are sorted increasingly (non-decreasingly), $u_{0} \le u_{1} \le u_{2} \le \cdots$, and $0 < \beta < 1$. A strict negative version might assume $u_{i} \le 0$ for all $i$.

In this case, there are many thresholds, and they depend on others' utilities and $\beta$.

For what it's worth, I think such views have pretty counterintuitive implications, e.g. they reduce to ethical egoism under the possibility of solipsism, or they reduce to maximin in large worlds (without uncertainty). This might be avoidable in practice if you reject the independence of irrelevant alternatives and only consider those affected by your choices, because both arguments depend on there being a large "background" population. Or if you treat solipsism like moral uncertainty and don't just take expected values right through it. Still, I don't find either of these solutions very satisfying, and I prefer to strongly reject egoism. Maximin is not extremely objectionable to me, although I would prefer mostly continuous tradeoffs, including some tradeoffs between number and intensity.

Omnizoid @ 2021-12-17T20:50 (+5)

Sorry, I'm having a lot of trouble understanding this view.  Could you try to explain it simply, in a non-mathematical way?  I have awful mathematical intuition.  

MichaelStJules @ 2021-12-18T21:47 (+3)

For a given utility $v$, adding more individuals or experiences with $v$ as their utility has a marginal contribution to the total that decreases towards 0 with the number of these additional individuals or experiences, and while the marginal contribution never actually reaches 0, it decreases fast enough towards 0 (at the geometric rate $\beta^{i}$) that the contribution of even infinitely* many of them is finite. Since it is finite, it can be outweighed. So, even infinitely many pinpricks are only finitely bad, and some large enough finite number of worse harms must be worse overall (although still finitely bad). In fact the same is true for any two bads with different utilities: some large enough but finite number of the worse harm will outweigh infinitely many of the lesser harm. So, this means you get this kind of weak lexicality everywhere, and every bad is weakly lexically worse than any lesser bad. No thresholds are needed.

In mathematical terms, for any $v < u < 0$, there is some (finite) $N$ large enough that

$$\sum_{i=0}^{N} \beta^{i} v \;<\; \sum_{i=0}^{\infty} \beta^{i} u,$$

because the limit (or infimum) in $N$ of the left-hand side of the inequality is lower than the right-hand side and decreasing, so it has to eventually be lower for some finite $N$.

 

*countably
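To make this concrete, here is a quick numeric check (a sketch of my own based on the view as reconstructed above, not code from the thread; the helper name and the choice $\beta = 0.9$ are mine):

```python
# Sketch: rank-discounted (negative) utilitarianism as described above.
# The i-th worst utility is weighted by beta**i, so infinitely many
# lesser harms sum to a finite total that finitely many worse harms
# can outweigh.

def rank_discounted_total(utilities, beta=0.9):
    """Total value of a population under rank discounting."""
    ranked = sorted(utilities)  # worst first: it gets the largest weight
    return sum(beta**i * u for i, u in enumerate(ranked))

beta = 0.9
u, v = -1.0, -2.0  # u is the lesser harm, v the worse one

# Infinitely many u's contribute the geometric-series limit u / (1 - beta).
limit_u = u / (1 - beta)  # -10.0

# Find the finite number N of v's whose total is already worse than that.
N = 1
while rank_discounted_total([v] * N, beta) > limit_u:
    N += 1

print(N)  # 7: just seven harms of -2 outweigh infinitely many harms of -1
```

With $\beta = 0.9$, seven harms of utility $-2$ total about $-10.43$, below the $-10$ that infinitely many harms of utility $-1$ approach: the weak lexicality described above.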

Omnizoid @ 2021-12-19T22:58 (+2)

Okay, I'm still a bit confused by it, but the objections you've given still apply: it converges to egoism or maximin in large worlds.  It also has the strange implication that the badness of a person's suffering depends on background conditions about other people.  Parfit had a reply to this called the Egyptology objection, I believe: it makes the number of people who suffered in ancient Egypt relevant to current ethical considerations, which seems deeply counterintuitive.  I'm sufficiently confused about the math that I can't really comment on how it avoids the objection that I laid out, but if it has lexicality everywhere, that seems especially counterintuitive--if I understand it correctly, every type of suffering is such that it can't be outweighed by any amount of lesser suffering.  

MichaelStJules @ 2021-12-19T23:47 (+2)

The Egyptology objection can be avoided by applying the view only to current and future (including potential) people, or only to people otherwise affected by your choices. Doing the latter can also avoid objections based on far away populations living at the same time or in the future, too, and reduce (but maybe not eliminate) the convergence to egoism and maximin. However, I think that would also require giving up the independence of irrelevant alternatives (like person-affecting views often do), so that which of two options is best can depend on what other options are available. For what it's worth, I don't find this counterintuitive.

if it has lexicality everywhere, that seems especially counterintuitive--if I understand it correctly, every type of suffering is such that it can't be outweighed by any amount of lesser suffering. 

It seems intuitive to me at least for sufficiently distant welfare levels, although it's a bit weird for very similar welfare levels. If welfare were discrete, and the gaps between welfare levels were large enough (which seems probably false), then this wouldn't be weird to me at all.

Omnizoid @ 2021-12-20T03:35 (+1)

Does your view accept  lexicality for very similar welfare levels?  

MichaelStJules @ 2021-12-20T03:59 (+2)

I was sympathetic to views like rank-discounted (negative) utilitarianism, but not since seeing the paper on the convergence with egoism, and I haven't found a satisfactory way around it. Tentatively, I lean towards negative prioritarianism/utilitarianism or negative lexical threshold prioritarianism/utilitarianism (but still strictly negative, so no positive welfare), or something similar, maybe with some preference-affecting elements.

Brian_Tomasik @ 2021-12-19T17:23 (+2)

Should the right-hand-side sum start at $i=N+1$ rather than $i=0$, because the utilities at level $v$ occupy the $i=0$ to $i=N$ slots?

MichaelStJules @ 2021-12-19T17:59 (+4)

Not in the specific example I'm thinking of, because I'm imagining either the $v$'s happening or the $u$'s happening, but not both (and ignoring other unaffected utilities, but the argument is basically the same if you count them).

Pablo @ 2021-12-20T01:24 (+1)

And the suffering-focused perspective may simply add that this also holds for what we call positive experiences (quote from footnote 49 here)

Although this is an interesting argument, I think it fails against the most plausible versions of hedonistic utilitarianism. The reason I think pleasant experiences are intrinsically good is simply that I believe I can apprehend their goodness directly, by becoming immediately acquainted with what those experiences feel like. This is exactly the same mechanism that I think gives me access to the intrinsic badness of unpleasant experiences. I think it's much harder to debunk a belief in the intrinsic goodness of pleasantness rooted in this kind of immediate acquaintance than it is to debunk beliefs about the value of other objects whose goodness cannot be directly introspected. But should I become persuaded that the debunking arguments could be extended to my beliefs about the goodness of pleasant experience, I would become persuaded that my beliefs about the badness of unpleasant experience are also debunkable. So the argument gives no dialectical advantage to the negative utilitarian vis-à-vis this type of hedonistic utilitarian.

Matthew_Barnett @ 2021-12-15T23:51 (+12)

I'm curious whether you think your arguments apply to negative preference utilitarianism (NPU): the view that we ought to minimize aggregate preference frustration. It shares many features with ordinary negative hedonistic utilitarianism (NHU), such as,

But NPU also has several desirable properties that are not shared with NHU:

Moreover,

That said, there are a number of problems with the theory, including the problem of how to define preference frustration, identify agents across time and space, perform interpersonal utility comparisons, idealize individual preferences, and cope with infinite preferences.

Omnizoid @ 2021-12-16T00:41 (+4)

I'm not sure I quite understand the theory.  Wouldn't painless global destruction be better than utopia, since there would be no unmet preferences?  I also laid out some problems with preference utilitarianism in my other post arguing for utilitarianism.  

Matthew_Barnett @ 2021-12-16T01:33 (+7)

World destruction would violate a ton of people's preferences. Many people who live in the world want it to keep existing. Minimizing preference frustration would presumably give people what they want, rather than killing them (something they don't want).

Omnizoid @ 2021-12-16T03:35 (+1)

Sure, but it would say that creating utopia with a pinprick would be morally bad.   

Matthew_Barnett @ 2021-12-17T05:33 (+9)

Moving from our current world to utopia + pinprick would be a strong moral improvement under NPU. But you're right that if the universe was devoid of all preference-having beings, then creating a utopia with a pinprick would not be recommended.

Omnizoid @ 2021-12-17T20:54 (+4)

That seems deeply unintuitive.  Another strange implication is that enough worlds of utopia plus pinprick would be worse than a world of pure torture.  A final implication is that for a world of Buddhist monks who have rid themselves completely of desires and merely take in the joys of life without having any firm desires for future states of the world, it would be morally neutral to bring their well-being to zero.  

Matthew_Barnett @ 2021-12-17T22:41 (+10)

Another strange implication is that enough worlds of utopia plus pinprick would be worse than a world of pure torture.

I view this implication as merely the consequence of two facts, (1) utilitarians generally endorse torture in the torture vs. dust specks thought experiment, (2) negative preference utilitarians don't find value in creating new beings just to satisfy their preferences.

The first fact is shared by all non-lexical varieties of consequentialism, so it doesn't appear to be a unique critique of negative preference utilitarianism. 

The second fact doesn't seem counterintuitive to me, personally. When I try to visualize why other people find it counterintuitive, I end up imagining that it would be sad/shameful/disappointing if we never created a utopia. But under negative preference utilitarianism, existing preferences to create and live in a utopia are already taken into account. So, it's not optimal to ignore these people's wishes.

On the other hand, I find it unintuitive that we should build preferenceonium (homogeneous matter optimized to have very strong preferences that are immediately satisfied). So, this objection doesn't move me by much.

A final implication is that for a world of Buddhist monks who have rid themselves completely of desires and merely take in the joys of life without having any firm desires for future states of the world, it would be morally neutral to bring their well-being to zero.  

If someone genuinely rid themselves of all desire then, yes, I think it would be acceptable to lower their well-being to zero (note that we should also take into account their preferences not to be exploited in such a manner). But this thought experiment seems hollow to me, because of the well-known difficulty of detaching oneself completely from material wants, or of empathizing with those who have truly done so. 

The force of the thought experiment seems to rest almost entirely on the intuition that the monks have not actually succeeded -- as you say, they "merely take in the joys of life without having desires". But if they really have no desires, then why are they taking joy in life? Indeed, why would they take any action whatsoever?

MichaelStJules @ 2021-12-16T08:47 (+7)

I actually don't think pleasure has any inherent value at all. Is it actually the case that the lexical threshold view giving some positive weight to pleasure is more popular? Note that some negative utilitarians may hold that pleasure doesn't matter (or is lexically dominated by suffering), and also that some intense suffering lexically dominates less intense suffering.

On terminology, I thought the levelling down objection is specifically the objection that making people worse off can be good according to some egalitarian views, and what you're describing is a sequence argument.

Omnizoid @ 2021-12-16T15:56 (+4)

On the first point, would you hold that a world with a billion people with experiences trillions of times better than the best current experience and one pinprick would be morally bad to create?

On the second point, I didn't realize the leveling down argument had an official name.  I'll fix the terminology issues now.  

Teo Ajantaival @ 2021-12-16T18:26 (+5)

Which examples of pleasure cannot be explained as contentment, relief, or anticipated relief?

That is how I currently think of pleasure: as inversely related to the craving to change one's experience. Below are some perhaps useful resources for such views:

On these views, the perfect state is no craving (as far as we look only at the individual herself), and the scale does not go higher than that so to speak. Of course, an open question is whether this is easy to reach without futuristic technology. But I think that perfect contentment is already possible today.

Omnizoid @ 2021-12-16T23:06 (+7)

Most of them.  The experience of reading a good book, having sex, the joy of helping others, the joy of learning philosophy, and nearly every other happy experience seems distinct from the mere absence of pain.  Very good moments do not merely lack unpleasantness--they contain good qualia.  I think our knowledge of our own mental states is reasonably reliable (our memory of them isn't, though), and we can be pretty confident that our well-being is, in fact, desirable.  The anti-phenomenon claim seems strange and runs counter to my own view of my experiences.  I'm sure it would be possible to find meditators who came to the opposite conclusion about well-being.  

Leksu @ 2021-12-18T00:50 (+3)

I wonder how one could explain the pleasures of learning about a subject as contentment, relief, or anticipated relief. Maybe they'd describe it as getting rid of the suffering-inducing desire for knowledge / acceptance from peers / whatever motivates people to learn?

I'm sure it would be possible to find meditators who came to the opposite conclusion about well-being.

If someone reading this happens to know of any I'd be interested to know! I wouldn't be that surprised if they were very rare, since my (layman) impression is that Buddhism aligns well with suffering-focused ethics, and I assume most meditators are influenced by Buddhism.

Teo Ajantaival @ 2021-12-18T11:02 (+4)

Pleasures of learning may be explained by closing open loops, which include unsatisfied curiosity and reflection-based desires for resolving contradictions. And I think anticipated relief is implicitly tracking not only the unmet needs of our future self, but also the unmet needs of others, which we have arguably 'cognitively internalized' (from our history of growing up in an interpersonal world).

Descriptively, some could say that pleasure does exist as a 'separable' phenomenon, but deny that it has any independently aggregable axiological value. Tranquilism says that its pursuit is only valuable insofar as there was a craving for its presence in the first place. Anecdotally, at least one meditator friend agreed that pleasure is something one can 'point to' (and that it can be really intense in some jhana states), but denied that those states are all that interesting compared to the freedom from cravings, which also seems like the main point in most of Buddhism.

MichaelStJules @ 2021-12-17T02:55 (+4)

On the first point, would you hold that a world with a billion people with experiences trillions of times better than the best current experience and one pinprick would be morally bad to create?

I don't think any world would be good to create in itself, including an extremely blissful world without any suffering, so it's at best neutral.

If the pinprick creates a preference against the overall experience (not just that another experience would be better) or a negative overall experience, then I would say that creating that world is bad, but barely so, only as bad as that pinprick.

MichaelStJules @ 2021-12-16T09:02 (+6)

One response to the claim that thresholds would be arbitrary is that utilitarianism may already be arbitrary, too. Why do you think pleasure and suffering can be measured cardinally at all (even on their own scales, but also on a common scale)? Why would people's preferences or intuitions actually track the cardinal value? Why think there is anything cardinal to track at all? What plausible theory of consciousness actually establishes these numbers? Maybe people just have preferences over experiences, or over their merely ordinal hedonic intensities, and it's the preferences that construct the cardinal units; these cardinal units don't come with the experiences themselves.

If this is the case, and we rely on preferences to determine cardinal values, then we may get lexical thresholds in practice, because people may have lexical preferences (at least while in sufficiently extreme pain, perhaps). Or, we have to cast doubt on our use of preferences at all, and then what can we do?
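For a concrete picture of how preferences could construct cardinal units, here is a minimal sketch (my own illustration of the standard-gamble idea from decision theory, not something from the comment; the function names are mine):

```python
# Sketch: the von Neumann-Morgenstern "standard gamble". A person's
# cardinal utility for a sure experience is the probability p at which
# they are indifferent between that experience and a lottery giving the
# best experience with probability p and the worst otherwise. The cardinal
# scale comes from the preferences, not from the experience itself.

def standard_gamble_utility(prefers_lottery, tol=1e-6):
    """Binary-search the indifference probability for a sure experience.

    prefers_lottery(p) -> True if the person prefers the lottery at
    probability p over the sure experience.
    """
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        p = (lo + hi) / 2
        if prefers_lottery(p):
            hi = p  # lottery preferred: indifference point is lower
        else:
            lo = p  # sure experience preferred: indifference point is higher
    return (lo + hi) / 2

# Example: a simulated agent whose indifference point is p = 0.7.
print(round(standard_gamble_utility(lambda p: p > 0.7), 3))  # ~0.7
```

Different people's answers would yield different scales, which is the point: nothing in the experience itself fixes the numbers.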

Omnizoid @ 2021-12-16T23:13 (+4)

It seems clear intuitively that we can distinguish between different levels of well-being and different levels of suffering.  People get different amounts of enjoyment from reading different books, for example.  Any view that doesn't put them on a precise cardinal scale is susceptible to Dutch Book arguments.  I don't have a super strong view on consciousness--I think it's real and probably irreducible.  To the extent that pain arose to deter actions and pleasure arose to encourage other actions, organisms would evolve to experience pain when they do things that decrease their fitness, roughly in proportion to how much they decrease their fitness.  Quantifications always seem arbitrary, but it's clear that there is some quantity involved (unfathomable bliss is far better than a cupcake) even if it's hard to quantify.  

I'm a moral realist, so I don't care only about what things people actually care about.  I care about what we would care about if we were perfectly rational.  I think the argument I presented provides a challenge for consistent lexical views.  I'm a hedonic utilitarian, not a preference utilitarian.  

MichaelStJules @ 2021-12-17T02:32 (+6)

Any view that doesn't put them on a precise cardinal scale is susceptible to Dutch Book arguments.

This would only tell us that our preferences over pleasure and suffering (and everything else!) must be measurable on a cardinal scale (up to affine transformations), but it doesn't tell us that there is one objective scale that should work for everyone. The preference-based scales are definitely different across people, and there's no way to tell which preferences are right (although you might be able to rule out some).

Quantifications always seem arbitrary, but it's clear that there is some quantity involved (unfathomable bliss is far better than a cupcake) even if it's hard to quantify.  

I'm a moral realist, so I don't care only about what things people actually care about.  I care about what we would care about if we were perfectly rational.

I suspect there's actually no definitely correct cardinal scale when it comes to hedonic intensity, so there's no very precise way we ought to care about tradeoffs between intensities (or between pleasure and suffering). 

There might be arguments that can give us bounds, e.g. if a pattern of activation responsible for pleasure happens twice as many times per second in a brain experiencing A as in a brain experiencing B (by activating more neurons in the same pattern, or activating the same neurons more often, or both), then we might think there's twice as much pleasure per second in A as in B. Similarly for suffering. However,

  1. This doesn't tell us how to compare pleasure and suffering.
  2. These arguments lose their applicability across brains when they're sufficiently different; they might use totally different patterns of activations, and just counting neurons firing per second is wrong.
  3. It's not clear this is the right approach to consciousness/hedonic intensity in the first place (although it seems fairly intuitive to me, and I assign it considerable weight).

Aaron Gertler @ 2021-12-15T10:24 (+3)

A couple of notes:

  1. I recommend using hyperlinks rather than just writing out URLs — it makes a post much easier to read, and  signals to potential readers that the post will be easy to read in other ways, too.
  2. You say: "Negative utilitarianism has gained a lot of traction with effective altruists." What makes you think this? 
    1. In particular, I wonder what "a lot" is — my impression is that these views are generally unpopular, and that every time I've seen them discussed here, many more people are on your "side" (in the general sense, if not all the particulars) than on the NU "side".
Omnizoid @ 2021-12-15T17:21 (+1)

I agree with both points and have adjusted the post accordingly.  The reason I originally said that these views had gained a lot of popularity with EAs is that, in my experience, negative utilitarians are primarily effective altruists.  However, it's still very much a minority view among effective altruists.