Why do you find the Repugnant Conclusion repugnant?

By Will Bradshaw @ 2021-12-17T10:00 (+58)

The Repugnant Conclusion has always seemed straightforwardly and unobjectionably true to me. I've always been confused by its alleged repugnance, or why such an anodyne-seeming conclusion merits such a dramatic name.

This isn't like the other standard objections to utilitarianism. I'm not persuaded by concerns about utility monsters or trolley problems, but I feel the sting of those objections – they feel like bullets I need to bite. Whereas the Repugnant Conclusion just seems like a non-problem to me.

I say all this not to argue against concerns about the Repugnant Conclusion, but to motivate my question here. I'd like to have a better understanding of the intuitions that lead people to see this as such a serious problem, and whether I'm missing something that might cause me to put more weight on these sorts of concerns. I'm less interested in technical philosophical arguments here than in intuition pumps – simple thought experiments, or real-world scenarios, or related problems that might help me feel the sting of the objections a bit more.


MichaelStJules @ 2021-12-17T17:32 (+30)

I have asymmetric person-affecting intuitions, and I think the Repugnant Conclusion is a clear example of treating individuals as mere vessels/receptacles for value. Sacrificing the welfare of just one person so that another could be born — even if they would be far better off than the first person — seems wrong to me, ignoring other effects. That I could have an obligation to bring people into existence just for their own sake and at an overall personal cost seems wrong to me. The RC just seems like a worse and more extreme version of this.

In a hypothetical world where I'm the only one around, I feel I should basically be allowed to do whatever I want, as long as no one else will come into existence, and I should have no reason to bring anyone into existence. If no one is born, I'm not harming anyone else or failing in my obligations to others, because they don't and won't exist to be able to experience harm (or an absence of benefit, or worse benefits).

That I should make sacrifices to prevent people with bad lives from being born or to help future people who would exist anyway (including ensuring better off people are born instead of worse off people) does seem right to me. If and because these people will exist, I can harm them or fail to prevent harm to them, and that would be bad.

I have some more writing on the asymmetry here.

Pablo @ 2021-12-17T18:15 (+12)

I'm confused by your answer.

  • You say that "sacrificing the welfare of just one person so that another could be born... seems wrong". But the Repugnant Conclusion is a claim about the relative value of two possible populations, neither of which is assumed to be actual. So I don't understand how you reach the conclusion that, in judging that one of these populations is more valuable, by bringing it about you'd be "sacrificing" the welfare of the possible people in the other population. The situation seems perfectly symmetrical, so either you are "sacrificing" people no matter what you do, or (what seems more plausible) talk of "sacrificing" doesn't really make sense in this context.
  • Even ignoring the above, I'm confused about why you think that "the Repugnant Conclusion is a clear example of treating individuals as mere vessels/receptacles for value" given your endorsement of asymmetrical views. How are you not treating individuals as mere vessels/receptacles for value when, in deciding between two worlds both of which contain suffering but differ in the number of people they contain, you bring about the world that contains less suffering? What do you tell the person whom you subject to a life of misery so that some other person, who would have been even more miserable, is not born?
  • You have said that you don't share the intuition that positive welfare has intrinsic value. But lacking this intuition, how can you intuitively compare the value of two worlds that differ only in how much positive welfare they contain?
  • The Repugnant Conclusion arises also at the intrapersonal level, so it would be very surprising if the reason we find it counterintuitive, insofar as we do, at the interpersonal level has to do with factors—such as treating people as mere receptacles of value or sacrificing people—that are absent at the intrapersonal level.
Lumpyproletariat @ 2021-12-17T21:53 (+10)

This comment seems to me to be requesting clarification in good faith. Might someone who downvoted it explain why, if it wouldn't take too much time or effort? I'm fairly new to the forum and would like a more complete view of the customs.

Edited to add: Perhaps because it was perceived as lower effort than the parent comment, and required another high-effort post in response, which might have been avoided by a closer reading?

MichaelStJules @ 2021-12-18T08:02 (+6)

I never downvoted his comments, and have (just now) instead upvoted them.

However, I would interpret all of Pablo's points in his response not just as requests for clarification but also as objections to my answer, in a post that only asks for people's reasons to object to the RC and is explicitly about basic intuitions rather than technical philosophical arguments (although it's not clear this should extend to replies to answers).

I don't personally mind, and these are interesting points to engage with. However, I can imagine others finding it too intimidating/adversarial/argumentative.

Lumpyproletariat @ 2021-12-18T08:30 (+1)

Thank you for the explanation! 

MichaelStJules @ 2021-12-17T19:36 (+8)

(I've made a bunch of edits to the following comment within 2 hours of posting it.)

You say that "sacrificing the welfare of just one person so that another could be born... seems wrong". But the Repugnant Conclusion is a claim about the relative value of two possible populations, neither of which is assumed to be actual. So I don't understand how you reach the conclusion that, in judging that one of these populations is more valuable, by bringing it about you'd be "sacrificing" the welfare of the possible people in the other population. The situation seems perfectly symmetrical, so either you are "sacrificing" people no matter what you do, or (what seems more plausible) talk of "sacrificing" doesn't really make sense in this context.

If you're a consequentialist whose views are transitive and complete, and satisfy the independence of irrelevant alternatives, then the RC implies what I wrote (ignoring other effects and opportunity costs). The situation is not necessarily symmetrical in practice if you hold person-affecting views, which typically require the rejection of the independence of irrelevant alternatives. I'd recommend the "wide, hard view" in The Asymmetry, Uncertainty, and the Long Term by Teruji Thomas as the view closest to common sense that satisfies the intuitions of my answer above (that I'm aware of), and the talk is somewhat accessible, although the paper can get pretty technical. This view allows future contingent good lives to make up for (but not outweigh) future contingent bad lives, but, as a "hard" view, not to make up for losses to "necessary" people, who would exist regardless. Because it's "wide", it "solves" the Nonidentity problem. The wide version would still reject the RC even if we're choosing between two disjoint contingent populations, I think because "excess" (in number) contingent people with good lives wouldn't count in this particular pairwise comparison. Another way to think about it would be like matching counterparts across worlds, and then we can talk about sacrifices as the differences in welfare between an individual and their counterpart, although I'm not sure the view entails something equivalent to this.
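To make this concrete, here's a toy numerical sketch (my own construction; the matching rule below is a deliberately crude stand-in for a "wide" counterpart comparison, not Thomas's actual formalism):

```python
# Toy comparison of populations A (few, very well off) and Z (many,
# barely well off). Illustrative only, not Teruji Thomas's formalism.

def total_view(pop):
    """Total utilitarianism: sum welfare across everyone who exists."""
    return sum(pop)

def wide_matching_delta(pop_x, pop_y):
    """Crude counterpart matching: pair each person in the smaller
    population with someone in the larger one (best-off first), sum
    the welfare differences over matched pairs, and ignore the
    unmatched 'excess' people. Positive output favors pop_x."""
    x = sorted(pop_x, reverse=True)
    y = sorted(pop_y, reverse=True)
    n = min(len(x), len(y))
    return sum(xi - yi for xi, yi in zip(x[:n], y[:n]))

A = [100] * 10     # population A: 10 people at welfare 100
Z = [1] * 10_000   # population Z: 10,000 people at welfare 1

print(total_view(A), total_view(Z))  # 1000 10000: totalism ranks Z above A
print(wide_matching_delta(A, Z))     # 990: every matched person is worse off in Z
```

The point of the second function is just that the unmatched surplus people in Z do no work in the pairwise comparison, which is roughly why a wide view can reject the RC even for two disjoint contingent populations.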

My own views are much more asymmetric than the views in Thomas's work, and I lean towards negative utilitarianism, since I don't think future contingent good lives can make up for future contingent bad lives at all.

How are you not treating individuals as mere vessels/receptacles for value when, in deciding between two worlds both of which contain suffering but differ in the number of people they contain, you bring about the world that contains less suffering? What do you tell the person whom you subject to a life of misery so that some other person, who would be even more miserable, is not born?

I tell them that I did it to prevent a greater harm that would have otherwise been experienced. The foregoing of benefit caused by someone never being born would not be experienced by that non-existent person. I have some short writing on the asymmetry here that I think can explain this better.

You have said that you don't share the intuition that positive welfare has intrinsic value. But lacking this intuition, how can you compare the value of two worlds that differ only in how much positive welfare they contain?

Lives most people consider good overall can still involve disappointment or suffering, so the RC doesn't necessarily differ only in how much positive welfare there is, depending on how exactly we're imagining it. If we're only talking about positive welfare and no negative welfare, preferences aren't more frustrated/less satisfied than otherwise, and everyone is perfectly content in the "repugnant" world, then I wouldn't object. If I had to make a personal sacrifice to bring someone into existence, I would probably not be perfectly content, possibly unless I thought it was the right thing to do (although I might feel some dissatisfaction either way, and less if I'm doing what I think is the right thing).

Plus, it's worth sharing my more general objection regardless of my denial of positive welfare, since it may reflect others' views, and they can upvote or comment to endorse it if they agree.

The Repugnant Conclusion arises also at the intrapersonal level, so it would be very surprising if the reason we find it counterintuitive, insofar as we do, at the interpersonal level has to do with factors—such as treating people as mere receptacles of value or sacrificing people—that are absent at the intrapersonal level.

Assuming intrapersonal and interpersonal tradeoffs should be treated the same (ignoring indirect effects), yes. It's not obvious that they should be, and I think common sense ethics does not treat them the same.

But even then, the intrapersonal version (+welfarist consequentialism) also violates autonomy and means I shouldn't do whatever I want in my world, so my objection is similar. I think "preference-affecting" views (person-affecting views applied at the level of individual preferences/desires, especially Thomas's "hard, wide view") would likely fare better here for structurally similar reasons, so the "solution" could be similar or even the same.

Symmetric total preference utilitarianism and average preference utilitarianism would imply that it's good for a person to have enough sufficiently strong satisfied preferences created in them, even if it means violating their consent and the preferences they already have or will have. Classical utilitarianism implies involuntary wireheading (done right) is good for a person. Preference-affecting views and antifrustrationism (negative preference utilitarianism) would only endorse violating consent or preferences for a person's own sake in ways that depend on preferences they would have otherwise or anyway, so you violate consent/some preferences to respect others (although I think antifrustrationism does worse than asymmetric preference-affecting views at respecting preferences/consent, and deontological constraints or limiting aggregation would likely do even better).

Pablo @ 2021-12-17T21:01 (+8)

[ETA: You say you've made edits to your post, so it's possible some of my replies are addressed by your revisions. I am always responding to the text I'm quoting, which may differ from the final version of your comment.]

If you're a consequentialist whose views are transitive, complete and satisfy the independence of irrelevant alternatives, the RC implies what I wrote (ignoring other effects and opportunity costs). The situation is not symmetrical if you hold person-affecting views, which typically require the rejection of the independence of irrelevant alternatives. I'd recommend the "wide, hard view" in The Asymmetry, Uncertainty, and the Long Term by Teruji Thomas as the view closest to common sense that satisfies the intuitions of my answer above (that I'm aware of), and the talk is somewhat accessible, although the paper can get pretty technical. This view allows future contingent good lives to make up for (but not outweigh) future contingent bad lives, but, as a "hard" view, not losses to "necessary" people, who would exist regardless. Because it's "wide", it "solves" the Nonidentity problem.

I don't have time to look into this right now, but I also feel that this probably won't provide an answer to the question I meant to ask. (Apologies if my wording was unclear.) Call the world with few, very happy people, A, and the world with lots of mildly happy people, Z. The question is, then, simply: "If bringing about Z sacrifices people in A, why doesn't bringing about A sacrifice people in Z?" You say that you'd be sacrificing someone "even if they would be far better off than the first person", which seems to commit you to the claim that you would indeed be sacrificing people in Z by bringing about A.

I tell them that I did it to prevent a greater harm that would have otherwise been experienced. The foregoing of benefit caused by someone never being born would not be experienced by that non-existent person. I have some short writing on the asymmetry here that I think can explain this better.

I don't understand how this answer explains why you are not treating the person as a value receptacle, given that you believe this is what the total utilitarian does in the Repugnant Conclusion. I can see why a negative utilitarian and/or a person-affecting theorist would treat these two cases differently. What I don't understand is why the difference is supposed to consist in that people are being treated as value receptacles in one case, but not in the other. This just seems to misdiagnose what's going on here.

The comment you shared helps me understand the Asymmetry, but not your claim about value receptacles.

Lives most people consider good overall can still involve disappointment or suffering, so the RC doesn't necessarily differ only in how much positive welfare there is, depending on how exactly we're imagining it.

I agree that you can have people with lifetime wellbeing just above neutrality either because they live their entire lives at that level or because they have lots of ups and downs that almost perfectly cancel each other out (and anything in between). I think discussions of the Repugnant Conclusion sometimes make the stronger assumption that people's lives are continuously just above neutrality ("muzak and potatoes"), and that people may respond to the thought experiment differently depending on whether or not this assumption is made.

For a negative utilitarian, it seems that whether the assumption is made is in fact crucial, since the "muzak and potatoes" life is as good as it can be (it lacks any unpleasantness) whereas lives in other Repugnant Conclusion scenarios could contain huge amounts of suffering. I hadn't appreciated this point when I wrote my previous comment, but now that I do, I feel even more confused.

Assuming intrapersonal and interpersonal tradeoffs should be treated the same (ignoring indirect effects), yes. It's not obvious that they should be, and I think common sense ethics does not treat them the same.

Oh, I wasn't saying they should be treated the same. It's pretty clear that commonsense morality treats them differently.

My point is that the phenomenology of the intuitions at the interpersonal and intrapersonal levels is essentially the same, which strongly suggests that the same factor is triggering those intuitions in both cases. Any explanation of the counterintuitiveness of the Repugnant Conclusion in terms of factors that are specific to the interpersonal case is therefore implausible.

Although I'm not sure I'm understanding you correctly, you then seem to be suggesting that your views can in fact vindicate the claim that people would also in some sense be sacrificed in the intrapersonal case. Is this what you are claiming? It would help me if you describe what you yourself believe, as opposed to discussing the implications of a wide variety of views.

[Of course, feel free to ignore any of this if you aren't interested, etc.]

MichaelStJules @ 2021-12-18T07:39 (+8)

(FWIW, I never downvoted your comments and have upvoted them instead, and I appreciate the engagement and thoughtful questions/pushback, since it helps me make my own views clearer. Since I spent several hours on this thread, I might not respond quickly or at all to further comments.)

The question is, then, simply: "If bringing about Z sacrifices people in A, why doesn't bringing about A sacrifice people in Z?" You say that you'd be sacrificing someone "even if they would be far better off than the first person", which seems to commit you to the claim that you would indeed be sacrificing people in Z by bringing about A.

Sorry, I tried to respond to that in an edit you must have missed, since I realized I didn't after posting my reply. In short, a wide person-affecting view means that Z would involve "sacrifice" and A would not, if both populations are completely disjoint and contingent, roughly because the people in A have worse off "counterparts" in Z, and the excess positive welfare people in Z without counterparts don't compensate for this. No one in Z is better off than anyone in A, so none are better off than their counterparts in A, so there can't be any sacrifice in a "wide" way in this direction. The Nonidentity problem would involve "sacrifice" in one way only, too, under a wide view.

(If all the people in Z already exist, and none of the people in A exist, then going from Z to A by killing everyone in Z could indeed mean "sacrificing" the people in Z for those in A, under some person-affecting views, and be bad under some such views.

Under a narrow view (instead of a wide one), with disjoint contingent populations, we'd be indifferent between A and Z, or they'd be incomparable, and both or neither would involve "sacrifice".)


On value receptacles, here's a quote by Frick (on his website), from a paper in which he defends the procreation asymmetry:

For another, it feeds a common criticism of utilitarianism, namely that it treats people as fungible and views them in a quasi-instrumental fashion. Instrumental valuing is an attitude that we have towards particulars. However, to value something instrumentally is to value it, in essence, for its causal properties. But these same causal properties could just as well be instantiated by some other particular thing. Hence, insofar as a particular entity is valued only instrumentally, it is regarded as fungible. Similarly, a teleological view which regards our welfare-related reasons as purely state-regarding can be accused of taking a quasi-instrumental approach towards people. It views them as fungible receptacles for well-being, not as mattering qua individuals. Totalist utilitarianism, it is often said, does not take persons sufficiently seriously. By treating the moral significance of persons and their well-being as derivative of their contribution to valuable states of affairs, it reverses what strikes most of us as the correct order of dependence. Human wellbeing matters because people matter – not vice versa.

I haven't thought much about this particular way of framing the receptacle objection, and what I have in mind is basically what Frick wrote later: 

any reasons to confer well-being on a person are conditional on the fact of her existence.

This is a bit vague: what do we mean by "conditional"? But there are plausible interpretations that symmetric person-affecting views, asymmetric person-affecting views and negative axiologies satisfy, while the total view, reverse asymmetric person-affecting views and positive axiologies don't really seem to have such plausible interpretations (or have fewer and/or less plausible interpretations).

I have two ways in mind that seem compatible with the procreation asymmetry, but not the total view:

First, in line with my linked shortform comment about the asymmetry, a person's interests should only direct us from outcomes in which they (the person, or the given interests) exist or will exist to the same or other outcomes (possibly including outcomes in which they don't exist), and all reasons with regard to a given person are of this form. I think this is basically an actualist argument (which Frick discusses and objects to in his paper). Having reasons regarding an individual A in an outcome in which they don't exist direct us towards an outcome in which they do exist would not seem conditional on A's existence. It's more "conditional" if the reasons regarding a given outcome come from that outcome rather than from other outcomes.

Second, there's Frick's approach. Here's a simplified evaluative version: 

All of our reasons with regard to persons should be of the following form:

It is in one way better that the following is satisfied: if person A exists, then P(A),

where P is a predicate that depends terminally only on A's interests.

Setting P(A)="A has a life worth living" would give us reason to prevent lives not worth living. Plus, there's no P(A) we could use that would imply that a given world with A is in one way better (due to the statement with P(A)) than a given world without A. So, this is compatible with the procreation asymmetry, but not the total view.
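One way to write the schema down (my own formalization, reading the conditional materially; Frick's own formulation is more careful):

```latex
% All welfare-related reasons regarding a person A take the form:
% it is in one way better that the conditional below is satisfied,
% where P depends terminally only on A's interests.
\mathrm{Better}_1\big(\, \mathrm{Exists}(A) \rightarrow P(A) \,\big)
% If A never exists, the antecedent is false and the conditional is
% vacuously satisfied, so no choice of P generates a reason to create A.
% With P(A) = ``A's life is worth living'', a world where A exists with
% a bad life violates the conditional, giving a reason against creating A.
```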

It could be "wide" and solve the Nonidentity problem, since we can find P such that P would be satisfied for B but not A, if B would be better off than A, so we would have more reasons for A not to exist than for B not to exist.

It's also compatible with antifrustrationism and negative utilitarianism in a few ways:

  1. If we apply it to preferences instead of whole persons, with predicates like P(A)="A is satisfied"
  2. If we use predicates like "P(A)=if A has interest y, then y is satisfied at least to degree d"
  3. If we use predicates like "P(A)=A has welfare at least w", allowing for the possibility of more positive welfare being better than less in an existing individual, but being perfectionistic about it, so that anything worse than the best is worse than nonexistence.

I think part of what follows in Frick's paper is about applying/extending this in a way that isn't basically antinatalist.

 

For a negative utilitarian, it seems that whether the assumption is made is in fact crucial, since the "muzak and potatoes" life is as good as it can be (it lacks any unpleasantness) whereas other lives could contain huge amounts of suffering.

Ya, this seems right to me.

 

My point is that the phenomenology of the intuitions at the interpersonal and intrapersonal levels is essentially the same, which strongly suggests that the same factor is triggering those intuitions in both cases.

What do you mean by "the phenomenology of the intuitions" here?

One important difference between the interpersonal and intrapersonal cases is that in the intrapersonal case, people may (or may not!) prefer to live much longer overall, even sacrificing their other interests. It's not clear they're actually worse off overall or even at each moment in something that might "look" like Z, once we take the preference(s) for Z over A into account. We might be miscalculating the utilities before doing so. For something similar to happen in the interpersonal case, the people in A would have to prefer Z, and then similarly, Z wouldn't seem so objectionable.

 

Although I'm not sure I'm understanding you correctly, you then seem to be suggesting that your views can in fact vindicate the claim that you'd be sacrificing your future selves or treating them as value receptacles. Is this what you are claiming? It would help me if you describe what you yourself believe, as opposed to discussing the implications of a wide variety of views.

It's more about my interests/preferences than my future selves, and not sacrificing them or treating them as value receptacles. I think respect for autonomy/preferences requires not treating our preferences as mere value receptacles that you can just make more of to get more value and make things go better, and this can rule out both the interpersonal RC and the intrapersonal RC. This is in principle, ignoring other reasons, indirect effects, etc., so not necessarily in practice.

I have moral uncertainty, and I'm sympathetic to multiple views, but what they have in common is that I deny the existence of terminal goods (whose creation is good in itself, or that can make up for bads or for other things that matter going worse than otherwise) and that I recognize the existence of terminal bads. They're all versions of negative prioritarianism/utilitarianism or very similar.

Pablo @ 2021-12-20T01:55 (+4)

Thanks for the detailed reply. For now, I will only address your comments at the end, since I haven't read the sources you cite and haven't thought about this much beyond what I wrote previously. (As a note of color, Johann and I did the BPhil together and used to meet every week for several hours to discuss philosophy, although he kept developing his views about population ethics after he moved to Harvard; you have rekindled my interest in reading his dissertation.)

What do you mean by "the phenomenology of the intuitions" here?

I mean that the intuitions triggered by the interpersonal and the intrapersonal cases feel very similar from the inside. For example, if I try to describe why the interpersonal case feels repugnant, I'm inclined to say stuff like "it feels like something would be missing" or "there's more to life than that"; and this is exactly what I would also say to describe why the intrapersonal case feels repugnant. How these two intuitions feel also makes me reasonably confident that fMRI scans of people presented with both cases would show very similar patterns of brain activity.

One important difference between the interpersonal and intrapersonal cases is that in the intrapersonal case, people may (or may not!) prefer to live much longer overall, even sacrificing their other interests. It's not clear they're actually worse off overall or even at each moment in something that might "look" like Z, once we take the preference(s) for Z over A into account. We might be miscalculating the utilities before doing so. For something similar to happen in the interpersonal case, the people in A would have to prefer Z, and then similarly, Z wouldn't seem so objectionable.

I think that supposed difference is ruled out by the way the intrapersonal case is constructed. In any case, what I regard as the most interesting intrapersonal version is one where it is analogous to the interpersonal version in this respect. Of course, we can discuss a scenario of the sort you describe, but then I would no longer say that my intuitions about the two cases feel very similar, or that we can learn much by comparing the two cases.
 

I have moral uncertainty, and I'm sympathetic to multiple views, but what they have in common is that I deny the existence of terminal goods (whose creation is good in itself, or that can make up for bads or for other things that matter going worse than otherwise) and that I recognize the existence of terminal bads. They're all versions of negative prioritarianism/utilitarianism or very similar.

Makes sense. Thanks for the clarification.
 

willbradshaw @ 2021-12-18T13:56 (+5)

Thanks, I appreciated reading this. I think you and I think about morality very differently, which means this doesn't update me very much, but it's still good to get a more emotional grasp of what people feel about these questions.

jackmalde @ 2021-12-17T13:54 (+23)

I'll try to help you understand why (I think) some people feel the sting of the repugnant conclusion (RC), but why I think they are ultimately wrong to do so. I should say that I personally don't find the repugnant conclusion repugnant so what I'm about to say might be completely missing the point. I am slightly stung by the "very repugnant conclusion", but that might be for another time.

In short, I think some people find RC repugnant based on a misunderstanding of what a life "barely worth living" would mean in practice. I think most people imagine such a life to be quite "bad" on the whole, but I think this is a mistake.

Note that the vast majority of people on earth want to continue living. This would include the vast majority of people who live in extreme poverty or who are undergoing horrific abuse. It would also include people who constantly consider suicide to end their pain but never go through with it. In normal parlance we would say these people live "bad" lives. However, we might conclude that these people are living lives worth living if they don't want their life to end / don't choose to end their life. So my guess is people imagine "a life barely worth living" to be a pretty "bad" one. The actual wording of "a life barely worth living" is inherently negative in how it is framed anyway. So the RC would amount to the claim that a load of people with pretty "bad" lives by intuitive standards is better than a smaller number of people with absolutely amazing lives. Accepting the RC would be like creating another Africa with all its poverty and hardship instead of creating another Norway with all its happiness. Or creating loads of people attending daily suicide support groups rather than a smaller number of people living the best lives we can imagine. Most people would find these repugnant things to do, and I personally would feel the sting here.

The problem with the above reasoning becomes clear when we think more carefully about "a life barely worth living". Firstly, to state what should be obvious, such a life is worth living by definition. So to be put off by the existence of such lives doesn't really make logical sense, unless you deny the theoretical existence of positive lives in the first place. This doesn't negate people's feeling of repugnance, but I think it should cause them to question it.

Where does this leave us with people attending daily suicide support groups? Well, my preferred way forward is to question whether these people do in fact have lives worth living, or at least to question whether we have any idea on the matter. As is pointed out by Dasgupta (2016), the idea that someone who wants to continue living must be living a life of positive welfare ignores the badness of death. It is certainly possible for someone to be living a life of negative welfare, but be reluctant to end it because the subjective badness of death exceeds the badness of continuing to live. Death is indeed a horrible prospect for most when you consider factors such as religious prohibition, fear of the process of dying, the thought that one would be betraying family and friends, the deep resistance to the idea of taking one's own life that has been built into us through selection pressure (which would cause even someone in deep misery to balk), and the revelation of one's misery to others when one wants it to remain undisclosed even after death.

In light of this Dasgupta puts forward the "creation test" as a way to determine the zero-level of wellbeing. What is the worst life that you would willingly create? Dasgupta says that should be the zero level. Most altruists wouldn't create more people living in extreme poverty, or people with constant thoughts of suicide, implying these people probably live negative lives. I personally would only create a life that most of us would say is very good!

I'm not saying Dasgupta's creation test is perfect - I'm undecided on how useful it is. This paper argues that we have no sufficiently clear sense of what a minimally good life is like. If this is indeed true, as the paper argues, the RC loses its probative force, because we cannot judge lives "barely worth living" as "bad" when we don't really have a clue how good they are.

So to sum up my rather lengthy response, I think that many people who think RC is repugnant assume that "lives barely worth living" are those we would say are "bad" in common parlance which can lead to an understandable feeling of repugnance. I think they are wrong - either "lives barely worth living" are much better than being "bad", in which case RC loses repugnance, or we don't know how good "lives barely worth living" are and RC doesn't even get off the ground at all.

sawyer @ 2021-12-22T19:56 (+7)

This is exactly my intuition. When I think about "lives barely worth living" I imagine someone who is constantly on the edge of suicide. Then I think, well, that seems really bad to me, but who am I to say that that person's life is not worth living? If I can't look that person in the eye and say, "your life is not worth living" (which I almost certainly can't do), then how can I say that my world of "lives barely worth living" is made up of people with better lives than theirs?

Your paraphrasing of Dasgupta's insights is helpful, and I think incorporating the negativity of death may alleviate some of my perceived Repugnancy of the aforementioned Conclusion.

Florian Habermacher @ 2021-12-18T13:13 (+2)

Interesting suggestion! It sounds plausible that "barely worth living" might intuitively be mistaken for something more akin to 'so bad they'd almost want to kill themselves', i.e. people who might well have even net negative lives (which I think would be a poignant way to say what you write).

willbradshaw @ 2021-12-17T14:01 (+2)

While I appreciate you sharing your thoughts, I don't think replying to a post asking people to talk about why they dislike the repugnant conclusion with a lengthy argument about why those people are making a basic mistake is really going to help me achieve my goal here.

I don't want to litigate these intuitions here, I want to understand them. We can do the litigation elsewhere.

jackmalde @ 2021-12-17T14:17 (+6)

You say "I'd like to have a better understanding of the intuitions that lead people to seeing this as such a serious problem, and whether I'm missing something that might cause me to put more weight on these sorts of concerns" in which case I think my whole comment should be of relevance and I am confused by your pushback, unless of course you are only interested in the opinion of people who find RC repugnant in which case I apologise.

Derek Shiller @ 2021-12-17T17:09 (+21)

I find your attitude somewhat surprising. I'm much less sympathetic to trolley problems or utility monsters than to the repugnant conclusion. I can see why some people aren't moved by it, but I have a hard time seeing how someone couldn't get what is moving about it. Since it rests on a rather basic intuition, it's not super easy to pump. But I wonder, what do you think about this alternative, which seems to draw on similar intuitions for me:

Suppose that you could right now, at this moment, choose between continuing to live your life, with all its ups and downs and complexity, or going into a state of near-total suspended animation. In the state of suspended animation, you will have no thoughts and no feelings, except you will have a sensation of sucking on a rather disappointing but not altogether bad cough drop. You won't be able to meditate on your existence, or focus on the different aspects of the flavor. You won't feel pain or boredom. Just the cough drop. If you continue your life, you'll die in 40 years. If you go into the state of suspended animation, it will last for 40,000 years (or 500,000, or 20 million, whatever number it takes). Is it totally obvious that the right thing to do is to opt for the suspended animation (at least, from a selfish perspective)?

willbradshaw @ 2021-12-19T15:27 (+8)

Thanks for trying to come up with a thought experiment that targets your intuitions here! That's exactly what I was hoping people would do.

For me, this thought experiment feels like it raises more "value of complexity" questions than the canonical RC. Though from the comments it seems like complexity vs homogeneity intuitions are contributing to quite a few people's anti-RC feelings, so it's not bad to have a thought experiment that targets that.

In any case, I think there probably is a sufficiently large number of years at which I would take the cough drop, all else equal. Certainly I don't feel extremely strong resistance to the idea of doing so. However, I'm a slightly non-optimal person to pose this thought experiment to, in that I'm not at all sure that my life so far has been good for me on net.

jackmalde @ 2021-12-18T14:24 (+2)

By the way, I apologise for implying you should "remove" something from your comment; I didn't literally mean that. What I should have said is that I think the words led to an unhelpful characterisation of the life being lived in the thought experiment. The OP doesn't appreciate my contributions so I am going to leave this post.

Pablo @ 2021-12-17T13:10 (+18)

The term 'repugnant' is unfortunate; I think it's best to focus on whether there's anything morally problematic or deficient about such a world, irrespective of whether it elicits emotions of moral repugnance.

Personally, when I reflect on a universe that only contains experiences of "muzak and potatoes", I feel there's something missing from it, no matter how many such experiences it contains. I'm still willing to bite the bullet and conclude that my feeling is non-veridical, but I do experience the feeling.

One can also consider the parallel situation at the intrapersonal level. Parfit asks us to compare a "Century of Ecstasy" with a "Drab Eternity". I definitely feel the appeal of the former, even if, on reflection, I'd probably opt for the latter. (Though note that Parfit's wording here is also tendentious; a better name for the second option would be a "Mildly Pleasant Eternity".)

I'm not sure I can describe this feeling more clearly or accurately, though, so this isn't really an answer to your question.

antimonyanthony @ 2021-12-18T17:41 (+10)

It depends on the formulation. I don't find Parfit's version of the RC, where the people with muzak-and-potatoes lives "never suffer," repugnant. But according to total (symmetric) utilitarianism, that RC is morally equivalent to another version, which I find highly repugnant. Imagine (A) as large and blissful a utopia as you like. Now imagine (Z) a world where many more people than in this utopia each have the following life: for a million years, they endure constant, unbearable torture. After that, they eat potatoes and listen to muzak peacefully for a sufficiently large number of years.

I just don't see how the latter experiences, no matter how many of them, could be considered morally significant in a way that outweighs the torture. You can chalk this up to scope neglect if you want, but (1) my intuitions are definitely not scope-neglectful when comparing suffering to suffering, and (2) I have the same intuition about milder cases where the amount of happiness a classical utilitarian would (probably) accept as outweighing is practically imaginable. e.g. Each person is born experiencing 1 day of depression, then eats potatoes for a normal human lifespan (~30,000 days).

MichaelStJules @ 2021-12-19T19:11 (+9)

Adding another answer, although I think it's basically pretty similar to my first.

I can imagine myself behind a veil of ignorance, comparing the two populations, even on a small scale, e.g. 2 vs 3 people. In the smaller population with higher average welfare, compared to the larger one with lower average welfare, I imagine myself either

  1. as having higher welfare and finding that better, or
  2. never existing at all and not caring about that fact, because I wouldn't be around to ever care.

So, overall, the smaller population seems better.

 

I can make it more concrete, too: optimal family size. A small-scale RC could imply that the optimal family size is larger than the parents and older siblings would prefer (ignoring indirect concerns), and so the parents should have another child even if it means they and their existing children would be worse off and would regret it. That seems wrong to me, because if those extra children are not born, they won't be wronged/worse off, but others will be worse off than otherwise.

In the long run, everyone would become contingent people, too, but then you can apply the same kind of veil of ignorance intuition pump. People can still think a world where family sizes are smaller would have been better, even if they know they wouldn't have personally existed, since they imagine themselves either

  1. as someone else (a "counterpart") in that other world, and being better off, or
  2. not existing at all (as an "extra" person) in their own world, which doesn't bother them, since they wouldn't have ever been around in the other world to be bothered.

Naively, at least, this seems to have illiberal implications for contraceptives, abortion, etc.

 

There's also an average utilitarian veil of ignorance intuition pump: imagine yourself as a random person in each of the possible worlds, and notice that your welfare would be higher in expectation in the world with fewer people, and that seems better. (I personally distrust this intuition pump, since average utilitarianism has other implications that seem very wrong to me.)
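For concreteness, here's the arithmetic of that last pump as a minimal sketch (the numbers are mine and purely illustrative):

```python
# Veil-of-ignorance framing: compare the expected welfare of a randomly
# chosen person in each world against the population totals.
A = [100] * 10      # smaller population, very high welfare each
Z = [1] * 10_000    # much larger population, barely positive welfare each

def average(pop):
    return sum(pop) / len(pop)

print(average(A), average(Z))  # 100.0 1.0: the random person fares far better in A
print(sum(A), sum(Z))          # 1000 10000: the totals still favor Z
```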

willbradshaw @ 2021-12-20T08:32 (+6)

Thanks. We of course run here into the standard total-vs-person-affecting dispute: I would prefer to exist with positive welfare rather than not exist, and all this "not around to care" stuff feels like a very odd way to compare scenarios to me.

Matt Ball @ 2021-12-22T19:47 (+7)

I have serious doubts about inter-personal trade-offs.
https://www.mattball.org/2021/12/note-and-more-on-ethics-including-case.html
which follows
https://www.mattball.org/2021/12/ethics-is-not-simple-math-problem.html
 

willbradshaw @ 2021-12-19T15:18 (+7)

One approach I was expecting someone to try here, but haven't seen, is trying to motivate the intuition at a smaller scale – e.g. comparing a small number of very happy people to a large-but-easily-imaginable number of slightly happy people.

If the intuitions underlying aversion to the Repugnant Conclusion only kick in for extremely large populations, then I'm more confidently inclined to say they are a mistake arising from an inability to imagine at that scale. But given that the original argument for the RC proceeds by an iterated sequence of small steps, it seems like the issues that make people averse to it should start to kick in much sooner. Yet most commenters here have focused entirely on the vast-population case.

MichaelStJules @ 2021-12-19T21:45 (+3)

I thought my first answer already did what you're asking for, and it has (right now) the most upvotes, which may reflect endorsement. Are you looking for something more concrete or that isn't tied to people who would exist anyway being worse off? I added another answer.

The ways to avoid the RC, AFAIK, should fall under at least one of the following, and so intuitions/thought experiments should match:

  1. Have some kind of threshold (a critical level, a sufficientarian threshold or a lexical threshold), such that marginally good lives fall below it while the very good lives are above it. It could be a "vague" threshold. (See the toy sketch after this list.)
  2. Non-additive (aggregating in some other way, e.g. with decreasing marginal returns to additional people, average utilitarianism, maximin, or softer versions like rank-discounted utilitarianism which strongly prioritize the worst off, or strongly prioritizing better lives, like geometrism).
  3. Person-affecting.
  4. Carry in other assumptions/values and appeal to them, e.g. more overall bad in the larger population.
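As a toy illustration of option 1 (my own numbers, using a deliberately simple critical-level view):

```python
# Critical-level utilitarianism: each life contributes (welfare - c),
# so lives below the critical level c count against the total.
def critical_level_value(pop, c):
    return sum(w - c for w in pop)

A = [100] * 10      # few, very good lives (well above the threshold)
Z = [1] * 10_000    # many, marginally good lives (below the threshold)

c = 5               # assumed critical level
print(critical_level_value(A, c))  # 950
print(critical_level_value(Z, c))  # -40000: the comparison now favors A
```

(The standard cost of this move is that adding lives between the neutral point and the critical level now counts as bad, which is one route to the so-called sadistic conclusion.)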

See also:

https://plato.stanford.edu/entries/repugnant-conclusion/#EigWayDeaRepCon

antimonyanthony @ 2021-12-19T16:44 (+2)

This is a fair point. For what it's worth, I do honestly think a world of 10 people with utopian lives (of normal length) is better than a world with 10 billion people with lives like the ones I described in my answer. I guess it depends on the details of "utopian" - seems plausible that for me and many others to endorse this claim, such lives need not be so imaginably awesome that a classical utilitarian would agree the total utility of the 10 billion population world is worse.

Jackson Wagner @ 2021-12-17T11:24 (+7)

To answer on the level of imagery and associations rather than trying to make a strong philosophical argument: The Repugnant Conclusion makes me think of the dire misery of extremely poor places, like Haiti or Congo. People in extreme poverty are often malnourished, they have to put up with health problems and live in terrible conditions. On top of all those miseries, they have to get through it all with very limited education / access to information, and very limited freedom / agency in life. (But I agree with jackmalde that their lives are nevertheless worth living vs nonexistence -- I would still prefer to live if I was in their situation.)

Compared to an Earth with 10 Billion people living at developed-world standards, it just seems crazy to me that anyone would prefer a world with, say, 1 Trillion people eking out their lives in a trash-strewn Malthusian wasteland. The latter seems like a static world with no variety and no future, without the slack necessary for individuals to appreciate life or for civilization as a whole to grow, explore, learn, and change.

This image leads to various wacky political objections, which are not philosophically relevant since nobody said the Repugnant Conclusion was supposed to apply to the actual situation of Earth in 2021 (as opposed to, say, a hypothetical comparison between 10 Billion rich people vs 3^^^3 lives barely worth living). But emotionally and ideologically, the Repugnant Conclusion brings to mind a range of appropriately aversive images.

So, in the practical world, the idea that humanity should aim to max out the Earth's carrying capacity without regard to quality of life seems insane, and the Repugnant Conclusion will therefore always seem like a bizarre idea totally opposed to ordinary moral reasoning, even if it's technically correct when you use sufficiently big numbers.

Separately from all the above, I also feel that there would be an extreme "samey-ness" to all of these barely-worth-living lives. It seems farfetched to me that you are still adding moral value linearly when you create the quadrillionth person to complete your low-quality-of-life population -- how could their repetitive overlapping experiences match up to the richness and diversity of qualia experienced by a smaller crew of less-deprived humans?

willbradshaw @ 2021-12-19T15:13 (+4)

Thanks, this is one of my favourite responses here. I appreciated your sharing your mental imagery and listing out some consequences of that imagery. I think I am more inclined than you to say that many people alive today have lives not worth living, but you address confusion about that point in another comment. And while I'm more pro-hedonium than you I also wonder about "tiling" issues.

Do your intuitions about this stay consistent if you reverse the ordering? That is, as I think another comment on this post said elsewhere, if you start with a large population of just-barely-happy people, and then replace them with a much smaller population of very happy people, does that seem like a good trade to you?

Jackson Wagner @ 2021-12-21T05:13 (+5)

Yes, my intuition stays the same if the ordering is reversed; population A seems better than population Z and that's that. (For instance, if the population of an isolated valley had grown so much, and people had subdivided their farmland, to the point that each plot of land was barely enough for subsistence and the people regularly suffered conflict and famine, in most situations I would think it good if those people voluntarily made a cultural change towards having fewer children, such that over a few generations the population would reduce to say 1/3 the original level, and everyone had enough buffer that they could live in peace with plenty to eat and live much happier lives. Of course I would have trouble "wishing people into nonexistence" depending on how much the metaphysical operation seemed to resemble snuffing out an existing life... I would always be inclined to let people live out their existing lives.)

Furthermore, I could even be tempted into accepting a trade of Population A (whose lives are already quite good, much better than barely-worth-living) for a utility-monster style even-smaller population of extremely good lives. But at this point I should clarify that although I might be a utilitarian, I am not a "hedonic" utilitarian and I find it weird that people are always talking about positive emotional valence of experience rather than a more complex basket of values. I already mentioned how I value diversity of experience. I also highly value something like intelligence or "developedness of consciousness":

  • It seems silly to me that the ultimate goal becomes Superhappy states of incredible joy and ecstasy. Perhaps this is a failure of my imagination, since I am incapable of really picturing just how good Superhappy states would be. Or perhaps I have cultural blinders that try to ward me off of wireheading (via drug addiction, etc) by indoctrinating me to believe statements like "life isn't all about happiness; being connected to reality & other people is important, and having a deep understanding the universe is better than just feeling joyful".

  • Imagine the following choice: "Take the blue pill and you'll experience unimaginable joy for the rest of your life (not just one-note heroin-esque joy, but complex joy that cycles through the multiple different shades of positive feeling that the human mind can experience). Take the red pill, and you'll experience a massive increase in the clarity of your consciousness together with a gigantic boost in IQ to superhuman levels, allowing you to have many complex experiences that are currently out of reach for you, just like how rats are incapable of using language, understanding death, etc. But despite all those revelations, your net happiness level will be totally similar to your current life." Obviously the joy has its appeal -- both are great options! -- but I would take the red pill.

  • Although I care about the suffering of animals like chimps and factory-farmed chickens and would incorporate it into my utilitarian calculus, I also think that there is a sense in which no number of literal-rats-on-heroin could fully substitute for a human. If you offered me to trade 1 human life for creating a planet with 1 quadrillion rats on heroin, I'd probably take that deal over and over for the first few thousand button-presses. But I wouldn't just keep going until Earth ran out of people, because I'd never trade away the last complex, intelligent human life to just get one more planet of blissed-out lower life forms.

  • By contrast, I'd have far fewer qualms going the other way, and trading Earth's billions of humans for a utopian super-civilization with mere millions of super-enhanced, godlike transhuman intelligences.

Even with my basket of Valence + Diversity-of-experience + Level-of-consciousness, I still expect that utilitarianism of any kind is more like a helpful guide for doing cost-benefit calculations than a final moral theory where we can expect all its assumptions (for instance, that moral value scales absolutely linearly forever when you add more lives to the pile) to robustly hold in extreme situations. I think this belief is compatible with being very, very utilitarian compared to most ordinary people -- just like how I believe that GDP growth is an imperfect proxy for what we want from our civilization, but am still far more pro economic growth than ordinary people.

Charles_Guthmann @ 2021-12-18T02:23 (+2)

"The latter seems like a static world with no variety and no future, without the slack necessary for individuals to appreciate life or for civilization as a whole to grow, >explore, learn, and change."

If you're a total utilitarian, you don't care about these things other than how they serve as a tool for utility. By the structure of the repugnant conclusion, there is no amount of appreciating life that will make the total utility in the smaller world greater than the total utility in the bigger world.

Jackson Wagner @ 2021-12-18T11:07 (+2)

Certainly. Some of those values I mentioned might be counted as direct forms of utility, and some might be counted as necessary means to the end of greater total utility later. And the repugnant conclusion can always win by turning up the numbers a bit and making Population Z's lives pretty decent compared to the smaller Population A.

Partially I am just trying to describe the imagery that occurs to me when I look at the "population A vs population Z" diagram.

I guess I am also using the repugnant conclusion to point out a complaint I have against varieties of utilitarianism that endorse stuff like "tiling the universe with rats on heroin". To me, once you start talking about very large populations, diversity of experiences is just as crucial as positive valence. That's because without lots of diversity I start doubting that you can add up all the positive valence without double-counting. For example, if you showed me a planet filled with one million supercomputers all running the exact same emulation of a particular human mind thinking a happy thought, I would be inclined to say, "that's more like one happy person than like a million happy people".

Charles_Guthmann @ 2021-12-18T18:30 (+1)

I have the same feeling. I have an aversion to utility tiling as you describe it, but I can't exactly pinpoint why, other than that I guess I am not a utilitarian. As consequentialists, perhaps we should focus more on the ends themselves, i.e. aesthetically how much we like the look of potential future universes, rather than looking at the expected utility of said universes. E.g. Star Wars is prettier to me than an expansive von Neumann probe network, so I should prefer that. Of course this is just rejecting utilitarianism again.

Teo Ajantaival @ 2021-12-18T11:31 (+6)

For me (currently with minimalist intuitions), the repugnance depends on whether the lives in the larger population are assumed to never suffer (cf. this section). Judging from the different answers here, people seem to indeed have wildly different interpretations about what those lives feel like.

At one extreme, they could contain absolutely no craving for change and be simply lacking in additional bliss; at the other, they could be roller coaster lives in which extreme craving is assumed to be slightly positively counterbalanced by some of their other moments.

As a practical example, I deny that factory farms could be net positive (all else being equal) regardless of how much bliss the victims could be induced to experience.

lincolnq @ 2021-12-17T14:48 (+5)

A world which supports the maximum number of people has no slack. I instinctively shy away from wanting to be in a world with resource limits that tight.

antimonyanthony @ 2021-12-17T16:33 (+6)

I think the point of the RC is to assume away these kinds of practical contingencies - suppose you know for certain that the muzak-and-potatoes lives would never drop into the territory of more suffering than happiness.

Eric Chen @ 2021-12-17T15:35 (+4)

Do you also find the Reverse Repugnant Conclusion to be straightforwardly and unobjectionably true? (This would help tailor an intuition pump that gets at the repugnance)

willbradshaw @ 2021-12-17T16:01 (+4)

Yes.

Teo Ajantaival @ 2021-12-17T16:22 (+4)

Ditto for Creating Hell to Please the Blissful?

willbradshaw @ 2021-12-17T16:41 (+8)

I think any scenario that involves hypothetical vast populations in a very simple abstract universe isn't going to change my views here. I can't actually imagine that scenario (a flaw with many thought experiments), so I'm forced to fall back on small-scale intuitions + intellectual beliefs. The latter say such a thing would be the right thing to do, given a sufficiently large blissful population and all the caveats and restrictions that always apply in these thought experiments. 

I think trying to convince the former might be more tractable, but big abstract thought experiments like this don't do that, because they are so unimaginable and unrealistic. That's (one framing of) why I'm looking for something less abstract. This is what I was trying to get at in the OP, though I accept I wasn't super clear about what exactly I was & wasn't looking for.

Pablo @ 2021-12-17T19:09 (+7)

I thought the OP was clear. Sorry that most of the answers, including mine, do not actually answer your question.

Given what you say, maybe the reason you don't find the Repugnant Conclusion counterintuitive is that you have already internalized that you can't adequately represent the thought experiment in imagination, so your brain doesn't generate the relevant intuitions in the first place. Whereas I personally agree, on reflection, that my internal representation of the thought experiment is inadequate, but this doesn't prevent me from feeling the intuitive appeal of the less populous world. This might also explain why you do feel the sting of trolley problems, which generally involve small numbers of people. (However, you also say that you find utility monsters counterintuitive, which would be inconsistent with this explanation. Interestingly, in Reasons and Persons Parfit dismisses the force of Nozick's thought experiment on the grounds that it's impossible to properly imagine a utility monster. But he doesn't take this same approach for dealing with the Repugnant Conclusion.)

willbradshaw @ 2021-12-18T10:24 (+4)

Yeah, I do think that "I can't actually realistically represent this scenario in my imagination, and if I try I'll just deceive myself, so I won't" has become a pretty deep intuition for me over the years.

I think it's more thoroughly internalised for scenarios that are unimaginably large (many people, very long stretches of time) than for scenarios that are small but weird. Possibly because the intuition for size has been trained by a lot of real-world experiences – I don't think a human can really imagine even a million people, so there are many real-world cases where the correct response is to back off from visual imagination and shut up and multiply.

Utility monsters (and the Fat Man trolley problem variant) are small but weird, so it's more difficult for me to accept that my intuitive imagination of the scenario is likely to be misleading. I've seen fictional representations of utility monsters, and in general when I try to imagine a single sentient being it's difficult not to imagine something like a human. So even though I believe that a real utility monster would in fact be a profoundly alien and hard-to-imagine being, when I think about the scenario my brain conjures up a human tyrant and it seems really bad.

Whereas for the RC my brain sees the words "unimaginably vast" and decides not even to try to imagine them.

MichaelStJules @ 2021-12-20T19:41 (+2)

(Another answer...)

In humans, fertility rates have been declining while average quality of life has been increasing. Considering only human life until now, the RC might suggest things would have been better had fertility rates and average quality of life remained constant, since we'd have far more people with lives worth living. It can undermine the story of human progress, and suggest that an alternative past trajectory would have been better.

We could also ask whether lifting people out of poverty is good, if doing so would lead to lower populations. In general, as incomes increase, people have more access to contraceptives and other family planning services, even if we aren't directly funding such things. (Life-saving interventions would likely not lead to lower populations than otherwise, and would likely lead to higher ones at least in some places, according to research by David Roodman for GiveWell (GiveWell blog post).)

[Chart from https://ourworldindata.org/future-population-growth]

https://en.wikipedia.org/wiki/List_of_countries_by_population_growth_rate

https://en.wikipedia.org/wiki/List_of_sovereign_states_and_dependencies_by_total_fertility_rate

Larks @ 2021-12-18T20:43 (+2)

A slightly tongue-in-cheek response: the thought experiment is often introduced by name, and calling it 'repugnant' is priming people to consider it bad, in a way that 'the trolley problem' does not.

Frank_R @ 2021-12-18T09:26 (+2)

I suggest the following thought experiment. Imagine wild animal suffering can be solved. Then it would be possible to populate a square mile with millions of happy insects instead of a few happy human beings. If the repugnant conclusion were true, the best world would be populated with as many insects as possible, plus only a few human beings who take care that there is no wild animal suffering.

Even more radical, the best thing to do would be to fill as much of the future light cone as possible with hedonium. Both scenarios do not match the moral intuitions of most people.

If you believe the opposite, namely that a world with fewer individuals possessing higher cognitive functions is more valuable, you may arrive at the conclusion that a world populated by a few planet-sized AIs is the best.

As other people have said, all kinds of population ethics lead to some counter-intuitive conclusions. The most conservative solution is to aim for outcomes that are not bad according to many ethical theories. 

tessa @ 2021-12-17T15:50 (+2)

In the maximally repugnant world, no one's life is all that good. I feel the sting of that. It's hard for me to get excited about a world in which all of the people I know personally have barely-net-positive lives full of suffering and struggle, even if that world contains more people.

The Wikipedia page you linked gives a pretty not-upsetting version of the paradox: 

[Figure from Wikipedia: the four populations A, A+, B−, and B of the Mere Addition Paradox, illustrated as bars of different widths and heights, with "water" between them in the case of A+ and B−, following Parfit's Reasons and Persons, chapter 19.]

whereas the thing that people find repugnant looks more like:

[Figure from the Stanford Encyclopedia of Philosophy entry on the Repugnant Conclusion.]

I accept the conclusion, but it feels like I am biting a bullet when I say that World Z is worth fighting for.

jackmalde @ 2021-12-18T06:44 (+3)

"It's hard for me to get excited about a world in which all of the people I know personally have barely-net-positive lives full of suffering and struggle, even if that world contains more people."

I'd imagine they must have lots of brilliant and amazing experiences to make up for the suffering, in order to leave them with a net-positive life overall.

tessa @ 2021-12-18T15:32 (+2)

Is this necessary? I feel like many people judge their lives as worth living even though their day-to-day experiences contain mostly pain. I wonder if we're imagining different definitions of "barely-net-positive". Maybe you mean "adding up the magnitude of moment-to-moment negative or positive qualia over someone's entire life" (hedonistic utilitarianism), whereas I am usually imagining something more like "on reflection, the person judges their life as worth living" (kinda preference utilitarian).

Jonathan Mustin @ 2021-12-20T21:18 (+8)

My sense is that people choose to weather currently-net-negative lives for at least two reasons that they might endorse on reflection:

  1. The negative parts of their life may be solvable, such that the EV of their future is plausibly positive
  2. Ending their life has a few terrible externalities, e.g. the impact it would have on their close loved ones

Eliminating those considerations, I would expect the bar for World Z lives to be considerably higher than the worst lives people reflectively consider worth living today.

Florian Habermacher @ 2021-12-18T13:45 (+1)

Might a big portion of status-quo bias and/or omission bias (here both with similar effect) also simply be at play, helping to explain why the conclusion is typically classified as repugnant?

I think this might be the case when I ask myself whether many people who classify the conclusion as repugnant would not also have classified the "opposite" conclusion as just as repugnant, had they instead been offered the same experiment "the other way round":

Start with a world containing huge numbers of lives worth living, even if barely so, and propose to destroy them all for the sake of making a very few really rich and happy! (Obviously with the nuance that the net happiness of the rich few is slightly larger than the sum of everyone else's.) It is just a gut feeling, but I'd guess this would very often evoke similar feelings of repugnance (maybe even more so than the original RC experiment?). A sort of Repugnant Conclusion 2.

MichaelStJules @ 2021-12-19T20:48 (+3)

I think the killing would probably explain the intuitive repugnance of RC2 most of the time, though.

Florian Habermacher @ 2021-12-22T16:31 (+3)

Fair point, though my personal feeling is that it would be the same even without the killing (even if the killing alone would indeed suffice too).

We can amend RC2 to avoid the killing: start with a world containing the seeds of huge numbers of lives worth living, even if barely so, and propose to destroy that world for the sake of creating a world for a very few really rich and happy! (Again with the nuance that the net happiness of the rich few is slightly larger than the sum of everyone else's.)

My gut feeling does not change: this amended RC2 would still feel repugnant to many. Though I admit I'm less sure, and I might also be biased now, as in not wanting to feel different, oops.

willbradshaw @ 2021-12-20T08:27 (+3)

I moderately agree, but I do think there is commonly an ordering effect here, arising both from the phrasing of the RC and the way people often discuss it.

dominicroser @ 2021-12-18T04:40 (+1)

There was a somewhat unusual short philosophical paper this year, signed by many philosophers, which claimed that avoiding the repugnant conclusion should not be seen as a necessary condition for an adequate population ethics. I guess it was driven by a concern similar to the one you have here: the repugnant conclusion is much less obviously repugnant than its name makes it seem.