Utilitarianism and the replaceability of desires and attachments

By MichaelStJules @ 2024-07-27T01:57 (+33)

Summary

  1. Consider a pill that would cause a happy person with a fulfilling life to abandon their most important desires and cherished attachments, including goals, career and loved ones, but increase their lifetime subjective well-being. If what’s best for someone is just higher subjective well-being (including even higher lifetime preference/desire satisfaction), then it would be better for them to take the pill. However, it seems to me that if they prefer not to take such a pill, to honour their current specific desires and attachments, it could be worse for them to take it (more).
  2. I anticipate some responses and reply to them:
    1. People aren’t always right about what’s best for themselves. R: That’s true, but attitude manipulation is quite different from other cases, where individuals neglect, discount or otherwise misweigh attitudes they do or will have (more).
    2. Deontological constraints against involuntary manipulation. R: Deontological constraints could oddly recommend not to do what’s better for someone on their behalf (more).
    3. Indirect reasons count against involuntary attitude manipulation. R: Probably, but I also think it wouldn’t be better for them in many cases where it would increase their well-being (more).
    4. We can’t compare someone’s welfare between such different attitudes. R: Then we would have no reason for or against manipulation, nor any reason to prevent it (more).
    5. The thought experiment is too removed from reality. R: In fact, reprogramming artificial minds seems reasonably likely to be possible in the future, and regardless, if this manipulation would be worse for someone, views consistent with this could have important implications for cause prioritization (more).
  3. This kind of attitude manipulation would be worse for someone on preference-affecting views, which are in favor of making preferences (or attitudes) satisfied, but neutral about making satisfied preferences (for their own sake). Such views are also person-affecting, and so neutral about making happy people or ensuring they come to exist (for their own sake). I expect such views to give relatively less priority to extinction risk reduction within the community (more).

 

Acknowledgements

Thanks to Lukas Gloor and Chi Nguyen for helpful feedback. Thanks to Teo Ajantaival, Magnus Vinding, Anthony DiGiovanni and Eleos Arete Citrini for helpful feedback on earlier related drafts. All errors are my own.

 

Manipulating desires and abandoning attachments

Let’s start with a thought experiment. Arneson (2006, pdf) wrote the following, although I substitute my own text in italics and square brackets to modify it slightly:

Suppose I am married to Sam, committed to particular family and friends, dedicated to philosophy and mountain biking, and I am then offered a pill that will immediately and costlessly change my tastes, so that my former desires disappear, and I desire only [to know more about the world, so I will obsessively and happily consume scientific material, abandoning my spouse, my friends and family, my career as a philosopher and mountain biking, and instead live modestly off of savings or work that allows me to spend most of my time reading]. I am assured that taking the pill will increase my lifetime level of [subjective well-being].

Assume further that Arneson loves Sam, his family and friends, philosophy and mountain biking, and would have continued to do so without the pill. He would have had a very satisfying, subjectively meaningful, personally fulfilling, pleasurable and happy life, with high levels of overall desire/preference satisfaction, even if he doesn’t take the pill. On all of these measures of subjective well-being, and any other general kind,[1] he will be very well off. These measures of subjective well-being are based on his attitudes, like his desires, his preferences, his pleasure and unpleasantness, his dispositions to enjoy things or find them unpleasant or aversive, his attachments, his judgements, his moral intuitions, and so on. By attitudes, I mean ways for things to seem to him to be good or bad, or better or worse, ways he cares about things. But he will be even better off on all general measures of subjective well-being, including overall desire/preference satisfaction, if he takes the pill, by assumption.

And all of this is based on his own accurate beliefs about his effects on and connection to the world; he’s not being fooled into believing he’s doing something he’s not or that any fake relationships he might have are real, like in Nozick’s experience machine (Nozick, 1974, pp. 42–43; Nozick, 1989, pp. 104–120, excerpts).

Arneson will fucking love reading science articles. (Created with RealVisXL 4.0. Not what Arneson actually looks like.)

I’ll focus here on what’s best for Arneson.[2]

If subjective well-being is all that matters and higher lifetime subjective well-being is always better for someone, then for Arneson, it would be best for him to take the pill. Even if the frustration of his original desires, preferences or other attitudes is bad for him, we may just assume in the thought experiment that his gains with his new ones are greater than the losses on his original ones.[3]
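To make the assumed arithmetic concrete, here is a toy comparison. The numbers are mine, not Arneson’s; the thought experiment only stipulates that the pill side comes out higher:

$$
\underbrace{24}_{\text{original attitudes, satisfied without the pill}} \;<\; \underbrace{4}_{\text{originals, frustrated by the pill}} + \underbrace{30}_{\text{new pill-induced attitudes}} \;=\; 34.
$$

On any view that simply maximizes lifetime subjective well-being, 34 beats 24, and the pill wins no matter how much of the 24 comes from his love of Sam, his family and friends, philosophy or mountain biking.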

And there’s nothing special about desiring only “[to know more about the world, so I will obsessively and happily consume scientific material (...)]”, which I’ve substituted. Arneson, in his thought experiment, used “casual sex, listening to sectarian religious sermons, mindless work, and TV watching”. It could be anything else, like becoming an entrepreneur, starting a new family in another country, making art (whether or not shared with others), meditating, counting blades of grass (Rawls, 1971, p. 432), watching paint dry or playing video games alone.

 

Rawls (1982, pdf, p. 181) objected to this kind of verdict (that Arneson should take the pill) and to the kinds of views delivering it, noting generally that they would make us bare persons:

Such persons are ready to consider any new convictions and aims, and even to abandon attachments and loyalties, when doing this promises a life with greater overall satisfaction, or well-being, as specified by a public ranking.

It seems everything becomes replaceable. Can we really cherish our loved ones and our relationships with them if we would be willing to abandon them for whatever would bring us more well-being?[4]

However, Arneson could just grant that he is such a bare person and be willing to abandon his attitudes and attachments to increase his subjective well-being or subjective well-being in general. Arneson wouldn’t be honouring his specific attachments to his spouse, family and friends, philosophy or mountain biking, and perhaps he owes it to himself or to those attachments to honour them (Dorsey, 2019, full). But it’s also his life and these are his attitudes. Who am I to judge? Perhaps he has stronger attitudes towards his subjective well-being per se than towards his other specific attachments together.

On the other hand, if he does prefer not to take the pill and takes his particular attachments as important and worthy of honouring, more so than his potential gain in well-being per se, it seems to me that it wouldn’t be better for him and he shouldn’t take the pill. Again, it’s his life and these are his attitudes.

I expect most consequentialist welfarists, including most utilitarians, to go further in the opposite direction. On their views, not only would it be better for Arneson's welfare for him to take the pill, we should in principle force him to take the pill even if he prefers not to, assuming this is the only other option, and ignoring effects on others and indirect reasons (which actually seem important to me in practice). Or, we should also give everyone else similar pills. Or, we could just kill everyone and replace them with things that generate more well-being.[5]

It doesn’t matter how much Arneson loves his spouse, his friends and family, his career or mountain biking, or how much he hates the prospect of giving them up or having his consent violated, as long as his gain in future well-being is large enough to outweigh the losses.

 

To me, changing someone’s attitudes this way against their wishes seems worse for them, if and because it’s worse according to the attitudes they already have or would otherwise have had.[6] These new attitudes should not be able to outweigh the losses in their current attitudes, just by ensuring the gain in subjective well-being is high enough. Such outweighing seems to violate the Platinum Rule, which would require us to treat others as they would have us treat them.[7] It may be that, after we force Arneson to take the pill, we will have treated him as he (or his attitudes) would have had us treat him, according to his new pill attitudes. However, while we are making him take the pill, we are definitely not treating him as he would have had us treat him at the time. And if we don’t force him to take the pill, we treat him as he would have us treat him all throughout; his pill attitudes never become real.

There is a substantial and artificial change in Arneson’s desires, preferences, perspective and attitudes towards things. His “utility function” changed.

 

Even if we had constraints against this sort of involuntary attitude manipulation — whether principled or for indirect or practical reasons — as long as we hold that, in the ideal, the involuntary attitude manipulation would be better for someone or better for the world, this betterness wouldn’t match their attitudes, the way they care about things, their subjective perspective. It only matches a new perspective, a new set of attitudes that’s been created.

In my view, this — weighing things to match and respect the attitudes of moral patients — is one of the most important things for moral views to get right, perhaps the most important. Otherwise, we’re projecting or imposing our own attitudes onto others. Instead, we should just listen to theirs.

And if Arneson doesn’t take the pill, he won’t have the pill-induced attitudes to which to listen.

 

Parfit (1984),[8] Yudkowsky (2007) and Dorsey (2019, full)[9] also considered similar (voluntary) scenarios and thought experiments, and there are multiple related discussions of wireheading on LessWrong, especially Sotala (2010).

 

Responses and replies

People aren’t always right about what’s best for themselves

People aren’t always right in their judgements or beliefs about what’s best for themselves for various reasons:

  1. They may be ignoring or misinformed about consequences or their likelihood, including even how they will or would judge or feel about things. This is often the case in children.
  2. They may discount their future attitudes in a way that’s unfair to their future selves, which they may later regret, or have reason to regret.
  3. They may neglect or discount some of the ways they care about things. For example, they may neglect their feelings, or feelings about specific things. Humans are not only reasoning beings, but also animals who care about things emotionally. Neglecting or discounting some of your emotions or desires can be unfair to those parts of you.

Common to all three cases is that the individual neglects, discounts or otherwise misweighs attitudes they do, will or would have. So, it seems it can be right to disagree with them about what’s best for them, by better anticipating and weighing these attitudes they already have or will have.

Whether or not we should actually do things against their wishes for their benefit is a separate question I’ll discuss below. Still, while it might seem to violate the Platinum Rule to intervene on someone’s behalf against their judgement (or to allow such an intervention), in cases like these three it could also violate the rule not to intervene, because they would later have had us act differently, or because a discounted part of them, like any neglected feelings or desires, would have had us act differently.

We might think Arneson’s thought experiment is similar to, or even a special case of, 2, and that we are similarly justified in believing he’s wrong about what’s best for himself. If he takes the pill, this will be how he would have had us treat him, on all his later attitudes. He wouldn’t want to reverse the effects, by assumption, either.[10]

However, if Arneson doesn’t take the pill, he will not have such attitudes in favour of taking (or having taken) the pill, so it’s not a special case of 2. What he would neglect or discount — his attitudes in the counterfactual where he takes the pill — are things he does not and will not favour overall. Why should he care about these? Or why should we care about these on his behalf?[11]

Alternatively, there are no attitudes that will ever exist independently of the choice whether to take the pill that Arneson would neglect or discount, or that we would neglect or discount on his behalf.[12]

 

Deontological constraints

One response would be deontological constraints or principled act-omission distinctions that prohibit you from manipulating the attitudes of others (or killing them) against their wishes and without their consent, even if it would increase their well-being or overall well-being. However, such merely personal constraints don’t give you reason to stop someone else from manipulating or killing others to increase well-being, which the Platinum Rule could require of you.[13] But we could just have similar reasons to stop others, too. Then we’re good, right?

Either way, if manipulating someone’s attitudes against their wishes or consent would really be better for them if and because it increases their lifetime well-being, then in stopping yourself, you wouldn’t be stopping yourself for their sake. So who, or what, are these deontological constraints for?

I imagine some of those sympathetic to such deontological constraints have their own answers to this in turn, some perhaps grounded in indirect reasons related to welfare or individuals’ attitudes, like those I discuss next. Some may not be grounded in such indirect reasons, and I’d object that this is projecting or imposing too much onto others, rather than listening to their attitudes.

 

Indirect reasons

Those sympathetic to welfarism of some kind can accept that it would be better for Arneson, given the assumptions, but deny that this thought experiment is informative of what we should ever really do. It might ignore more important indirect reasons for why it would tend to be bad for welfare:

  1. In practice, we shouldn’t be confident that any such intervention would actually be better for someone without their consent. Consent matters because its violation tends to lead to worse outcomes for those whose consent is violated. Even if not in this case, where we assume by stipulation that it would be better for Arneson (and no worse for anyone else), we should avoid undermining our deference to consent in other cases.
  2. Violating consent in general undermines trust and cooperation, which is important for achieving our goals. Furthermore, preventing the violation of consent can also promote alliances and cooperation with those whose consent we’ve protected and others.

I suspect these are indeed good reasons in practice.

Still, I also just deny that it really would be better for Arneson to take the pill, and specifically claim that it would be worse for him, or his attitudes. This requires a different conception of welfare. I will discuss this briefly in the last section, and more in another piece.

 

Incomparability

We could deny that Arneson’s original attitudes and attitudes after manipulation are comparable, because they are just too different. Or, generally, we could deny that different attitudes are comparable at all.

This would probably substantially undermine interpersonal comparisons, too, and make ethics much harder, although I think there is some good reason to believe in fairly widespread interpersonal incomparability anyway (St. Jules, 2024b; St. Jules, 2024c). Arrow and Rawls rejected interpersonal comparisons (Arrow, 1977, p. 225; Rawls, 1982, pdf, p. 181).

However, this move doesn’t seem to imply that attitude manipulation is bad; it would just be incomparable to acts not manipulating attitudes. Arneson (2006, pdf) wrote:

If we discovered that a friend accidentally ingested such a pill and suffered involuntarily transformed desires, we should on balance be neither glad nor sad, for the friend’s sake, that this occurred.

Similarly, it seems we wouldn’t have reason to stop such attitude manipulation from happening — by accident or intentionally by someone else — in the first place.

 

Why should I care?

The thought experiment may seem too removed from anything practical, now or ever: brains are hard to manipulate reliably like this. So why should you care about it, even if you agree that taking the pill wouldn’t be better for Arneson?

Something similar seems reasonably likely to be possible in the future with more advanced technology. At least, it seems quite likely if and when we can reprogram artificially conscious beings — including digital people and mind uploads, as well as conscious minds quite unlike humans’ — to change their attitudes.

Perhaps more importantly, whether or not we can ever carry out something like the thought experiment, if we admit that such attitude manipulation can be wrong to or worse for the affected individual even if it increases their subjective well-being, then this should be reflected in our moral views. In particular, I believe it supports certain kinds of person-affecting views, as I will argue briefly in the next section and further in another piece. Such moral views could have important implications for cause prioritization, especially reducing the priority given to extinction risk reduction.

 

Avoiding manipulation with preference-affecting views

What could work, then? A principled response according to which (especially involuntary) attitude manipulation is directly bad for you would need to

  1. count some of your original attitudes as less satisfied or further frustrated, and count this as bad/worse, and
  2. not allow the worseness of 1 to be entirely offset or outweighed by your new attitudes (at least in situations relevantly similar to the thought experiment).

This sounds like what a preference-affecting view can accomplish. Preference-affecting views are in favor of making preferences (or attitudes) satisfied, but neutral about making satisfied preferences (Barry, 1989, Bykvist, 1996 and Dietz, 2023).
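As a minimal sketch of how conditions 1 and 2 could work together, here is a toy formalization in code. It is my own illustration (the numbers match the earlier toy comparison and appear in none of the cited papers): a necessitarian preference-affecting evaluation counts only the attitudes Arneson would have whichever way he chooses, so the pill’s newly created attitudes can’t offset the frustration of his originals.

```python
# Toy sketch (my own formalization, with made-up numbers) contrasting a
# summative/totalist evaluation with a necessitarian preference-affecting one.

# Arneson's original attitudes exist whether or not he takes the pill,
# but the pill frustrates them. The pill-induced attitude exists only
# if he takes the pill.
ORIGINAL_ATTITUDES = [
    {"no_pill": 10, "pill": 2},  # love of Sam
    {"no_pill": 8, "pill": 1},   # philosophy career
    {"no_pill": 6, "pill": 1},   # mountain biking
]
PILL_ATTITUDES = [
    {"no_pill": 0, "pill": 30},  # obsessive enjoyment of science reading
]


def totalist_value(take_pill: bool) -> int:
    """Sum satisfaction across whichever attitudes would actually exist."""
    key = "pill" if take_pill else "no_pill"
    attitudes = ORIGINAL_ATTITUDES + (PILL_ATTITUDES if take_pill else [])
    return sum(a[key] for a in attitudes)


def necessitarian_value(take_pill: bool) -> int:
    """Condition 1: count the (dis)satisfaction of the necessary (original)
    attitudes. Condition 2: exclude newly created attitudes, so they can't
    offset or outweigh that frustration."""
    key = "pill" if take_pill else "no_pill"
    return sum(a[key] for a in ORIGINAL_ATTITUDES)


print(totalist_value(False), totalist_value(True))            # 24 34: take the pill
print(necessitarian_value(False), necessitarian_value(True))  # 24 4: don't take it
```

A presentist variant would instead count only the attitudes Arneson has at the time of choice; here it gives the same verdict, since all of his original attitudes already exist.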

Compare to person-affecting views. Person-affecting views are “in favor of making people happy, but neutral about making happy people” (Narveson, 1973, p.80). The neutrality towards “making happy people” means denying that we have any reason to create people for their own sake, although there may be indirect reasons. There are different ways of understanding “making people happy” that correspond to different person-affecting views. Should we only care about people who already exist (presentism)?[14] Should we only care about people who would exist regardless of our choice (necessitarianism)?[15] Should we care about how the lives of future people go in other ways besides ensuring more come to exist at all (various other views)?[16]

On a person-affecting view,[17] if killing someone is bad for them, say because it deprives them of potential future goods or frustrates their preferences, creating other happy people wouldn’t make up for it, all else equal. So, person-affecting views can take more principled stances against killing and replacing, although not all person-affecting views actually achieve this (St. Jules, 2024a). Mind uploading and cryonics would also make more sense as impartial pursuits on person-affecting (and preference-affecting) views.

Preference-affecting views are basically person-affecting views, but applied to individual preferences (or attitudes) instead of just to whole individuals or whole individual welfare. We can generate a preference-affecting view from a specific person-affecting view by applying the rules of the person-affecting view to attitudes instead of to whole individuals or their welfare. So, we could care only for attitudes that currently exist (presentism), only for those that would exist regardless of our choice (necessitarianism), or also for contingent attitudes in other ways besides ensuring more come to exist at all.

Preference-affecting views are also, in fact, person-affecting views. As such, they are likely to prioritize causes much as analogous person-affecting views do. Perhaps most notably, I expect extinction risk reduction to receive less priority on these views. If (human or total animal) extinction is bad at all, it would not be because of the positive value in the foregone future contingent lives, but, depending on the particular view, if and because it’s bad for current individuals,[18] leads to more overall bad lives (on asymmetric views), is bad for other future individuals (e.g. other animals, aliens or artificial conscious beings) or replaces better off contingent individuals with worse off ones. But extinction could be good for the opposite reasons, too.

Another possibility would be to recognize both preference-affecting reasons and totalist reasons together. This would raise the bar for attitude manipulation and replacement, but still allow them. So, for example, it could still sometimes be better for you to abandon your loved ones even if you prefer not to, but the net gain in subjective well-being would have to pass some threshold; not just any positive net gain is enough. Similarly, it could still be better if everyone was killed and replaced with better off beings, but the gain in well-being would have to be large enough. And similarly, to prioritize ensuring future moral patients exist for their own sake or the sake of their well-being would require the gain in well-being to be large enough, and not just larger than the opportunity costs for existing individuals.
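One way to formalize such a hybrid (again my own sketch, not a proposal drawn from the literature) is to keep the preference-affecting term but let newly created, contingent attitudes count at a discount $0 < k < 1$:

$$
V(\text{option}) \;=\; \sum_{a \,\in\, \text{necessary attitudes}} s_a(\text{option}) \;+\; k \sum_{a \,\in\, \text{contingent attitudes}} s_a(\text{option}).
$$

With the toy numbers from the earlier sketch, $V(\text{no pill}) = 24$ and $V(\text{pill}) = 4 + 30k$, so the pill wins only when $k > 2/3$: $k = 0$ recovers the necessitarian view, $k = 1$ the totalist one, and intermediate values raise the bar as described.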

My own view is that such a hybrid approach still does too much projecting or imposing onto others or the world. We should just listen.

In my next piece, I will further motivate, sketch and defend fully preference-affecting views.

  1. ^

     As opposed to narrower measures that specify the types of objects of his attitudes, e.g. satisfaction with his relationships with others.

  2. ^

     If Arneson taking the pill would be bad for Arneson’s spouse, family, friends or others in the world, we could:

    1. consider those impacts separately,

    2. assume his gains in well-being are greater than the losses to others, or

    3. give the others similar pills, giving up any attachments to Arneson and increasing their subjective well-being, too.

  3. ^

     This rules out solutions based just on global preferences, i.e. an individual’s preferences about their own life as a whole, or about whole outcomes, the way things go (Parfit, 1984), and solutions based on hidden desires or implicit preferences, e.g. to not have one’s preferences changed in certain ways (Yu, 2022).

  4. ^

     Again, assuming they aren’t made worse off, or they take similar pills, or that you only care about them through your own subjective well-being, if you’re willing to manipulate it.

  5. ^

     The apparent betterness of attitude manipulation follows from letting newly created attitudes offset or outweigh the frustration of already existing or otherwise necessary attitudes. The original attitudes become replaceable. This has the same structure as the problem of replacement or replaceability in population ethics (Singer, 1979a, Singer, 1979b, Jamieson, 1984, Knutsson, 2021, St. Jules, 2024a), in which (potentially very well off) individuals are killed and replaced with (on aggregate) better off individuals. In our case here, replacement is applied between attitudes within an individual instead of between whole individuals. Furthermore, killing and replacing just seems like a more extreme form of attitude manipulation, in which the individual’s personal identity — the characteristics that make someone the same person over time and across possible worlds (Olsen, 2023, Shoemaker, 2019 and/or Mackie & Jago, 2022) — is not at all preserved, even partially.

  6. ^

     If on the attitudes they will have without taking the pill, taking the pill would have been better, say because without the pill, they will suffer a lot more or wish they had never been born, or wish they had taken the pill, then it seems plausible to me that it would be better to take the pill. I will expand on such a view in my next piece.

  7. ^

     The Platinum Rule can be seen as a variant of the Golden Rule, which would require us to treat others as we would have them treat us. Or, it can be seen as an application of the Golden Rule, because we would have others treat us in accordance with our attitudes, so we should treat others in accordance with their attitudes.

  8. ^

     Parfit (1984) wrote:

    Consider this example. Knowing that you accept a Summative theory, I tell you that I am about to make your life go better. I shall inject you with an addictive drug. From now on, you will wake each morning with an extremely strong desire to have another injection of this drug. Having this desire will be in itself neither pleasant nor painful, but if the desire is not fulfilled within an hour it will then become very painful. This is no cause for concern, since I shall give you ample supplies of this drug. Every morning, you will be able at once to fulfil this desire. The injection, and its aftereffects, would also be neither pleasant nor painful. You will spend the rest of your days as you do now.

    What would the Summative Theories imply about this case? We can plausibly suppose that you would not welcome my proposal. You would prefer not to become addicted to this drug, even though I assure you that you will never lack supplies. We can also plausibly suppose that, if I go ahead, you will always regret that you became addicted to this drug. But it is likely that your initial desire not to become addicted, and your later regrets that you did, would not be as strong as the desires you have each morning for another injection. Given the facts as I described them, your reason to prefer not to become addicted would not be very strong. You might dislike the thought of being addicted to anything; and you would regret the minor inconvenience that would be involved in remembering always to carry with you sufficient supplies. But these desires might be far weaker than the desires you would have each morning for a fresh injection.

    On the Summative Theories, if I make you an addict, I will be increasing the sum-total of your desire-fulfilment. It is true that I will be causing one of your desires not to be fulfilled: your desire not to become an addict, which, after my act, becomes a desire to be cured. But I will also be giving you an indefinite series of extremely strong desires, one each morning, all of which you can fulfil. The fulfilment of all these desires would outweigh the non-fulfilment of your desires not to become an addict, and to be cured. On the Summative Theories, by making you an addict, I will be benefiting you—making your life go better.

  9. ^

     Dorsey (2019, full) wrote:

    Faith: Faith is a highly regarded Air Force pilot who has long desired to become an astronaut. She has the physical skill, the appropriate training, and has been looked on as a potential candidate. At time t, she has the choice to undergo the last remaining set of tests to become an astronaut or take a very powerful psychotropic pill that would have the result of radically, and permanently, changing her desires. Instead of preferring to be an astronaut, she could instead prefer to be a highly regarded, but Earth-bound, Air Force pilot.

    Here Faith could become an astronaut. She wants to, and were she to take the final test, she would succeed and become an astronaut. This would be a prudential benefit to her, as she currently prefers to be an astronaut and her preference to be an astronaut is psychologically stable. However, it is also the case that she could rid herself of this preference. She could instead simply take the pill, and prefer to remain Earth-bound. But the latter course, plausibly, is comparatively imprudent. Surely as a matter of what she ought to do as concerns her own self-interest – what Faith, if I may say so, owes to herself – she ought to become an astronaut.

  10. ^

     This is an example of the reversal test (Bostrom & Ord, 2006).

    However, we could also imagine another pill to reverse the effects of the first, and him reuniting with his spouse, friends and family and taking up philosophy and mountain biking again. If he does take the second pill after the first, he could judge, back with his original attitudes and life and knowledge of what happened, that it was a mistake to take the first pill.

  11. ^

     This motivates an actualist preference-affecting (or attitude-affecting) view. I will flesh out such a view and its implications in my next piece.

  12. ^

     In line with a necessitarian preference-affecting view.

  13. ^

     Others wouldn’t necessarily particularly care whether it’s you or someone else manipulating or killing them, or whether you stop someone else or stop yourself, so it isn’t clear why you wouldn’t have similarly strong reasons to stop someone else from manipulating or killing others to increase well-being, especially if those reasons are really just about others. A personal deontological constraint, if not extended to stopping others from violating it, seems partly self-regarding.

  14. ^

     No, in my view.

  15. ^

     No, in my view.

  16. ^

     Yes, in my view.

  17. ^

     Assuming we can say that someone is still (roughly) the same person in the future and across possible worlds, as opposed to there being no fact of the matter or that we are constantly changing identities. For standard references on personal identity, see Olsen, 2023, Shoemaker, 2019 and/or Mackie & Jago, 2022.

  18. ^

     E.g. Finneron-Burns, 2017. Depending on advances in technology like life extension and mind uploading, and how we count the future welfare of currently existing people, currently existing people could live extremely long lives and so be made far better (or worse) off (Gustafsson & Kosonen, 2019, Shulman, 2019).


Richard Y Chappell🔸 @ 2024-07-27T03:22 (+7)

I like the hybrid approach, and discuss its implications for replaceability a bit here. (Shifting to the intrapersonal case: those of us who reject preference theories of well-being may still recognize reasons not to manipulate preferences, for example based on personal identity: the more you manipulate my values, the less the future person is me. To be a prudential benefit, then, the welfare gain has to outweigh the degree of identity loss. Moreover, it's plausible that extrinsic manipulations are typically more disruptive to one's degree of psychological continuity than voluntary or otherwise "natural" character development.)

It seems worth flagging that some instances of replacement seem clearly good! Possible examples include:

  • generational turnover,
  • not blindly marrying the first person you fall in love with, and
  • helping children to develop new interests.

I guess even preference-affecting views will support instrumental replacement, i.e. where the new desire results in one's other desires being sufficiently better satisfied (even before counting any non-instrumental value from the new desire itself) to outweigh whatever was lost.

MichaelStJules @ 2024-07-27T04:51 (+3)

I agree that some instances of replacement seem good, but I suspect the ones I'd agree with are only good in (asymmetric) preference-affecting ways. On the specific cases you mention:

  • Generational turnover
    • I'd be inclined against it unless
      • it's actually on the whole preferred (e.g. aggregating attitudes) by the people being replaced, or
      • the future generations would have lesser regrets or negative attitudes towards aspects of their own lives, or suffer less (per year, say). Pummer (2024) resolves some non-identity cases this way, while avoiding antinatalism (although I am fairly sympathetic to antinatalism).
  • not blindly marrying the first person you fall in love with
    • people typically (almost always?) care or will care about their own well-being per se in some way, and blindly marrying the first person you fall in love with is risky for that
    • more generally, a bad marriage can be counterproductive for most of what you care or will care to achieve
    • future negative attitudes (e.g. suffering) from the marriage or for things to be different can count against it
  • helping children to develop new interests:
    • they do or will care about their well-being per se, and developing interests benefits that
    • developing interests can have instrumental value for other attitudes they hold or are likely to eventually hold either way, e.g. having common interests with others, making friends, not being bored
    • developing new interests is often (usually? almost always?) a case of discovering dispositional attitudes they already have or would have had anyway. For example, there's already a fact of the matter, based in a child's brain as it already is or will be either way, whether they would enjoy certain aspects of some activity.[1] So, we can just count unknown dispositional attitudes on preference-affecting views. I'm sympathetic to counting dispositional attitudes anyway for various reasons, and whether or not they're known doesn't seem very morally significant in itself.
  1. ^

    Plus, the things that get reinforced, and so may shift some of their attitudes, typically get reinforced because of such dispositional attitudes: we come to desire the things we're already disposed to enjoy, with the experienced pleasure reinforcing our desires.

MichaelStJules @ 2024-07-27T04:25 (+3)

Good point about the degree of identity loss.

I think the hybrid view you discuss is in fact compatible with some versions of actualism (e.g. weak actualism), as entirely preference-affecting views (although maybe not exactly preference-affecting in the informal way I describe them in this post), so not necessarily hybrid in the way I meant it here.

Take the two outcomes of your example, assuming everyone would be well-off as long as they live, and Bob would rather continue to live than be replaced:

  1. Bob continues to live.
  2. Bob dies and Sally is born.

From the aggregated preferences or attitudes of the people in 1, 1 is best. From the aggregated preferences or attitudes of the people in 2, 2 is best, if Sally gains more than Bob loses between 1 and 2. So each outcome is best for the (would-be) actual people in it. So, we can go for either.
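
To illustrate with made-up numbers (mine, purely for concreteness): suppose Bob's lifetime attitude satisfaction is 10 in outcome 1 and 4 in outcome 2 (his preference to continue living is frustrated), while Sally's is 8 in outcome 2 and 0 in outcome 1, where she never exists. Then:

  • judged by the people in 1 (just Bob): outcome 1 scores 10 > 4, so 1 is best;
  • judged by the people in 2 (Bob and Sally): outcome 2 scores 4 + 8 = 12 > 10 + 0, so 2 is best, since Sally's gain of 8 exceeds Bob's loss of 6.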

So, not all preference-affecting views even count against this kind of replaceability.

My next two pieces will mostly deal with actualist(-ish) views, because I think they're best at taking on the attitudes that matter and treating them the right way, or being radically empathetic.

huw @ 2024-07-27T04:42 (+3)

I am a bit unenlightened when it comes to moral philosophy, so I would appreciate it if you could help me understand this viewpoint better. Does it change if you replace 'subjective well-being' with 'life satisfaction' (in the sense of SWB being experiential and satisfaction being reflective/prospective)? i.e. are there conceptions of 'life satisfaction' that sort of take into account what this person wants for themselves?

For example, I wonder if people who have preferences that are hard to satisfy might actually want to take such a life-satisfaction pill, if it meant their new preferences were easier to satisfy. (Is this, in some sense, what a lot of Buddhist reframing around desire is doing?)

MichaelStJules @ 2024-07-27T05:00 (+3)

Life satisfaction is typically considered to be a kind of (or measure of) subjective well-being, and the argument would be the same for that as a special case. Just make the number go up enough after taking the pill, while replacing what they care about. (And I'm using subjective well-being even more broadly than I think it's normally used.)

For example, I wonder if people who have preferences that are hard to satisfy might actually want to take such a life-satisfaction pill, if it meant their new preferences were easier to satisfy.

In my view, it only makes sense to do if they already have or were going to otherwise have preferences/attitudes that would be more satisfied by taking the pill. If they would suffer less by taking the pill, then it could make sense. If they prefer to have greater life satisfaction per se, then it can make sense to take the pill.

huw @ 2024-07-27T05:31 (+3)

Hmm. I'm imagining a monogamous bisexual person who prefers het relationships, but settles for a gay one because they really love their partner and reasonably believe they wouldn't be able to find a better het relationship if they were back on the market (such that they are not avoiding suffering and also maximising utility by being in this relationship). This person would opt to take the pill that makes them exclusively gay in order to feel more life satisfaction (or even SWB), even though it destroys their preferences.

I assume this person is in your latter bucket of preferring greater life satisfaction per se? If so, I don't think this situation is as uncommon as you imply—lots of people have hard to satisfy or unsatisfiable preferences that they would rather be rid of in favour of greater life satisfaction; in some sense, this is what it means to be human (Buddhism again).

MichaelStJules @ 2024-07-27T05:41 (+2)

Ya, I agree that many or even most people would get rid of some of their preferences if they could to be happier or more satisfied or whatever. Many people also have fears, anxieties or insecurities they'd rather not have, and those are kinds of "preferences" or "attitudes", the way I'm using those terms.