MichaelStJules's Quick takes
By MichaelStJules @ 2019-10-24T06:08 (+7)
MichaelStJules @ 2024-10-11T05:58 (+44)
Future debate week topics?
- Global health & wellbeing (including animal welfare) vs global catastrophic risks, based on Open Phil's classifications.
- Neartermism vs longtermism.
- Extinction risks vs risks of astronomical suffering (s-risks).
- Saving 1 horse-sized duck vs saving 100 duck-sized horses.
I like the idea of going through cause prioritization together on the EA Forum.
Toby Tremlett @ 2024-10-11T10:15 (+7)
Me too! The two broad categories of ideas I've had are basically
1. Key cause-prio debates - especially ones which have been had over many years and many posts, but haven't really been summarised/focused into one place (like those you list)
2. Debates about tactics/methodology. For example: "We should invest more heavily in animal sentience research than corporate campaigns". That's a rough example, but the idea would be to do a debate where people would have to get fairly fine-grained in cost-effectiveness thinking before they could vote. I doubt we would get as much engagement, but engagement may be particularly valuable if the question is well posed.
quila @ 2024-10-12T06:16 (+4)
5. the value of something like, how EA looks to outsiders? that seems to be the thing behind multiple points (2, 4, 7, and 8) in this which was upvoted, and i saw it other times this debate week (for example here) as a reason against the animal welfare option.
(i personally think that compromising epistemics for optics is one way movements ... if not die, at least become incrementally more of a simulacrum, no longer the thing they were meant to be. and i'm not sure if such claims are always honest, or if they can secretly function to enforce the relevance of public attitudes one shares without needing to argue for them.)
MichaelStJules @ 2020-11-16T02:25 (+22)
I feel increasingly unsympathetic to hedonism (and maybe experientialism generally?). Yes, emotions matter, and the strength of emotions could be taken to mean how much something matters, but if you separate a cow and her calf and they're distressed by this, the appropriate response for their sake is not to drug or fool them until they feel better, it's to reunite them. What they want is each other, not to feel better. Sometimes I think about something bad in the world that makes me sad; I don't think you do me any favour by just taking away my sadness; I don't want to stop feeling sad, what I want is for the bad in the world to be addressed.
Rather than affect being what matters in itself, maybe affect is a signal for what matters and its intensity tells us how much it matters. Hedonism as normally understood would therefore be like Goodhart's law: it ignores the objects of our emotions. This distinction can also be made between different versions of preference utilitarianism/consequentialism, as "satisfaction versions" and "object versions". See Krister Bykvist's PhD thesis and Wlodek Rabinowicz and Jan Österberg, "Value Based on Preferences: On Two Interpretations of Preference Utilitarianism" (unfortunately both unavailable online to me, at least).
Of course, often we do just want to feel better, and that matters, too. If someone wants to not suffer, then of course they should not suffer.
Related: wireheading, the experience machine, complexity of value.
MichaelStJules @ 2020-01-10T19:35 (+11)
The procreation asymmetry can be formulated this way (due to Jeff McMahan):
while the fact that a person's life would be worse than no life at all ... constitutes a strong moral reason for not bringing him into existence, the fact that a person's life would be worth living provides no (or only a relatively weak) moral reason for bringing him into existence.
I think it's a consequence of a specific way of interpreting the claim "an outcome can only be worse than another if it's worse for someone", where the work goes into defining "worse for A". Using "better" instead of "worse" would give you a different asymmetry.
This is a summary of the argument for the procreation asymmetry here and in the comments, especially this comment, which also looks further at the case of bringing someone into existence with a good life. I think this is an actualist argument, similar to Krister Bykvist's argument in 2.1 (which cites Dan Brock from this book) and Derek Parfit's argument on p.150 of Reasons and Persons, and Johann Frick's argument (although his is not actualist, and he explicitly rejects actualism). The starting claim is that your ethical reasons are in some sense conditional on the existence of individuals, and the asymmetry between existence and nonexistence can lead to the procreation asymmetry.
1. From an outcome in which an individual doesn't/won't exist, they don't have any interests that would give you a reason to believe that another outcome is better on their account (they have no account!). So, ignoring other reasons, this outcome is not dominated by any other, and the welfare of an individual we could bring into existence is not in itself a reason to bring them into existence. This is reflected by the absence of arrows starting from the Nonexistence block in the image above.
2. An existing individual (or an individual who will exist) has interests. In an outcome in which they have a bad life, an outcome in which they didn't exist would have been better for them from the point of view of the outcome in which they do exist with a bad life, so an outcome with a bad life is dominated by one in which they don't exist, ignoring other reasons. Choosing an outcome which is dominated this way is worse than choosing an outcome that dominates it. So, that an individual would have negative welfare is a reason to prevent them from coming into existence. This is reflected by the arrow from Negative existence to Nonexistence in the image above.
3. If the individual would have had a good life, we could say that this would be better than their nonexistence and dominates it (ignoring other reasons), but this only applies from outcomes in which they exist and have a good life. If they never existed, because of 1, it would not be dominated from that outcome (ignoring other reasons).
Together, 1 and 2 are the procreation asymmetry (reversing the order of the two claims from McMahan's formulation).
MichaelStJules @ 2022-08-15T06:10 (+2)
Considering formalizations of actualism, Jack Spencer, 2021, The procreative asymmetry and the impossibility of elusive permission (pdf here) discusses problems with actualism (especially "strong actualism" but also "weak actualism") and proposes "stable actualism" as a solution.
MichaelStJules @ 2020-10-22T05:03 (+2)
I think my argument builds off the following from "The value of existence" by Gustaf Arrhenius and Wlodek Rabinowicz (2016):
Consequently, even if it is better for p to exist than not to exist, assuming she has a life worth living, it doesn't follow that it would have been worse for p if she did not exist, since one of the relata, p, would then have been absent. What does follow is only that non-existence is worse for her than existence (since "worse" is just the converse of "better"), but not that it would have been worse if she didn't exist.
The footnote that expands on this:
Rabinowicz suggested this argument already back in 2000 in personal conversation with Arrhenius, Broome, Bykvist, and Erik Carlson at a workshop in Leipzig; and he has briefly presented it in Rabinowicz (2003), fn. 29, and in more detail in Rabinowicz (2009a), fn. 2. For a similar argument, see Arrhenius (1999), p. 158, who suggests that an affirmative answer to the existential question "only involves a claim that if a person exists, then she can compare the value of her life to her non-existence. A person that will never exist cannot, of course, compare 'her' non-existence with her existence. Consequently, one can claim that it is better … for a person to exist … than … not to exist without implying any absurdities." Cf. also Holtug (2001), p. 374f. In fact, even though he accepted the negative answer to the existential question (and instead went for the view that it can be good but not better for a person to exist than not to exist), Parfit (1984) came very close to making the same point as we are making when he observed that there is nothing problematic in the claim that one can benefit a person by causing her to exist: "In judging that some person's life is worth living, or better than nothing, we need not be implying that it would have been worse for this person if he had never existed. --- Since this person does exist, we can refer to this person when describing the alternative [i.e. the world in which she wouldn't have existed]. We know who it is who, in this possible alternative, would never have existed" (pp. 487-8, emphasis in original; cf. fn. 9 above). See also Holtug (2001), Bykvist (2007) and Johansson (2010).
MichaelStJules @ 2020-09-24T16:27 (+2)
You could equally apply this argument to individual experiences, for an asymmetry between suffering and pleasure, as long as whenever an individual suffers, they have an interest in not suffering, and it's not the case that each individual, at every moment, has an interest in more pleasure, even if they don't know it or want it.
Something only matters if it matters (or will matter) to someone, and an absence of pleasure doesn't necessarily matter to someone who isn't experiencing pleasure* and certainly doesn't matter to someone who does not and will not exist, and so we have no inherent reason to promote pleasure. On the other hand, there's no suffering unless someone is experiencing it, and according to some definitions of suffering, it necessarily matters to the sufferer.
* for example, when concentrating in a flow state, while asleep, when content.
See also tranquilism and this post I wrote.
MichaelStJules @ 2020-09-07T17:16 (+2)
And we can turn this into a wide person-affecting view to solve the Nonidentity problem by claiming that identity doesn't matter. To make the above argument fit better with this, we can rephrase it slightly to refer to "extra individuals" or "no extra individuals" rather than any specific individuals who will or won't exist. Frick makes a separate general claim that if exactly one of two normative standards (e.g. people, with interests) will exist, and they are standards of the same kind (e.g. the extent to which people's interests are satisfied can be compared), then it's better for the one which will be better satisfied to apply (e.g. the better off person should come to exist).
On the other hand, a narrow view might still allow us to say that it's worse to bring a worse off individual into existence with a bad life than a better off one, if our reasons against bringing an individual into existence with a bad life are stronger the worse off they would be, a claim I'd expect to be widely accepted. If we apply the view to individual experiences or person-moments, the result seems to be a negative axiology, in which only the negative matters, and with hedonism, only suffering would matter. Whether or not this follows can depend on how the procreation asymmetry is captured, and there are systems in which it would not follow, e.g. the narrow asymmetric views here, although these reject the independence of irrelevant alternatives.
Under standard order assumptions which include the independence of irrelevant alternatives and completeness, the procreation asymmetry does imply a negative axiology.
MichaelStJules @ 2020-02-25T18:38 (+7)
Utility functions (preferential or ethical, e.g. social welfare functions) can have lexicality, so that a difference in category $B$ can be larger than the maximum difference in category $A$, but we can still make probabilistic tradeoffs between them. This can be done, for example, by having separate utility functions, $u_A$ and $u_B$ for $A$ and $B$, respectively, such that
- $u_B(x) \geq u_B(y) + 1$ for all $x$ satisfying the condition $P$ and all $y$ satisfying $Q$ (e.g. $Q$ can be the negation of $P$, although this would normally lead to discontinuity).
- $u_A$ is bounded to have range in the interval $[0, 1]$ (or range in an interval of length at most 1).
Then we can define our utility function as the sum $u = u_A + u_B$, so $u(x) \geq u(y)$ whenever $x$ satisfies $P$ and $y$ satisfies $Q$.
This ensures that all outcomes satisfying $P$ are at least as good as all outcomes satisfying $Q$, without being Pascalian/fanatical to maximize $u_B$ regardless of what happens to $u_A$. Note, however, that $u_A$ may be increasingly difficult to change as the number of moral patients increases, so we may approximate Pascalian fanaticism in this limit, anyway.
For example, if there is any suffering in outcome $x$ that meets a certain threshold of intensity, $u_B(x) \leq -1$, and if there is no suffering at all in $x$, $u_B(x) = 0$. $u_B$ can still be continuous this way.
If the probability that this threshold is met is at most $p$ and the expected value of $u_B$ conditional on this is bounded below by $-m$, so that $E[u_B] \geq -pm$, for all of the choices available to you, then increasing the expected value of $u_A$ by at least $pm$, which can be small, is better than trying to reduce the risk of the threshold being met.
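Here's a minimal numeric sketch of this construction (my own illustration, with a logistic squashing for $u_A$, outcomes as dictionaries, and, for simplicity, $u_B$ as a discontinuous indicator of the threshold being met):

```python
import math

def u_A(x):
    """Ordinary (non-lexical) value of outcome x, squashed into (0, 1)."""
    return 1 / (1 + math.exp(-x["ordinary_value"]))

def u_B(x):
    """Lexically dominant term: drops by 1 if the suffering threshold is met."""
    return -1.0 if x["threshold_met"] else 0.0

def u(x):
    return u_A(x) + u_B(x)

# Every outcome without threshold-level suffering is at least as good as every
# outcome with it, because u_A differences are always less than 1:
assert (u({"ordinary_value": -50, "threshold_met": False})
        >= u({"ordinary_value": 50, "threshold_met": True}))

# But probabilistic tradeoffs are still possible:
def expected_u(lottery):  # lottery: list of (probability, outcome) pairs
    return sum(prob * u(x) for prob, x in lottery)

sure_modest = [(1.0, {"ordinary_value": 2, "threshold_met": False})]
risky_better = [(0.3, {"ordinary_value": 5, "threshold_met": True}),
                (0.7, {"ordinary_value": 5, "threshold_met": False})]
print(expected_u(sure_modest) > expected_u(risky_better))  # True: here the 30% risk outweighs the gain
```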
As another example, an AI could be incentivized to ensure it gets monitored by law enforcement. Its reward function could look like
$f(u) + \sum_t M_t$,
where $M_t$ is 1 if the AI is monitored by law enforcement and passes some test (or did nothing?) in period $t$, and 0 otherwise, and $f$ squashes the AI's utility function $u$ into an interval of length at most 1. You could put an upper bound on the number of periods or use discounting to ensure the right term can't evaluate to infinity since that would allow $f(u)$ to be ignored (maybe the AI will predict its expected lifetime to be infinite), but this would eventually allow $f(u)$ to overcome the $M_t$, unless you also discount the future in $u$.
This should also allow us to modify the utility function $u$, if preventing the modification would cause a test to be failed.
Furthermore, satisfying the tests strongly lexically dominates increasing $u$, but we can still make expected tradeoffs between them.
The problem then reduces to designing the AI in such a way that it can't cheat on the test, which might be something we can hard-code into it (e.g. its internal states and outputs are automatically sent to law enforcement), and so could be easier than getting $u$ right.
This overall approach can be repeated for any finite number of functions, $u_1, u_2, \ldots, u_n$. Recursively, you could define
$U_1 = u_1$, and $U_{k+1} = f_k(U_k) + u_{k+1}$,
for increasing and bounded $f_k$ with range in an interval of length at most 1, e.g. some sigmoid function. In this way, each $u_{k+1}$ dominates the previous ones, as above.
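A rough sketch of this recursion (my own illustration; the logistic is an assumed choice of the $f_k$):

```python
import math

def logistic(x):
    """An increasing, bounded squashing function with range (0, 1)."""
    return 1 / (1 + math.exp(-x))

def nested_utility(levels):
    """levels: [u_1(x), ..., u_n(x)], ordered from least to most important.
    Implements U_1 = u_1, U_{k+1} = f_k(U_k) + u_{k+1} with f_k = logistic."""
    U = levels[0]
    for u_next in levels[1:]:
        U = logistic(U) + u_next
    return U

# A unit change at the last (most important) level outweighs everything
# accumulated before it, since the accumulated part is squashed into (0, 1):
print(nested_utility([100.0, 3.0, 0.0]) < nested_utility([-100.0, -3.0, 1.0]))  # True
```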
MichaelStJules @ 2020-05-03T16:44 (+4)
To adapt this to a more deontological approach (not rule violation minimization, but according to which you should not break a rule in order to avoid violating a rule later), you could use geometric discounting, and your (moral) utility function could look like:
$f(x) = -\sum_{i=0}^{\infty} r^i I_i(x)$,
where
1. $x$ is the act and its consequences without uncertainty, and you maximize the expected value of $f$ over uncertainty in $x$,
2. time is broken into infinitely many disjoint intervals $T_0, T_1, T_2, \ldots$, with $T_i$ coming just before $T_{i+1}$ temporally (and these intervals are chosen to have the same time endpoints for each possible $x$),
3. $I_i(x) = 1$ if a rule is broken in $T_i$, and $I_i(x) = 0$ otherwise, and
4. $r$ is a constant, $0 < r < 1/2$.
So, the idea is that $f(x) > f(y)$ if and only if the earliest rule violation in $x$ happens later than the earliest one in $y$ (at the level of precision determined by how the intervals are broken up). The value of $r$ ensures this. (Well, there are some rare exceptions, when the earliest violations happen in the same interval.) You essentially count rule violations and minimize the number of them, but you use geometric discounting based on when the rule violation happens in such a way to ensure that it's always worse to break a rule earlier than to break any number of rules later.
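A short numeric check of this (my own illustration, with an assumed $r = 0.4$ and a finite horizon):

```python
def f(violations, r=0.4):
    """violations: 0/1 flags, one per interval T_0, T_1, ...; f(x) = -sum_i r^i * I_i(x)."""
    return -sum((r ** i) * broken for i, broken in enumerate(violations))

horizon = 20
one_early = [0, 1] + [0] * (horizon - 2)   # a single violation in T_1
many_late = [0, 0] + [1] * (horizon - 2)   # a violation in every later interval

# With r < 1/2, one earlier violation is worse than any number of later ones:
assert f(one_early) < f(many_late)
```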
However, breaking time up into intervals this way probably sucks for a lot of reasons, and I doubt it would lead to prescriptions people with deontological views endorse when they maximize expected values.
This approach basically took for granted that a rule is broken not when I act, but when a particular consequence occurs.
If, on the other hand, a rule is broken at the time I act, maybe I need to use some other functions in place of the indicators $I_i$, because whether or not I act now (in time interval $T_0$) and break a rule depends on what happens in the future. This way, however, the value for the current interval could basically always be nonzero, so I don't think this approach works.
MichaelStJules @ 2020-07-07T22:58 (+3)
This nesting approach with the $f_k$ above also allows us to "fix" maximin/leximin under conditions of uncertainty to avoid Pascalian fanaticism, given a finite discretization of welfare levels or finite number of lexical thresholds. Let the welfare levels be $w_1 < w_2 < \ldots < w_n$, and define:
$u_k(x) = -\sum_i \mathbb{1}[W_i(x) \leq w_k]$,
i.e. $-u_k(x)$ is the number of individuals with welfare level at most $w_k$, where $W_i(x)$ is the welfare of individual $i$ in outcome $x$, and $\mathbb{1}[W_i(x) \leq w_k]$ is 1 if $W_i(x) \leq w_k$ and 0 otherwise. Alternatively, we could use strict inequalities, $W_i(x) < w_k$.
In situations without uncertainty, this requires us to first choose among options that minimize the number of individuals with welfare at most $w_1$, because $u_1$ takes priority over $u_k$ for all $k > 1$, and then, having done that, choose among those that minimize the number of individuals with welfare at most $w_2$, since $u_2$ takes priority over $u_k$ for all $k > 2$, and then choose among those that minimize the number of individuals with welfare at most $w_3$, and so on, until $w_n$.
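Under certainty, the resulting ranking can be computed by lexicographic comparison of the counts, as in this sketch (my own illustration; outcomes are represented as lists of individual welfare levels):

```python
# Rank outcomes by lexicographically minimizing
# (number with welfare <= w_1, number with welfare <= w_2, ...).
def count_vector(outcome, welfare_levels):
    """welfare_levels must be sorted from worst to best."""
    return [sum(1 for w in outcome if w <= level) for level in welfare_levels]

def better(outcome_a, outcome_b, welfare_levels):
    """True if outcome_a is strictly better than outcome_b under this rule."""
    return count_vector(outcome_a, welfare_levels) < count_vector(outcome_b, welfare_levels)

levels = [0, 1, 2, 3]                          # w_1 < w_2 < w_3 < w_4
assert better([1, 3, 3], [0, 3, 3], levels)    # fewer individuals at the lowest level wins
assert better([2, 2, 3], [1, 3, 3], levels)    # ties at w_1 are broken at w_2
```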
This particular social welfare function assigns negative value to new existences when there are no impacts on others, which leximin/maximin need not do in general, although it typically does in practice, anyway.
This approach does not require welfare to be cardinal, i.e. adding and dividing welfare levels need not be defined. It also dodges representation theorems like this one (or the stronger one in Lemma 1 here, see the discussion here), because continuity is not satisfied (and welfare need not have any topological structure at all, let alone be real-valued). Yet, it still satisfies anonymity/symmetry/impartiality, monotonicity/Pareto, and separability/independence. Separability means that whether one outcome is better or worse than another does not depend on individuals unaffected by the choice between the two.
MichaelStJules @ 2020-07-08T00:43 (+2)
Here's a way to capture lexical threshold utilitarianism with a separable theory and while avoiding Pascalian fanaticism, with a negative threshold $t_- < 0$ and a positive threshold $t_+ > 0$:
$u(x) = f\left(\sum_i w_i(x)\right) + \sum_i \mathbb{1}[w_i(x) \geq t_+] - \sum_i \mathbb{1}[w_i(x) \leq t_-]$,
where $w_i(x)$ is the welfare of individual $i$ in outcome $x$.
- The first term is just standard utilitarianism, but squashed with a function $f$ into an interval of length at most 1.
- The second/middle sum is the number of individuals (or experiences or person-moments) with welfare at least $t_+$, which we add to the first term. Any change in number past this threshold dominates the first term.
- The third/last sum is the number of individuals with welfare at most $t_-$, which we subtract from the rest. Any change in number past this threshold dominates the first term.
Either of the second or third term can be omitted.
We could require $t_- \leq w_i(x) \leq t_+$ for all $i$, although this isn't necessary.
More thresholds could be used, as in this comment: we would apply a bounded increasing function to the whole expression above, and then add new terms like the second and/or the third, with thresholds $t_{++} > t_+$ and $t_{--} < t_-$, and repeat as necessary.
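A sketch of the whole expression (my own illustration, with an assumed squashing function and assumed threshold values):

```python
import math

def U(welfares, t_minus=-10.0, t_plus=10.0):
    """f(total welfare) + #(welfare >= t_plus) - #(welfare <= t_minus)."""
    squashed_total = 0.5 * (1 + math.tanh(sum(welfares)))  # range (0, 1)
    very_good = sum(1 for w in welfares if w >= t_plus)
    very_bad = sum(1 for w in welfares if w <= t_minus)
    return squashed_total + very_good - very_bad

# One individual past the negative threshold outweighs any amount of
# sub-threshold welfare for the others:
assert U([9, 9, -11]) < U([1])
```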
MichaelStJules @ 2020-03-27T04:19 (+6)
I think EA hasn't sufficiently explored the use of different types of empirical studies from which we can rigorously estimate causal effects, other than randomized controlled trials (or other experiments). This leaves us either relying heavily on subjective estimates of the magnitudes of causal effects based on weak evidence, anecdotes, expert opinion or basically guesses, or being skeptical of interventions whose cost-effectiveness estimates don't come from RCTs. I'd say I'm pretty skeptical, but not so skeptical that I think we need RCTs to conclude anything about the magnitudes of causal effects. There are methods to do causal inference from observational data.
I think this has led us to:
1. Underexploring the global health and development space. See John Halstead's and Hauke Hillebrandt's "Growth and the case against randomista development". I think GiveWell is starting to look beyond RCTs. There's probably already a lot of research out there they can look to.
2. Relying too much on guesses and poor studies in the effective animal advocacy space (especially in the past), for example overestimating the value of leafletting. I think things have improved a lot since then, and I thought the evidence presented in the work of Rethink Priorities, Charity Entrepreneurship and Founders Pledge on corporate campaigns was good enough to meet the bar for me to donate to support corporate campaigns specifically. Humane League Labs and some academics have done and are doing research to estimate causal effects from observational data that can inform EAA.
MichaelStJules @ 2019-10-24T06:08 (+5)
Fehige defends the asymmetry between preference satisfaction and frustration on rationality grounds. I start from a "preference-affecting view" in this comment, and in replies, describe how to get to antifrustrationism and argue against a symmetric view.
Let's consider a given preference from the point of view of a given outcome after choosing it, in which the preference either exists or does not, by cases:
1. The preference exists:
a. If there's an outcome in which the preference exists and is more satisfied, and all else is equal, it would have been irrational to have chosen this one (over it, and at all).
b. If there's an outcome in which the preference exists and is less satisfied, and all else is equal, it would have been irrational to have chosen the other outcome (over this one, and at all).
c. If there's an outcome in which the preference does not exist, and all else is equal, the preference itself does not tell us if either would have been irrational to have chosen.
2. The preference doesn't exist:
a. If there's an outcome in which the preference exists, regardless of its degree of satisfaction, and all else equal, the preference itself does not tell us if either would have been irrational to have chosen.
So, all else equal besides the existence or degree of satisfaction of the given preference, it's always rational to choose an outcome in which the preference does not exist, but it's irrational to choose an outcome in which the preference exists but is less satisfied than in another outcome.
(I made a similar argument in the thread starting here.)
MichaelStJules @ 2020-03-27T01:59 (+8)
I also think that antifrustrationism in some sense overrides interests less than symmetric views (not to exclude "preference-affecting" views or mixtures as options, though). Rather than satisfying your existing preferences, according to symmetric views, it can be better to create new preferences in you and satisfy them, against your wishes. This undermines the appeal of autonomy and subjectivity that preference consequentialism had in the first place. If, on the other hand, new preferences don't add positive value, then they can't compensate for the violation of preferences, including the violation of preferences to not have your preferences manipulated in certain ways.
Consider the following two options for interests within one individual:
A. Interest 1 exists and is fully satisfied
B. Interest 1 exists and is not fully satisfied, and interest 2 exists and is (fully) satisfied.
A symmetric view would sometimes choose B, so that the creation of interests can take priority over interests that would exist regardless. In particular, the proposed benefit comes from satisfying an interest that would not have existed in the alternative, so it seems like we're overriding the interests the individual would have in A with a new interest, interest 2. For example, we make someone want something and satisfy that want, at the expense of their other interests.
On the other hand, consider:
A. Interest 1 exists and is partially unsatisfied
B. Interest 1 exists and is fully satisfied, and interest 2 exists and is partially unsatisfied.
In this case, antifrustrationism would sometimes choose A, so that the removal or avoidance of an otherwise unsatisfied interest can take priority over (further) satisfying an interest that would exist anyway. But in this case, if we choose A because of concerns for interest 2, at least interest 2 would exist in the alternative A, so the benefit comes from the avoidance of an interest that would have otherwise existed. In A, compared to B, I wouldn't say we're overriding interests, we're dealing with an interest, interest 2, that would have existed otherwise.
Smith and Black's "The morality of creating and eliminating duties" deals with duties rather than preferences, and argues that assigning positive value to duties and their satisfaction leads to perverse conclusions like the above with preferences, and they have a formal proof for this under certain conditions.
Some related writings, although not making the same point I am here:
MichaelStJules @ 2019-10-26T15:01 (+3)
I also think this argument isn't specific to preferences, but could be extended to any interests, values or normative standards that are necessarily held by individuals (or other objects), including basically everything people value (see here for a non-exhaustive list). See Johann Frick’s paper and thesis which defend the procreation asymmetry, and my other post here.
MichaelStJules @ 2020-02-24T04:10 (+2)
Then, if you extend these comparisons to satisfy the independence of irrelevant alternatives by stating that in comparisons of multiple choices in an option set, all permissible options are strictly better than all impermissible options regardless of option set, extending these rankings beyond the option set, the result is antifrustrationism. To show this, you can use the set of the following three options, which are identical except in the ways specified:
- $O_1$: a preference exists and is fully satisfied,
- $O_2$: the same preference exists and is not fully satisfied, and
- $O_3$: the preference doesn't exist,
and since $O_2$ is impermissible because of the presence of $O_1$, while $O_1$ and $O_3$ are permissible, this means $O_3 > O_2$, and so it's always better for a preference to not exist than for it to exist and not be fully satisfied, all else equal.
MichaelStJules @ 2024-10-11T05:38 (+4)
I never found psychological hedonism (or motivational hedonism) very plausible, but I think it's worth pointing out that the standard version, according to which everyone is ultimately motivated only by their own pleasure and pain, is a form of psychological egoism and seems incompatible with sincerely being a hedonistic utilitarian or caring about others and their interests for their own sake.
From https://www.britannica.com/topic/psychological-hedonism :
Psychological hedonism, in philosophical psychology, the view that all human action is ultimately motivated by desires for pleasure and the avoidance of pain. It has been espoused by a variety of distinguished thinkers, including Epicurus, Jeremy Bentham, and John Stuart Mill, and important discussions of it can also be found in works by Plato, Aristotle, Joseph Butler, G.E. Moore, and Henry Sidgwick.
Because its defenders generally assume that agents are motivated only by the prospect of their own pleasures and pains, psychological hedonism is a form of psychological egoism.
More concretely, a psychological hedonist who cares about others, but only based on how it makes them feel, would prefer to never find out that they've caused harm or are doing less good than they could, if it wouldn't make them (eventually) feel better overall. They don't actually want to do good, they just want to feel like they're doing good. Ignorance is bliss.
They could be more inclined to get in or stay in an experience machine, knowing they'd feel better even if it meant never actually helping anyone else.
That being said, they might feel bad about it if they know they're in or would be in an experience machine. So, they might refuse the experience machine by following their immediate feelings and ignoring the fact that they'd feel better overall in the long run. This kind of person seems practically indistinguishable from someone who sincerely cares about others through and based on their feelings.
MichaelStJules @ 2020-10-01T03:31 (+3)
This is an argument against hedonic utility being cardinal and against widespread commensurability between hedonic experiences of different kinds. It seems that our tradeoffs, however we arrive at them, don't track the moral value of hedonic experiences.
Let X be some method or system by which we think we can establish the cardinality and/or commensurability of our hedonic experiences, and rough tradeoff rates. For example, X=reinforcement learning system in our brains, our actual choices, or our judgements of value (including intensity).
If X is not identical to our hedonic experiences, then it may be the case that X is itself what's forcing the observed cardinality and/or commensurability onto our hedonic experiences. But if it's X that's doing this, and it's the hedonic experiences themselves that are of moral value, then that cardinality and/or commensurability are properties of X, not our hedonic experiences themselves. So the observed cardinality and/or commensurability is a moral illusion.
Here's a more specific illustration of this argument:
Do our reinforcement systems have access to our whole experiences (or the whole hedonic component), or only some subsets of those neurons that are firing that are responsible for them? And what if they're more strongly connected to parts of the brain for certain kinds of experiences than others? It seems like there's a continuum of ways our reinforcement systems could be off or even badly off, so it would be more surprising to me that it would track true moral tradeoffs perfectly. Change (or add or remove) one connection between a neuron in the hedonic system and one in the reinforcement system, and now the tradeoffs made will be different, without affecting the moral value of the hedonic states. If the link between hedonic intensity and reinforcement strength is so fragile, what are the chances the reinforcement system has got it exactly right in the first place? Should be 0 (assuming my model is right).
At least for similar hedonic experiences of different intensities, if they're actually cardinal, we might expect the reinforcement system to capture some continuous monotonic transformation and not a linear transformation. But then it could be applying different monotonic transformations to different kinds of hedonic experiences. So why should we trust the tradeoffs between these different kinds of hedonic experiences?
MichaelStJules @ 2020-10-05T21:43 (+4)
The "cardinal hedonist" might object that X (e.g. introspective judgement of intensity) could be identical to our hedonistic experiences, or does track their cardinality closely enough.
I think, as a matter of fact, X will necessarily involve extra (neural) machinery that can distort our judgements, as I illustrate with the reinforcement learning case. It could be that our judgements are still approximately correct despite this, though.
Most importantly, the accuracy of our judgements depends on there being something fundamental that they're tracking in the first place, so I think hedonists who use cardinal judgements of intensity owe us a good explanation for where this supposed cardinality comes from, which I expect is not possible with our current understanding of neuroscience, and I'm skeptical that it will ever be possible. I think there's a great deal of unavoidable arbitrariness in our understanding of consciousness.
MichaelStJules @ 2020-10-01T03:44 (+2)
Here's an illustration with math. Let's consider two kinds of hedonic experiences, $P$ and $Q$, with at least three different (signed) intensities each, $p_1 < p_2 < p_3$ and $q_1 < q_2 < q_3$, respectively. These intensities are at least ordered, but not necessarily cardinal like real numbers or integers, and we can't necessarily compare the $p_i$ and $q_j$. For example, $P$ and $Q$ might be pleasure and suffering generally (with suffering negatively signed), or more specific experiences of these.
Then, what X does is map these intensities to numbers through some function,
$f : \{p_1, p_2, p_3\} \cup \{q_1, q_2, q_3\} \to \mathbb{R}$,
satisfying $f(p_1) < f(p_2) < f(p_3)$ and $f(q_1) < f(q_2) < f(q_3)$. We might even let $P$ and $Q$ be some ordered continuous intervals, isomorphic to a real-valued interval, and have $f$ be continuous and increasing on each of $P$ and $Q$, but again, it's $f$ that's introducing the cardinalization and commensurability (or a different cardinalization and commensurability from the real one, if any); these aren't inherent to $P$ and $Q$.
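As a toy numeric illustration of this (my own example), here are two maps that both respect the within-kind orderings but imply opposite cross-kind tradeoffs:

```python
# Ordinal intensities p1 < p2 < p3 and q1 < q2 < q3, and two candidate maps to numbers.
f = {"p1": 1, "p2": 2, "p3": 3, "q1": 1, "q2": 10, "q3": 100}
g = {"p1": 1, "p2": 20, "p3": 300, "q1": 1, "q2": 2, "q3": 3}

within_kind_pairs = [("p1", "p2"), ("p2", "p3"), ("q1", "q2"), ("q2", "q3")]
assert all(f[a] < f[b] for a, b in within_kind_pairs)  # f respects both orderings
assert all(g[a] < g[b] for a, b in within_kind_pairs)  # so does g

# Yet f says q3 outweighs p3, while g says the opposite: the cross-kind
# tradeoffs come from the map, not from the ordinal intensities themselves.
assert f["q3"] > f["p3"] and g["q3"] < g["p3"]
```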
MichaelStJules @ 2020-04-14T02:21 (+3)
If you're a consequentialist and you think
1. each individual can sometimes sacrifice some of one good, $A$, for more of another, $B$, for themself,
2. we should be impartial, and
3. transitivity and the independence of irrelevant alternatives hold,
then it’s sometimes ethical to sacrifice from one individual for more for another. This isn't too surprising, but let's look at the argument, which is pretty simple, and discuss some examples.
Consider the following three options, with two individuals, 1 and 2, and their amounts of $A$ and $B$:
i. $(a - \delta, b + \epsilon)$ for individual 1 and $(a - \delta, b)$ for individual 2, read as individual 1 has amount $a - \delta$ of $A$ and amount $b + \epsilon$ of $B$, while individual 2 has amount $a - \delta$ of $A$ and amount $b$ of $B$.
ii. $(a, b)$ for individual 1 and $(a - \delta, b)$ for individual 2.
iii. $(a - \delta, b)$ for individual 1 and $(a, b)$ for individual 2.
Here we have i > ii by 1 for some $a$, $b$, $\delta$ and $\epsilon$, and ii = iii by impartiality, so together i > iii by 3, and we sacrifice some $A$ from individual 2 for some more $B$ for individual 1.
Remark: I did choose the amounts of $A$ and $B$ pretty specifically in this argument to match in certain ways. With continuous personal tradeoffs between $A$ and $B$, and continuous tradeoffs between amounts of $A$ between different individuals at all base levels of $B$, I think this should force continuous tradeoffs between one individual's amount of $A$ and another's amount of $B$. We can omit the impartiality assumption in this case.
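For concreteness, here's one instance of the three options with made-up numbers, $a = 10$, $\delta = 1$, $b = 0$ and $\epsilon = 5$:

```python
# Options as {individual: (amount of A, amount of B)}.
i   = {"1": (9, 5), "2": (9, 0)}   # ii after individual 1 trades 1 unit of A for 5 of B
ii  = {"1": (10, 0), "2": (9, 0)}
iii = {"1": (9, 0), "2": (10, 0)}  # ii with the two bundles swapped

# Premise 1: i > ii (a personal tradeoff for individual 1).
# Premise 2: ii = iii (same bundles, identities swapped).
# Transitivity + IIA (premise 3): i > iii. Comparing i and iii directly,
# individual 1 gains 5 units of B while individual 2 loses 1 unit of A.
assert sorted(ii.values()) == sorted(iii.values())  # ii and iii have the same bundles
```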
Possible examples:
- hedonistic welfare, some non-hedonistic values
- experiential values, some non-experiential values
- absence or negative of suffering, knowing the truth, for its own sake (not its instrumental value)
- absence or negative of suffering, pleasure
- absence or negative of suffering, anything else that could be good
- absence or negative of intense suffering, absence or negative of mild suffering
In particular, if you’d be willing to endure torture for some other good, you should be willing to allow others to be tortured for you to get more of that good.
I imagine people will take this either way, e.g. some will accept that it's actually okay to let some be tortured for some other kind of benefit to different people, and others will accept that nothing can compensate them for torture. I fall into the latter camp.
Others might also reject the independence of irrelevant alternatives or transitivity, or their "spirit", e.g. by individuating options to option sets. I'm pretty undecided about independence these days.
MichaelStJules @ 2020-03-27T05:54 (+3)
I've been thinking more lately about how I should be thinking about causal effects for cost-effectiveness estimates, in order to clarify my own skepticism of more speculative causes, especially longtermist ones, and better understand how skeptical I ought to be. Maybe I'm far too skeptical. Maybe I just haven't come across a full model for causal effects that's convincing since I haven't been specifically looking. I've been referred to this in the past, and plan to get through it, since it might provide some missing pieces for the value of research. This also came up here.
Suppose I have two random variables, $X$ and $Y$, and I want to know the causal effect of manipulating $X$ on $Y$, if any.
1. If I'm confident there's no causal relationship between the two, say due to spatial separation, I assume there is no causal effect, and conditional on the manipulation of $X$ to take value $x$ (possibly random), $Y$ is identical to the unmanipulated $Y$, i.e. $Y|do(X=x) = Y$. (The notation $do(\cdot)$ is Pearl's do-calculus notation.)
2. If $X$ could affect $Y$, but I know nothing else,
a. I might assume, based on symmetry (and chaos?) for $Y$, that $Y|do(X=x)$ and $Y$ are identical in distribution, but not necessarily literally equal as random variables. They might be slightly "shuffled" or permuted versions of each other (see symmetric decreasing rearrangements for specific examples of such a permutation). The difference in expected values is still 0. This is how I think about the effects of my every day decisions, like going to the store, breathing at particular times, etc. on future populations. I might assume the same for variables that depend on $Y$.
b. Or, I might think that manipulating $X$ just injects noise into $Y$, possibly while preserving some of its statistics, e.g. the mean or median. A simple case is just adding random symmetric noise with mean and median 0 to $Y$. However, whether or not a statistic is preserved with the extra noise might be sensitive to the scale on which $Y$ is measured. For example, if $Y$ is real-valued, and $g$ is strictly increasing, then for the median, $\mathrm{median}(g(Y)) = g(\mathrm{median}(Y))$, but the same is not necessarily true for the expected value of $g(Y)$, or for other variables that depend on $Y$ (see the short numeric check below).
c. Or, I might think that manipulating $X$ makes $Y$ closer to a "default" distribution over the possible values of $Y$, often but not always uninformed or uniform. This can shift the mean, median, etc., of $Y$. For example, $Y$ could be the face of the coin I see on my desk, and $X$ could be whether I flip the coin or not, with not flipping as the default. So, if I do flip the coin and hence manipulate $X$, this randomizes the value of $Y$, making my probability distribution for its value uniformly random instead of a known, deterministic value. You might think that some systems are the result of optimization and therefore fragile, so random interventions might return them to prior "defaults", e.g. naive systemic change or changes to ecosystems. This could be (like) regression to the mean.
I'm not sure how to balance these three possibilities generally. If I do think the effects are symmetric, I might go with a or b or some combination of them. In particular asymmetric cases, I might also combine c.
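Here's a quick numeric check of the scale-sensitivity point in 2b (my own illustration): the median commutes with a strictly increasing rescaling of $Y$, but the mean generally doesn't:

```python
import statistics

y_samples = [1.0, 2.0, 3.0, 4.0, 100.0]   # toy samples of Y

def rescale(y):
    """A strictly increasing rescaling of Y."""
    return y ** 3

# The median is preserved under the rescaling, but the mean is not:
assert statistics.median(rescale(y) for y in y_samples) == rescale(statistics.median(y_samples))
assert statistics.mean(rescale(y) for y in y_samples) != rescale(statistics.mean(y_samples))
```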
3. Suppose I have a plausible argument for how could affect in a particular way, but no observations that can be used as suitable proxies, even very indirect, for counterfactuals with which to estimate the size of the effect. I lean towards dealing with this case as in 2, rather than just making assumptions about effect sizes without observations.
For example, someone might propose a causal path through which $X$ affects $Y$ with a missing estimate of effect size at at least one step along the path, but an argument to the effect that this should increase the value of $Y$. It is not enough to consider only one such path, since there may be many paths from $X$ to $Y$, e.g. different considerations for how $X$ could affect $Y$, and these would need to be combined. Some could have opposite effects. By 2, those other paths, when combined with the proposed causal path, reduce the effects of $X$ on $Y$ through the proposed path. The longer the proposed path, the more unknown alternate paths.
I think this is where I am now with speculative longtermist causes. Part of this may be my ignorance of the proposed causal paths and estimates of effect sizes, since I haven't looked too deeply at the justifications for these causes, but the dampening from unknown paths also applies when the effect sizes along a path are known, which is the next case.
4. Suppose I have a causal path through some other variable $Z$, $X \to Z \to Y$, so that $X$ causes $Z$ and $Z$ causes $Y$, and I model both the effects of $X$ on $Z$ and of $Z$ on $Y$, based on observations. Should I just combine the two for the effect of $X$ on $Y$? In general, not in the straightforward way. As in 3, there could be another causal path, say $X \to W \to Y$ (and it could be longer, instead of with just a single intermediate variable).
As in case 3, you can think of these other paths as dampening the effect of $X$ on $Y$ through the known path $X \to Z \to Y$, and with long proposed causal paths, we might expect the net effect to be small, consistently with the intuition that the predictable impacts on the far future decrease over time due to ignorance/noise and chaos, even though the actual impacts may compound due to chaos.
Maybe I'll write this up as a full post after I've thought more about it. I imagine there's been writing related to this, including in the EA and rationality communities.
MichaelStJules @ 2020-08-19T03:21 (+2)
I think cluster thinking and the use of sensitivity analysis are approaches for decision making under deep uncertainty, when it's difficult to commit to a particular joint probability distribution or weight considerations. Robust decision making is another. The maximality rule is another: given some set of plausible (empirical or ethical) worldviews/models for which we can't commit to quantifying our uncertainty, if A is worse in expectation than B under some subset of plausible worldviews/models, and not better than B in expectation under any such set of plausible worldviews/models, we say A < B, and we should rule out A.
It seems like EAs should be more familiar with the field of decision making under deep uncertainty. (Thanks to this post by weeatquince for pointing this out.)
See also:
- Deep Uncertainty by Walker, Lempert and Kwakkel for a short review.
- Decision Making under Deep Uncertainty: From Theory to Practice for a comprehensive text.
- Heuristics for clueless agents: how to get away with ignoring what matters most in ordinary decision-making by David Thorstad and Andreas Mogensen
- Many Weak Arguments vs. One Relatively Strong Argument and Robustness of Cost-Effectiveness Estimates and Philanthropy by Jonah Sinick
- Why I'm skeptical about unproven causes (and you should be too) by Peter Hurford (LW, blog)
- The Optimizer's Curse & Wrong-Way Reductions by Chris Smith (blog)
MichaelStJules @ 2020-08-22T20:54 (+2)
EDIT: I think this approach isn't very promising.
The above mentioned papers by Mogensen and Thorstad are critical of the maximality rule for being too permissive, but here's a half-baked attempt to improve it:
Suppose you have a social welfare function $u$, and want to compare two options, $A$ and $B$. Suppose further that you have two sets of probability distributions of size $n$ for the outcome of each of $A$ and $B$, $\{p_1, \ldots, p_n\}$ and $\{q_1, \ldots, q_n\}$ respectively. Then $A \succeq B$ ($A$ is at least as good as $B$) if (and only if) there is a bijection $\sigma$ on $\{1, \ldots, n\}$ such that
$E_{p_i}[u] \geq E_{q_{\sigma(i)}}[u]$ for all $i$, (1)
and furthermore, $A \succ B$ ($A$ is strictly better than $B$) if the above inequality is strict for some $i$.
This means pairing asymmetric/complex cluelessness arguments. Suppose you think helping an elderly person cross the street might have some important effect on the far future (you have some distribution $p_i$ for this), but you think not doing so could also have a similar far-future effect (according to some $q_j$), but the short-term consequences are worse, and under some pairing of distributions/arguments $\sigma$, helping the elderly person always looks at least as good and under one pair looks better, so you should do it. Pairing distributions like this in some sense forces us to give equal weight to $p_i$ and $q_{\sigma(i)}$, and maybe this goes too far and assumes away too much of our cluelessness or deep uncertainty?
The maximality rule as described in Maximal Cluelessness effectively assumes a pairing is already given to you, by instead using a single set of distributions that can each be conditioned on taking action $A$ or $B$. We'd omit the bijection $\sigma$, and the expression replacing (1) above would be
$E_{p_i}[u \mid A] \geq E_{p_i}[u \mid B]$ for all $i$.
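Here's a sketch of both criteria (my own implementation of the definitions above, taking the expected values as already-computed numbers):

```python
from itertools import permutations

def at_least_as_good_paired(ev_a, ev_b):
    """Criterion (1): A is at least as good as B iff some bijection pairs each of
    A's n expected values with a B expected value that it weakly exceeds."""
    assert len(ev_a) == len(ev_b)
    return any(
        all(ev_a[i] >= ev_b[sigma[i]] for i in range(len(ev_a)))
        for sigma in permutations(range(len(ev_b)))
    )

def at_least_as_good_single_set(ev_a_given_p, ev_b_given_p):
    """Maximality-style version with one shared set of distributions:
    A is at least as good as B iff E_p[u | A] >= E_p[u | B] for every p."""
    return all(a >= b for a, b in zip(ev_a_given_p, ev_b_given_p))

assert at_least_as_good_paired([1.0, 5.0], [0.9, 4.8])       # pairing in order works
assert not at_least_as_good_paired([1.0, 5.0], [2.0, 6.0])   # no pairing works
```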
I'm not sure what to do for different numbers of distributions for each option or infinitely many distributions. Maybe the function $\sigma$ should be assumed given, as a preferred mapping between distributions, and we could relax the surjectivity, total domain, injectivity and even the fact that it's a function, e.g. we compare $E_p[u] \geq E_q[u]$ for pairs $(p, q) \in R$, for some relation (subset) $R$. But assuming we already have such a function or relation seems to assume away too much of our deep uncertainty.
One plausibly useful first step is to sort the $p_i$ and the $q_j$ according to the expected values of $u$ under the corresponding probability distributions, respectively. Should the mapping or relation preserve the min and max? How should we deal with everything else? I suspect any proposal will seem arbitrary.
Perhaps we can assume slightly more structure on the sets for each option by assuming multiple probability distributions over them, and go up a level (and we could repeat). Basically, I want to give probability ranges to the expected value of each action, and then compare the possible expected values of these expected values. However, if we just multiply our higher-order probability distributions by the lower-order ones, this comes back to the original scenario.
MichaelStJules @ 2019-10-25T04:19 (+2)
If we think
1. it's always better to improve the welfare of an existing person (or someone who would exist anyway) than to bring others into existence, all else equal, and
2. two outcomes are (comparable and) equivalent if they have the same distribution of welfare levels (but possibly different identities; this is often called Anonymity),
then not only would we reject Mere Addition (the claim that adding good lives, even those which are barely worth living but still worth living, is never bad), but the following would be true:
Given any two nonempty populations $X$ and $Y$, if any individual in $Y$ is worse off than any individual in $X$, then $X \cup Y$ is worse than $X$. In other words, we shouldn't add to a population any individual who isn't at least as well off as the best off in the population, all else equal.
Intuitively, adding someone with worse welfare than someone who would exist anyway is equivalent to reducing the existing individual's welfare and adding someone with better welfare than them; you just swap their welfares.
More formally, suppose $x$, a member of the original population $X$ with welfare $w_x$, is better off than $y$, a member of the added population $Y$ with welfare $w_y$, so $w_x > w_y$. Then consider
$X'$, which is $X$, but has $y$ instead of $x$, with welfare $w_x$.
$Y'$, which is $Y$, but has $x$ instead of $y$, with welfare $w_y$.
Then, $X$ is better than $X' \cup Y'$, by the first hypothesis, because the latter has all the same individuals from $X$ (and extras from $Y$) with exactly the same welfare levels, except for $x$ (from $X$ and $Y'$) who is worse off with welfare $w_y$ (from $Y'$) instead of $w_x$ (from $X$). So $X > X' \cup Y'$.
And $X' \cup Y'$ is equivalent to $X \cup Y$, by the second hypothesis, because the only difference is that we've swapped the welfare levels of $x$ and $y$. So $X' \cup Y' \sim X \cup Y$.
So, by transitivity (and the independence of irrelevant alternatives), $X > X \cup Y$.
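A small concrete check of the swap (my own illustration, with made-up welfare levels):

```python
# X = {x at welfare 5, a at 4}; Y = {y at 2}. y is worse off than x.
# X' = X but with y in place of x (welfare 5); Y' = Y but with x in place of y (welfare 2).
X = {"x": 5, "a": 4}
Y = {"y": 2}
X_union_Y = {**X, **Y}
Xp_union_Yp = {"y": 5, "a": 4, "x": 2}

# Anonymity: X' u Y' has the same welfare distribution as X u Y.
assert sorted(X_union_Y.values()) == sorted(Xp_union_Yp.values())

# Hypothesis 1: X is better than X' u Y', since moving from X' u Y' to X improves
# x's welfare (2 -> 5) rather than adding the extra individuals from Y.
# With transitivity and IIA: X > X' u Y' = X u Y, so adding y makes things worse.
```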
MichaelStJules @ 2019-10-27T03:47 (+3)
If welfare is real-valued (specifically from an interval), then Maximin (maximize the welfare of the worst off individual) and theories which assign negative value to the addition of individuals with non-maximal welfare satisfy the properties above.
Furthermore, if along with welfare from a real interval and property 1 in the previous comment (2. Anonymity is not necessary), the following two properties also hold:
3. Extended Continuity, a modest definition of continuity for a theory comparing populations with real-valued welfares which must be satisfied by any order representable by a real-valued function that is continuous with respect to the welfares of the individuals in each population, and
4. Strong Pareto (according to one equivalent definition, under transitivity and the independence of irrelevant alternatives): if two outcomes with the same individuals in their populations differ only by the welfare of one individual, then the outcome in which that individual is better off is strictly better than the other,
then the theory must assign negative value to the addition of individuals with non-maximal welfare (and no positive value to the addition of individuals with maximal welfare) as long as any individual in the initial population has non-maximal welfare. In other words, the theory must be antinatalist in principle, although not necessarily in practice, since all else is rarely equal.
Proof: Suppose $X$ is any population with an individual $i$ with some non-maximal welfare $w_i$, and consider adding an individual $j$ who would also have some non-maximal welfare $w_j$. Denote, for all small enough $\epsilon$ ($\epsilon > 0$),
$X_\epsilon$: the population $X$, but where individual $i$ has welfare $w_i + \epsilon$ (which exists for all sufficiently small $\epsilon > 0$, since $w_i$ is non-maximal, and welfare comes from an interval).
Also denote
$Y$: the population containing only $j$, with non-maximal welfare $w_j$, and
$Y'$: the population containing only $j$, but with some welfare $w' > w_j$ ($w_j$ is non-maximal, so there must be some greater welfare level).
Then
$X_\epsilon > X \cup Y' > X \cup Y$,
where the first inequality follows from the hypothesis that it's better to improve the welfare of an existing individual than to add any others, and the second inequality follows from Strong Pareto, because the only difference is $j$'s welfare.
Then, by Extended Continuity and the first inequality for all (sufficiently small) $\epsilon > 0$, we can take the limit (infimum) of $X_\epsilon$ as $\epsilon \to 0$ to get
$X \geq X \cup Y'$,
so, it's no better to add $j$ even if they would have maximal welfare, and by transitivity (and the independence of irrelevant alternatives) with $X \cup Y' > X \cup Y$,
$X > X \cup Y$,
so it's strictly worse to add $j$ with non-maximal welfare. This completes the proof.