Thoughts on “The Case for Strong Longtermism” (Greaves & MacAskill)
By MichaelA🔸 @ 2021-05-02T18:00 (+30)
I recently read Greaves & MacAskill’s working paper “The case for strong longtermism” for a book/journal club, and noted some reactions to the paper. I’m making this post to share slightly-neatened-up versions of those reactions, and also to provide a space for other people to share their own reactions.[1] I’ll split my thoughts into separate comments, partly so it’s easier for people to reply to specific points.
I thought the paper outlined what (strong) longtermism is claiming - and many potential arguments for or against it - more precisely, thoroughly, and clearly than anything else I’ve read on the subject.[2] As such, it’s now one of the two main papers I’d typically recommend to someone who wanted to learn about longtermism from a philosophical perspective (as opposed to learning about what one’s priorities should be, given longtermism). (The other paper I’d typically recommend is Tarsney’s “The epistemic challenge to longtermism”.)
So if you haven’t read the paper yet, you should probably do that before / instead of reading my thoughts on it.
But despite me thinking the paper was a very useful contribution, my comments will mostly focus on what I see as possible flaws with the paper - some minor, some potentially substantive.
Here’s the paper’s abstract:
Let strong longtermism be the thesis that in a wide class of decision situations, the option that is ex ante best is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best. If this thesis is correct, it suggests that for decision purposes, we can often simply ignore shorter-run effects: the primary determinant of how good an option is (ex ante) is how good its effects on the very long run are. This paper sets out an argument for strong longtermism. We argue that the case for this thesis is quite robust to plausible variations in various normative assumptions, including relating to population ethics, interpersonal aggregation and decision theory. We also suggest that while strong longtermism as defined above is a purely axiological thesis, a corresponding deontic thesis plausibly follows, even by non-consequentialist lights.
[1] There is already a linkpost to this paper on the Forum, but that was posted in a way that meant it never spent time on the front page, so there wasn't a time when people could comment and feel confident that people would see those comments.
There's also the post Possible misconceptions about (strong) longtermism, which I think is good, but which serves a somewhat different role.
[2] Other relevant things I’ve read include, for example, Bostrom’s 2013 paper on existential risk and Ord’s The Precipice. The key difference is not that those works are lower quality but rather that they had a different (and also important!) focus and goal.
Note that I haven’t read Beckstead’s thesis, and I’ve heard that that was (or perhaps is) the best work on this. Also, Tarsney’s “The epistemic challenge to longtermism” tackles a somewhat similar goal similarly well to Greaves and MacAskill.
This post does not necessarily represent the views of any of my employers.
MichaelA @ 2021-05-02T18:01 (+9)
I think the argument in the section “A meta-option: Funding research into longtermist intervention prospects” is important and is sometimes overlooked by non-longtermists.
Here’s a somewhat streamlined version of the section’s key claims:
let us suppose instead, for the sake of argument, that some reasonable credences do not assign higher expected cost-effectiveness to any particular one of the proposed longtermist interventions than they do to the best short-termist interventions, because of the thinness of the case in support of each such intervention. [...]
It does not follow that the credences in question would recommend funding short-termist interventions. That is because Shivani also has what we might call a “second-order” longtermist option: funding research into the cost-effectiveness of various possible attempts to influence the very long run, such as those discussed above. Provided that subsequent philanthropists would take due note of the results of such research, this second-order option could easily have higher expected value (relative to Shivani’s current probabilities) than the best short-termist option, since it could dramatically increase the expected effectiveness of future philanthropy (again, relative to Shivani’s current probabilities).
Finally, here is another option that is somewhat similar in spirit: rather than spending now, Shivani could save her money for a later time. [...] This fund would pay out whenever there comes a time when there is some action one could take that will, in expectation, sufficiently affect the value of the very long-run future.

These two considerations show that the bar for empirical objections to our argument to meet is very high. Not only would it need to be the case that, out of all the (millions) of actions available to an actor like Shivani, for none of them should one have non-negligible credence that one can positively affect the expected value of the long-run future by any non-negligible amount. It would also need to be the case that one should be virtually certain that there will be no such actions in the future, and that there is almost no hope of discovering any such actions through further research. This constellation of conditions seems highly unlikely.
Roughly the same argument has often come to my mind as well as one of the strongest arguments for at least doing longtermist research, even if one felt that all object-level longtermist interventions that have been proposed so far are too speculative. (I’d guess that I didn’t independently come up with the argument, but rather heard a version of it somewhere else.)
One thing I’d add is that one could also do cross-cutting work, such as work on the epistemic challenge to longtermism, rather than just work to better evaluate the cost-effectiveness of specific interventions or classes of interventions.
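To make the structure of that argument concrete, here's a toy expected-value comparison in Python. Every number is made up purely for illustration (none come from the paper); the point is just that even a modest chance of improving the effectiveness of future philanthropy can swamp the value of donating the same budget directly.

```python
# Toy version of the "second-order option" argument. All numbers are
# illustrative assumptions, not figures from Greaves & MacAskill.
research_cost = 10_000               # Shivani's budget, spent on research instead
future_philanthropy = 10_000_000     # later funding the research could redirect
p_find_better = 0.10                 # chance the research finds a better intervention
uplift = 2.0                         # factor by which that intervention is better
value_per_dollar_shorttermist = 1.0  # value units per dollar, best short-termist option

# Expected value of the research option: with probability p, future
# philanthropy becomes (uplift - 1) times more effective per dollar.
ev_research = (p_find_better * (uplift - 1.0)
               * future_philanthropy * value_per_dollar_shorttermist)

# Expected value of donating the same budget to the best short-termist option.
ev_donate_now = research_cost * value_per_dollar_shorttermist

print(ev_research)    # 1,000,000 value units
print(ev_donate_now)  # 10,000 value units
```

On inputs anywhere in this vicinity, the second-order option wins by orders of magnitude, which is why the authors say the bar for empirical objections is so high.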
MichaelStJules @ 2021-05-02T22:53 (+4)
Two possible objections:
- It might be too difficult to ever identify ahead of time a long-termist intervention as robustly good, due to the absence of good feedback, and due to skepticism, cluelessness, or moral uncertainty.
- Cross-cutting work, especially if public, can also benefit others whose goals/values are unaligned with your own, and so do more harm than good. More generally, resources and capital (including knowledge) that you try to build can also end up in the wrong hands eventually, which undermines patient philanthropy, too.
MichaelA @ 2021-05-03T06:47 (+4)
On your specific points:
- Given that you said "robustly" in your first point, it might be that you're adopting something like risk aversion or another alternative to expected value theory. If so, I'd say that:
- That in itself is a questionable assumption, and people could do more work on which decision theory we should use.
- I personally lean more towards just expected value theory (but with this incorporating skeptical priors, adjusting for the optimiser's curse, etc.), at least in situations that don't involve "fanaticism". But I acknowledge uncertainty on that front too.
- If you just meant "It might be too difficult to ever identify ahead of time a long-termist intervention as better in expectation than short-termist interventions", then yeah, I think this might be true (at least if fanaticism in the philosophical sense is bad, which seems to be an open question). But I think we actually have extremely little evidence for this claim.
- We know from Tetlock's work that some people can do better than chance at forecasts over the range of months and years.
- We seem to have basically no evidence about how well people who are actually trying (and especially ones aware of Tetlock's work) do on forecasts over much longer timescales (so we don't have specific evidence that they'll do well or that they'll do badly).
- We have a scrap of evidence suggesting that forecasting accuracy declines as the range increases, but relatively slowly (though this was comparing a few months to about a year).
- So currently it seems to me that our best guess should be that forecasting accuracy continues to decline, but doesn't hit zero, although maybe it asymptotes to it eventually.
- That decline might be sharp enough to offset the increased "scale" of the future, or might not, depending both on various empirical assumptions and on whether we accept or reject "fanaticism" (see Tarsney's epistemic challenge paper). (A toy illustration of this trade-off is sketched below.)
- I agree that basically all interventions have downside risks, and that one notable category of downside risks is the risk that resources/capital/knowledge/whatever end up being used for bad things by other people. (This could be because they have bad goals or because they have good goals but bad plans.) I think this will definitely mean we should deprioritise some otherwise plausible longtermist interventions. I also agree that it might undermine strong longtermism as a whole, but that seems very unlikely to me.
- One reason is that similar points also apply to short-termist interventions.
- Another is that it seems very likely that, if we try, we can make it more likely that the resources end up in the hands of people who will (in expectation) use them well, rather than in the hands of people who will (in expectation) use them poorly.
- We can also model these downside risks.
- We haven't done this in detail yet, as far as I'm aware.
- But we have come up with a bunch of useful concepts and frameworks for that (e.g., information hazards, unilateralist's curse, this post of mine [hopefully that's useful!])
- And there's been some basic analysis and estimation for some relevant things, e.g. in relation to "punting to the future"
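Here's the toy illustration promised above (my own construction, not Tarsney's actual model or anything from the paper): suppose the value at stake at horizon t grows exponentially at rate g per year while forecasting accuracy decays exponentially at rate d per year, so expected influenceable value goes as e^((g − d)t).

```python
import math

# Toy model: expected influenceable value at horizon t is proportional to
# e^{g t} (growing stakes) times e^{-d t} (decaying forecast accuracy).
def expected_impact(g: float, d: float, T: float) -> float:
    """Closed-form integral of e^{(g - d) t} dt from t = 0 to T."""
    k = g - d
    return T if k == 0 else (math.exp(k * T) - 1) / k

# If accuracy decay outpaces growth, the integral converges to ~1/(d - g),
# so horizons beyond a few centuries add almost nothing:
print(expected_impact(g=0.01, d=0.02, T=10_000))  # ~100

# If growth outpaces decay, the far future dominates overwhelmingly:
print(expected_impact(g=0.02, d=0.01, T=10_000))  # ~2.7e45
```

Which regime we're in is exactly the open empirical question, and it interacts with "fanaticism" because in the second regime most of the expected value comes from the far tail.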
(All that said, you did just say "Two possible objections", and I do think pointing out possible objections is a useful part of the cause prioritisation project.)
MichaelA @ 2021-05-03T06:33 (+4)
I basically agree with those two points, but also think they don't really defeat the case for strong longtermism, or at least for e.g. some tens or hundreds or thousands of people doing "second- or third-order" research on these things.
This research could, for example, attempt to:
- flesh out the two points you raised
- quantify how much those points reduce the value of second- or third-order research into longtermism
- consider whether there are any approaches to first- or second- or third-order longtermism-related work that don't suffer those objections, or suffer them less
It's hard to know how to count these things, but, off the top of my head, I'd estimate that:
- something like 50-1000 people have done serious, focused work to identify high-priority longtermist interventions
- fewer have done serious, focused work to evaluate the cost-effectiveness of those interventions, or to assess arguments for and against longtermism (e.g., work like this paper or Tarsney's epistemic challenge paper)
So I think we should see "strong longtermism actually isn't right, e.g. due to the epistemic challenge" as a live hypothesis, but it seems too early to say either that we've concluded that or that we've concluded it's not worth looking into. We're sufficiently uncertain, the potential stakes are sufficiently high, and the questions have been looked into sufficiently little that, whether we lean towards thinking strong longtermism is true or false, it's worth having at least some people doing serious, focused work to "double-check".
MichaelA @ 2021-05-02T18:47 (+8)
[This point is unrelated to the paper's main arguments]
It seems like the paper implicitly assumes that humans are the only moral patients (which I don't think is a sound assumption, or an assumption the authors themselves would actually endorse).
- I think it does make sense for the paper to focus on humans, since it typically makes sense for a given paper to tackle just one thorny issue (and in this instance it's, well, the case for strong longtermism)
- But I think it would’ve been good for the paper to at least briefly acknowledge that this is just a simplifying assumption
- Perhaps just in a footnote
- Otherwise the paper is kind-of implying that the authors really do take it as a given that humans are the only moral patients
- And I think it's good to avoid feeding into that implicit assumption which is already very common among people in general (particularly outside of EA)
MichaelA @ 2021-05-02T18:15 (+8)
The authors imply (or explicitly state?) that any positive rate of pure time discounting would guarantee that strong longtermism is false (or at least that their arguments for strong longtermism wouldn’t work in that case).
- But I think that this is incorrect. Specifically, I think that strong longtermism could hold despite some positive rate of pure time discounting, as long as that rate is sufficiently low.
- How low that rate is depends on the size of other factors
- E.g., the factor by which the value influenceable in the future is larger than that influenceable in the present
- E.g., the rate at which it becomes harder to predict consequences that are further in the future.
- (I’m pretty sure I’ve seen basically this point raised elsewhere, but I can’t remember where.)
- The specific statements from Greaves and MacAskill I think I disagree with are
In particular, [an assumption we make] rules out a positive rate of pure time preference. Such a positive rate would mean that we should intrinsically prefer a good thing to come at an earlier time rather than a later time. If we endorsed this idea, our argument would not get off the ground.
To see this, suppose that future well-being is discounted at a modest but significant positive rate – say, 1% per annum. Consider a simplified model in which the future certainly contains some constant number of people throughout the whole of an infinitely long future, and assume for simplicity that lifetime well-being is simply the time-integral of momentary well-being. Suppose further that average momentary well-being (averaged, that is, across people at a time) is constant in time. Then, with a well-being discount rate of 1% per annum, the amount of discounted well-being even in the whole of the infinite future from 100 years onwards is only about one third of the amount of discounted well-being in the next 100 years. While this calculation concerns total well-being rather than differences one could make to well-being, similar considerations will apply to the latter. [emphasis added]
- I assume they’re right that, given that particular simplified model, “the amount of discounted well-being even in the whole of the infinite future from 100 years onwards is only about one third of the amount of discounted well-being in the next 100 years”
- (I say “I assume” just because I haven’t checked the math myself; a quick numerical check is sketched at the end of this comment.)
- But as far as I can tell, it’s easy to specify alternative (and plausible) models in which strong longtermism would remain true despite some rate of pure time discounting
- E.g., we could simply tweak their simple model to include that well-being per year expands at a rate that’s above 1% (i.e., faster than the discount rate), either indefinitely or just for a large but finite length of time (e.g., 10,000 years)
- This could result from population growth or an increase in well-being per person.
- If we do this, then even after accounting for pure time discounting, the total discounted well-being per year is still growing over that period.
- This can allow the far future to contain far more value than the present, and can thus allow strong longtermism to be true.
- (Of course, there are also other things that that particular simple model doesn’t consider, and which could also affect the case for strong longtermism, such as the predictability of far future impacts. But there are still possible numbers that would mean strong longtermism could hold despite pure time discounting.)
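Here's the quick check mentioned above, under one natural reading of their simplified model (constant well-being per year, continuous discounting at 1% per annum). This is my reading and my arithmetic, not necessarily the authors' exact calculation.

```python
import math

r = 0.01  # pure time discount rate, 1% per annum

# Constant well-being of 1 per year, discounted continuously:
#   next 100 years:       (1 - e^{-100 r}) / r
#   year 100 to infinity:  e^{-100 r} / r
next_100 = (1 - math.exp(-100 * r)) / r  # ~63.2
beyond_100 = math.exp(-100 * r) / r      # ~36.8

print(beyond_100 / next_100)                 # ~0.58
print(beyond_100 / (next_100 + beyond_100))  # ~0.37, roughly a third of the total

# The tweak suggested above: if well-being per year also grows at g = 2%,
# discounted well-being per year goes as e^{(g - r) t}, which *grows*:
g = 0.02
print(math.exp((g - r) * 1_000))  # year-1,000 vs. present discounted well-being: ~2.2e4
```

On this reading, the infinite tail beyond year 100 comes to about 37% of all discounted well-being, which matches “about one third” if the comparison is against the whole (compared against the next 100 years alone, as literally stated, I get about 58%). Either way the next century dominates, which is the authors' point; and the last line shows how sufficiently fast growth can flip that conclusion, as argued above.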
MichaelA @ 2021-05-02T18:05 (+6)
I don’t think the authors ever make it very clear what “wide class of decision situations” means in the definitions of axiological and deontic strong longtermism.
They do give a rough sense of what they mean, and perhaps that suffices for now. But I think it’d be useful to be a bit clearer.
Here’s a relevant thing they do say:
Which decision situations fall within the scope of our claims? In the first instance, we argue that the following is one such case:
The cause-neutral philanthropist. Shivani has $10,000. Her aim is to spend this money in whatever way would most improve the world, and she is open to considering any project as a means to doing this.
The bulk of the paper is devoted to defending the claim that this situation is within the scope of axiological strong longtermism; in the final two sections we generalise this to a wider range of decision situations.
They also say:
We agree that the washing-out hypothesis is true of some decision contexts [which I think would make strong longtermism false in those contexts]: in particular, for many relatively trivial decision contexts, such as a decision about whether or not to click one’s fingers. However, we claim that it is also false of many decision situations, and in particular of Shivani’s. If Shivani is specifically looking for options whose effects do not wash out, we claim she can find some.
But, as noted, these quotes still seem to me to leave the question of what “wide class of decision situations” means to them fairly open.
MichaelA @ 2021-05-02T18:21 (+5)
I think the authors are a bit too quick and confident in dismissing the idea that population ethics could substantially change their conclusions.
They write:
However, the other options for long-run influence we discussed (in section 3.4) are attempts to improve average future well-being, conditional on humanity not going prematurely extinct. While the precise numbers that are relevant will depend on the precise choice of axiology (and we will not explicitly crunch suggested numbers for any other axiologies), any plausible axiology must agree that this is a valuable goal. Therefore, the bulk of our argument is robust to plausible variations in population axiology.
- I think I essentially agree, but (as noted) I think that that’s a bit too quick and confident.
- In particular, I think that, if population ethics leads us to rule out extinction risk reduction as a priority, it then becomes more plausible that empirical considerations would mean strong longtermism is either false or has no unusual implications
- It’s worth noting that the authors’ toy model suggested that influencing a future world government is something like 30 times as valuable as giving to AMF
- So the “margin for error” for non-extinction-related longtermist interventions might be relatively small
- I.e., maybe a short-termist perspective would come out on top if we made different plausible empirical assumptions, or if we found something substantially more cost-effective than AMF
- But of course, this was just with one toy model
- And the case for strong longtermism could look more robust if we made other plausible changes in the assumptions, or if we found more cost-effective interventions for reducing non-extinction-related trajectory changes
- Also, they quickly dismiss the idea that one approach to risk aversion would undermine the case for strong longtermism, with the reason being partly that extinction risk reduction still looks very good under that approach to risk aversion. But if we combined that approach with certain population ethics views, it might be the case that the only plausible longtermist priorities that remain focus on reducing the chance of worse-than-extinction futures.
- I.e., given those assumptions, we might have to rule out a focus on reducing extinction risk and rule out a focus on increasing the quality of futures that would be better than extinction anyway.
- This would be for reasons of population ethics and reasons of risk-aversion, respectively
- This could then be an issue for longtermism if we can’t find any promising interventions that reduce the chance of worse-than-extinction futures
- Though I tentatively think that there are indeed promising interventions in this category
- See also discussion of s-risks
- The relevant passage about risk aversion is this one:
First, we must distinguish between two senses of “risk aversion with respect to welfare”. The standard sense is risk aversion with respect to total welfare itself (that is, vNM value is a concave function of total welfare, w). But risk aversion in that sense tends to increase the importance of avoiding much lower welfare situations (such as near-future extinction), relative to the importance of increasing welfare from an already much higher baseline (as in the case of distributing bed nets in a world in which extinction is very far in the future).
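A tiny numerical illustration of the point in that passage (my own example, using log as an assumed concave vNM value function):

```python
import math

def v(w: float) -> float:
    """An assumed concave vNM value function of total welfare w."""
    return math.log(w)

# The same 10-unit welfare gain is worth far more in vNM terms when it
# moves us away from a very low-welfare outcome (like near-future
# extinction) than when it adds to an already high baseline:
print(v(20) - v(10))      # ~0.69
print(v(1010) - v(1000))  # ~0.01
```

So risk aversion in this standard sense tends to strengthen, not weaken, the case for prioritising extinction risk reduction; the worry I raise above only arises when such an approach is combined with certain population ethics views.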
- On the other hand, I think the authors understate the case for extinction risk reduction being important from a person-affecting view
- They write “Firstly, “person-affecting” approaches to population ethics tend to regard premature extinction as being of modest badness, possibly as neutral, and even (if the view in question also incorporates “the asymmetry”) possibly as a good thing (Thomas, manuscript).”
- But see The person-affecting value of existential risk reduction
- See also discussion in The Precipice of how a moral perspective focused on "the present" might still see existential risk reduction as a priority
- I personally think that this is neither obviously false nor obviously true, so all I'd have suggested to Greaves & MacAskill is adding a brief footnote to acknowledge the possibility
MichaelStJules @ 2021-05-02T22:37 (+4)
I think it's worth clarifying that you mean worse-than-extinction futures according to asymmetric views. S-risks can still happen in a better-than-extinction future according to classical utilitarianism, say, and could still be worth reducing.
There might be other interventions to increase wellbeing according to some person-affecting views, by increasing positive wellbeing without requiring additional people, but do any involve attractor states? Maybe genetically engineering humans to be happier or otherwise optimizing our descendants (possibly non-biological) for happiness? Maybe it's better to do this before space colonization, but I think intelligent moral agents would still be motivated to improve their own wellbeing after colonization, so it might not be so pressing for them, although it could be for moral patients who have too little agency if we send them out on their own.
MichaelA @ 2021-05-03T06:53 (+4)
S-risks can still happen in a better-than-extinction future according to classical utilitarianism, say, and could still be worth reducing.
Yeah, this is true. On this, I've previously written that:
Two mistakes people sometimes make are discussing s-risks as if they’re entirely distinct from existential risks, or discussing s-risks as if they’re a subset of existential risks. In reality:
- There are substantial overlaps between suffering catastrophes and existential catastrophes, because some existential catastrophes would involve or result in suffering on an astronomical scale.
- [...]
- But there could also be suffering catastrophes that aren’t existential catastrophes, because they don’t involve the destruction of (the vast majority of) humanity’s long-term potential.
- This depends on one’s moral theory or values (or the “correct” moral theory or values), because, as noted above, that affects what counts as fulfilling or destroying humanity’s long-term potential.
- For example, the Center on Long-Term Risk notes: “Depending on how you understand the [idea of loss of “potential” in definitions] of [existential risks], there actually may be s-risks which aren’t [existential risks]. This would be true if you think that reaching the full potential of Earth-originating intelligent life could involve suffering on an astronomical scale, i.e., the realisation of an s-risk. Think of a quarter of the universe filled with suffering, and three quarters filled with happiness. Considering such an outcome to be the full potential of humanity seems to require the view that the suffering involved would be outweighed by other, desirable features of reaching this full potential, such as vast amounts of happiness.”
- In contrast, given a sufficiently suffering-focused theory of ethics, anything other than near-complete eradication of suffering might count as an existential catastrophe.
Your second paragraph makes sense to me, and is an interesting point I don't think I'd thought of.
MichaelA @ 2021-05-02T18:43 (+4)
[This point is unrelated to the paper's main arguments]
The authors write “If we create a world government, then the values embodied in the constitution of that government will constrain future decision-makers indefinitely.” But I think this is either incorrect or misleading.
(Whether it's incorrect or misleading depends on how narrowly the term “constitution” was intended to be interpreted.)
- I say this because I think that other things could constrain future decision-makers to a similar or greater extent than the formal written constitution, and because the constitution could later be amended or ignored
- Relevant types of “other things” include the institutional design of the world government, the people who initially have power, and the populations they represent or rule over
- As an analogy, it seems useful to ask how much current US decision-makers are influenced by the US Constitution relative to a wide range of other factors, and the extent to which we can see the creation of the US Constitution as a strong lock-in event
- It doesn't seem that the answers are just "influenced only by the Constitution" and "yes, it was definitely a strong lock-in event"
- More generally, I have a tentative impression that MacAskill is more confident than I am that things like formal, written constitutions would have a major and “locked-in” influence
- This is also based in part on small parts of MacAskill’s post Are we living at the most influential time in history?
- My uncertainty about this also reduces my confidence that influencing the shape of a future world government should be a longtermist priority
- Though I’d definitely like to see more exploration of the topic (since it might be very important), and hope to do some exploration of that at some point myself
- (See also totalitarianism, global governance, and value lock-in.)
Michael_Wiebe @ 2021-05-04T17:25 (+3)
What's your take on this argument:
"Why do we need longtermism? Let's just do the usual approach of evaluating interventions based on their expected marginal utility per dollar. If the best interventions turn out to be aimed at the short-term or long-term, who cares?"
MichaelA @ 2021-05-05T06:55 (+6)
tl;dr:
- I do think what we're doing can be seen as an attempt to approximate the process of evaluating interventions based on everything relevant to their expected marginal utility per dollar.
- But we never model anything close to all of reality's details, so what we focus on, what proxies we use, etc. matters. And it seems usually more productive to "factor out" certain questions like "should we focus on the long-term future or the nearer term?" and "should we focus on humans or nonhumans?", and have dedicated discussions about them, rather than discussing them in detail within each intervention prioritisation decision or cost-effectiveness model.
- "Longtermism" highlights a category of effects that previously received extremely little attention. "Wild animal suffering" is analogous. So the relevant effects would've been especially consistently ignored in models if not for these framings/philosophies/cause areas, even if in theory they always "should have been" part of our models.
[I wrote this all quickly; let me know if I should clarify or elaborate on things]
---
Here's one way to flesh out point 2:
- I think (almost?) no one ever has actually taken the approach of trying to make anything close to a fully fine-grained model of the expected marginal utility per dollar of an intervention.
- I.e., I think all cost-effectiveness models that have ever been made massively simplify some things, ignore other things, use proxies, etc.
- As such, it really matters what "aspects of the world" you're highlighting as worth modelling in detail, what proxies you use, etc.
- E.g., I think GiveWell's evaluations are basically just based on the next few decades or so (as well as things like room for more funding), and don't explicitly consider any time beyond that
- (Maybe this is a bit wrong, since I haven't looked closely at GiveWell models for a while, but I think it's right)
- Meanwhile, prioritisation by longtermists focuses mostly on long-term effects, and does less detailed modelling of and places less emphasis on intrinsic effects in the nearer term
- Effects in the nearer term that have substantial expected impact on the long-term are (ideally) considered more, of course
- Predictably, this leads places like GiveWell to focus more on interventions that seem more likely to be best in the near-term, and places like the EA Long-Term Future Fund to focus more on interventions that seem more likely to be best in the long-term
- So whether we're bought into longtermism seems in theory like it'd make a difference to how we evaluate things and what we end up prioritising, and in practice that also seems to be the case
Here's another way of fleshing out point 2, copied from a comment I made on a doc where someone essentially proposed evaluating all interventions in terms of WELLBYs:
I'm inclined to think that, for longtermist interventions, the metrics that are usually most useful would be things like percentage or percentage point reduction in x-risks or increase in total value of the future, rather than things like WELLBYs.
I think the core reason is that that allows one to compare many longtermist interventions against each other without explicitly accounting for issues like how large the future will be, what population ethics view one holds, how many biological humans vs whole brain emulations vs artificial sentiences vs nonhuman animals ... there'll be, how much moral weight to assign to each of those types of beings, ...
Then those issues can just be taken into account for the rarer task of comparing longtermist interventions to other interventions
[Also, my impression is that WELLBYs are currently conceptualised for humans only, right?]
It might be best to have one main metric for each of the main broad cause areas, and then a very rough sense of the exchange rate between those metrics.
Here's another way to flesh out point 2:
- GiveWell benefits from the existence of many scientific fields like epidemiology. And it really makes sense that those fields exist in their own right, and then their relevant conclusions are "plugged in" to GiveWell models or inform high-level decisions about what to bother making models about and how to structure the models, rather than the fields basically existing only "within GiveWell models".
- Likewise, I think it makes sense for there to be communities of people and bodies of work looking into things like how large the future will be, what population ethics view one should hold, how many biological humans vs whole brain emulations vs artificial sentiences vs nonhuman animals ... there'll be, how much moral weight to assign to each of those types of beings, ...
- And I think it makes sense for that to not just be part of our cost-effectiveness models
All that said:
- there may be many models where it makes sense to explicitly model both the intrinsic value of near-term effects and the intrinsic value of long-term effects (e.g., I think I recall that ALLFED does this)
- and there may be many models where it makes sense to include parameters for these "cross-cutting uncertainties", like what population ethics view one should hold, and see how that affects the conclusions (a minimal sketch of this appears below)
- and ultimately I do think that what we're doing should be seen as an attempt to approximate the process of deciding what to do based on all morally relevant effects, weighted appropriately
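As a minimal sketch of that last kind of model (entirely illustrative; the parameter names and values are my assumptions, not from anyone's actual models):

```python
# Toy cost-effectiveness model that separates near-term and long-term
# value and exposes a cross-cutting uncertainty as an explicit parameter.
def total_value(near_term_value: float,
                long_term_value: float,
                pop_ethics_weight: float) -> float:
    """pop_ethics_weight in [0, 1]: how much weight one's population-
    ethics view gives to effects on merely-possible future people."""
    return near_term_value + pop_ethics_weight * long_term_value

# Sensitivity check: see how the conclusion shifts with the cross-cutting view.
for w in (0.0, 0.5, 1.0):
    print(w, total_value(near_term_value=10, long_term_value=1_000,
                         pop_ethics_weight=w))
```

Sweeping the cross-cutting parameter like this is the model-level analogue of having a dedicated discussion about that uncertainty.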
Michael_Wiebe @ 2022-03-02T23:45 (+3)
So whether we're bought into longtermism seems in theory like it'd make a difference to how we evaluate things and what we end up prioritising, and in practice that also seems to be the case
It seems backwards to first "buy into" longtermism, and then use that to evaluate interventions. You should instead evaluate longtermist interventions, and use that to decide whether to buy into longtermism.
Michael_Wiebe @ 2022-03-02T23:41 (+1)
the metrics that are usually most useful would be things like percentage or percentage point reduction in x-risks or increase in total value of the future, rather than things like WELLBYs. [...]
It might be best to have one main metric for each of the main broad cause areas, and then a very rough sense of the exchange rate between those metrics.
This seems fine; if you're focusing on percentage point reduction in x-risks, you can abstract away from questions about the size of the future, population ethics, etc. But the key is having the exchange rate, which will be a function of those parameters. So you can work on a specific parameter (eg x-risk), which is then plugged back into the exchange rate function.
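A minimal sketch of what such an exchange-rate function could look like (all parameter names and numbers are illustrative assumptions, not estimates):

```python
# Exchange rate from "percentage-point reduction in x-risk" into expected
# WELLBYs, as a function of the cross-cutting parameters.
def wellbys_per_xrisk_point(expected_future_wellbys: float,
                            pop_ethics_weight: float) -> float:
    """Expected WELLBYs gained per percentage point of x-risk reduced.

    expected_future_wellbys: expected total WELLBYs in the future,
        conditional on no existential catastrophe (hugely uncertain).
    pop_ethics_weight: 0..1 weight from one's population-ethics view.
    """
    return 0.01 * expected_future_wellbys * pop_ethics_weight

# Work on x-risk in its own units; convert only when comparing across
# cause areas (toy inputs):
print(wellbys_per_xrisk_point(expected_future_wellbys=1e15,
                              pop_ethics_weight=0.5))  # 5e12 WELLBY-equivalents
```

So, as you say, researchers can refine one parameter at a time (the x-risk estimate, the size of the future, the population-ethics weight) and plug it back into the exchange rate, without every model re-litigating those questions.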
MichaelA @ 2021-05-02T18:58 (+3)
There are a few topics I don't remember the paper directly addressing, and that I'd be interested to hear people's thoughts on (including but not limited to the authors' thoughts). (Though it's also possible that I just forgot the bits of the paper where they were covered.)
- How sensitive is strong longtermism (or the authors' specific arguments) to an increasing number of people acting in line with strong longtermism?
- I haven’t tried thinking it through carefully myself yet
- I only thought of this partway through the paper, when I saw the authors use the rarity of a strong longtermist perspective as an argument in favour of such a perspective
- Specifically, they write “A complementary reason for suspecting that axiological strong longtermism is true concerns the behaviour of other actors. In general, there are diminishing marginal returns to the expenditure of resources towards a given aim, because the lower-hanging fruit are usually picked first. [...] the vast majority of other actors [...] exhibit significant amounts of preference for near-term positive effects over long-term positive effects (Frederick, Loewenstein and O’Donoghue 2002). Shivani should therefore expect that most other actors have been selectively funding projects that deliver high short-run benefits, and leaving unfunded projects that are better by Shivani’s lights, but whose most significant benefits occur over the course of the very long run. This means that Shivani should expect to find axiological strong longtermism true at the current margin — provided (which we have not yet argued) that there were any projects with significantly beneficial ex ante effects on the very long-run future to begin with.”
- I don’t think I remember the paper directly addressing concerns about “fanaticism” or “Pascal’s muggings”
- And that seems to me like one of the best reasons to doubt strong longtermism
- Though I’m currently inclined to act according to longtermism regardless
- (Partly because it seems pretty plausible that strong longtermism does not depend on minuscule probabilities, and partly because it seems pretty plausible to me that fanaticism is actually fine; see discussion in the posts with the fanaticism tag)
- Though the paper did address things like risk aversion, so maybe that effectively covered this issue?
- I can’t remember whether the paper addressed demandingness, and where to draw the line? Maybe one could argue that the authors’ arguments “prove too much” and reach absurdly demanding conclusions?
- Perhaps the authors felt that the existing debate about the demandingness of utilitarianism in general was sufficient, and they didn’t need to tackle that head-on here?
- I guess that seems reasonable to me?
- I think the authors essentially just claim that it seems fairly clear that we should do at least somewhat more than we do now, and that concerns about demandingness don’t counter that point, without addressing precisely how much we should do.
- They write: “Third, one might hold that some prerogatives are absolute: they cannot be overridden, no matter what the consequences. Absolutist views tend not to be very plausible, and have few adherents. (In the case of constraints as opposed to prerogatives, for instance, few people share Kant’s view that even when an innocent life depends on it, one should not tell a lie even to an intending murderer.) However, for our purposes, even if the non-consequentialist is absolutist with respect to some prerogatives, our argument will most likely still go through for most decision situations. This is because, for most decision-makers, the case for strong longtermism does not involve or at least does not rely on the existence of extraordinarily demanding options. Perhaps, no matter how great the stakes, one is never required to give up one’s own life, or that of one’s own child, and perhaps one is never required to reduce oneself from a Western standard of living to an allowance of $2 per day. But, for the vast majority of decision-makers, in the vast majority of decision-situations, these will not be the choices at hand. Instead, the choice will be whether to switch career paths, or live somewhat more frugally, or where to donate a specified amount of non-necessary income, in order to try to positively influence the long-run future. Even if one is sympathetic to absolutism about some sacrifices, it’s very implausible to be absolutist about these comparatively minor sorts of sacrifices (MacAskill, Mogensen, and Ord 2018).”
- But I think the case for strong longtermism might be somewhat more satisfying or convincing if we also knew "where the line was", even if the line is far ahead of where most people presently are
MichaelA @ 2021-05-02T18:04 (+3)
Three specific good things from the paper which I’d like to highlight:
- Their concept of “attractor states” seemed useful to me.
- It’s similar to the existing ideas of existential catastrophe, lock-in (e.g., value lock-in), and trajectory change. But it’s a bit different, and I think having multiple concepts/models/framings is often useful.
- The distinction between axiological strong longtermism and deontic strong longtermism is interesting.
- Axiological strong longtermism is the claim that “In a wide class of decision situations, the option that is ex ante best is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best”
- Deontic strong longtermism is the claim that “In a wide class of decision situations, the option one ought, ex ante, to choose is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best”
- I thought that the section on world government was a handy summary of important ideas on that topic (which is, in my view, under-discussed in EA).
- See also posts tagged Totalitarianism or Global governance.
(These were not necessarily the most important good things about the paper, and were certainly not the only ones.)
MichaelA @ 2021-05-02T18:04 (+2)
Tangent: A quote to elaborate on why I think having multiple concepts/models/framings is often useful.
This quote is from Owen Cotton-Barratt on the 80,000 Hours Podcast, and it basically matches my own views:
And when we build some model like this, we’re focusing attention on some aspects of [the world]. And because attention is a bit of a limited resource, it’s pulling attention away from other things. And so if we say, “Well, we want to analyze everything in terms of these abstract defense layers,” it’s pulling attention away from, “Okay, let’s just understand what we currently guess are the biggest risks,” and going in and analyzing those on a case by case basis.
And I tend to think that the right approach is not to say, “Well, we just want to look for the model which is making the best set of trade offs here”, and is more to say, “We want to step in and out and try different models which have different lenses that they’re bringing on the problem and we’ll try and understand it as much as possible from lots of different angles”. Maybe we take an insight that we got from one lens and we try and work out, “Okay, how do we import that and what does it mean in this other interpretation?”
MichaelA @ 2021-05-02T18:36 (+2)
Part of the authors' argument is that axiological/consequentialist considerations outweigh other kinds of considerations when the stakes are sufficiently high. But I don't think the examples they give are as relevant or as persuasive/intuitive as they think.
(I personally basically agree with their conclusion, as I'm already mostly a utilitarian, but they want to convince people who aren't sold on consequentialism.)
They write
Further, in ‘emergency situation’ situations like wartime, axiological considerations outweigh non-consequentialist considerations (at least for those fighting a just war). Consider, for example, the intuitions that one would have with respect to how one should act if one lived in Britain during World War II. It’s very intuitive that, in that situation, that one is morally obligated to make significant sacrifices for the greater good that would not normally be required, such as by living far more frugally, separating oneself from one’s family, and taking significant risks to one’s own life — and this because the axiological stakes are so high.
- But I don’t think that the key thing driving these intuitions is the axiological stakes being so high
- I think the stakes of an individual’s decision in this context are probably lower than the stakes of donating effectively, given how little an individual’s frugality would influence the war effort
- Yet most people don’t have the intuition that what’s best axiologically is morally obligatory when it comes to donating effectively
- (Though that is probably at least partly because people underestimate the stakes involved in effective charity and/or overestimate the stakes involved in being frugal during WWII)
- I'd guess that the major driver of our intuitions is probably actually humans having an instinct and/or strong norm for acting more cooperatively and shunning defectors when in situations of intergroup conflict
- E.g., we can also see people making strong sacrifices of a sort for their sports teams, where the axiological stakes are clearly fairly low
- Relatedly, I’d guess that people in WWII thought more in terms of a deontic duty to country or comrades, or virtues, or something like that, without explicit attention to consequences or axiology
- I also think it’s just empirically a fact that a decent portion of people didn’t make relevant types of significant sacrifices for the greater good during these periods, and/or didn’t feel that they were morally required to
- E.g., there were black markets and conscientious objectors
- I’m surprised by how confident Greaves and MacAskill seem to be about this example/argument. They use it again in two places:
- “First, one could reject the idea of ‘the good’ altogether (Thomson 2008). On this view, there is simply no such thing as axiology. It’s clear that our argument would not be relevant to those who hold such views. But such views have other problems, such as how to explain the fact that, in cases where there is a huge amount at stake, such as during wartime, ordinary prerogatives get overridden. It seems likely to us that any such explanation will result in similar conclusions to those we have drawn, via similar arguments.”
- “Let’s first consider the non-aggregationist response. Consider the example of someone alive in Britain during WWII, and considering whether or not to fight; or consider someone debating whether to vote in their country’s general election; or someone who is deciding whether to join an important political protest; or someone who is reducing their carbon footprint. In each case, the ex ante benefits to any particular other person are tiny. But in at least some such cases, it’s clear that the person is question is obligated to undertake the relevant action. ”
- Here the authors seem to me to be strangely confident that all readers would share the authors’ views/intuitions in some other cases as well.
- But I think it’s very clearly the case that many people in fact don’t share the intuition/view that those other actions are obligatory.
- Large fractions of people don’t vote in general elections, participate in any political protests, or make efforts to reduce carbon emissions. And I think many of these people would indeed say that they don’t think they're morally required to do these things (rather than thinking that they’re required but are weak-willed, or something like that).
- I wonder if this is partly due to the authors leaning politically left and non-libertarian, and their usual readers leaning the same way, such that they just don’t notice how other types of people would perceive the same situations?
MichaelA @ 2021-05-02T18:36 (+2)
(I think the following point might be important, but I also think I might be wrong and that I haven't explained the point well, so you may want to skip it.)
The authors claim that their case for strong longtermism should apply even for actors that aren't cause-neutral, and they give an example that makes it appear that adopting strong longtermism wouldn’t lead to very strange conclusions for an actor who isn’t cause-neutral. But I think that the example substantially understates the counterintuitiveness of the conclusions one could plausibly reach.
- Essentially, the example is a person who has to decide what deworming program to support, who might realise that the one that is best for the long-term isn’t best for the short-term, and who they argue should thus support the former program (the one that's best for the long-term).
- But both programs are presented as net positive in the short term, and economic growth is presented as net positive for the long term
- However, it’s plausible that deworming’s intended effects are bad for the long-term future. And if so, then strong longtermism might tell this person that, if possible, they should support a deworming program that utterly fails at its stated objective
- And if so, then it seems like the actor ends up acting in a very surprising way, and like they're actually cause-neutral after all (given that they're deviating a lot from their initial surface-level goal of deworming)
- I think that part of the issue is that the authors are implicitly defining “not cause-neutral” as “focused on a particular type of activity”, rather than “focused on a particular type of outcome”
- I think the more intuitive understanding of “not cause-neutral” would involve the actor focusing on the outcome of (e.g.) reducing the burden of intestinal worms
- And if we use that definition, then it seems much less obvious how strong longtermism would be relevant to a non-cause-neutral actor’s decisions
MichaelA @ 2021-05-02T18:06 (+2)
The authors seem to make a good case for strong longtermism. But I don’t think they make a good case that strong longtermism has very different implications to what we’d do anyway (though I do think that that case can be made).
- That is, I don’t recall them directly countering the argument that the flow-through effects of things that are best in the short term might be such that the same actions are also best in the long term, and that strong longtermism would thus simply recommend taking those actions.
- Though personally I do think that fairly good arguments against such claims are available.
- In particular, a “meta-level” argument along the lines of the post Beware surprising and suspicious convergence, as well as object-level arguments for the importance, tractability, and neglectedness of specific long-term-future-focused interventions (e.g., reducing anthropogenic existential risk).
- See also If you value future people, why do you consider near term effects?
- Cluelessness is also relevant, though my independent impression is that that’s not a useful concept (see here and here)
- But I think that some people reading the paper won’t be familiar with those arguments, and also that there’s room for reasonable doubt about those arguments.
- So it seems to me that the paper should’ve devoted a bit more time to that, or at least acknowledged the issue a bit more.