Thoughts on “The Case for Strong Longtermism” (Greaves & MacAskill)

By MichaelA🔸 @ 2021-05-02T18:00 (+30)

I recently read Greaves & MacAskill’s working paper “The case for strong longtermism” for a book/journal club, and noted some reactions to the paper. I’m making this post to share slightly-neatened-up versions of those reactions, and also to provide a space for other people to share their own reactions.[1] I’ll split my thoughts into separate comments, partly so it’s easier for people to reply to specific points.

I thought the paper outlined what (strong) longtermism is claiming - and many potential arguments for or against it - more precisely, thoroughly, and clearly than anything else I’ve read on the subject.[2] As such, it’s now one of the two main papers I’d typically recommend to someone who wanted to learn about longtermism from a philosophical perspective (as opposed to learning about what one’s priorities should be, given longtermism). (The other paper I’d typically recommend is Tarsney’s “The epistemic challenge to longtermism”.)

So if you haven’t read the paper yet, you should probably do that before / instead of reading my thoughts on it.

But despite me thinking the paper was a very useful contribution, my comments will mostly focus on what I see as possible flaws with the paper - some minor, some potentially substantive. 

Here’s the paper’s abstract:

Let strong longtermism be the thesis that in a wide class of decision situations, the option that is ex ante best is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best. If this thesis is correct, it suggests that for decision purposes, we can often simply ignore shorter-run effects: the primary determinant of how good an option is (ex ante) is how good its effects on the very long run are. This paper sets out an argument for strong longtermism. We argue that the case for this thesis is quite robust to plausible variations in various normative assumptions, including relating to population ethics, interpersonal aggregation and decision theory. We also suggest that while strong longtermism as defined above is a purely axiological thesis, a corresponding deontic thesis plausibly follows, even by non-consequentialist lights.

[1] There is already a linkpost to this paper on the Forum, but that was posted in a way that meant it never spent time on the front page, so there wasn't a time when people could comment and feel confident that people would see those comments. 

There's also the post Possible misconceptions about (strong) longtermism, which I think is good, but which serves a somewhat different role.

[2] Other relevant things I’ve read include, for example, Bostrom’s 2013 paper on existential risk and Ord’s The Precipice. The key difference is not that those works are lower quality but rather that they had a different (and also important!) focus and goal. 

Note that I haven’t read Beckstead’s thesis, and I’ve heard that that was (or perhaps is) the best work on this. Also, Tarsney’s “The epistemic challenge to longtermism” tackles a somewhat similar goal similarly well to Greaves and MacAskill.

This post does not necessarily represent the views of any of my employers.


MichaelA @ 2021-05-02T18:01 (+9)

I think the argument in the section “A meta-option: Funding research into longtermist intervention prospects” is important and is sometimes overlooked by non-longtermists.

Here’s a somewhat streamlined version of the section’s key claims: 

let us suppose instead, for the sake of argument, that some reasonable credences do not assign higher expected cost-effectiveness to any particular one of the proposed longtermist interventions than they do to the best short-termist interventions, because of the thinness of the case in support of each such intervention. [...] 

It does not follow that the credences in question would recommend funding short-termist interventions. That is because Shivani also has what we might call a “second-order” longtermist option: funding research into the cost-effectiveness of various possible attempts to influence the very long run, such as those discussed above. Provided that subsequent philanthropists would take due note of the results of such research, this second-order option could easily have higher expected value (relative to Shivani’s current probabilities) than the best short-termist option, since it could dramatically increase the expected effectiveness of future philanthropy (again, relative to Shivani’s current probabilities).

Finally, here is another option that is somewhat similar in spirit: rather than spending now, Shivani could save her money for a later time. [...] This fund would pay out whenever there comes a time when there is some action one could take that will, in expectation, sufficiently affect the value of the very long-run future.

These two considerations show that the bar for empirical objections to our argument to meet is very high. Not only would it need to be the case that, out of all the (millions of) actions available to an actor like Shivani, for none of them should one have non-negligible credence that one can positively affect the expected value of the long-run future by any non-negligible amount. It would also need to be the case that one should be virtually certain that there will be no such actions in the future, and that there is almost no hope of discovering any such actions through further research. This constellation of conditions seems highly unlikely.

Roughly the same argument has often occurred to me as well, as one of the strongest arguments for at least doing longtermist research, even if one felt that all object-level longtermist interventions that have been proposed so far are too speculative. (I’d guess that I didn’t independently come up with the argument, but rather heard a version of it somewhere else.)

One thing I’d add is that one could also do cross-cutting work, such as work on the epistemic challenge to longtermism, rather than just work to better evaluate the cost-effectiveness of specific interventions or classes of interventions.

MichaelStJules @ 2021-05-02T22:53 (+4)

Two possible objections:

  1. It might be too difficult to ever identify ahead of time a long-termist intervention as robustly good, due to the absence of good feedback, and due to skepticism, cluelessness, or moral uncertainty.

  2. Cross-cutting work, especially if public, can also benefit others whose goals/values are unaligned with your own, and so do more harm than good. More generally, resources and capital (including knowledge) that you try to build can also end up in the wrong hands eventually, which undermines patient philanthropy, too.

MichaelA @ 2021-05-03T06:47 (+4)

On your specific points:

  • Given that you said "robustly" in your first point, it might be that you're adopting something like risk aversion or another alternative to expected value theory. If so, I'd say that:
    • That in itself is a questionable assumption, and people could do more work on which decision theory we should use.
    • I personally lean more towards just expected value theory (but with this incorporating skeptical priors, adjusting for the optimiser's curse, etc.), at least in situations that don't involve "fanaticism". But I acknowledge uncertainty on that front too.
  • If you just meant "It might be too difficult to ever identify ahead of time a long-termist intervention as better in expectation than short-termist interventions", then yeah, I think this might be true (at least if fanaticism in the philosophical sense is bad, which seems to be an open question). But I think we actually have extremely little evidence for this claim.
    • We know from Tetlock's work that some people can do better than chance at forecasts over the range of months and years.
    • We seem to have basically no evidence about how well people who are actually trying (and especially ones aware of Tetlock's work) do on forecasts over much longer timescales (so we don't have specific evidence that they'll do well or that they'll do badly).
    • We have a scrap of evidence suggesting that forecasting accuracy declines as the range increases, but relatively slowly (though this was comparing a few months to about a year).
    • So currently it seems to me that our best guess should be that forecasting accuracy continues to decline as the range grows, but doesn't hit zero, although maybe it asymptotes to zero eventually.
    • That decline might be sharp enough to offset the increased "scale" of the future, or might not, depending both on various empirical assumptions and on whether we accept or reject "fanaticism" (see Tarsney's epistemic challenge paper, and the toy model after this list).
  • I agree that basically all interventions have downside risks, and that one notable category of downside risks is the risk that resources/capital/knowledge/whatever end up being used for bad things by other people. (This could be because they have bad goals or because they have good goals but bad plans; see also.) I think this will definitely mean we should deprioritise some otherwise plausible longtermist interventions. I also agree that it might undermine strong longtermism as a whole, but that seems very unlikely to me.
    • One reason is that similar points also apply to short-termist interventions.
    • Another is that it seems very likely that, if we try, we can make it more likely that the resources end up in the hands of people who will (in expectation) use them well, rather than in the hands of people who will (in expectation) use them poorly. 
    • We can also model these downside risks. 
      • We haven't done this in detail yet as far as I'm aware
      • But we have come up with a bunch of useful concepts and frameworks for that (e.g., information hazards, unilateralist's curse, this post of mine [hopefully that's useful!])
      • And there's been some basic analysis and estimation for some relevant things, e.g. in relation to "punting to the future"
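To make the scale-versus-accuracy tradeoff in the bullets above concrete, here is a toy model (my own construction, loosely in the spirit of Tarsney's epistemic challenge paper; λ, S, and B are hypothetical parameters, not estimates from any source). Suppose the probability that an intervention still makes its intended difference at horizon t decays exponentially, a(t) = e^{-λt}, while the value at stake per year is a constant S. Then the expected long-run value is

$$\int_0^\infty S\,e^{-\lambda t}\,dt \;=\; \frac{S}{\lambda},$$

which exceeds a short-termist benefit B exactly when S > λB. So even a steep epistemic decline (large λ) can be offset by sufficiently large stakes S; whether it is in fact offset is the empirical and normative question the bullets above point at.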

(All that said, you did just say "Two possible objections", and I do think pointing out possible objections is a useful part of the cause prioritisation project.)

MichaelA @ 2021-05-03T06:33 (+4)

I basically agree with those two points, but also think they don't really defeat the case for strong longtermism, or at least for e.g. some tens or hundreds or thousands of people doing "second- or third-order" research on these things. 

This research could, for example, attempt to: 

  • flesh out the two points you raised
  • quantify how much those points reduce the value of second- or third-order research into longtermism
  • consider whether there are any approaches to first- or second- or third-order longtermism-related work that don't suffer those objections, or suffer them less

It's hard to know how to count these things, but, off the top of my head, I'd estimate that: 

  • something like 50-1000 people have done serious, focused work to identify high-priority longtermist interventions
  • fewer have done serious, focused work to evaluate the cost-effectiveness of those interventions, or to assess arguments for and against longtermism (e.g., work like this paper or Tarsney's epistemic challenge paper)

So I think we should see "strong longtermism actually isn't right, e.g. due to the epistemic challenge" as a live hypothesis, but it seems too early to say either that we've concluded as much or that we've concluded it's not worth looking into. We're sufficiently uncertain, the potential stakes are sufficiently high, and the questions have been looked into sufficiently little that, whether we lean towards thinking strong longtermism is true or false, it's worth having at least some people doing serious, focused work to "double-check".

MichaelA @ 2021-05-02T18:47 (+8)

[This point is unrelated to the paper's main arguments] 

It seems like the paper implicitly assumes that humans are the only moral patients (which I don't think is a sound assumption, or an assumption the authors themselves would actually endorse).

MichaelA @ 2021-05-02T18:15 (+8)

The authors imply (or explicitly state?) that any positive rate of pure time discounting would guarantee that strong longtermism is false (or at least that their arguments for strong longtermism wouldn’t work in that case). 

In particular, [an assumption we make] rules out a positive rate of pure time preference. Such a positive rate would mean that we should intrinsically prefer a good thing to come at an earlier time rather than a later time. If we endorsed this idea, our argument would not get off the ground. 

To see this, suppose that future well-being is discounted at a modest but significant positive rate – say, 1% per annum. Consider a simplified model in which the future certainly contains some constant number of people throughout the whole of an infinitely long future, and assume for simplicity that lifetime well-being is simply the time-integral of momentary well-being. Suppose further that average momentary well-being (averaged, that is, across people at a time) is constant in time. Then, with a well-being discount rate of 1% per annum, the amount of discounted well-being even in the whole of the infinite future from 100 years onwards is only about one third of the amount of discounted well-being in the next 100 years. While this calculation concerns total well-being rather than differences one could make to well-being, similar considerations will apply to the latter. [emphasis added]
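As a check on this sort of calculation, here is a minimal worked version (a sketch, assuming continuous exponential discounting of a constant well-being stream normalised to 1 per year; the paper's own conventions may differ):

$$\int_{100}^{\infty} e^{-0.01t}\,dt = 100\,e^{-1} \approx 36.8, \qquad \int_{0}^{100} e^{-0.01t}\,dt = 100\,(1 - e^{-1}) \approx 63.2.$$

On this version, roughly e^{-1} ≈ 37% of all discounted well-being lies beyond year 100. The exact fraction depends on the discounting convention, but either way the bulk of discounted value falls within the first century, which is the passage's point: even a modest positive rate of pure time preference blocks the argument.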

MichaelA @ 2021-05-02T18:05 (+6)

I don’t think the authors ever make it very clear what “wide class of decision situations” means in the definitions of axiological and deontic strong longtermism.

They do give a rough sense of what they mean, and perhaps that suffices for now. But I think it’d be useful to be a bit clearer.

Here’s a relevant thing they do say:

Which decision situations fall within the scope of our claims? In the first instance, we argue that the following is one such case:

The cause-neutral philanthropist. Shivani has $10,000. Her aim is to spend this money in whatever way would most improve the world, and she is open to considering any project as a means to doing this.

The bulk of the paper is devoted to defending the claim that this situation is within the scope of axiological strong longtermism; in the final two sections we generalise this to a wider range of decision situations.

They also say:

We agree that the washing-out hypothesis is true of some decision contexts [which I think would make strong longtermism false in those contexts]: in particular, for many relatively trivial decision contexts, such as a decision about whether or not to click one’s fingers. However, we claim that it is also false of many decision situations, and in particular of Shivani’s. If Shivani is specifically looking for options whose effects do not wash out, we claim she can find some. 

But, as noted, these quotes still seem to me to leave the question of what “wide class of decision situations” means to them fairly open.

MichaelA @ 2021-05-02T18:21 (+5)

I think the authors are a bit too quick and confident in dismissing the idea that population ethics could substantially change their conclusions.

They write:

However, the other options for long-run influence we discussed (in section 3.4) are attempts to improve average future well-being, conditional on humanity not going prematurely extinct. While the precise numbers that are relevant will depend on the precise choice of axiology (and we will not explicitly crunch suggested numbers for any other axiologies), any plausible axiology must agree that this is a valuable goal. Therefore, the bulk of our argument is robust to plausible variations in population axiology.

First, we must distinguish between two senses of “risk aversion with respect to welfare”. The standard sense is risk aversion with respect to total welfare itself (that is, vNM value is a concave function of total welfare, w). But risk aversion in that sense tends to increase the importance of avoiding much lower welfare situations (such as near-future extinction), relative to the importance of increasing welfare from an already much higher baseline (as in the case of distributing bed nets in a world in which extinction is very far in the future).

MichaelStJules @ 2021-05-02T22:37 (+4)

I think it's worth clarifying that you mean worse-than-extinction futures according to asymmetric views. S-risks can still happen in a better-than-extinction future according to classical utilitarianism, say, and could still be worth reducing.

There might be other interventions to increase wellbeing according to some person-affecting views, by increasing positive wellbeing without requiring additional people, but do any involve attractor states? Maybe genetically engineering humans to be happier or otherwise optimizing our descendants (possibly non-biological) for happiness? Maybe it's better to do this before space colonization, but I think intelligent moral agents would still be motivated to improve their own wellbeing after colonization, so it might not be so pressing for them, although it could be for moral patients who have too little agency if we send them out on their own.

MichaelA @ 2021-05-03T06:53 (+4)

S-risks can still happen in a better-than-extinction future according to classical utilitarianism, say, and could still be worth reducing.

Yeah, this is true. On this, I've previously written that:

Two mistakes people sometimes make are discussing s-risks as if they’re entirely distinct from existential risks, or discussing s-risks as if they’re a subset of existential risks. In reality:

  1. There are substantial overlaps between suffering catastrophes and existential catastrophes, because some existential catastrophes would involve or result in suffering on an astronomical scale.
    • [...]
  2. But there could also be suffering catastrophes that aren’t existential catastrophes, because they don’t involve the destruction of (the vast majority of) humanity’s long-term potential.
    • This depends on one’s moral theory or values (or the “correct” moral theory or values), because, as noted above, that affects what counts as fulfilling or destroying humanity’s long-term potential.
    • For example, the Center on Long-Term Risk notes: “Depending on how you understand the [idea of loss of “potential” in definitions] of [existential risks], there actually may be s-risks which aren’t [existential risks]. This would be true if you think that reaching the full potential of Earth-originating intelligent life could involve suffering on an astronomical scale, i.e., the realisation of an s-risk. Think of a quarter of the universe filled with suffering, and three quarters filled with happiness. Considering such an outcome to be the full potential of humanity seems to require the view that the suffering involved would be outweighed by other, desirable features of reaching this full potential, such as vast amounts of happiness.”
    • In contrast, given a sufficiently suffering-focused theory of ethics, anything other than near-complete eradication of suffering might count as an existential catastrophe.

Your second paragraph makes sense to me, and is an interesting point I don't think I'd thought of.

MichaelA @ 2021-05-02T18:43 (+4)

[This point is unrelated to the paper's main arguments] 

The authors write “If we create a world government, then the values embodied in the constitution of that government will constrain future decision-makers indefinitely.” But I think this is either incorrect or misleading. 

(Whether it's incorrect or misleading depends on how narrowly the term “constitution” was intended to be interpreted.) 

Michael_Wiebe @ 2021-05-04T17:25 (+3)

What's your take on this argument:

"Why do we need longtermism? Let's just do the usual approach of evaluating interventions based on their expected marginal utility per dollar. If the best interventions turn out to be aimed at the short-term or long-term, who cares?"

MichaelA @ 2021-05-05T06:55 (+6)

tl;dr:

  1. I do think what we're doing can be seen as an attempt to approximate the process of evaluating interventions based on everything relevant to their expected marginal utility per dollar.
  2. But we never model anything close to all of reality's details, so what we focus on, what proxies we use, etc. matters. And it seems usually more productive to "factor out" certain questions like "should we focus on the long-term future or the nearer term?" and "should we focus on humans or nonhumans?", and have dedicated discussions about them, rather than discussing them in detail within each intervention prioritisation decision or cost-effectiveness model.
  3. "Longtermism" highlights a category of effects that previously received extremely little attention. "Wild animal suffering" is analogous. So the relevant effects would've been especially consistently ignored in models if not for these framings/philosophies/cause areas, even if in theory they always "should have been" part of our models.

[I wrote this all quickly; let me know if I should clarify or elaborate on things]

---

Here's one way to flesh out point 2:

  • I think (almost?) no one ever has actually taken the approach of trying to make anything close to a fully fine-grained model of the expected marginal utility per dollar of an intervention.
    • I.e., I think all cost-effectiveness models that have ever been made massively simplify some things, ignore other things, use proxies, etc.
    • As such, it really matters what "aspects of the world" you're highlighting as worth modelling in detail, what proxies you use, etc.
    • E.g., I think GiveWell's evaluations are basically just based on the next few decades or so (as well as things like room for more funding), and don't explicitly consider any time beyond that
      • (Maybe this is a bit wrong, since I haven't looked closely at GiveWell models for a while, but I think it's right)
    • Meanwhile, prioritisation by longtermists focuses mostly on long-term effects, and does less detailed modelling of, and places less emphasis on, intrinsic effects in the nearer term
      • Effects in the nearer term that have substantial expected impact on the long-term are (ideally) considered more, of course
    • Predictably, this leads places like GiveWell to focus more on interventions that seem more likely to be best in the near-term, and places like the EA Long-Term Future Fund to focus more on interventions that seem more likely to be best in the long-term
    • So whether we're bought into longtermism seems in theory like it'd make a difference to how we evaluate things and what we end up prioritising, and in practice that also seems to be the case

Here's another way of fleshing out point 2, copied from a comment I made on a doc where someone essentially proposed evaluating all interventions in terms of WELLBYs:

I'm inclined to think that, for longtermist interventions, the metrics that are usually most useful would be things like percentage or percentage point reduction in x-risks or increase in total value of the future, rather than things like WELLBYs.

I think the core reason is that that allows one to compare many longtermist interventions against each other without explicitly accounting for issues like how large the future will be, what population ethics view one holds, how many biological humans vs whole brain emulations vs artificial sentiences vs nonhuman animals ... there'll be, how much moral weight to assign to each of those types of beings, ...
Then those issues can just be taken into account for the rarer task of comparing longtermist interventions to other interventions

[Also, my impression is that WELLBYs are currently conceptualised for humans only, right?]

It might be best to have one main metric for each of the main broad cause areas, and then a very rough sense of the exchange rate between those metrics.

Here's another way to flesh out point 2:

  • GiveWell benefits from the existence of many scientific fields like epidemiology. And it really makes sense that those fields exist in their own right, and then their relevant conclusions are "plugged in" to GiveWell models or inform high-level decisions about what to bother making models about and how to structure the models, rather than the fields basically existing only "within GiveWell models".
  • Likewise, I think it makes sense for there to be communities of people and bodies of work looking into things like how large the future will be, what population ethics view one should hold, how many biological humans vs whole brain emulations vs artificial sentiences vs nonhuman animals ... there'll be, how much moral weight to assign to each of those types of beings, ...
    • And I think it makes sense for that to not just be part of our cost-effectiveness models

All that said:

  • there may be many models where it makes sense to explicitly model both the intrinsic value of near-term effects and the intrinsic value of long-term effects (e.g., I think I recall that ALLFED does this; see the toy sketch after this list)
  • and there may be many models where it makes sense to include parameters for these "cross-cutting uncertainties", like what population ethics view one should hold, and see how that affects the conclusions
  • and ultimately I do think that what we're doing should be seen as an attempt to approximate the process of deciding what to do based on all morally relevant effects, weighted appropriately
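To illustrate the first bullet above, here is a toy cost-effectiveness sketch in Python (all function names and numbers are hypothetical, chosen only to show the structure of a model with both near-term and long-term terms; this is not anyone's actual model):

    # Toy model scoring an intervention on both intrinsic near-term value
    # and long-term value. All parameter values are made up for illustration.

    def expected_value(near_term_wellbys, xrisk_reduction_pp, value_of_future):
        """near_term_wellbys: intrinsic near-term benefit (WELLBYs).
        xrisk_reduction_pp: percentage-point reduction in existential risk.
        value_of_future: expected value of the future (WELLBYs), bundling
        future size, population ethics, and moral weights."""
        long_term_wellbys = (xrisk_reduction_pp / 100) * value_of_future
        return near_term_wellbys + long_term_wellbys

    # A near-term intervention: solid direct benefit, negligible long-term effect.
    print(expected_value(1e4, 0.0, 1e15))    # 10000.0

    # A longtermist intervention: tiny direct benefit, tiny probability shift
    # on a vast (assumed) future.
    print(expected_value(10.0, 1e-6, 1e15))  # 10000010.0

The structural point is that the ranking hinges almost entirely on the contested parameters (value_of_future especially), which is why it seems more productive to debate those in dedicated work than inside every individual model.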

Michael_Wiebe @ 2022-03-02T23:45 (+3)

So whether we're bought into longtermism seems in theory like it'd make a difference to how we evaluate things and what we end up prioritising, and in practice that also seems to be the case

It seems backwards to first "buy into" longtermism, and then use that to evaluate interventions. You should instead evaluate longtermist interventions, and use that to decide whether to buy into longtermism.

Michael_Wiebe @ 2022-03-02T23:41 (+1)

the metrics that are usually most useful would be things like percentage or percentage point reduction in x-risks or increase in total value of the future, rather than things like WELLBYs. [...]
It might be best to have one main metric for each of the main broad cause areas, and then a very rough sense of the exchange rate between those metrics.

This seems fine; if you're focusing on percentage point reduction in x-risks, you can abstract away from questions about the size of the future, population ethics, etc. But the key is having the exchange rate, which will be a function of those parameters. So you can work on a specific parameter (eg x-risk), which is then plugged back into the exchange rate function.
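In symbols (a sketch; the notation is mine, not from the paper or the comment above): if an intervention buys a reduction Δp in existential risk, and V is the expected value of the future conditional on survival (the function of future size, population ethics, moral weights, and so on), then its value in common units is

$$\text{value} \;=\; \Delta p \cdot V,$$

so one can work on estimating Δp in isolation, and revisit V, the exchange rate, only when comparing across cause areas.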

MichaelA @ 2021-05-02T18:58 (+3)

There are a few topics I don't remember the paper directly addressing, and that I'd be interested to hear people's thoughts on (including but not limited to the authors' thoughts). (Though it's also possible that I just forgot the bits of the paper where they were covered.)

  1. How sensitive is strong longtermism (or the authors' specific arguments) to an increasing number of people acting in line with strong longtermism?
    • I haven’t tried thinking it through carefully myself yet
    • I only thought of this partway through the paper, when I saw the authors use the rarity of a strong longtermist perspective as an argument in favour of such a perspective
      • Specifically, they write “A complementary reason for suspecting that axiological strong longtermism is true concerns the behaviour of other actors. In general, there are diminishing marginal returns to the expenditure of resources towards a given aim, because the lower-hanging fruit are usually picked first. [...] the vast majority of other actors [...] exhibit significant amounts of preference for near-term positive effects over long-term positive effects (Frederick, Loewenstein and O’Donoghue 2002). Shivani should therefore expect that most other actors have been selectively funding projects that deliver high short-run benefits, and leaving unfunded projects that are better by Shivani’s lights, but whose most significant benefits occur over the course of the very long run. This means that Shivani should expect to find axiological strong longtermism true at the current margin — provided (which we have not yet argued) that there were any projects with significantly beneficial ex ante effects on the very long-run future to begin with.”
  2. I don’t think I remember the paper directly addressing concerns about “fanaticism” or “Pascal’s muggings”
    • And that seems to me like one of the best reasons to doubt strong longtermism
      • Though I’m currently inclined to act according to longtermism regardless
        • (Partly because it seems pretty plausible that strong longtermism does not depend on minuscule probabilities, and partly because it seems pretty plausible to me that fanaticism is actually fine; see discussion in the posts with the fanaticism tag)
    • Though the paper did address things like risk aversion, so maybe that effectively covered this issue?
  3. I can’t remember whether the paper addressed demandingness, and where to draw the line. Maybe one could argue that the authors’ arguments “prove too much” and reach absurdly demanding conclusions?
    • Perhaps the authors felt that the existing debate about the demandingness of utilitarianism in general was sufficient, and they didn’t need to tackle that head-on here?
      • I guess that seems reasonable to me?
    • I think the authors essentially just claim that it seems fairly clear that we should do at least somewhat more than we do now, and that concerns about demandingness don’t counter that point, without addressing precisely how much we should do. 
      • They write: “Third, one might hold that some prerogatives are absolute: they cannot be overridden, no matter what the consequences. Absolutist views tend not to be very plausible, and have few adherents. (In the case of constraints as opposed to prerogatives, for instance, few people share Kant’s view that even when an innocent life depends on it, one should not tell a lie even to an intending murderer.) However, for our purposes, even if the non-consequentialist is absolutist with respect to some prerogatives, our argument will most likely still go through for most decision situations. This is because, for most decision-makers, the case for strong longtermism does not involve or at least does not rely on the existence of extraordinarily demanding options. Perhaps, no matter how great the stakes, one is never required to give up one’s own life, or that of one’s own child, and perhaps one is never required to reduce oneself from a Western standard of living to an allowance of $2 per day. But, for the vast majority of decision-makers, in the vast majority of decision-situations, these will not be the choices at hand. Instead, the choice will be whether to switch career paths, or live somewhat more frugally, or where to donate a specified amount of non-necessary income, in order to try to positively influence the long-run future. Even if one is sympathetic to absolutism about some sacrifices, it’s very implausible to be absolutist about these comparatively minor sorts of sacrifices (MacAskill, Mogensen, and Ord 2018).”
      • But I think the case for strong longtermism might be somewhat more satisfying or convincing if we also knew "where the line was", even if the line is far ahead of where most people presently are

MichaelA @ 2021-05-02T18:04 (+3)

Three specific good things from the paper which I’d like to highlight:

  1. Their concept of “attractor states” seemed useful to me. 
  2. The distinction between axiological strong longtermism and deontic strong longtermism is interesting.
    • Axiological strong longtermism is the claim that “In a wide class of decision situations, the option that is ex ante best is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best” 
    • Deontic strong longtermism is the claim that “In a wide class of decision situations, the option one ought, ex ante, to choose is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best”
  3. I thought that the section on world government was a handy summary of important ideas on that topic (which is, in my view, under-discussed in EA).

(These were not necessarily the most important good things about the paper, and were certainly not the only ones.)

MichaelA @ 2021-05-02T18:04 (+2)

Tangent: A quote to elaborate on why I think having multiple concepts/models/framings is often useful.

This quote is from Owen Cotton-Barratt on the 80,000 Hours Podcast, and it basically matches my own views:

And when we build some model like this, we’re focusing attention on some aspects of [the world]. And because attention is a bit of a limited resource, it’s pulling attention away from other things. And so if we say, “Well, we want to analyze everything in terms of these abstract defense layers,” it’s pulling attention away from, “Okay, let’s just understand what we currently guess are the biggest risks,” and going in and analyzing those on a case by case basis.

And I tend to think that the right approach is not to say, “Well, we just want to look for the model which is making the best set of trade offs here”, and is more to say, “We want to step in and out and try different models which have different lenses that they’re bringing on the problem and we’ll try and understand it as much as possible from lots of different angles”. Maybe we take an insight that we got from one lens and we try and work out, “Okay, how do we import that and what does it mean in this other interpretation?”

MichaelA @ 2021-05-02T18:36 (+2)

Part of the authors' argument is that axiological/consequentialist considerations outweigh other kinds of considerations when the stakes are sufficiently high. But I don't think the examples they give are as relevant or as persuasive/intuitive as they think. 

(I personally basically agree with their conclusion, as I'm already mostly a utilitarian, but they want to convince people who aren't sold on consequentialism.)

They write 

Further, in ‘emergency situations’ like wartime, axiological considerations outweigh non-consequentialist considerations (at least for those fighting a just war). Consider, for example, the intuitions that one would have with respect to how one should act if one lived in Britain during World War II. It’s very intuitive that, in that situation, one is morally obligated to make significant sacrifices for the greater good that would not normally be required, such as by living far more frugally, separating oneself from one’s family, and taking significant risks to one’s own life — and this because the axiological stakes are so high.

MichaelA @ 2021-05-02T18:36 (+2)

(I think the following point might be important, but I also think I might be wrong and that I haven't explained the point well, so you may want to skip it.)

The authors claim that their case for strong longtermism should apply even for actors that aren't cause-neutral, and they give an example that makes it appear that adopting strong longtermism wouldn’t lead to very strange conclusions for an actor who isn’t cause-neutral. But I think that the example substantially understates the counterintuitiveness of the conclusions one could plausibly reach. 

MichaelA @ 2021-05-02T18:06 (+2)

The authors seem to make a good case for strong longtermism. But I don’t think they make a good case that strong longtermism has very different implications to what we’d do anyway (though I do think that that case can be made).