Thoughts on "A case against strong longtermism" (Masrani)
By MichaelA🔸 @ 2021-05-03T14:22 (+39)
I recently read Vaden Masrani’s post “A case against strong longtermism” for a book/journal club, and noted some reactions to the post as I went. I’m making this post to share slightly-neatened-up versions of those reactions.[1] I’ll split my specific reactions into separate comments, partly so it’s easier for people to reply to specific points.
Masrani’s post centres on critiquing The Case for Strong Longtermism, a paper by Greaves & MacAskill. I recommend reading that paper before reading this post or Masrani’s post. I think the paper is basically very good and very useful, though also flawed in a few ways; I wrote my thoughts on the paper here.
My overall thoughts on Masrani’s post are as follows:
- I think that criticism is very often valuable, and especially so for ideas that are promoted by prominent people and are influencing important decisions. Masrani’s post represents a critique of such an idea, so it’s in a category of things I generally appreciate and think we should generally be happy people are producing.
- However, my independent impression is that the critique was quite weak and that it involved multiple misunderstandings: of the Greaves & MacAskill paper in particular, of longtermist ideas and efforts more generally, and of some other philosophical ideas.
- Relatedly, my independent impression is that Masrani’s post is probably more likely to cause confusions or misconceptions than it is to usefully advance people’s thinking and discussions.
- All that said, I do think that there are various plausible arguments against longtermism that warrant further discussion and research.
- Some are discussed in Greaves and MacAskill’s paper.
- One of the best such arguments (in my view) is discussed in Tarsney’s great paper “The epistemic challenge to longtermism”.
- See also Criticism of effective altruist causes and What are the leading critiques of "longtermism" and related concepts.
(Given these views, I was also pretty tempted to call this A Case Against “A Case Against Longtermism”, but I didn’t want to set off an infinitely recursive loop of increasingly long and snarky titles!)
(Masrani also engaged in the comments section of their original post, wrote some followup posts, and has discussed similar topics on a podcast they host with Ben Chugg. I read most of the comments section on the original post and listened to a 3 hour interview they had with Fin and Luca of the podcast Hear This Idea, and continued to be unimpressed by the critiques provided. But I haven’t read/listened to the other things.)
[1] This seemed better than just making all these comments on Masrani’s post, since I had a lot of comments and that post is from several months ago.
This post does not necessarily represent the views of any of my employers.
MichaelA @ 2021-05-03T14:44 (+32)
Masrani writes:
What is particularly striking is the authors seem utterly oblivious to the fact that something might be wrong with the framework itself. In Section 4 they anticipate objections to longtermism, but at no point do they question whether using expected value calculus might itself be the cause of the repugnant conclusions they arrive at. Instead, they obediently follow the calculus and endorse the conclusions.
- This is simply false. Greaves and MacAskill actually spend a decent amount of space discussing various alternatives to standard expected value reasoning.
- And Greaves also wrote a whole (separate) paper on the related matter of cluelessness.
- In any case, “utterly oblivious” seems to me to be both a rude phrasing and a strong claim.
Davidmanheim @ 2021-05-06T06:40 (+2)
This is true, but seems to be responding to tone rather than the substance of the argument. And given that (I think) we're interested in the substantive question rather than the social legitimacy of the criticism, I think that it is more useful to engage with the strongest version of the argument.
The actual issue that is relevant here, which isn't well identified, is that naive expected value fails in a number of ways. Some of these are legitimate criticisms, albeit not well formulated in the paper. Specifically I think that there are valuable points about secondary uncertainty, value of information, and similar issues that are ignored by Greaves and MacAskill in their sketch of the ideal decision-theoretic reasoning.
MichaelA @ 2021-05-06T07:37 (+12)
seems to be responding to tone rather than the substance of the argument.
That's roughly true for me saying "In any case, “utterly oblivious” seems to me to be both a rude phrasing and a strong claim."
But I don't think it's true for my comment as a whole. Masrani makes specific claims here, and the claims are inaccurate.
And given that (I think) we're interested in the substantive question rather than the social legitimacy of the criticism, I think that it is more useful to engage with the strongest version of the argument.
I think steelmanning is often really useful. But I think there's also value in noticing when a person/post/whatever is just actually incorrect about something, and in trying to understand what arguments they're actually making. Some reasons:
- Something like epistemic spot-checking / combatting something like Gell-Mann Amnesia
- Making it less likely that other people walk away remembering the incorrect claim as actually true
- Prioritising which arguments/criticisms to bother engaging with
- We obviously shouldn't choose arguments at random from the entire pool of available arguments in the world, or the entire pool of available arguments on a given topic. It's probably often more efficient to engage with arguments that are already quite strong, rather than steelmanning less strong arguments that we happen to have stumbled upon
So here I'm actually not solely interested in the substantive questions raised by Masrani's post, but also in countering misconceptions that I think the post may have generated, and giving indications of why I think people might find it more useful to engage with other criticisms of longtermism instead (e.g., the ones linked to in the body of my post itself).
One final thing worth noting is that this was a quickly produced post adapting notes I'd made anyway. I do think that if I'd spent quite a while on this, it'd be fair to say "Why didn't you just talk about the best arguments against longtermism, and the points missing from Greaves & MacAskill, instead?"
I think that there are valuable points about secondary uncertainty, value of information, and similar issues that are ignored by Greaves and MacAskill in their sketch of the ideal decision-theoretic reasoning.
Yeah, I imagine there are many things in this vicinity that Greaves & MacAskill didn't cover yet that are relevant to the case for strong longtermism or how to implement it in practice, and I'd be happy to see (a) recommendations of sources where those things are discussed well, and/or (b) other people generate new useful discussions of those things. Ideally applied to longtermism specifically, but general discussions - or general discussions plus a quick explanation of the relevance - seem useful too.
I definitely don't mean to imply with this post that I see strong longtermism as clearly true; I'm just quickly countering a specific set of misconceptions and objections.
Davidmanheim @ 2021-05-06T11:00 (+2)
As I mentioned in my other reply, I don't see as much value in responding to weak-man claims here on the forum, but agree that they can be useful more generally.
Regarding "secondary uncertainty, value of information, and similar issues," I'd be happy to point to sources that are relevant on these topics generally, especially Morgan and Henrion's "Uncertainty," which is a general introduction to some of these ideas, and my RAND dissertation chairs work on policy making under uncertainty, focused on US DOD decisions, but applicable more widely. Unfortunately, I haven't put together my ideas on this, and don't know that anyone at GPI has done so either - but I do know that they have engaged with several people at RAND who do this type of work, so it's on their agenda.
RyanCarey @ 2021-05-03T15:57 (+23)
So you've shown that Masrani has made a bunch of faulty arguments. But do you think his argument fails overall? i.e. can you refute its central point?
MichaelA @ 2021-05-03T16:39 (+16)
tl;dr: Yes, I think so, for both questions. I think my comments already did this, but that I didn't make it obvious whether and where this happened, so your question is a useful one.
I like that essay, and also this related Slate Star Codex essay. I also think this might be a generically useful question to ask in response to a post like the one I've made. (Though I also think there's value in epistemic spot checks, and that if you know there are a large number of faulty arguments in X but not whether those were the central arguments in X, that's still some evidence that the central arguments are faulty too.)
Your comment makes me realise that probably a better structure for this post would've been to first summarise my understanding of the central point Masrani was making and Masrani's key arguments for that, and then say why I disagree with parts of those key arguments, and then maybe also add other disagreements but flag that they're less central.
The main reason my post is structured as it is is basically just that I tried to relatively quickly adapt notes that I made while reading the post. But here's a quick attempt at something like that (from re-skimming Masrani's post now, having originally read it over a month ago)...
---
Masrani writes:
In Section 2 the authors helpfully state the two assumptions which longtermism needs to get off the ground:
- In expectation, the future is vast in size. In particular they assume the future will contain at least 1 quadrillion (10^15) beings in expectation.
- We should not be biased towards the present.
I think both of these assumptions are false, and in fact:
- In expectation, the future is undefined.
- We should absolutely be biased towards the present.
We’ll discuss both in turn after an introduction to expected values.
As noted in some of my comments:
- The "undefined" bit involves talking a lot about infinities, but neither Greaves and MacAskill's paper nor standard cases for longtermism rely on infinities
- The "undefined" bit also "proves too much"; it basically says we can't predict anything ever, but actually empirical evidence and common sense both strongly indicate that we can make many predictions with better-than-chance accuracy
- See this comment
- Greaves and MacAskill say we shouldn't have a pure rate of time preference. They don't say we should engage in no time discounting at all. And Masrani's arguments for a bias towards the present are unrelated to the question of whether we should have a pure rate of time preference, so they don't actually counter the paper's claims.
- Masrani's post also significantly misunderstands what strong longtermism and the paper actually imply (e.g., thinking that they definitely entail a focus on existential risk), which is a problem when attempting to argue against strong longtermism and the paper.
- I'm not sure whether this last bit should be considered part of refuting the main point, but it seems relevant?
---
(I should note again that I read the post over a month ago and just dipped in quickly to skim for a central point to refute, so it's possible there were other central points I missed.)
I also expect that the post mentioned various other things that are related to better arguments against longtermism, e.g. the epistemic challenge to longtermism that Tarsney's paper discusses. But I'm pretty sure I remember the post not adding to what had already been discussed on those points. (A post that just summarised those other arguments could be useful, but the post didn't set out to be that.)
vadmas @ 2021-05-04T15:58 (+2)
Hey! Can't respond to most of your points now unfortunately, but just a few quick things :)
(I'm working on a followup piece at the moment and will try to respond to some of your criticisms there)
My central point is the 'inconsequential in the grand scheme of things' one you highlight here. This is why I end the essay with this quote:
> If among our aims and ends there is anything conceived in terms of human happiness and misery, then we are bound to judge our actions in terms not only of possible contributions to the happiness of man in a distant future, but also of their more immediate effects. We must not argue that the misery of one generation may be considered as a mere means to the end of securing the lasting happiness of some later generation or generations; and this argument is improved neither by a high degree of promised happiness nor by a large number of generations profiting by it. All generations are transient. All have an equal right to be considered, but our immediate duties are undoubtedly to the present generation and to the next. Besides, we should never attempt to balance anybody’s misery against somebody else’s happiness.
The "undefined" bit also "proves too much"; it basically says we can't predict anything ever, but actually empirical evidence and common sense both strongly indicate that we can make many predictions with better-than-chance accuracy
Just wanted to flag that I responded to the 'proving too much' concern here: Proving Too Much
MichaelA @ 2021-05-05T07:14 (+18)
Hey Vaden!
Yeah, I didn't read your other posts (including Proving Too Much), so it's possible they counter some of my points, clarify your argument more, or the like.
(The reason I didn't read them is that I read your first post, read most comments on it, listened to the 3 hour podcast, and have read a bunch of other stuff on related topics (e.g., Greaves & MacAskill's paper), so it seems relatively unlikely that reading your other posts would change my mind.)
---
Hmm, something that strikes me about that quote is that it seems to really be about deontology vs consequentialism - and/or maybe placing less moral weight on future generations. It doesn't seem to be about reasons why strong longtermism would have bad consequences or reasons why longtermist arguments have been unsound (given consequentialism). Specifically, that quote's arguments for its conclusion seem to just be that we have a stronger "duty" to the present, and that "we should never attempt to balance anybody’s misery against somebody else’s happiness."
(Of course, I'm not reading the quote in the full context of its source. Maybe those statements were meant more like heuristics about what types of reasoning tend to have better consequences?)
But if I recall correctly, your post mostly focused on arguments that strong longtermism would have bad consequences or that longtermist arguments have been unsound. And "we should never attempt to balance anybody's misery against somebody else's happiness" is either:
- Also an argument against any prioritisation of efforts that would help people, including e.g. GiveWell's work, or
- Basically irrelevant, if it just means we can't "actively cause" misery in someone (as opposed to just "not helping") in order to help others
- I think that longtermism doesn't do that any more than GiveWell does
So I think that that quote arrives at a similar conclusion to you, but it might show very different reasoning for that conclusion than your reasoning?
Do you have a sense of what the double crux(es) is/are between you and most longtermists?
MichaelA @ 2021-05-03T14:26 (+23)
Masrani seems to take (some of) Greaves and MacAskill’s examples and tentative views about what strong longtermism might indicate one should prioritise as a logically necessary consequence of the moral view itself. In particular, Masrani seems to assume that longtermism necessarily focuses solely on existential risk reduction. But this is actually incorrect.
- E.g., Masrani writes: “This assumption is why longtermism states it is always better to work on x-risks than anything else one might want to do to improve the short-term.”
- But in reality, what strong longtermism would say one should prioritise depends on various empirical features of the world, as well as aspects of one’s philosophical views other than strong longtermism itself (e.g., one’s views on population ethics).
- I think the main two contenders for alternative longtermist priorities are (1) trajectory changes other than existential risks and (2) speeding up development/progress.
- Masrani also seems to have not noticed that Greaves and MacAskill’s paper itself notes some things other than existential risk reduction which could be priorities under a strong longtermist perspective, and which could align more with the sort of things GiveWell supports.
- E.g., speeding up progress.
MichaelA @ 2021-05-03T14:35 (+8)
I felt uncomfortable with and confused by the section of the post that was about jargon and euphemisms.
- E.g., Masrani writes “No single individual is more of an expert in morality than another, and we all have a right to ask for these ideas to be expressed in plain english.”
- I definitely think that people sometimes use jargon unnecessarily or fail to explain jargon when they should’ve.
- See also 3 suggestions about jargon in EA.
- But I also think jargon can be very useful.
- And it seemed to me that this section of the post implied that various authors were deliberately being hard to understand in order to make it less likely that they’d be held accountable, or something like that.
- (Though it's possible that I just happened to incorrectly get that "vibe", and that that wasn't an implication Masrani intended.)
- And I definitely think that some people are more of an expert in morality than other people, in one relevant sense of "expertise" - namely, having thought more about it, having more useful concepts, knowing who the other people to talk to about related things are, etc.
- I’m not very confident that these people will tend to have better bottom-line views about morality than other people (though I tentatively think they would).
- But I do think I’ll learn more about morality by talking to them than by talking to a randomly chosen member of the world population.
Will Payne @ 2021-05-09T16:00 (+10)
Also worth noting that there are a bunch of other more accessible descriptions of longtermism out there and this is specifically a formal definition aimed at an academic audience (by virtue of being a GPI paper)
MichaelA @ 2021-05-03T14:33 (+8)
Masrani seemed to jump to a strange, uncharitable, and incorrect conclusion about the history of longtermist thought.
Masrani wrote:
I will primarily focus on The case for strong longtermism, listed as “draft status” on both Greaves and MacAskill’s personal websites as of November 23rd, 2020. It has generated quite a lot of conversation within the effective altruism (EA) community despite its status, including multiple podcast episodes on 80000 hours podcast (one, two, three), a dedicated a multi-million dollar fund listed on the EA website, numerous blog posts, and an active forum discussion. (Update 21/12/2020: Oops I was sloppy with the chronology in this paragraph - Patrick points out that the paper formalizes and extends ideas that have existed in the community for a while.)”
- I appreciate that Masrani was willing to say "oops" and acknowledged having made a mistake here
- But I think that this is a strange mistake to have made
- I think if it had seemed to me that a single draft-status paper had led to all of those consequences, I'd find that very surprising, and I'd therefore at least google the term “longtermism” to check whether that's indeed the case
- And at that point, I’d quickly find mentions of the term that predate the paper
- And many of the links given in the paragraph itself clearly show publication dates that precede the draft paper
- And the draft paper itself mentions prior work that makes it clear that this paper wasn’t the first presentation of ideas in this vicinity
- So this seems to me like weak evidence of (1) a failure to read the paper carefully and (2) a willingness to quickly jump to uncharitable interpretations.
- And some things mentioned in other comments of mine here also seem to me like weak evidence of the same things.
- (But I do worry that this comment in particular sounds kind-of personal and attacking, and I apologise if it does - that's not my intent.)
MichaelA @ 2021-05-03T14:43 (+7)
Masrani says that "longtermism encourages us to treat our fellow brothers and sisters with careless disregard for the next one thousand years, forever". But strong longtermism being true now doesn't mean it always was true and always will be true, as Greaves and MacAskill themselves note.
Masrani writes:
The monumental asymmetry between the present and future that the longtermists seem to be missing is that the present moves with us, while the future never arrives. Concretely, this means that longtermism encourages us to treat our fellow brothers and sisters with careless disregard for the next one thousand years, forever. There will always be the next one thousand years.
- But Greaves and MacAskill's paper explicitly notes that longtermism depends on surprising empirical facts that would not always be true, and that strong longtermism may not have held in the past.
- Also, Greaves and MacAskill’s discussion of attractor states provides one obvious way in which strong longtermism could stop being true in future.
- I.e., if we reach an attractor state (e.g., extinction, or lock-in of a good future), the future from that point onwards will then be far harder to influence, which would presumably very much weaken the case for strong longtermism at that point.
- The case for strong longtermism would also tend to become less compelling as the ratio between the total size of the present and near-term generations and the total size of the far-future generations grows larger (unless this is offset by an increased ability to influence the future and/or to predict that influence).
- This ratio will grow larger as our civilization expands and as we progress towards some "unchangeable limits of the universe".
- At some point, our better ability to influence the near term will presumably outweigh the larger size of the future.
MichaelA @ 2021-05-03T14:29 (+7)
At least in some places, Masrani seems to think or imply that longtermism doesn’t aim to influence any events that occur in the next (say) 1000 years. But in reality, longtermists mostly focus on influencing the further future via influencing things that happen within the next 1000 years (e.g., whether an existential catastrophe occurs).
- I.e., most longtermists still care a great deal about the nearer-term future for instrumental reasons (as well as caring somewhat for intrinsic reasons)
Davidmanheim @ 2021-05-06T06:49 (+2)
This seems to agree with his criticism - that we care about the near term only as it affects the long term, and can therefore justify ignoring even negative short-term consequences of our actions if doing so leads to future benefits. It argues even more strongly for abandoning interventions that are beneficial in the short term but have only small longer-term impacts.
Obvious examples of how this goes wrong include many economic planning projects of the 20th century, where the short term damage to communities, cities, and livelihoods was justified by incorrect claims about long term growth.
MichaelA @ 2021-05-06T07:51 (+4)
tl;dr: I basically agree with everything except "This seems to agree with his criticism", because I think (from memory) that Masrani was making a stronger and less valid claim. (Though I'm not totally sure; it may have just been slightly sloppy writing + the other misconception that longtermism is necessarily solely focused on existential risk reduction.)
---
I think there's a valid claim similar to what Masrani said, and that that could reasonably be seen as a criticism of longtermism given some reasonable moral and/or empirical assumptions. Specifically, I think it's true that:
- The very core of strong longtermism is the idea that the intrinsic importance of the effects of our actions on the long-term future is far greater than the intrinsic importance of the effects of our actions on the near term, and thus that we should focus on how our actions affect the long term (or, in other words, the near-term effects we should aim for are whichever ones are best for the long term)
- It seems very likely to be the case that what's best for the long-term isn't what's the very best for the near-term
- It seems plausible that what's best for the long-term is actually net-negative for the near-term
- This means acting according to strong longtermism will likely be worse for the near-term than acting according to (EA-style) neartermism, and might be net-negative for the near-term
- Various historical cases suggest that "ends justify the means" reasoning and attempts to enact grand, long-term visions often have net negative effects
- (Though I'm not actually sure how often they had net-negative rather than net-positive effects, how this differs from other types of reasoning and planning, and how analogous those cases are to longtermist efforts in relevant ways.)
- But this might suggest that, in practice, strong longtermism is more likely to be bad for the near-term than it should be in theory
I would mostly "bite the bullet" of this critique - i.e., say that we can't prioritise everything at once, and if the case for strong longtermism holds up then it's appropriate that we prioritise the long-term at the expense of the short-term. And then I do think we should remain vigilant of ways our thinking, priorities, actions, etc. could mirror bad instances of "ends justify the means" etc.
But I could understand someone else being more worried about this objection.
Also, FWIW, I think the Greaves and MacAskill paper maybe fails to acknowledge that strong-longtermist actions might be very strange or net-negative from a near-term perspective, rather than just not top priorities. (Though maybe I just forgot where they said this.) I made a related comment here.
---
We could steelman Masrani into making the above sorts of claims and then have a productive discussion. But I think it's also useful to sometimes just talk about what someone actually said and correct things that are actually misleading or common misconceptions. And I think Masrani was making a stronger claim (though I'm now unsure, as mentioned at the top), which I also think some other people actually believe and which seems like a misconception worth correcting (see also). (To be fair, I think Greaves & MacAskill could maybe have been more careful with some phrasings to avoid people forming this misconception.)
E.g. Masrani writes:
The recent working paper by Hilary Greaves and William MacAskill puts forth the case for strong longtermism, a philosophy which says one should simply ignore the consequences of one’s actions if they take place over the “short term” timescale of 100 to 1000 years
And:
To reiterate, longtermism gives us permission to completely ignore the consequences of our actions over the next one thousand years, provided we don’t personally believe these actions will rise to the level of existential threats.
And:
longtermism encourages us to treat our fellow brothers and sisters with careless disregard for the next one thousand years, forever.
(But again, I now realise that this might have just been slightly sloppy writing + the x-risk misconception, and also that Greaves & MacAskill may have been slightly sloppy with some phrases as well in a way that contributed to this. So I think this point isn't especially important as a critique of the post.
Though I guess my original statement still seems appropriately hedged: "At least in some places, Masrani seems to think or imply that longtermism doesn’t aim to influence any events that occur in the next (say) 1000 years." [emphasis added])
Davidmanheim @ 2021-05-06T10:49 (+2)
I think we basically agree.
And while I agree that it's sometimes useful to respond to what was actually said, rather than the best possible claims, that type of post is useful as a public response, rather than useful for discussion of the ideas. Given that the forum is for discussion about EA and EA ideas, I'd prefer to use steelman arguments where possible to better understand the questions at hand.
MichaelA @ 2021-05-03T14:50 (+5)
(I'll put a bundle of smaller, disconnected reactions in this one thread.)
Masrani writes:
This observation - that in expectation, the future is not vast, but undefined - passes a few basic sanity checks. First, we know from common sense that we cannot predict the future, in expectation or otherwise. Prophets have been trying this for millennia with little success - it would be rather surprising if the probability calculus somehow enabled it.
- But empirical evidence and common sense actually clearly demonstrate that we can predict the future with better than chance accuracy, at least in some domains, and sometimes very easily
- E.g., I can predict that the sun will rise tomorrow
- See also Phil Tetlock’s work
MichaelA @ 2021-05-03T15:14 (+17)
Masrani seems to confuse (1) pure time discounting / a pure rate of time preference with (2) time discounting for other reasons (e.g., due to the possibility that the future won’t come to pass due to a catastrophe; see Greaves).
- In particular, Masrani seems to claim that Greaves and MacAskill's paper is wrong to reject pure time discounting, but bases that claim partly on the fact that there could be a catastrophe in future (which is a separate matter from pure time discounting).
- E.g., Masrani writes: “We should be biased towards the present for the simple reason that tomorrow may not arrive. The further out into the future we go, the less certain things become, and the smaller the chance is that we’ll actually make it there. Preferring good things to happen sooner rather than later follows directly from the finitude of life."
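To make the distinction concrete (this is my own illustrative formalisation, not an equation from the paper): one common way to write a discount factor on value at time t combines a pure rate of time preference with a hazard rate capturing the chance that the future in question never arrives, roughly as follows.

```latex
% Illustrative sketch only, not from Greaves & MacAskill's paper.
% \delta = pure rate of time preference; r = hazard (catastrophe) rate.
D(t) = e^{-(\delta + r)\,t}
```

Greaves and MacAskill argue (roughly) that the pure rate delta should be zero, while allowing the hazard rate r to be positive. Masrani's "tomorrow may not arrive" argument only supports r > 0, which the paper already grants, so it doesn't bear on whether delta should be zero.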
---
Another, separate point about discounting:
Masrani writes:
If one does not discount the future, then one is equally concerned about every moment in time
But as far as I can tell, this is false, at least if taken literally; instead, how concerned one should be about a given moment in time depends in part on what’s happening at that time (e.g. how many moral patients there are, and what they’re experiencing).
MichaelA @ 2021-05-03T15:07 (+5)
Masrani writes:
Second, we know from basic results in epistemology (discussed here before) that predicting the future course of human history is impossible when that history depends on future knowledge, which we by definition don’t know. We cannot know today what we will only learn tomorrow. It is not the case that someone standing in 1200 would assign a “low credence” to the statement “the internet will be invented in the 1990’s”. They wouldn’t be able to think the thought in the first place, much less formalize it mathematically.
- But we very often predict things that depend on things we don’t fully understand, and with above chance accuracy.
- E.g. I can often predict with decent success what someone will do, even without knowing everything they know, and even when some things that they know and that I don’t know are relevant to what they’ll do.
- To be clear, I’d agree with lots of weaker claims in this vicinity, like that predicting the future is very hard, and that one thing that makes it harder is that we lack some knowledge which future people will have (e.g., about the nature of future technologies).
- But saying we can’t ever predict the future at all is too strong.
Davidmanheim @ 2021-05-06T06:55 (+2)
Yes, this seems to be a problem, but it's also a problem with naive expected value thinking that prioritizes predictions without looking at adaptive planning or value of information. And I think Greaves and MacAskill don't really address these issues sufficiently in their paper - though I agree that they have considered them and are open to further refinement of their ideas.
But I don't believe that it's clear we predict things about the long term "with above chance accuracy." If we do, it's not obvious how to construct the baseline probability we would expect to outperform.
Critically, the requirement for this criticism to be correct is that our predictions are not good enough to point to interventions that have higher expected benefit than more-certain ones, and this seems very plausible. Constructing the case for whether or not it is true seems valuable, but mostly unexplored.
MichaelA @ 2021-05-06T08:16 (+2)
Yeah, I agree with your first two paragraphs. (I don't think I understand the third one; feel free to restate that, if you've got time.)
In particular, it's worth noting that I agree that it's not currently clear that we can predict (decision-relevant) things about the long-term with above chance accuracy (see also the long-range forecasting tag). Above, I merely claimed that "we very often predict things that depend on things we don’t fully understand, and with above chance accuracy" - i.e., I didn't specify long-term.
It does seem very likely to me that it's possible to predict decision-relevant things about the long-term future at least slightly better than complete guesswork. But it seems plausible to me that our predictive power becomes weak enough that that outweighs the increased scale of the future, such that we should focus on near-term effects instead. (I have in mind basically Tarsney's way of framing the topic from his "Epistemic Challenge" paper. There are also of course factors other than those two things that could change the balance, like population ethical views or various forms of risk-aversion.)
This seems like a super interesting and important topic, both for getting more clarity on whether we should adopt strong longtermism and on how to act given longtermism.
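(A very loose sketch of that trade-off, using my own made-up symbols rather than Tarsney's actual model: if p is the probability that our action makes a persistent, evaluatively significant difference to the far future, V_far is the value at stake if it does, and V_near is the value we could fairly reliably create by focusing on near-term effects, then on this crude framing longtermism wins roughly when the inequality below holds - and a small enough p can outweigh an astronomically large V_far.)

```latex
% Crude illustrative framing only (my own gloss, not Tarsney's model):
p \cdot V_{\mathrm{far}} > V_{\mathrm{near}}
```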
---
I specified "decision-relevant" above because of basically the following points Tarsney makes in his Epistemic Challenge paper:
The epistemic challenge to longtermism emphasizes the difficulty of predicting the far future. But to understand the challenge, we must specify more precisely the kind of predictions we’re interested in. After all, some predictions about the far future are relatively easy. For instance, I can confidently predict that, a billion years from now, the observable universe will contain more than 100 and fewer than 10^100 stars. (And this prediction is quite precise, since (100, 10^100) comprises only an infinitesimal fraction of the natural numbers!)

But our ability to make predictions like these doesn’t have much bearing on the case for longtermism. For roughly the same reason that it is relatively easy to predict, the number of stars in the observable universe is very difficult to affect. And what we need, for practical purposes, is the ability to predictably affect the world by doing one thing rather than another. That is, we need the ability to make practical predictions—predictions that, if I choose O_j, the world will be different in some particular way than it would have been if I had chosen O_k.

Even long-term practical predictions are sometimes easy. For instance, if I shine a laser pointer into the sky, I can predict with reasonable confidence that a billion years from now, some photons will be whizzing in a certain direction through a certain region of very distant space, that would not have been there if I had pointed the laser pointer in a different direction. I can even predict what the wavelength of those photons will be, and that it would have been different if I had used my green instead of my red laser pointer.

But our ability to make predictions like these isn’t terribly heartening either, since photons whizzing through one region or another of empty space is not (presumably) a feature of the world that matters. What we really want is the ability to make long-term evaluative practical predictions: predictions about the effects of our present choices on evaluatively significant features of the far future. The epistemic challenge to longtermism claims that our ability to make this sort of prediction is so limited that, even if we concede the astronomical importance of the far future, the longtermist thesis still comes out false.
Davidmanheim @ 2021-05-06T10:52 (+2)
Agree that this is important, and it's something I've been thinking about for a while. But the last paragraph was just trying to explain what the paper said (more clearly) were evaluative practical predictions. I just think about that in more decision-theoretic terms, and if I was writing about this more, would want to formulate it that way.
MichaelA @ 2021-05-03T14:50 (+5)
Masrani focuses quite a bit on the idea that longtermism relies on comparisons to an infinite amount of potential future good. But Greaves and MacAskill's paper doesn't actually mention infinity at any point, and neither their argument nor the other standard arguments I've seen rely at all on infinities.
- E.g., Masrani writes: "By “this observation” I just mean the fact that longtermism is a really really bad idea because it lets you justify present day suffering forever, by always comparing it to an infinite amount of potential future good (forever).”
- (I won't say more on this here, since the comments section of the link-post for Masrani’s post already contains an extensive discussion of whether and how infinities might be relevant in relation to longtermism.)
MichaelA @ 2021-05-03T14:51 (+4)
Masrani writes:
Therefore there are no uncertainties associated with predictions made in expectation. Adding the magic words “in expectation” allows longtermists to make predictions about the future confidently and with absolute certainty.”
- But I think that this is simply false: our predictions (as well as other credences) can differ in how "resilient" they are
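(As a toy illustration of credal resilience, not drawn from the post or the paper: two people can assign the same probability to an event while differing enormously in how much a given piece of new evidence would move them. A minimal sketch with made-up numbers follows.)

```python
# Toy illustration of credal resilience (illustrative numbers only).
# Two agents both assign probability 0.5 to "this coin lands heads",
# but their credences differ greatly in how resilient they are to new evidence.

def beta_posterior_mean(prior_heads, prior_tails, heads, tails):
    """Posterior mean of a Beta(prior_heads, prior_tails) credence after observing data."""
    return (prior_heads + heads) / (prior_heads + prior_tails + heads + tails)

# Agent A: weak prior, equivalent to having already seen 1 head and 1 tail.
# Agent B: strong prior, equivalent to having already seen 100 heads and 100 tails.
# Both start with credence 0.5 that the next flip is heads.
data_heads, data_tails = 8, 2  # both then observe 8 heads in 10 flips

print(beta_posterior_mean(1, 1, data_heads, data_tails))      # ~0.75: the low-resilience credence moves a lot
print(beta_posterior_mean(100, 100, data_heads, data_tails))  # ~0.51: the high-resilience credence barely moves
```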
MichaelA @ 2021-05-03T14:26 (+3)
Masrani seems to sort-of implicitly assume that (a) people will have strong ulterior motives to bend the ideas of strong longtermism towards things that they want to believe or support anyway (for non-altruistic reasons), and thus (b) we must guard against a view or a style of reasoning which is vulnerable to being bent in that way. But I think it would be more productive and accurate to basically “assume good faith”.
- I think longtermism is actually less about “lifting some constraints” or letting us “get away with” something, and more about saying what we should do in certain circumstances.
- Relatedly, strong longtermism doesn’t say that short-term suffering doesn’t matter and that therefore we can do whatever we want; instead, it says the long term matters even more, and thus we are obligated to focus on helping the future.
- And, empirically, it really doesn’t seem like most people who identify with longtermism are mostly bending strong longtermism towards things they wanted to believe or support anyway.
- (It does seem likely that there’s some degree of ulterior motives and rationalisation, but not that that’s a dominant force.)
- Indeed, many of these people have switched their priorities due to longtermism, find their new priorities less emotionally resonant, and may have faced disruptions to their social or work lives due to the switch they made.
- See e.g. Why I find longtermism hard, and what keeps me motivated
- This data doesn’t disprove the idea that all of this happened due to ulterior motives or rationalisation (e.g., maybe the dominant motive was to conform to the beliefs of some prestigious-seeming group), but it does seem to be some evidence against that theory.
MichaelA @ 2021-05-03T14:27 (+13)
This ties into another point: Many of the framings and phrasings in Masrani’s post seem quite “loaded”, in the sense of making something sound bad partly just through strong connotations or rhetoric rather than explicit arguments in neutral terms.
- E.g., the author writes “I think, however, that longtermism has the potential to destroy the effective altruism movement entirely, because by fiddling with the numbers, the above reasoning can be used to squash funding for any charitable cause whatsoever. The stakes are really high here.”
- But I think that most longtermists aren’t trying to fiddle with the numbers in order to squash funding for things that are cost-effective; most of them are mostly trying to actually work out what’s true and use that info to improve the world.
- E.g., the author writes “To reiterate, longtermism gives us permission to completely ignore the consequences of our actions over the next one thousand years, provided we don’t personally believe these actions will rise to the level of existential threats. In other words, the entirely subjective and non-falsifiable belief that one’s actions aren’t directly contributing to existential risks gives one carte blanche permission to treat others however one pleases. The suffering of our fellow humans alive today is inconsequential in the grand scheme of things. We can “simply ignore” it - even contribute to it if we wish - because it doesn’t matter. It’s negligible. A mere rounding error.”
- I do think that “inconsequential in the grand scheme of things” is indeed in some sense essentially an implication of longtermism. But that seems like a quite misleading way of framing it.
- I think the spirit of the longtermist view is more along the lines of thinking that what we already thought mattered still matters a lot, but also that other things matter surprisingly and hugely much, such that there may be a strong reason to strongly prioritise those other things.
- So the spirit is more like caring about additional huge things, rather than being callous about things we used to care about.
- Though I do acknowledge that those different framings can reach similar conclusions in practice, and also that longtermism is sometimes framed in a way that is more callous/dismissive than I’m suggesting here.
MichaelStJules @ 2021-05-06T02:22 (+2)
Masrani seems to sort-of implicitly assume that (a) people will have strong ulterior motives to bend the ideas of strong longtermism towards things that they want to believe or support anyway (for non-altruistic reasons), and thus (b) we must guard against a view or a style of reasoning which is vulnerable to being bent in that way. But I think it would be more productive and accurate to basically “assume good faith”.
This can happen unconsciously, though, e.g. confirmation bias, or whenever there's arbitrariness or "whim", e.g. priors or how you weight different considerations with little evidence. The weaker the evidence, the more prone to bias, and there's self-selection so that the people most interested in longtermism are the ones whose arbitrary priors and weights support it most, rightly or wrongly. (EDIT: see the optimizer's curse.) This is basically something Greaves and MacAskill acknowledge in their paper, although also argue applies to short-term-focused interventions:
Finally, it might seem at first sight that ambiguity aversion would undermine the case for strong longtermism. In contemplating options like those discussed in section 3, the first-order task is to assess what are the rational credences that some given intervention to (say) reduce extinction risk, or reduce the chance of major global conflict, or increase the safety of artificial intelligence, and so on, would lead to a large positive payoff in the long run. The thing that is most striking about this task is that it is hard. There is very little data to guide credences; one has an uncomfortable feeling of picking numbers, for the purposes of guiding important decisions, somewhat arbitrarily. That is, such interventions generate significant ambiguity. However, on reflection, attempts to optimise the short run also generate significant ambiguity, since it is very unclear what might be the long run consequences of (say) bed net distribution (Greaves, 2016). In addition, we again face the issue of whether one should be ambiguity averse with respect to the state of the world, or instead with respect to the difference one makes oneself to that state. We explore these issues in a related paper (Greaves, MacAskill and Mogensen, manuscript).
That being said, I suspect it's possible in practice to hedge against these indirect effects from short-term-focused interventions.
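(For readers unfamiliar with the optimizer's curse mentioned above, here is a toy simulation, purely illustrative and not from the quoted paper: if we pick whichever option has the highest noisy value estimate, the selected estimate systematically overstates that option's true value, and the overstatement grows as the estimates get noisier - which is part of why weak-evidence areas are especially prone to looking better than they are.)

```python
# Toy simulation of the optimizer's curse (illustrative numbers only).
import random

random.seed(0)
n_options = 20       # candidate interventions
true_value = 1.0     # suppose every option is actually equally good
noise_sd = 3.0       # our value estimates are very noisy
n_trials = 10_000

total_overestimate = 0.0
for _ in range(n_trials):
    estimates = [true_value + random.gauss(0, noise_sd) for _ in range(n_options)]
    # Select the option that *looks* best; its estimate is biased upward.
    total_overestimate += max(estimates) - true_value

print(total_overestimate / n_trials)  # roughly +5 to +6 with these numbers
```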
MichaelA @ 2021-05-06T07:13 (+2)
That being said, I suspect it's possible in practice to hedge against these indirect effects from short-term-focused interventions.
I haven't read your post, so can't comment.
That said, FWIW, my independent impression is that "cluelessness" isn't a useful concept and that the common ways the concept has been used either to counter neartermism or counter longtermism are misguided. (I write about this here and here.) So I guess that that's probably consistent with your conclusion, though maybe by a different road. (I prefer to use the sort of analysis in Tarsney's epistemic challenge paper, and I think that that pushes in favour of either longtermism or further research on longtermism vs neartermism, though I definitely acknowledge room for debate on that.)
MichaelStJules @ 2021-05-06T21:17 (+6)
I think Tarsney's paper does not address/avoid cluelessness, or at least its spirit, i.e., the arbitrary weighting of different considerations, since
- You still need to find a specific intervention that you predict ex ante pushes you towards one attractor and away from another, and you have more reason to believe it does this than it goes in the opposite direction (in expectation, say). If you have more reason to believe this due to arbitrary weights, which could reasonably have been chosen to have the intervention backfire, this is not a good epistemic state to be in. For example, is the AI safety work we're doing now backfiring? This could be due to, for example:
- creating a false sense of security,
- publishing the results of the GPT models, demonstrating AI capabilities and showing the world how much further we can already push it, and therefore accelerating AI development, or
- slowing AI development more in countries that care more about safety than those that don't care much, risking a much worse AGI takeover if it matters who builds it first.
- You still need to predict which of the attractors is ex ante ethically better, which again involves both arbitrary empirical weights and arbitrary ethical weights (moral uncertainty). You might find the choice to be sensitive to something arbitrary that could reasonably go either way. Is extinction actually bad, considering the possibility of s-risks?
Does some s-risk (e.g. AI safety, authoritarianism) work reduce some extinction risks and so increase other s-risks, and how do we weigh those possibilities?
I worry that research on longtermism vs neartermism (like Tarsney's paper) just ignores these problems, since you really need to deal with somewhat specific interventions, because of the different considerations involved. In my view, (strong) longtermism is only true if you actually identify an intervention that you can only reasonably believe does (much) more net good in the far future in expectation than short-term-focused alternatives do in the short term in expectation, or, roughly, that you can only reasonably believe does (much) more good than harm (in the far future) in expectation. This requires careful analysis of a specific intervention, and we may not have the right information now or ever to confirm that a particular intervention satisfies these conditions. To every longtermist intervention I've tried to come up with specific objections to, I've come up with objections that I think could reasonably push it into doing more harm than good in expectation.
Of course, what should "reasonable belief" mean? How do we decide which beliefs are reasonable and which ones aren't (and the degree of reasonableness, if it's a fuzzy concept)?
MichaelA @ 2021-05-07T06:47 (+10)
Basically, I agree that longtermist interventions could have these downside risks, but:
- I think we should basically just factor that into their expected value (while using various best practices and avoiding naive approaches)
- I do acknowledge that this is harder than that makes it sound, and that people often do a bad job. But...
- I think that these same points also apply to neartermist interventions
- Though with less uncertainty about at least the near-term effects, of course
Of course, what should "reasonable belief" mean? How do we decide which beliefs are reasonable and which ones aren't (and the degree of reasonableness, if it's a fuzzy concept)?
I think this gets at part of what comes to mind when I hear objections like this.
Another part is: I think we could say all of that with regards to literally any decision - we'd often be less uncertain, and it might be less reasonable to think the decision would be net negative or astronomically so, but I think it just comes in degrees, rather than applying strongly to some scenarios and not at all applying to others. One way to put this is that I think basically every decision meets the criteria for complex cluelessness (as I argued in the above-mentioned links: here and here).
But really I think that (partly for that reason) we should just ditch the term "complex cluelessness" entirely, and think in terms of things like credal resilience, downside risk, skeptical priors, model uncertainty, model combination and adjustment, the optimizer's curse, best practice for forecasting, and expected values given all that.
Here I acknowledge that I'm making some epistemological, empirical, decision-theoretic, and/or moral claims/assumptions that I'm aware various people who've thought about related topics would contest (including yourself and maybe Greaves, both of whom have clearly "done your homework"). I'm also aware that I haven't fully justified these stances here, but it seemed useful to gesture roughly at my conclusions and reasoning anyway.
I do think that these considerations mostly push against longtermism and in favour of neartermism. (Caveats include things like being very morally uncertain, such that e.g. reducing poverty or reducing factory farming could easily be bad, such that maybe the best thing is to maintain option value and maximise the chance of a long reflection. But this also reduces option value in some ways. And then one can counter that point, and so on.) But I think we should see this all as a bunch of competing quantitative factors, rather than as absolutes and binaries.
(Also, as noted elsewhere, I currently think longtermism - or further research on whether to be longtermist - comes out ahead of neartermism, all-things-considered, but I'm unsure on that.)
MichaelStJules @ 2021-05-07T16:02 (+2)
I don't think it's usually reasonable to choose only one expected value estimate, though, and this to me is the main consequence of cluelessness. Doing your best will still leave a great deal of ambiguity if you're being honest about what beliefs you think would be reasonable to have, despite not being your own fairly arbitrary best guess (often I don't even have a best guess, precisely because of how arbitrary that seems). Sensitivity analysis seems important.
But really I think that (partly for that reason) we should just ditch the term "complex cluelessness" entirely, and think in terms of things like credal resilience, downside risk, skeptical priors, model uncertainty, model combination and adjustment, the optimizer's curse, best practice for forecasting, and expected values given all that.
I would say complex cluelessness basically is just sensitivity of recommendations to model uncertainty. The problem is that it's often too arbitrary to come to a single estimate by combining models. Two people with access to all of the same information and even the same ethical views (same fundamental moral uncertainty and methods for dealing with them) could still disagree about whether an intervention is good or bad, or which of two interventions is best, depending basically on whims (priors, arbitrary weightings).
At least substantial parts of our credences are not very sensitive to arbitrariness with short-termist interventions with good evidence, even if the expected value as a whole is, but the latter is what I hope hedging could be used to control. Maybe you can do this just with longtermist interventions, though. A portfolio of interventions can be less ambiguous than each intervention in it. (This is what my hedging post is about.)
MichaelA @ 2021-05-06T07:11 (+2)
tl;dr: I basically agree with your first paragraph, but think that:
- that's mostly consistent with my prior comment
- that doesn't represent a strong argument against longtermism
- Masrani's claims/language go beyond the defensible claims you're making
This can happen unconsciously, though, e.g. confirmation bias, or whenever there's arbitrariness or "whim", e.g. priors or how you weight different considerations with little evidence. The weaker the evidence, the more prone to bias
Agreed. But:
- I think that a small to moderate degree of such bias is something I acknowledged in my prior comment
- (And I intended to imply that it could occur unconsciously, though I didn't explicitly state that)
- I think unconscious bias is always a possibility, including in relation to whatever alternative to longtermism one might endorse
- See also Caution on Bias Arguments and Beware Isolated Demands for Rigor
- That said, I think "The weaker the evidence, the more prone to bias" is true (all other factors held constant), and I think that that does create one reason why bias may push in favour of longtermism more than in favour of other things.
- I think I probably should've acknowledged that.
- But there's still the fact that there are so many other sources of bias, factors exacerbating or mitigating bias, etc. So it's still far from obvious which group of people (sorted by current cause priorities) is more biased overall in their cause prioritisation.
- And I think that there's some value in trying to figure that out, but that should be done and discussed very carefully, and is probably less useful than other discussions/research that could inform cause priorities.
- E.g., scope neglect, identifiable victim effects, and confirmation bias when most people first enter EA (since more were previously interested in global health & dev than in longtermism) all bias against longtermism
- But a desire to conform to what's currently probably more "trendy" in EA biases towards longtermism
- And so on
- Less important: It seems far from obvious to me whether there's substantial truth in the claim that "there's self-selection so that the people most interested in longtermism are the ones whose arbitrary priors and weights support it most, rightly or wrongly", even assuming bias is a big part of the story.
- E.g., I think things along the lines of conformity and deference are more likely culprits for "unwarranted/unjustified" shifts towards longtermism than confirmation bias are
- It seems like a very large portion of longtermists were originally focused on other areas and were surprised to find themselves ending up longtermist, which makes confirmation bias seem like an unlikely explanation
- Compared to what you're suggesting, Masrani - at least in some places - seems to imply something more extreme, more conscious, and/or more explicitly permitted by longtermism itself (rather than just general biases that are exacerbated by having limited info)
- E.g., "by fiddling with the numbers, the above reasoning can be used to squash funding for any charitable cause whatsoever." [emphasis added]
- E.g., “To reiterate, longtermism gives us permission to completely ignore the consequences of our actions over the next one thousand years, provided we don’t personally believe these actions will rise to the level of existential threats. In other words, the entirely subjective and non-falsifiable belief that one’s actions aren’t directly contributing to existential risks gives one carte blanche permission to treat others however one pleases. The suffering of our fellow humans alive today is inconsequential in the grand scheme of things. We can “simply ignore” it - even contribute to it if we wish - because it doesn’t matter." [emphasis added]
- This very much sounds to me like "assuming bad faith" in a way that I think is both unproductive and inaccurate for most actual longtermists
- I.e., this sounds quite different to "These people are really trying to do what's best. But they're subject to cognitive biases and are disproportionately affected by the beliefs of the people they happen to be around or look up to - as are we all. And there are X, Y, Z specific reasons to think those effects are leading these people to be more inclined towards longtermism than they should be."