Why I am probably not a longtermist
By Denise_Melchin @ 2021-09-23T17:24 (+242)
tl;dr: I am much more interested in making the future good, as opposed to long or big, as I neither think the world is great now nor am convinced it will be in the future. I am uncertain whether there are any scenarios which lock us into a world at least as bad as now that we can avoid or shape in the near future. If there are none, I think it is better to focus on “traditional neartermist” ways to improve the world.
I thought it might be interesting to other EAs why I do not feel very on board with longtermism, as longtermism is important to a lot of people in the community.
This post is about the worldview called longtermism. It does not describe a position on cause prioritisation. It is very possible for causes commonly associated with longtermism to be relevant under non-longtermist considerations.
I structured this post by crux and highlighted what kind of evidence or arguments would convince me that I am wrong, though I am keen to hear about others which I might have missed! I usually did not investigate my cruxes thoroughly. Hence, only ‘probably’ not a longtermist.
The quality of the long-term future
1. I find many aspects of utilitarianism uncompelling.
You do not need to be a utilitarian to be a longtermist. But I think depending on how and where you differ from total utilitarianism, you will probably not go ‘all the way’ to longtermism.
I very much care about handing the world off in a good state to future generations. I also care about people’s wellbeing regardless of when it happens. What I value less than a total utilitarian is bringing happy people into existence who would not have existed otherwise. This means I am not too fussed about humanity’s failure to become much bigger and spread to the stars. While creating happy people is valuable, I view it as much less valuable than making sure people are not in misery. Therefore I am not extremely concerned about the lost potential from extinction risks (but I very much care about their short-term impact), although that depends on how good and long I expect the future to be (see below).
What would convince me otherwise:
I not only care about pursuing my own values, but I would also like to ensure that other people’s reflected values are implemented. For example, if it turned out that most people in the world really care about increasing the human population in the long term, I would prioritise it much more. However, I am a bit less interested in the sum of individual preferences and more in the preferences of a wide variety of groups. This is to give more weight to rarer worldviews, as well as to avoid rewarding one group for outbreeding another or spreading their values in an imperialist fashion.
I also want to give the values of people who are suffering the most more weight. If they think the long-term future is worth prioritising over their current pain, I would take this very seriously.
Alternatively, convincing me of moral realism and the correctness of utilitarianism within that framework would also work. So far I have not seen a plain language explanation of why moral realism makes any sense, but it would probably be a good start.
If the world suddenly drastically improved and everyone had as good a quality of life as my current self, I would be happy to focus on making the future big and long instead of improving people’s lives.
2. I do not think humanity is inherently super awesome.
A recurring theme in a lot of longtermist worldviews seems to be that humanity is wonderful and should therefore exist for a long time. I do not consider myself a misanthrope; I expect my views to be average for Europeans. Humanity has many great aspects which I like to see thrive.
But I find the overt enthusiasm for humanity most longtermists seem to have confusing. Even now, humanity is committing genocides, letting millions of people die of hunger, enslaving and torturing people as well as billions of factory-farmed animals. I find this hard to reconcile with a “humanity is awesome” worldview.
A common counterargument to this seems to be that these are problems, but we have just not gotten around to fixing them yet. That humans are lazy, not evil. This does not compel me. I not only care about people living good lives, I also care about them being good people. Laziness is no excuse.
Right now, we have the capacity to do more. Mostly, we do not. Few people who hear about GiveWell-recommended charities decide to donate a significant amount of their income. People take intercontinental tourist flights despite knowing about climate change. Many eat meat despite having heard of conditions on factory farms. Global aid is a tiny proportion of most developed countries’ budgets. These examples are fairly cosmopolitan, but I do not consider this critical.
Taken one at a time, you can quibble with these examples. Sometimes people actually lack the information. They can have empirical disagreements or different moral views (e.g. not considering animals to be sentient). Sometimes they triage and prioritise other ways of doing good. I am okay with all of these reasons.
But in the end, it seems to me that many people have plenty of resources to do better and yet there are still enormous problems left. It is certainly great if we set up better systems in the future to reduce misery and have the right carrots and sticks in place to get people to behave better. But I am unenthusiastic about a humanity which requires these to behave well.
This also makes me reluctant to put a lot of weight on the idea that helping people is good regardless of when it happens. This is only true if people in the future are as morally deserving as people are today.
Or putting this differently: if humans really were so great, we would not need to worry about all these risks to the future. They would solve themselves.
What would convince me otherwise:
I would be absolutely thrilled to be wrong about how moral people are where I live! Admittedly, I find it hard to think of plausible evidence as it seems to be in direct contradiction to the world I observe. Maybe it is genuinely a lack of information that stops people from acting better, as e.g. Max Roser from Our World in Data seems to believe. Information campaigns having large effects would be persuasive.
I am unfamiliar with how seriously people take their moral obligations in other places and times. Maybe the lack of investment I see is a local aberration.
Even though this should not have an impact on my worldview, I would probably also feel more comfortable with the longtermist idea if I saw a stronger focus on social or medical engineering to produce (morally) better people within the longtermist community.
3. I am unsure whether the future will be better than today.
In many ways, the world has gotten a lot better. Extreme poverty is down and life expectancy is up. Fewer people are enslaved. I am optimistic about these positive trends continuing.
What I feel more skeptical of is how much of the story these trends tell. While probably most people agree that having fewer people starve and die young is good, there are plenty of trends which get lauded by longtermists which others might feel differently about, for example the decline in religiosity. Or they can put weight on different aspects. Someone who values animals in factory farms highly might not think the world has improved.
I am concerned that seeing the world as improving is dependent on a worldview with pretty uncommon values. Using the lens of Haidt’s moral foundations theory it seems that most of the improvements are in the Care/harm foundation, while the world may not have improved according to other moral foundations like Loyalty/betrayal or Sanctity/degradation.
Also, I expect many world improvements to peter out before they become negative. But I am worried that some will not. For example, I think increased hedonism and individualism have both been good forces, but if overdone I would consider them to make the world worse, and it seems to me we are either almost or already there.
I am generally concerned about trends overshooting their original good aim by optimising too narrowly. Optimising for profit is the clearest example. I wrote a bit more about this here.
If the world is not better than it was in the past, extrapolating towards expecting an even better future does not work. For me this is another argument on wanting to focus on making the future good instead of long or big.
On a related note, while this is not an argument which deters me from longtermism, the fact that some longtermists look forward to futures which I consider to be worthless (e.g. the hedonium shockwave) puts me off. Culturally, many longtermists seem to favour more hedonism, individualism and techno-utopianism than I would like.
What would convince me otherwise:
I am well aware lots of people are pessimistic about the future because they get simple facts about how the world has been changing wrong. Yet I am interested in learning more about how different worldviews lead to perceiving the world as improving or not.
The length of the long-term future
I do not feel compelled by arguments that the future could be very long. I do not see how this is possible without at least soft totalitarianism, which brings its own risks of reducing the value of the future.
Or looking at it differently: people working on existential risks spent some years convincing me that existential risks are pretty big. Switching from that argument for working on existential risks to longtermism, which requires reaching existential security, gives me a sense of whiplash.
See also this shortform post on the topic. One argument brought up there is the Lindy rule: self-propagating systems have existed for billions of years, so we can expect them to last a similar length of time again. But I do not see why self-propagating systems should be the baseline; I am only interested in applying the Lindy rule to a morally worthwhile human civilisation, which has been rather short-lived in comparison.
I am also not keen to base decisions on rough expected value calculations in which the assessment of the small probability is uncertain and the expected value is the primary argument (as opposed to a more ‘cluster thinking’ based approach). I am not in principle opposed to such decisions, but my own track record with them is very poor: the predicted expected value from back-of-the-envelope calculations does not materialise.
I also have traditional Pascal’s mugging-type concerns about prioritising the potentially small probability of a very large civilisation.
What would convince me otherwise:
I would appreciate solid arguments on how humanity could reach existential security.
The ability to influence the long-term future
I am unconvinced that people can reliably have a positive impact which persists further into the future than 100 years, maybe give or take a factor of 3. But there is one important exception: if we have the ability to prevent or shape a “lock-in” scenario within this timeframe. By lock-in I mean anything which humanity can never escape from. Extinction risks are an obvious example; another is permanent civilisational collapse.
I am aware that Bostrom’s canonical definition of existential risks includes both of these lock-in scenarios, but it also includes scenarios which I consider to be irrelevant (failing to reach a transhumanist future), which is why I am not using the term in this section.
Thinking we cannot reliably impact the world for more than several decades, I do not find working on cause areas like ‘improving institutional decision-making’ compelling, except for their ability to shape or prevent a lock-in in that timeframe.
I am also only interested in lock-in scenarios which would be as bad or worse than the current world, or maybe not much better. I am not interested in preventing a future in which humans just watch Netflix all day - it would be pretty disappointing, but at least better than a world in which people routinely starve to death.
At the moment, I do not know enough about the probabilities of a range of bad lock-in scenarios to judge whether focusing on them is warranted under my worldview. If this turns out to be the case on further investigation, I could imagine describing my worldview as longtermist when pushed, but I expect I would still feel a cultural disconnect with other longtermists.
If there are no options to avoid or shape bad lock-in scenarios within the next few decades, I expect improving the world with “traditional neartermist” approaches is best. My views here are very similar to Alexander Berger’s which he laid out in this 80,000 Hours podcast.
What would convince me otherwise:
If there have been any examples of intentional impact more than a few hundred years out, I would be keen to know about them. I am familiar with Carl’s blogposts on the topic.
I expect to spend some time investigating this crux soon: if there are bad lock-in scenarios on the horizon which we can avoid or shape, that would likely change my feelings on longtermism.
Given that this is an important crux, one might well consider it premature for me to draw conclusions about my worldview already. But my other views seem sufficiently different from most of the longtermist views I hear that they were hopefully worth laying out regardless.
If anyone has any resources they want to point me to which might change my mind, I am keen to hear about them.
Thanks to AGB and Linch Zhang for providing comments on a draft of this post.
This work is licensed under a Creative Commons Attribution 4.0 International License.
Halstead @ 2021-09-27T08:50 (+81)
Thanks a lot for sharing this, Denise. Here are some thoughts on your points.
- On your point about moral realism, I'm not sure how that can be doing much work in an argument against longtermism specifically, as opposed to all other possible moral views. Moral anti-realism implies that longtermism isn't true, but then it also implies that near-termism isn't true. The thought seems to be that there could only be an argument that would give you reason to change your mind if moral realism were true, but if that were true, there would be no point in discussing arguments for and against longtermism because they wouldn't have justificatory force.
- Your argument suggests that you find a person-affecting form of utilitarianism most plausible. But in my view, we should not reach conclusions about ethics on the basis of what we find intuitively appealing without considering the main arguments for and against these positions. Person-affecting views have lots of very counter-intuitive implications and are actually quite hard to define.
- I don't think it is true that the case for longtermism rests on the total view. As discussed in the Greaves and MacAskill paper, many theories imply longtermism.
- Your view that humanity is not super-awesome seems to me compatible with longtermism. The 'not super-awesome' critique attacks a premise of one strand of longtermism which is especially focused on ensuring human survival. But other forms of longtermism do not rely on these premises. For example, if you don't think that humanity is super awesome, then focusing on values change looks a good bet, as does reducing s-risks.
- I'm not sure your point that 'the future will not be better than today' hits the mark. More precisely, I think you want to say that 'today the world is net bad and the future will be as well'. It could be true that the future is not better than today but that the future is extremely good. In that case, reducing extinction risks would still have astronomical expected value.
- Independently of point 5, again I don't think one needs to hold that the future is good for longtermism to be true. Suffering-focused people are longtermists but don't think that the future is good. Totalists could also think that the future is not good in expectation. But still even if the future is bad in expectation, if the variance of possible realisable states of the future is high, that makes affecting the trajectory of the future extremely important.
- On existential security, this is a good and underdiscussed point. I hadn't thought about this much until recently, but after looking into it I became quite convinced that a period of existential security is very likely provided that we survive catastrophic risks and avoid value lock-in. My thoughts are not original and owe a lot to discussions with Carl Shulman, Max Daniel, Toby Ord, Anders Sandberg and others. One point is that biorisk and AI risk are transition risks not state risks. The coordination problems involved in solving them are so hard that once they are solved, they stay solved. To ensure AI safety, one has to ensure stability in coordination dynamics for millions of subjective years of strategic interplay between advanced AI systems. Once we can do that, then we have effectively solved all coordination problems. Solving biorisk is also very hard if you think there will be strongly democratised biotech. If you solve that and build digital minds to explore the galaxy, then you basically eliminate biorisk. If we go to the stars, then we at least avoid earth-bound GCRs. In short, we would have huge technological power and have solved the hardest coordination problems. If you buy limits to growth arguments, you might also think that tech progress will slow down, and so catastrophic risks will fall, as they are driven by tech progress. All of this suggests that conditional on survival of the time of perils, the probability that the future is extremely long is >>10%. So, the probability is not Pascalian.
- You seem to suggest that we cannot influence the long-term future with the exception of what you call 'lock-in events' like extinction and permanent collapse, which are attractor states that could lock in a state of affairs in the next 100 years. I suppose another one would be AI-enabled permanent lock-in of bad values. But these are the main things that longtermists are working on, so I don't see how this could be a criticism of longtermism.
- I don't think that the inference from 'I don't know how to influence the future' to 'donate to AMF' follows. If you buy these cluelessness arguments (I don't personally), then it seems like two obvious things to do would be to give later, to prepare for when a potential lock-in event rears its head. So you could invest in stocks, or you could grow the movement so that it is ready to deal with a lock-in event. If you are very uncertain about how to affect the longterm future, but accept that the future is potentially extremely valuable, then this is the strongest argument ever for 'more research needed'. If there is a big asteroid heading our way but we currently feel very unsure about how to affect that, but there are some smart people who think they can stop the asteroid, the correct answer seems to me to be "let's put tonnes of resources into figuring out how to stop this asteroid" not "let's donate to AMF".
AppliedDivinityStudies @ 2021-09-23T23:39 (+33)
Hey, great post, I pretty much agree with all of this.
My caveat is: One aspect of longtermism is that the future should be big and long, because that's how we'll create the most moral value. But a slightly different perspective is that the future might be big and long, and so that's where the most moral value will be, even in expectation.
The more strongly you believe that humanity is not inherently super awesome, the more important that latter view seems to be. It's not "moral value" in the sense of positive utility, it's "moral value" in the sense of lives that can potentially be affected.
For example, you write:
I do not feel compelled by arguments that the future could be very long. I do not see how this is possible without at least soft totalitarianism, which brings its own risks of reducing the value of the future.
And I agree! But where you seem to be implying "the future will only be stable under totalitarianism, so it's not really worth fighting for", I would argue "the future will be stable under totalitarianism, so it's really important to fight totalitarianism in particular!" An overly simplistic way of thinking about this is that longtermism is (at least in public popular writing) mostly concerned with x-risk, but under your worldview, we ought to be much more concerned about s-risk. I completely agree with this conclusion, I just don't think it goes against longtermism, but that might come down to semantics.
FWIW my completely personal and highly speculative view is that EA orgs and EA leaders tend to talk too much about x-risk and not enough about s-risk, mostly because the former is more palatable, and is currently sufficient for advocating for s-risk relevant causes anyway. Or more concretely: It's pretty easy to imagine an asteroid hitting the planet, killing everyone, and eliminating the possibility of future humans. It's a lot wackier, more alienating and more bizarre to imagine an AI that not only destroys humanity, but permanently enslaves it in some kind of extended intergalactic torture chamber. So (again, totally guessing), many people have decided to just talk about x-risk, but use it as a way to advocate for getting talent and funding into AI Safety, which was the real goal anyway.
On a final note, if we take flavors of your view with varying degrees of extremity, we get, in order of strength of claim:
- X-risk is less important than s-risk
- We should be indifferent about x-risk, there's too much uncertainty both ethically and in terms of what the future will actually look like
- The potential for s-risk is so bad that we should invite, or even actively try to cause, x-risk, unless s-risk reduction is really tractable
- S-risks aside, humanity is just really net negative and we should invite x-risk no matter what (to be clear, I don't think you're making any of these claims yourself, but they're possible paths views similar to yours might lead to).
Some of these strike me as way too strong and unsubstantiated, but regardless of what we think object-level, it's not hard to think of reasons these views might be under-discussed. So I think what you're really getting at is something like, "does EA have the ability to productively discuss info-hazards". And the answer is that we probably wouldn't know if it did.
Natália Mendonça @ 2021-09-27T19:22 (+13)
FWIW my completely personal and highly speculative view is that EA orgs and EA leaders tend to talk too much about x-risk and not enough about s-risk, mostly because the former is more palatable, and is currently sufficient for advocating for s-risk relevant causes anyway. Or more concretely: It's pretty easy to imagine an asteroid hitting the planet, killing everyone, and eliminating the possibility of future humans. It's a lot wackier, more alienating and more bizarre to imagine an AI that not only destroys humanity, but permanently enslaves it in some kind of extended intergalactic torture chamber.
I’m pretty sure that risks of scenarios a lot broader and less specific than extended intergalactic torture chambers count as s-risks. S-risks are defined as merely “risks of astronomical suffering.” So the risk of having, for example, a sufficiently extremely large future with a small but nonzero density of suffering would count as an s-risk. See this post from Tobias Baumann for examples.
MichaelStJules @ 2021-09-24T15:29 (+6)
To be clear, by "x-risk" here, you mean extinction risks specifically, and not existential risks generally (which is what "x-risk" was coined to refer to, from my understanding)? There are existential risks that don't involve extinction, and some s-risks (or all, depending on how we define s-risk) are existential risks because of the expected scale of their suffering.
AppliedDivinityStudies @ 2021-09-24T21:35 (+2)
Ah, yes, extinction risk, thanks for clarifying.
Mauricio @ 2021-09-23T20:03 (+30)
Thanks for this! Quick thoughts:
- Curious what you make of writings like these. I think they directly address your crux of whether there are long-lasting, negative lock-in scenarios on the horizon which we can avoid or shape.
- Relatedly, you mention wanting to give the values of people who are suffering the most more weight. Those and related readings make what some find a good case for thinking that those who suffer most will be future generations--I imagine they'd wish more of their ancestors had been longtermists.
- I personally find arguments like these and these compelling for being tentatively optimistic about the value of the long-term future, despite sharing many of your intuitions in sections (2) and (3).
- These assume that positive experiences/lives are at least of nontrivial value, relative to the disvalue of suffering. It's not fully clear to me from this post whether that matches your values.
- Since you mention you care about others' reflected values being implemented, it seems relevant that:
- In the absence of a long, flourishing future, a wide range of values (not just happiness) would go for a very long time unfulfilled: freedom/autonomy, relationships (friendship/family/love), art/beauty/expression, truth/discovery, the continuation of tradition/ancestors' efforts, etc.
- Most people seem to value the creation of happiness and happy people quite a bit, relative to the prevention of suffering. This is suggested by e.g. a survey and the fact that most adults have (or say they want) children.
- (Maybe these aren't their reflected values, but they're potentially decent proxies for reflected values.)
Additional thoughts:
I do not see how this is possible without at least soft totalitarianism, which brings its own risks of reducing the value of the future.
I think the word "totalitarianism" is pulling too much weight here. I'm sympathetic to something like "existential security requires a great combination of preventative capabilities and civilizational resilience." I don't see why that must involve anything as nasty as totalitarianism. As one alternative, advances in automation might allow for decentralized, narrow, and transparent forms of surveillance--preventing harmful actions without leaving room for misuse of data (which I'd guess is our usual main concern about mass surveillance).
(Calling something "soft totalitarianism" also feels a bit odd, like calling something "mild extremism." Totalitarianism has historically been horrible in large part because it's been so far from being soft/moderate, so sticking the connotations of totalitarianism onto soft/moderate futures may mislead us into underestimating their value.)
I also have traditional Pascal’s mugging type concerns for prioritizing the potentially small probability of a very large civilisation.
I don't see how traditional Pascal's mugging type concerns are applicable here. As I understand them, those apply to using expected value reasoning with very low (subjective) probabilities. But surely "humanity will last with at least our current population for as long as the average mammalian species" (which implies our future is vast) is a far more plausible claim than "I'm a magical mugger from the seventh dimension"?
Denise_Melchin @ 2021-09-25T10:12 (+10)
On your second bullet point what I would add to Carl's and Ben's posts you link to is that suffering is not the only type of disvalue or at least "nonvalue" (e.g. meaninglessness comes to mind). Framing this in Haidt's moral foundations theory, suffering is only addressing the care/harm foundation.
Also, I absolutely value positive experiences! More so for making existing people happy, but also somewhat for creating happy people. I think I just prioritise it a bit less than the longtermists around me compared to avoiding misery.
I will try to respond to the s-risk point elsewhere.
Mauricio @ 2021-09-26T08:23 (+1)
Thanks! I'm not very familiar with Haidt's work, so this could very easily be misinformed, but I imagine that other moral foundations / forms of value could also give us some reasons to be quite concerned about the long term, e.g.:
- We might be concerned with degrading--or betraying--our species / traditions / potential.
- You mention meaninglessness--a long, empty future strikes me as a very meaningless one.
(This stuff might not be enough to justify strong longtermism, but maybe it's enough to justify weak longtermism--seeing the long term as a major concern.)
Also, I absolutely value positive experiences! [...] I think I just prioritise it a bit less
Oh, interesting! Then (with the additions you mentioned) you might find the arguments compelling?
Larks @ 2021-09-27T01:09 (+12)
We might be concerned with degrading--or betraying--our species / traditions / potential.
Yeah this is a major motivation for me to be a longtermist. As far as I can see, a Haidt/conservative concern for a wider range of moral values, which seem like they might be lost 'by default' if we don't do anything, is a pretty longtermist concern. I wonder if I should write something long up on this.
Denise_Melchin @ 2021-09-27T18:14 (+7)
I would be interested to read this!
Sean_o_h @ 2021-09-27T18:27 (+3)
Me too.
peterhartree @ 2021-10-01T21:44 (+4)
My recent post on Scheffler discusses some of these themes:
MichaelStJules @ 2021-09-24T16:18 (+6)
In the absence of a long, flourishing future, a wide range of values (not just happiness) would go for a very long time unfulfilled: freedom/autonomy, relationships (friendship/family/love), art/beauty/expression, truth/discovery, the continuation of tradition/ancestors' efforts, etc.
I think many (but not all) of these values are mostly conditional on future people existing or directed at their own lives, not the lives of others, and you should also consider the other side: in an empty future, everyone has full freedom/autonomy and gets everything they want, no one faces injustice, no one suffers, etc.
Most people seem to value the creation of happiness and happy people quite a bit, relative to the prevention of suffering. This is suggested by e.g. a survey and the fact that most adults have (or say they want) children.
- (Maybe these aren't their reflected values, but they're potentially decent proxies for reflected values.)
I think most people think of the badness of extinction as primarily the deaths, not the prevented future lives, though, so averting extinction wouldn't get astronomical weight. From this article (this paper):
According to the survey results, most people think that we lose more between 1 and 2 than between 2 and 3. In other words, they see most of the tragedy as the present-day deaths; they don’t see the end of the human race as a larger additional tragedy.
Mauricio @ 2021-09-24T19:30 (+3)
Thanks!
I think many (but not all) of these values are mostly conditional on future people existing or directed at their own lives, not the lives of others
Curious why you think this first part? Seems plausible but not obvious to me.
in an empty future, everyone has full freedom/autonomy and gets everything they want
I have trouble seeing how this is a meaningful claim. (Maybe it's technically right if we assume that any claim about the elements of an empty set is true, but then it's also true that, in an empty future, everyone is oppressed and miserable. So non-empty flourishing futures remain the only futures in which there is flourishing without misery.)
in an empty future [...] no one faces injustice, no one suffers
Yup, agreed that empty futures are better than some alternatives under many value systems. My claim is just that many value systems leave substantial room for the world to be better than empty.
I think most people think of the badness of extinction as primarily the deaths, not the prevented future lives, though, so averting extinction wouldn't get astronomical weight.
Yeah, agreed that something probably won't get astronomical weight if we're doing (non-fanatical forms of) moral pluralism. The paper you cite seems to suggest that, although people initially see the badness of extinction as primarily the deaths, that's less true when they reflect:
More people find extinction uniquely bad when [...] they are explicitly prompted to consider long-term consequences of the catastrophes. [...] Finally, we find that (d) laypeople—in line with prominent philosophical arguments—think that the quality of the future is relevant: they do find extinction uniquely bad when this means forgoing a utopian future.
MichaelStJules @ 2021-09-25T02:52 (+2)
Curious why you think this first part? Seems plausible but not obvious to me.
I think, for example, it's silly to create more people just so that we can instantiate autonomy/freedom in more people, and I doubt many people think of autonomy/freedom this way. I think the same is true for truth/discovery (and my own example of justice). I wouldn't be surprised if it wasn't uncommon for people to want more people to be born for the sake of having more love or beauty in the world, although I still think it's more natural to think of these things as only mattering conditionally on existence, not as a reason to bring them into existence (compared to non-existence, not necessarily compared to another person being born, if we give up the independence of irrelevant alternatives or transitivity).
I also think a view of preference satisfaction that assigns positive value to the creation and satisfaction of new preferences is perverse in a way, since it allows you to ignore a person's existing preferences if you can create and satisfy a sufficiently strong preference in them, even against their wishes to do so.
I have trouble seeing how this is a meaningful claim. (Maybe it's technically right if we assume that any claim about the elements of an empty set is true, but then it's also true that, in an empty future, everyone is oppressed and miserable. So non-empty flourishing futures remain the only futures in which there is flourishing without misery.)
Sorry, I should have been more explicit. You wrote "In the absence of a long, flourishing future, a wide range of values (not just happiness) would go for a very long time unfulfilled", but we can also have values that would be frustrated for a very long time if we don't go extinct, including even in a future that looks mostly utopian. I also think it's likely the future will contain misery.
More people find extinction uniquely bad when [...] they are explicitly prompted to consider long-term consequences of the catastrophes. [...] Finally, we find that (d) laypeople—in line with prominent philosophical arguments—think that the quality of the future is relevant: they do find extinction uniquely bad when this means forgoing a utopian future.
That's fair. From the paper:
(Recall that the first difference was the difference between no catastrophe and a catastrophe killing 80%, and the second difference the difference between a catastrophe killing 80% and a catastrophe killing 100%.) We therefore asked participants who gave the expected ranking (but not the other participants) which difference they judged to be greater. We found that most people did not find extinction uniquely bad: only a relatively small minority (23.47%, 50/213 participants) judged the second difference to be greater than the first difference.
It is worth noting that this still doesn't tell us how much greater the difference between total extinction and a utopian future is compared to an 80% loss of life in a utopian future. Furthermore, people are being asked to assume the future will be utopian ("a future which is better than today in every conceivable way. There are no longer any wars, any crimes, or any people experiencing depression or sadness. Human suffering is massively reduced, and people are much happier than they are today."), which we may have reason to doubt.
When they were just asked to consider the very long-term consequences in the salience condition, only about 50% in the UK sample thought extinction was uniquely bad and <40% did in the US sample. This is the salience condition:
When you do so, please remember to consider the long-term consequences each scenario will have for humanity. If humanity does not go extinct, it could go on to a long future. This is true even if many (but not all) humans die in a catastrophe, since that leaves open the possibility of recovery. However, if humanity goes extinct (if 100% are killed), there will be no future for humanity.
They were also not asked their views on futures that could be worse than now for the average person (or moral patient, generally).
Mauricio @ 2021-09-26T07:57 (+3)
Fair points. Your first paragraph seems like a good reason for me to take back the example of freedom/autonomy, although I think the other examples remain relevant, at least for nontrivial minority views. (I imagine, for example, that many people wouldn't be too concerned about adding more people to a loving future, but they would be sad about a future having no love at all, e.g. due to extinction.)
(Maybe there's some asymmetry in people's views toward autonomy? I share your intuition that most people would see it as silly to create people so they can have autonomy. But I also imagine that many people would see extinction as a bad affront to the autonomy that future people otherwise would have had, since extinction would be choosing for them that their lives aren't worthwhile.)
only about 50% in the UK sample thought extinction was uniquely bad
This seems like more than enough to support the claim that a wide variety of groups disvalue extinction, on (some) reflection.
I think you're generally right that a significant fraction of non-utilitarian views wouldn't be extremely concerned by extinction, especially under pessimistic empirical assumptions about the future. (I'd be more hesitant to say that many would see it as an actively good thing, at least since many common views seem like they'd strongly disapprove of the harm that would be involved in many plausible extinction scenarios.) So I'd weaken my original claim to something like: a significant fraction of non-utilitarian views would see extinction as very bad, especially under somewhat optimistic assumptions about the future (much weaker assumptions than e.g. "humanity is inherently super awesome").
seanrson @ 2021-09-24T20:53 (+2)
Re: the dependence on future existence concerning the values of "freedom/autonomy, relationships (friendship/family/love), art/beauty/expression, truth/discovery, the continuation of tradition/ancestors' efforts, etc.," I think that most of these (freedom/autonomy, relationships, truth/discovery) are considered valuable primarily because of their role in "the good life," i.e. their contribution to individual wellbeing (as per "objective list" theories of wellbeing), so the contingency seems pretty clear here. Much less so for the others, unless we are convinced that people only value these instrumentally.
Mauricio @ 2021-09-24T22:12 (+1)
Thanks! I think I see how these values are contingent in the sense that, say, you can't have human relationships without humans. Are you saying they're also contingent in the sense that (*) creating new lives with these things has no value? That's very unintuitive to me. If "the good life" is significantly more valuable than a meh life, and a meh life is just as valuable as nonexistence, doesn't it follow that a flourishing life is significantly more valuable than nonexistence?
(In other words, "objective list" theories of well-being (if they hold some lives to be better than neutral) + transitivity seem to imply that creating good lives is possible and valuable, which implies (*) is false. People with these theories of well-being could avoid that conclusion by (a) rejecting that some lives are better than neutral, or (b) by rejecting transitivity. Do they?)
seanrson @ 2021-09-24T23:41 (+4)
I mostly meant to say that someone who otherwise rejects totalism would agree to (*), so as to emphasize that these diverse values are really tied to our position on the value of good lives (whether good = virtuous or pleasurable or whatever).
Similarly, I think the transitivity issue has less to do with our theory of wellbeing (what counts as a good life) and more to do with our theory of population ethics. As to how we can resolve this apparent issue, there are several things we could say. We could (as I think Larry Temkin and others have done) agree with (b), maintaining that 'better than' or 'more valuable than' is not a transitive relation. Alternatively, we could adopt a sort of "tethered good approach" (following Christine Korsgaard), where we maintain that claims like "A is better/more valuable than B" are only meaningful insofar as they are reducible to claims like "A is better/more valuable than B for person P." In that case, we might deny that "a meh life is just as valuable as [or more/less valuable than] nonexistence " is meaningful, since there's no one for whom it is more valuable (assuming we reject comparativism, the view that things can be better or worse for merely possible persons). Michael St. Jules is probably aware of better ways this could be resolved. In general, I think that a lot of this stuff is tricky and our inability to find a solution right now to theoretical puzzles is not always a good reason to abandon a view.
Mauricio @ 2021-09-26T07:33 (+1)
Hm, I can't wrap my head around rejecting transitivity.
we could adopt a sort of "tethered good approach" (following Christine Korsgaard), where we maintain that claims like "A is better/more valuable than B" are only meaningful insofar as they are reducible to claims like "A is better/more valuable than B for person P."
Does this imply that bringing tortured lives into existence is morally neutral? I find that very implausible. (You could get out of that conclusion by claiming an asymmetry, but I haven't seen reasons to think that people with objective list theories of welfare buy into that.) This view also seems suspiciously committed to sketchy notions of personhood.
seanrson @ 2021-09-26T23:11 (+1)
Yeah I’m not totally sure what it implies. For consequentialists, we could say that bringing the life into existence is itself morally neutral; but once the life exists, we have reason to end it (since the life is bad for that person, although we’d have to make further sense of that claim). Deontologists could just say that there is a constraint against bringing into existence tortured lives, but this isn’t because of the life’s contribution to some “total goodness” of the world. Presumably we’d want some further explanation for why this constraint should exist. Maybe such an action involves an impermissible attitude of callous disregard for life or something like that. It seems like there are many parameters we could vary but that might seem too ad hoc.
Chi @ 2021-10-19T10:25 (+3)
Again, I haven't actually read this, but this article discusses intransitivity in asymmetric person-affecting views, i.e. I think in the language you used: The value of pleasure is contingent in the sense that creating new lives with pleasure has no value. But the disvalue of pain is not contingent in this way. I think you should be able to directly apply that to other object-list theories that you discuss instead of just hedonistic (pleasure-pain) ones.
An alternative way to deal with intransitivity is to say that not existing and any life are incomparable. This gives you the unfortunate situation that you can't straightforwardly compare different worlds with different population sizes. I don't know enough about the literature to say how people deal with this. I think there's some long work in the works that's trying to make this version work and that also tries to make "creating new suffering people is bad" work at the same time.
I think some people probably do think that they are comparable but reject that some lives are better than neutral. I expect that that's rarer though?
MichaelStJules @ 2021-09-25T03:10 (+2)
That's very unintuitive to me. If "the good life" is significantly more valuable than a meh life, and a meh life is just as valuable as nonexistence, doesn't it follow that a flourishing life is significantly more valuable than nonexistence?
Under the asymmetry, any life is at most as valuable as nonexistence, and depending on the particular view of the asymmetry, may be as good only when faced with particular sets of options.
- If you can bring a good life into existence or none, it is at least permissible to choose none, and under a non-antinatalist asymmetry (i.e. any asymmetry that doesn't lead to principled antinatalism, on which basically all but perfect lives are bad), it's permissible to choose either.
- If you can bring a good life into existence, a flourishing life into existence or none, it is at least permissible to choose none, and under a wide view of the asymmetry (basically to solve the nonidentity problem), it is not permissible to bring the merely good life into existence. Under a non-antinatalist asymmetry (which can be wide or narrow), it is permissible to bring the flourishing life into existence. Under a narrow (not wide) non-antinatalist asymmetry, all three options are permissible.
If you accept transitivity and the independence of irrelevant alternatives, instead of having the flourishing life better than none, you could have a principled antinatalism:
meh life < good life < flourishing life ≤ none,
although this doesn't follow.
Mauricio @ 2021-09-26T08:03 (+3)
Thanks! I can see that for people who accept (relatively strong versions of) the asymmetry. But (I think) we're talking about what a wide range of ethical views say--is it at all common for proponents of objective list theories of well-being to hold that the good life is worse than nonexistence? (I imagine, if they thought it was that bad, they wouldn't call it "the good life"?)
MichaelStJules @ 2021-09-26T08:15 (+2)
is it at all common for proponents of objective list theories of well-being to hold that the good life is worse than nonexistence?
I think this would be pretty much only antinatalists who hold stronger forms of the asymmetry, and this kind of antinatalism (and indeed all antinatalism) is relatively rare, so I'd guess not.
jackmalde @ 2021-09-24T20:47 (+21)
Thanks for this post. I am always interested to hear why people are sceptical of longtermism.
If I were to try to summarise your view briefly (which is helpful for my response) I would say:
- You have person-affecting tendencies which make you unconcerned with reducing extinction risks
- You are suffering-focused
- You don’t think humanity is very good now nor that it is likely to be in the future under a sort of ‘business as usual’ path, which makes you unenthusiastic about making the future long or big
- You don’t think the future will be long (unless we have totalitarianism) which reduces the scope for doing good by focusing on the future
- You’re sceptical there are lock-in scenarios we can affect within the next few decades, and don’t think there is much point of trying to affect them beyond this time horizon
I’m going to accept 1, 2 as your personal values and I won’t try to shift you on them. I don’t massively disagree on point 3.
I’m not sure I completely agree on point 4 but I can perhaps accept it as a reasonable view, with a caveat. Even if the future isn’t very long in expectation, surely it is kind of long in expectation? Like probably more than a few hundred years? If this is the case, might it be better to be some sort of “medium-termist” as opposed to a “traditional neartermist”. For example, might it be better to tackle climate change than to give out malarial bednets? I’m not sure if the answer is yes, but it’s something to think about.
Also, as has been mentioned, if we can only have long futures under totalitarianism, which would be terrible, might we want to reduce risks of totalitarianism?
Moving onto point 5 and lock-in scenarios. Firstly, I do realise that the constellation of your views means that the only type of x-risk you are likely to care about is s-risks, so I will focus on lock-in events that involve vast amounts of suffering. With that in mind, why aren’t you interested in something like AI alignment? Misaligned AI could lock in vast amounts of suffering. We could also create loads of digital sentience that suffers vastly. And all this could happen this century. We can’t be sure of course, but it does seem reasonable to worry about this given how high the stakes are and the uncertainty over timelines. Do you not agree? There may also be other s-risks that may have potential lock-ins in the nearish future, but I’d have to read more.
My final question, still on point 5, is why don’t you think we can affect probabilities of lock-in events that may happen beyond the next few decades? What about growing the Effective Altruism/longtermist community, or saving/investing money for the future, or improving values? These are all things that many EAs think can be credible longtermist interventions and could reasonably affect chances of lock-in (including of the s-risk kind) beyond the next few decades as they essentially increase the number of thoughtful/good people in the future or the amount of resources such people have at their disposal. Do you disagree?
Denise_Melchin @ 2021-09-25T07:37 (+5)
Thanks for trying to summarise my views! This is helpful for me to see where I got the communication right and where I did not. I'll edit your summary accordingly where you are off:
- You have person-affecting tendencies which make you ~~unconcerned~~ less concerned with reducing extinction risks than longtermists, although you are still concerned about the nearterm impacts and put at least some value on the loss of future generations (which also depends on how long/big we can expect the future to be)
- You are suffering-focused [Edit: I would not have previously described my views that way, but I guess it is an accurate enough description]
- You don’t think humanity is very good now nor that it is likely to be in the future under a sort of ‘business as usual’ path, which makes you ~~unenthusiastic about~~ want to prioritise making the future good over making it long or big
- You don’t think the future will be long ~~(unless we have totalitarianism)~~ which reduces the scope for doing good by focusing on the future
- You’re ~~sceptical~~ clueless whether there are lock-in scenarios we can affect within the next few decades, and don’t think there is much point of trying to affect them beyond this time horizon
jackmalde @ 2021-09-25T07:59 (+4)
Thanks for that. To be honest I would say the inaccuracies I made are down to sloppiness by me rather than by you not being clear in your communication. Having said that none of your corrections change my view on anything else I said in my original comment.
Davidmanheim @ 2021-09-26T06:56 (+18)
"If there have been any intentional impacts for more than a few hundred years out"
There have been a number of stabilizing religious institutions which were built for exactly this purpose, both Jewish and Christian. They intended to maintain the faiths of members and peace between them, and have been somewhere between very and incredibly successful in doing so, albeit imperfectly. Similarly, Temple-era Judaism seems to have managed a fairly stable system for several hundred years, including rebuilding the Temple after its destruction. We also have the example of Chinese dynasties and at least several European monarchies which intended to plan for centuries, and were successful in doing so.
But given the timeline of "more than a few hundred years out," I'm not sure there are many other things which could possibly qualify. On a slightly shorter timescale, there are many, many more examples. The US government seems like one example - an intentionally built system which lasted for centuries and spawned imitators which were also largely successful. But on larger and smaller scales, we've seen 200+ year planning be useful in many, many cases, where it occurred.
The question of what portion of such plans worked out is a different one, and a harder one to answer, but it's obviously a minority. I'm also unsure whether there are meaningful differentiators between cases where it did and didn't work, but it's a really good question, and one that I'd love to see work on.
Denise_Melchin @ 2021-09-25T08:47 (+17)
Thank you everyone for the many responses! I will address one point which came up in multiple comments here as a top-level comment, and otherwise respond to comments.
Regarding the length of the long-term future: My main concern here is that it seems really hard to reach existential security (i.e. extinction risks falling to smaller and smaller levels), especially given that extinction risks have been rising in recent decades. If we do not reach existential security, the expected future population is accordingly much smaller and gets less weight in my considerations. I take concerns around extinction risks seriously - but they are an argument against longtermism, not in favour of it. It just seems really weird to me to jump from 'extinction risks are rising so much, we must prioritize them!' to 'there is lots of value in the long-term future'. The latter is only true if we manage to get rid of those extinction risks.
The line about totalitarianism is not central for me. Oops. Clearly should not have opened the section with a reference to it.
I think even with totalitarianism reaching existential security is really hard - the world would need to be permanently locked into a totalitarian state.
I recommend reading this shortform discussion on reaching existential security.
Something that stood out to me in that discussion (in a comment by Paul Christiano: "Stepping back, I think the key object-level questions are something like "Is there any way to build a civilization that is very stable?" and "Will people try?" It seems to me you should have a fairly high probability on "yes" to both questions."), as well as in Toby's EAG Reconnect AMA, is how much of the belief that we can reach existential security might be based on a higher level of baseline optimism about humanity than I have.
Denise_Melchin @ 2021-09-27T17:41 (+14)
This is just a note that I still intend to respond to a lot of comments, but I will be slow! (I went into labour as I was writing my previous batch of responses and am busy baby cuddling now.)
jackmalde @ 2021-09-25T09:12 (+2)
I think you mean to say 'existential risk' rather than 'extinction risk' in this comment?
I think even with totalitarianism reaching existential security is really hard - the world would need to be permanently locked into a totalitarian state.
Something I didn't say in my other comment is that I do think the future could be very, very long under a misaligned AI scenario. Such an AI would have some goals, and it would probably be useful to have a very long time to achieve those goals. This wouldn't really matter if there was no sentient life around for the AI to exploit, but we can't be sure that this would be the case as the AI may find it useful to use sentient life.
Overall I am interested to hear your view on the importance of AI alignment as, from what I've heard, it sounds like it could still be important taking into account your various views.
Charles He @ 2021-09-25T09:12 (+1)
If we do not reach existential security the future population is much smaller accordingly and gets less weight in my considerations. I take concerns around extinction risks seriously - but they are an argument against longtermism, not in favour of it. It just seems really weird to me to jump from 'extinction risks are rising so much, we must prioritize them!' to 'there is lots of value in the long-term future'. The latter is only true if we manage to get rid of those extinction risks.
I don’t understand. It seems that you could see the value of the long-term future as being unrelated to the probability of x-risk. Then, the more you value the long-term future, the more you value reducing x-risk.
I think a sketch of the story might go: let’s say your value for reaching the best final state of the long term future is "V".
If there's a 5%, 50%, or 99.99% risk of extinction, that doesn’t affect V (but might make us sadder that we might not reach it).
Generally (e.g. assuming that x-risk can be practically reduced), the higher your value of V, the more likely you are to work on x-risk.
It seems like this explains why the views are correlated, “extinction risks are rising so much, we must prioritize them!” and “there is lots of value in the long-term future”. So these views aren't a contradiction.
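To spell that out with a toy expected-value sketch (V as above; p and Δp are notation introduced here purely for illustration, not anything from the original comment):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Toy sketch only: assume the future is worth V if we avoid extinction and 0 otherwise,
% and that p is the probability of extinction.
\[ E[\text{value of the future}] = (1 - p)\,V \]
% Reducing extinction risk from p to p - \Delta p changes the expected value by
\[ \bigl(1 - (p - \Delta p)\bigr)V - (1 - p)V = \Delta p \cdot V \]
% So the payoff of x-risk work scales with V (and with how much risk you can reduce),
% while V itself does not depend on p: "extinction risk is high" and "the long-term
% future is very valuable" are independent claims that can be held together.
\end{document}
```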
Am I slipping in some assumption or have I failed to capture what you envisioned?
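A minimal sketch of this argument in code, with purely illustrative numbers (V, delta_p and the specific figures below are my assumptions, not anything from the thread):

```python
# Toy model: V is how much you value reaching a good long-term future, and an
# intervention raises the probability of getting there by delta_p. The expected
# gain is delta_p * V, so it scales with V and is independent of how high the
# current level of extinction risk happens to be.

def gain_from_xrisk_reduction(V: float, delta_p: float) -> float:
    """Expected value gained by raising the probability of a good long-term future by delta_p."""
    return delta_p * V

for V in (1e3, 1e9, 1e15):
    gain = gain_from_xrisk_reduction(V, delta_p=0.01)  # a 1-percentage-point risk reduction
    print(f"V = {V:.0e}: expected gain = {gain:.0e}")
```

On this toy model, someone who thinks V is enormous finds x-risk reduction valuable whether the current risk is 5% or 99.99%, which is why holding both views together isn't a contradiction.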
UriKatz @ 2021-09-24T01:51 (+10)
I am largely sympathetic to the main thrust of your argument (borrowing from your own title: I am probably a negative utilitarian), but I have 2 disagreements that ultimately lead me to a very different conclusion on longtermism and global priorities:
- Why do you assume we cannot affect the future further than 100 years out? There are numerous examples of humans doing just that: in science (inventing the wheel, electricity or gunpowder), government (the US constitution), religion (the Buddhist Pali canon, the Bible, the Quran), philosophy (utilitarianism), and so on. One can even argue that the works of Shakespeare have had an effect on people for hundreds of years.
- Though humanity is not inherently awesome, it does not inherently suck either. Humans have the potential to do amazing things, for good or evil. If we can build a world with a lot less war and crime and a lot more collaboration and generosity, isn't it worth a try? In Parfit's beautiful words: "Life can be wonderful as well as terrible, and we shall increasingly have the power to make life good. Since human history may be only just beginning, we can expect that future humans, or supra-humans, may achieve some great goods that we cannot now even imagine. In Nietzsche’s words, there has never been such a new dawn and clear horizon, and such an open sea ... Some of our successors might live lives and create worlds that, though failing to justify past suffering, would give us all, including some of those who have suffered, reasons to be glad that the Universe exists.”
Khorton @ 2021-09-24T07:43 (+18)
I'm not Denise, but I agree that we can and will all affect the long-term future. The children we have or don't have, the work we do, the lives we save, will all affect future generations.
What I'm more skeptical about is the claim that we can decide /how/ we want to affect future generations. The Bible has certainly had a massive influence on world history, but it hasn't been exclusively good, and the apostle Paul would have never guessed how his writing would influence people even a couple hundred years after his death.
UriKatz @ 2021-09-24T17:06 (+7)
Hi Khorton,
If by “decide” you mean control the outcome in any meaningful way, I agree, we cannot. However I think it is possible to make a best-effort attempt to steer things towards a better future (in small and big ways). Mistakes will be made, progress is never linear and we may even fail altogether, but the attempt is really all we have, and there is reason to believe in a non-trivial probability that our efforts will bear fruit, especially compared to not trying or to aiming towards something else (like maximum power in the hands of a few).
For a great exploration of this topic I refer to this talk by Nick Bostrom: http://www.stafforini.com/blog/bostrom. The tl;dr is that we can come up with evaluation functions for states of the world which, while not yet our desired outcome, indicate that we are probably moving in the right direction. We can then figure out how we get to the very next state, in the near future. Once there, we will chart a course for the next state, and so on. Bostrom singles out technology, collaboration and wisdom as traits humanity will need a lot of in the better future we are envisioning, so he suggests we can measure them with our evaluation function.
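A toy sketch of the evaluation-function idea, where the traits follow the talk but the weights and scores are made-up assumptions for illustration (nothing here is Bostrom's actual proposal):

```python
from dataclasses import dataclass

@dataclass
class WorldState:
    # 0-1 scores for the traits Bostrom singles out in the talk.
    technology: float
    collaboration: float
    wisdom: float

def evaluate(state: WorldState) -> float:
    """Higher score = more likely we are moving towards a good long-term future."""
    # Arbitrary placeholder weights; the point is only that we score intermediate
    # states rather than trying to plan the final outcome directly.
    return 0.3 * state.technology + 0.4 * state.collaboration + 0.3 * state.wisdom

# Steering: among the near-term states we could plausibly reach, aim for the one
# the evaluation function scores highest, then repeat from the new state.
candidates = [
    WorldState(technology=0.7, collaboration=0.4, wisdom=0.3),
    WorldState(technology=0.6, collaboration=0.6, wisdom=0.5),
]
best = max(candidates, key=evaluate)
print(best, evaluate(best))
```

This is the chess-engine analogy in miniature: you can't calculate to the end of the game, so you pick moves that improve a heuristic evaluation of the position.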
Khorton @ 2021-09-24T19:53 (+3)
To me that doesn't sound very different from "I want a future with less suffering, so I'm going to evaluate my impact based on how far humanity gets towards eradicating malaria and other painful diseases". Which I guess is consistent with my views but doesn't sound like most long-termists I've met.
UriKatz @ 2021-09-25T12:53 (+1)
Well, it wouldn't work if you said “I want a future with less suffering, so I am going to evaluate my impact based on how many paper clips exist in the world at a given time”. Bostrom selects collaboration, technology and wisdom because he thinks they are the most important indicators of a better future and reduced x-risk. You are welcome to suggest other parameters for the evaluation function, of course, but not every parameter works. If you read the analogy to chess in the link I posted, it will become much clearer how Bostrom is thinking about this.
(if anyone reading this comment knows of evolutions in Bostrom’s thought since this lecture I would very much appreciate a reference)
MichaelStJules @ 2021-09-24T15:55 (+8)
For similar moral views (asymmetric, but not negative utilitarian), this paper might be of interest:
Teruji Thomas, "The Asymmetry, Uncertainty, and the Long Term" (also on the EA Forum). See especially section 6 (maybe after watching the talk, instead of reading the paper, since the paper gets pretty technical).
Denise_Melchin @ 2022-04-21T09:23 (+6)
This is a link collection for content relevant to my post published since, for ease of reference.
Focusing on the empirical arguments to prioritise x-risks instead of philosophical ones (which I could not be more supportive of):
- Carl Shulman’s 80,000 Hours podcast on the common sense case for existential risk
- Scott Alexander writing about the terms long-termism and existential risks
On the definition of existential risk (as I find Bostrom’s definition dubious):
- Based on this comment thread in a different question by Linch
- Zoe’s paper, which also has other stuff I have not yet read in full
How GCBRs could remain a solved problem, thereby getting us closer to existential security:
- A blogpost by Carl which was cross-posted to the EA Forum later than it was published on the blog
EdoArad @ 2021-09-24T09:01 (+6)
Thanks for this clear write-up in an important discussion :)
I'm not sure where exactly my own views lie, but let me engage with some of your points with the hope of clarifying my own views (and hopefully also help you or other readers).
You say that you care more about the preferences of people than about total wellbeing, and that it'd change your mind if it turned out that people today prefer longtermist causes.
What do you think about the preferences of future people? You seem to take the "rather make people happy than to make happy people" point of view on population ethics, but future preferences extend beyond their preference to exist. Since you also aren't interested in a world where trillions of people watch Netflix all day, I take it that you don't take their preferences as that important.
That said, you clearly do care about the shape of the future of humanity. Whether people have freedom, whether people suffer, whether they are morally righteous, etc. In fact, you seem to be pretty pessimistic about humanity's future in those aspects. Also, it seems like you aren't interested in transhumanist futures - at least, not how they are usually depicted.
Some thoughts on that. But first, please let me know if (where) I was off in any of the above. Sorry if I've misinterpreted your views.
- I think that the length of the long-term future might be a strong double-crux here. If you'd expect the future to be mostly devoid of value, or even just not many orders of magnitude more valuable than the near future, then I'd find it very hard to justify working on longtermist causes (mostly due to tractability). Instead of addressing that, I'll just respond to your other points conditional on there being a likely long-term future with lots of valuable life.
- I feel some uneasiness about not considering future people's preferences as mostly equal to those of people alive today. I think that the way I feel about it is somewhat like child-rearing: I'd want some sort of a balance between directing my children towards becoming "better people" and giving them the freedom to make their own choices and binge on Netflix. Furthermore, I can already predict many of their preferences and make some preparation for them (say, save up money or buy an apartment in a child-friendly area). Another analogy here is that of colonialism, where one entity acts to shape the future of another (weaker) entity. Overall, I feel like we have a lot of responsibility for future people and we should take care not to enforce our own worldview too much.
- Very relevant is the question of whether moral growth is possible (or even expected). I'm not sure of my own views here, but I definitely think that improving moral progress could potentially be a very important cause.
- I think that some sort of a transhumanist future is inevitable. It's hard for me to imagine economic/intellectual progress completely stopping or slowing down drastically forever without any major catastrophe, and it's hard for me to imagine non-transhumanist futures with consistent exponential growth. Holden Karnofsky makes this case here in his recent The Most Important Century series.
- Now, since you seem to disvalue transhumanist futures, I think this might be where our opinions differ the most, but also where they are most malleable. I can imagine many potential futures where sentient beings live in abundance and have meaningful lives. I don't think that paperclip-maximizers and ruthless dictatorships are the most likely futures (although I do think that these kinds of futures are important risks). For one thing, our values aren't that weird. But other than that, a likely scenario is that of gradual moral change, rather than locking in to some malign set of random values. I think that some discussions of Utopias are very relevant here, but they may be misleading. This is something I want to think more about, as I'm easily biased into believing weird futuristic scenarios.
seanrson @ 2021-09-24T14:59 (+7)
You say that you care more about the preferences of people than about total wellbeing, and that it'd change your mind if it turned out that people today prefer longtermist causes. What do you think about the preferences of future people? You seem to take the "rather make people happy than to make happy people" point of view on population ethics, but future preferences extend beyond their preference to exist. Since you also aren't interested in a world where trillions of people watch Netflix all day, I take it that you don't take their preferences as that important.
What do you mean by this?
OP said, "I also care about people’s wellbeing regardless of when it happens." Are you interpreting this concern about future people's wellbeing as not including concern about their preferences? I think the bit about a Netflix world is consistent with caring about future people's preferences contingent on future people existing. If we accept this kind of view in population ethics, we don't have welfare-related reasons to ensure a future for humanity. But still, we might have quasi-aesthetic desires to create the sort of future that we find appealing. I think OP might just be saying that they lack such quasi-aesthetic desires.
(As an aside, I suspect that quasi-aesthetic desires motivate at least some of the focus on x-risks. We would expect that people who find futurology interesting would want the world to continue, even if they were indifferent to welfare-related reasons. I think this is basically what motivates a lot of environmentalism. People have a quasi-aesthetic desire for nature, purity, etc., so they care about the environment even if they never ground this in the effects of the environment on conscious beings.)
Perhaps you are referring to the value of creating and satisfying these future people's preferences? If this is what you meant, a standard line for preference utilitarians is that preferences only matter once they are created. So the preferences of future people only matter contingent on the existence of these people (and their preferences).
There are several ways to motivate this, one of which is the following: would it be a good thing for me to create in you entirely new preferences just so I can satisfy them? We might think not.
This idea is captured in Singer's Practical Ethics (from back when he espoused preference utilitarianism):
The creation of preferences which we then satisfy gains us nothing. We can think of the creation of the unsatisfied preferences as putting a debit in the moral ledger which satisfying them merely cancels out... Preference Utilitarians have grounds for seeking to satisfy their wishes, but they cannot say that the universe would have been a worse place if we had never come into existence at all.
EdoArad @ 2021-09-25T06:18 (+5)
Good points, thanks :) I agree with everything here.
One view on how we impact the future is to ask how we would want to construct it, assuming we had direct control over it. I think this view lends more support to the points you make, and it is where population ethics feels much murkier to me.
However, there are some things we can put at least some credence on future people valuing. For example, I think that it's more likely than not that future people would value their own welfare. So while it's not an argument for preventing x-risk (as that runs into the same population ethics problems), it is still an argument for other types of possible longtermist interventions, and it definitely points at where (a potentially enormous amount of) value lies. For instance, I expect working on moral circle expansion to be very important from this perspective (although I'm not sure how promising the actual interventions there are).
Regarding quasi-aesthetic desires, I agree and think that this is very important to understand further. Personally, I'm confused as to whether I should value these kinds of desires (even at the expense of something based on welfarism), or whether I should think of these as a bias to overcome. As you say, I also guess that this might be behind some of the reasons for differing stances on cause prioritization.
MichaelStJules @ 2021-09-24T15:45 (+5)
(My views are suffering-focused and I'm not committed to longtermism, although I'm exploring s-risks slowly, mostly passively.)
I do not feel compelled by arguments that the future could be very long. I do not see how this is possible without at least soft totalitarianism, which brings its own risks of reducing the value of the future.
Do you mean you expect all of our descendants to be wiped out, with none left? What range would you give for your probability of extinction (or unrecoverable collapse) each year?
If we colonize space and continue to expand (which doesn't seem extraordinarily unlikely), the probabilities of extinction in distant colonies become less and less correlated, and the probability of all colonies being wiped out with none left to continue expanding would decrease over time. Maybe this doesn't happen fast enough, in your view?
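A toy illustration of the correlation point, assuming (unrealistically) fully independent colonies and ignoring risks that could hit all of them at once; the 1% figure is made up:

```python
# If there are N independent colonies, each with a 1% chance of being wiped out over
# some period, the chance that all of them are wiped out in that period is 0.01**N,
# which falls off very quickly as N grows.
for n_colonies in (1, 2, 5, 10):
    p_all_gone = 0.01 ** n_colonies
    print(f"{n_colonies} colonies -> P(all wiped out) ~ {p_all_gone:.0e}")
```

Anything that correlates the risks across colonies would weaken this effect, which seems to be where the disagreement lies.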
Vasco Grilo @ 2022-04-29T12:10 (+4)
Thanks for the post. Here are some comments (I am confident there is considerable overlap with the other comments, but I have not read them):
- What was done well:
  - Willingness to challenge EA ideas in order to better understand them and improve them.
  - Points to possibly neglected topics in long-termism (e.g. mitigation of very bad outcomes).
  - Sections “What would convince me otherwise”.
  - Good arguments for why it is uncertain whether the long-term future will be good/bad.
- What could be improved:
  - “While creating happy people is valuable, I view it as much less valuable than making sure people are not in misery. Therefore I am not extremely concerned about the lost potential from extinction risks”.
    - What about x-risks which do not involve extinction? For example, decreasing s-risk would decrease the likelihood of a future with large “misery”.
  - Sections “I do not think humanity is inherently super awesome” and “I am unsure whether the future will be better than today”.
    - Longtermism only requires that most of the expected value of our actions is in the future. It does not rely on predictions about how good the future will be.
  - Section “The length of the long-term future”.
    - Similarly, given the uncertainty about the length of the long-term future (Toby Ord guesses in The Precipice there is a “one in two chance that humanity avoids every existential catastrophe and eventually fulfils its potential”), most of the expected value should still lie in the long-term future (see the sketch after this list).
    - Explicit expected value calculations could overestimate the importance of the long term. However, a more accurate Bayesian approach could still favour the long term as long as the prior is not unreasonably narrow.
  - Section “The ability to influence the long-term future”.
    - The concept of s-risk could be mentioned here.
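A rough sketch of that expected value point, using number of lives as a crude proxy for value; only the “one in two” figure comes from The Precipice, and the population numbers are illustrative assumptions:

```python
p_fulfil_potential = 0.5           # Ord's guess in The Precipice
present_lives = 8e9                # roughly today's population
future_lives_if_potential = 1e16   # hypothetical size of a long, big future (pure assumption)

expected_future_lives = p_fulfil_potential * future_lives_if_potential
print(expected_future_lives / present_lives)  # ~6e5 on these numbers
# Even at 50-50 odds, the expected future dwarfs the present on these assumptions,
# so most expected value would sit in the long term - unless one's prior on the
# future's size is far narrower than the numbers used here.
```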
Hoa Do @ 2024-10-03T23:15 (+1)
A lot to say on this, but since I'm short on time I'll just make some quick points.
- Humanity is "not super awesome" in comparison to what alternative? From my POV, I am grateful that humanity has any measure of empathy, charity and compassion at all. I can imagine much worse, and to humanity's credit, we live in a very hostile universe. Have we come to forget how cruel nature can be?
- Are we taking for granted that being "good" or altruistic is not a natural state? It requires a lot of foresight, it is learned, and it is instilled through generations of evolution, which I believe is still in the process of selecting for it.
- It is hard to be "good". It is easier to be less bad, but we are most likely doing a lot of bad without realizing it most of the time. I don't find it so easy to point fingers at others for not being so moral. Their life is an entire universe I have yet to discover. I don't know them and their burdens. I am grateful that I am in a position to think of morals and ethics at all. Charity is a privilege.
- Perhaps I believe that humans are generally good and that they behave badly because they are lacking something which has prevented their growth into becoming better humans. I think Our World in Data really highlights the trend of what humanity does as it becomes more efficient/wealthy.
- Could humanity become totalitarian and cruel? Absolutely. Which is why EA and our fight are so important. Perhaps I believe in humanity because I must.
- Lastly, I can't help but feel a sense of irony at people living in the modern age in developed countries thinking humanity is bad while benefiting from all the labor of our ancestors. We are so privileged today, standing on their shoulders, all the while talking about how bad it is.