Response to recent criticisms of EA "longtermist" thinking

By kbog @ 2020-01-06T04:31 (+27)

This is a response to some recent criticisms of "Longtermist" EA thinking. I have organized it in the form of an FAQ responding to concerns.

Does the Bostromian paradigm rely on transhumanism and an impersonal, totalist utilitarianism?

Some object that the long-term paradigm stems from two axiological positions: utilitarianism and transhumanism.

Bostrom’s views do not rely on utilitarianism. They do require that the future be considered potentially extremely valuable relative to the present, based on quality and/or quantity of life, so some sort of value aggregation is required. However, intrinsic discounting, as well as a variety of nonconsequentialist commitments about present-day conduct (duties against lying, killing, and so on), is fully compatible with Bostrom’s paradigm.

Bostrom’s paradigm doesn’t quite require transhumanism. If humanity reaches a stable state of Earthly affairs, we could in theory continue for hundreds of millions of years, being born and dying in happy 100-year cycles, which is sufficient for an extremely valuable long-run future. Existential risks may be a big problem over this timeframe, however. Conscious simulations or human space colonization would be required for a reliably super-valuable far future.

Conscious simulations might technically not be considered transhumanism. The idea that we can upload our current brains onto computers is generally considered transhumanism, but that is not the only way of having conscious simulations or computations. Of course, conscious intelligent simulations remain a pretty "out there" sci-fi scenario.

Space travel may require major changes to humans in order to be successful. We could, in theory, focus entirely on terraforming and travel with Earthlike space arks; this would enable major space travel with no transhumanism, but it would be hard, and our descendants would almost certainly choose a different route. If we made minor genetic changes to make humans more resilient against radiation and low-gravity environments, that could greatly reduce the difficulty of space travel, though it’s unclear whether this should be considered transhumanism. Proper transhumanism to make us smarter, longer-lived and more cooperative would broadly help, however. Another option is to have space travel and terraforming done by automated systems, with the first humans being very similar to us, except for being conceived, born and raised de novo by robots. Again, I don’t know if this is technically transhumanism, although it is certainly ‘out there.’

Finally, you could believe transhumanism will only be undertaken for key purposes like space travel. Just because we can train astronauts does not mean we all want to become astronauts. Transhumanism could be like astronaut training: something clunky and unpleasant that is authorized for a few, but not done by ordinary people on Earth or on terraformed worlds.

In summary, while there are some ideas shared with utilitarianism and transhumanism, neither utilitarian moral theory nor the aspiration to broadly re-engineer humanity are really required for a long-term view.

If someone has an objection to axiological utilitarianism or axiological transhumanism, it’s best for them to think carefully about what their particular objections are, and then see whether they do or don’t pose a problem for the longtermist view.

Are long-term priorities distracting?

One worry with long-term priorities is that they can distract us from short-term problems. This is easily identified as a spurious complaint. Every cause area distracts us from some other cause areas. Short-term priorities distract us from long-term priorities. That is the very nature of Effective Altruism and, indeed, of the modern resource-limited world. It is not a serious criticism.

Do long-term priorities imply short-term sacrifices?

Another worry is that long-term views imply that we might tolerate doing things that are bad in the short term if they help the long term. For instance, if starting a war could reduce existential risk, it could be justified.

This seems like a basically moral complaint: “long-termists will achieve their goal of maximizing human well-being, but the process may involve things I cannot tolerate, due to my moral views.”

Again, this objection applies to any kind of priority. If you are very concerned with a short-term problem like global disease and poverty, you might similarly decide that some actions to harm people in the long-run future are justified to assist your own cause. Furthermore, you might also decide that actions to harm some people in the short run are justified to save others in the short run. This is just the regular trolley problem. An act-consequentialist view can compel you to make such tradeoffs regardless of whether you prioritize the short run or the long run. Meanwhile, if you reject the idea of harming a few to save the many, you will not accept the idea of harming people in the short run to help people in the long run, even if you generally prioritize the long run. So in theory, this is not about short-term versus long-term priorities, it is just about consequentialism versus nonconsequentialism.

You might say that some people have a more nuanced position between the hard consequentialist and the hard nonconsequentialist view. Suppose that someone does not believe in killing 1 to save 5, but they do believe in killing 1 to save 10,000. This person might see ways that small short-term harms could be offset by major long-term benefits, without seeing ways that small short-term harms could be offset by other, more modest short-term benefits. But of course this is a contingent fact. If they ever do encounter a situation where they could kill 1 to save 10,000 in the short run, they will be obliged to take that opportunity. So there is still the same moral reductio ad absurdum (assuming that you do in fact think it’s absurd to make such sacrifices, which is dubious).

One could make a practical argument instead of a moral one: that longtermist priorities are so compelling that they make it too easy for politicians and others to justify bad aggressive actions against their enemies. So the long-term priorities are a perfectly good idea for us to believe and to share with each other, but not something to share in more public political and military contexts.

Speculating about how policymakers will act based on a philosophy is a very dubious approach. I have my own speculations – I think they will act well, or at least much better than the likely alternatives. But a better methodology is to look at what people’s military and political views actually are when they subscribe to Bostrom’s long-term priorities. See the views of the Candidate Scoring System under “long run issues”, or see what other EAs have written about politics and international relations. They are quite conventional.

Moreover, Bostrom’s long-term priorities are a very marginal view in the political sphere, and it will be a long time before they become the dominant paradigm, if ever.

In summary, the moral argument does not work. Pragmatically speaking, it may be good to think hard about how long-term views should be packaged and sold to governments, but that’s no reason to reject the idea, especially not at this early stage.

Do long-term views place a perverse priority on saving people in wealthy countries?

Another objection to long-term views is that they could be interpreted as putting a higher priority on saving the lives of people in wealthy rather than poor countries, because such people contribute more to long-run progress. This is not unique to Bostrom’s priorities; it is shared by many other views. Common parochial views in the West – to give to one’s own university or hometown – similarly put a higher priority on local people. Nationalism puts a higher priority on one’s own country. Animal-focused views can also come to this conclusion, not for lifesaving but for increasing people’s wealth, based on differing rates of meat consumption. A regular short-term human-focused utilitarian view could also come to the same conclusion, based on international differences in life expectancy and average happiness. In fact, the same basic argument that people in the West contribute more to the global economy can be used to argue for differing priorities even on a short-run worldview.

Just because so many views are vulnerable to this objection doesn’t mean the objection is wrong. But it’s still not clear what this objection even is. Assuming that saving people in wealthier countries is the best thing for global welfare, why should anyone object to it?

One could worry that sharing such an ideology will cause people to become white or Asian supremacists. On this worry, whenever you give people a reason to prefer saving a life in advanced countries (the USA, France, Japan, South Korea, etc.) over saving lives in poor countries, you risk turning them into a white or Asian supremacist, because the richer countries, on average, have people of different races than the poorer ones. But hundreds of millions of people believe in one of these various ideologies that place a higher priority on saving people in their own countries, yet only a tiny minority become racial supremacists. Therefore, even if these ideologies do cause racial supremacism, the effect size is extremely small – not enough to pose a meaningful argument here. I also suspect that if you actually look at the process of how racial supremacists become radicalized, the real causes will be something other than rational arguments about the long-term collective progress of humanity.

One might say that it’s still useful for Effective Altruists to insert language in relevant papers to disavow racial supremacism, because there is still a tiny risk of radicalizing someone, and isn’t it very cheap and easy to insert such language and make sure that no one gets the wrong idea? But any reasonable reader will already know that Effective Altruists are not racial supremacists and don’t like the ideology one bit. And far-right people generally believe that there is strong liberal bias afflicting Effective Altruism and others in the mainstream media and academia, so even if Effective Altruists said we disavowed racial supremacism, far-right people would view it as a meaningless and predictable political line. As for the reader who is centrist or conservative but not far-right, such a statement may seem ridiculous, showing that the author is paranoid or possessed of a very ‘woke’ ideology, and this would harm the reputation of the author and of Effective Altruists more generally. As for anyone who isn’t already thinking about these issues, the insertion of a statement against racial supremacism may seem jarring, like a signal that the author is in fact associated with racial supremacism and is trying to deny it. If someone denies alleged connections to racial supremacism, their denial can be quoted and treated as evidence that the allegations against them really are not spurious. Finally, such statements take up space and make the document take longer to read. When asked, you should definitely directly respond "I oppose white supremacism," but preemptively putting disclaimers for every reader seems like a bad policy.

So much for the racial supremacism worries. Still, one could say that it’s morally wrong to give money to save the lives of wealthier people, even if it’s actually the most effective and beneficial thing to do. But this argument only makes sense if you have an egalitarian moral framework, like that of Rawls, and you don't believe that broadly improving humanity's progress will help some extremely-badly-off people in the future.

In that case, you will have a valid moral disagreement with the longtermist rich-country-productivity argument. However, this is superfluous because your egalitarian view simply rejects the long-term priorities in the first place. It already implies that we should give money to save the worst-off people now, not happy people in the far future and not even people in 2040 or 2080 who will be harmed by climate change. (Also note that Rawls’ strict egalitarianism is wrong anyway, as his “original position” argument should ultimately be interpreted to support utilitarianism.)

Do long-term views prioritize people in the future over people today?

They do in the same sense that they prioritize the people of Russia over the people of Finland. There are more Russians than Finns. There is nothing wrong with this.

On an individual basis, the prioritization will be roughly similar, except future people may live longer and be happier (making them a higher priority to save) and they may be difficult to understand and reliably help (making them a lower priority to save).

Again, there is nothing wrong with this.

Will long-term EAs ignore short-term harms?

No, for three reasons. First, short-term harms are generally slight probabilistic long-term harms as well. If someone dies today, that makes humanity grow more slowly and makes the world a more volatile place. Therefore, sacrificing many people immediately in order to obtain speculative long-run benefits does not make sense in the real world, even under a fanatical long-term view.

Second, EAs recognize some of the issues with long-term planning, and given general uncertainty about our ability to predict and change the future, they will incorporate some caution about incurring short-run costs.

Third, in the real world, these are all speculative philosophical trolley problems. We live in a lawful, ordered society where causing short-term harms results in legal and social punishments, which makes it irrational for people with long-term priorities to try to take harmful actions.

A related note: is white supremacism popular?

Following on from the previous discussion of racial supremacism, one might wonder whether being associated with white supremacism is good or bad for public relations in the West these days. Well, the evidence clearly shows that white supremacism is bad for PR.

A 2017 Reuters poll asked people if they favored white nationalism; 8% supported it and 65% opposed it. When asked about the alt-right, 6% supported it and 52% opposed it. When asked about neo-Nazism, 4% supported it and 77% opposed it. These results show a clear majority opposing white supremacism, and even those few who support it could be dismissed per the Lizardman Constant.

These proportions change further when you look at elites in government, academia and wealthy corporate circles. In these cases, white supremacism is essentially nonexistent. Very many who oppose it do not merely disagree with it, but actively abhor it.

Abhorrence of white supremacism extends to many concrete actions to suppress it and related views in intellectual circles. For examples, see the “Academia” section in the Candidate Scoring System, and this essay about infringements upon free speech in academia. And consider Freddie DeBoer’s observation that “for every one of these controversies that goes public, there are vastly more situations where someone self-censors, or is quietly bullied into acquiescing. For every odd example that goes viral, there is no doubt dozens more that occur behind closed doors.”

White supremacism is also generally banned on social media, including Reddit and Twitter. And deplatforming works.

For the record, I think that deplatforming white supremacists – people like Richard Spencer – is often a good thing. But I am under no illusions about the way things work.

One could retort that being wrongly accused of white supremacism can earn one public sympathy from certain influential heterodox people, like Peter Thiel and Sam Harris. These kinds of heterodox figures are often inclined to defend some people who are accused of white supremacism, like Charles Murray, Noah Carl and others. However, this defense only happens as a partial pushback against broader ‘cancellation’ conducted by others. The defense usually focuses on academic freedom and behavior rather than whether the actual ideas are correct. It can gain ground with some of the broader public, but elite corporate and academic circles remain opposed.

And even among the broader public and political spheres, the Very Online IDW type who pays attention to these re-platformed people is actually pretty rare. Most people in the real world are rather politically disengaged, have no love for ‘political correctness’ nor for those regarded as white supremacists, and don’t pay much attention to online drama. And far-right people are often excluded even from right-wing politics. For instance, the right-wing think tank Heritage Foundation made someone resign following controversy about his argument for giving priority in immigration law to white people based on IQ.

All in all, it’s clear that being associated with white supremacism is bad for PR.

Summary: what are the good reasons to disagree with longtermism?

Reason 1: You don't believe that very large numbers of people in the far future add up to a very big moral priority. For instance, you may reject aggregation. Alternatively, you may take a Rawlsian moral view combined with the assumption that the worst-off people whom we can help are alive today.

Reason 2: You predict that interstellar travel and conscious simulations will not be adopted and humanity will not expand.

Honorable mention 1: If you believe that future technologies like transhumanism will create a bad future, then you will still focus on the long run, but with a more pessimistic viewpoint that worries less about existential risk.

Honorable mention 2: if you don't believe in making trolley-problem-type sacrifices, you will have a mildly different theoretical understanding of longtermism than some EA thinkers who have characterized it from a more consequentialist angle. In practice, it's unclear whether there will be any difference.

Honorable mention 3: if you are extremely worried about the social consequences of giving people a strong motivation to fight for the general progress of humanity, you will want to keep longtermism a secret, private point of view.

Honorable mention 4: if you are extremely worried about the social consequences of giving people in wealthy countries a strong motivation to give aid to their neighbors and compatriots, you will want to keep longtermism a secret, private point of view.

There are other reasons to disagree with long-term priorities (mainly, uncertainty in predicting and changing the far future), but these are just the takeaways from the ideas I've discussed here.

A broad plea: let’s keep Effective Altruism grounded

Many people came into Effective Altruism from moral philosophy, or at least think about it in very rigorous philosophical terms. This is great for giving us rigorous, clear views on a variety of issues. However, there is a downside. The urge to systematize everything to its logical theoretical conclusions inevitably leads to cases where the consequences are counter-intuitive. Moral philosophy has tried for thousands of years to come up with a single moral theory, and it has failed, largely because any consistent moral theory will have illogical or absurd conclusions in edge cases. Why would Effective Altruism want to be like a moral theory, burdened by these edge cases that don’t matter in the real world? And if you are a critic of Effective Altruism, why would you want to insert yourself into the kind of debate where your own views can be exposed as having similar problems? Effective Altruism can instead be a more grounded point of view, a practical philosophy of living like Stoicism. Stoics don’t worry about what they would do if they had to destroy an innocent country in order to save Stoic philosophy, or other nonsense like that. And the critics of Stoicism don’t make those kinds of objections. Instead, everything revolves around a simple question whose answers are inevitably acceptable: how can I realistically live the good life? (Or something like that. I don’t actually know much about Stoicism.)

Effective Altruism certainly should not give up formal rigor in answering our main questions. However, we should be careful about which questions we seek to answer. And we should be careful about which questions we use as the basis for criticizing other Effective Altruists. We should focus on the questions that really matter for deciding practical things like where we will work, where we will donate and who we will vote for. If you have in mind some unrealistic, fantastical scenario about how utility could be maximized in a moral dilemma, (a) don’t talk about it, and (b) don’t complain about what other Effective Altruists say or might have to say about it. It’s pointless and needless on both sides.


weeatquince @ 2020-01-13T09:56 (+40)

Hi,

I downvoted this but I wanted to explain why and hopefully provide constructive feedback. Having seen the original post this is referencing, I really do not think this post did a good/fair job of representing (or steelmanning) the original arguments raised.

To try to make this feedback more useful and help the debate, here are some very quick attempts to steelman some of the original arguments:


kbog @ 2020-01-13T22:51 (+18)

Your comment makes points that are already addressed by my original post.

Historically arguments that justify horrendous activities have a high frequency of being utopia based (appealing to possible but uncertain future utopias).

This is selecting on the dependent variable. Nearly every reformer and revolutionary has appealed to possible but uncertain future utopias. Also, most horrendous activities have been motivated primarily by some form of xenophobia or parochialism, which is absent here.

If an argument leads to some ridiculous / repugnant conclusions that most people would object to, then it is worth being wary of that argument.

Maybe at first glance, but it's a good idea to replace that first glance with a more rigorous look at pros and cons, which we have been doing for years. Also this is about consequentialist longtermism, not longtermism per se.

The philosophers who developed the long-termist astronomical waste argument openly use it to promote a range of abhorrent hawkish geopolitical responses (e.g. preemptive nuclear strikes).

I don't find Bostrom's argument abhorrent, especially since he didn't actually promote preemptive nuclear strikes. And again, this confounds longtermism with act-consequentialist views.

There are problems with taking a simple expected value approach to decision making under uncertainty. Eg Pascal's mugging problems.

It's inappropriate to confound longtermism with EV maximization. It's not clear that doubting EV maximization will weaken, let alone end, the case for focusing on the long run. Loss-averse frameworks will care more about preventing existential risks and negative long-run trajectories. If you ignore tiny-probability events then you will worry less about existential risks but will still prioritize acceleration of our broad socioeconomic trajectory.

Generally speaking, EV maximization is fine and does a good job of beating its objections. Pascal's Mugging is answered by factoring in the optimizer's curse, noting that paying off the mugger incurs opportunity costs and that larger speculated benefits are less likely on priors.

People should move beyond merely objecting to EV maximization, and provide preferred formal characterizations that can be examined for real-world implications. They exist in the literature but in the context of these debates people always seem shy to commit to anything.

The astronomical waste type arguments are not robust to a range of different philosophical and non-utilitarian ethical frameworks

They are robust to a pretty good range of frameworks. I would guess that perhaps three-fourths of philosophical views, across the distribution of current opinions and published literature, would back up a broadly long-term focused view (not that philosophers themselves have necessarily caught up), although they won't necessarily be consequentialist about pursuing this priority.

(given ethical uncertainty) this makes them not great arguments

That assumes ethical uncertainty. I do not make this assumption: it requires a kind of false moral realism.

When non-Effective-Altruists grant us more leeway on the basis of moral uncertainty, we can respond in kind, but until then, deferring to ethical uncertainty is needless disregard for other people's well-being.

MichaelStJules @ 2020-01-14T17:52 (+17)

I don't think your responses in this comment about narrower views being confounded with longtermism as a whole are fair. If your point is that longtermism is broader than weeatquince or Phil made it out to be, then I think you're missing the point of the original criticisms, since the views being criticized are prominent in practice within EA longtermism. The response "There are other longtermist views" doesn't help the ones being criticized.

In particular, 80,000 Hours promotes both (risk-neutral) EV maximization and astronomical waste (as I described in other replies), and consequentialism is disproportionately popular among EA survey and SSC survey respondents. It's about half, although they don't distinguish between act and rule consequentialism, and it's possible longtermists are much less likely to be consequentialists, but I doubt that. To be fair to 80,000 Hours, they've also written against pure consequentialism, with that article linked to on their key ideas page.

80,000 Hours shapes the views and priorities of EAs, and, overall, I think the views being criticized will be made more popular by 80,000 Hours' work.

MichaelStJules @ 2020-01-14T00:09 (+9)
And again, this is making the mistake of confounding longtermism with act-consequentialist views.
It's similarly inappropriate to confound longtermism with EV maximization.

Do you think these views dominate within EA longtermism? I suspect they might, and this seems to be how 80,000 Hours (and probably by extension, CEA) thinks about longtermism, at least. (More on this below in this comment.)

They are robust to a pretty good range of frameworks. I would guess that perhaps three-fourths of philosophical views, across the distribution of current opinions and published literature, would back up a broadly long-term focused view (not that philosophers themselves have necessarily caught up), although they won't necessarily be consequentialist about pursuing this priority.

I think this may be plausible of longtermism generally (although I'm very unsure), but not all longtermist views accept astronomical waste. I'd guess that acceptance of the astronomical waste argument, specifically, is a minority view within ethics, both among philosophers and among published views. Among population axiologies or pure consequentialist views, I'd guess most published views were designed specifically to avoid the repugnant conclusion, and a great deal of these (maybe most, but I'm not sure) will also reject the astronomical waste argument.

Longtermism in EA seems to be dominated by views that accept the astronomical waste argument, or, at least that seems to be the case for 80,000 Hours. 80,000 Hours' cause prioritization and problem quiz (question 4, specifically) make it clear that the absence of future generations accounts for most of their priority given to existential risks.

They also speak of preventing extinction as "saving lives" and use the expected number of lives:

Let’s explore some hypothetical numbers to illustrate the general concept. If there’s a 5% chance that civilisation lasts for ten million years, then in expectation, there are 5000 future generations. If thousands of people making a concerted effort could, with a 55% probability, reduce the risk of premature extinction by 1 percentage point, then these efforts would in expectation save 28 future generations. If each generation contains ten billion people, that would be 280 billion lives saved. If there’s a chance civilisation lasts longer than ten million years, or that there are more than ten billion people in each future generation, then the argument is strengthened even further.
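
To make the arithmetic in that passage explicit, here is a quick sketch in Python (the 100-year generation length is my assumption – it is the value that makes the quoted figures come out; the other numbers are taken straight from the excerpt):

# Rough reproduction of the arithmetic in the quoted 80,000 Hours passage.
p_civilisation_survives = 0.05   # chance civilisation lasts ten million years
years = 10_000_000
generation_length = 100          # assumed years per generation

expected_generations = p_civilisation_survives * years / generation_length  # 5,000

p_effort_succeeds = 0.55         # chance the concerted effort works
risk_reduction = 0.01            # extinction risk cut by 1 percentage point
generations_saved = expected_generations * risk_reduction * p_effort_succeeds  # 27.5, which the quote rounds to 28

people_per_generation = 10e9     # ten billion people per generation
lives_saved = generations_saved * people_per_generation  # ~275 billion (280 billion if you round to 28 generations first)

print(f"Expected future generations: {expected_generations:,.0f}")
print(f"Generations saved in expectation: {generations_saved:.1f}")
print(f"Lives saved in expectation: {lives_saved:,.0f}")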

Moral uncertainty isn't a theoretically coherent idea - it assumes an objectively factual basis for people's motivations.

I don't think it needs to make this assumption. Even anti-realists can assign different weights to different views and intuitions. It can be a statement about where you expect your views to go if you heard all possible arguments for and against. 80,000 Hours also uses moral uncertainty, FWIW.

Philosophy should not be about fact-finding, it should be about reinforcing the mission to improve and protect all people's lives.

Who counts (different person-affecting views, nonhuman animals, fetuses, etc.), in what ways do they count, and what does it mean to improve and protect people's lives? These are obviously important questions for philosophy. Are you saying we should stop thinking about them?

Do you improve and protect a person's life by ensuring they come into existence in the first place? If not, then you should reject the astronomical waste argument.

kbog @ 2020-01-14T00:31 (+1)
Do you think these views dominate within EA longtermism?

I don't, but why do you ask? I don't see your point.

I'd guess that acceptance of the astronomical waste argument, specifically, is a minority view within ethics, both among philosophers and among published views

As I said previously, most theories imply it, but the field hasn't caught up. They were slow to catch onto LGBT rights, animal interests, and charity against global poverty; it's not surprising that they would repeat the same problem of taking too long to recognize priorities from a broader moral circle.

Even anti-realists can assign different weights to different views and intuitions. It can be a statement about where you expect your views to go if you heard all possible arguments for and against.

But then there is scarce reason to defer to surveys of philosophers as guidance. Moral views are largely based on differences in intuition, often determined by differences in psychology and identity. Future divergences in your moral inclinations could be a random walk from your current position, or regression to the human population mean, or regression to the Effective Altruist mean.

MichaelStJules @ 2020-01-14T02:28 (+3)
I don't, but why do you ask? I don't see your point.

Because the criticism isn't just against longtermism per se, but longtermism in practice. In practice, I think these views are popular or at least disproportionately promoted or taken for granted at prominent EA orgs (well, 80,000 Hours, at least).

As I said previously, most theories imply it

Based on what are you making this claim? I seriously doubt this, given the popularity of different person-affecting views and different approaches to aggregation. Here are some surveys of value-monistic consequentialist systems (or population axiologies), but they are by no means exhaustive, since they miss theories like leximin/maximin, Moderate Trade-off Theory, rank-discounted theories and often specific theories with person-affecting views:

http://www.crepp.ulg.ac.be/papers/crepp-wp200303.pdf

https://www.repugnant-conclusion.com/population-ethics.pdf

http://users.ox.ac.uk/~mert2255/papers/population_axiology.pdf (this one probably gives the broadest overview since it also covers person-affecting views; of the families of theories mentioned, I think only Totalism, and (some) critical-level theories clearly support the astronomical waste argument.)

Also, have you surveyed theories within virtue ethics and deontology?

At any rate, I'm not sure the number of theories is a better measure than the number of philosophers or ethicists specifically. A lot of theories will be pretty ad hoc and receive hardly any support, sometimes even from their own authors. Some are introduced just for the purpose of illustration (I think Moderate Trade-off Theory was one).

But then there is scarce reason to defer to surveys of philosophers as guidance. Moral views are largely based on differences in intuition, often determined by differences in psychology and identity. Future divergences in your moral inclinations could be a random walk from your current position, or regression to the human population mean, or regression to the Effective Altruist mean.

Sure, but arguments can influence beliefs.

Are you 100% certain of a specific fully-specified ethical system? I don't think anyone should be. If you aren't, then shouldn't we call that "moral uncertainty" and find ways to deal with it?

kbog @ 2020-01-14T04:28 (+2)
Because the criticism isn't just against longtermism per se, but longtermism in practice.

But in my original post I already acknowledged this difference. You're repeating things I've already said, as if it were somehow contradicting me.

Based on what are you making this claim?

Based on my general understanding of moral theory and the minimal kinds of assumptions necessary to place the highest priority on the long-run future.

Also, have you surveyed theories within virtue ethics and deontology?

I am familiar with them.

They, and many of the population ethics theories that you link, frequently still imply a greater focus on the long-term future than on other social issues.

(I don't intend to go into more specific arguments here. If you care about this issue, go ahead and make a proper top-level post for it so that it can be debated in a proper context.)

At any rate, I'm not sure the number of theories is a better measure than number of philosophers or ethicists specifically

"Most" i.e. majority of theories weighted for how popular they are. That's what I meant by saying "across the distribution of current opinions and published literature." Though I don't have a particular reason to think that support for long term priorities comes disproportionately from popular or unpopular theories.

Are you 100% certain of a specific fully-specified ethical system? I don't think anyone should be. If you aren't, then shouldn't we call that "moral uncertainty" and find ways to deal with it?

No. First, if I'm uncertain between two ethical views, I'm genuinely ambivalent about what future me should decide: there's no 'value of information' here. Second, as I said in the original post, it's a pointless and costly exercise to preemptively try to figure out a fully-specified ethical system. I think we should take the mandate that we have, to follow some kind of Effective Altruism, and then answer moral questions if and when they appear and matter in the practice of this general mandate. Moral arguments need to both be potentially convincing and carry practical ramifications for us to worry about moral uncertainty.

MichaelStJules @ 2020-01-14T07:09 (+3)
But in my original post I already acknowledged this difference. You're repeating things I've already said, as if it were somehow contradicting me.

Sorry, I should have been more explicit at the start. You responded to a few of weeatquince's points by saying they confounded specific narrower views with longtermism as a whole, but these views are very influential within EA longtermism in practice, and the writing your OP is a response to dealt with these narrower views in the first place. I don't think weeatquince (or Phil) was confounding these narrower views with longtermism broadly understood, and the point was to criticize these specific views, anyway, so longtermism being broader is beside the point. If they were confounding these more specific views with longtermism, it still wouldn't invalidate the original criticisms, because these specific views do seem to get significant weight in EA longtermism in practice, anyway (e.g. through 80,000 Hours).

They, and many of the population ethics theories that you link, frequently still imply a greater focus on the long-term future than on other social issues.

I don't disagree, but the original point was about "astronomical waste-type arguments", specifically, not just priority for the long-term future or longtermism, broadly understood. Maybe I've interpreted "astronomical waste-type arguments" more narrowly than you have. Astronomical waste to me means roughly failing to ensure the creation of an astronomical number of happy beings. I seriously doubt that most theories or ethicists, or theories weighted by "the distribution of current opinions and published literature" would support the astronomical waste argument, whether or not most are longtermist in some sense. Maybe most would accept Beckstead's adjustment, but the original criticisms seemed to be pretty specific to Bostrom's original argument, so I think that's what you should be responding to.

I think there's an important practical difference between longtermist views which accept the original astronomical waste argument and those that don't: those that do take extinction to be astronomically bad, so nearer term concerns are much more likely to be completely dominated by very small differences in extinction risk probabilities (under risk-neutral EV maximization or Maxipok, at least).

What theories have you seen that do support the astronomical waste argument? Don't almost all of them (weighted by popularity or not) depend on (impersonal) totalism or a slight variation of it?

Are you saying views accepting the astronomical waste argument are dominant within ethics generally?

kbog @ 2020-01-14T08:03 (+2)
Sorry, I should have been more explicit at the start. You responded to a few of weeatquince's points by saying they confounded specific narrower views with longtermism as a whole, but these views are very influential within EA longtermism in practice, and the writing your OP is a response to dealt with these narrower views in the first place. I don't think weeatquince (or Phil) was confounding these narrower views with longtermism broadly understood, and the point was to criticize these specific views, anyway, so longtermism being broader is beside the point. If they were confounding these more specific views with longtermism, it still wouldn't invalidate the original criticisms, because these specific views do seem to get significant weight in EA longtermism in practice, anyway (e.g. through 80,000 Hours).

You seem to be interpreting my post as an attempt at a comprehensive refutation, when it is not and was not presented as such. I took some arguments and explored their implications. I was quite open about the fact that some of the arguments could lead to disagreement with common Effective Altruist interpretations of long-term priorities even if they don't refute the basic idea. I feel like you are manufacturing disagreement and I think this is a good time to end the conversation.

What theories have you seen that do support the astronomical waste argument? Don't almost all of them (weighted by popularity or not) depend on (impersonal) totalism or a slight variation of it?

As I said previously, this should be discussed in a proper post; I don't currently have time or inclination to go into it.

Are you saying views accepting the astronomical waste argument are dominant within ethics generally?

I answered this in previous comments.

matthew.vandermerwe @ 2020-01-13T13:44 (+4)
The philosophers who developed the long-termist astronomical waste argument openly use it to promote a range of abhorrent hawkish geopolitical responses (e.g. preemptive nuclear strikes).

I find this surprising. Can you point to examples?

weeatquince @ 2020-01-13T14:04 (+1)

Section 9.3 here: https://www.nickbostrom.com/existential/risks.html

(Disclaimer: Not my own views/criticism. I am just trying to steelman a Facebook post I read. I have not looked into the wider context of these views or people's current positions on these views.)


kbog @ 2020-01-13T23:06 (+4)

Note that Bostrom doesn't advocate preemptive nuclear strikes in this essay. Rather, he says the level of force should be no greater than necessary to "reduce the threat to an acceptable level."

MichaelStJules @ 2020-01-14T02:51 (+7)

When the stakes are "astronomical" and many longtermists are maximizing EV (or using Maxipok) and are consequentialists (or sufficiently consequentialist), what's an acceptable level of threat? For them, isn't the only acceptable level of threat the lowest possible level of threat?

Unless the probability difference is extremely small, won't it come down to whether it increases or decreases risk in expectation, and those who would be killed can effectively be ignored since they won't make a large enough difference to change the decision?

EDIT: Ah, preemptive strikes still might not be the best uses of limited resources if they could be used another way.

EDIT2: The US already has a bunch of nukes that aren't being used for anything else, though.

kbog @ 2020-01-14T04:30 (+6)

There are going to be prudential questions of governance, collateral damage, harms to norms, and similar issues which swamp very small direct differences in risk probability even if one is fixated on the very long run. Hence, an acceptable level of risk is one which is low enough that it seems equal to or smaller than these other issues.


Khorton @ 2020-01-13T19:58 (+4)

I'm not sure why this comment was downvoted; @weeatquince was asked for information and provided it.

Pablo_Stafforini @ 2020-01-06T13:22 (+21)
Reason 1 [for disagreeing with longtermism]: You don't believe that very large numbers of people in the far future add up to being a very big moral priority. For instance, you may take a Rawlsian view, believing that we should always focus on helping the worst-off.

It's not clear that, of all the people that will ever exist, the worst-off among them are currently alive. True, the future will likely be on average better than the present. But since the future potentially contains vastly more people, it's also more likely to contain the worst-off people. Moreover, work on S-risks by Tomasik, Gloor, Baumann and others provides additional reason for expecting such people—using 'people' in a broad sense—to be located in the future.

kbog @ 2020-01-06T16:19 (+4)

Thanks, I have adjusted it to show the additional assumption required.

MichaelStJules @ 2020-01-10T08:21 (+1)

Also, if you're trying to exhaust reasons, it would be better to add the qualifier "possible people". There are different kinds of person-affecting views someone could hold.

JimmyJ @ 2020-01-06T05:30 (+13)

I skimmed the post, but I couldn't find what this is responding to. Could you provide a link for context?

eukaryote @ 2020-01-13T19:34 (+10)

I believe this is a response to this post.

MichaelStJules @ 2020-01-14T02:37 (+3)

Also accessible here for readability.

Also see this earlier post for more context.

kbog @ 2020-01-06T19:48 (+1)

I didn't share it because they were trying to make ~drama~ and attack EAs. I just represented their general arguments.

aarongertler @ 2020-01-15T00:10 (+15)

(Speaking for myself here, not as a moderator or CEA staffer)

I think that not sharing the inspiration for a post is usually a bad idea (with some exceptions around material that is obscene or could be personally harmful to individuals). It starts discussions off on a foundation of confusion and makes it difficult for the author of the original work to point out misconceptions or add context. And if a discussion gets much attention, someone will usually want to know what prompted it, leading the original post to be shared anyway.

(There's also the Streisand effect, where hiding material makes people more interested in seeing it. I think that people in such a situation are also likely to read the material more attentively once they actually find it.)

G Gordon Worley III @ 2020-01-06T18:55 (+10)

The analysis of issues around white supremacism seems like a bit of a strawman to me. Are there people making serious objections to long-termist views on the grounds that they will maybe favor the wealthy, and since the globally wealthy are predominantly of European descent, this implies a kind of de facto white supremacism? This seems like a kind of vague, guilt-by-correlation argument we need not take seriously, but you devote a lot of space to it, so I take it you have some reason to believe many people, including those on this forum, honestly believe it.

kbog @ 2020-01-06T19:47 (+2)

There seems to be, like, 1 serious person who believes it.

Guilt-by-correlation arguments in the basic sense are silly, but they can reflect valid worries about the unintended consequences of sharing an idea. I'm not strawmanning; I actually tried to steelman.

I excluded the original source because it shouldn't be taken seriously as you say, but I still discussed the issue in the interest of fairness.


Sean_o_h @ 2020-01-06T21:07 (+24)

I've spent quite a bit of time trying to discuss the matter privately with the main author of the white supremacy critique, as I felt the claim was very unfair in a variety of ways and I know the writer personally. I do not believe I have succeeded in bringing them round. I think it's likely that there will be a journal article making this case at some point in the coming year.

At that point a decision will need to be made by thinkers in the longtermist community re: whether it is appropriate to respond or not. (It won't be me; I don't see myself as someone who 'speaks' for EA or longtermism; rather someone whose work fits within a broad longtermist/EA frame).

What makes this a little complicated, in my view, is that there are (significantly) weaker versions of these critiques - e.g. relating to the diversity, inclusiveness, founder effects and some of the strategies within EA - that are more defensible (although I think EA has made good strides in relation to most of these critiques) and these may get tangled up with this more extreme claim among those who consider those weaker critiques valid.

While I am unsure about the appropriate strategy for the extreme claim, if and when it is publicly presented, it seems good to me to steelman and engage with the less unreasonable claims.

John_Maxwell @ 2020-01-11T03:29 (+8)

It seems weird that longtermism is being accused of white supremacy given that population growth is disproportionately happening in countries that aren't traditionally considered white? As you can see from the map on this page, population growth is concentrated in places like Africa, the Middle East, and South Asia. It appears to me that it's neartermist views of population ethics ("only those currently alive are morally relevant") that place greater moral weight on white folks? I wonder how a grandmother from one of those places, proud of her many grandchildren, would react if a childless white guy told her that future generations weren't morally relevant... It also seems weird to position climate change as a neartermist cause.

MichaelStJules @ 2020-01-11T21:00 (+1)

Those grandchildren already exist; no one's saying they don't matter. Are you saying these grandmothers want to have more and more grandchildren? I'm not sure people in these countries are having the number of children they would actually prefer; I'd expect them to prefer fewer if they were better informed or if conditions were better for them. Consider child brides and other forms of coercion, worse access to contraceptives, abortion and other family planning services, worse access to information, more restrictive gender roles and fewer options for making a living generally, higher infant mortality rates, etc.

Was a claim made that future generations aren't morally relevant? I think the objections were more specifically against the total view (and the astronomical waste argument), which treats people like mere vessels for holding value. Longtermism, in practice, seems to mostly mean the total view. There are many other person-affecting views besides presentism ("only those currently alive are morally relevant"), some of which could be called longtermist, too. So-called "wide" person-affecting views solve the non-identity problem, and there's the procreation asymmetry, too.

G Gordon Worley III @ 2020-01-07T18:49 (+4)

Thanks for the context. My initial reaction to seeing that case included was "surely this is all made up", so I'm surprised to learn there's someone making this as a serious critique on the level of publishing a journal article about it, and not just random Tweets aiming to score points with particular groups who see EA in general and long-termism specifically as clustering closer to their enemies than their allies.

MichaelStJules @ 2020-01-06T07:09 (+9)
Reason 3: You believe that future technologies like transhumanism will give humanity a bad future.

Or a bad future for other individuals you value besides humans, e.g. other sentient individuals like animals, digital sentience and sentient aliens.

However, these are also reasons to work on s-risks, which is a longtermist cause.

kbog @ 2020-01-10T05:16 (+2)

Thanks, yes, updating that to an 'honorable mention.'

MichaelStJules @ 2020-01-06T06:49 (+8)
Will long-term EAs ignore short-term harms?

I think some long-term EAs ignore harms to farmed animals. Not all or necessarily most, though. Actually, based on the most recent EA survey ("Mean cause rating and sub grouping", in section Group membership), cause prioritization between veg*ns and meat eaters only really differed very much on animal welfare (higher for veg*ns) and global poverty (higher for meat eaters).

If someone dies today, that makes humanity grow more slowly and makes the world a more volatile place.

I don't really have much confidence in this claim. It sounds plausible, of course, but there are also potentially long-term negative effects from larger populations, like environmental harms. These will have to be weighed against each other.

kbog @ 2020-01-06T09:49 (+4)

Here is an excerpt from Candidate Scoring System about the value of population size:

Folk assumptions about ‘overpopulation’ are flawed and must be dropped, making it unclear whether our population growth is too high or too low for total human welfare (Greaves 2019, Ord 2014). 19% of economists agree that the economic benefits of an expanding world population outweigh the economic costs and 29% agree with provisos; 48% disagree (Fuller and Geide-Stevenson 2014). There are also non-human consequences of having a larger population. Humans eat meat, and animal agriculture and aquaculture generally involve negative welfare (Bogosian 2019). Much of the land we occupy would otherwise be occupied by wilderness, but it’s not clear if wilderness has net negative animal welfare (Plant 2016). However, this consideration is small in the long run since we can expect animal farming to eventually become more humane or outmoded.
From what we can see, the social cost of the pollution of a typical person, even in the West, seems small in comparison to a person’s whole life and impacts. Per our air pollution section, we can tentatively say that the social cost of carbon, including other impacts of air pollution besides climate change, is less than $200/ton. This is $3,200 per American per year. Meanwhile, American GDP per capita is $59,500. Thus, the average American seems to contribute far more to the world through labor than what they destroy via air pollution. There are other downsides of population growth, but they seem unlikely to be much worse than air pollution, and there are other upsides as well.
Changing population size now may also have a large effect on long-run fertility. Jones (2019) provides a model of economic growth with endogenous fertility and suggests that actual population growth may be either higher or lower than optimal, but in particular he shows that under certain assumptions insufficient fertility could lead to an indefinitely declining population with a corresponding stagnation of wealth and knowledge. This counts as an existential threat to humanity, albeit a relatively slow and mundane one that might be overturned by a Darwinian mechanism.
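
To see how the pollution figures in that excerpt fit together, here is a back-of-the-envelope sketch in Python (the per-person emissions number is my assumption – it is roughly what the quoted $200/ton and $3,200/year figures jointly imply; the other numbers come from the excerpt):

# Back-of-the-envelope check of the figures quoted from Candidate Scoring System.
social_cost_per_ton = 200          # upper-bound social cost of carbon, $/ton, per the excerpt
tons_per_american_per_year = 16    # assumed tons of CO2-equivalent emitted per person per year

annual_pollution_cost = social_cost_per_ton * tons_per_american_per_year  # $3,200

gdp_per_capita = 59_500            # US GDP per capita, per the excerpt

print(f"Annual pollution cost per American: ${annual_pollution_cost:,}")
print(f"US GDP per capita: ${gdp_per_capita:,}")
print(f"Pollution cost as a share of economic output: {annual_pollution_cost / gdp_per_capita:.1%}")  # roughly 5%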

Of course, killing someone is worse than deciding not to give birth to someone.

MichaelStJules @ 2020-01-13T19:56 (+1)

Also, the Vox article here described how some EA Global attendees thought of global poverty as a "rounding error". I don't know that this is very representative of longtermists, though.