Rethink's CURVE Sequence - The Good and the Gaps

By JackM @ 2023-11-28T01:06 (+96)

(Also posted to my substack The Ethical Economist: a blog covering Economics, Ethics and Effective Altruism.)

Rethink Priorities’ Worldview Investigations Team recently published their CURVE Sequence: “Causes and Uncertainty: Rethinking Value in Expectation.” The aim of the sequence was to:

  1. Consider alternatives to expected value maximization (EVM) for cause prioritization, motivated by some unintuitive consequences of EVM. The alternatives considered were risk aversion and contractualism.
  2. Explore the practical implications of a commitment to EVM and, in particular, if it supports prioritizing existential risk (x-risk) mitigation over all else.

I found the sequence thought-provoking. It opened my eyes to the fact that x-risk mitigation may only be astronomically valuable under certain contentious conditions. I still prefer risk-neutral EVM (with some reasonable uncertainty), but am now less certain that it clearly implies prioritizing x-risk mitigation.

Having said that, the sequence wasn’t conclusive and it would take more research for me to determine that x-risk reduction shouldn’t be the top priority for the EA community. This post summarizes some of my reflections on the sequence.

Summary of posts in the sequence

Reflections on the sequence

Before I proceed - a quick note. The CURVE sequence didn’t set out to argue for alternatives to EVM. Rather, it recognizes that some may prefer alternatives to EVM and then assesses what these alternatives would say about cause prioritization. As someone who finds the underlying justification for risk-neutral EVM the strongest of any decision theory (e.g. see the von Neumann-Morgenstern utility theorem), I was less interested in the posts that assessed other theories.

Risk-neutral EVM has some counterintuitive conclusions (e.g. fanaticism), but the other theories have their own issues. The contractualism post points out that contractualism can favor spending a billion dollars to save one life for certain over spending the same amount to almost certainly save far more lives. This seems almost antithetical to the core ideas of Effective Altruism. Risk aversion, meanwhile, has been shown to lead to unquestionably poor decisions, and it doesn’t even avoid fanaticism - the main feature of risk-neutral EVM we were trying to avoid in the first place.

Of course, I have some uncertainty over the best decision theory, so it is useful to know what other theories say. I tend to favor a maximizing expected choiceworthiness (MEC) approach to dealing with moral uncertainty. MEC says we ought to assign probabilities to the correctness of different theories, assess the moral worth of actions under each theory, and then choose the action with the highest overall expected moral value given these probabilities. As someone who assigns the highest credence to risk-neutral EVM, all I need is for x-risk reduction to be hugely valuable under risk-neutral EVM for it to swamp all my other altruistic endeavors. With this in mind, I was mostly interested to see what the sequence would say about the claim that risk-neutral EVM implies x-risk reduction is our most pressing priority. With that clarification out of the way, let’s dive into some reflections.
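To make the MEC procedure concrete, here is a minimal sketch. The theories, credences, and choiceworthiness numbers are all hypothetical, chosen only to show how a huge payoff under one theory can swamp the calculation:

```python
# Hypothetical credences in two decision theories (made-up numbers).
credences = {"risk_neutral_evm": 0.7, "risk_averse": 0.3}

# Hypothetical choiceworthiness of each action under each theory.
choiceworthiness = {
    "fund_x_risk":  {"risk_neutral_evm": 1000.0, "risk_averse": 5.0},
    "fund_bednets": {"risk_neutral_evm": 20.0,   "risk_averse": 18.0},
}

def mec_value(action):
    """Expected choiceworthiness: credence-weighted value across theories."""
    return sum(credences[t] * v for t, v in choiceworthiness[action].items())

# Pick the action with the highest expected choiceworthiness.
best = max(choiceworthiness, key=mec_value)
print(best, mec_value(best))  # fund_x_risk 701.5
```

Even with only 70% credence in risk-neutral EVM, the enormous value it assigns to the first action dominates the credence-weighted total - the swamping effect described above.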

The Good

X-risk reduction may only work under very specific scenarios…

The standard case for x-risk reduction is simple: the future could be vast and great, so it seems important to do what we can to ensure that potential isn’t destroyed. The CURVE sequence shows that things aren’t quite this simple.

Arvo Muñoz Morán’s post argues that, for x-risk reduction to be far and away the most pressing priority, we may need to assume some or perhaps all of the following:

  1. Fast value growth e.g. through interplanetary expansion.
  2. That we will face a small number of high-risk periods, but otherwise low risk (e.g. the time of perils or great filters hypotheses).
  3. That the best interventions available to us have persistent effects, i.e. they don’t just reduce x-risk for a few years but for a much longer period.

…and these scenarios may not be all that likely

Importantly, the scenarios needed for x-risk reduction to be overwhelmingly important may not be realistic at all. The CURVE sequence doesn’t cover how realistic fast value growth might be, but it does examine the other two key conditions.

David Rhys Bernard’s post examines the time of perils hypothesis, noting no fewer than 18 premises that may be required to ground it. All the premises are controversial to varying degrees, and the conjunction of all of them seems pretty unlikely.

Arvo Muñoz Morán’s post briefly touches on persistence, suggesting that an intervention’s effects are unlikely to persist for more than 50 years. Actions that drastically reduce risk, and do so for a long time, are rare. Importantly, the persistence of an intervention can be blunted by the fact that another actor might have done the same thing shortly afterwards anyway.

Furthermore, Laura Duffy’s post suggests that if these conditions fail, we may not be able to fall back on the argument that x-risk reduction is overwhelmingly important when considering just the next few generations alone.

What the CURVE sequence has done is show that x-risk reduction may only have overwhelming value in a small number of unlikely scenarios. In other words, x-risk reduction is looking increasingly fanatical. This is useful to those of us who feel at least somewhat uncomfortable with fanaticism, which I suspect is the vast majority of us.

The Gaps

It is unreasonable to expect the CURVE sequence to have completely settled the debate. Of course there are areas for further research, some of which are explicitly noted in the posts themselves. Here are some of my reflections on where I would like to see further research.

Are there any x-risk interventions that avoid the pitfalls?

The CURVE sequence fires some shots against a very general conception of ‘x-risk reduction’. Specifically, it looks at mitigating risks of human extinction. But existential risk is wider than this: it covers anything that destroys our future potential, whether or not this happens via extinction. It is possible that CURVE’s criticisms don’t apply to all x-risk-reducing interventions. Maybe there are some that, through their unique features, remain robustly good.

To think about this, it is important to understand exactly why a general x-risk intervention falls prey to CURVE’s critique. The key insight is that the intervention’s effects are likely to get ‘canceled out’ in some way. For example:

The question I have is, are there any interventions or types of intervention that actually are persistent and contingent? We may only need one or a small number of these for x-risk reduction to remain a very pressing priority for our community.

I’m not going to definitively answer my own question here, partly because I’m pretty certain I’m not clever enough. This is why I want the Rethink team to do further work. But I’ll provide some scattered thoughts anyway.

We most easily get contingency by considering value lock-in. In other words, we can consider events that, once they have happened, put us on a trajectory from which there’s no going back. In these cases, the question “would someone else have done the same thing later on anyway?” becomes redundant, because we only really had one chance to influence the event. Extinction is one example of a lock-in event. What are the chances we would come back into existence afterwards? Pretty much zilch.

Extinction is a persistent state. Unfortunately, ‘not being extinct’ doesn’t have the same property, or at least not to the same degree: it’s much easier to go from not extinct to extinct than the other way around. This is what blunts the value of extinction-reducing interventions. It just seems hard to persistently reduce the risk.
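The asymmetry can be sketched with a toy two-state model. The numbers are hypothetical, chosen purely to illustrate how an absorbing extinction state steadily eats away at survival probability:

```python
# Toy two-state model: 'extant' vs 'extinct'. All numbers are hypothetical.
p_extinction = 0.01  # chance of going extinct in any given period
p_recovery = 0.0     # extinction is absorbing: no coming back

def survival_probability(n_periods):
    """Probability of still being extant after n periods, starting extant."""
    p_extant = 1.0
    for _ in range(n_periods):
        # Extant worlds can slip into extinction; extinct worlds
        # (with p_recovery = 0) never return.
        p_extant = p_extant * (1 - p_extinction) + (1 - p_extant) * p_recovery
    return p_extant

print(round(survival_probability(100), 3))  # 0.366
```

Because the extinct state is absorbing while the extant state is not, an intervention that shaves p_extinction for only a few periods barely changes the long-run survival probability - the persistence problem in miniature.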

But there may be non-extinction lock-in events that avoid this pitfall. Maybe there are interventions which help steer us from one genuinely persistent state to another. In this case, increasing the probability that we land in a better persistent state really could have astronomical value.

One possibility is the development of AGI. If we have only one chance to make AGI, and values get locked in at that moment due to AGI’s immense power and influence, we will want to lock in as much value as possible. For example, we may prefer that the U.S. develops AGI first, as opposed to a totalitarian state that could use this immense power and influence in a way less conducive to human wellbeing. A counterargument is that any attempt to bring about a better lock-in state might be somewhat in vain if we are just going to go extinct eventually anyway.

Another possibility is the development of artificial sentience. Once we have made this breakthrough, artificial sentience could proliferate rapidly at a certain welfare level, and after that there may be no going back (contingency). It might be that, once created, artificial sentience exists for a long time (persistence), even surviving human extinction events. Importantly, there may be multiple persistent states here with different value levels, e.g. one where the artificial sentience has welfare x, one where it has welfare x+1, one with welfare x+2, and so on. Perhaps we can help steer between these states by raising awareness of the moral status of digital sentience so that, when we do create it, we ensure it has good welfare.

OK, that was all very speculative. I am conscious that we might only be able to justify a focus on certain x-risk interventions by making a number of contentious assumptions - which was the original issue. Even so, I do want us to do some digging to see if there are any x-risk reducing interventions that truly are contingent and persistent. Rethink’s examination of a general conception of x-risk reduction is very useful, but I feel we need to move to more granular analysis that focuses on specific interventions.

If not x-risk, then what?

The Effective Altruism community has traditionally considered three primary cause areas: global health and development (GHD), animal welfare, and reducing existential risk. These are also the three buckets considered in the CURVE sequence.

If we lose existential risk, what should we revert to, assuming risk-neutral EVM? GHD? Animal welfare? Something else?

This is a question the EA community has tackled in some depth, but I think there are still more questions to address:

I’m sure there are many more important questions that still need investigation. Generally, I would like to see Rethink Priorities continue to be informed by more foundational cause prioritization work carried out by institutions such as the Global Priorities Institute. Where helpful, Rethink could build on foundational findings with more applied, empirical research. To be fair, it seems they already do this to some extent, and they are of course within their rights not to accept what any particular GPI paper says.

Understanding the implications of other decision theories

The CURVE sequence considered some alternate decision theories because risk-neutral EVM has some counterintuitive implications. But these alternate decision theories have their own counterintuitive implications (see my ‘Reflections on the sequence’ section for some examples). What are the problems with the alternative theories and how serious are they?

More research on fanaticism

Ultimately, the CURVE sequence’s criticisms of x-risk reduction are a bit of a moot point if there isn’t in fact any issue with fanaticism, as some have argued. Fanaticism seems to be a crux. It might not be Rethink that does this work, but I would like to see more investigation into how important it is to avoid fanaticism in cause prioritization.

Concluding remarks

The CURVE sequence was great: it stimulated a lot of debate and furthered my understanding of the conditions under which we can get astronomical value, and of how realistic those conditions may be.

I hope the Worldview Investigations Team continues their great work and hopefully gets closer to “settling” some of the key debates. Until then, I will continue to point people who ask what they should do in the direction of x-risk reduction, albeit with a bit more trepidation.


arvomm @ 2023-11-28T16:23 (+23)

Thank you for deeply engaging with our work and for laying out your thoughts on what you think are the most promising paths forward, like searching for contingent and persistent interventions, applying a medium-term lens to global health and animal welfare, or investigating fanaticism. I thought your post was well-written, to the point and enjoyable.

Jack Malde @ 2023-11-28T17:44 (+11)

Thank you Arvo, I really appreciate it! I look forward to seeing more work from you and the team.

Pablo @ 2023-11-29T10:13 (+11)

The Summary of posts in the sequence alone was super useful. Perhaps the RP team would like to include it, or a revised version of it, in the sequence introduction?

Bob Fischer @ 2023-11-29T15:32 (+10)

Thanks for the idea, Pablo. I've added summaries to the sequence page.

SummaryBot @ 2023-11-28T12:47 (+10)

Executive summary: The Rethink Priorities CURVE sequence raised important critiques of existential risk reduction as an overwhelming priority, but gaps remain in understanding whether some x-risk interventions may still be robustly valuable and what the best alternatives are.

Key points:

  1. X-risk reduction may only be astronomically valuable under specific scenarios like fast value growth and time of perils that seem unlikely.
  2. It's unclear if some x-risk interventions avoid these critiques by being uniquely persistent and contingent.
  3. If x-risk falls, it's unclear what the best cause area is - global health, animal welfare, or something else?
  4. There are still open questions around issues like fanaticism, problems with alternate decision theories, and foundational cause prioritization.
  5. More research is needed to settle the debates raised by the CURVE sequence.

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Vasco Grilo @ 2023-11-30T19:26 (+2)

Thanks for the post, Jack!

In Uncertainty over time and Bayesian updating, David Rhys Bernard estimates how quickly uncertainty about the impact of an intervention increases as the time horizon of the prediction increases. He shows that a Bayesian should put decreasing weight on longer-term estimates. Importantly, he uses data from various development economics randomized controlled trials, and it is unclear to me how much the conclusions might generalize to other interventions.

For me the following is the most questionable assumption:

Constant variance prior: We assume that the variance of the prior was the same for each time horizon whereas the variance of the signal increases with time horizon for simplicity.

[...]

If the variance of the prior grows at the same speed as the variance of the signal then the expected value of the posterior will not change with time horizon.

I think the rate of increase of the variance of the prior is a crucial consideration. Intuitively, I would say the variance of the prior grows at the same speed as the variance of the signal, in which case the signal would not be discounted.
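A minimal sketch of why this matters, using the standard normal-normal update with hypothetical numbers (the weight on the signal is the prior variance divided by the sum of prior and signal variances):

```python
def posterior_mean(prior_mean, prior_var, signal, signal_var):
    """Normal-normal Bayesian update: posterior mean of the effect."""
    w = prior_var / (prior_var + signal_var)  # weight on the signal
    return w * signal + (1 - w) * prior_mean

# Hypothetical numbers purely for illustration.
mu0, s = 0.0, 10.0

# Constant prior variance (the paper's assumption): a longer horizon means
# a noisier signal, so the signal gets discounted more and more.
short = posterior_mean(mu0, 1.0, s, 1.0)         # w = 0.5 -> 5.0
long = posterior_mean(mu0, 1.0, s, 9.0)          # w = 0.1 -> 1.0

# Prior variance growing at the same rate as signal variance:
# the weight, and hence the posterior mean, is unchanged.
long_growing = posterior_mean(mu0, 9.0, s, 9.0)  # w = 0.5 -> 5.0
print(short, long, long_growing)
```

If the prior variance grows in step with the signal variance, the weight on the signal, and hence the posterior mean, is unchanged, so longer-term estimates would not be discounted.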

David Mears @ 2023-11-30T12:17 (+2)

Thanks, really helpful to have this overview, makes me more likely to read the sequence itself (partly by directing me to which parts cover what)