Rethink's CURVE Sequence - The Good and the Gaps
By JackM @ 2023-11-28T01:06 (+96)
(Also posted to my substack The Ethical Economist: a blog covering Economics, Ethics and Effective Altruism.)
Rethink Priorities’ Worldview Investigations Team recently published their CURVE Sequence: “Causes and Uncertainty: Rethinking Value in Expectation.” The aim of the sequence was to:
- Consider alternatives to expected value maximization (EVM) for cause prioritization, motivated by some unintuitive consequences of EVM. The alternatives considered were risk aversion and contractualism.
- Explore the practical implications of a commitment to EVM and, in particular, whether it supports prioritizing existential risk (x-risk) mitigation over all else.
I found the sequence thought-provoking. It opened my eyes to the fact that x-risk mitigation may only be astronomically valuable under certain contentious conditions. I still prefer risk-neutral EVM (with some reasonable uncertainty), but am now less certain that this clearly implies prioritizing x-risk mitigation.
Having said that, the sequence wasn’t conclusive and it would take more research for me to determine that x-risk reduction shouldn’t be the top priority for the EA community. This post summarizes some of my reflections on the sequence.
Summary of posts in the sequence
- In Causes and Uncertainty: Rethinking Value in Expectation, Bob Fischer introduces the sequence. The motivation for considering alternatives to EVM is the theory’s unintuitive consequence that the highest-EV option needn’t be one where success is at all likely.
- In If Contractualism, Then AMF, Bob Fischer considers contractualism as an alternative to EVM. Under contractualism, the surest global health and development (GHD) work beats out x-risk mitigation and most animal welfare work, even if the latter options have higher EV.
- In How Can Risk Aversion Affect Your Cause Prioritization?, Laura Duffy considers how different risk attitudes affect cause prioritization. The results are complex and nuanced, but one key finding is that spending on corporate cage-free campaigns for egg-laying hens is robustly cost-effective under nearly all reasonable types and levels of risk aversion considered. Otherwise, prioritization depends on type and level of risk aversion.
- In How bad would human extinction be?, Arvo Muñoz Morán investigates the value of x-risk mitigation efforts under different risk assumptions. The persistence of an x-risk intervention - the risk mitigation’s duration - plays a key role in determining how valuable the intervention is. The rate of value growth is also pivotal, with only cubic and logistic growth (which may be achieved through interplanetary expansion) giving astronomical value to x-risk mitigation.
- In Charting the precipice: The time of perils and prioritizing x-risk, David Rhys Bernard considers various premises underlying the time of perils hypothesis which may be pivotal to the case for x-risk mitigation. All the premises are controversial to varying degrees, so it seems reasonable to assign a low credence to this version of the time of perils. Justifying x-risk mitigation based on the time of perils hypothesis may require being fanatical.
- In Uncertainty over time and Bayesian updating, David Rhys Bernard estimates how quickly uncertainty about the impact of an intervention increases as the time horizon of the prediction increases. He shows that a Bayesian should put decreasing weight on longer-term estimates. Importantly, he uses data from various development economics randomized controlled trials, and it is unclear to me how much the conclusions might generalize to other interventions.
- In The Risks and Rewards of Prioritizing Animals of Uncertain Sentience, Hayley Clutterbuck examines several ways of incorporating risk sensitivity into the comparisons between interventions to help numerous animals with a relatively low probability of sentience (such as insects) and less numerous animals of likely or all-but-certain sentience (such as chickens and humans). She shows that while one kind of risk aversion makes us more inclined to help insects, two other kinds of risk aversion suggest the opposite.
- In Is x-risk the most cost-effective if we count only the next few generations?, Laura Duffy considers whether we can justify x-risk mitigation when its value is restricted to the next few generations. She shows that, given plausible assumptions, x-risk may not be orders of magnitude better than our best funding opportunities in other causes, especially when evaluated under non-EVM risk attitudes. The motivation for considering only the next few generations is the uncertainty, raised in previous posts in the sequence, about the “time of perils” hypothesis and the long-run value of x-risk mitigation.
- In Rethink Priorities’ Cross-Cause Cost-Effectiveness Model: Introduction and Overview, several authors present a cross-cause cost-effectiveness model (CCM), a tool for assessing the value of different kinds of interventions and research projects conditional on a wide range of assumptions. This leads to a number of lessons, including how strongly the expected value of x-risk mitigation depends on future population dynamics, how variable the value of x-risk mitigation is, and how rare combinations of tail-end results and correlations between parameter distributions may prove decisive.
- In How Rethink Priorities is Addressing Risk and Uncertainty, RP’s Co-CEOs explain that, going forward, they intend to incorporate multiple decision theories into Rethink Priorities’ modeling, more rigorously quantify the value of different courses of action, and adopt transparent decision-making processes.
Reflections on the sequence
Before I proceed - a quick note. The CURVE sequence didn’t set out to argue for alternatives to EVM. Rather, it recognizes that some may prefer alternatives to EVM and then assesses what these alternatives would say about cause prioritization. As someone who finds the underlying justification for risk-neutral EVM the strongest of any decision theory (e.g. see the von Neumann–Morgenstern theorem), I was less interested in the posts that assessed other theories.
Risk-neutral EVM has some counterintuitive conclusions (e.g. fanaticism), but the other theories have their own issues. In the contractualism post it was pointed out that contractualism can favor spending a billion dollars saving one life for certain over spending the same amount of money to almost certainly save far more lives. This seems almost antithetical to the core ideas of Effective Altruism. Meanwhile, risk aversion has been shown to lead to unquestionably poor decisions, and it doesn’t even avoid fanaticism - the main feature of risk-neutral EVM we were looking to avoid.
Of course I have some uncertainty over the best decision theory, so it is useful to know what other theories say. I tend to favor a maximizing expected choiceworthiness (MEC) approach to dealing with moral uncertainty. MEC says that we ought to assign probabilities to the correctness of different theories, assess the moral worth of actions under each theory, and then choose the action with the highest overall expected moral value based on these probabilities. As someone who assigns the highest credence to risk-neutral EVM, all I need is for x-risk reduction to be hugely valuable under risk-neutral EVM for it to swamp all my altruistic endeavors. With this in mind, I was mostly interested to see what the sequence would say about the claim that risk-neutral EVM implies x-risk reduction is our most pressing priority. With that clarification out of the way, let’s dive into some reflections.
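As a rough sketch of how MEC works (my notation; the sequence doesn’t commit to any particular formalization): if $p(T_i)$ is my credence in decision theory $T_i$ and $CW_i(a)$ is the choiceworthiness of action $a$ under $T_i$, MEC recommends the action maximizing

$$EC(a) = \sum_i p(T_i)\,CW_i(a).$$

So if most of my credence sits on risk-neutral EVM and x-risk reduction is astronomically valuable under that theory, that single term can dominate the sum (setting aside the usual worries about intertheoretic comparisons of value).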
The Good
X-risk reduction may only work under very specific scenarios…
The standard case for x-risk reduction is simple: the future could be vast and great, so it seems important to do what we can to ensure that potential isn’t destroyed. The CURVE sequence shows that things aren’t quite this simple.
Arvo Muñoz Morán’s post argues that, for x-risk reduction to be far and away the most pressing priority, we may need to assume some or perhaps all of the following:
- Fast value growth, e.g. through interplanetary expansion.
- That we will face a small number of high-risk periods, but otherwise low risk (e.g. a time of perils or great filters hypothesis).
- That the best interventions available to us have persistent effects, e.g. that they don’t just reduce x-risk for a few years but for a longer time period.
…and these scenarios may not be all that likely
Importantly, these scenarios that are needed for x-risk reduction to be overwhelmingly important may not be at all realistic. The CURVE sequence doesn’t cover how realistic fast value growth might be, but it does examine the other two key conditions.
David Rhys Bernard’s post examines the time of perils hypothesis, noting no fewer than 18 premises that may be required to ground it. All the premises are controversial to varying degrees, and their conjunction seems pretty unlikely.
Arvo Muñoz Morán’s post briefly touches on persistence, suggesting that it is unlikely to be higher than 50 years. Actions that drastically reduce risk and do so for a long time are rare. Importantly, the persistence of an intervention can be blunted by the fact that another actor might have done the same thing shortly afterwards.
Furthermore, Laura Duffy’s post suggests that if we lose these conditions, we may not be able to fall back on the argument that x-risk reduction remains overwhelmingly important when considering just the next few generations.
What the CURVE sequence has done is show that x-risk reduction may only have overwhelming value in a small number of unlikely scenarios. In other words, x-risk reduction is looking increasingly fanatical. This is useful to those of us who feel at least uncomfortable with fanaticism, which I suspect is the vast majority of us.
The Gaps
It is unreasonable to expect the CURVE sequence to have completely settled the debate. Of course there are areas for further research, some of which are explicitly noted in the posts themselves. Here are some of my reflections on where I would like to see further research.
Are there any x-risk interventions that avoid the pitfalls?
The CURVE sequence fires some shots against a very general conception of ‘x-risk reduction’. Specifically, it looks at mitigating risks of human extinction. But existential risk (x-risk) is wider than this - it covers anything that destroys our future potential whether or not this is via extinction. It is possible that CURVE’s criticisms don’t apply to all x-risk reducing interventions. Maybe there are some that, through their unique features, remain robustly good.
To think about this it is important to understand exactly why a general x-risk intervention falls prey to CURVE’s critique. The key insight is that the intervention’s effects are likely to get ‘canceled out’ in some way (a toy calculation after the list below makes this concrete). For example:
- We could reduce x-risk for a while but then succumb to an x-risk in the near- or medium-term anyway because our intervention wasn’t persistent enough.
- We could try an intervention that someone else was likely to do anyway. In this case our action didn’t really have much counterfactual impact because it wasn’t contingent.
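To make the first point concrete, here is a deliberately crude version of the kind of constant-risk model the sequence works with (my simplification, not the sequence’s exact model). Suppose each century delivers value $v$ if we survive it, the background risk of extinction per century is a constant $r$, and our intervention reduces the risk in the current century only, from $r$ to $r-\delta$. Then the expected value without the intervention is

$$\sum_{n=1}^{\infty} v\,(1-r)^n = \frac{v\,(1-r)}{r},$$

and with the intervention it is $v\,(1-r+\delta)/r$, so the gain is only $v\delta/r$. Unless the per-century value $v$ grows quickly or the background risk $r$ eventually falls to near zero (a time of perils), a one-off, non-persistent reduction in risk buys value on the order of the expected remaining future under constant risk, which is nowhere near astronomical.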
The question I have is, are there any interventions or types of intervention that actually are persistent and contingent? We may only need one or a small number of these for x-risk reduction to remain a very pressing priority for our community.
I’m not going to definitively answer my own question here, partly because I’m pretty certain I’m not clever enough. This is why I want the Rethink team to do further work. But I’ll provide some scattered thoughts anyway.
We most easily get contingency by considering value lock-in. In other words, we can consider events that, once they have happened, put us on another trajectory from which there’s no going back. In these cases the question “would someone else have done the same thing later on anyway?” becomes redundant because we only really had one chance at influencing the event. Extinction is one example of a lock-in event. What are the chances we would come back into existence afterwards? Pretty much zilch.
Extinction is a persistent state. Unfortunately, ‘not being extinct’ doesn’t have this same property, or at least not to the same degree. It’s much easier to go from not extinct to extinct than it is the other way around. Non-extinction isn’t really a persistent state. This is what blunts the value of extinction-reducing interventions. It just seems hard to persistently reduce the risk.
But there may be non-extinction lock-in events that avoid this pitfall. Maybe there are interventions which help steer us from one genuinely persistent state to another. In this case, increasing the probability that we land in a better persistent state really could have astronomical value.
One possibility is the development of AGI. If we have one chance to make AGI and we lock-in value at this moment due to the immense power and influence of AGI, we are going to want to lock-in as much value as possible. For example, we may prefer that the U.S. develops AGI first, as opposed to a totalitarian state that could use this immense power and influence in a way that is less conducive to human wellbeing. A counterargument is that any attempt to bring about a better lock-in state might be somewhat in vain if we are just going to go extinct eventually anyway.
Another possibility is the development of artificial sentience. Once we have made this breakthrough, artificial sentience could proliferate rapidly with a certain welfare level. After this there may be no going back (contingency). It might be that once artificial sentience is created it exists for a long time (persistent state), even evading human extinction events. Importantly there may be multiple persistent states here with different value levels e.g. one where the artificial sentience has welfare x, one where it has welfare x+1, one with welfare x+2 etc. Perhaps we can help steer between these states by raising awareness of the moral status of digital sentience so that, when we do create it, we ensure it has good welfare.
OK, that was all very speculative. I am conscious that we might only be able to justify a focus on certain x-risk interventions by making a number of contentious assumptions - which was the original issue. Even so, I do want us to do some digging to see if there are any x-risk reducing interventions that truly are contingent and persistent. Rethink’s examination of a general conception of x-risk reduction is very useful, but I feel we need to move to more granular analysis that focuses on specific interventions.
If not x-risk, then what?
The Effective Altruism community has traditionally considered three primary cause areas: global health and development (GHD), animal welfare, and reducing existential risk. These are also the three buckets considered in the CURVE sequence.
If we lose existential risk, what do we revert to, assuming risk-neutral EVM? GHD? Animal welfare? Something else?
This is a question that the EA community has tackled in some depth, but I think there are still more questions to tackle:
- Which interventions are we not “clueless” about? Is it really reasonable to fund GHD interventions when it is plausible that the negative animal welfare impacts may exceed the positive human welfare impacts? I provide some initial thoughts in this comment chain.
- The failure of x-risk reduction may not mean the failure of longtermism. Toby Ord discusses how speed-ups and enhancements may be hugely valuable as they scale with both the instantaneous value of the long term future and its duration. Are there good speed-up or enhancement intervention options?
- If we do give up on longtermism, let’s be wary of throwing the baby out with the bathwater. Maybe we can revert to medium-termism, which could imply something like boosting technological progress / economic growth or mitigating climate change? How do these options compare to GHD and animal welfare in terms of marginal cost-effectiveness? Also, are there GHD or animal welfare interventions that can be considered medium-termist? Are any of these questions simply unanswerable and, if so, how do we proceed?
I’m sure there are many more important questions that still need investigation. Generally, I would like to see Rethink Priorities continue to be informed by more foundational cause prioritization work carried out by institutions such as the Global Priorities Institute. Where helpful, Rethink could build on foundational findings with more applied, empirical research. To be fair, it seems they already do this to some extent, and they are well within their rights not to accept what any particular GPI paper says.
Understanding the implications of other decision theories
The CURVE sequence considered some alternate decision theories because risk-neutral EVM has some counterintuitive implications. But these alternate decision theories have their own counterintuitive implications (see my ‘Reflections on the sequence’ section for some examples). What are the problems with the alternative theories and how serious are they?
More research on fanaticism
Ultimately the CURVE sequence’s criticisms of x-risk reduction are a bit of a moot point if there isn’t in fact any issue with fanaticism, as has been argued. Fanaticism seems to be a crunch point. It might not be Rethink that does this, but I would like to see more investigation into how important it is to avoid fanaticism in cause prioritization.
Concluding remarks
The CURVE sequence was great: it stimulated a lot of debate and furthered my understanding of the conditions under which we can get astronomical value, and how realistic those conditions may be.
I hope the Worldview Investigations Team continues their great work and hopefully gets closer to “settling” some of the key debates. Until then, I will continue to point people who ask what they should do in the direction of x-risk reduction, albeit with a bit more trepidation.
arvomm @ 2023-11-28T16:23 (+23)
Thank you for deeply engaging with our work and for laying out your thoughts on what you think are the most promising paths forward, like searching for contingent and persistent interventions, applying a medium-term lens to global health and animal welfare, or investigating fanaticism. I thought your post was well-written, to the point and enjoyable.
Jack Malde @ 2023-11-28T17:44 (+11)
Thank you Arvo, I really appreciate it! I look forward to seeing more work from you and the team.
Pablo @ 2023-11-29T10:13 (+11)
The Summary of posts in the sequence alone was super useful. Perhaps the RP team would like to include it, or a revised version of it, in the sequence introduction?
Bob Fischer @ 2023-11-29T15:32 (+10)
Thanks for the idea, Pablo. I've added summaries to the sequence page.
SummaryBot @ 2023-11-28T12:47 (+10)
Executive summary: The Rethink Priorities CURVE sequence raised important critiques of existential risk reduction as an overwhelming priority, but gaps remain in understanding whether some x-risk interventions may still be robustly valuable and what the best alternatives are.
Key points:
- X-risk reduction may only be astronomically valuable under specific scenarios like fast value growth and time of perils that seem unlikely.
- It's unclear if some x-risk interventions avoid these critiques by being uniquely persistent and contingent.
- If x-risk falls, it's unclear what the best cause area is - global health, animal welfare, or something else?
- There are still open questions around issues like fanaticism, problems with alternate decision theories, and foundational cause prioritization.
- More research is needed to settle the debates raised by the CURVE sequence.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Vasco Grilo @ 2023-11-30T19:26 (+2)
Thanks for the post, Jack!
In Uncertainty over time and Bayesian updating, David Rhys Bernard estimates how quickly uncertainty about the impact of an intervention increases as the time horizon of the prediction increases. He shows that a Bayesian should put decreasing weight on longer-term estimates. Importantly, he uses data from various development economics randomized controlled trials, and it is unclear to me how much the conclusions might generalize to other interventions.
For me the following is the most questionable assumption:
Constant variance prior: We assume that the variance of the prior was the same for each time horizon whereas the variance of the signal increases with time horizon for simplicity.
[...]
If the variance of the prior grows at the same speed as the variance of the signal then the expected value of the posterior will not change with time horizon.
I think the rate of increase of the variance of the prior is a crucial consideration. Intuitively, I would say the variance of the prior grows at the same speed as the variance of the signal, in which case the signal would not be discounted.
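To make the mechanism explicit, here is the standard normal-normal update (my notation, which may not match the post’s exact setup). With prior mean $\mu_0$, prior variance $\tau_t^2$, and a signal $s_t$ with variance $\sigma_t^2$ at horizon $t$, the posterior mean is

$$\mu_{\text{post}} = \mu_0 + \frac{\tau_t^2}{\tau_t^2 + \sigma_t^2}\,(s_t - \mu_0).$$

If $\tau_t^2$ is held fixed while $\sigma_t^2$ grows with the horizon (the constant variance prior assumption), the weight on the signal shrinks towards zero, so longer-term estimates are discounted. If instead $\tau_t^2$ grows at the same rate as $\sigma_t^2$, the weight stays constant and the signal isn’t discounted further, which is the point above.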
David Mears @ 2023-11-30T12:17 (+2)
Thanks, really helpful to have this overview, makes me more likely to read the sequence itself (partly by directing me to which parts cover what)