Leadership change at the Center on Long-Term Risk

By JesseClifton, Tristan Cook, Mia_Taylor @ 2025-01-31T21:08 (+154)

The Center on Long-Term Risk (CLR) does research and community building aimed at reducing s-risk.

Jesse Clifton is stepping down as CLR’s Executive Director. He’ll be succeeded by Tristan Cook as Managing Director and Mia Taylor as Interim Research Director.[1]

Statement from Jesse

Over the past year or so, I’ve become increasingly convinced by arguments that we are clueless about the sign (in terms of expected total suffering reduced) of interventions aimed at reducing s-risk. (And I think it’s plausible that we should consider ourselves clueless about interventions aimed at improving expected total welfare generally.) The other researchers on CLR’s Conceptual Research team[2] have come to a similar view,[3] but not the other staff or the board, who remain positive on the pre-cluelessness priorities.

Given this, I don’t think it makes sense for me to lead CLR. So, for now, I’ll be transitioning to working part-time at CLR (largely, helping with the transition to new leadership) and part-time at Polaris Ventures, where I’ll be leading on animal welfare grantmaking and helping with Polaris’ AI-related grantmaking. (Polaris hasn’t updated their views on cluelessness, but is starting some small-scale animal welfare grantmaking as a form of worldview diversification.) I think animal welfare work is more likely to be recommended by my all-things-considered normative views than what I’ve been doing, though I’m not confident that cluelessness doesn’t undermine this, too. Besides that, I still value s-risk reducers succeeding by their own epistemic standards, and plan to continue contributing as a member of the s-risk community.

This decision wasn’t made lightly. For the past six years, I’ve thought that working on s-risk reduction was the most important thing in the world, and acted accordingly. I’ve only decided to change direction after a lot of (I hope) careful thought and discussion over the past year or so. Concluding that I can’t expect to reduce s-risk in the way I had hoped comes with no small sense of loss. And, I’m personally quite sad to be reducing my involvement with CLR. It’s a pretty amazing place as far as moral and epistemic seriousness goes, and I like to think I’ve grown a lot in my time here, thanks to the people I’ve gotten to work with. On the other hand, I’m excited to get to work with Polaris, which has a great team and where I’ll be challenged in a role pretty different to my current one. 

Regarding the new leadership: I think very highly of Tristan and Mia, and I’m excited for them to take on their new roles. Tristan’s done well as lead of our Community & Engagement team over the past few months, for example, overseeing the redesign and implementation of our intro fellowship programs. Mia’s been leading our Empirical Research team, and has done a great job developing and beginning to execute on the team’s research agenda. They’re both really sharp, conscientious, and dedicated to doing as much good as possible.

Statement from Mia and Tristan

We — Mia and Tristan — are deeply grateful for Jesse's dedicated leadership at CLR for the past 5 years. While we’re sad to see him step down, we’re excited to take on this responsibility and we’re grateful for the support and trust of Jesse, the team, and the board.

We both started our careers at CLR through the Summer Research Fellowship program — Tristan in 2021 and Mia in 2022. Our development as thinkers owes a lot to our colleagues and mentors at CLR — Jesse in particular. In our time at CLR, we’ve really appreciated the culture of intellectual rigor and moral seriousness. Preserving those values will be a priority through the transition.

We take the arguments for cluelessness raised by Jesse and other researchers at CLR seriously. We believe that predicting the long-term consequences of our actions is hard and that we are likely unaware of many important considerations. Moreover, actions aimed at reducing s-risks face robustness problems, particularly due to the low absolute likelihood of the outcomes we wish to prevent. However, even in light of these challenges, we remain convinced that CLR’s mission of s-risk reduction should continue. 

Our immediate priority is to decide on CLR’s direction. We see this transition as an opportunity for refining our strategy based on both the robustness considerations raised by internal research and external developments — particularly advances in AI capabilities.

We feel incredibly lucky to work with such a talented and thoughtful team. We have a wealth of insight on our priority areas from over a decade of research on s-risk reduction, and we look forward to continuing to advance this work in the years ahead.

  1. ^

     Mia will be using the next four months to explore other options before deciding whether to make a longer-term commitment to CLR.

  2. ^

     CLR is divided into the Conceptual Research, Empirical Research, and Community & Engagement teams. The Conceptual Research team consists of Anthony DiGiovanni, Anni Leskelä, and Nicolas Macé.

  3. ^

     As an example of some of the thinking that’s gone into this, see this post. We may post more summaries of our thoughts on cluelessness in the future.


Ben_West🔸 @ 2025-02-02T05:26 (+18)

I appreciate you being willing to share your candid reasons publicly, Jesse. Best of luck with your future plans, and best of luck to Tristan and Mia!

kokotajlod @ 2025-02-10T21:49 (+16)

I left this comment on one of their docs about cluelessness, reposting here for visibility:

CLR embodies the best of effective altruism in my opinion. You guys are really truly actually trying to make the world better / be ethical. That means thinking hard and carefully about what it means to make the world better / be ethical, and pivoting occasionally as a result.

I am not a consequentialist myself, certainly not an orthodox EV-maximizing bayesian, though for different reasons than you describe here (but perhaps for related reasons?). I think I like your section on alternatives to 'going with your best guess,' I agree such alternatives should be thoroughly explored because 'going with your best guess' is pretty unsatisfying. 

I'm not sure whether any of the alternatives will turn out to be more satisfying after more reflection, which is why I'm not ready to say I agree with you overall just yet.

But I'm certainly sympathetic and curious.

Thanks!

cloud @ 2025-02-05T21:20 (+15)

Jesse's departure is a huge loss for the field of AI safety. It is also consistent with what I know of his character. It has never been about status, intellectual stimulation, or money for him. His motivation has always been, plainly, to do good.

Jesse anticipated and articulated core safety challenges relating to AI cooperation and conflict in 2019-- a time when even "vanilla" AI x-risk was a niche concern. His expansive, solo-authored research agenda set the stage for years of research to come, and-- to my eye-- reads even better today, in 2025.

I wish the best to Jesse, Tristan, Mia, and the rest of CLR.

Angelina Li @ 2025-02-03T14:55 (+10)

I am glad you've come to a decision here, even though it sounds like a painful one! I really appreciated being able to read this, thank you for sharing!

I think animal welfare work is more likely to be recommended by my all-things-considered normative views than what I’ve been doing, though I’m not confident that cluelessness doesn’t undermine this, too.

@JesseClifton I'd be really curious to hear your thoughts on why animal welfare work seems better under your normative beliefs, if you're open to sharing. (Not sharing my views because I don't want to anchor you.) Someone I'm close to is trying to figure out what they believe about cluelessness, and I thought they might benefit from hearing someone else think through this!

JesseClifton @ 2025-02-07T21:46 (+15)

Some reasons why animal welfare work seems better:

  • I put some weight on a view which says: “When doing consequentialist decision-making, we should set the net weight of the reasons we have no idea how to weigh up (e.g., long-run flowthrough effects) to zero.” This probably implies restricting attention to near-term consequences, and animal welfare interventions seem best for that. (I just made a post that discusses this approach to decision-making.)
    • I think this kind of view is hard to make theoretically satisfying, but it does a good enough job of capturing intuitions relative to alternatives that I currently want to give it some weight.
  • Non-consequentialist considerations might push towards fighting the worst ongoing atrocities / injustices, which also suggests animal-related work.  
JesseClifton @ 2025-02-06T11:21 (+5)

(Thanks! Haven't forgotten about this, will try to respond soon.)

Vasco Grilo🔸 @ 2025-02-05T16:19 (+1)

Thanks for the update!

Over the past year or so, I’ve become increasingly convinced by arguments that we are clueless about the sign (in terms of expected total suffering reduced) of interventions aimed at reducing s-risk.

I believe one can positively influence futures which have an astronomically positive or negative value, but only negligibly so. I think the effects of one's actions are well approximated by considering just the first 100 years or so.