Effective altruism in the age of AGI

By William_MacAskill @ 2025-10-10T10:57 (+138)

This post is based on a memo I wrote for this year’s Meta Coordination Forum. See also Arden Koehler’s recent post, which hits a lot of similar notes. 

Summary

The EA movement stands at a crossroads. In light of AI’s very rapid progress, and the rise of the AI safety movement, some people view EA as a legacy movement set to fade away; others think we should refocus much more on “classic” cause areas like global health and animal welfare.

I argue for a third way: EA should embrace the mission of making the transition to a post-AGI society go well, significantly expanding our cause area focus beyond traditional AI safety. This means working on neglected areas like AI welfare, AI character, AI persuasion and epistemic disruption, human power concentration, space governance, and more (while continuing work on global health, animal welfare, AI safety, and biorisk).

These additional cause areas are extremely important and neglected, and particularly benefit from an EA mindset (truth-seeking, scope-sensitive, willing to change one’s mind quickly). I think that people going into these other areas would be among the biggest wins for EA movement-building right now — generally more valuable than marginal technical safety or safety-related governance work. If we can manage to pull it off, this represents a potentially enormous opportunity for impact for the EA movement.

There's recently been increased emphasis on "principles-first" EA, which I think is great. But I worry that in practice a "principles-first" framing can become a cover for anchoring on existing cause areas, rather than an invitation to figure out what other cause areas we should be working on. Being principles-first means being genuinely open to changing direction based on new evidence; if the world has changed dramatically, we should expect our priorities to change too.

This third way will require a lot of intellectual nimbleness and willingness to change our minds. Post-FTX, much of EA adopted a "PR mentality" that I think has lingered and is counterproductive. EA is intrinsically controversial because we say things that aren't popular — and given recent events, we'll be controversial regardless. This is liberating: we can focus on making arguments we think are true and important, with bravery and honesty, rather than constraining ourselves with excessive caution.

Three possible futures for the EA movement

AI progress has been going very fast, much faster than most people anticipated.[1] AI safety has become its own field, with its own momentum and independent set of institutions. It can feel like EA is in some ways getting eaten by that field: for example, on university campuses, AI safety groups are often displacing EA groups. 

Here are a couple of attitudes to EA that I’ve seen people have in response:

  1. EA is a legacy movement, set to fade away: it served its purpose by helping people see how crucial AI would be, but the important work now happens within AI safety and governance.[2]
  2. EA should refocus much more on its “classic” cause areas, like global health and animal welfare.

I get a lot of (1)-energy from e.g. the Constellation office and other folks heavily involved in AI safety and governance. I’ve gotten (2)-energy in a more diffuse way from some conversations I’ve had, and from online discussion; it’s sometimes felt to me that a “principles-first” framing of EA (which I strongly agree with) can in practice be used as cover for an attitude of “promote the classic mix of cause areas.”

I think that the right approach is a third way:[3]

  3. EA should embrace the mission of making the transition to a post-AGI society go well, significantly expanding its cause area focus beyond traditional AI safety while continuing work on global health, animal welfare, and biorisk.

To make this more precise and concrete, I mean something like:

Thanks to Lizka Vaintrob for the diagram.

More broadly, I like to think about the situation like this:

I think the third way is the right approach for two big reasons:

  1. I think that, in the aggregate, AGI-related cause areas other than technical alignment and biorisk are as big a deal as technical alignment and biorisk, or even bigger, and there are highly neglected areas in this space. EA can help fill those gaps.
  2. Currently, the EA movement feels intellectually adrift, and this focus could be restorative.

A third potential reason is:

  3. Even just for AI safety, EA-types are way more valuable than non-EA technical alignment researchers.

I’ll take each in turn.

Reason #1: Neglected cause areas

The current menu of cause areas in EA is primarily:

  • Global health and development
  • Animal welfare
  • AI safety
  • Biorisk

On the “third way” approach, taking on the mission of making the transition to a post-AGI society go well, the menu might be more like this (though note this is meant to be illustrative rather than exhaustive, is not in any priority order, and in practice these wouldn’t all get equal weight[4]): 

  • AI safety
  • Biorisk
  • AI welfare
  • AI character[5]
  • AI persuasion and epistemic disruption
  • Human power concentration
  • Space governance
  • AI for better reasoning, decision-making and coordination
  • Global health and development
  • Animal welfare

I think the EA movement would accomplish more good if people and money were more spread across these cause areas than they currently are.  

I make the basic case for the importance of some of these areas, and explain what they are at more length, in PrepIE (see especially section 6) and Better Futures (see especially the last essay).[6]  The key point is that there’s a lot to do to make the transition to a post-AGI world go well; and much of this work isn't naturally covered by AI safety and biosecurity.  

These areas are unusually neglected. The pipeline of AI safety folks is much stronger than it is for these other areas, thanks to MATS and the other fellowships; the pipeline for many of these other areas is almost nonexistent. And the ordinary incentives for doing technical AI safety research (at some employers, at least) are now very strong: including equity, you could get a starting salary on the order of $1M/yr, in an exciting, high-status role, in the midst of the action, alongside other smart people. Compare that with, say, public campaigning, where you get paid much less and get a lot more hate.[7]

These other cause areas are also unusually EA-weighted, in the sense that they particularly benefit from an EA mindset (i.e. ethically serious, intensely truth-seeking, high-decoupling, scope-sensitive, broadly cosmopolitan, ambitious, and willing to take costly personal actions).

If AI progress continues as I expect it to, over the next ten years a huge amount will change in the world as a result of new AI capabilities, new technology, and society’s responses to those changes. We’ll learn a huge amount more, too, from AI-driven intellectual progress. To do the most good, people will need to be extremely nimble and willing to change their minds, in a way that most people generally aren’t.

The biggest note of caution, in my view, is that at the moment these other areas have much less absorptive capacity than AI safety and governance: there isn’t yet a thriving ecosystem of organisations and fellowships etc that make it easy to work on these areas. That means I expect there to be a period of time during which: (i) there’s a lot of discussion of these issues; (ii) some people work on building the necessary ecosystem, or on doing the research necessary to figure out what the most viable paths are; but (iii) most people pursue other career paths, with an eye to switching in when the area is more ripe. This situation reminds me a lot of AI takeover risk or biorisk circa 2014.

Reason #2: EA is currently intellectually adrift

Currently, the online EA ecosystem doesn’t feel like a place full of exciting new ideas, in a way that’s attractive to smart and ambitious people:

Things aren’t disastrous or irrecoverable, and there’s still lots of promise. (E.g. I thought EAG London was vibrant and exciting, and in general in-person meetups still seem great.) But I think we’re far from where we could be.

It seems like a very fortunate bonus to me that these other cause areas are so intellectually fertile: there are just so many unanswered questions, and so many gnarly tradeoffs to engage with. An EA movement that was engaging much more with these areas would, by its nature, be intensely intellectually vibrant.

It also seems to me there’s tons of low-hanging fruit in this area. For one thing, there’s already a tremendous amount of EA-flavoured analysis happening, by EAs or the “EA-adjacent”; it’s just that most of it happens in person, or in private Slack channels and Google Docs. And when I’ve run the content of this post by old-hand EAs who are now focusing on AI, the feedback I’ve gotten is an intense love of EA, and a keenness (all other things being equal) to Make EA Great Again; it’s just that they’re busy, and it’s not salient to them what they could be doing.

I think this is likely a situation where there are multiple equilibria we could end up in. If online EA doesn’t seem intellectually vibrant, then it’s not an attractive place for someone to intellectually engage with; if it does seem vibrant, then it is. (LessWrong has seen this dynamic: it fell into comparative decline before LessWrong 2.0 rebooted it into an energetic intellectual community.)

Reason #3: The benefits of EA mindset for AI safety and biorisk

Those are my main reasons for wanting EA to take the third path forward. But there’s an additional argument, which others have pressed on me: Even just for AI safety or biorisk reduction, EA-types tend to be way more impactful than non-EA types.

Unfortunately, many of these examples are sensitive and I haven’t gotten permission to talk about them, so instead I’ll quote Caleb Parikh, who gives a sense of this:

Some "make AGI go well influencers" who have commented or posted on the EA Forum and, in my view, are at the very least EA-adjacent include Rohin Shah, Neel Nanda, Buck Shlegeris, Ryan Greenblatt, Evan Hubinger, Oliver Habryka, Beth Barnes, Jaime Sevilla, Adam Gleave, Eliezer Yudkowsky, Davidad, Ajeya Cotra, Holden Karnofsky ....  most of these people work on technical safety, but I think the same story is roughly true for AI governance and other "make AGI go well" areas.

This isn’t a coincidence. The most valuable work typically comes from deeply understanding the big picture, seeing something very important that almost no one is doing (e.g. control, infosecurity), and then working on that. Sometimes, it involves taking seriously personally difficult actions (e.g. Daniel Kokotajlo giving up a large fraction of his family’s wealth in order to be able to speak freely).

Buck Shlegeris has also emphasised to me the importance of having common intellectual ground with other safety folks, in order to be able to collaborate well. Ryan Greenblatt gives further reasons in favour of longtermist community-building here.

This isn’t particularly Will-idiosyncratic

If you’ve got a very high probability of AI takeover (obligatory reference!), then my first two arguments, at least, might seem very weak because essentially the only thing that matters is reducing the risk of AI takeover. And it’s true that I’m unusually into non-takeover AGI preparedness cause areas, which is why I’m investing the time to write this.

But the broad vibe in this post isn’t Will-idiosyncratic. I’ve spoken to a number of people whose estimate of AI takeover risk is a lot higher than mine who agree (to varying degrees) with the importance of non-misalignment, non-bio areas of work, and buy that these other areas are particularly EA-weighted. 

If this is true, why aren’t more people shouting about this? The issue is that very few people, now, are actually focused on cause-prioritisation, in the sense of trying to figure out what new areas we should be working on. There’s a lot to do and, understandably, people have got their heads down working on object-level challenges. 

Some related issues

Before moving on to what, concretely, to do, I’ll briefly comment on three related issues, as I think they affect what the right path forward is.

Principles-first EA

There’s been a lot of emphasis recently on “principles-first” EA, and I strongly agree with that framing. But being “principles-first” means actually changing our mind about what to do, in light of new evidence and arguments, and as new information about the world comes in. I’m worried that, in practice, the “principles-first” framing can be used as cover for “same old cause-areas we always had.”[8]

I think that people can get confused by thinking about “AI” as a cause area, rather than thinking about a certain set of predictions about the world that have implications for most things you might care about. Even in “classic” cause areas (e.g. global development), there’s enormous juice in taking the coming AI-driven transformation seriously — e.g. thinking about how the transition can be structured so as to benefit the global poor as much as feasible.

I’ve sometimes heard people describe me as having switched my focus from EA to AI. But I think it would be a big mistake to think of AI focus as a distinct thing from an EA focus.[9] From my perspective, I haven’t switched my focus away from EA at all. I’m just doing what EA principles suggest I should do: in light of a rapidly changing world, figuring out what the top priorities are, and where I can add the most value, and focusing on those areas.

Cultivating vs growing EA

From a sterile economics-y perspective, you can think of EA-the-community as a machine for turning money and time into goodness:[10]

 

The purest depiction of the EA movement.

In the last year or two, there’s been a lot of focus on growing the inputs. I think this was important, in particular to get back a sense of momentum, and I’m glad that that effort has been pretty successful. I still think that growing EA is extremely valuable, and that some organisation (e.g. Giving What We Can) should focus squarely on growth.

But right now I think it’s even more valuable, on the current margin, to try to improve EA’s ability to turn those inputs into value — what I’ll broadly call EA’s culture. This is sometimes referred to as “steering”, but I think that’s the wrong image: the idea of trying to aim towards some very particular destination. I prefer the analogy of cultivation — like growing a plant, and trying to make sure that it’s healthy.

There are a few reasons why I think that cultivating EA’s culture is more important on the current margin than growing the inputs:

  1. Shorter AI timelines mean there’s less time for growth to pay off. Fundraising and recruitment typically take years, whereas cultural improvements (such as reallocating EA labour) can be faster.
  2. The expected future inputs have gone up a lot recently, and as the scale of inputs increases, the importance of improving the use of those inputs increases relative to the gains from increasing inputs even further.
    1. Money: As a result of AI-exposed valuations going up, hundreds of people will have very substantial amounts to donate; the total of expected new donations is in the billions of dollars. And if transformative AI really is coming in the next ten years, then AI-exposed equity is worth much more again, e.g. 10x+ as much.
    2. Labour: Here, there’s more of a bottleneck, for now. But, fuelled by the explosion of interest in AI, MATS and other fellowships are growing at a prodigious pace. The EA movement itself continues to grow. And we’ll increasingly be able to pay for “AI labour”: once AI can substitute for some role, an organisation can hire as much AI labour to fill that role as it can pay for, with no need to run a hiring round and no decrease in the quality of that labour as it scales up. Once we get closer to true AGI, money and labour become much more substitutable.
    3. In contrast, I see much more of a bottleneck coming from knowing how best to use those inputs.
  3. As I mentioned earlier, if we get an intelligence explosion, even a slow or muted one, there will be (i) a lot of change in the world, happening very quickly; and (ii) a lot of new information, ideas, and arguments being produced by AI in a short space of time. That means we need intense intellectual nimbleness. My strong default expectation is that people will not change their strategic picture quickly enough, or change what they are doing quickly enough. We can try to set up a culture that is braced for this.
  4. Cultivation seems more neglected to me, at the moment, and I expect this neglectedness to continue. It’s seductive to focus on increasing inputs because it’s easier to create metrics and track progress. For cultivation, metrics don’t fit very well: having a suite of metrics doesn’t help much with growing a plant. Instead, the right attitude is more like paying attention to what qualitative problems there are and fixing them.
  5. If the culture changed in the ways I’m suggesting, I think that would organically be good for growth, too.

“PR mentality”

Post-FTX, I think core EA adopted a “PR mentality” that (i) has been a failure on its own terms and (ii) is corrosive to EA’s soul. 

By “PR mentality” I mean thinking about communications through the lens of “what is good for EA’s brand?” instead of focusing on questions like “what ideas are true, interesting, important, under-appreciated, and how can we get those ideas out there?”[11]

I understand this as a reaction in the immediate aftermath of FTX — that was an extremely difficult time, and I don’t claim to know what the right calls were in that period. But it seems to me like a PR-focused mentality has lingered. 

I think this mentality has been a failure on its own terms because… well, there’s been a lot of talk about improving the EA brand over the last couple of years, and what have we had to show for it? I hate to be harsh, but I think the main effects have just been a withering of EA discourse online and more people coming to believe that EA is a legacy movement.

This also matches my experience of interacting with “PR experts”: I generally find they add little, but do make communication more PR-y, in a way that’s a turn-off to almost everyone. I think the standard PR approach can work if you’re a megacorp or a politician, but we are neither of those things.

And I think this mentality is corrosive to EA’s soul because as soon as you stop being ruthlessly focused on actually figuring out what’s true, then you’ll almost certainly believe the wrong things and focus on the wrong things, and lose out on most impact. Given fat-tailed distributions of impact, getting your focus a bit wrong can mean you do 10x less good than you could have done. Worse, you can easily end up having a negative rather than a positive effect. 

And this becomes particularly true in the age of AGI. Again, we should expect enormous AI-driven change and AI-driven intellectual insights (and AI-driven propaganda); without an intense focus on figuring things out, we’ll miss the changes or insights that should cause us to change our minds, or we’ll be unwilling to enter areas outside the current Overton window.

Here’s a different perspective: 

  1. EA is, intrinsically, a controversial movement — because it’s saying things that are not popular (there isn’t value in promoting ideas that are universally endorsed because you won’t change anyone’s mind!), and because its commitment to actually-believing the truth means it will regularly clash with whatever the dominant intellectual ideology of the time is. 
  2. In the past, there was a hope that with careful brand management, we could be well-liked by almost everyone.
  3. Given events of the last few years (I’m not just thinking of FTX but also leftwing backlash to billionaire association, rightwing backlash to SB1047, tech backlash to the firing of Sam Altman), and given the intrinsically-negative-and-polarising nature of modern media, that ship has sailed.
  4. But this is a liberating fact. It means we don’t need to constrain ourselves with PR mentality — we’ll be controversial whatever we do, so the costs of additional controversy are much lower. Instead, we can just focus on making arguments about things we think are true and important. Think Peter Singer! I also think the “vibe shift” is real, and mitigates much of the potential downside from controversy.

What I’m not saying

In earlier drafts, I found that people sometimes misunderstood me, taking me to have a more extreme position than I really hold. So here’s a quick set of clarifications (feel free to skip):

Are you saying we should go ALL IN on AGI preparedness?

No! I still think people should figure out for themselves where they think they’ll have the most impact, and probably lots of people will disagree with me, and that’s great.

There’s also still a reasonable chance that we don’t get to better-than-human AI within the next ten years, even after the fraction of the economy dedicated to AI has scaled up by as much as it feasibly can. If so, then the per-year chance of getting to better-than-human AI will go down a lot (because we’re no longer getting the unusually rapid progress from investment scale-up), and timelines will probably become a lot longer. The ideal EA movement is robust to this scenario, and even in my own case I’m thinking about my current focus as a next-five-years thing, after which I’ll reassess depending on the state of the world.

Shouldn’t we instead be shifting AI safety local groups (etc) to include these other areas?

Yes, that too!

Aren’t timelines short? And doesn’t all this other stuff only pay off in long timelines worlds?

I think it’s true that this stuff pays off less well in very short (e.g. 3-year) timelines worlds. But it still pays off to some degree, and pays off comparatively more in longer-timeline and slower-takeoff worlds, and we should care a lot about them too.

But aren’t very short timelines the highest-leverage worlds?

This is not at all obvious to me, for a few reasons:

Is EA really the right movement for these areas?

It’s not the ideal movement (i.e. not what we’d design from scratch), but it’s the closest thing we’ve got, and I think setting up some wholly new movement is less promising than EA as a whole evolving.

Are you saying that EA should just become an intellectual club? What about building things!?

Definitely not - let’s build, too! 

Are you saying that EA should completely stop focusing on growth?

No! It’s more like: at the moment there’s quite a lot of focus on growth. That’s great. But there seems to be almost no attention on cultivation, even though that seems especially important right now, and that’s a shame.

What if I don’t buy longtermism?

Given how rapid the transition will be, and the scale of social and economic transformation that will come about, I actually think longtermism is not all that cruxy, at least as long as you’ve got a common-sense amount of concern for future generations.

But even if that means you’re not into these areas, that’s fine! I think EA should involve lots of different cause areas, just as a healthy well-functioning democracy has people with a lot of different opinions: you should figure out what worldview you buy, and act on that basis.

What to do?

I’ll acknowledge that I’ve spent more time thinking about the problems I’m pointing to, and the broad path forward, than I have about particular solutions, so I’m not pretending that I have all the answers. I’m also lacking a lot of boots-on-the-ground context. But I hope at least we can start discussing whether the broad vision is on point, and what we could concretely do to help push in this direction. So, to get that going, here are some pretty warm takes.

Local groups

IIUC, there’s been a shift on college campuses from EA uni groups to AI safety groups. I don’t know the details of local groups, and I expect this view to be more controversial than my other claims, but I think this is probably an error, at least in extent. 

The first part of my reasoning I’ve already given — the general arguments for focusing on non-safety non-bio AGI preparedness interventions.

But I think these arguments bite particularly hard for uni groups, for two reasons: 

  1. Uni groups have a delayed impact, and this favours these other cause areas.
    1. Let’s say the median EA uni group member is 20.
    2. And, for almost all of them, it takes 3 years before they start making meaningful contributions to AI safety.
    3. So, at best, they are starting to contribute in 2028.
      1. We get a few years of work in my median timeline worlds.
      2. And almost no work in shorter-timeline worlds, where additional technical AI safety folks are most needed.
    4. In contrast, many of these other areas (i) continue to be relevant even after the point of no return with respect to alignment, and (ii) become comparatively more important in longer-timeline and slower-takeoff worlds (because the probability of misaligned takeover goes down in those worlds).
    5. (I think this argument isn’t totally slam-dunk yet, but will get stronger over the next couple of years.)
  2. College is an unusual time of life when you’re open to big new weird ideas and can take the time to go deep into them. This means uni groups provide an unusually good way to create a pipeline for these other areas, which are often further outside the Overton window, and which particularly reward having a very deep strategic understanding.

The best counterargument I’ve heard is that it’s currently boom-time for AI safety field-building. AI safety student groups get to ride this wave, and would miss out on it if there were an EA group instead.

This seems like a strong counterargument to me, so my all-things-considered view will depend on the details of the local group. My best guess is that, where possible: (i) AI safety groups should incorporate more focus on these other areas,[12] and (ii) there should be both AI safety and EA groups, with a bunch of shared events on “AGI preparedness” topics.

Online

Some things we could do here include:

Conferences

Conclusion

EA is about taking the question "how can I do the most good?" seriously, and following the arguments and evidence wherever they lead. I claim that, if we’re serious about this project, then developments in the last few years should drive a major evolution in our focus.

I think it would be a terrible shame, and a huge loss of potential, if people came to see EA as little more than a bootloader for the AI safety movement, or if EA ossified into a social movement focused on a fixed handful of causes. Instead, we could be a movement that's grappling with the full range of challenges and opportunities that advanced AI will bring, doing so with the intellectual vitality and seriousness of purpose that is at EA’s core.

 

—— Thanks to Joe Carlsmith, Ajeya Cotra, Owen Cotton-Barratt, Max Dalton, Max Daniel, Tom Davidson, Lukas Finnveden, Ryan Greenblatt, Rose Hadshar, Oscar Howie, Kuhan Jeyapragasan, Arden Koehler, Amy Labenz, Fin Moorhouse, Toby Ord, Caleb Parikh, Zach Robinson, Abie Rohrig, Buck Shlegeris, Lizka Vaintrob, and everyone at the Meta Coordination Forum.

  1. ^

     I’ve started thinking about the present time as the “age of AGI”: the period in which we have fairly general-purpose AI reasoning systems. I think of GPT-4 as the first very weak AGI, ushering in this age. (Of course, any dividing line will have a lot of arbitrariness, and my preferred definition of “full” AGI — a model that can do almost any cognitive task as well as an expert human at lower cost, and that can learn as sample-efficiently as an expert human — is a much higher bar.)

  2. ^

    The positively-valenced statement of this is something like: “EA helped us find out how crucial AI would be about 10 years before everyone else saw it, which was a very useful head start, but we no longer need the exploratory tools of EA as we've found the key thing of our time and can just work on it.”

  3. ^

    This is similar to what Ben West called the “Forefront of weirdness” option for “Third wave EA”.

  4. ^

    And note that some (like AI for better reasoning, decision-making and coordination) are cross-cutting, in that work in the area can help with many other cause areas.

  5. ^

    I.e. What should be in the model spec? How should AI behave in the countless different situations it finds itself in? To what extent should we be trying to create pure instruction-following AI (with refusals for harmful content) vs AI that has its own virtuous character? 

  6. ^

    See also Carl Shulman’s excellent podcasts on 80,000 Hours here and here.

  7. ^

    In June 2022, Claire Zabel wrote a post, “EA and Longtermism: not a crux for saving the world”, and said:

    I think that recruiting and talent pipeline work done by EAs who currently prioritize x-risk reduction (“we” or “us” in this post, though I know it won’t apply to all readers) should put more emphasis on ideas related to existential risk, the advent of transformative technology, and the ‘most important century’ hypothesis, and less emphasis on effective altruism and longtermism, in the course of their outreach.

    This may have been a good recommendation at the time, but in the last three years the pendulum has swung heavily the other way, sped along by the one-two punch of the FTX collapse and the explosion of interest and progress in AI; in my view it has swung too far.

  8. ^

    Being principles-first is even compatible with most focus going on some particular area of the world, or some particular bet. Y Combinator provides a good analogy. YC is “cause neutral” in the sense that it wants to admit whichever companies are expected to make the most money, whatever sector they are working in. But recently something like 90% of YC companies have been AI-focused — because that’s where the most expected revenue is. (The only primary source I could find is this, which says “over 50%” for the Winter 2024 batch.)

    That said, I think it would be a mistake if everyone in EA were all-in on an AI-centric worldview.

  9. ^

    As AI becomes a progressively bigger deal, affecting all aspects of the world, that attitude would be a surefire recipe for becoming irrelevant. 

  10. ^

    You can really have fun (if you’re into that sort of thing) porting over and adapting growth models of the economy to EA. 

    You could start off thinking in terms of a Cobb-Douglas production function: 

     V = A K^α L^(1−α)

    where K is capital (i.e. how much EA-aligned money there is), L is labour, and A is EA’s culture, institutions and knowledge. At least for existential risk reduction or better futures work, producing value seems more labour-intensive than capital-intensive, so α < 0.5.

    But, probably capital and labour are less substitutable than this (it’s hard to replace EA labour with money), so you’d want a CES production function:

    V = A (αK^ρ + (1−α)L^ρ)^(1/ρ)

    with ρ < 0.

    But, at least past some size, EA clearly demonstrates decreasing returns to scale, as we pluck the lowest-hanging fruit. So we could incorporate this too:

    V = A (αK^ρ + (1−α)L^ρ)^(υ/ρ)

    with υ < 1.

    In the language of this model, part of what I’m saying in the main text is that (i) as K and L increase (which they are doing), the comparative value of increasing A increases by a lot; and (ii) there seem to me to be some easy wins for increasing A.

    I’ll caveat that I’m not an economist, so really I’m hoping that Cunningham’s law will kick in and Phil Trammell or someone will provide a better model. For example, maybe ideally you’d want returns to scale to be logistic, since you get increasing returns to scale to begin with but value ultimately plateaus. And you’d really want a dynamic model that could represent, for example, the effect of investing some L in increasing A, e.g. borrowing from semi-endogenous growth models.
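    To make the comparative statics concrete, here’s a minimal numerical sketch of the toy model above. The parameter values for α, ρ, and υ are made-up assumptions, chosen purely for illustration, not estimates: the point is just that as the inputs K and L scale up, a fixed improvement in A comes to dominate a fixed increment of capital.

```python
# A rough numerical sketch of the toy CES model above.
# The parameter values (alpha, rho, upsilon) are illustrative assumptions only.

def value(A, K, L, alpha=0.3, rho=-1.0, upsilon=0.8):
    """CES aggregate of capital K and labour L, scaled by A (culture,
    institutions, knowledge), with decreasing returns to scale (upsilon < 1)."""
    ces = alpha * K**rho + (1 - alpha) * L**rho
    return A * ces**(upsilon / rho)

for K, L in [(10, 10), (100, 100)]:
    base = value(1.0, K, L)
    gain_from_A = value(1.1, K, L) - base      # a fixed bump to culture (dA = 0.1)
    gain_from_K = value(1.0, K + 1, L) - base  # one more unit of capital (dK = 1)
    print(f"K={K:>3}, L={L:>3}:  dV from dA=0.1: {gain_from_A:.2f}   "
          f"dV from dK=1: {gain_from_K:.2f}")
```

    With these made-up numbers, the gain from the fixed bump to A grows with the overall scale of the inputs, while the gain from one more unit of K shrinks as K grows; that’s the sense in which growing inputs makes improving A comparatively more valuable.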

  11. ^

    Some things I don’t mean by the truth-oriented mindset:

    • “Facts don’t care about your feelings”-style contrarianism, that aims to deliberately provoke just for the sake of it.
    • Not paying attention to how messages are worded. In my view, a truth-oriented mindset is totally compatible with, for example, thinking about what reactions or counterarguments different phrasings might have on the recipient and choosing wordings with that in mind — aiming to treat the other person with empathy and respect, to ward off misconceptions early, and to put ideas in their best light.
  12. ^

    I was very happy to see that BlueDot has an “AGI strategy” course, and has incorporated AI-enabled coups into its reading list. But I think it could go a lot further in the direction I’m suggesting.


David_Moss @ 2025-10-10T13:21 (+14)

Post-FTX, I think core EA adopted a “PR mentality” that (i) has been a failure on its own terms and (ii) is corrosive to EA’s soul. 

 

I find it helpful to distinguish two things, one which I think EA is doing too much of and one which EA is doing too little of:

Henry Stanley 🔸 @ 2025-10-10T15:07 (+7)

It’s not the ideal movement (i.e. not what we’d design from scratch), but it’s the closest we’ve got

Interested to hear what such a movement would look like if you were building it from scratch.

David_Moss @ 2025-10-10T12:56 (+4)

Currently, the online EA ecosystem doesn’t feel like a place full of exciting new ideas, in a way that’s attractive to smart and ambitious people

 

This may be partly related to the fact that EA is doing relatively little cause and cross-cause prioritisation these days (though, since we posted this, GPI has wound down and Forethought has spun up). 

People may still be doing within-cause, intervention-level prioritisation (which is important), but this may be unlikely to generate new, exciting ideas, since it assumes causes, and works only within them, is often narrow and technical (e.g. comparing slaughter methods), and is often fundamentally unsystematic or inaccessible (e.g. how do I, a grantmaker, feel about these founders?).

spra 🔸 @ 2025-10-10T17:57 (+3)

IMO one way in which EA is very important to AI Safety is in cause prioritization between research directions. For example, there's still a lot of money + effort (e.g. GDM + Anthropic safety teams) going towards mech interp research despite serious questions about whether it will help us meaningfully decrease x-risk. I think there's a lot of people who do some cause prioritization, come to the conclusion that they should work on AI Safety, and then stop doing cause prio there. I think that more people even crudely applying the scale, tractability, neglectedness framework to AI Safety research directions would go a long way for increasing the effectiveness of the field at decreasing x-risk.

Peter @ 2025-10-10T18:13 (+1)

I think this is an interesting vision to reinvigorate things and do kind of feel sometimes "principles first" has been conflated with just "classic EA causes."

To me, "PR speak" =/= clear effective communication. I think the lack of a clear, coherent message is most of what bothers people, especially during and after a crisis. Without that, it's hard to talk to different people and meet them where they're at. It's not clear to me what the takeaways were or if anyone learned anything. 

I feel like "figuring out how to choose leaders and build institutions effectively" is really neglected and it's kind of shocking there doesn't seem to be much focus here. A lingering question for me has been "Why can't we be more effective in who we trust?" and the usual objections sort of just seem like "it's hard." But so is AI safety, biorisk, post-AGI prep, etc... so that doesn't seem super satisfying.