Which side of the AI safety community are you in?

By Greg_Colbourn ⏸️ @ 2025-10-23T14:23 (+9)

This is a linkpost to https://www.lesswrong.com/posts/zmtqmwetKH4nrxXcE/which-side-of-the-ai-safety-community-are-you-in

With dismay, I have to conclude that the bulk of EA is in Camp A, "Race to superintelligence 'safely'". I'd love to see some examples where this isn't the case (please share in the comments), but at least the vast majority of money, power and influence in EA seems to be on the wrong side of history here.

I no longer really think of myself as an EA, for this reason. I anticipate downvotes, and pushback along the lines of "we can't be certain of extinction", but I have yet to see an actually good argument for thinking p(doom|AGI) is low, or a convincing rebuttal of If Anyone Builds It, Everyone Dies.

[LessWrong post by Max Tegmark:] In recent years, I’ve found that people who self-identify as members of the AI safety community have increasingly split into two camps:

Camp A) “Race to superintelligence safely”: People in this group typically argue that “superintelligence is inevitable because of X”, and that it's therefore better that their in-group (their company or country) build it first. X is typically some combination of “Capitalism”, “Moloch”, “lack of regulation” and “China”.

Camp B) “Don’t race to superintelligence”: People in this group typically argue that “racing to superintelligence is bad because of Y”. Here Y is typically some combination of “uncontrollable”, “1984”, “disempowerment” and “extinction”.

Whereas the 2023 extinction statement was widely signed by both Camp B and Camp A (including Dario Amodei, Demis Hassabis and Sam Altman), the 2025 superintelligence statement conveniently separates the two groups – for example, I personally invited all US frontier AI CEOs to sign, and none chose to do so. However, it would be an oversimplification to claim that frontier AI corporate funding predicts camp membership – for example, someone from one of the top companies recently told me that he'd sign the 2025 statement were it not for fear of how it would impact him professionally.

The distinction between Camps A and B is also interesting because it correlates with policy recommendations: Camp A tends to support corporate self-regulation and voluntary commitments, without strong and legally binding safety standards akin to those in force for pharmaceuticals, aircraft, restaurants and most other industries. Camp B, in contrast, tends to support such binding standards, akin to those of the FDA (which can be viewed as a strict ban on releasing medicines that haven't yet undergone clinical trials and been safety-approved by independent experts). Combined with market forces, this would naturally lead to powerful yet controllable new AI tools that do science, cure diseases, increase productivity and even pursue economic and military dominance if that's desired, but not to full superintelligence until it can be shown to meet the agreed-upon safety standards (and it remains controversial whether that is even possible).

In my experience, most people (including top decision-makers) are currently unaware of the distinction between A and B and have an oversimplified view: You’re either for AI or against it. I’m often asked: “Do you want to accelerate or decelerate? Are you a boomer or a doomer?” To facilitate a meaningful and constructive societal conversation about AI policy, I believe that it will be hugely helpful to increase public awareness of the differing visions of Camps A and B. Creating such awareness was a key goal of the 2025 superintelligence statement. So if you’ve read this far, I’d strongly encourage you to read it and, if you agree with it, sign it and share it. If you work for a company and worry about blowback from signing, please email me at mtegmark@gmail.com and say "I'll sign this if N others from my company do", where N=5, 10 or whatever number you're comfortable with. 

Finally, please let me provide an important clarification about the 2025 statement. Many have asked me why it doesn't define its terms as carefully as a law would require. Our idea is that detailed questions about how to word laws and safety standards should be tackled later, once the political will has formed to ban unsafe/unwanted superintelligence. This is analogous to how the detailed wording of laws against child pornography (who counts as a child, what counts as pornography, etc.) was worked out by experts and legislators only after there was broad agreement that some sort of ban was needed.


Denkenberger🔸 @ 2025-10-24T01:28 (+3)

With dismay, I have to conclude that the bulk of EA is in Camp A, "Race to superintelligence 'safely'". I'd love to see some examples where this isn't the case (please share in the comments), but at least the vast majority of money, power and influence in EA seems to be on the wrong side of history here.


I made a poll to get at what the typical EA Forum user thinks, though that is not necessarily representative of where the money, power, or influence lies. My takeaway was:

"For the big picture, it received 39 votes. 13% want AGI never to be built, 26% said to pause AI now in some form, and another 21% would like to pause AI if there is a particular event/threshold. 31% want some other regulation, 5% are neutral and 5% want to accelerate AI in a safe US lab. So if I had to summarize the median respondent, it would be strong regulation for AI or pause if a particular event/threshold is met. There appears to be more evidence for the claim that EA wants AI to be paused/stopped than for the claim that EA wants AI to be accelerated."

Greg_Colbourn ⏸️ @ 2025-10-27T12:58 (+2)

That's good to see, but the money, power and influence are what's critical here[1], and those seem to be far too corrupted by investments in Anthropic, or just plain wishful techno-utopian thinking.

  1. ^

    The poll respondents are not representative of EA's money, power and influence: no one representing OpenPhil, CEA or 80k responded, nor any large donors, and only one top-25 karma account.