Polls on De/Accelerating AI
By Denkenberger🔸 @ 2025-08-09T02:01 (+29)
Recently, people on both ends of the de/accelerating AI spectrum have been claiming that EAs are on the opposite end. So I think it would be helpful to have a poll to get a better idea of where EAs stand. I think it's useful to have some actual descriptions of the different positions, though it's probably not possible to have these make sense as an ordering for everyone. So you may need to do some averaging to get your location on the spectrum.
Poll on big picture AI de/acceleration (1 is on left, 21 is on right)
- Accelerate ASI everywhere (subsidy, no regulations)
- Accelerate AGI everywhere (subsidy, no regulations)
- Accelerate ASI in less safe lab in US (subsidy, no regulations)
- Accelerate AGI in less safe lab in US (subsidy, no regulations)
- Accelerate ASI in safer lab in US (subsidy, no regulations)
- Accelerate AGI in safer lab in US (subsidy, no regulations)
- Neutral (no regulations, no subsidy)
- Responsible scaling policy or similar
- SB-1047 (liability, etc)
- Pause AI if AI is greatly accelerating the progress on AI (e.g. 10x)
- Ban training above a certain size
- Ban a certain level of autonomous code writing
- Ban AI agents
- Pause AI if it causes a major disaster (e.g. like Chernobyl)
- Restrict access to AI to few people (like nuclear)
- Make AI progress very slow (heavily regulate it)
- Pause AI if there is mass unemployment (say >20%)
- Pause AI now if it is done globally
- Pause AI now unilaterally (one country)
- Shut AI down for decades until something changes radically, such as genetic enhancement of intelligence
- Never build AGI (Stop AI)
Poll on personal action AI de/acceleration (1 is on left, 11 is on right)
- Ok to be a capabilities employee at a less safe lab (direct impact)
- Ok to be a capabilities employee at a safer lab (direct impact)
- Ok to be a capabilities employee at a less safe lab for career capital/donations
- Ok to be a capabilities employee at a safer lab for career capital/donations
- Ok to be a safety employee at a less safe lab
- Ok to be a safety employee at a safer lab
- Ok to invest in AI companies
- Ok to pay for AI (but not to invest)
- Ok to use free AI (but not to pay for AI, or you need to offset your payment for AI)
- Not ok to use AI
- Only donating/working on pausing AI is ok
Update: Now that voting has closed, I thought I would summarize some of the results. Obviously, this is a tiny subset of EAs, so there could be large sampling bias, and there may be some averaging of positions, so no conclusion here is very confident.
The big picture poll received 39 votes. 13% want AGI never to be built, 26% said to pause AI now in some form, and another 21% would like to pause AI if a particular event/threshold occurs. 31% want some other regulation, 5% are neutral, and 5% want to accelerate AI in a safer US lab. So if I had to summarize the median respondent's position, it would be strong regulation of AI, or a pause if a particular event/threshold is met. There appears to be more evidence for the claim that EA wants AI to be paused/stopped than for the claim that EA wants AI to be accelerated.
The personal action poll received 27 votes. 67-74% thought it was okay to work at an AI lab in some capacity, depending on how you interpret votes that fall between defined positions. About half of these were okay with the work being in capabilities, whereas the other half said it should be in safety. 26% appear to say it is not okay to invest in AI, about 15% that it is not okay to pay for AI, and about 7% that it is not okay to use AI at all. So the median EA likely thinks it is okay to do AI safety work in the labs. It appears that EAs consider personal actions that accelerate AI more acceptable than big picture actions to do the same, but this could be because the personal question was phrased as what is permissible, whereas the big picture question was phrased as what would be best to do.
Matthew_Barnett @ 2025-08-10T02:51 (+5)
The current results show that I'm the most favorable to accelerating AI out of everyone who voted so far. I voted for "no regulations, no subsidy" and "Ok to be a capabilities employee at a less safe lab".
However, I should clarify that I only support laissez-faire policy for AI development as a temporary state of affairs, rather than as a permanent policy recommendation. This is because the overall impact and risks of existing AI systems are comparable to, or less than, those of technologies like smartphones, which I also favor remaining basically unregulated. But I expect future AI capabilities will be greater.
After AI agents get significantly better, my favored proposals to manage AI risks are to implement liability regimes (perhaps modeled after Gabriel Weil's proposals) and to grant AIs economic rights (such as a right to own property, enter contracts, make tort claims, etc.). Other than these proposals, I don't see any obvious policies that I'd support that would slow down AI development -- and in practice, I'm already worried these policies would go too far in constraining AI's potential.
Arepo @ 2025-08-14T04:59 (+2)
Weakly in favour of moderate regulation, though I think within the EA movement the extinction case is much overstated, the potential benefits are understated, and I can imagine regulatory efforts backfiring in any number of ways.
Having said that, after I voted I realised there are a number of conditional actions lower down in the poll that I would be sympathetic to, given the conditions (e.g. a pause if there is mass unemployment or a major disaster).
Jason @ 2025-08-09T07:58 (+2)
Individuals should not accelerate AI
6 "Ok to be a safety employee at a safer lab", but not 7 "OK to invest to invest in AI companies", unsure on 5 "Ok to be a safety employee at a less safe lab"
[I note that the "OK to . . ." wording and "should not" make this a question about ethical permissibility, not necessarily what would be the best thing to do]
Jason @ 2025-08-09T07:56 (+2)
We should slow AI down
Closest to 16 "Make AI progress very slow (heavily regulate it)" in theory
Denkenberger🔸 @ 2025-08-09T02:14 (+2)
Individuals should not accelerate AI
Though there is risk of corruption of values, I think the counterfactual impact of a safety-oriented person joining a less safe lab to do safety work is net positive.
Denkenberger🔸 @ 2025-08-09T02:08 (+2)
We should slow AI down
I think we should weigh reducing AI risk by slowing it down against other continuing sources of X-risk. I'm also concerned about a pause becoming permanent, or increasing risk when unpaused, or only getting one chance to pause. However, if AI progress is much faster than now, I think a pause could increase the expected value of the long-run future.
MichaelDickens @ 2025-08-09T16:50 (+4)
I think it is very unclear whether building AI would decrease or increase non-AI risks.
My guess is that a decentralized / tool AI would increase non-AI x-risk by e.g. making it easier to build biological weapons, and a world government / totalizing ASI would, conditional on not killing everyone, decrease x-risk.
Denkenberger🔸 @ 2025-08-09T21:28 (+8)
I think that in the build-up to ASI, nuclear and pandemic risks would increase, but afterwards they would likely be solved. So let's assume someone is trying to minimize existential risk overall. If one eventually wants ASI (or thinks it is inevitable), the question is when building it is optimal. If one thinks that the background existential risk not caused by AI is 0.1% per year, and the existential risk from AI is 10% if developed now, then the question becomes, "How much does existential risk from AI decrease by delaying it?" A decade of delay adds roughly 1% of background risk, so if one thinks that we can get the existential risk from AI to less than 9% within a decade, it would make sense to delay. Otherwise it would not.
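A minimal sketch of the break-even arithmetic behind those numbers (the linear approximation of cumulative background risk and the symbols are mine, not a precise model):

$$
\underbrace{p_{\text{delayed}}}_{\text{AI risk if built after delay}} \;+\; \underbrace{r \cdot T}_{0.1\%/\text{yr}\,\times\,10\,\text{yr}\,\approx\,1\%} \;<\; \underbrace{p_{\text{now}}}_{10\%} \quad\Longleftrightarrow\quad p_{\text{delayed}} < 9\%
$$

So under these illustrative numbers, a decade of delay is worth it only if it buys more than about one percentage point of reduction in AI existential risk.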
Arepo @ 2025-08-14T05:08 (+2)
Increase relative to what counterfactual? I think it might be true both that the annual risk of a bad event goes up because of AI, and that all-time risk decreases (on the assumption that we're basically approaching a hurdle we have to pass anyway; I'm also highly sceptical that we gain much in practice by implementing forceful procedures that slow us down from getting there).
Jamie Green @ 2025-08-14T11:39 (+1)
I still believe it is net beneficial
bhrdwj🔸 @ 2025-08-11T19:15 (+1)
As long as you're moving things in a good direction, use your judgement. Working at a less safe lab and then whistleblowing could be a path, for instance.
bhrdwj🔸 @ 2025-08-11T19:13 (+1)
We absolutely should slow AI down at least somewhat, versus the "ai.gov" policy. The challenge is how to coordinate it. My maxed-out agree vote is not meant to emphasize a total shutdown, but to emphasize the criticality of enough slowdown and good-enough coordination.
Søren Elverlin @ 2025-08-09T18:46 (+1)
We should slow AI down
Otherwise I expect AI will kill us
qwertyops900 @ 2025-08-09T04:30 (+1)
We should slow AI down
I think anywhere between 9 (8?) and 15 is acceptable. To me, AI seems to have tremendous potential to alleviate suffering if used properly. At the same time, basically every sign you could possibly look at tells us we're dealing with something that's potentially immensely dangerous. If you buy into longtermist philosophy at all, as I do, that implies a certain duty to ensure safety. I kinda don't like going above 11, as it starts to feel authoritarian, which I have a very strong aversion to, but it seems non-deontologically correct.