Three polls: on timelines and cause prio

By Toby Tremlett🔹 @ 2025-04-28T12:03 (+30)

Below are a few polls which I've considered running as debate weeks, but thought better of (for now at least). 

Timelines

AGI by 2028 is more likely than not

I didn't run this as a debate week because I figured that the debate slider tool isn't the ideal way to map out a forecast. 

However, I still think it's an interesting temperature check to run on the community, especially with the publication of AI 2027. For the purposes of this poll, we can use the criteria from this Metaculus poll.

Also it's no crime to vote based on vibes, leave a comment, and change your mind later. 

Bioweapons

Bioweapons are an existential risk

Obviously, bioweapons pose a catastrophic risk. But can they be existential? I buy the Parfitian argument that we should disvalue extinction far more than catastrophe (and this extends somewhat to other states nearby in value to extinction). But I'm unsure how seriously I should take bio-risks compared to other putative existential risks. 


Strong longtermism

I wonder where people land on this now that we talk about longtermism less. As a reminder, strong longtermism is the view that "the most important feature of our actions today is their impact on the far future". 

A summary of Greaves and MacAskill's paper on the view is here.

Consequentialists should be strong longtermists

Henry Howard🔸 @ 2025-04-28T14:02 (+11)

Consequentialists should be strong longtermists

Disagree on the basis of cluelessness. 

Uncertainty about how to reliably affect the long-term future is much worse than uncertainty over our effects on the near term.

I find the Hilary Greaves argument that neartermist interventions are just as unpredictable as longtermist interventions unconvincing, because you could apply the same reasoning to treating a sick person (maybe they'll go on to cause disaster), or getting out of bed in the morning (maybe I'll go on to cause disaster). This paralysis is not tenable.

Harrison 🔸 @ 2025-04-28T22:52 (+7)

Bioweapons are an existential risk

MichaelDickens @ 2025-04-28T14:18 (+6)

AGI by 2028 is more likely than not

This question is already probabilistic, so arguably I should put my vote all the way on the "disagree" side, because I don't think it's more likely than not.

But I also don't think it's that far from a 50% chance either—maybe 40% although I don't have a strong belief. So my answer is that I weakly disagree.

calebp @ 2025-04-28T19:09 (+4)

Did you look at the Metaculus resolution criteria? They seem extremely weak to me; I'd be interested to know which criteria you think o3 (or whatever the best OAI model is) is furthest away from.

MichaelDickens @ 2025-04-28T20:58 (+2)

To be honest I did not read the post; I just looked at the poll questions. I was thinking of AGI in the way I would define it*, or as the other big Metaculus AGI question defines it. For the "weakly general AI" question, yeah, I think a 50% chance is fair, maybe even higher than 50%.

*I don't have a precise definition, but I think of it as an AI that can do pretty much any intellectual task that an average human can do.

calebp @ 2025-04-28T20:59 (+2)

Yeah that’s fair. I’m a lot more bullish on getting AI systems that satisfy the linked question’s definition than my own one.

calebp @ 2025-04-28T19:06 (+4)

AGI by 2028 is more likely than not


Most of my uncertainty is from potentially not understanding the criteria. They seem extremely weak to me:

  • Able to reliably pass a Turing test of the type that would win the Loebner Silver Prize.
  • Able to score 90% or more on a robust version of the Winograd Schema Challenge, e.g. the "Winogrande" challenge or comparable data set for which human performance is at 90+%
  • Be able to score 75th percentile (as compared to the corresponding year's human students; this was a score of 600 in 2016) on all the full mathematics section of a circa-2015-2020 standard SAT exam, using just images of the exam pages.
  • Be able to learn the classic Atari game "Montezuma's revenge" (based on just visual inputs and standard controls) and explore all 24 rooms based on the equivalent of less than 100 hours of real-time play (see closely-related question.)


I wouldn't be surprised if we've already passed this.

emre kaplan🔸 @ 2025-04-29T19:52 (+2)

I don't think the current systems are able to pass the Turing test yet. Quoting from Metaculus admins:

"Given evidence from previous Loebner prize transcripts – specifically that the chatbots were asked Winograd schema questions – we interpret the Loebner silver criteria to be an adversarial test conducted by reasonably well informed judges, as opposed to one featuring judges with no or very little domain knowledge."

calebp @ 2025-04-29T20:15 (+2)

I'd bet that current models with less than $100,000 of post-training enhancements achieve median human performance on this task.

Seems plausible the Metaculus judges would agree, especially given that that comment is quite old.

peterbarnett @ 2025-04-30T00:02 (+3)

AGI by 2028 is more likely than not

Look at the resolution criteria, which is based on the specific Metaculus question; it seems like a very low bar.

Sharmake @ 2025-05-03T16:03 (+2)

Consequentialists should be strong longtermists


I disagree, mostly due to the 'should' wording: believing in consequentialism doesn't obligate you to have any particular discount rate or any particular discount function. These are basically free parameters, so discount rates are independent of consequentialism.

Sharmake @ 2025-05-03T16:01 (+2)

Bioweapons are an existential risk


I'll just repeat @weeatquince's comment, since he already covered the issue better than I did:

With current technology probably not an x-risk. With future technology I don’t think we can rule out the possibility of bio-sciences reaching the point where extinction is possible. It is a very rapidly evolving field with huge potential.

weeatquince @ 2025-05-03T12:25 (+2)

Bioweapons are an existential risk.

With current technology probably not an x-risk. With future technology I don’t think we can rule out the possibility of bio-sciences reaching the point where extinction is possible. It is a very rapidly evolving field with huge potential.

Neel Nanda @ 2025-04-30T20:56 (+2)

Consequentialists should be strong longtermists

I'm skeptical of Pascal's Muggings

Neel Nanda @ 2025-04-30T20:55 (+2)

Bioweapons are an existential risk

If this includes AI-created/enhanced bioweapons it seems plausible; without that I'm much less sure, though if there's another few decades of synthetic biology progress but no AGI, it seems plausible too.

akash 🔸 @ 2025-04-30T20:33 (+2)

AGI by 2028 is more likely than not

I hope to write about this at length once school ends, but in short, here are the two core reasons I feel AGI in three years is quite implausible:
 

 

  1. ^

    As Beth Barnes put it, their latest benchmark specifically shows that "there's an exponential trend with doubling time between ~2-12 months on automatically-scoreable, relatively clean + green-field software tasks from a few distributions." Real-world tasks rarely have such clean feedback loops; see Section 6 of METR's RE-bench paper for a thorough list of drawbacks and limitations.

Sharmake @ 2025-04-29T15:38 (+2)

AGI by 2028 is more likely than not


While I think AGI by 2028 is reasonably plausible, I think that there are way too many factors that have to go right in order to get AGI by 2028, and this is true even if AI timelines are short.


To be clear, I do agree that if we don't get AGI by the early 2030s at the latest, AI progress will slow down, but I don't have nearly enough credence in the supporting arguments to have my median be in 2028.

Peter Wildeford @ 2025-04-28T20:16 (+2)

AGI by 2028 is more likely than not


I think it's 20% likely based on the model I made.

Ozzie Gooen @ 2025-04-28T20:07 (+2)

AGI by 2028 is more likely than not

calebp @ 2025-04-28T18:56 (+2)

Bioweapons are an existential risk


Note that imo almost all the x-risk from bio routes through AI, and is better thought of as an AI-risk threat model.

MichaelDickens @ 2025-04-28T14:20 (+2)

Bioweapons are an existential risk

Not sure how to interpret this question, but the interpretation that comes to mind is "there is some risk that bioweapons cause extinction", in other words "there is a non-infinitesimal probability that bioweapons cause extinction", in which case yes, that is certainly true.

Or, a slightly stronger interpretation could be "the risk from bioweapons is at least as large as the risk from asteroids", which I am also pretty confident is true.

Toby Tremlett🔹 @ 2025-04-29T13:27 (+2)

However people interpret the question is how we should discuss it, but when I was writing it, I was wondering whether bioweapons can cause extinction/existential catastrophe per se. I.e., can bioweapons either:
a) kill everyone, or
b) kill enough of the population, forever, such that we can never achieve much as a species?
I'm not sure about the feasibility of either. 

Will Aldred @ 2025-04-29T15:06 (+2)

It seems like I interpreted this question pretty differently to Michael (and, judging by the votes, to most other people). With the benefit of hindsight, it probably would have been helpful to define what percentage risk the midpoint (between agree and disagree) corresponds to?[1] Sounds like Michael was taking it to mean ‘literally zero risk’ or ‘1 in 1 million,’ whereas I was taking it to mean 1 in 30 (to correspond to Ord’s Precipice estimate for pandemic x-risk).

(Also, for what it’s worth, for my vote I’m excluding scenarios where a misaligned AI leverages bioweapons—I count that under AI risk. (But I am including scenarios where humans misuse AI to build bioweapons.) I would guess that different voters are dealing with this AI-bio entanglement in different ways.)

  1. ^

    Though I appreciate that it was better to run the poll as is than to let details like this stop you from running it at all.

Toby Tremlett🔹 @ 2025-04-29T15:23 (+2)

This is helpful. If this was actually for a debate week, I'd have made it 'more than 5% extinction risk this century' and (maybe) excluded risks from AI.

abrahamrowe @ 2025-04-28T14:01 (+2)

Bioweapons are an existential risk


I'm interpreting this question as "an existential risk that we should be concerned about"; I think the case for that is much weaker than the case that they are an existential risk at all (though I still think the answer is yes).

Elsa @ 2025-05-07T00:57 (+1)

Consequentialists should be strong longtermists

Most of the population thinks only about the immediate future and never about future generations.

VeryJerry @ 2025-05-01T00:04 (+1)

AGI by 2028 is more likely than not

AI 2027

Joseph_Chu @ 2025-04-30T16:08 (+1)

AGI by 2028 is more likely than not

My current analysis, as well as a lot of other analysis I've seen, suggests AGI is most likely to be possible around 2030.

Evander H. 🔸 @ 2025-04-29T10:22 (+1)

AGI by 2028 is more likely than not

I think we should focus on short timelines; still, I think they are not the most likely scenario. Most likely, imo, is a delay of maybe two years.

Evander H. 🔸 @ 2025-04-29T10:20 (+1)

Consequentialists should be strong longtermists

It just makes sense theoretically. In practice it doesn't matter, e.g. RSI and loss of control are near-term risks.

Evander H. 🔸 @ 2025-04-29T10:18 (+1)

Bioweapons are an existential risk

Mainly thinking about A(G)I-engineered bioweapons.

GregorBln @ 2025-04-29T03:34 (+1)

AGI by 2028 is more likely than not

Harrison 🔸 @ 2025-04-28T22:51 (+1)

AGI by 2028 is more likely than not

Benjamin M. @ 2025-04-28T18:01 (+1)

Bioweapons are an existential risk

I don't buy the Parfitian argument, so I'm not sure what a binary yes-no about existential risk would mean to me. 

Benjamin M. @ 2025-04-28T17:57 (+1)

AGI by 2028 is more likely than not

I agree with a bunch of the standard arguments against this, but I'll throw in two more that I haven't seen fleshed out as much: 

  1. The intuitive definition of AGI includes some physical capabilities (and even ones that nominally exclude physical capabilities probably necessitate some), and we seem really far behind on where I would expect AI systems to be in manipulating physical objects.
  2. AIs make errors in systematically different ways than humans, and often have major vulnerabilities. This means we'll probably want AI that works with humans at every step, and so will want more specialized AI. I don't really buy some arguments that I've seen against this, but I don't know enough to have a super confident rebuttal.

Benjamin M. @ 2025-04-29T11:52 (+3)

Hmm it seems like the Metaculus poll linked is actually on a random selection of benchmarks being arbitrarily defined as a weakly general intelligence. If I have to go with the poll resolution, I think there's a much greater chance (not going to look into how difficult the Atari game thing would be yet, so not sure how much greater).

Yarrow @ 2025-04-28T16:23 (+1)

AGI by 2028 is more likely than not

I gave a number of reasons I think AGI by 2030 is extremely unlikely in a post here.

Knight Lee @ 2025-04-28T14:39 (+1)

Consequentialists should be strong longtermists

Technically I agree that 100% consequentialists should be strong longtermists, but I think if you are moderately consequentialist, you should only sometimes be a longtermist. When it comes to choosing your career, yes, focus on the far future. When it comes to abandoning family members to squeeze out another hour of work, no. We're humans not machines.

Will Aldred @ 2025-04-28T13:10 (+1)

Consequentialists should be strong longtermists

For me, the strongest arguments against strong longtermism are simulation theory and the youngness paradox (as well as yet-to-be-discovered crucial considerations).[1]

(Also, nitpickily, I’d personally reword this poll from ‘Consequentialists should be strong longtermists’ to ‘I am a strong longtermist,’ because I’m not convinced that anyone ‘should’ be anything, normatively speaking.)

  1. ^

    I also worry about cluelessness, though cluelessness seems just as threatening to neartermist interventions as it does to longtermist ones.

Toby Tremlett🔹 @ 2025-04-29T13:30 (+4)

I'm a pretty strong anti-realist but this is one of the strongest types of shoulds for me. 
I.e. 'If you want to achieve the best consequences, then you should expect the majority of affectable consequences to be in the far future' Seems like the kind of thing that could be true or false on non-normative grounds, and would normatively ground a 'should' if you are already committed to consequentialism. In the sense that believing "I should get to Rome as fast as possible" and "The fastest way to get to Rome is to take a flight" grounds a 'should' for "I should take a flight to Rome".