Orienting to 3 year AGI timelines
By Nikola @ 2024-12-22T23:07 (+120)
This is a crosspost, probably from LessWrong. Try viewing it there.
Vasco Grilo🔸 @ 2024-12-24T00:39 (+10)
Hi Nikola.
I define AGI here as an AI system which is able to perform 95% of the remote labor that existed in 2022. I don’t think definitions matter that much anyways because once we reach AI R&D automation, basically every definition of AGI will be hit soon after (barring coordinated slowdowns or catastrophes).
What is your median date of superintelligent AI as defined by Metaculus? If sometime in 2027, I would be happy to bet it will not happen before the end of 2027.
Nikola @ 2024-12-24T16:36 (+7)
My median is around mid 2029, largely due to business-not-as-usual scenarios like treaties, pauses, sabotage, and war.
Vasco Grilo🔸 @ 2024-12-25T11:27 (+10)
Thanks for sharing. Are you open to a bet like the one I linked above, but with a resolution date of mid 2029? I should disclaim that some have argued it would be better for people with your views to instead ask banks for loans (see comments in the post about my bet).
yanni kyriacos @ 2024-12-26T04:50 (+2)
If I had ten grand (or one) to throw around, I'd be putting that into my org or donating it to an AI safety org. Do you think there are ways that a bet could be more useful than a donation for AI safety? I'm struggling to see them.
Vasco Grilo🔸 @ 2024-12-26T17:36 (+2)
Hi Yanni,
I propose bets like this to increase my donations to animal welfare interventions, as I do not think their marginal cost-effectiveness will go down that much over the next few years.
yanni kyriacos @ 2024-12-26T22:55 (+2)
Ah ok that makes sense :)
And you don't mind taking money from AI safety causes to fund that? Or maybe you think that is a really good thing?
Vasco Grilo🔸 @ 2024-12-27T10:57 (+1)
I guess AI safety interventions are less cost-effective than GiveWell's top charities, whereas I estimate:
Nikola @ 2024-12-25T16:34 (+2)
I think I'll pass for now but I might change my mind later. As you said, I'm not sure if betting on ASI makes sense given all the uncertainty about whether we're even alive post-ASI, the value of money, property rights, and whether agreements are upheld. But thanks for offering, I think it's epistemically virtuous.
Also I think people working on AI safety should likely not go into debt for security clearance reasons.
Vasco Grilo🔸 @ 2025-01-04T07:45 (+2)
@Nikola[1], here is an alternative bet I am open to, which you may prefer. If, by the end of 2029, Metaculus' question about superintelligent AI:
- Resolves with a date, I transfer to you 10 k 2025-January-$.
- Does not resolve, you transfer to me 10 k 2025-January-$.
- Resolves ambiguously, nothing happens.
The resolution date of the bet can be moved to make it good for you. I think the bet above would be neutral for you in terms of purchasing power if your median date of superintelligent AI as defined by Metaculus were the end of 2029, and the probability of me paying you if you win (p1) were the same as the probability of you paying me if I win (p2). Under your views, I think p2 is slightly higher than p1 because of higher extinction risk if you win than if I win, so it makes sense for you to push the resolution date a little later to account for this. Your median date of superintelligent AI is mid 2029, which is 6 months before my proposed resolution date, so I think the bet above may already be good for you (under your views).
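A minimal expected-value sketch of this reasoning from Nikola's side (all probabilities below, including the values used for p1 and p2, are illustrative assumptions rather than numbers from the thread):

```python
# Rough expected value of the proposed bet from Nikola's side, in 2025-January-$.
# Every probability here is an illustrative assumption, not a figure from the thread.

STAKE = 10_000

def ev_for_nikola(p_resolves, p_ambiguous, p1, p2):
    """Expected net transfer to Nikola.

    p_resolves  -- chance the Metaculus question resolves with a date by the deadline
    p_ambiguous -- chance it resolves ambiguously (nothing happens)
    p1          -- chance Vasco's payment actually goes through if Nikola wins
    p2          -- chance Nikola's payment actually goes through if Vasco wins
    """
    p_no_resolve = 1.0 - p_resolves - p_ambiguous
    return p_resolves * p1 * STAKE - p_no_resolve * p2 * STAKE

# Median exactly at the deadline and p1 == p2: the bet is roughly neutral.
print(ev_for_nikola(0.5, 0.0, 0.8, 0.8))   # 0.0
# If paying is more reliable when Nikola loses (p2 > p1), the EV turns negative,
# which is what pushing the resolution date later is meant to offset.
print(ev_for_nikola(0.5, 0.0, 0.7, 0.8))   # -500.0
```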
- ^ I am tagging you because I clarified the bet a little.
huw @ 2024-12-22T23:20 (+9)
Heya, I’m not an AI guy anymore so I find these posts kinda tricky to wrap my head around. So I’m earnestly interested in understanding: If AGI is that close, surely the outcomes are completely overdetermined already? Or if they’re not, surely you only get to push the outcomes by at most 0.1% on the margins (which is meaningless if the outcome is extinction/not extinction)? Why do you feel like you have agency in this future?
Nikola @ 2024-12-22T23:44 (+15)
I get that it can be tricky to think about these things.
I don't think the outcomes are overdetermined - there are many research areas that can benefit a lot from additional effort, policy is high leverage and can absorb a lot more people, and advocacy is only starting and will grow enormously.
AGI being close possibly decreases tractability, but on the other hand increases neglectedness, as every additional person makes a larger relative increase in the total effort spent on AI safety.
The fact that it's about extinction increases, not decreases, the value of marginally shifting the needle. Working on AI safety saves thousands of present human lives in expectation.
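A rough back-of-the-envelope for that last claim (the marginal risk reduction below is an assumed, illustrative figure, not one Nikola gives):

```python
# Illustrative arithmetic: expected present lives saved by one person's marginal effort.
# The risk-reduction figure is an assumption for the sake of the example.

world_population = 8_000_000_000   # roughly 8 billion people alive today
marginal_risk_reduction = 1e-6     # assumed drop in extinction probability from one
                                   # additional person working on AI safety

expected_lives_saved = world_population * marginal_risk_reduction
print(expected_lives_saved)        # 8000.0, i.e. "thousands of present human lives"
```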
Peter @ 2024-12-23T04:57 (+2)
This is a thoughtful post, so it's unfortunate it hasn't gotten much engagement here. Do you have cruxes around the extent to which centralization is favorable or feasible? It seems like small models that could be run on a phone or laptop (~50GB) are becoming quite capable, and decentralized training runs work for 10-billion-parameter models, which are close to that size range. I don't know its exact size, but Gemini Flash 2.0 seems much better than I would have expected a model of that size to be in 2024.
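A quick sanity check on those sizes (the parameter counts and precisions below are assumptions for illustration; the actual size of Gemini Flash 2.0 is not public):

```python
# Rough memory footprint of a dense model: parameter count x bytes per parameter.
# Parameter counts here are illustrative, not actual figures for any named model.

def model_size_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Approximate weight size in GB (2 bytes/param ~ fp16/bf16, 4 ~ fp32)."""
    return n_params * bytes_per_param / 1e9

print(model_size_gb(10e9))      # ~20 GB: a 10B-parameter model at 16-bit precision
print(model_size_gb(10e9, 4))   # ~40 GB: the same model at 32-bit precision
print(model_size_gb(25e9))      # ~50 GB: the size range Peter mentions, at 16-bit
```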
Nikola @ 2024-12-23T06:15 (+4)
I'm guessing that open weight models won't matter that much in the grand scheme of things - largely because once models start having capabilities which the government doesn't want bad actors to have, companies will be required to make sure bad actors don't get access to models (which includes not making the weights available to download). Also, the compute needed to train frontier models and the associated costs are increasing exponentially, meaning there will be fewer and fewer actors willing to spend money to make models they don't profit from.
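A toy projection of what exponentially rising training costs would look like (the starting cost and growth rate are assumptions for illustration, not Nikola's figures):

```python
# Toy projection of frontier training-run costs under assumed exponential growth.
# Starting cost and growth rate are illustrative assumptions only.

start_cost_usd = 1e8    # assume roughly $100M for a frontier training run today
annual_growth = 3.0     # assume costs roughly triple each year

for year in range(6):
    cost = start_cost_usd * annual_growth ** year
    print(f"year +{year}: ~${cost / 1e9:.1f}B")
```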
Peter @ 2024-12-23T06:42 (+1)
So it seems like you're saying there are at least two conditions: 1) someone with enough resources would have to want to release a frontier model with open weights, maybe Meta or a very large coalition of the open-source community if distributed training continues to scale, and 2) it would need enough dangerous-capability mitigations, like unlearning, tamper-resistant weights, or cloud inference monitoring, or be far enough behind the frontier that governments don't try to stop it. Does that seem right? What do you think is the likely price range for AGI?
I'm not sure the government is moving fast enough, or is that interested in locking down the labs, given that doing so might slow them down more than it increases their lead, or that they don't fully buy into the risk arguments for now. I'm not sure what the key factors to watch here are. I expected reasoning systems next year, but it seems like even open-weight ones around o1-preview level were released this year, just a few weeks later, indicating that multiple parties are pursuing similar lines of AI research somewhat independently.
Nikola @ 2024-12-23T07:43 (+3)
Yup those conditions seem roughly right. I'd guess the cost to train will be somewhere between $30B and $3T. I'd also guess the government will be very willing to get involved once AI becomes a major consideration for national security (and there exist convincing demonstrations or common knowledge that this is true).