What would you do if you had a lot of money/power/influence and you thought that AI timelines were very short?

By Greg_Colbourn @ 2021-11-12T21:59 (+29)

To make things more specific:
Lot of money = $1B+; lot of power = CEO of $10B+ org; lot of influence = 1M+ followers, or an advisor to someone with a lot of money or power.
AI timelines = time until an AI-mediated existential catastrophe
Very short = ≥ 10% chance of it happening in ≤ 2 years.

Please don’t use this space to argue that AI x-risk isn’t possible/likely, or that timelines aren’t that short. There are plenty of other places to do that. I want to know what you would do conditional on being in this scenario, not whether you think the scenario is likely.


lincolnq @ 2021-11-13T18:10 (+17)

Hm, if I felt timelines were that short, I would probably feel like I knew which company/government was going to be responsible for actually building AGI (or could at least narrow it down to a few). The plan would be to convince such orgs to ask me for advice, then have a team ready to research and give them the best possible advice, and hope that is good enough.

To convince them: I would be trying to leverage my power/influence to get to a position where leaders of the AGI-building organization would see me as someone to consult for help if they had a promising AGI-looking-thing and were trying to figure out how best to deploy it.

How?

Simultaneously, I would also be trying to join (or create, if necessary) some sort of think tank comprising the people best placed to advise on short-term AGI strategy. Again, power and money seem useful for putting together such a group - you should be able to recruit the best possible people with star power, and/or pay them well, to start thinking about such things full-time. The hard part here is shaping the group in the right way, so that they are both smart and thoughtful about high-stakes decisions, and their advice will be listened to and trusted by the AGI-building organization.

Assumptions / how does this strategy fail?

MaxRa @ 2021-11-19T17:50 (+4)

Interesting and hopefully very hypothetical question. :')

Hmm, hard to say what an AI-mediated existential catastrophe within 2 years might look like; that's so fast. Maybe a giant leap towards a vastly superintelligent system that is so unconstrained and given so much access that it very quickly overpowers any attempt to constrain it? Feels kinda like it requires surprising carelessness… Or maybe a more narrow system that is deliberately used to cause an existential catastrophe?

Meta

Some concrete not-well-thought-through ideas

Increase the number of safety researchers:

Use pressure / involve government:

Greg_Colbourn @ 2023-04-27T21:46 (+2)

This seems much more relevant now. I actually think we are (post GPT-4+planners) at a ≥ 10% chance of an existential catastrophe happening in ≤ 2 years (and maybe 50% within 5 years).

casebash @ 2021-11-13T15:52 (+2)

My expectation is that having $1 billion counts for more in money terms than being CEO of a $1 billion company does in power terms.

Greg_Colbourn @ 2021-11-13T16:58 (+2)

Ok, changed to $10B.

David Johnston @ 2021-11-14T00:13 (+1)

One thing I'd want to do is create an organisation that builds networks with as many AI research communities as possible, monitors AI research as comprehensively as possible, and assesses the risk posed by different lines of research.

Some major challenges:

I think networking + monitoring + risk assessment together can help with some of these challenges. Risk assessment + monitoring: we get a better idea of what we do and don't need to know, which helps with the first and second issues. Also, if we have good relationships with labs, we are probably better placed to come up with proposals that reduce risk without hindering lab goals too much.

Networking might also help us learn where relatively unmonitored research is taking place, even if we can't find out much more about it.

It would still be quite hard to have a big effect, but I think even knowing partially who is taking risks is pretty valuable in your scenario.