What would you do if you had a lot of money/power/influence and you thought that AI timelines were very short?
By Greg_Colbourn @ 2021-11-12T21:59 (+29)
To make things more specific:
Lot of money = $1B+; lot of power = CEO of $10B+ org; lot of influence = 1M+ followers, or an advisor to someone with a lot of money or power.
AI timelines = time until an AI-mediated existential catastrophe
Very short = ≥ 10% chance of it happening in ≤ 2 years.
Please don’t use this space to argue that AI x-risk isn’t possible/likely, or that timelines aren’t that short. There are plenty of other places to do that. I want to know what you would do conditional on being in this scenario, not whether you think the scenario is likely.
lincolnq @ 2021-11-13T18:10 (+17)
Hm, if I felt timelines were that short, I would probably feel like I knew which company/government was going to be responsible for actually building AGI (or could at least narrow it down to a few). The plan would be to convince such orgs to ask me for advice, then have a team ready to research and give them the best possible advice, and hope that is good enough.
To convince them: I would be trying to leverage my power/influence to get to a position where leaders of the AGI-building organization would see me as someone to consult for help if they had a promising AGI-looking-thing and were trying to figure out how best to deploy it.
How?
- If rich, donate lots of money to causes that such people care about, thus buying invitations to conferences and parties where they might hang out.
- If otherwise influential, use my influence to get their attention to similar effect.
- There might be other leveraged projects (like blogs, etc.) that could generate lots of influence and admiration among the leaders of AGI-building orgs.
Simultaneously, I would also be trying to join (or create, if necessary) some sort of think tank comprising the people best placed to advise on short-term AGI strategy. Again, power and money seem useful for putting together such a group - you should be able to recruit the best possible people with star power, and/or pay them well, to start thinking about such things full time. The hard part here is shaping the group in the right way, so that they are both smart and thoughtful about high-stakes decisions, and their advice will be listened to and trusted by the AGI-building organization.
Assumptions / how does this strategy fail?
- I cannot build the influence required:
- I have to influence too many AGI builders (because I don't know which one is most likely to succeed), so my influence is too diluted
- They are not influenceable in this way
- AGI builders don't ask for the advice even if they want to:
- maybe the project is too secret
- advice can't solve the problem:
- maybe there is an internal deadline - things are moving too fast and they don't have time to ask
- maybe there are external deadlines, like competition between AGI builders, such that even if they get the advice they choose not to heed it
- maybe the AGI building leadership doesn't have sufficient control over the organization, so even if they get advice, their underlings fail to heed it
- advice is too low quality
- I wasn't able to recruit the people for the think tank
- They just didn't come up with the answer
MaxRa @ 2021-11-19T17:50 (+4)
Interesting and hopefully very hypothetical question. :')
Hmm, hard to say what an AI-mediated existential catastrophe within 2 years might look like; that's so fast. Maybe a giant leap towards a vastly superintelligent system that is so unconstrained and given so much access that it very quickly overpowers any attempt to constrain it? Feels kinda like it requires surprising carelessness… Or maybe a narrower system that is deliberately used to cause an existential catastrophe?
Meta
- Ask people at FHI/MIRI/GovAI/OpenPhil/… what I should and shouldn’t do.
- Confidentially talk to other rich, influential, and trustworthy people to coordinate a joint effort.
Some concrete not-well-thought-through ideas
- Talk to potential AI-breakthrough companies about my worries and about what I can do to slow the pace and increase safety/alignment/testing/cooperation with safety organizations.
- If there are race dynamics, try anything to get some cooperation going between the racing parties (probably they will already have tried that, I suppose... but maybe there will be possibilities).
Increase the number of safety researchers:
- Ask AI safety researchers who are in broad agreement with you to put up big research prizes for small projects/answers to questions that might be helpful (with a short time horizon, maybe 3 months, and make the prizes widely known among CS people).
- Try to hire everyone who contributed solid work to work full-time on safety research in a research institute you set up; give them all the info you have and let them think and work on whatever makes sense... hire the best science managers you can find.
- Offer to support the potential AI-breakthrough companies on safety issues with your hopefully somewhat impressive group of prize-winning hires.
Apply pressure / involve government:
- Especially unsure: if the companies decline for reasons that seem irresponsible, talk to the government and try to convince them that AI research is on the brink of producing catastrophe-causing AI?
Greg_Colbourn @ 2023-04-27T21:46 (+2)
This seems much more relevant now. I actually think we are (post GPT-4+planners) at a ≥ 10% chance of an existential catastrophe happening in ≤ 2 years (and maybe 50% within 5 years).
casebash @ 2021-11-13T15:52 (+2)
My expectation is that having $1 billion is more in money terms than being CEO of a $1 billion company is in power terms.
Greg_Colbourn @ 2021-11-13T16:58 (+2)
Ok, changed to $10B.
David Johnston @ 2021-11-14T00:13 (+1)
One thing I'd want to do is to create an organisation that builds networks with as many AI research communities as possible, monitors AI research as comprehensively as possible, and assesses the risk posed by different lines of research.
Some major challenges:
- a lot of labs want to keep substantial parts of their work secret, even more so for e.g. military research
- encouraging sharing of more knowledge might inadvertently spread knowledge of how to do risky stuff
- even knowing someone is doing something risky, might be hard to get them to change
- might be hard to see in advance what lines of research are risky
I think networking + monitoring + risk assessment together can help with some of these challenges. Risk assessment plus monitoring gives us a better idea of what we do and don't need to know, which helps with the first and second issues. Also, if we have good relationships with labs, we are probably better placed to come up with proposals that reduce risk while not hindering lab goals too much.
Networking might also help us learn where relatively unmonitored research is taking place, even if we can't find out much more about it.
It would still be quite hard to have a big effect, but I think even knowing partially who is taking risks is pretty valuable in your scenario.