Ilya Sutskever has officially left OpenAI
By MarcusAbramovitch @ 2024-05-15T00:00 (+37)
This is a linkpost to https://twitter.com/ilyasut/status/1790517455628198322
ChanaMessinger @ 2024-05-15T01:03 (+31)
Sam's comment: https://twitter.com/sama/status/1790518031640347056
Jan Leike has also left: https://www.nytimes.com/2024/05/14/technology/ilya-sutskever-leaving-openai.html
Jan Leike, who ran the Super Alignment team alongside Dr. Sutskever, has also resigned from OpenAI. His role will be taken by John Schulman, another company co-founder.
defun @ 2024-05-15T09:49 (+27)
This raises the question of whether 80,000 Hours should still recommend that people join OpenAI.
JackM @ 2024-05-15T16:43 (+10)
Even if OpenAI has gone somewhat off the rails, should we want more or fewer safety-conscious people at OpenAI? I would imagine more.
Jelle Donders @ 2024-05-16T11:59 (+14)
I expect this was very much taken into account by the people who have quit, which makes their decision to quit anyway quite alarming.
MarcusAbramovitch @ 2024-05-15T19:21 (+7)
Does this not imply that all the people who quit recently shouldn't have?
JackM @ 2024-05-15T19:47 (+5)
From an EA-perspective - yes, maybe.
But also it's a personal decision. If you're burnt out and fed up or you can't bear to support an organization you disagree with then you may be better off quitting.
Also, quitting in protest can be a way to convince an organization to change course. It's not always effective, but it's certainly a strong message to leadership that you disapprove of what they're doing, which may at the very least get them thinking.
JackM @ 2024-05-17T20:00 (+2)
I've just thought of a counter-argument to my point. If OpenAI isn't safe, it may be worth trying to ensure that a safer AI lab (say Anthropic) wins the race to AGI. So it might be worth suggesting that talented people go to Anthropic rather than OpenAI, even if they join product or capabilities teams.
Habryka @ 2024-05-17T20:40 (+2)
That sounds like the way OpenAI got started.
JackM @ 2024-05-17T21:38 (+2)
What are you suggesting? That if we direct safety-conscious people to Anthropic, it will make it more likely that Anthropic will start to cut corners? Not sure what your point is.
Habryka @ 2024-05-17T22:16 (+4)
Yes: if we send people to Anthropic with the aim of "winning an AI arms race", that will make it more likely that Anthropic will start to cut corners. Indeed, that is very close to the reasoning that caused OpenAI to exist, and that seems to have caused it to cut lots of corners.
JackM @ 2024-05-17T22:33 (+5)
Hmm, I don't see why ensuring the best people go to Anthropic necessarily means they will take safety less seriously. I can actually imagine the opposite effect: if Anthropic catches up to or even overtakes OpenAI, then their incentive to cut corners should decrease, because it becomes more likely that they can win the race without cutting corners. Right now their only hope of winning the race is to cut corners.
Ultimately what matters most is what the leadership's views are. I suspect that Sam Altman never really cared that much about safety, but my sense is that the Amodeis do.
Habryka @ 2024-05-17T22:47 (+7)
Yeah, I don't think this is a crazy take. I disagree with it, having thought about it for many years, but I agree that it could make things better (though I don't expect it would, and expect it would instead make things worse).
Ryan Greenblatt @ 2024-05-18T04:15 (+6)
Ultimately what matters most is what the leadership's views are.
I'm skeptical this is true, particularly as AI companies grow massively and require vast amounts of investment.
It does seem important, but it's unclear that it matters most.
Jelle Donders @ 2024-05-15T14:27 (+17)
How many safety-focused people have left since the board drama now? I count 7, but I might be missing more. Ilya Sutskever, Jan Leike, Daniel Kokotajlo, Leopold Aschenbrenner, Cullen O'Keefe, Pavel Izmailov, William Saunders.
This is a big deal. A bunch of the voices that could raise safety concerns at OpenAI when things really heat up are now gone. Idk what happened behind the scenes, but they judged that now was a good time to leave.
Possible effective intervention: guaranteeing that if these people break their NDAs, all their legal fees will be covered. No idea how sensible this is, so agree/disagree voting encouraged.
Jason @ 2024-05-15T15:18 (+4)
Legal fees may not be these individuals' biggest exposure (assuming they have non-disclosure / non-disparagement agreements). The bigger exposure would be damages for breaching the NDA, which could be massive depending on the effects on OpenAI's reputation.
Brad West @ 2024-05-15T15:40 (+9)
It seems as if the potential damages could make the vast majority of defendants "judgment-proof" (meaning they lack the assets to satisfy the judgment).
I wonder about the ethics of an organization whose policy was to financially support people (post-bankruptcy) who made potentially extremely high-EV decisions that were personally financially ruinous.
Jason @ 2024-05-19T20:12 (+2)
I probably would be OK with that from an ethics standpoint. After all, I was not a party to the contracts in question. We celebrate (in appropriate circumstances) journalists who serve as conduits for actual classified information. Needless to say, I find the idea of being an enabler for the breach of contractual NDAs much less morally weighty than being an enabler for the breach of someone's oath to safeguard classified information.
Legally, such an organization would have to be careful to mitigate the risk of claims for tortious interference with contract and other theories that the AI company could come up with. Promising financial support prior to the leak might open the door for such claims; merely providing it (through a well-written trust) after the fact would probably be OK.
Larks @ 2024-05-15T13:58 (+11)
Shakeel provides a helpful list of all the people who have recently quit / been purged:
1. Ilya Sutskever
2. Jan Leike
3. Leopold Aschenbrenner
4. Pavel Izmailov
5. William Saunders
6. Daniel Kokotajlo
7. Cullen O'Keefe
https://twitter.com/ShakeelHashim/status/1790685752134656371
MichaelStJules @ 2024-05-15T01:03 (+11)
Worth noting he said he's "confident that OpenAI will build AGI that is both safe and beneficial under [current leadership]".
harfe @ 2024-05-15T08:58 (+21)
These kinds of resignation messages might not be very informative though. There are probably incentives to say nice things about each other.
MichaelStJules @ 2024-05-15T15:57 (+9)
He could have said different nice things or just left out the bit about safety. Do you think he's straightforwardly lying to the public about what he believes?
Or maybe he's just being (probably knowingly) misleading? "Confident that OpenAI will build AGI that is both safe and beneficial" might mean 95% confidence in safe, beneficial AGI from OpenAI, and 5% that it kills everyone.
MarcusAbramovitch @ 2024-05-17T16:13 (+6)
https://x.com/janleike/status/1791498174659715494
Jan Leike posted this thread on why he resigned.