Articles about recent OpenAI departures

By bruce @ 2024-05-17T17:38 (+126)

This is a linkpost to https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence

A brief overview of recent OpenAI departures (Ilya Sutskever, Jan Leike, Daniel Kokotajlo, Leopold Aschenbrenner, Pavel Izmailov, William Saunders, Ryan Lowe, Cullen O'Keefe[1]). Will add other relevant media pieces below as I come across them.


Some quotes perhaps worth highlighting:

Even when the team was functioning at full capacity, that “dedicated investment” was home to a tiny fraction of OpenAI’s researchers and was promised only 20 percent of its computing power — perhaps the most important resource at an AI company. Now, that computing power may be siphoned off to other OpenAI teams, and it’s unclear if there’ll be much focus on avoiding catastrophic risk from future AI models.

-Jan suggesting that compute for safety may have been deprioritised despite the 20% commitment. (Wired claims that OpenAI confirms that their "superalignment team is no more".)
 

“I joined with substantial hope that OpenAI would rise to the occasion and behave more responsibly as they got closer to achieving AGI. It slowly became clear to many of us that this would not happen,” Kokotajlo told me. “I gradually lost trust in OpenAI leadership and their ability to responsibly handle AGI, so I quit.”


(Additional kudos to Daniel for not signing additional confidentiality obligations on departure, which is plausibly relevant for Jan too given his recent thread. Cullen also notes that he is "not under any non-disparagement obligations to OpenAI".)

 

Edit:
-Shakeel's article on the same topic.

-Kelsey's article about the nondisclosure/nondisparagement provisions that OpenAI employees have been offered. She also reports OpenAI saying that they *won't* strip anyone of their equity for not signing the secret NDA going forward.

-Sam Altman responds to the backlash around equity here, claiming not to have known about this and to be open to "fixing" things. That said, given Sam's claims of genuine embarrassment, of never having clawed back vested equity (nor intending to for people who do not sign a separation agreement), and of being in the process of fixing it, asking ex-employees to email him individually if they're worried about their nondisparagement agreements does not seem like the most ideal way of fixing this.

-Wired claims that OpenAI confirms that their "superalignment team is no more". 

-Cullen O'Keefe notes that he is "not under any non-disparagement obligations to OpenAI".

  1. ^

    Last two names covered by Shakeel/Wired, but thought it'd be clearer to list all names together


jimrandomh @ 2024-05-18T04:28 (+63)

The language shown in this tweet says:

If the Grantee becomes a Withdrawn Limited Partner, then unless, within 60 days following its applicable Withdrawal Event, the Grantee ... duly executes and delivers to the Partnership a general release of claims against the Partnership and the other Partners with regard to all matters relating to the Partnership up to and including the time of such Withdrawal Event, such Grantee's Units shall be cancelled and reduced to zero ...

It's a trick!

Departing OpenAI employees are then offered a general release which meets the requirements of this section and also contains additional terms. What a departing OpenAI employee needs to do is have their own lawyer draft, execute, and deliver a general release which meets the requirements set forth. Signing the separation agreement is a mistake, and rejecting the separation agreement without providing your own general release is a mistake.

I could be misunderstanding this; I'm not a lawyer, just a person reading carefully. And there's a lot more agreement text that I don't have screenshots of. Still, I think the practical upshot is that departing OpenAI employees may be being tricked, and this particular trick seems defeatable to me. Anyone leaving OpenAI really needs a good lawyer.

Greg_Colbourn @ 2024-05-18T18:46 (+15)

See also: Call for Attorneys for OpenAI Employees and Ex-Employees

BrownHairedEevee @ 2024-05-20T18:32 (+2)

It seems like these terms would constitute theft if the equity awards in question were actual shares of OpenAI rather than profit participation units (PPUs). When an employee is terminated, their unvested RSUs or options may be cancelled, but the company would have no right to claw back shares that are already vested as those are the employee's property. Similarly, don't PPUs belong to the employee, meaning that the company cannot "cancel" them without consideration in return?

David Mathers @ 2024-05-17T19:30 (+44)

Daniel's behavior here is genuinely heroic, and I say that as someone who is pretty skeptical of AI takeover being a significant risk*. 

*(I still think the departure of safety people is bad news though.) 

jimrandomh @ 2024-05-18T04:20 (+38)

According to Kelsey's article, OpenAI employees are coerced into signing lifelong nondisparagement agreements, which also forbid discussion of the nondisparagement agreements themselves, under threat of losing all of their equity.

This is intensely contrary to the public interest, and possibly illegal. Enormous kudos for bringing it to light.

In a legal dispute initiated by an OpenAI employee, the most important thing would probably be what representations were previously made about the equity. That's hard for me to evaluate, but if it's true that the units were presented as compensation and the nondisparagement wasn't disclosed, then rescinding those benefits could be a breach of contract. However, I'm not sure whether this would apply if the clawback was merely threatened but never actually executed.

CA GOV § 12964.5 and 372 NLRB No. 58 also offer some angles by which former OpenAI employees might fight this in court.

CA GOV § 12964.5 talks specifically about disclosure of "conduct that you have reason to believe is unlawful." Generically criticizing OpenAI as pursuing unsafe research would not qualify unless (the speaker believes) it rises to the level of criminal endangerment or similar. Copyright issues would *probably* qualify. Workplace harassment would definitely qualify.

(No OpenAI employees have alleged any of these things publicly, to my knowledge.)

372 NLRB No. 58 nominally invalidates separation agreements that contain nondisparagement clauses, and that restrict discussion of the terms of the separation agreement itself. However, it's specifically focused on the effect on collective bargaining rights under the National Labor Relations Act, which could make it inapplicable.

Larks @ 2024-05-18T05:14 (+35)

Kelsey suggests that OpenAI may be admitting defeat here:

OpenAI also says that going forward, they *won't* strip anyone of their equity for not signing the secret NDA, which is a bigger deal. I asked if this was a change of policy. ... "This statement reflects reality", replied OpenAI's spokesperson. To be fair it's a Friday night and I'm sure she's sick of me. But I have multiple ex-employees confirming this, if true, would be a big change of policy, presumably in response to backlash from current employees.

https://twitter.com/KelseyTuoc/status/1791691267941990764

Neel Nanda @ 2024-05-18T22:18 (+22)

Damage control, not defeat IMO. It's not defeat until they free previous leavers from unfair non-disparagement agreements / otherwise make it right to them.

Rebecca @ 2024-05-18T09:48 (+12)

What about for people who’ve already resigned?

Joseph_Chu @ 2024-05-17T20:44 (+28)

This does not bode well, in my view. One of my personal concerns about the usefulness of AI safety technical research is the extent to which the fruits of such research would actually be used by the frontier labs in practice. Even if some hypothetical researcher or lab figures out a solution to the Alignment problem, that doesn't mean the eventual creators of AGI will care enough to actually use it if, for instance, it comes with an alignment tax that slows down their capabilities work and leads to less profit, or worse, costs them first-mover advantage to a less scrupulous competitor.

OpenAI seems like the front runner right now, and the fact that they had a substantial Alignment Team with substantial compute resources devoted to it at least made it seem like maybe they'd care enough to use any effective alignment techniques that do get developed and ensure that things go well. The gutting of the Alignment Team does not look good in this regard.

Linch @ 2024-05-18T00:19 (+25)

This feels really suss to me:

Many people at OpenAI get more of their compensation from PPUs than from base salary. PPUs can only be sold at tender offers hosted by the company. When you join OpenAI, you sign onboarding paperwork laying all of this out.

And that onboarding paperwork says you have to sign termination paperwork with a 'general release' within sixty days of departing the company. If you don't do it within 60 days, your units are cancelled. No one I spoke to at OpenAI gave this little line much thought.

And yes this is talking about vested units, because a separate clause clarifies that unvested units just transfer back to the control of OpenAI when an employee undergoes a termination event (which is normal).

There's a common legal definition of a general release, and it's just a waiver of claims against each other. Even someone who read the contract closely might be assuming they will only have to sign such a waiver of claims.

But when you actually quit, the 'general release'? It's a long, hardnosed, legally aggressive contract that includes a confidentiality agreement which covers the release itself, as well as arbitration, nonsolicitation and nondisparagement and broad 'noninterference' agreement.

And if you don't sign within sixty days your units are gone. And it gets worse - because OpenAI can also deny you access to the annual events that are the only way to sell your vested PPUs at their discretion, making ex-employees constantly worried they'll be shut out.

Larks @ 2024-05-18T01:34 (+29)

Sounds like it is time for someone to report them to the NLRB.

Linch @ 2024-05-18T22:17 (+3)

I'm not sure if you need standing to complain, but here's the relevant link.