Sam Altman fired from OpenAI
By Larks @ 2023-11-17T21:07 (+133)
This is a linkpost to https://openai.com/blog/openai-announces-leadership-transition
The board of directors of OpenAI, Inc, the 501(c)(3) that acts as the overall governing body for all OpenAI activities, today announced that Sam Altman will depart as CEO and leave the board of directors. Mira Murati, the company’s chief technology officer, will serve as interim CEO, effective immediately.
A member of OpenAI’s leadership team for five years, Mira has played a critical role in OpenAI’s evolution into a global AI leader. She brings a unique skill set, understanding of the company’s values, operations, and business, and already leads the company’s research, product, and safety functions. Given her long tenure and close engagement with all aspects of the company, including her experience in AI governance and policy, the board believes she is uniquely qualified for the role and anticipates a seamless transition while it conducts a formal search for a permanent CEO.
Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.
In a statement, the board of directors said: “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission. We are grateful for Sam’s many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward. As the leader of the company’s research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO. We have the utmost confidence in her ability to lead OpenAI during this transition period.” [emphasis added]
Minh Nguyen @ 2023-11-18T02:17 (+52)
Found this on Reddit: Anxious_Bandicoot126 comments on Sam Altman is leaving OpenAI (reddit.com)
I feel compelled as someone close to the situation to share additional context about Sam and company.
Engineers raised concerns about rushing tech to market without adequate safety reviews in the race to capitalize on ChatGPT hype. But Sam charged ahead. That's just who he is. Wouldn't listen to us.
His focus increasingly seemed to be fame and fortune, not upholding our principles as a responsible nonprofit. He made unilateral business decisions aimed at profits that diverged from our mission.
When he proposed the GPT store and revenue sharing, it crossed a line. This signaled our core values were at risk, so the board made the tough decision to remove him as CEO.
Greg also faced some accountability and stepped down from his role. He enabled much of Sam's troubling direction.
Now our former CTO, Mira Murati, is stepping in as CEO. There is hope we can return to our engineering-driven mission of developing AI safely to benefit the world, and not shareholders.
Obviously just speculation for now, but seems plausible. The moment the GPT store was released I thought:
"wow that's really good for business ... wow that's really bad for alignment"
Chris Leong @ 2023-11-18T08:45 (+15)
I'm skeptical.
I've read their other comments. The initial comment sounded somewhat plausible, but their other comments sounded less like what I'd expect someone in that position to sound like.
Jonas Vollmer @ 2023-11-18T17:58 (+3)
This seems the most plausible speculation so far, though probably also wrong: https://twitter.com/dzhng/status/1725637133883547705
Lorenzo Buonanno @ 2023-11-18T18:53 (+6)
If you think it's more plausible than misalignment with OpenAI's mission, you could make some mana on Manifold.
SiebeRozendal @ 2023-11-18T17:42 (+47)
Worth noting that of the 4 remaining board members, 2 are associated with EA: Helen Toner (CSET) and Tasha McCauley (EV UK board member)
JWS @ 2023-11-18T18:13 (+58)
This is a critically important point to hold in mind if the reason for the move seems to be due to safety concerns as opposed to personal malpractice/deceiving the board[1]
I don't know what the hell happened. I guess further clarifications on the decision-making process and corporate landscape will be known tomorrow or, more likely, early next working week
I've voiced concerns before that EA is unaware that it can be drawn into 'one-way fights' sometimes, and this feels like another such moment. The Silicon Valley tech-twitter scene[2] has exploded over this, and so far EA is not coming out well in their eyes from what I can see. I think the days of "e/acc" being a meme movement are rapidly drawing to a close, and EA might find itself in a hostile atmosphere in what used to be one of the most EA-friendly places in the world.
Again, early speculations, but be careful out there Bay-Area EAs. Keep your wits about you.
1. ^ Really strange that, while this looks like the most likely reason, it's not really reflected in the language.
2. ^ Perhaps one of the few cases where Twitter might be an accurate representation of thoughts on the ground.
SiebeRozendal @ 2023-11-19T10:02 (+17)
Ironically, this particular set of comments is doing the rounds on Twitter with some banal commentary. https://twitter.com/tobi/status/1726132247227740623?t=Qu5UR4QKDz5anypwmuANwQ&s=19
BrownHairedEevee @ 2023-11-20T06:55 (+3)
🙄🙄
Sharmake @ 2023-11-19T02:54 (+13)
Yeah, this is one of the few times where I believe the EAs on the board likely overreached, because they probably didn't give enough evidence to justify their excoriating statement that Sam Altman was dishonest, and he might be coming back to lead the company.
I'm not sure how to react to all of this, though.
Edit: My reaction is just WTF happened, and why did they completely play themselves? Though honestly, I just believe that they were inexperienced.
Pablo @ 2023-11-19T13:04 (+23)
I'm not sure how to react to all of this, though.
Kudos for being uncertain, given the limited information available.
(Not something one can say about many of the other comments to this post, sadly.)
SiebeRozendal @ 2023-11-18T19:50 (+9)
Yeah, the tech scene really seems to have come down on the side of Sam Altman already. Let's hope the board had good grounds and will be able to demonstrate evidence of dishonesty soon.
Jelle Donders @ 2023-11-19T02:19 (+8)
I've shared very similar concerns for a while. The risk of successful narrow EA endeavors that lack transparency backfiring in this manner feels very predictable to me, but many seem to disagree.
Lukas_Gloor @ 2023-11-19T03:19 (+8)
There's some related discussion here on LW.
Ben Chancey @ 2023-11-18T20:02 (+7)
This is a critically important point to hold in mind if the reason for the move seems to be due to safety concerns as opposed to personal malpractice/deceiving the board
Really strange that, while this looks like the most likely reason, it's not really reflected in the language.
Do these explanations seem at odds to you for some reason? The language used in the statement does not say anything about personal malpractice/deception, just that he was "not consistently candid in his communications with the board". It seems entirely possible to me, and indeed probably most likely given what else we now know, that the board is alleging dishonesty re: safety-related commitments he made, or something like this.
Lorenzo Buonanno @ 2023-11-18T18:56 (+22)
Adam D'Angelo also worked at Facebook with Moskovitz from 2004 to 2008 (incl. as CTO 2006-2008) and is on the board of Asana
andrewpei @ 2023-11-18T21:49 (+42)
Twitter is full of people laying into EA for being behind Sam Altman's firing. However, if it's true that this happened because the board thought Altman was trying to take the company in an 'unsafe' direction then I'm glad they did this. And I'm glad that for the time being considerations other than 'shareholder value' are not the defining motivation behind AI development.
Fermi–Dirac Distribution @ 2023-11-19T03:19 (+16)
This is incredibly short-sighted. The board’s behavior was grossly unprofessional and the accompanying blog post was borderline defamatory. And Altman is one of the most highly-connected and competent people in the Bay Area tech scene. Altman can easily start another AI company; in fact, media outlets are now reporting that he's considering doing just that, or might even return to OpenAI by pressuring the board to resign.
In fact, Manifold is at 50% that Altman will return as CEO, and at 38% that he'll start another AI company. It seems that the board was unable to think even just two steps ahead if they thought this would end well.
Greg_Colbourn @ 2023-11-19T14:29 (+9)
Altman starting a new company could still slow things down a few months. Which could be critically important if AGI is imminent. In those few months perhaps government regulation with teeth could actually come in, and then shut the new company down before it ends the world.
Pablo @ 2023-11-24T13:27 (+7)
The board’s behavior was grossly unprofessional
You had no evidence to justify that claim back when you made it, and as new evidence is released, it looks increasingly likely that the claim was not only unjustified but also wrong (see e.g. this comment by Gwern).
Luke Freeman @ 2023-11-20T06:59 (+28)
Latest (48 hours in): OpenAI Board Stands by Decision to Force Sam Altman Out of C.E.O. Role
After 48 hours of furious negotiations, the A.I. company said Mr. Altman would not return to his job and that former Twitch C.E.O. Emmett Shear would be its interim boss.
The board of directors at OpenAI, the high-flying artificial intelligence start-up, stood by its decision to push out its former chief executive Sam Altman, according to an internal memo sent to the company’s staff on Sunday night.
OpenAI named Emmett Shear, a former executive at Twitch, as the new interim chief executive, pushing aside Mira Murati, a longtime OpenAI executive who was named interim chief executive after Mr. Altman’s ouster. The board said Mr. Shear has a “unique mix of skills, expertise and relationships that will drive OpenAI forward,” according to the memo viewed by The New York Times.
“The board firmly stands by its decision as the only path to advance and defend the mission of OpenAI,” said the memo, referring to Mr. Altman’s ouster on Friday. It was signed by each of the four directors on the company’s board: Adam D’Angelo, Helen Toner, Ilya Sutskever, and Tasha McCauley.
“Put simply, Sam’s behavior and lack of transparency in his interactions with the board undermined the board’s ability to effectively supervise the company in the manner it was mandated to do,” the memo said.
SiebeRozendal @ 2023-11-20T08:49 (+13)
Oh wow, that last paragraph seems like a good sign that they have good grounds for these statements they're not walking back
JWS @ 2023-11-20T09:28 (+7)
It seems odd for them to say that given that there were relatively credible rumours that the board was negotiating with Sam about a potential return (which we can assume broke down as they looked for an alternative CEO).
[I've retracted the above, as it seems inaccurate with the new hiring of Shear and reports that the board just went silent in response to pressure from investors and Microsoft]
Can they not share some of the reasoning though? Like, sure, some of it may involve corporate proprietary knowledge and NDAs, but part of the reason there was such a blowback to the decision was that it seemed to come out of nowhere. People assumed another shoe was going to drop because of the manner of the board's decision, and then it just hasn't?
The new CEO has literally just promised to:
- Hire an independent investigator to dig into the entire process leading up to this point and generate a full report.
- Continue to speak to as many of our employees, partners, investors, and customers as possible, take good notes, and share the key takeaways.
- Reform the management and leadership team in light of recent departures into an effective force to drive results for our customers.
So he's accepted the position without even knowing why they did what they did at a high level. [seems false, see Joshua's reply below]
While the board probably has the right to do what it did via the OpenAI Charter, the fact that it is not sharing its reasons, at either a high or low level, internally or externally, means that it has lost and is continuing to lose a lot of credibility and legitimacy, regardless of the legal facts of the case.
Linch @ 2023-11-20T10:44 (+23)
Why do you think the rumors that the board was negotiating with Sam were "relatively credible"? At this point, it seems more likely than not to be false, e.g. either random fake news or PR spin by pro-Altman VCs.
JWS @ 2023-11-20T11:03 (+1)
I mean I definitely agree that there's a fog-of-war situation going on. Given some new updates here, I've retracted that paragraph.
Some of my original points were:
- Things like this: https://nitter.net/emilychangtv/status/1726337590901796927#m. Yes, distrust the media etc., but it seemed to be the main state of play.
- Altman's photo wearing the guest pass, which seems like an obvious "I'm coming back as CEO or not at all" implication. He was obviously in the OpenAI offices for some reason; it seems weird for that not to be negotiations with the board over something, as opposed to collecting his belongings.
- Roon had a now-deleted tweet along the lines of "crossed the Rubicon, troops marching on Rome", which again implies there was an internal OpenAI move to get Sam back.
I still find the board's silence pretty weird; it's the big missing piece here.
I stand by my current belief that the radio silence is damaging the perception and support of the AI Safety cause.
Update on point 2: https://nitter.net/ashleevance/status/1726457222169829838#m
It seems that the board wasn't present when he visited. I guess what was going on was two different factions: 1) Mira Murati, as interim CEO, was trying to find some way to get Altman and Brockman back; 2) the board was trying to find its own new CEO choice ASAP to foreclose any chance of Sam returning to the position.
John G. Halstead @ 2023-11-20T11:47 (+17)
I think you are over-responding when we basically have no good information, as illustrated by the fact that you keep having to walk back claims you have made only a short time before
JWS @ 2023-11-20T12:15 (+10)
I take your point here John. There's a lot that's still to come out about the events of the weekend, and I've probably been a bit trigger-happy with responses. I'm going to step back from this thread and possibly the Forum as a whole for a little bit.
I do want to note that I picked up a somewhat hostile/adversarial tone to your comment (I'm not saying this was intentional). To 'keep having to walk back claims' seems a bit of an implied overclaim to me, especially as from my PoV it only happened twice: once after seeing Ashlee Vance's updated reporting, and once after Joshua's comment.
'Walking back' also seems more adversarial than 'corrected mistakes' (compare 'you keep having to walk back claims' vs 'you made corrections twice'). In any case, while the reporting has changed, a lot of my intuitions and feelings haven't shifted much. I still find the board's complete silence strange, and think this could be a precarious moment for AI Safety.
JoshuaBlake @ 2023-11-20T11:28 (+6)
he's accepted the position without even knowing why they did what they did at a high level
I don't think this is correct, from the same statement:
Before I took the job, I checked on the reasoning behind the change. The board did not remove Sam over any specific disagreement on safety, their reasoning was completely different from that. I'm not crazy enough to take this job without board support for commercializing our awesome models.
JWS @ 2023-11-20T11:57 (+5)
Thanks for this, have retracted that sentence.
Feels like some version of the reasoning should be made available to investors/Microsoft/the public in some short-term timeframe though? I feel like that would do a fair amount to quell some of the reactions.
JoshuaBlake @ 2023-11-20T12:52 (+3)
I would like that; however, how much they care about external reactions is unclear to me.
Ben Chancey @ 2023-11-20T18:58 (+4)
How on earth does one reconcile this with the fact that Ilya has now publicly tweeted that he deeply regrets his involvement in the board’s actions, and that he has signed the open letter threatening to quit unless the board resigns?
HenryStanley @ 2023-11-20T14:31 (+26)
An open letter from 500 of ~700 OpenAI employees to the board, calling on them to resign (also on The Verge).
Suggests there's an enormous amount of bad feeling about the decision internally. It also seems like a bad sign that the board was unwilling to provide any 'written evidence' of wrongdoing, though maybe something will appear in the coming days.
But all told, it looks pretty bad for EA. Seems like there's an enormous backlash online: initially against OpenAI for firing everyone’s favourite AI CEO, and now against “EA” “woke” “decelerationist” types.[1][2]
It’s also seemed to trigger a flurry of tweets from Nick Cammarata, saying that EAs are overwhelmingly self-flagellating and self-destructive and that EA caused him and his friends enormous harm. I think his claims are flatly wrong (though they may be true for him and his friends), and some of the replies seem to agree, but it has 500K views as I publish.
Seems like the whole episode (combined with at least one prominent EA seemingly saying it’s emblematic of something dreadful and toxic) has the potential to cause a lot of reputational damage, especially if the board chooses not to clarify its actions (although it's possibly too late for that).
NickLaing @ 2023-11-20T14:47 (+24)
I make this speculative comment with no inside information
There may be a world in which this is net positive. If EAs have been wrong the whole time about the best approach being the "narrow" or "inside" game, this might force EAs into being mostly adversarial vs. tech accelerationists and many in Silicon Valley in general. This could be more effective at stopping or slowing doom in the medium to long term than trying to force safety from the inside against strong market forces.
It could even help the EA AI risk crowd come more alongside the sentiment of the general public, after the initial reputational loss simmers down.
I'm not saying this is even likely, it's just a different take.
Lizka @ 2023-11-22T19:53 (+20)
FYI — lots of relevant links collected here: OpenAI: The Battle of the Board and OpenAI: Facts from a Weekend
Jackson Wagner @ 2023-11-17T22:20 (+18)
Very interested to find out some of the details here:
- Why now? Was there some specific act of wrongdoing that the board discovered (if so, what was it?), or was now an opportune time to make a move that the board members had secretly been considering for a while, or etc?
- Was this a pro-AI-safety move that EAs should ultimately be happy about (ie, initiated by the most EA-sympathetic board members, with the intent of bringing in more x-risk-conscious leadership)? Or is this a disaster that will end up installing someone much more focused on making money than on talking to governments and figuring out how to align superintelligence? Or is it relatively neutral from an EA / x-risk perspective? (Update: first speculation I've seen is this cautiously optimistic tweet from Eliezer Yudkowsky)
- Greg Brockman, president of the board, is also stepping down. How might this be related, and what might this tell us about the politics of the board members and who supported/opposed this decision?
Rebecca @ 2023-11-18T01:02 (+18)
Side note: Greg held two roles: chair of the board, and president. It sounds like he was fired from the former and resigned from the latter role.
Jonas Vollmer @ 2023-11-18T00:34 (+17)
Regarding the second question, I made this prediction market: https://manifold.markets/JonasVollmer/in-a-year-will-we-think-that-sam-al?r=Sm9uYXNWb2xsbWVy
Jackson Wagner @ 2023-11-18T00:43 (+5)
Nice! I like this a lot more than the chaotic multi-choice markets trying to figure out exactly why he was fired.
titotal @ 2023-11-19T10:46 (+17)
From this article:
Brad Lightcap, an OpenAI executive, told employees on Saturday morning that the company had been talking with the board to “better understand the reason and process behind their decision,” according to an internal message I obtained.
“We can say definitively that the board’s decision was not made in response to malfeasance or anything related to our financial, business, safety or security/privacy practices,” he wrote. “This was a breakdown in communication between Sam and the board.”
If this is true, then I think the board has made a huge mess of things. They've taken a shot without any ammunition, and not realised that the other parties can shoot back. Now there are mass resignations, Microsoft is furious, seemingly all of silicon valley has turned against EA, and it's even looking likely that Altman comes back.
It seems like they didn't think they had to act like the boards of other billion-dollar companies (notifying your partners of big decisions, being literal instead of euphemistic when discussing reasons for firing, selling your decisions with PR, etc.). But norms and customs often exist for a reason, and corporate governance seems to be no exception.
SiebeRozendal @ 2023-11-19T11:50 (+51)
I think it's premature to judge things based on the little information that's currently available. I would be surprised if there weren't reasons for the board's unconventional choices. (Though I'm not ruling out that what you say ends up being right.)
trevor1 @ 2023-11-19T18:26 (+19)
If this is true, then I think the board has made a huge mess of things. They've taken a shot without any ammunition, and not realised that the other parties can shoot back. Now there are mass resignations, Microsoft is furious, seemingly all of silicon valley has turned against EA, and it's even looking likely that Altman comes back.
How much of this is "according to anonymous sources"?
The board was deeply aware of intricate details of other parties' will and ability to shoot back. Probably nobody was aware of all of the details, since webs of allies are formed behind closed doors and rearrange during major conflicts, and since investors have a wide variety of retaliatory capabilities that they might not have been open about during the investment process.
John G. Halstead @ 2023-11-20T11:49 (+12)
What is your current view given how things have developed? Why do you keep putting forward strong views that are based on very bad information?
Jelle Donders @ 2023-11-20T02:01 (+9)
The board must have thought things through in detail before pulling the trigger, so I'm still putting some credence on there being good reasons for their move and the subsequent radio silence, which might involve crucial info they have and we don't.
If not, all of this indeed seems like a very questionable move.
Burnydelic @ 2023-11-18T06:06 (+14)
"OpenAI’s ouster of CEO Sam Altman on Friday followed internal arguments among employees about whether the company was developing AI safely enough, according to people with knowledge of the situation.
Such disagreements were high on the minds of some employees during an impromptu all-hands meeting following the firing. Ilya Sutskever, a co-founder and board member at OpenAI who was responsible for limiting societal harms from its AI, took a spate of questions.
At least two employees asked Sutskever—who has been responsible for OpenAI’s biggest research breakthroughs—whether the firing amounted to a “coup” or “hostile takeover,” according to a transcript of the meeting. To some employees, the question implied that Sutskever may have felt Altman was moving too quickly to commercialize the software—which had become a billion-dollar business—at the expense of potential safety concerns."
Kara Swisher also tweeted:
"More scoopage: sources tell me chief scientist Ilya Sutskever was at the center of this. Increasing tensions with Sam Altman and Greg Brockman over role and influence and he got the board on his side."
"The developer day and how the store was introduced was in inflection moment of Altman pushing too far, too fast. My bet: [Sam will] have a new company up by Monday."
Apparently Microsoft was also blindsided by this and didn't find out until moments before the announcement.
"You can call it this way," Sutskever said about the coup allegation. "And I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAl builds AGI that benefits all of humanity." AGI stands for artificial general intelligence, a term that refers to software that can reason the way humans do.
When Sutskever was asked whether "these backroom removals are a good way to govern the most important company in the world?" he answered: "I mean, fair, I agree that there is a not ideal element to it. 100%."
https://twitter.com/AISafetyMemes/status/1725712642117898654
Fai @ 2023-11-18T14:38 (+5)
Apparently Microsoft was also blindsided by this and didn't find out until moments before the announcement.
Not sure how important this is: Judging from the behavior of Satya Nadella during OpenAI's dev day 12 days ago, Microsoft quite likely didn't see that coming at that moment.
SiebeRozendal @ 2023-11-19T17:57 (+11)
Thought this was a good article on Microsoft's power: https://archive.li/soZMQ
It is unclear if OpenAI could continue as a going concern without continual cash inflows from Microsoft. While OpenAI is, according to reports, making about $80 million per month currently and may be on track to make $1 billion in revenue in 2023 – ten times more than it anticipated when it secured an additional $10 billion funding commitment from Microsoft in January – it is not known if the company is profitable or what its burn rate is. But it is likely to be fast. The company lost $540 million in 2022 on revenue of less than $30 million for the entire year, according to documents seen by Fortune. If its costs have also ramped up in line with revenues, the company would need continual support from Microsoft just to keep operating.
Furthermore, OpenAI is entirely dependent on Microsoft’s cloud computing datacenters to both train and run its models. The global shortage of graphic processing units (GPUs), the specialized computer chips needed to train and run large AI models, and the size of OpenAI’s business, with tens of millions of paying customers dependent on those models, mean that the San Francisco AI company cannot easily port its business to another cloud service provider.
Sharmake @ 2023-11-20T16:31 (+6)
It seems like the board did not fire Sam Altman for safety reasons, but for other reasons instead. Utterly confusing, and IMO it demolishes my previous theory, though a lot of other theories also lost out.
Sources below, with their archive versions included:
https://twitter.com/norabelrose/status/1726635769958478244
Dave Cortright @ 2023-11-17T23:14 (+4)
This is mere speculation, but another group I'm on posited this might be part of it:
Sam Altman's sister, Annie Altman, claims Sam has severely abused her
Lukas_Gloor @ 2023-11-18T00:09 (+16)
This doesn't seem impossible given the timing, but I'd still be very surprised if this was what the board's decision was about. (I'm especially skeptical that it would be exclusively about this.) For one thing, the board announcement uses the wording "hindering [the board's] ability to exercise its responsibilities." This doesn't seem like the wording someone would choose if their decision was prompted by investigating events that happened more than twenty years ago and which don't directly relate to beneficial use of AI or running a company. (Even in the unlikely case where the board decided to open an investigation into abuse allegations and then caught Sam Altman lying about details related to that, it's not apparent why they would describe these hypothetical lies as "hindering [the board's] ability to exercise its responsibilities," as opposed to using wording that's more just about "lost the board's trust.")

Besides, I struggle to picture board members starting an investigation solely based on one accusation from when the person in question was still a teenager. I'm not saying that these accusations are for sure unimportant – in fact, I said the opposite on that LW comment thread. It's just that... Despite the good advice here about how boards should keep a close eye on leadership, I don't think it's a board's role or comparative advantage to focus on investigating stuff like that. Especially once they already have confirmed their standing CEO and in the absence of more direct red flags. (It would maybe be a bit different if this was a CEO selection process and Sam Altman was a new applicant that board members had only little information about.)

One option I can see is that, maybe if the board already had other reasons to be concerned, then learning about the accusations could give them further fuel for investigations. Alternatively, though, it seems much more likely to me that this was about other things entirely. (Perhaps something related to publicly announcing that OpenAI "created AGI internally" and then backpedaling it, while also saying that short AI timelines are best for humanity even though an alignment solution is far from in sight?)
Yarrow Bouchard @ 2023-11-18T03:23 (+6)
publicly announcing that OpenAI "created AGI internally" and then backpedaling it
Wasn't that just a throwaway joke on Reddit?
titotal @ 2023-11-18T11:18 (+4)
I very much doubt he was fired over the allegations. However, if the allegations are true, it would raise the likelihood that he engaged in other sketchy or unethical behaviour that we don't know about.
"not consistently candid" seems to be an implication that he was deceptive to the board about something, at least. It could have just been about strategy, or it could have involved personal misbehaviour as well.
Lukas_Gloor @ 2023-11-18T11:39 (+12)
Yeah, now that more information has come to light, it seems to be clearly about disagreements about how to pursue the OpenAI mission. I wonder if the board can point to at least one objectively outrageous thing that Altman was deceptive about, or whether it was more subtle stuff that added up but is hard to convey to outsiders.

For instance, I could imagine that they got "empty promises" vibes from Altman where he was placating the most safety-concerned voices at OpenAI by saying he'll take such and such precautions later in the future, but then kept doing things that are at odds with taking safety seriously, until people had enough and felt deceived and like they could no longer trust his assurances. In this scenario, it's going to be difficult for the board and for Sutskever to convey that their decision wasn't some overreaction.

(FWIW, I think it can be totally justifiable to fire someone over weasel-like assurances about mission alignment that never led to any visible actions – it's just tricky that there's always some plausible deniability where the CEO can say "I was going to take action later, like I said; it's just that you people are insufficiently pragmatic and don't have experience dealing with investors like Microsoft; and anyway, the tech isn't risky enough yet and you all are freaking out.")
titotal @ 2023-11-18T15:55 (+26)
It would seem like a bad move to openly say the "not consistently candid" and "hindering responsibilities" thing if there was no objective deception they could point to. Even if they don't state what happened publicly, the board has to be able to defend its actions to its employees and to its partners at Microsoft.
My impression is that this type of public admonishment is rather rare for the ousting of a CEO; it would be more typical to talk about a "difference of vision" or something similarly bland. I think either they have a clear-cut case against him, or the board has mishandled the situation.
Adebayo Mubarak @ 2023-11-20T14:43 (+3)
We are at a critical time: either we have the board yielding to the pleas/threats of the workers, or we have inexperienced actors at the helm of the driving force in AI. What do you think organizations like EA can do in this regard? Should we just sit and watch, or should we regard the threat as non-existent? To me, having these sorts of people managing the AI space is a ticking time bomb.
Steve @ 2023-11-17T23:09 (+2)
Interesting. The press release defines the board's governance mission as "ensure that artificial general intelligence benefits all humanity," and then asserts that Sam hindered that mission.
I suppose one could interpret that as a shift towards greater caution and governance in the name of AI safety, or a shift towards greater speed/open-sourcing if the board views their mission through a lens of accelerationism and accessibility.
Or something entirely different... we're digging into Talmudic nuance here, and all of these are near-wild guesses.
It could be noteworthy that they chose to highlight Mira's governance experience.
The latter part of the press release (not quoted above, but visible in the original here) also points out that the majority of board members hold no OpenAI equity, which could be a nod towards this being a move that sacrifices profitability for the sake of the mission. Again though, only a guess, and even if true it would still leave open the question of how the board is interpreting the mission.
KaliCorte @ 2023-11-24T15:47 (+1)
An unemployment period of only 5 days; on the other hand, not a bad endorsement.
The reinstatement of Altman as head of OpenAI took place under truly dramatic circumstances. Reportedly, 650 employees threatened to leave immediately and investors threatened legal action against the ChatGPT creator. Unsurprisingly, Microsoft, the largest investor, owning 49% of the shares and pumping huge amounts of money into the company, had the most at stake. It was the tech giant that first expressed great dissatisfaction with Altman's dismissal and even offered him the creation of an AI division within Microsoft, should OpenAI's board of directors not relent.
Xing Shi Cai @ 2023-11-21T11:58 (+1)
Just saw this on Hacker News as a response to Sam Altman Exposes the Charade of AI Accountability. The damage to EA's reputation is hard to estimate but perhaps real.
I think people have yet to realize that this whole AI Safety thing is complete BS. It's just another veil, like Effective Altruism, to get good PR and build a career around. The only people who truly believe this AI safety stuff are those with no technical knowledge or expertise.
Ian Turner @ 2023-11-20T00:58 (+1)
Here’s a Bloomberg article with a few more details.
Linch @ 2023-11-18T02:58 (+1)
Apropos of nothing, I'm reminded of this old update from CEA.
Linch @ 2023-11-18T20:55 (+6)
Can someone who downvoted explain why they downvoted?
Gregory Lewis @ 2023-11-18T23:10 (+32)
It seemed not relevant enough to the topic, and too apt to be highly inflammatory, to be worth bringing up.
slg @ 2023-11-18T07:31 (+3)
What’s the lore behind that update? This was before I followed EA community stuff
Larks @ 2023-11-18T18:10 (+28)
My understanding, though I'm not sure the board ever publicly confirmed this, was that they decided Larissa was acting on behalf of Leverage Research, and hence contrary to the best interests of CEA, and they wanted to stop the entryism.
Habryka @ 2023-11-18T18:44 (+17)
IIRC the official reason (or at least the thing that caused stuff to come to a head) was that Larissa and Kerry had been dating for multiple months but had never told the rest of leadership or the board about it.
kevinj @ 2023-11-18T01:52 (+1)
If Holden or other folks in EA blew up OpenAI, that ain't gonna be good for the movement... fr fr