What's Going on With OpenAI's Messaging?
By Ozzie Gooen @ 2024-05-21T02:22 (+213)
This is a quickly written opinion piece about what I understand of OpenAI. I first posted it to Facebook, where it generated some discussion.
Some arguments that OpenAI is making, simultaneously:
- OpenAI will likely reach and own transformative AI (useful for attracting talent to work there).
- OpenAI cares a lot about safety (good for public PR and government regulations).
- OpenAI isn’t making anything dangerous and is unlikely to do so in the future (good for public PR and government regulations).
- OpenAI doesn’t need to spend many resources on safety, and implementing safe AI won’t put it at any competitive disadvantage (important for investors who own most of the company).
- Transformative AI will be incredibly valuable for all of humanity in the long term (for public PR and developers).
- People at OpenAI have thought long and hard about what will happen, and it will be fine.
- We can’t predict concretely what transformative AI will look like or what will happen after (Note: Any specific scenario they propose would upset a lot of people. Vague hand-waving upsets fewer people).
- OpenAI can be held accountable to the public because it has a capable board of advisors overseeing Sam Altman (he said this explicitly in an interview).
- The previous board scuffle was a one-time random event that was a very minor deal.
- OpenAI has a nonprofit structure that provides an unusual focus on public welfare.
- The nonprofit structure of OpenAI won’t inconvenience its business prospects or shareholders in any way.
- The name “OpenAI,” which clearly comes from the early days when the mission was actually to make open-source AI, is an equally good name for where the company is now. (I don’t actually care about this, but I find it telling that the company doubles down on arguing that the name is still applicable.)
So they need to simultaneously say:
“We’re making something that will dominate the global economy and outperform humans at all capabilities, including military capabilities, but is not a threat.”
“Our experimental work is highly safe, but in a way that won’t actually cost us anything.”
“We’re sure that the long-term future of transformative change will be beneficial, even though none of us can know or outline specific details of what that might actually look like.”
“We have a great board of advisors that provide accountability. Sure, a few months ago, the board tried to fire Sam, and Sam was able to overpower them within two weeks, but next time will be different.”
“We have all of the benefits of being a nonprofit, but we don’t have any of the costs of being a nonprofit.”
Meta’s messaging is clearer.
“AI development won’t get us to transformative AI, we don’t think that AI safety will make a difference, we’re just going to optimize for profitability.”
Anthropic’s messaging is a bit clearer:
“We think that AI development is a huge deal and correspondingly scary, and we’re taking a costlier approach accordingly, though not too costly such that we’d be irrelevant.”
This requires a strange and narrow worldview to make sense, but it’s still more coherent.
But OpenAI’s messaging has turned into a particularly tangled mess of conflicting promises. It’s the kind of political strategy that can work for a while, especially if you can have most of your conversations in private, but is really hard to pull off when you’re highly public and facing multiple strong competitive pressures.
If I were a journalist interviewing Sam Altman, I’d try to spend as much of it as possible just pinning him down on these countervailing promises they’re making. Some types of questions I’d like him to answer would include:
“Please lay out a specific, year-by-year, story of one specific scenario you can imagine in the next 20 years.”
“You say that you care deeply about long-term AI safety. What percentage of your workforce is solely dedicated to long-term AI safety?”
“You say that you think that globally safe AGI deployments require international coordination to go well. That coordination is happening slowly. Do your plans work conditional on international coordination failing? Explain what your plans would be.”
“What do the current prediction markets and top academics say will happen as a result of OpenAI’s work? Which clusters of these agree with your expectations?”
“Can you lay out any story at all for why we should now expect the board to do a decent job overseeing you?”
What Sam likes to do in interviews, like many public figures, is to shift specific questions into vague generalities and value statements. A great journalist would fight this, force him to say nothing but specifics, and then just have the interview end.
I think that reasonable readers should be, and are, quickly learning to just stop listening to this messaging. Most organizational messaging is somewhat dishonest, but at least it’s not usually self-contradictory. Sam’s been unusually good at seeming genuine, but at this point, the set of incoherent promises seems too baffling to take literally.
Instead, I think the thing to do is just to ignore the noise and look only at the actions actually taken. And those actions seem pretty straightforward to me. OpenAI is taking the actions you’d expect from any conventional high-growth tech startup. From its actions, it comes across a lot like:
“We think AI is a high-growth area that’s not actually that scary. It’s transformative in a way similar to Google and not the Industrial Revolution. We need to solely focus on developing a large moat (i.e. monopoly) in a competitive ecosystem, like other startups do.”
OpenAI now seems, to me, almost exactly like a traditional high-growth tech startup. The main unusual things about it are that:
- It’s in an area that some people (though not OpenAI’s management) think is unusually high-risk,
- Its messaging is unusually lofty and conflicting, even for a Silicon Valley startup, and
- It started out under an unusual nonprofit setup, which now barely seems relevant.
Matthew_Barnett @ 2024-05-22T01:38 (+57)
This post contains many claims that you interpret OpenAI to be making. However, unless I'm missing something, I don't see citations for any of the claims you attribute to them. Moreover, several of the claims feel like they could potentially be described as misinterpretations of what OpenAI is saying or merely poorly communicated ideas.
I acknowledge that this post was hastily written, and it's not necessary to rigorously justify every claim, but your thesis also seems like the type of thing that should be proven rather than asserted. It would indeed be damning if OpenAI is taking contradictory positions about numerous important issues, but I don't think you've shown that they are in this post. This post would be stronger if you gave concrete examples.
For example, you say that OpenAI is simultaneously claiming,
- OpenAI cares a lot about safety (good for public PR and government regulations).
- OpenAI isn’t making anything dangerous and is unlikely to do so in the future (good for public PR and government regulations).
Is it true that OpenAI has claimed that they aren't making anything dangerous and aren't likely to do so in the future? Where have they said this?
Manuel Allgaier @ 2024-05-22T14:08 (+16)
I agree that it would be good to have citations. In case neither Ozzie nor anyone else here finds it a good use of their time to do it: I've been following OpenAI's and Sam Altman's messaging specifically for a while, and Ozzie's summary of their (conflicting) messaging seems roughly accurate to me. It's easy to notice the inconsistencies in Sam Altman's messaging, especially when it comes to safety.
Another commenter (whose name I forgot; I think he was from CLTR) put it nicely: it feels like Altman does not have one consistent set of beliefs (like an ethics/safety researcher would) but tends to say different things that are useful for achieving his goals (like many CEOs do), and he seems to do that more than other AI lab executives at Anthropic or DeepMind.
Owen Cotton-Barratt @ 2024-05-22T15:10 (+11)
Thanks for sharing your impressions. But even if many observers have this impression, it still seems like it could be quite valuable to track down exactly what was said, because there's some gap between:
(a) has nuanced models of the world and will strategically select different facets of those to share on different occasions; and
(b) will strategically select what to say on different occasions without internal validity or consistency.
... but either of these could potentially create the impressions in observers of inconsistency. (Not to say that (a) is ideal, but I think that (b) is clearly more egregious.)
Ozzie Gooen @ 2024-05-22T20:08 (+3)
I imagine we're basically all in agreement on this.
The only question is who might want to, or be able to, do much of it. It does seem like it could be a fairly straightforward project, though it feels like it would take some work.
Rebecca @ 2024-05-28T19:24 (+2)
It could be partially crowdsourced. People could add links to interviews to a central location as they come across them, quotes can be taken from news articles, maybe some others can do AI transcription of other interviews. I think subtitles from YouTube videos can also be downloaded?
SiebeRozendal @ 2024-05-23T10:34 (+4)
Yes, that comment was made by Lukas Gloor here, when I asked what people thought Sam Altman's beliefs are.
Ozzie Gooen @ 2024-05-22T16:03 (+7)
> thesis also seems like the type of thing that should be proven, rather than asserted. It would indeed be damning if OpenAI is taking contradictory positions about numerous important issues, but I don't think you've shown that they are in this post. This post would be stronger if you gave concrete examples.
I agree with this. I'd also prefer that there would be work to track down more of this. I've been overall surprised at the response my post had, but from previous comments, I assumed that readers mostly agreed with these claims. I'd like to see more work go into this (I'll look for some sources, and encourage others to do a better job).
> OpenAI isn’t making anything dangerous and is unlikely to do so in the future (good for public PR and government regulations).
I feel like this is one of the more implicit items listed. It's true that I don't remember them saying this explicitly; it's more in the manner in which they speak. There's also a question here of what the bar for "dangerous" is. Also, to be clear, I think OpenAI is stating, "We are working on things that could be dangerous if not handled well, but we are handling them well, so the results of our work won't be dangerous," not, "We are working on things that could never be dangerous."
Here are some predictions I'd make:
- If someone were to ask Sam Altman, "Do you think that OpenAI releasing LLMs to the point it has now has endangered over 100 lives, or will have the effect of doing so in the next few years?", he'd say no.
- If someone were to ask Sam Altman, "Do you think that GPT-5 is likely to be a real threat to humanity", he'd say something like, "This is still too early. If it's any threat, it's the potential for things like misinformation, not an existential threat. We're competent at handling such threats."
- If someone were to ask Sam Altman, "Is there a substantial chance that OpenAI will create something that destroys mankind, or kills 1k+ people, in the next 10 years", he'll say, "We are very careful, so the chances are very low of anything like that. However, there could be other competitors...."
Their actions really don't make it seem, to me, like they think it's very dangerous.
- The main evidence otherwise was the funding of the alignment team, but that team was just disbanded.
- They removed the main board members who were publicly concerned about risk.
- There is very little public discussion of concrete/specific large-scale risks of their products and the corresponding risk-mitigation efforts (outside of things like short-term malicious use by bad API actors, where they are doing better work).
I'd also flag that such a message (it's not very dangerous / it will be handled well) seems more common from Microsoft, I believe even when asked about OpenAI.
Rebecca @ 2024-05-22T18:11 (+7)
One quote from Sam I came across recently that might be of interest to you: "What I lose the most sleep over is the hypothetical idea that we already have done something really bad by launching ChatGPT. That maybe there was something hard and complicated in there (the system) that we didn't understand and have now already kicked it off."
peterhartree @ 2024-05-22T18:10 (+3)
You wrote:
[OpenAI do] very little public discussion of concrete/specific large-scale risks of their products and the corresponding risk-mitigation efforts (outside of things like short-term malicious use by bad API actors, where they are doing better work).
This doesn't match my impression.
For example, Altman signed the CAIS AI Safety Statement, which reads:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
The “Preparedness” page—linked from the top navigation menu on their website—starts:
The study of frontier AI risks has fallen far short of what is possible and where we need to be. To address this gap and systematize our safety thinking, we are adopting the initial version of our Preparedness Framework. It describes OpenAI’s processes to track, evaluate, forecast, and protect against catastrophic risks posed by increasingly powerful models.
The page mentions “cybersecurity, CBRN (chemical, biological, radiological, nuclear threats), persuasion, and model autonomy”. The framework itself goes into more detail, proposing scorecards for assessing risk in each category. They define "catastrophic risk" as "any risk which could result in hundreds of billions of dollars in economic damage or lead to the severe harm or death of many individuals—this includes, but is not limited to, existential risk". The phrase "millions of deaths" appears in one of the scorecards.
Their “Planning for AGI & Beyond” blog post describes the risks as "existential"; I quote the relevant passage in another comment.
On their “Safety & Alignment” blog they highlight recent posts called Reimagining secure infrastructure for advanced AI and Building an early warning system for LLM-aided biological threat creation.
My sense is that there are many other examples, but I'll stop here for now.
Ozzie Gooen @ 2024-05-22T20:21 (+3)
I agree that there's a lot of evidence that people at OpenAI have thought that AI could be a major risk, and I think that these are good examples.
I said here, "concrete/specific large-scale risks of their products and the corresponding risk-mitigation efforts (outside of things like short-term malicious use by bad API actors, where they are doing better work)."
Just looking at the examples you posted, most feel pretty high-level and vague, and not very related to their specific products.
> For example, Altman signed the CAIS AI Safety Statement, which reads...
This was a one-sentence statement. It easily sounds to me like saying, "Someone should deal with this, but not exactly us."
> The framework itself goes into more detail, proposing scorecards for assessing risk in each category.
I think this is a good step, but it seems pretty vague to me. There's fairly little quantifiable content here, just a lot of words like "medium risk" and "high risk".
From what I can tell, the "teeth" in the document amount to "changes get brought up to management and our board," which doesn't fill me with confidence.
Related, I'd be quite surprised if they actually followed through with this much in the next 1-3 years, but I'd be happy to be wrong!
Manuel Allgaier @ 2024-05-22T14:05 (+4)
This could be a community effort. If you're reading this and have a spare minute, can you recall any sources for any of Ozzie's claims and share links to them here? (Or go the extra mile: copy his post into a Google Doc and add sources there.)
peterhartree @ 2024-05-22T08:36 (+4)
Yes. (4) and (11) are also very much "citation needed". My sense is that they would need to be significantly moderated to fit the facts (e.g. the profit cap is still a thing).
yanni kyriacos @ 2024-05-22T10:10 (+3)
"Is it true that OpenAI has claimed that they aren't making anything dangerous and aren't likely to do so in the future? Where have they said this?"
Related > AFAICT they've also never said "We're aiming to make the thing that has a substantial chance of causing the end of humanity". I think that is a far more important point.
There are two obvious ways to be dishonest: tell a lie or not tell the truth. This falls into the latter category.
peterhartree @ 2024-05-22T17:34 (+6)
OpenAI's “Planning for AGI & Beyond” blog post includes the following:
As our systems get closer to AGI, we are becoming increasingly cautious with the creation and deployment of our models. Our decisions will require much more caution than society usually applies to new technologies, and more caution than many users would like. Some people in the AI field think the risks of AGI (and successor systems) are fictitious; we would be delighted if they turn out to be right, but we are going to operate as if these risks are existential.
At some point, the balance between the upsides and downsides of deployments (such as empowering malicious actors, creating social and economic disruptions, and accelerating an unsafe race) could shift, in which case we would significantly change our plans around continuous deployment.
[...]
The first AGI will be just a point along the continuum of intelligence. We think it’s likely that progress will continue from there, possibly sustaining the rate of progress we’ve seen over the past decade for a long period of time. If this is true, the world could become extremely different from how it is today, and the risks could be extraordinary. A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too.
[...]
Successfully transitioning to a world with superintelligence is perhaps the most important—and hopeful, and scary—project in human history. Success is far from guaranteed, and the stakes (boundless downside and boundless upside) will hopefully unite all of us.
Altman signed the CAIS AI Safety Statement, which reads:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
In 2015 he wrote a blog post which begins:
Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity.
Ozzie Gooen @ 2024-05-22T20:25 (+12)
I have bad feelings about a lot of this.
The "Planning for AGI & Beyond" doc seems to me to be heavily inspired by a few other people at OpenAI at the time, mainly the safety team, and I'm nervous those people have less influence now.
At the bottom, it says:
Thanks to Brian Chesky, Paul Christiano, Jack Clark, Holden Karnofsky, Tasha McCauley, Nate Soares, Kevin Scott, Brad Smith, Helen Toner, Allan Dafoe, and the OpenAI team for reviewing drafts of this.
Since then, Tasha and Helen have been fired off the board, and I'm guessing relations have soured with others listed.
James Payor @ 2024-05-23T01:26 (+4)
Fwiw the relationship with Nate seemed mostly that Sam asked for comments, Nate gave some, and there was no back and forth. See Nate's post: https://www.lesswrong.com/posts/uxnjXBwr79uxLkifG/comments-on-openai-s-planning-for-agi-and-beyond
Sam seemed to oversell the relationship with this acknowledgement, so I don't think we should read much into the other names except literally "they were asked to review drafts".
Ozzie Gooen @ 2024-05-23T01:37 (+10)
sigh... Part of me wants to spend a bunch of time trying to determine which of the following might apply here:
1. This is what Sam really believes. He wrote it himself. He pinged these people for advice. He continues to believe it.
2. This is something that Sam quickly said because he felt pressured by others. This could either be direct pressure (they asked for this) or indirect (he thought they would like him more if he did this).
3. Someone else wrote this, then Sam put his name on it, and barely noticed it.
But at the same time, given that Sam has what seems to me like a long track record of insincerity anyway, I don't feel very optimistic about easily being able to judge this.
James Payor @ 2024-05-23T15:19 (+4)
These are good points!
At the time I thought that Nate feeling the need to post and clarify about what actually happened was a pretty strong indication that Sam was using this opportunity to pretend they are on better terms with these folks. (Since I think he otherwise never talks to Nate/Eliezer/MIRI? I could be wrong.)
But yeah it could be that someone who still had influence thought this post was important to run by this set of people. (I consider this less likely.)
I don't think Sam would have barely noticed. It sounds like he was the one who asked for feedback.
In any case this event seems like a minor thing, though imo a helpful part of the gestalt picture.
Ozzie Gooen @ 2024-05-23T01:45 (+16)
I just came across this post, which raises a good point:
https://forum.effectivealtruism.org/posts/vBjSyNNnmNtJvmdAg/
Basically, Sam Altman:
1. Claimed that a slower takeoff would be better. "a slower takeoff gives us more time to figure out empirically how to solve the safety problem and how to adapt.”
2. More recently, he's started trying to raise $7 trillion to build more AI chips.
Actions speak louder than words. But if one wanted to find clearer "conflicting messages" in words, I'm sure that if someone were to ask Sam, "Will your recent $7T fundraise for chips lead to more AI danger?", he'd find a way to downplay it and/or argue that it's somehow better for safety.
nathanhb @ 2024-05-24T05:17 (+2)
In an interview soon after the news of the $7 trillion fundraise, Sam Altman said that it was just a ludicrous rumor and that he wasn't trying to do any such thing.
He said that he was playing along with a joke when he said things seeming to acknowledge the reality of the deal.
Ozzie Gooen @ 2024-05-24T17:37 (+17)
That's interesting to note, thanks!
At the same time, I'm not sure what to make of this, from Sam's perspective. "Playing along with a joke?" So a non-trivial amount of his public communication is really him just joking, without the audience knowing it? That makes his other communication even harder to trust.
I'm not sure how much I buy the "joke" argument. It's something I'm used to trolls using as a defense.
"*obviously* I was just joking when I made controversial statement X publicly, and it got a lot of hype."
I'm having trouble imagining any high-up management or PR strategy signing off on this.
"So, you were just accused of a major scandal. We suggest that you pretend that it's true for a while. Then after a few weeks or so, tell everyone that you were joking about it."
dogmatic rationalist @ 2024-06-08T02:28 (+1)
Sam Altman may just joke a bit too much; if you are in the memer/redditor mindset, you can read a good number of his tweets as some form of joke not to be taken seriously.
But then again, I have found people who over-psychoanalyse all his joke tweets, reaching conclusions like "he has severe trauma and a salvation fantasy," so I don't know where to draw the line in reading his tweets. Still, since it's Twitter and Elon Musk is trying to be funny there, I generally lean toward the former reading.
I think a good amount of the joking culture was amplified by e/acc memeing their way against the AI safety crowd on Twitter.
titotal @ 2024-05-23T11:33 (+11)
AI ethicists often imply that certain people in AI are just using x-risk as a form of tech hype/PR. The idea is that signaling concern for AI x-risk might make it look like you are concerned and responsible, while simultaneously advertising how powerful your product is and implying it will be much more powerful soon, so that people invest in your company. Bonus points are that you can signal trustworthiness and caring while ignoring current-day harms caused by your product, and you can still find justifications for going full steam ahead on development (such as saying we have to put the moral people at the head of the race).
I think this theory would explain the observed behavior here pretty well.
Ozzie Gooen @ 2024-05-29T22:45 (+7)
This interview seems very relevant.
https://twitter.com/JMannhart/status/1795652390563225782
"But more to that, like, no one person should be trusted here. I don't have super voting shares. Like I don't want them. The board can fire me. I think that's important. I think the board over time needs to get like democratized to all of humanity.There's many ways that could be implemented.
But the reason for our structure and the reason it's so weird and one of the consequences of that weirdness was me ending up with no equity, is we think this technology, the benefits, the access to it, the governance of it, belongs to humanity as a whole.
You should not, if this really works, it's quite a powerful technology, and you should not trust one company and certainly not one person with it.
Interviewer
So why should we trust OpenAI? Are you saying we shouldn't?
Sam
No, I think you should trust OpenAI, but only if OpenAI is doing these sorts of things. Like if we're years down the road and have not sort of figured out how to start democratizing control, then I think you..."
Ozzie Gooen @ 2024-05-22T16:15 (+7)
I started doing a bit of digging into sources/citations. One issue is that much of the communication is spread over interviews, many of which don't have easily accessible transcriptions. Other parts are spread over blog posts and tweets.
I'm curious how much of this process could be automated now. It seems like it would be good to have some system to (rough sketch after the list):
1. Make a list of all the recorded interviews he did.
2. Auto-transcribe the ones without transcriptions.
3. List blog posts + tweets.
4. Feed all of these to an LLM.
5. Ask, for each of the items I mentioned, which, if any, quotes fit into it.
6. Try to come up with other interesting items I didn't mention.
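To make this concrete, here's a minimal sketch of what such a system might look like, assuming the `openai` Python client for both transcription and claim-matching. The folder name, file name, model names, and the truncated claim list are all placeholder assumptions, and step 1 (actually discovering the interviews) would still need to be done manually or scripted separately.

```python
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder list; in practice, paste in the full set of claims from the post.
CLAIMS = [
    "OpenAI will likely reach and own transformative AI.",
    "OpenAI cares a lot about safety.",
    "OpenAI isn't making anything dangerous and is unlikely to do so.",
]


def transcribe(audio_path: Path) -> str:
    """Step 2: auto-transcribe one interview recording."""
    with audio_path.open("rb") as f:
        result = client.audio.transcriptions.create(model="whisper-1", file=f)
    return result.text


def match_quotes(source_text: str) -> str:
    """Steps 4-6: ask an LLM which claims, if any, the source text supports."""
    prompt = (
        "Here is a list of claims attributed to OpenAI:\n"
        + "\n".join(f"- {c}" for c in CLAIMS)
        + "\n\nHere is a source text (interview transcript, blog post, or tweet):\n"
        + source_text
        + "\n\nFor each claim, quote any passages that support or contradict it, "
        "and flag any other notable claims that aren't on the list. "
        "Say 'none' if nothing matches."
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Steps 1-2: transcribe whatever interview audio has been collected locally.
    sources = [transcribe(p) for p in Path("interviews").glob("*.mp3")]
    # Step 3: add blog posts and tweets gathered into a plain text file.
    sources.append(Path("posts_and_tweets.txt").read_text())
    # Steps 4-6: run each source past the claim list.
    for text in sources:
        print(match_quotes(text))
```

The hard parts would be collecting the sources in the first place and chunking long transcripts to fit context limits, and whatever the model flags would still need human review before being treated as a citation.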
J Litborn @ 2024-05-22T18:04 (+2)
Zvi has been covering a ton of Altman appearances; going through his blog could probably go a long way: https://thezvi.substack.com/