Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy
By Garrison @ 2024-02-10T19:52 (+280)
This is a linkpost to https://garrisonlovely.substack.com/p/sam-altmans-chip-ambitions-undercut
If you enjoy this, please consider subscribing to my Substack.
Sam Altman has said he thinks that developing artificial general intelligence (AGI) could lead to human extinction, but OpenAI is trying to build it ASAP. Why?
The common story for how AI could overpower humanity involves an “intelligence explosion,” where an AI system becomes smart enough to further improve its capabilities, bootstrapping its way to superintelligence. Even without any kind of recursive self-improvement, some AI safety advocates argue that a large enough number of copies of a genuinely human-level AI system could pose serious problems for humanity. (I discuss this idea in more detail in my recent Jacobin cover story.)
Some people think the transition from human-level AI to superintelligence could happen in a matter of months, weeks, days, or even hours. The faster the takeoff, the more dangerous, the thinking goes.
Sam Altman, circa February 2023, agrees that a slower takeoff would be better. In an OpenAI blog post called “Planning for AGI and beyond,” he argues that “a slower takeoff gives us more time to figure out empirically how to solve the safety problem and how to adapt.”
So why does rushing to AGI help? Altman writes that “shorter timelines seem more amenable to coordination and more likely to lead to a slower takeoff due to less of a compute overhang.”
Let’s set aside the first claim, which is far from obvious to me.
Compute, shorthand for computational resources, is one of the key inputs into training AI models. Altman is basically arguing that the longer it takes to get to AGI, the cheaper and more abundant the compute, which can then be plowed back into improving or scaling up the model.
The amount of compute used to train AI models has increased roughly one-hundred-millionfold since 2010. Compute supply has not kept pace with demand, driving up prices and rewarding the companies that have near-monopolies on chip design and manufacturing.
Last May, Elon Musk said that “GPUs at this point are considerably harder to get than drugs” (and he would know). One startup CEO said “It’s like toilet paper during the pandemic.”
Perhaps no one has benefited more from the deep learning revolution than the 31-year-old GPU designer Nvidia. GPUs, chips originally designed to process 3D video game graphics, were discovered to be the best hardware for training deep learning models. Nvidia, once little-known outside of PC gaming circles, reportedly accounts for 88 percent of the GPU market and has ridden the wave of AI investment. Since OpenAI’s founding in December 2015, Nvidia’s valuation has risen more than 9,940 percent, breaking $1 trillion last summer. CEO and cofounder Jensen Huang was worth $5 billion in 2020. Now he’s worth $64 billion.
If training a human-level AI system requires an unprecedented amount of computing power, close to economic and technological limits, as seems likely, and additional compute is needed to increase the scale or capabilities of the system, then your takeoff speed may be rate-limited by the availability of this key input. This kind of reasoning is probably why Altman thinks a smaller compute overhang will result in a slower takeoff.
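To make the overhang intuition concrete, here is a minimal toy sketch of the argument (not anything from Altman or OpenAI; every number below is a made-up assumption purely for illustration): if the compute needed to train a human-level system is roughly fixed, but hardware price-performance keeps improving, then the later that system arrives, the more spare compute a given deployment budget can immediately throw at scaling it up.

```python
# Toy illustration of the "compute overhang" intuition. All constants are
# hypothetical placeholders, not estimates from the post or from OpenAI.

AGI_TRAINING_FLOP = 1e27      # assumed fixed compute needed to train the system
FLOP_PER_DOLLAR_2024 = 1e17   # assumed 2024 hardware price-performance
ANNUAL_IMPROVEMENT = 1.35     # assumed ~35%/year improvement in FLOP per dollar
DEPLOYMENT_BUDGET = 1e10      # assumed $10B available to scale the system up at arrival

def overhang_multiple(arrival_year: int) -> float:
    """How many 'training runs' worth of compute the deployment budget buys
    in the year the system arrives."""
    flop_per_dollar = FLOP_PER_DOLLAR_2024 * ANNUAL_IMPROVEMENT ** (arrival_year - 2024)
    return DEPLOYMENT_BUDGET * flop_per_dollar / AGI_TRAINING_FLOP

for year in (2027, 2032, 2037):
    print(year, f"{overhang_multiple(year):.1f}x")
# Later arrival -> cheaper compute -> a larger multiple of spare capacity at arrival,
# which is the "overhang" that (on this argument) makes a fast takeoff more likely.
```

On this toy picture, shorter timelines mean the system arrives while compute is still scarce and expensive, so there is less slack to scale it up overnight; that is the sense in which Altman’s blog post ties shorter timelines to a slower takeoff. Deliberately making compute cheaper and more abundant pushes in the opposite direction.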
Given all this, many in the AI safety community think that increasing the supply of compute will increase existential risk from AI, by both shortening timelines AND increasing takeoff speed — reducing the time we have to work on technical safety and AI governance and making loss of control more likely.
So why is Sam Altman reportedly trying to raise trillions of dollars to massively increase the supply of compute?
Last night, the Wall Street Journal reported that Altman was in talks with the UAE and other investors to raise up to $7 trillion to build more AI chips.
I’m going to boldly predict that Sam Altman will not raise $7 trillion to build more AI chips. But even one percent of that total would nearly double the amount of money spent on semiconductor manufacturing equipment last year.
Perhaps most importantly, Altman’s plan seems to fly in the face of the arguments he made not even one year ago. Increasing the supply of compute is probably the purest form of boosting AI capabilities and would increase the compute overhang that he claimed to worry about.
The AI safety community sometimes divides AI research into capabilities and safety, but some researchers push back on this dichotomy. A friend of mine who works as a machine learning academic once wrote to me that “in some sense, almost all [AI] researchers are safety researchers because the goal is to try to understand how things work.”
Altman makes a similar point in the blog post:
Importantly, we think we often have to make progress on AI safety and capabilities together. It’s a false dichotomy to talk about them separately; they are correlated in many ways. Our best safety work has come from working with our most capable models. That said, it’s important that the ratio of safety progress to capability progress increases.
There are good reasons to doubt the numbers reported above (mostly because they’re absurdly, unprecedentedly big). But regardless of its feasibility, this effort to massively expand the supply of compute is hard to square with the above argument. Making compute cheaper speeds things up without any necessary increase in understanding.
Following November’s board drama, early reporting emerged about Altman’s Middle East chip plans. It’s worth noting that Helen Toner and Tasha McCauley, two of the (now ex-) board members who voted to fire Altman, reviewed drafts of the February 2023 blog post. While I don’t think there was any single smoking gun that prompted the board to fire him, I’d be surprised if these plans didn’t increase tensions.
OpenAI deserves credit for publishing blog posts like “Planning for AGI and beyond.” Given the stakes of what they’re trying to do, it’s important to look at how OpenAI publicly reasons about these issues (of course, corporate blogs should be taken with a grain of salt and supplemented with independent reporting). And when the actions of company leaders seem to contradict these documents, it’s worth calling that out.
If Sam Altman has changed his mind about compute overhangs, it’d be great to hear about it from him.
MaxRa @ 2024-02-16T13:58 (+32)
Some other relevant responses:
My current impression of OpenAI’s multiple contradictory perspectives here is that they are genuinely interested in safety - but only insofar as that’s compatible with scaling up AI as fast as possible. This is far from the worst way that an AI company could be. But it’s not reassuring either.
Zvi Mowshowitz writes:
Even scaling back the misunderstandings, this is what ambition looks like.
It is not what safety looks like. It is not what OpenAI’s non-profit mission looks like. It is not what it looks like to have concerns about a hardware overhang, and use that as a reason why one must build AGI soon before someone else does. The entire justification for OpenAI’s strategy is invalidated by this move.
[...]
The chip plan seems entirely inconsistent with both OpenAI’s claimed safety plans and theories, and with OpenAI’s non-profit mission. It looks like a very good way to make things riskier faster. You cannot both try to increase investment on hardware by orders of magnitude, and then say you need to push forward because of the risks of allowing there to be an overhang.
Or, well, you can, but we won’t believe you.
This is doubly true given where he plans to build the chips. The United States would be utterly insane to allow these new chip factories to get located in the UAE. At a minimum, we need to require ‘friend shoring’ here, and place any new capacity in safely friendly countries.
Also, frankly, this is not The Way in any sense and he has to know it:
Sam Altman: You can grind to help secure our collective future or you can write substacks about why we are going fail.
SiebeRozendal @ 2024-02-17T09:26 (+4)
Thanks, these are good
SiebeRozendal @ 2024-02-16T11:01 (+25)
What would be the proper response of the EA/AI safety community, given that Altman is increasingly diverging from good governance/showing his true colors? Should there be any strategic changes?
SiebeRozendal @ 2024-02-16T10:57 (+11)
So, what do we think Altman's mental state/true belief is? (Wish this could be a poll)
1. Values safety, but values personal status & power more
2. Values safety, but believes he needs to be in control of everything & has a messiah complex
3. Doesn't really care about safety, it was all empty talk
4. Something else
I'm also very curious what the internal debate on this is - if I were working on safety inside OpenAI, I'd be very upset.
Lukas_Gloor @ 2024-02-20T16:46 (+23)
1 and 2 seem very similar to me. I think it's something like that.
The way I envision him (obviously I don't know and might be wrong):
- Genuinely cares about safety and doing good.
- Also really likes the thought of having power and doing earth-shaking stuff with powerful AI.
- Looks at AI risk arguments with a lens of motivated cognition influenced by the bullet point above.
- Mostly thinks things will go well, but this is primarily from the instinctive feel of a high-energy CEO, a type predominantly personality-selected for optimistic attitudes. If he were to really sit down and try to introspect on his views on the question (and stare into the abyss), as a very smart person, he might find that he thinks things might well go poorly, but then thoughts come up like "ehh, if I can't make AI go well, others probably can't either, and it's worth the risk especially because things could be really cool for a while or so before it all ends."
- If he ever has thoughts like "Am I one of the bad guys here?," he'll shrug them off with "nah" rather than having the occasional existential crises and self-doubts around that sort of thing.
- He maybe has no stable circle of people to whom he defers on knowledge questions; that is, no one outside himself he trusts as much as himself. He might say he updates to person x or y and considers them smarter than himself/better forecasters, but in reality, he "respects" whoever is good news for him as long as they are good news for him. If he learns that smart people around him are suddenly confident that what he's doing is bad, he'll feel system-1 annoyed at them, which prompts him to find reasons to now disagree with them and no longer consider them included in his circle of epistemic deference. (Maybe this trait isn't black and white; there's at least some chance that he'd change course if 100% of people he at one point in time respects spoke up against his plan all at once.)
- Maybe doesn't have a lot of mental machinery built around treating it as a sacred mission to have true beliefs, so he might say things about avoiding hardware overhang as an argument for OpenAI's strategy and then later do something that seemingly contradicts his previous stance, because he was using arguments that felt like they'd fit, without really thinking hard about them or building a detailed forecasting model that he operates from for every such decision.
RedStateBlueState @ 2024-02-18T23:09 (+22)
Altman, like most people with power, doesn’t have a totally coherent vision for why him gaining power is beneficial for humanity, but can come up with some vague values or poorly-thought-out logic when pressed. He values safety, to some extent, but is skeptical of attempts to cut back on progress in the name of safety.
Linch @ 2024-02-21T20:54 (+13)
I'm not convinced that he has "true beliefs" in the sense you or I mean it, fwiw. A fairly likely hypothesis is that he just "believes" things that are instrumentally convenient for him.
Matthew_Barnett @ 2024-02-22T19:39 (+5)
There's an IMO fairly simple and plausible explanation for why Sam Altman would want to accelerate AI that doesn't require positing massive cognitive biases or dark motives. The explanation is simply: according to his moral views, accelerating AI is a good thing to do.
[ETA: also, presumably, Sam Altman thinks that some level of safety work is good. He just prefers a lower level of safety work/deceleration than a typical EA might recommend.]
It wouldn't be unusual for him to have such a moral view. If one's moral view puts substantial weight on the lives and preferences of currently existing humans, then plausible models of the tradeoff between safety and capabilities say that acceleration can easily be favored. This idea was illustrated by Nick Bostrom in 2003 and more recently by Chad Jones.
Arguably, it is effective altruists who are the unusual ones here. The standard EA theory employed to justify extreme levels of caution around AI is strong longtermism. But most people, probably including Sam Altman, are not strong longtermists.
SiebeRozendal @ 2024-02-23T21:21 (+10)
Arguably, it is effective altruists who are the unusual ones here. The standard EA theory employed to justify extreme levels of caution around AI is strong longtermism.
This framing suggests that people's x-risk estimates are really small (hence 'extreme levels of caution'), which isn't what people actually believe.
I think "if you believe a technology will make humanity go extinct with a probability of 1% or more, be very very cautious" would be endorsed by a large majority of the general population & intellectual 'elite'. It's not at all a fringe moral position.
Matthew_Barnett @ 2024-02-23T23:31 (+8)
I think "if you believe a technology will make humanity go extinct with a probability of 1% or more, be very very cautious" would be endorsed by a large majority of the general population & intellectual 'elite'.
I'm not sure we disagree. A lot seems to depend on what is meant by "very very cautious". If it means shutting down AI as a field, I'm pretty skeptical. If it means regulating AI, then I agree, but I also think Sam Altman advocates regulation too.
I agree the general population would probably endorse the statement "if a technology will make humanity go extinct with a probability of 1% or more, be very very cautious" if given to them in a survey of some kind, but I think this statement is vague, and somewhat misleading as a frame for how people would think about AI if they were given more facts about the situation.
Firstly, we're not merely talking about any technology here; we're talking about a technology that has the potential both to disempower humans and to make their lives dramatically better. Almost every technology has risks as well as benefits. Probably the most common method people use when deciding whether to adopt a technology themselves is to check whether the risks outweigh the benefits. Just looking at the risks alone gives a misleading picture.
The relevant statistic is the risk to benefit ratio, and here it's really not obvious that most people would endorse shutting down AI if they were aware of all the facts. Yes, the risks are high, but so are the benefits.
If elites were made aware of both the risks and the benefits from AI development, most of them seem likely to want to proceed cautiously, rather than not proceed at all, or pause AI for many years, as many EAs have suggested. To test this claim empirically, we can just look at what governments are already doing with regards to AI risk policy, after having been advised by experts; and as far as I can tell, all of the relevant governments are substantially interested in both innovation and safety regulation.
Secondly, there's a persistent and often large gap between what people say through their words (e.g. when answering surveys) and what they actually want as measured by their behavior. For example, plenty of polling has indicated that a large fraction of people are very cautious regarding GMOs, but in practice most people are willing to eat GM foods happily without much concern. People are often largely thoughtless when answering many types of abstract questions posed to them, especially about topics they have little knowledge about. And this makes sense, because their responses typically have almost no impact on anything that might immediately or directly impact them. Bryan Caplan has discussed these issues in surveys and voting systems before.
David Mathers @ 2024-02-23T21:33 (+5)
I think that whilst utilitarian but not longtermist views might well justify full-speed ahead, normal people are quite risk averse, and are not likely to react well to someone saying "let's take a 7% chance of extinction if it means we reach immortality slightly quicker and it benefits current people, rather than being a bit slower so that some people die and miss out". That's just a guess though. (Maybe Altman's probability is actually way lower, mine would be, but I don't think a probability more than an order of magnitude lower than that fits with the sort of stuff about X-risk he's said in the past.)
Matthew_Barnett @ 2024-02-24T04:37 (+6)
I think OpenAI doesn't actually advocate a "full-speed ahead approach" in a strong sense. A hypothetical version of OpenAI that advocated a full speed ahead approach would immediately gut its safety and preparedness teams, advocate subsidies for AI, and argue against any and all regulations that might impede their mission.
Now, of course, there might be political reasons why OpenAI doesn't come out and do this. They care about their image, and I'm not claiming we should take all their statements at face value. But another plausible theory is simply that OpenAI leaders care about both acceleration and safety. In fact, caring about both safety and acceleration seems quite rational from a purely selfish perspective.
I claim that such a stance wouldn't actually be much different than the allegedly "ordinary" view that I described previously: that acceleration, rather than pausing or shutting down AI, can be favored in many circumstances.
OpenAI might be less risk-averse than the general public, but in that case we're talking about a difference in degree, not a qualitative difference in motives.
Ozzie Gooen @ 2024-05-23T01:51 (+7)
Quick notes, a few months later:
1. The alignment team has since been dissolved.
2. On advocacy, I think it might well make more sense for them to effectively lobby via Microsoft. Microsoft owns 49% of OpenAI (at least, of the business part, and only up to some profit cap, whatever that means exactly). If I were Microsoft, I'd prefer to use my well-experienced lobbyists for this sort of thing, rather than have OpenAI (which I value mainly for its tech integration with Microsoft products) worry about it. I believe that Microsoft is lobbying heavily against AI regulation, though maybe not for many subsidies directly.
I am sympathetic to the view that OpenAI leaders think of themselves as caring about many aspects of safety, and also that they think their stances are reasonable. I'm just not very sure how many others, who are educated on this topic, would agree with them.
NickLaing @ 2024-02-23T05:48 (+4)
I agree that's possible, but I'm not sure I've seen his rhetoric put that view forward in a clear way.
Nick K. @ 2024-02-23T07:59 (+1)
You don't need to be an extreme longtermist to be sceptical about AI; it suffices to care about the next generation and not want extreme levels of change. I think looking too much into differing morals is the wrong lens here.
The most obvious explanation for how Altman and people more concerned about AI safety (not specifically EAs) differ seems to be in their estimates about how likely AI risk is vs other risks.
That being said, the point that it's disingenuous to ascribe cognitive bias to Altman for having whatever opinion he has is a fair one, and one shouldn't go too far with such ascriptions in view of general discourse norms. Still, given Altman's exceptional capability for unilateral action due to his position, it's reasonable to be at least concerned about it.
Jelle Donders @ 2024-02-19T10:34 (+1)
Hard to say, but his behavior (and the accounts from other people) seems most consistent with 1.
Prometheus @ 2024-02-14T00:57 (+4)
I imagine Sam's mental model is that the bigger the lead OpenAI has over others, the more control they can have at pivotal moments, and (in his mind) the safer things will be. Everyone else is quickly catching up in terms of capability, but if OpenAI has special chips their competitors don't have access to, then they have an edge. Obviously, this can't really be distinguished from Sam just trying to maximize his own ambitions, but it doesn't necessarily undercut safety goals either.
James Payor @ 2024-02-14T09:08 (+31)
Sam is not pitching special chips for OpenAI here, right?
I do not read safety goals into this project, which sounds more like it's "make there be many more fabs distributed around the world for more chips and decreased centralization". (Which, fwiw, erodes options for containing specialized chips.)
Prometheus @ 2024-02-15T22:59 (+1)
Wouldn't Sam selling large amounts of chips to OAI's direct competitors constitute a conflict of interest? It also doesn't seem like something he would want to do, since he seems very devoted to OAI's success, for better or worse. Why would he want to increase decentralization?