Why don't governments seem to mind that companies are explicitly trying to make AGIs?

By Ozzie Gooen @ 2021-12-23T07:08 (+82)

Epistemic Status: Quickly written, uncertain. I'm fairly sure there's very little public or government concern about AGI claims, but I'm sure there's a lot I'm missing. I'm not at all an expert on government or AI policy.

This was originally posted to Facebook here, where it had some discussion.  Many thanks to Rob Bensinger, Lady Jade Beacham, and others who engaged in the discussion there.


Multiple tech companies now are openly claiming to be working on developing AGI (Artificial General Intelligence).

As a lot of work on AGI argues (see Superintelligence, for example), if any firm does establish sufficient dominance in AGI, it might gain some really powerful capabilities.

And yet, from what I can tell, almost no one seems to really mind? Governments, in particular, seem really chill with it. Companies working on AGI get treated similarly to other exciting AI companies.

If some company were to make a claim like,

"We're building advanced capabilities that can hack and modify any computer on the planet"

or,

"We're building a private nuclear arsenal",

I'd expect that to draw attention.

But with AGI, crickets.

I assume that governments dismiss corporate claims of AGI development as overconfident marketing-speak or something.

You might think,

"But concerns about AGI are really remote and niche. State actors wouldn't have come across them."

That argument probably applied 10 years ago. But at this point, the conversation has spread a whole lot. Superintelligence was released in 2014 and was an NYT bestseller. There are hundreds of books out now about concerns over increasing AI capabilities. Elon Musk and Bill Gates have both talked about it publicly. At this point, this should be one of the easiest social issues for someone technically savvy to find.

The risks and dangers (mainly of a large power grab, though alignment failures matter too) are really straightforward and have been public for a long time.

Responses

In the comments on my post, a few points were made, some of which I was roughly expecting. They include:

  1. Companies saying they are making AGI are ridiculously overconfident
  2. Governments are dramatically incompetent
  3. AGI will roll out gradually and not give one company a dominant advantage

My quick responses would be:

  1. I think many longtermist effective altruists believe these companies might have a legitimate chance of developing AGI in the next 10 to 50 years, in large part because of a significant body of research (see everything on AI and forecasting on LessWrong and the EA Forum). At the same time, my impression is that most of the rest of the world is indeed incredibly skeptical of a serious AGI transformation.
  2. I think this is true to an extent. My impression is that government inattention can change dramatically and quickly, particularly in the United States, so if this is the crux, it might be a temporary situation.
  3. I think there's substantial uncertainty here. But I would be very hesitant to put over a 70% chance that: (a) one, or a few, of these companies will gain a serious advantage, and (b) the general-purpose capabilities of these companies will come with significant global power capabilities. Because AGI is general-purpose, it seems difficult to be sure that a company could make it without it becoming an international security issue of some sort.

Updates

This post was also posted to Reddit and Hacker News, where it received a total of around 100 more comments. The Hacker News crowd mostly offered Response #1 ("AGI is a pipe dream that we don't need to worry about").


Jackson Wagner @ 2021-12-23T07:33 (+65)

This isn't intended to be a complete response to your post, but for comparison, here are some other things that ambitious tech companies have serious plans to accomplish: Mars colonization, full cryptoization of the economy, abundant power from fusion, and significantly mitigated biorisk.

Obviously the consequences of even Mars colonization, full cryptoization of the economy, abundant power from fusion, and significantly mitigated biorisk pale in comparison to the transformative power of AGI. But from a government's perspective they might all seem to be in the same reference class. Yet it's surprisingly close to "crickets" on all counts. (I admit that AGI might be especially neglected even here, though -- SpaceX at least gets normal NASA contracts, etc.)

Taymon @ 2021-12-24T23:01 (+15)

I think SpaceX's regular non-Mars-colonization activities are in fact taken seriously by relevant governments, and the Mars colonization stuff seems like it probably won't happen and also wouldn't be that big a deal if it did (in terms of, like, national security; it would definitely affect who gets into the history books). So it doesn't seem to me like governments are necessarily acting irrationally there.

Same with cryptocurrency; its implications for investor protection, tax evasion, capital controls evasion, and facilitating illicit transactions are indeed taken seriously, and while governments would obviously care quite a lot if it displaced fiat currency, I just don't think there's any way that's happening. If it does, then this is probably because fiat currency itself somehow stopped working and something was needed to fill the void; if governments think this scenario is at all plausible, then presumably their attention would be on the first part where fiat currency fails, since that's much more within their control and cryptocurrency isn't really a relevant input.

The scientific and regulatory culture around fusion power seems to be shaped, as you suggest, by the long history of failures in that domain; judging by similar situations in other fields, I wouldn't be surprised if no one wanted to admit to putting any credence in it, so that they wouldn't look stupid in case it fails again.

The state of pandemic preparedness does indeed seem like just straight-up government incompetence.

Ozzie Gooen @ 2021-12-23T07:34 (+6)

That's a good point, and I like the examples, thanks!

HaydnBelfield @ 2021-12-24T13:22 (+37)

Governments are concerned with and interested in near-term AI. See EU, US, UK and Chinese regulation and investment. They're maybe about as interested in it as in clean tech and satellites, and more interested than in lab-grown meat.

Transformative AI is several decades away, governments aren't good at planning for possibilities over long time periods. If/when we get closer to transformative capabilities, governments will pay more attention. See: nuclear energy + weapons, bioweapons + biotech, cryptography, cyberweapons, etc etc. 

Jade Leung's thesis is useful on this. So too are Jess Whittlestone's conceptual clarifications of the near/long-term distinction (with Carina Prunkl) and of transformative AI (with Ross Gruetzemacher).

Greg_Colbourn @ 2021-12-24T14:29 (+18)

What makes you confident that "Transformative AI is several decades away"? Holden estimates "more than a 10% chance we'll see transformative AI within 15 years (by 2036)", based on a variety of reports taking different approaches (that are IMO conservative). Given the magnitude of what is meant by "transformative", governments (and people in general) should really be quite a bit more concerned. As the analogy goes - if you were told that there was a >10% chance of aliens landing on Earth in the next 15 years, then you should really be doing all you can to prepare, as soon as possible!

Davidmanheim @ 2021-12-27T12:24 (+11)

Governments have trouble responding to things more than a few years away, and even then, only when it's effectively certain. If they had reliable data that aliens were showing up in 10 years, I'd expect them to respond by fighting about it and commissioning studies.

Greg_Colbourn @ 2021-12-27T16:41 (+2)

Yep. Watched Don't Look Up last night; can imagine that.

Davidmanheim @ 2021-12-28T08:22 (+5)

Fictional evidence! And I haven't seen the movie, but expect it to be far too under-nuanced about how government works.

HaydnBelfield @ 2021-12-28T22:22 (+2)

Median estimate is still decades away.  I personally completely agree people should be more concerned.

Greg_Colbourn @ 2021-12-29T11:58 (+4)

Median is ~3-4 decades away. I'd call that "a few", rather than "several" (sorry to nitpick, but I think this is important: several implies "no need to worry about it, probably not going to happen in my lifetime", whereas a few implies (for the majority of people) "this is within my lifetime; I should sit up and pay attention.")

Greg_Colbourn @ 2021-12-29T12:05 (+4)

The way I sometimes phrase it to people is that I now think it's more urgent than Climate Change (and people understand that Climate Change is getting quite urgent, and is something that will have a big impact within their lifetimes).

Ozzie Gooen @ 2021-12-25T01:05 (+15)

Thanks! 
(For those casually browsing, I just want to flag that Haydn works directly in this field, and has much more experience and knowledge in it than I do. I wish it were easier to point this out on the EA Forum.)

Mjreard @ 2021-12-24T11:06 (+28)

This interview with Obama, which Allan Dafoe once pointed to, is pretty instructive on these questions. On reflection, reasonable government actors see the case; it's just really hard to prioritize given short-run incentives ("maybe in 20 years the next generation will see things coming and deal with it").

My basic model is that government actors are all tied up by the stopping problem. If you ever want to do something good, you need to make friends and win the next election. The voters and potential allies are even more short-termist than you. Availability bias explains why people would care about private nuclear weapons. Superintelligence codes as dinner party/dorm room chat. It will sell books, but it's not action-relevant.

Charles_Guthmann @ 2021-12-23T14:52 (+19)

"The average age of Members of the House at the beginning of the 117th Congress was 58.4 years; of Senators, 64.3 years."

Ozzie Gooen @ 2021-12-23T22:35 (+10)

This is a good point, but I'd flag that there are many departments of the government with different levels of autonomy. It's easy for me to imagine some special cluster in the military or intelligence agencies spending a lot of time around AGI events, but so far I don't have evidence of anything like that.

Charles_Guthmann @ 2021-12-24T04:04 (+13)

Fair point. First, let me add another piece of info about Congress: "The dominant professions of Members are public service/politics, business, and law."

Now on to your point. 

 

  • How old are the leaders of the military? How many of them know what Python is? What was their major in college? Now ask yourself the same thing about the CIA/NSA/etc. This isn't a rhetorical question; I assume each department will differ, though there may be a bit of implicit smugness in asking.
  • Conditional on such a cluster existing: how likely do you think it is that it would be declassified? I don't find it that unlikely that the NSA or CIA could be running a program and not speaking about it, and it seems possible to figure this out simply by accounting for where every CS/AI graduate in the US works. I feel less strongly that the military would hide such a project. FWIW, my epistemic confidence is very low for this entire claim; I am not someone who has obsessed over governmental classification and things like that.
  • How many CS PhDs are there in the US government in total? How many master's? How many bachelor's?

I think there is also more to say about the variety of reasons people feel more comfortable giving their input on economic, social, and foreign policy issues (even if they have no business doing so), which I think could leak into leaders just naturally trending towards dealing with those issues. But that's a much more delicate argument that I don't feel comfortable fleshing out right now.

 

I think aogara's point above is reasonable and mostly true, but I don't think it goes as far as explaining the discrepancy. This is incredibly skewed because of who I associate with (not all of my friends are EAs, though), but anecdotally I think AGI is starting to gain some recognition as a very important issue among people my age (early 20s), specifically those in STEM fields. Not a lot, but certainly more than it is talked about in the mainstream. Let's be real, though: none of my friends will ever be in the military or run for office, nor do I believe they will work for the intelligence agencies. My point is, in addition to age, we have a serious problem with under-representation of STEM in high-up positions and over-representation of lawyers. It would be interesting to test the leaders of various government departments on their level of computer science competency/comprehension.

Peter Wildeford @ 2021-12-24T18:57 (+17)

What do you think it would look like if the US government was minding companies explicitly making AGIs?

Ozzie Gooen @ 2021-12-25T00:32 (+12)

I feel like there's a whole lot I could imagine seeing.
Different parts of the government mind a whole lot of things. Here in Berkeley, there are regulations you need to abide by for all sorts of things (often they go too far, in my opinion). I also know of people who got reported to the CIA or FBI for a lot of very minor hacking/IT issues. 

Some quick things:
- Politicians talking about AGI publicly.
- Members of the CIA/NSA attending meetups/conferences around AGI and asking a lot of questions. 
- Government security or military professionals engaging with both longtermists concerned about AGI, and with AI companies working on AGI.
- Early legislation that really calls out AGI or similar general-purpose AI issues.
- Reports from government agencies that go into detail on potential scenarios.
- The hiring of promising AGI people (both technical and policy) into secretive or public government organizations.

There are clearly others around our community who have more expertise here (I'm really an amateur on this topic), so other suggestions are appreciated.

aogara @ 2021-12-23T19:02 (+14)

One of EA’s most important and unusual beliefs is that superintelligent AGI is imminently possible. While ideally effective altruism is just an ethical framework that can be paired with any set of empirical beliefs, it is a very important fact that people in this community hold extremely unusual beliefs about the empirical question of AI progress.

Nobody I have ever met outside of the EA sphere seriously believes that superintelligent computer systems could take over the world within decades. I study computer science in school, I work in the field of data science, and everybody I know anticipates progress-as-usual for the foreseeable future. GPT-3 is a cool NLP system, but it doesn't spell world takeover anytime soon. The stock market would arguably agree, with DeepMind receiving a valuation of only $400M in 2014, though more recent progress within Google and Facebook has not received public financial valuations. The AI Impacts investigation into the history of technological progress revealed just how rare it is for a single innovation to bring decades' worth of progress on an important metric. Much more likely, in my opinion, is a gradual and progressive acceleration of progress in AI and ML systems: the 21st century sees a booming Silicon Valley, but no clear "takeoff point" of discontinuous progress, and the supposed impacts of AGI (such as automating most of the labor force or more than doubling the global GDP growth rate) may not emerge for a century or centuries.

To be clear, I agree that unprecedented AI progress is possible and important. There are some strong object-level arguments, particularly Ajeya's OpenPhil analysis comparing the size of the human brain with the size of our biggest computers. These arguments have helped convince influential experts to write books, conduct research, and bring attention to the problem of AGI safety. Perhaps the more persuasive argument is that no matter how slim the chances are, they cannot be disproven, and the impact of such a transformation would be so great that a group of people should be seriously thinking about it. But it shouldn't be a surprise when other groups do not take the superintelligence revolution seriously, nor should it be a surprise if the revolution does not come this century.

Epistemic Status: Possibly overstated.

EDIT: Here’s a better summary of my views. https://forum.effectivealtruism.org/posts/7ZZpWPq5iqkLMmt25/aogara-s-shortform?commentId=xZFEv84LGqbRFwt4G

Ozzie Gooen @ 2021-12-23T19:10 (+18)

Nobody I have ever met outside of the EA sphere seriously believes that superintelligent computer systems could take over the world within decades.

Yep, this roughly matches my impressions. I think very, very few people really believe that superintelligent systems will be that influential.

One notable exception, of course, would be the AGI companies themselves. I'm fairly confident that people in these groups really do think that they have a good shot at making AGI, and that it will be transformative.

This would be an example of Response 1 that I listed. 

As to the question, "Since everyone besides AGI companies and select longtermists doesn't seem to think this is an issue, maybe it isn't an issue?", I'm specifically not that interested in discussing it here. That question is quite different and gets discussed in depth elsewhere.

But I think the discrepancy is interesting to understand, to better understand why society at large is doing what it's doing.

aogara @ 2021-12-23T21:47 (+6)

Agreed, and I don't have any specific explanation of why government is unconcerned with dramatic progress in AI. As usual, government seems just a bit slow to catch up to the cutting edge of technological development and academic thought. Charles_Guthmann's point on the ages of people in government seems relevant. I appreciate your response, though; I wasn't sure if others had the same perceptions.

Evan R. Murphy @ 2022-04-08T20:10 (+2)

I think very, very few people really believe that superintelligent systems will be that influential.

 

A lot of prominent scientists, technologists and intellectuals outside of EA have warned about advanced artificial intelligence too: Stephen Hawking, Elon Musk, Bill Gates, Sam Harris, everyone on this open letter back in 2015, etc.

I agree that the number of people really concerned about this is strikingly small given the emphasis longtermist EAs put on it. But I think these many counter-examples show that it's not just EAs and the AGI labs being overconfident or coming out of left field.

aogara @ 2021-12-30T15:24 (+6)

Counterpoint on market sentiment: Anthropic raised a $124M Series A with few staff and no public-facing product. The money comes from a handful of individuals, including Jaan Tallinn and Eric Schmidt, which makes unusual beliefs more likely to govern the bid (think unilateralist's curse). But this seems like it has to be a financial bet on the possibility of incredible AI progress.

Separate question: Anthropic seems to be composed largely of people from OpenAI, another well-funded and socially-minded AGI company. Why did they leave OpenAI?

Ozzie Gooen @ 2021-12-31T21:28 (+6)

I think market sentiment is a bit complicated. Very few investors are talking about AGI, but organizations like OpenAI still seem to think that talking about AGI is good marketing for them (for talent, and I'm sure for money, later on).  

I think most of the Anthropic investment was from people close to effective altruism: Jaan Tallinn, Dustin Moskovitz, and the Center for Emerging Risk Research, for example.
https://www.anthropic.com/news/announcement

On why those people left OpenAI, I'm not at all an expert. I think it's common for different teams to have different ways of seeing things and to want independence. In this case, I think there weren't all that many reasons to stay part of the same org (it's easy enough to get funding independently, as is evidenced by the Anthropic funding). I guess if Anthropic had stayed close to OpenAI, it could have been part of scaling GPT-3 and similar projects, but I'm not sure how valuable that was to the rest of the team (especially in comparison to having more freedom to do things their own way). I'd note that right now, there seem to be several more technical-alignment-focused people at Anthropic.

Davidmanheim @ 2021-12-27T12:22 (+9)

You're modeling government as a single coherent actor - and I think that's the most critical mistake. That's not to say they are incompetent, just that governments aren't actually looking at what companies do to decide how to respond. (And many would say this is a feature, not a bug!)

Ozzie Gooen @ 2021-12-27T18:03 (+3)

Sorry if my post made it seem that way, but I don't feel like I've been thinking of it that way. In fact, it's sort of worse if it's not a single actor; many different departments could have done something about this, but none of them seemed to take public action.

I'm not sure how to understand your second sentence exactly. It seems pretty different from your first sentence, from what I can tell?

Davidmanheim @ 2021-12-28T08:32 (+24)

A multi-actor system is constrained in ways that a group of single actors is not. Individual agencies can't do their own thing publicly, and you can't see what they are doing privately.

The agencies that do pay attention can't respond publicly - and the lack of public monitoring and response by government agencies that can slap new regulations on individual companies or individuals is what separates a liberal state from a dictatorship. If the US DOD notices something, it really, really isn't allowed to respond publicly, especially in ways that would be seen as trying to interfere with business or domestic policy. If the NSA or the FBI notices something, they can only enforce extant laws, and are limited in their legal authority. And agencies that can respond, like the FTC, are in fact already working on drafting regulations for relevant applications of AI. (And yes, Congress could act to respond, but it's really fundamentally broken.)

Ozzie Gooen @ 2021-12-28T08:36 (+4)

Ah, that's really good to know… and kind of depressing. Thanks so much.

Derek Shiller @ 2021-12-24T16:32 (+9)

Are you sure that they don't mind? I would be surprised if intelligence agencies weren't keeping some track of the technical capabilities of foreign entities, and I'd be unsurprised if they were keeping track of domestic entities as well. If they thought we were six months away from transformative AGI, they could nationalize it or shut it down.

Ozzie Gooen @ 2021-12-24T23:53 (+3)

Are you sure that they don't mind?

I don't have any inside information on the government; it's of course possible there are secretive programs somewhere.

"If they thought we were six months away from transformative AGI, they could nationalize it or shut it down."
Agreed, in theory. In practice, many different parts of the government think differently. It seems very likely that one part will think "there might be a 5% chance we're six months away from transformative AGI", but the parts that could take action just wouldn't.
 

fjcl @ 2021-12-27T20:58 (+7)

AGI concerns are outside the Overton window and are often considered actively harmful. The narrative that "the whole debate about existential risks AI poses to humanity in the far-off future is a huge distraction" (as illustrated in this post: https://www.skynettoday.com/editorials/dont-worry-agi/) is quite widespread in the AI policy community.

In this situation, actors who raise AGI concerns thus additionally risk being portrayed as working against the public interest.

Ozzie Gooen @ 2021-12-27T22:36 (+3)

You seem to have a small formatting mistake in the link, this should work though.
https://www.skynettoday.com/editorials/dont-worry-agi/

fjcl @ 2021-12-27T22:46 (+9)

Thanks! Here just another recent example:

https://mobile.twitter.com/fchollet/status/1473656408713441285

Charles_Guthmann @ 2021-12-24T04:09 (+7)

https://www.ai.gov/ What do you make of this? 

Charles He @ 2021-12-24T04:36 (+8)

My guess is that this site focuses on the prosaic, mainstream sense of AI harms, e.g. automation, privacy, competition, what Acemoglu means here.

 

By the way, of the content on the webpage, "Advancing Trustworthy AI" seems like it could be the most relevant to AGI/ASI risk. But the link is broken, which is really on the nose!

 

Charles_Guthmann @ 2021-12-24T12:28 (+7)

The link for the trustworthy AI page wasn't broken for me? https://www.ai.gov/strategic-pillars/advancing-trustworthy-ai/#Use-of-AI-by-the-Federal-Government

But unsurprisingly, it mostly seems like they are talking about bigoted algorithms and not the singularity.

However it did link this:

https://www.nscai.gov/

Find their abridged 2021 report here:

https://reports.nscai.gov/final-report/table-of-contents/ 

https://reports.nscai.gov/final-report/chapter-7/ 

Personally, this looked more promising than anything else I had seen. There was a section titled "Adversarial AI", which I thought might be about AGI, but upon further reading, it wasn't. So this also appears to be in the vein of what Ozzie is saying. However, it seems they have events semi-frequently; I think someone from EA should really try to go if they are allowed. The second link is the closest chapter in the report to AGI stuff, if anyone wants to take a look - again, though, it's not that impressive.

And Also I found this: https://www.dod-coe4ai-ml.org/leadership-members

But I can't really tell if this is the DOD's org or Howard University's; it seems like they only hire Howard professors and students, so probably the latter.

Closest paper I could find from them to anything AGI related: https://www.techrxiv.org/articles/preprint/Recent_Advances_in_Trustworthy_Explainable_Artificial_Intelligence_Status_Challenges_and_Perspectives/17054396/1

Ozzie Gooen @ 2021-12-24T05:49 (+4)

Yep; the US government is definitely taking some actions to advance AI development in general.

Its work to promote AI safety, and particularly to regulate or at least discuss what to do about AGI, seems to be much more lacking.

Evan R. Murphy @ 2021-12-30T19:59 (+3)

There are two governance-related proposals in the second EA megaprojects thread. One is to create a really large EA-oriented think tank. The other is essentially EA lobbying, i.e. to put major funding behind political parties and candidates who agree to take EA concerns seriously.

Making one of these megaprojects a reality could get officials in governments to take AGI more seriously and/or get it more into the mainstream political discourse.

Evan R. Murphy @ 2021-12-30T19:28 (+3)

Andrew Yang made transformative AI a fairly central part of his 2020 presidential campaign. To the OP's point though, I don't recall him raising any alarms about the existential risks of AGI.

Guy Raveh @ 2021-12-24T23:51 (+3)

One possibility is that either the plausibility of AGI being developed soon is smaller than we think, or the danger it poses is smaller than we think. This is far from the only explanation, though.

Ozzie Gooen @ 2021-12-25T00:20 (+4)

Yea; I think this fits into response 1, "Companies saying they are making AGI are ridiculously overconfident". 

I think it's pretty clear that almost everyone outside of EAs + AGI developers is very skeptical of AGI. Very arguably, they're the ones who are correct. (Personally, I'm in between; I just mean to point out the discrepancy.)

hrosspet @ 2021-12-30T18:03 (+2)

I think governments are not aware of the stop-button problem, and they think that in case of emergency they can just shut down the company or the servers running the AGI by force. That's what happened in the past with digital currencies (which Jackson Wagner mentions here as a plausible member of the same reference class as AGI for governments) before Bitcoin - they either failed on their own or, if successful, were shut down by governments (https://en.wikipedia.org/wiki/Digital_currency#History).