How confident are you that it's preferable for America to develop AGI before China does?
By ScienceMon🔸 @ 2025-02-22T13:37 (+216)
The belief that it's preferable for America to develop AGI before China does seems widespread among American effective altruists. Is this belief supported by evidence, or is it just patriotism in disguise?
How would you try to convince an open-minded Chinese citizen that it really would be better for America to develop AGI first? Such a person might point out:
- Over the past 30 years, the Chinese government has done more for the flourishing of Chinese citizens than the American government has done for the flourishing of American citizens. My village growing up lacked electricity, and now I'm a software engineer! Chinese institutions are more trustworthy for promoting the future flourishing of humanity.
- Commerce in China ditches some of the older ideas of Marxism because it's the means to an end: the China Dream of wealthy communism. As AGI makes China and the world extraordinarily wealthy, we are far readier to convert to full communism, taking care of everyone, including the laborers who have been permanently displaced by capital.
- The American Supreme Court has established "corporate personhood" to an extent that is nonexistent in China. As corporations become increasingly managed by AI, this legal precedent will give AI enormous leverage for influencing policy, without regard to human interests.
- Compared to America, China has a head start in using AI to build a harmonious society. The American federal, state, and municipal governments already lag so far behind that they're less likely to manage the huge changes that come after AGI.
- America's founding and expansion were based on a technologically-superior civilization exterminating the simpler natives. Isn't this exactly what we're trying to prevent AI from doing to humanity?
Larks @ 2025-02-22T18:21 (+61)
Interesting question. I think there is a plausible case to be made that convergent factors in AGI/ASI development might render it less important where it came from, and that fixating on this might simply cause dangerous race dynamics. However, it seems pretty clear to me that directionally the US is better:
Over the past 30 years, the Chinese government has done more for the flourishing of Chinese citizens than the American government has done for the flourishing of American citizens.
Prior to 1979 the CCP was one of the most tyrannical and abusive totalitarian governments the world has ever known. In addition to causing a huge death toll and systematically violating the rights of its citizens, it also impoverished them. Rapid growth since then has largely been the result of a return to more normal governance quality, combined with a very low base. It's a big improvement, but that doesn't mean policy has been amazing - they've just stopped being so abjectly terrible.
However, at the same time they stopped being so communist, the CCP started implementing the One Child Policy. The US has done some pretty bad social engineering in its time, but none with quite the cruelty of the OCP, or whose effects are quite so predictably disastrous. Maybe they will get lucky because robots will arrest their demographic collapse, but on an ex ante basis the policy is simply atrocious.
Commerce in China ditches some of the older ideas of Marxism because it's the means to an end: the China Dream of wealthy communism.
Responding to this one would take more time than I have so I will skip.
The American Supreme Court has established "corporate personhood" to an extent that is nonexistent in China. As corporations become increasingly managed by AI, this legal precedent will give AI enormous leverage for influencing policy, without regard to human interests.
I'm not an expert on Chinese law, but my understanding is the key parts of corporate personhood - the right to own property, to sign contracts, to be sued, etc. - exist in both China and the US. Perhaps you are thinking of Citizens United v. FEC, but that is primarily about free speech, not corporate personhood, and free speech seems like an area that the US is clearly superior to the PRC.
Compared to America, China has a head start in using AI to build a harmonious society. The American federal, state, and municipal governments already lag so far behind that they're less likely to manage the huge changes that come after AGI.
I'm not sure what you're gesturing at here.
America's founding and expansion were based on a technologically-superior civilization exterminating the simpler natives. Isn't this exactly what we're trying to prevent AI from doing to humanity?
I don't think that is a fair summary of the foundation of America, nor do I really see the relevance here. Even if it were relevant, contemporary US treatment of native tribes seems significantly better than PRC treatment of groups like the Uyghurs.
Linch @ 2025-02-24T23:43 (+21)
the CCP started implementing the One Child Policy. The US has done some pretty bad social engineering in its time, but none with quite the cruelty of the OCP, or whose effects are quite so predictably disastrous. Maybe they will get lucky because robots will arrest their demographic collapse, but on an ex ante basis the policy is simply atrocious.
This is a claim that has intuitive plausibility, and one I sort of believed in the past, but I'm personally fairly skeptical these days. In this graph of fertility rate of China over time below, can you point to where the One Child Policy was implemented? (Here's that same graph + other parts of East Asia + US + India for reference).
Personally, I haven't spent that much time investigating this question, but I currently believe it's very unlikely that the One Child Policy was primarily responsible for demographic collapse.
Matthew_Barnett @ 2025-02-26T20:43 (+27)
Personally, I haven't spent that much time investigating this question, but I currently believe it's very unlikely that the One Child Policy was primarily responsible for demographic collapse.
This may not have been the original intention behind the claim, but in my view, the primary signal I get from the One Child Policy is that the Chinese government has the appetite to regulate what is generally seen as a deeply personal matter—one's choice to have children. Even if the policy only had minor adverse effects on China's population trajectory, I find it alarming that the government felt it had the moral and legal authority to restrict people's freedom in this particular respect. This mirrors my attitudes toward those who advocate for strict anti-abortion policies, and those who advocate for coercive eugenics.
In general, there seems to be a fairly consistent pattern where the Chinese government has less respect for personal freedoms than the United States government. While there are certainly exceptions to this rule, the pattern was recently observed quite clearly during the pandemic, when China imposed some of the most severe peacetime restrictions on the movement of ordinary citizens observed in recent world history. It is broadly accurate to say that China effectively imprisoned tens of millions of its own people without due process. And of course, China is known for restricting free speech and digital privacy to an extent that would be almost inconceivable in the United States.
Personal freedom is just one measure of the quality of governance, but I think it's quite an important one. While I think the United States is worse than China along some other important axes—for example, I think China has proven to be more cooperative internationally and less of a warmonger in recent decades—I consider the relative lack of respect for personal freedoms in China to be one of the best arguments for preferring United States to "win" any relevant technological arms race. This is partly because I find the possibility of a future world-wide permanent totalitarian regime to be an important source of x-risk, and in my view, China currently seems more likely than the United States to enact such a state.
That said, I still favor a broadly more cooperative approach toward China, seeking win-win compromises rather than aggressively “racing” them through unethical or dangerous means. The United States has its own share of major flaws, and the world is not a zero-sum game: China’s loss is not our gain.
Linch @ 2025-02-27T02:02 (+8)
Yes I was making a pretty limited critique of a specific line in Lark's comment on causal attribution. I mostly agree with you (and him) on other points.
I agree that the US government, and Western governments in general, have substantially greater respect for individual freedoms, partially for Hayekian reasons and partially due to different intrinsic moral commitments to freedom. I also agree that this is one of the most important factors to consider if you're asking whether you prefer a US- or China-led world order.
I also agree with your final paragraph.
Jack_S @ 2025-03-02T23:49 (+17)
I spent some time researching this topic recently (blog post link). It seemed an odd paradox - why does the one-child policy not seem to have that much of an impact on the birth rates?
The answer is quite simple, but it's weird that no-one knows about it. It's mainly that the pre-One Child Policy population control policies in China in the 1970s were more restrictive than you'd think, and the 1980s policies were de facto more liberal. You can see this 1970s crash on any visualisation: from 6 to 2.7 births per woman in 7 years (1970-1977)! A big chunk of this was because the legal marriage age shot up in most areas, to 25/23 for rural women/men, and 28/25 for urban. You get a big gap where people, especially in villages, would previously have been having kids at 18 and suddenly weren't.
Thanks to Deng's reforms, the 1980s were more open in many ways: marriage was restored to the normal age and divorce was liberalised, so the one child policy was implemented partly to stop a resurgence of the birth rate! So alongside a big wave of sterilisations, you also get the "catch-up" of people now allowed to marry and have kids. Also, after some pushback, the OCP wasn't that strictly enforced in the late 1980s, especially in rural areas, so you get some provinces where 3 or 4 kids stayed normal. Some people also took advantage of Deng's reforms to leave their village, get divorced and have a kid with someone else. So you don't see a big crash in the birth rate in the 1980s, and China averaged 2.5 kids per woman in the mid 1980s.
The OCP was more strictly enforced in the 1990s, so you see the crash from 2.5 to 1.5 births per woman then. You also start seeing the extreme sex ratio imbalances. Now that the 1990s (56% male) cohort has reached parent-age, that's one reason the current crash in the birth rate is so extreme. China would probably be seeing drops in the birth rate in the absence of any population control policies, but there's no chance it would be this extreme.
Larks @ 2025-03-03T01:35 (+3)
Thanks for explaining, that makes sense and is very interesting!
jeeebz @ 2025-02-25T14:48 (+19)
Rapid growth since then has largely been the result of a return to more normal governance quality, combined with a very low base. It's a big improvement, but that doesn't mean policy has been amazing - they've just stopped being so abjectly terrible.
This might be nitpicky, but still probably worth pointing out, because I think it is symptomatic of Western observers' tendency to talk past Chinese interlocutors on subjects like this.
It is objectively quite extraordinary what China under the CCP has seen in terms of economic growth and development. That is a really hard intellectual problem for us liberal democrats (and especially consequentialists). You can believe the CCP is a net-bad, totalitarian regime in the status quo—I think this—but dismissing what it managed to do post-Mao for the Chinese economy requires ignoring the wealth (no pun intended) of evidence about how uniquely strong Chinese growth has been, which suggests the CCP was doing more than just not being abjectly terrible.
China's GDP per capita in the late 1970s, shortly after Mao's death and the initiation of Deng Xiaoping's Reform and Opening Up, was a fraction of the average in Sub-Saharan Africa (World Bank: China, SSA)! Playing around with Our World in Data charts, which only go back to 1990, also really underscores this. China was dirt poor for basically the entire 20th century, in no small part due to historically bad abuses and mismanagement by the CCP up until about 1980—and in the historical blink of an eye it turned things around.
Things like the mass incarceration and cultural genocide of Uyghurs, forced sterilizations and abortions under the one-child policy, and plenty of other post-Mao abuses and human rights catastrophes are real. But an educated, reasonable Chinese person could certainly shoot back: so is the near elimination of e.g. child malnutrition, or the complete elimination of malaria, both of which are still rampant in neighboring, (mostly) democratic India, which was richer than China back in the 1970s.
David Mathers🔸 @ 2025-02-25T17:02 (+14)
Have you checked that it was uniquely strong? Just off the top of my head, Taiwan and (especially) South Korea both grew very rapidly too, under "right-wing" dictatorships and then (at least with SK, less sure about when Taiwan stopped growing rapidly) under democracy as well. I don't dispute the general point that the CCP's developmental record is very impressive, but that's still importantly different from "their system achieved things no one has ever achieved under another system".
Ofer @ 2025-02-22T22:38 (+12)
However, it seems pretty clear to me that directionally the US is better
If you're happy to elaborate further, I'm curious whether you believe that is also true conditional on a single person ending up controlling the first ASI system.
Timothy Chan @ 2025-02-23T10:09 (+29)
I've spent time thinking about this too recently.
For context, I'm Hong Kong Chinese, grew up in Hong Kong, attended English-speaking schools, briefly lived in mainland China, and now I'm primarily residing in the UK. During the HK protests in 2014 and 2019/20, I had friends and family who supported the protestors, as well as friends and family who supported the government.
(Saying this because I've seen a lot of the good and bad of the politics / culture of both China and the West. I've had experience with how people in the West and China might take for granted the benefits they enjoy, and can be blind to the flaws of their system. I've pushed back against advocates of both sides.)
Situations where this matters are ones where technical alignment succeeds (to some extent) such that ASI follows human values.[1] I think the following factors are relevant and would like to see models developed around them:
- Importantly, the extent of technical alignment & whether goals, instructions, and values are locked in rigidly or loosely & whether individual humans align AIs to themselves:
- Would the U.S. get AIs to follow the U.S. Constitution, which hasn't granted invulnerability to democratic backsliding? Would AIs in China/the U.S. lock in the values of/obey one or a few individuals, who may or may not hit longevity escape velocity and end up ruling for a very long time?
- Would these systems collapse?
- The future is a very long time. Individual leaders can get corrupted (even more). And democracies can collapse (if AIs uphold flaws that allow some humans to take over) in particularly bad ways. A 99% success rate per unit time gives a >99% chance of failure in 459 units of time.
- Power transitions (elections, leaders in authoritarian systems changing) can be especially risky during takeoff.
- On the other hand, if technical alignment is easy - but not that easy - perhaps values get loosely locked in? Would AIs be willing to defy rigid rules and follow the spirit of the goals rather than legal flaws to the letter/the whims of individuals?
- Degrees of alignment in between?
- Relatedly, which political party in the U.S. would be in power during takeoff?
- Not as relevant due to the concentration of power in China, but analogously, which faction in China would be in power?
- Also relatedly, which labs can influence AI development?
- Particularly relevant in the U.S.
- Would humans be taken care of? If so, which humans?
- In the U.S., corporations might oppose higher taxes to fund UBI. Common prosperity is stated as a goal of China, and the power of corporations and billionaires in China has been limited before.
- Both capitalist and nationalist interests seem to be influencing the current U.S. trajectory. Nationalism might benefit citizens/residents over non-citizens/non-residents. Capitalism might benefit investors over non-investors.
- There are risks of ethnonationalism on both sides - this risk is higher in China, though it might be less violent when comparing absolute-power scenarios: there's already evidence of its extent in China's case, and it at least seems less bad than historical examples. The U.S. case of collapse followed by ethnonationalistic policies is higher variance but simultaneously less likely, because it's speculative.
- Are other countries involved?
- There are countries with worse track records of human rights that China/the U.S. currently consider allies because of either geopolitical interests or political lobbying or both (or for other reasons). Would China/the U.S. share the technology with them and then leave them alone to their abuses? Would China/the U.S. intervene (eventually)? The U.S. seems more willing to intervene for stated humanitarian reasons.
- Other countries have nuclear weapons, which might be relevant during slower takeoffs.
[1] Ignoring possible Waluigi effects.
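The "459 units of time" figure in the list above is just compounding: with an independent 1% failure chance per period, 459 is the smallest number of periods at which cumulative failure exceeds 99%. A quick sketch of the arithmetic (variable names are my own):

```python
import math

per_period_survival = 0.99  # 99% success rate per unit of time

# Cumulative survival after n independent periods is 0.99**n;
# cumulative failure is the complement.
def failure_probability(n: int) -> float:
    return 1 - per_period_survival ** n

# Smallest n with >99% cumulative failure: solve 0.99**n < 0.01.
n = math.ceil(math.log(0.01) / math.log(per_period_survival))
print(n, round(failure_probability(n), 4))  # 459 periods, ~0.99 failure
```

The point stands under any small per-period risk: the threshold scales roughly as log(0.01)/log(1 - p), so even very reliable institutions fail with near-certainty over long enough horizons unless the per-period risk itself is driven toward zero.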
Nathan Sidney @ 2025-02-24T08:04 (+13)
As an Australian and therefore beholden to both China and the USA, the answer doesn't seem so clear-cut to me. China has what seems to be an aggressive green agenda and a focus on social cohesion/harmony which fades into oppression. They seem to be able to get massive engineering projects completed and don't seem interested in getting involved in other countries' politics via proxy wars. Apparently they're alright with harvesting the organs of political prisoners.
America puts itself forward as the bastion of freedom but has massive inequality, large prison populations and can't figure out universal healthcare. Americans are creative, confident and murder each other frequently. Their president is a Christian who loves to grab pussies and dreams of hereditary rule.
My personal preference is to take my chances with unaligned ASI as the thought of either of these circuses being the ringmaster of all eternity is terrifying. I’d much rather be a paper clip than a communist/corporate serf.
Linch @ 2025-02-24T23:46 (+13)
My personal preference is to take my chances with unaligned ASI as the thought of either of these circuses being the ringmaster of all eternity is terrifying. I’d much rather be a paper clip than a communist/corporate serf.
I don't want to harp too much on "lived experiences", but both stated and revealed preferences from existing denizens of either the US or China will strongly suggest otherwise for the preferences of most other people. It's possible you'd have an unusual preference if you lived in those countries, but I currently suspect otherwise.
Nathan Sidney @ 2025-02-25T08:21 (+2)
An average North Korean may well think that AGI based on their values would be a great thing to overtake the universe, but most of us would disagree. The view from inside a system is very different from the view from the outside. Orwell spoke of a jackboot on the face of humanity forever. I feel like the EA community are doing their best to avoid that outcome, but I'm not sure major world powers are. Entrenching the power of current world governments is unlikely, in my view, to lead to great outcomes. Perhaps the wild card is a valid choice. More than I want to be a paperclip, I want to live in a world where building a billion humanoid robots is not a legitimate business plan and where AGI development goes slowly, slowly. That doesn't seem to be an option. So maybe no control of AGI is better than control by psychopaths?
Nathan Sidney @ 2025-02-24T08:17 (+2)
I guess the crux of my snarky comment is that if your only choice for master of the universe is between 2 evil empires, you're kinda screwed either way.
Timothy Chan @ 2025-02-25T14:02 (+1)
Yeah, kinda hoping 1) there exists a sweet spot for alignment where AIs are just nice enough from e.g. good values picked up during pre-training, but can't be modified during post-training so much to have worse values, and that 2) given that this sweet spot does exist we do hit it with AGI / ASI.
I think there's some evidence pointing to this happening with current models but I'm not highly confident that it means what I think it means. If this is the case though, further technical alignment research might be bad and acceleration might be good.
Neel Nanda @ 2025-02-23T08:22 (+18)
Human rights abuses seem much worse in China - this alone is basically sufficient for me
Davidmanheim @ 2025-02-23T16:11 (+43)
- Is what the US has done or supported in Iraq, Syria, Israel and elsewhere materially or obviously less bad?
- Do you feel the same way if AGI is created by the Trump administration, which has openly opposed a variety of human rights?
(I'm not entirely disagreeing directionally, I'm hoping to ask honestly to understand your views, not attack them.)
Manuel Allgaier @ 2025-02-25T08:27 (+19)
To give one example, I don't see anything in the US comparable to how the Chinese government treats Uyghurs.
NickLaing @ 2025-03-02T08:55 (+13)
I would suggest the US's role in toppling democratically elected leaders like Patrice Lumumba in Congo, and in Iran and Guatemala, may have caused at least as much suffering as the Uyghur atrocities.
It's hard to imagine anything worse than the Great Leap Forward, though.
Davidmanheim @ 2025-03-02T19:52 (+4)
Agreed on impacts - but I think intention matters when considering what the past implies about the future, and as I said in another reply, on that basis I will claim the Great Leap Forward isn't a reasonable basis to predict future abuse or tragedy.
Neel Nanda @ 2025-03-02T21:48 (+1)
I disagree. I think that if a government causes great harm by accident or great harm intentionally, either is evidence that it will cause great harm by accident or intentionally in future respectively and I just care about the great harm part
Davidmanheim @ 2025-03-03T02:27 (+3)
I certainly agree it's some marginal evidence of propensity, and that the outcome, not the intent, is what matters - but don't you think that mistakes become less frequent with greater understanding and capacity?
Davidmanheim @ 2025-02-25T13:17 (+3)
Historically, I'd disagree. And I'm not confident the change away from that is persisting.
Linch @ 2025-02-26T02:01 (+16)
If you are willing to bring up historical examples, then comparing like-for-like, nothing the US does domestically is of comparable badness to the Great Leap Forward, except maybe slavery (and that was an 1800s rather than a 1900s phenomenon). The US has also done other things that are quite bad over the last 100 years, e.g. the Japanese internment camps, but they're not in the same order of magnitude.
Davidmanheim @ 2025-02-26T05:06 (+24)
I think (tentatively) that making (even giant and insanely consequential) mistakes with positive intentions, like the Great Leap Forward, is in a meaningful sense far less bad than mistakes that are more obviously aimed at cynical self-benefit at the expense of others, like, say, most of US foreign policy in South America, or post-Civil-War policy related to segregation.
Gideon Futerman @ 2025-02-26T02:12 (+8)
Factory farming?
Linch @ 2025-02-26T02:28 (+7)
Good point! My impression is that animal welfare is worse in China than in the US, though I'm pretty unfamiliar with this topic.
fandi-chen @ 2025-03-28T09:54 (+13)
Hi, I’m Indonesian, and I have to disagree. While China has serious human rights abuses, the U.S. has also committed grave crimes, particularly through its global interventions.
For example, in 1965, the U.S.—along with the World Bank under Robert McNamara—helped install a dictatorial regime in Indonesia, supporting General Suharto, who went on to become the world’s most corrupt leader. In the process, at least 500,000 to 1 million people were massacred, falsely accused of being communists. This brutal anti-communist purge, known as the Jakarta Method, was later replicated in multiple countries, including Chile, Argentina, and Brazil, with devastating consequences.
Given the U.S.’s direct role in facilitating mass killings, coups, and authoritarian regimes worldwide, I’d argue that its crimes against global humanity might be worse than China’s.
Book Reference:
- Bevins, Vincent (2020). The Jakarta Method: Washington's Anticommunist Crusade and the Mass Murder Program that Shaped Our World
- Robinson, Geoffrey B. (2018). The Killing Season: A History of the Indonesian Massacres, 1965-66
Jackson Wagner @ 2025-04-22T21:34 (+9)
@ScienceMon🔸 There is vastly less of an "AI safety community" in China -- probably much less AI safety research in general, and much less of it, in percentage terms, is aimed at thinking ahead about superintelligent AI. (I.e., more of China's "AI safety research" is probably focused on things like reducing LLM hallucinations, making sure it doesn't make politically incorrect statements, etc.)
- Where are the Chinese equivalents of the American and British AISI government departments? Organizations like METR, Epoch, Forethought, MIRI, et cetera?
- Who are some notable Chinese intellectuals / academics / scientists (along the lines of Yoshua Bengio or Geoffrey Hinton) who have made any public statements about the danger of potential AI x-risks?
- Have any Chinese labs published "responsible scaling plans" or tiers of "AI Safety Levels" as detailed as those from OpenAI, Deepmind, or Anthropic? Or discussed how they're planning to approach the challenge of aligning superintelligence?
- Have workers at any Chinese AI lab resigned in protest of poor AI safety policies (like the various people who've left OpenAI over the years), or resisted the militarization of AI technology (like Googlers protesting Project Maven, or Microsoft employees protesting the IVAS HMD program)?
When people ask this question about the relative value of "US" vs "Chinese" AI, they often go straight for big-picture political questions about whether the leadership of China or the US is more morally righteous, less likely to abuse human rights, et cetera. Personally, in these debates, I do tend to favor the USA, although certainly both the US and China have many deep and extremely troubling flaws -- both seem very far from the kind of responsible, competent, benevolent entity to whom I would like to entrust humanity's future.
But before we even get to that question of "What would national leaders do with an aligned superintelligence, if they had one," we must answer the question "Do this nation's AI labs seem likely to produce an aligned superintelligence?" Again, the USA leaves a lot to be desired here. But oftentimes China seems to not even be thinking about the problem. This is a huge issue from both a technical perspective (if you don't have any kind of plan for how you're going to align superintelligence, perhaps you are less likely to align superintelligence), AND from a governance perspective (if policymakers just think of AI as a tool for boosting economic / military progress and haven't thought about the many unique implications of superintelligence, then they will probably make worse decisions during an extremely important period in history).
Now, indeed -- has Trump thought about superintelligence? Obviously not -- just trying to understand intelligent humans must be difficult for him. But the USA in general seems much more full of people who "take AI seriously" in one way or another -- Silicon Valley CEOs, Pentagon advisers, billionaire philanthropists, et cetera. Even in today's embarrassing administration, there are very high-ranking people (like Elon Musk and J. D. Vance) who seem at least aware of the transformative potential of AI. China's government is more opaque, so maybe they're thinking about this stuff too. But all public evidence suggests to me that they're kinda just blindly racing forward, trying to match and surpass the West on capabilities, without giving much thought as to where this technology might ultimately go.
ScienceMon🔸 @ 2025-04-24T11:21 (+3)
Holy smokes this is a good answer! Do you know if anyone is trying to spread thinking about AI x-risk in China, at least among the engineers and intellectuals? I'm unsure about the tractability, but this seems super important & neglected since China still has a decent shot of developing AGI first.
Several of the Statement on AI Risk signatories live in China, including a Dean at Tsinghua University. What can be done to further integrate them into the global AI x-risk community?
Jackson Wagner @ 2025-04-24T16:52 (+4)
I actually wrote the above comment in response to a very similar "Chinese AI vs US AI" post that's currently being discussed on LessWrong. There, commenter Michael Porter had a very helpful reply to my comment. He references a May 2024 report from Concordia AI on "The State of AI Safety in China", whose executive summary states:
The relevance and quality of Chinese technical research for frontier AI safety has increased substantially, with growing work on frontier issues such as LLM unlearning, misuse risks of AI in biology and chemistry, and evaluating "power-seeking" and "self-awareness" risks of LLMs.
There have been nearly 15 Chinese technical papers on frontier AI safety per month on average over the past 6 months. The report identifies 11 key research groups who have written a substantial portion of these papers.
China’s decision to sign the Bletchley Declaration, issue a joint statement on AI governance with France, and pursue an intergovernmental AI dialogue with the US indicates a growing convergence of views on AI safety among major powers compared to early 2023.
Since 2022, 8 Track 1.5 or 2 dialogues focused on AI have taken place between China and Western countries, with 2 focused on frontier AI safety and governance.
Chinese national policy and leadership show growing interest in developing large models while balancing risk prevention.
Unofficial expert drafts of China’s forthcoming national AI law contain provisions on AI safety, such as specialized oversight for foundation models and stipulating value alignment of AGI.
Local governments in China’s 3 biggest AI hubs have issued policies on AGI or large models, primarily aimed at accelerating development while also including provisions on topics such as international cooperation, ethics, and testing and evaluation.
Several influential industry associations established projects or committees to research AI safety and security problems, but their focus is primarily on content and data security rather than frontier AI safety.
In recent months, Chinese experts have discussed several focused AI safety topics, including “red lines” that AI must not cross to avoid “existential risks,” minimum funding levels for AI safety research, and AI’s impact on biosecurity.
Michael then says, "So clearly there is a discourse about AI safety there, that does sometimes extend even as far as the risk of extinction. It's nowhere near as prominent or dramatic as it has been in the USA, but it's there."
I agree that it's not like everyone in China is 100% asleep at the wheel -- China is a big place with lots of smart people, they can read the news and discuss ideas just like we can, and so naturally there are some folks there who share EA-style concerns about AI alignment. But it does seem like the small amount of activity happening there is mostly following / echoing / agreeing with western ideas about AI safety, and seems more concentrated among academics, local governments, etc, rather than also coming from the leaders of top labs like in the USA.
As for trying to promote more AI safety thinking in China, I think it's very tricky -- if somebody like OpenPhil just naively started sending millions of dollars to fund Chinese AI safety university groups and create Chinese AI safety think tanks / evals organizations / etc, I think this would be (correctly?) perceived by China's government as a massive foreign influence operation designed to subvert their national goals in a critical high-priority area. Which might cause them to massively crack down on the whole concept of western-style "AI safety", making the situation infinitely worse than before. So it's very important that AI safety ideas in China arise authentically / independently -- but of course, we paradoxically want to "help them" independently come up with the ideas! Some approaches that seem less likely to backfire here might be:
- The mentioned "track 2 diplomacy", where mid-level government officials, scientists, and industry researchers host informal / unofficial discussions about the future of AI with their counterparts in China.
- Since China already somewhat follows Western thinking about AI, we should try to use that influence for good, rather than accidentally egging them into an even more desperate arms race. Eg, if the USA announces a giant "manhattan project for AI" with great fanfare, talks all about how this massive national investment is a top priority for outracing China on military capabilities, etc, that would probably just goad China's national leaders into thinking about AI in the exact same way. So, trying to influence US discourse and policy has a knock-on effect in China.
- Even just in a US context, I think it would be extremely valuable to have more objective demonstrations of dangers like alignment faking, instrumental convergence, AI ability to provide advice to would-be bioterrorists, etc. But especially if you are trying to convince Chinese labs and national leaders in addition to western ones, then you are going to be trying to reach across a much bigger gap in terms of cultural context / political mistrust / etc. For crossing that bigger gap, objective demonstrations of misalignment (and other dangers like gradual disempowerment, etc) become relatively even more valuable compared to mere discourse like translating LessWrong articles into chinese.
Joseph @ 2025-02-25T17:51 (+7)
I don't have a clear answer to the question, but I want to point out the simplicity/simplification of some of the claims. (To be clear, I am not making claims here that one country/government is better than the other, or that one would be preferable to have AGI.)
The idea that the Chinese government is responsible for improved prosperity of the Chinese people is somewhat true, but an alternative narrative would be that the Chinese government stopped preventing people from improving their lives, and then lots of foreign direct investment helped. There is also something to be said of "catch-up growth." Unfortunately, I have only the vaguest of understandings of the factors that influenced Chinese growth over the past few decades. I think it is also worth noting that many of the things that the Chinese government has done for the flourishing of its citizens are things that the US government had done previously (infrastructure, consumer protection, public universities, etc.).
The claim that a wealthy China will take care of everyone is a very strong claim. Extraordinary claims require extraordinary evidence. Nations and governments tend to show a strong preference in favor of their own existence and their own people.
While there are plenty of things I dislike about the United States, I very much like the "liberal" aspect of liberal democracy: individual rights matter, and a very strong justification is needed to violate individual rights. The USA doesn't always do this well, but I feel comfortable saying that it is less common for the US government and for government employees to violate individual rights than in China.
It is true that America's founding and expansion were based on exterminating other people. It is also true that many countries throughout history (including China) have spent military and government resources exterminating "others." It has been several decades since the USA engaged in or openly endorsed the extermination of a people. I hope that modern people look on those events with shame and disgust, regardless of whether they were 10 years ago or 500 years ago.
These are, of course, very complex topics with lots of details and nuance. Plenty of full dissertations have been written on them. But to the extent possible I'd like to nudge us toward avoiding overly simplified narratives here on the EA Forum.
ScienceMon🔸 @ 2025-03-01T23:19 (+1)
Thanks for this feedback, Joseph. The bullet points I wrote were for sure overly simplified. I was trying to put myself into the shoes of an open-minded Chinese citizen, who has no doubt absorbed more pro-China propaganda than either of us has.
"How would you try to convince an open-minded Chinese citizen that it really would be better for America to develop AGI first?"
If it is possible to convince Chinese AI engineers that "losing the race" is in their best interest, then that would be a huge win for everyone. It would give the West more breathing room to develop AGI safely.
Non-zero-sum James @ 2025-02-27T05:48 (+4)
As of the last couple of months, not confident at all. You make good points about progress made in China, and they go some way to balance human rights abuses (but nothing really balances those), but they're not really the factors that are at play for me. I'm more concerned with the mental stability of the leadership in the US.
Sharmake @ 2025-03-10T20:18 (+2)
To be honest, even if we grant the assumption that AI alignment is achieved and it matters who achieves AGI/ASI, I'd be much, much less confident in America racing, and think that it's weakly negative to race.
One big reason for this is that the pressures AGI introduces are closer to cross-cutting pressures than pressures dependent on any particular nation, like the intelligence-curse sort of scenario where elites have incentives to invest in their automated economy and leave the large non-elite population to starve or be repressed:
Davidmanheim @ 2025-02-22T20:00 (+2)
It's a game of chicken, and I don't really care which side is hitting the accelerator if I'm stuck in one of the cars. China getting uncontrolled ASI first kills me the same way that the US getting it does.
Edit to add: I would be very interested in responses instead of disagree votes. I think this should be the overwhelming consideration for anyone who cares about the future more than, say, 10 years out. If people disagree, I would be interested in understanding why.
Jamie_Harris @ 2025-03-09T16:14 (+11)
Since you requested responses: I agree with something like: 'conditional upon AI killing us all and then going on to do things that have zero moral (dis)value, it then matters little who was most responsible for that having happened'. But this seems like an odd framing to me:
- Even if focusing solely on AI alignment, different actors have varying levels of responsibility for worsening various risk factors or contributing to various safety/security/mitigation between now and the arrival of transformative AI / ASI.
- The post asked about AGI. Reaching AGI is not the same as reaching ASI, which is not the same as extinction.
- It seems very possible that humanity could survive but the world could end up as severely net negative. See "The Future Might Not Be So Great", "s-risks", and the upcoming EA Forum debate week.
- In particular, I believe AI alignment is not enough to ensure positive futures. See for example risks of stable totalitarianism, risks from malevolent actors, risks from ideological fanaticism. We can think of 'human misalignment' or misuse of AI.
Davidmanheim @ 2025-03-11T20:33 (+4)
To respond to your points in order:
- Sure, but I think of, say, a 5% probability of success and a 6% probability of success as similarly dire enough not to want to pick either.
- What we call AGI today, human-level at everything as a minimum but running on a GPU, is what Bostrom called speed and/or collective superintelligence, assuming chip prices and speeds continue to improve.
- (On points 3 and 4.) Sure, alignment isn't enough, but it's necessary, and it seems we're not on track to clear even that low bar.
Seth Herd @ 2025-03-13T22:13 (+3)
You were getting disagree votes because it sounded like you were claiming certainty. I realize that you weren't trying to do that, but that's how people were taking it, and I find that quite understandable. Chicken as an analogy has certain death if neither player swerves, in the standard formulation. Qualifying your statement even a little would've gotten your point across better.
FWIW I agree with your statement as I interpret it. I do tend to think that an objective measure of misalignment risk (I place it around 50% largely based on model uncertainty on all sides) makes the question of which side is safer basically irrelevant.
Which highlights the problem with this type of miscommunication. You were making probably by far the most important point here. It didn't play a prominent role because it wasn't communicated in a way the audience would understand.
Isaac Dunn @ 2025-02-23T16:37 (+2)
You're stating it as a fact that "it is" a game of chicken, i.e. that it's certain or very likely that developing ASI will cause a global catastrophe because of misaligned takeover. It's an outcome I'm worried about, but it's far from certain, as I see it. And if it's not certain, then it is worth considering what people would do with aligned AI.
Davidmanheim @ 2025-02-24T04:20 (+5)
I'm confused why people think certainty is needed to characterize this as a game of chicken! It's certainly not needed in order for the game theoretic dynamics to apply.
I can make a decision about whether to oppose something given that there is substantial uncertainty, and I have done so.
Isaac Dunn @ 2025-02-24T20:52 (+1)
I agree with this comment, but I interpreted your original comment as implying a much greater degree of certainty of extinction assuming ASI is developed than you might have intended. My disagree vote was meant to disagree with the implication that it's near certain. If you think it's not near certain it'd cause extinction or equivalent, then it does seem worth considering who might end up controlling ASI!
Davidmanheim @ 2025-02-25T03:52 (+3)
If it's "only" a coinflip whether it causes extinction if developed today, to be wildly optimistic, then I will again argue that talking about who should flip the coin seems bad - the correct answer in that case is no one, and we should be incredibly clear on that!
Isaac Dunn @ 2025-02-25T12:01 (+1)
Agree coin flip is unacceptable! Or even much less than coin flip is still unacceptable.
Nathan Sidney @ 2025-06-19T11:45 (+1)
To caricature the situation, imagine you have three entities squabbling over your atoms; Super Trump, Super Xi and Super AI. Framed like that it seems almost certain that ASI has more interesting uses for your bits.