How good would a CCP-dominated AI future be?

By OscarD🔸 @ 2025-10-22T01:14 (+52)

This is a linkpost to https://oscardelaney.substack.com/p/how-good-would-a-ccp-dominated-ai

I am not American. But compared to China, I have plenty of friends from the U.S., Australia is more culturally similar to the U.S., and I have spent more time there. So, is the fact that I support the U.S. over China in the AI race just selfish and downstream of my personal circumstances?

I would like to think not. But it is worth seriously considering what the future would be like if the CCP becomes the dominant player in AI development. Dominance in AI development could (but would not necessarily) give China a decisive strategic advantage and control over the galactic future. Here, I think through what a CCP-led future would mean for human flourishing and for avoiding risks from misalignment and war.[1]

Present-day harms

I care about creating a Better Future – not just today's world minus poverty and disease, but a utopian future qualitatively different from today. How likely are we to achieve such a future if the CCP is in charge?

First, I think some common reasons to dislike the CCP aren't as decisive in the long term as they might seem. I take a fairly utilitarian perspective here; placing intrinsic value on diversity, democracy, or human rights regardless of welfare outcomes makes the picture look considerably worse.

Persecuting minorities: China is (in)famously not a nice place for ethnic or religious minorities (e.g. Tibetans, Uyghurs). But in a CCP-controlled future, the vast majority of people (either biological or digital) will likely be culturally Han Chinese. This is because minorities may be successfully (if brutally) assimilated, or they could simply be underrepresented in space colonization and digital minds programs. If we're scope-sensitive and thinking about trillions of future beings, the persecution of minorities in the 21st century, while deeply tragic, will not feature prominently in the overall (dis)value of the future.

Repressing freedom: But it isn't just minorities that have a hard time in China – everyone's speech and action is closely policed, which is arguably incompatible with a flourishing society. However, this too might change: you only need to rule with an iron fist if you are scared of losing your grip on power. In an ASI-enabled CCP dictatorship, there could be common knowledge that overthrowing the government is impossible, so leaders might not have as much to fear from some protest action and dissident speech. For instance, miniaturized drones could simply incapacitate anyone attempting serious violence, reducing the need for pre-emptive thought-policing. Of course, such a society would still be meaningfully unfree in some senses, but shrinking the set of impermissible activities to a far narrower one (violent rebellion) could be a vast improvement on today's repression of free speech.

That said, there are strong counterarguments here. Historical authoritarian regimes have rarely relaxed control even when firmly entrenched. And free speech that cannot change anything is arguably too constrained to contribute much to flourishing. So while I think some loosening of day-to-day repression is possible, I'm far from confident. I tentatively think an ASI-powered CCP might allow somewhat more personal freedom than exists today.

Economic stasis: Centrally planned economies have historically not produced innovations at the same rate as liberal democracies. Restricting the free market is putatively the road to serfdom. However, ASI could for the first time allow centralised information processing to be competitive with the distributed information processing of the market. It may not be fully efficient, but an ASI-powered centralised economy is likely to avoid catastrophic blunders like the Great Leap Forward.

Based on these considerations, I tentatively expect that the average welfare of individual subjects in a CCP-led future would be fairly high—perhaps better than many pessimistic portrayals suggest. However, I think this still misses out on most possible value.

A Flourishing Future

Moral innovation: The truly best futures may require substantial moral reflection and innovation, ending up very different from today. Recent centuries have seen enormous moral progress: increasing consideration of the interests of peasants, women, ethnic minorities, animals, and future people. My impression is that most of this innovation has originated in the West and been exported later, if at all, to China and other authoritarian states.[2] Moral philosophy research also seems far stronger in the West than in China. The ethical schools of thought I'm most aligned with—longtermism, sentientism, effective altruism, and utilitarianism—are far more prominent in the West (though still very niche).[3]

Western countries appear more likely to expand the moral circle to include animals.[4] If the far future contains vast numbers of animals (or especially digital minds), the ruling culture being more pro-animal might matter greatly. Of course, the U.S. has awful factory farming too, so perhaps it isn't that much better.

It is also interesting that China ranked last out of 24 major countries on charitable giving as a percentage of GDP, with 0.03%, compared to the U.S. at 1.44%. But I don’t put much weight on this, given the very different cultures and economies of the two countries.[5]

Pluralism, liberalism, and the long reflection: Despite my tentative prediction that China might become less repressive if it controlled the future, I don't expect China to become a liberal democracy. Power will likely remain immensely concentrated in one or a few CCP leaders. And for all their faults, liberal democracies still seem far better at dynamism and taking new ideas seriously. If something like "the moral truth" exists to be discovered, it will probably look quite weird and different from any current ideology. A pluralistic, liberal society has a better chance of progressing towards the moral truth; Xi Jinping Thought surely isn’t the last word on moral truths in the universe. Even under moral anti-realism, a more pluralistic moral reflection process may produce better outcomes by most people’s lights.

It's worth noting that Taiwan, which shares Chinese cultural heritage but developed democratic institutions, scores much better on liberalism, pluralism, and moral/institutional innovation than the mainland. This suggests the issue is less about "Chinese values" and more about the governance system the CCP has imposed.

So, even if a CCP-run future delivers reasonable welfare for most beings, I expect it to miss out on the vastly greater value that could be unlocked through continued moral progress and liberal dynamism. The difference between a "pretty good" future and a truly excellent one could be astronomical in a universe-spanning civilization.

Avoiding AI catastrophe

But before we even get to designing utopia, humanity needs to safely navigate the acute risks associated with developing ASI. How would a Chinese lead in AI affect our chances of avoiding misaligned AI takeover and war?

Misalignment: Historically, most work outlining risks from misaligned AI and potential solutions has come from the West. Some safety work is emerging from China, but my impression is that there are still far fewer people there who deeply grasp the risks from misaligned ASI. Part of this simply reflects that the West leads in AI research generally, not some deep cultural difference. Still, by default, I expect a Chinese lead in AI development to mean less effort from the leading AI project in preventing AI takeover.

Moreover, given that the US is currently ahead, any Chinese lead would likely be a narrow one, with both the US and China racing recklessly to avoid falling behind. This would be terrible for doing deep alignment work. By contrast, it is more plausible that the US could gain a large lead, allowing it to slow down and invest more in safety work at the crucial moment (though whether it actually would is another question).[6]

One countervailing consideration: conditional on a Chinese lead, China's AI developers have probably been centralized under state control, which could reduce within-country racing between projects and potentially allow for more safety work. But this effect seems relatively weak, and the centralization itself creates other problems. Overall, I expect a Chinese lead to significantly harm our chances of solving alignment in time.

War: Forecasting which AI development pathways are more likely to lead to a US-China war is extremely difficult. As I've argued previously, the commitment problems created by the possibility of decisive strategic advantage make rational war more likely than in typical geopolitical contexts.

One side (likely the US) having a large lead could reduce the chance of war, as the laggard would recognize its low chances of success (whereas in a close race the laggard could still hope to catch up legitimately). Conversely, a laggard that is far behind might act out of desperation, or be unwilling to “lose face” by accepting a lopsided bargain. Overall, the interplay between the size and direction of an AI lead and the risk of war seems murky.

Conclusion

So, I have reaffirmed the traditional conclusion that a US lead is good. What should we do about this? Probably nothing new – I think this validates the AI governance community’s focus on denying China access to AI compute, and on making the US government take AI more seriously. Still, given the possibility of a Chinese lead in AI (and, thereafter, maybe domination of space futures), an increase in people thinking about AI safety and moral innovation in China seems great.

  1. ^

     I focus less on concentration of power as a distinct risk here because, as discussed in the flourishing section, power is already highly concentrated in China. The question is more "what do they do with it?" than "will they accumulate it?"

  2. ^

     A partial counterexample is that Soviet gender norms were arguably more egalitarian than those in the West from earlier on, as seen for instance in higher rates of female workforce participation.

  3. ^

     Interestingly, one could argue that the CCP's willingness to sacrifice individuals for collective goals reflects a kind of crude utilitarianism. But I don’t think the CCP is particularly utilitarian in the philosophical sense; it is more just that they don’t value individuals much.

  4. ^

     An alternative hypothesis is that animal-friendliness is a ‘luxury belief’ associated with living in a rich society, and that China hasn’t been rich for long enough for the cultural effects to flow through.

  5. ^

     I don’t think this ranking is a great guide to the moral fiber of a country or anything (e.g. the Nordics are also relatively low). Just one small piece of evidence.

  6. ^

     The longer a lead the US has, the more likely it becomes that key decision-makers vote to slow down and do more safety work. If the US is one year ahead, taking a three-month safety pause seems more feasible than if they're only one month ahead.


Wei Dai @ 2025-10-24T06:09 (+43)

The ethical schools of thought I'm most aligned with—longtermism, sentientism, effective altruism, and utilitarianism—are far more prominent in the West (though still very niche).

I want to point out that the ethical schools of thought that you're (probably) most anti-aligned with (e.g., that certain behaviors and even thoughts are deserving of eternal divine punishment) are also far more prominent in the West, proportionately even more so than the ones you're aligned with.

Also the Western model of governance may not last into the post-AGI era regardless of where the transition starts. Aside from the concentration risk mentioned in the linked post, driven by post-AGI economics, I think different sub-cultures in the West breaking off into AI-powered autarkies or space colonies with vast computing power, governed by their own rules, is also a very scary possibility.

I'm pretty torn and may actually slightly prefer a CCP-dominated AI future (despite my family's past history with the CCP). But more importantly I think both possibilities are incredibly risky if the AI transition occurs in the near future.

OscarD🔸 @ 2025-10-25T00:49 (+4)

I agree that both possibilities are very risky. Interesting re belief in hell being a key factor, I wasn't thinking about that.

Even if a future ASI would be able to very efficiently manage today's economy in a fully centralised way, possibly the future economy will be so much more complicated that it will still make sense to have some distributed information processing in the market rather than have all optimisation centrally planned? Seems unclear to me one way or the other, and I assume we won't be able to know with high confidence in advance what economic model will be most efficient post-ASI. But maybe that just reflects my economic ignorance and others are justifiably confident.

Wei Dai @ 2025-10-26T01:44 (+7)

Interesting re belief in hell being a key factor, I wasn't thinking about that.

It seems like the whole AI x-risk community has latched onto "align AI with human values/intent" as the solution, with few people thinking even a few steps ahead to "what if we succeeded"? I have a post related to this if you're interested.

possibly the future economy will be so much more complicated that it will still make sense to have some distributed information processing in the market rather than have all optimisation centrally planned

I think there will be distributed information processing, but each distributed node/agent will be a copy of the central AGI (or otherwise aligned to it or shares its values), because this is what's economically most efficient, minimizes waste from misaligned incentives and so on. So there won't be the kind of value pluralism that we see today.

I assume we won't be able to know with high confidence in advance what economic model will be most efficient post-ASI.

There's probably a lot of other surprises that we can't foresee today. I'm mostly claiming that post-AGI economics and governance probably won't look very similar to today's.

Joseph_Chu @ 2025-10-23T14:59 (+31)

I'm curious what you think of Geoffrey Hinton's recent comments during his interview with Jon Stewart, where he said that on a recent trip to China, he met with a member of the Politburo and found that this person took the concerns of AI safety and AI takeover very seriously, and that Hinton felt China was more likely to do things about it than the U.S.

Also, while it's definitely true that China hasn't embraced most western liberal values like multiparty democracy, rule of law, and human rights, you can debate some of the finer points and argue that, for instance, the Marxist intellectual tradition is western in origin, and that China's alternative to western liberalism is a strange mixture of Marxism and Confucianism.

And, it might be noted regarding ethnic minorities that while separatism is severely punished, minorities that conform to the existing system are often rewarded with, for instance, extra points on the university entrance examination system (Gaokao), as a form of affirmative action.

Back to moral philosophy, the nature of Chinese moral philosophy seems to be more practical than analytical. Probably the most analytical moral philosophy to come out of China was Mohism, which, considering how much it predates Utilitarianism, is remarkably similar to it: an overall consequentialist framework with an emphasis on human equality and the greatest good. Interestingly, some of the CCP literature in the past has tried to emphasize Mohism as some kind of forerunner to modern Marxism.

In terms of the future going well, I think the strongest argument for a CCP aligned AGI being beneficial would be that some kind of post-scarcity communism is likely to achieve more human flourishing than the techno-feudalism that western capitalism could potentially devolve into with the AGI company leaders owning everything and the rest of us surviving on basic income that exists at the whim of these AGI owners.

The CCP, for all its faults, is nominally still a communist party, and so is more likely to, given an actual chance to succeed at it, introduce post-scarcity communism that spreads the benefits of AGI in a generally egalitarian way. Though, obviously a possible failure state is that the party instead monopolizes AGI's benefits and we still get techno-feudalism, albeit state-run instead of private.

Also, while China ranks in the middle on the World Happiness Report, it actually ranked highest on the IPSOS Global Happiness Report from 2023, which was the last year that China was included in the survey.

As for the lack of charitable donations, there are probably a number of reasons for this. Certain scandals involving the Red Cross have in the past made people wary of donating. And, probably more significantly, Chinese cultural expectations mean that a lot of what would be charitable work in the west is expected to be done by either family or the government. I personally have tried to convince some Chinese nationals to donate to, for instance, AMF, and their response is usually along the lines of this being the local government's responsibility. There is definitely a strain of collectivism in China that contrasts with the individualism of western liberal democracies.

So, I think, a CCP-led AI future would probably be notably different from a western-led one, but I'm unclear on whether this would actually be that much worse. At the end of the day, both would, ideally, be led by humans and human-aligned ASI.

OscarD🔸 @ 2025-10-25T00:55 (+2)

Interesting, I hadn't seen that interview. I stand by the overall claim that AI safety is more prominent in the West than China, though I am glad to see more people in China becoming safety-oriented.

Re the CCP being more redistributionist: that could be the case, but I am also worried that once individuals aren't economically useful their interests won't be looked out for as much by the state, unless they stay politically empowered, which requires democracy. I think the CCP would still care enough about its people to distribute AI benefits to them even when the people aren't useful investments, but I'm unsure. Whereas I think I would be more surprised if e.g. the US let its people be greatly deprived even if they were ~useless deadweights.

titotal @ 2025-10-22T18:13 (+17)

This analysis seems to have an oddly static view of the future, as if the values of the current-day CCP will be locked in forever. But the worldview of Chinese leadership has changed massively, multiple times over the last century.

It's easily seen in history that the economic organisation of a country can have huge effects on its culture and how it's governed: would not the advent of powerful AI do the same? For example, perhaps the Chinese would have more time to discuss moral philosophy if AI allowed them to not have to work as hard.

OscarD🔸 @ 2025-10-25T00:56 (+2)

Possibly, though I expect ASI could also be used to lock in one's values, such that there will be more stasis unless the people in power deliberately embrace dynamism and liberalism of values.

Kevin Xia 🔸 @ 2025-10-22T08:29 (+14)

Super interesting read, thanks for writing this! I have been thinking a bit about the US and China in an AI race and was wondering whether I could get your thoughts on two things I have been unsure about:

1) Can we expect the US to remain a liberal democracy once it develops AGI, especially given recent concerns around democratic backsliding? (I think I first saw this point brought up in a comment here.) And if we can't, would AGI under the US still be better?

2) On animal welfare specifically, I'm wondering whether the very pragmatic, techno-optimistic, efficiency-oriented stance of China could make a pivot to alternative proteins (assuming they are an ultimately more efficient product) more likely than in the US, where alt-proteins might be more of a politically charged topic?

I don't have strong opinions on either, but these two points first nudged me to be significantly less confident in my prior preference for the US in this discussion.

OscarD🔸 @ 2025-10-22T17:58 (+2)

Great points, I agree both of those are concerns, and don't have much to add. I think the risk of further democratic backsliding in the U.S. is very real, and could be AI-exacerbated. But I suppose a risk of backsliding is better than China already being autocratic.

And interesting re alt proteins, yes that seems quite plausible to me! If this ends up being the crux it would probably be worth doing more surveys and social science work to understand this better.

Kiara 🔸 @ 2025-10-22T15:11 (+9)

Thanks for writing this, this is a super valuable question to be asking. I've been wondering about this myself recently.

Can I ask what your level of confidence is for these conclusions, or your knowledge of China generally, given that you stated you are more familiar with the U.S.? My level of information about China is not super high either (I do have a degree in Global and International Studies, but spent relatively little time focused on China), but I did find myself questioning some arguments / wondering what info they are based on. This is a valuable exercise even if you don't have high confidence in your China knowledge, but it would be helpful to have a sense of what that level is.

If it helps for context, here are some examples of what stuck out to me:

I hope that this doesn't come across as super critical! (Tone online can be hard to get right). I think this was a really good post and found it very valuable, I just feel it would be good to know up front how highly you would rate your knowledge/confidence that led to these conclusions.

OscarD🔸 @ 2025-10-22T18:09 (+3)

Thanks, not over-critical at all! Good point: I am fairly confident that by my values a US-led future would be better, but I am quite uncertain how large this effect is, and each individual consideration/argument is fairly fuzzy.

I don't have any particular China expertise, but I work in international AI governance so try to stay quite familiar with at least AI-relevant aspects of things going on in China.

  • Moral innovation: I was considering citing something like comparing university rankings for philosophy vs natural sciences, where Chinese universities seem to do better in the latter than the former. But I'm not sure how much to trust such rankings, and my claim is more vibes-based: even though the things I hear are very Western-tinted, I am far more likely to hear about cutting-edge scientific work coming out of China than cutting-edge philosophy. Though yes, of course it is also the case that I personally just find Western philosophy more useful (specifically analytic philosophy, not continental).
  • Economic stasis: True, I think China is becoming more innovative and dynamic technologically/economically, and it is possible it will overall catch up with the West. Though my guess is that liberal, capitalist political-economic systems will still overall prove better for long-run innovation. 
Arepo @ 2025-10-22T14:06 (+5)

Cool to see someone trying to think objectively about this. Inspired by this post, I had a quick look at the scores on the World Happiness Report to compare China to its ethnic cousins, and while there are many reasons to take this with a grain of salt, China does... ok. On 'life evaluation', which appears to be the all-things-considered metric (I didn't read the methodology, correct me if I'm wrong), some key scores:

Taiwan: 6.669

Philippines: 6.107

South Korea: 6.038

Malaysia: 5.955

China: 5.921

Mongolia: 5.833

Indonesia: 5.617

Overall it's ranked 68th of 147 listed countries, and outscores several (though I think a minority of) LMIC democratic nations. One could attribute some of its distance from the top simply as a function of lower GDP per capita, though one could also argue (as I'm sure many do) that its lower GDP per capita is a result of CCP control (though maybe if this is true and is going to continue to be true, that's incompatible with the idea that they've got a realistic chance of winning an AI arms race and consequently dominating the global economy).

One view I wish people would take more seriously is the possibility that it can be true both that

OscarD🔸 @ 2025-10-22T17:55 (+2)

Interesting, yes perhaps liberalising/democratising China may be desirable but not worth the geopolitical cost to try to make happen.

Joseph_Chu @ 2025-10-23T15:01 (+1)

As I mentioned in another comment, while China ranks in the middle on the World Happiness Report, it actually ranked highest on the IPSOS Global Happiness Report from 2023, which was the last year that China was included in the survey.

Jack_S🔸 @ 2025-10-27T23:40 (+6)

I'd lean towards the World Happiness Report results here. IPSOS uses a fully online sample, which means you end up losing the "bottom half" of the population. World Happiness Report is phone and in-person.

Joseph_Chu @ 2025-10-28T13:54 (+2)

Oh, thanks for the clarification! I totally missed that difference.

Given how the "bottom half" of China's population is, to my admittedly cursory knowledge, mostly the poor rural farmers and migrant workers who have benefited a lot less from China's recent economic growth, and are likely a big reason why China's GDP per capita is still a fair bit lower than most western developed countries despite the shiny new city skylines, it makes sense that including that segment would make a big difference in the evaluation.

Thanks again! That actually makes me update on my earlier evaluation of the utilitarian impact of China a lot.

Jackson Wagner @ 2025-10-23T17:51 (+4)

Linking my own thoughts as part of previous discussion "How confident are you that it's preferable for America to develop AGI before China does?".  I generally agree with your take.

Owen Cotton-Barratt @ 2025-10-23T00:53 (+4)

I might have thought that some of the most important factors would be things like: 

(Roughly because: either power is broadly distributed, in which case your comments about liberal democracy don't seem to have so much bite; or it's not, in which case it's really the values of leadership that matter.) But I'm not sure you really touch on these. Interested if you have thoughts.

OscarD🔸 @ 2025-10-25T23:02 (+2)

Not sure I follow properly - why would liberal democracy not matter? I think whether biological humans are themselves enhanced in various ways matters less than whether they are getting superhuman (and perhaps super-wise) advice. Though possibly wisdom is different and you need the principal to be wise themselves, rather than just getting wise advice.

Owen Cotton-Barratt @ 2025-10-26T15:00 (+2)

Yeah roughly the thought is "assuming concentrated power, it matters what the key powerful actors will do" (the liberal democracy comment was an aside saying that I think we should be conditioning on concentrated power).

And then for making educated guesses about what the key powerful actors will do, it seems especially important to me what their attitudes will be at a meta-level: how they prefer to work out what to do, etc. 

Elias Au-Yeung @ 2025-10-23T15:43 (+3)

Higher-variance outcomes seem more likely with a U.S.-led future than with a China-led future.

This might be one reason to think that worst-case outcomes are more likely in a U.S.-led future.