Are China’s Calls for Global AI Governance Strategic or Genuine?

January 22, 2026

In a Gist

"China is ready to work with all other countries in a spirit of openness and cooperation to advance sci-tech innovation... and address global challenges in such areas as AI governance." — Vice-Premier He Lifeng, Davos, January 21, 2026

This is the latest in a consistent two-year drumbeat from Beijing. The record:

On safety risks, China has been unusually forthcoming. Vice-Premier Ding Xuexiang warned at Davos 2025 that unregulated AI could become a "grey rhino"—a visible but ignored catastrophe—adding: "if the braking system isn't under control, you can't step on the accelerator with confidence." Xi chaired a rare Politburo study session on AI in April 2025, warning of "unprecedented risks." At Bletchley in 2023, China signed a declaration acknowledging AI's "unintended issues of control"—language pointing at existential risk.

On global governance, Beijing moved from rhetoric to institution-building. Xi launched the Global AI Governance Initiative (October 2023), calling for a UN-based AI institution. Chinese scientists contributed to the International AI Safety Report. China launched a bilateral AI dialogue with the UK (May 2025) and proposed a World AI Cooperation Organization (WAICO) with a 13-point action plan (July 2025).

On military AI, Xi and Biden agreed in Lima (November 2024) that "human beings and not AI should make decisions over nuclear weapons"—the first such bilateral commitment.

These initiatives nest within Xi's broader framework: the Global Development Initiative (2021), Global Security Initiative (2022), and Global Civilization Initiative (2023). As our Strategic Memo argues, this points toward comprehensive AI governance spanning military and civilian domains—the scope the 1946 Baruch Plan envisioned for nuclear technology.

Is this strategic posturing? The skeptical case is obvious: China trails in the AI race, so slowing development benefits the follower. WAICO's proposed Shanghai headquarters reveals institutional self-interest. Positioning as the "responsible power" helps Beijing's image when American protectionism offers contrast.

Yet several factors suggest genuine concern. China's leadership is unusually science-literate—Xi studied chemical engineering. Unlike nations whose governance consists of declarations, China has implemented binding regulations: pre-deployment safety assessments, content watermarks, and removal of 3,500+ non-compliant products. The two-year consistency—across venues, officials, and formats—suggests institutional commitment, not opportunism.

The question isn't whether Chinese motives are pure—no nation's are. The question is whether this window can be leveraged for a serious US-China-led treaty before it closes. The upcoming Trump-Xi meeting in Beijing offers that opportunity.

A Consistent Two-Year Record

The 2023 Inflection Point

Credit belongs first to UK Prime Minister Rishi Sunak, who bet enormous political capital on the Bletchley Park AI Safety Summit. When most leaders treated AI as economic opportunity, Sunak insisted on centering safety risks—including existential ones. He convened the first global summit on frontier AI dangers, secured US and China participation, and created political space for serious discussion of risks governments preferred to downplay.

China seized the opening. Two weeks before Bletchley, Xi announced the Global AI Governance Initiative, acknowledging "unpredictable risks and complicated challenges" and calling for "common, comprehensive, cooperative, and sustainable security." The timing was deliberate, positioning Beijing as a constructive partner.

At Bletchley, Vice Minister Wu Zhaohui sat alongside US Commerce Secretary Raimondo and UK Technology Secretary Donelan in what observers called a "moment of symbolic unity." China signed the Declaration committing to "identifying AI safety risks of shared concern."

2024-2025: Deepening Commitments

The momentum continued:

  • November 2024: Xi-Biden Lima agreement on human control of nuclear weapons and "prudent and responsible" military AI development

  • January 2025: Ding Xuexiang's "braking system" speech at Davos, warning of "grey rhino" risks

  • April 2025: Xi chairs Politburo study session warning of "unprecedented" AI risks

  • May 2025: China-UK bilateral AI dialogue launched

  • July 2025: WAICO proposal and Global AI Governance Action Plan announced

  • January 2026: He Lifeng's Davos address reaffirming cooperation on AI governance

As Brian Tse of Concordia AI documents, China "issued more national AI standards in the first half of 2025 than in the previous three years combined."

Integrated with a Broader Vision

He Lifeng's Davos 2026 speech explicitly connected these threads: "President Xi has put forward a global development initiative, a global security initiative, a global civilization initiative and, last year, the global governance initiative."

AI governance isn't a standalone Chinese priority; it's part of an integrated worldview that treats global challenges as requiring cooperative solutions. This matters: a comprehensive AI treaty must address dual-use technologies spanning military and civilian applications, precisely the scope of the Baruch Plan.

The Strategic Positioning Argument

China trails in frontier AI. Any framework constraining maximum-speed development benefits the follower. WAICO's Shanghai headquarters would give China institutional influence. And "responsible power" framing helps Beijing's image when US protectionism offers contrast.

These advantages don't make China insincere—they make it rational. American nuclear non-proliferation support served US interests and reflected genuine concern. Strategic benefit and genuine concern aren't mutually exclusive.

Why No Direct Treaty Proposal?

If China wants comprehensive governance, why has it proposed no specific treaty covering dual-use military and civilian AI?

Likely answer: any serious treaty must be co-led with the United States. A China-only proposal lacks legitimacy, enforcement capacity, and coverage of leading AI developers. Beijing appears to understand that pursuing the impossible serves no one. Better to signal willingness and wait for American partnership.

This interpretation is supported by China's signal that it was open to continuing the Biden-era AI risk dialogue, the channel that yielded the nuclear weapons agreement.

The Post-Bletchley Silence on Extinction Risk

Since Bletchley, world leaders have largely stopped discussing AI extinction risks, despite a growing scientific consensus on the danger. A rare exception: MI5 Director General Ken McCallum warned in October 2025 of "potential future risks from non-human, autonomous AI systems which may evade human oversight and control."

The explanation: after Bletchley acknowledged control risks, the AI race intensified. Any leader warning of extinction now risks being labeled a "doomer" intent on setting their nation back economically. The political cost of stating uncomfortable truths has become prohibitive.

Breaking this equilibrium requires creating political cover for honest discussion—precisely what a bilateral US-China framework could provide.

Indicators of Genuine Concern

Strategic positioning doesn't explain the full picture.

Science-Driven Leadership

China's Politburo Standing Committee has unusually high engineer and scientist representation; Xi himself studied chemical engineering. This technical literacy shapes how the leadership processes AI risk assessments: they understand the science in ways many Western politicians don't.

That Xi personally convened an AI Politburo study session signals internal priority-setting, not merely external diplomacy.

Domestic Regulatory Action

Unlike nations whose governance consists of declarations, China has implemented binding regulations: pre-deployment safety assessments, watermark mandates, and pre-approval of public-facing LLMs. Regulators have removed 3,500+ non-compliant products.

This contrasts with the US, where Executive Order 14179 revoked Biden-era safety requirements.

Scientific Engagement

Chinese researchers contributed substantively to the International AI Safety Report and Singapore Consensus on AI Safety Research Priorities. This goes beyond diplomatic gestures—it's genuine intellectual contribution to understanding AI risks.

Conclusion: Window of Opportunity

There is no certainty that China and Xi are not, in reality, aiming at global domination, with every call for bold and sane global AI governance mere positioning. But much evidence points to the contrary being more probable:

  • Two years of consistent messaging across different officials, venues, and formats—not the pattern of opportunistic rhetoric

  • Binding domestic regulations imposing real costs on Chinese industry—not the behavior of a nation treating governance as pure posturing

  • Scientific engagement contributing substantively to international safety research—not the approach of actors merely seeking diplomatic cover

  • Integration with Xi's broader philosophical framework—suggesting AI governance fits a genuine worldview, not a tactical add-on

  • The nuclear weapons agreement with Biden—extending cooperation into the most sensitive military domain

None of this guarantees Chinese sincerity. But it suggests a meaningful probability that Beijing would engage in good faith on a serious, comprehensive AI treaty if the United States offered credible partnership.

This should give us hope. Because it means the chokepoint of global AI governance—and therefore, plausibly, the future trajectory of humanity—rests on a single question:

Can Trump be persuaded to co-lead with Xi an extraordinarily bold, timely, and properly scoped global AI treaty?

Not whether China would cooperate. Not whether the technical challenges are solvable. Not whether middle powers would join. The evidence suggests these obstacles are surmountable. The binding constraint is American willingness to lead.

The upcoming Trump-Xi meeting in Beijing offers precisely that opportunity. The Strategic Memo we've developed maps how Trump's own advisors—figures like JD Vance, David Sacks, and even Elon Musk—might be persuaded that a US-China-led AI treaty serves American interests, American values, and Trump's own legacy.

The window won't stay open forever. But right now, it appears open.

Join us in The Deal of the Century.


The Coalition for a Baruch Plan for AI advocates for a US-China-led international treaty to govern transformative AI before the window closes. Read our Strategic Memo and support this effort.

Rufo Guerreschi

I am a lifetime activist, entrepreneur, and researcher in the area of digital civil rights and leading-edge IT security and privacy – living between Zurich and Rome.

https://www.rufoguerreschi.com/