After a US-China Emergency Deal, Only a Realist Constitutional Convention Can Deliver the Global AI Treaty We Need
by Rufo Guerreschi on February 9th, 2026
Abstract: Assume the hardest part happens: the US and China reach a temporary emergency agreement on AI. What comes next? The honest answer is that every conventional treaty-making model — UN multilateralism, plurilateral clubs, voluntary frameworks, even a simple bilateral expansion — will fail to produce the global treaty we actually need, on a timeline that matters. There is exactly one model with a realistic chance of success: a Realist Constitutional Convention for AI, with voting weighted by GDP, technological leadership, and population, promoted by the two AI superpowers in broad agreement with most middle powers. Here's why nothing else comes close.
The Starting Assumption
Let's stipulate something ambitious: the United States and China have reached a temporary emergency agreement on AI. Perhaps triggered by a near-miss incident, perhaps by political alignment at a 2026 Trump-Xi summit, perhaps by mounting public pressure — 63% of US voters already believe humans will lose control of AI.
This bilateral deal buys time. It establishes emergency constraints on the most dangerous capabilities and mutual verification mechanisms. But everyone involved knows it's temporary. A deal between two powers, however dominant, cannot govern a technology that will reshape every nation on Earth. The bilateral agreement must evolve into a comprehensive global treaty — one that is enforceable, legitimate, and fast enough to outpace the race to Artificial Superintelligence.
The question isn't whether we need a global treaty. It's which treaty-making process can actually deliver one. And when you honestly evaluate the alternatives, the field narrows to one.
Why UN-Style Multilateralism Will Fail
The dominant model for international treaty-making is some variant of UN multilateralism: large conferences, consensus-seeking, years of preparatory meetings, and binding agreements requiring unanimous or near-unanimous approval. This model has a long track record of failure on existential challenges.
The UN Framework Convention on Climate Change was signed in 1992. The Paris Agreement came in 2015 — 23 years later — and it still lacks binding enforcement mechanisms. The global temperature keeps rising. The Comprehensive Nuclear-Test-Ban Treaty was opened for signature in 1996 and still hasn't entered into force, nearly 30 years on. The UN Conference on Disarmament has been effectively deadlocked since the 1990s.
The structural reasons for these failures apply with even greater force to AI. UN treaty-making is characterized by unanimous non-binding statements, unstructured summits co-opted by a few powerful states, and big-power vetoes that can kill any agreement. With ASI timelines plausibly measured in years rather than decades, we cannot afford a process that took two decades to produce the (weak) Paris climate accord.
And there's a deeper problem. The UN's formal institutions — the General Assembly, the Security Council, the specialized agencies — lack the mandate, representativity, and decision-making structure to lead a global constituent process for AI. The G7 excludes China. The G20 operates by consensus and has no enforcement capability. The AI Safety Summits are useful talk shops but structurally incapable of producing binding agreements. As our Strategic Memo documents, these initiatives are largely smokescreens — politically useful distractions from the bilateral power dynamics that actually determine outcomes.
Why "Plurilateral" Club Models Won't Work Either
A popular alternative is the club model: a small group of like-minded nations negotiates an agreement, then invites others to join. Think of the G7's Hiroshima AI Process, or proposals for a "NATO for AI" among democratic allies.
The problem is twofold. First, any club that excludes China is negotiating about AI governance without the country that controls roughly half of the world's frontier AI research talent and a rapidly growing share of compute. It's like negotiating a climate treaty that excludes the largest emitter. The resulting agreement would be unenforceable against the very actor it most needs to constrain.
Second, club models suffer from legitimacy deficits that undermine global compliance. A treaty designed by wealthy democracies and presented to the rest of the world as a fait accompli will be resisted — fairly — by the Global South and middle powers who were excluded from its design. China is already exploiting this dynamic, positioning itself as the champion of inclusive AI governance through UN resolutions co-sponsored by 140+ countries. A club treaty would hand Beijing a propaganda victory while failing to achieve actual governance.
Why Expanding the Bilateral Deal Directly Won't Work
Could the US and China simply expand their emergency agreement, inviting other nations to join one by one? In theory, yes. In practice, this approach hits a wall fast.
Every additional state added to a bilateral negotiation effectively gains a veto — or at least the ability to delay and dilute. The process becomes progressively slower and weaker with each new participant. The history of nuclear arms control is instructive: the bilateral US-Soviet framework produced real constraints (SALT, START), but every attempt to expand it multilaterally produced weaker results or outright failure.
More fundamentally, a bilateral-plus model locks in the values, priorities, and biases of the two superpowers into a framework that will govern all of humanity. Even nations sympathetic to the US or China will resist a treaty in which they had no formal decision-making role. For the treaty to be enforceable without catastrophic conflict, it needs buy-in from at least another dozen powerful nations — not as signatories to someone else's deal, but as participants in designing it.
Why Voluntary Frameworks Are Worse Than Nothing
The weakest alternative is also the one most commonly proposed: voluntary commitments, industry pledges, and non-binding norms, such as the White House AI commitments of 2023, the Bletchley Declaration, and the various "responsible AI" pledges from labs.
These evaporate under competitive pressure. Always. When billions of dollars and geopolitical dominance are at stake, voluntary commitments are worth exactly what they cost to sign: nothing. The recent US-China science and technology cooperation agreement, while positive, explicitly excludes AI and semiconductors — precisely the areas where binding agreements are most needed.
Voluntary frameworks also create a dangerous illusion of governance that reduces pressure for binding action. Every joint statement and industry pledge tells policymakers that "something is being done" when in fact nothing enforceable exists. In the narrow window before ASI, this illusion may be the most dangerous outcome of all.
The Case for a Realist Constitutional Convention
Every alternative fails for predictable, structural reasons: too slow, too exclusive, too weak, too easily vetoed, or too lacking in legitimacy. There is one model that addresses all of these failure modes simultaneously. It has proven historical precedent. And it can realistically be launched in the window opened by a US-China emergency deal.
That model is a Realist Global Constitutional Convention for AI.
Why "Realist" — and Why It Matters
Let's be precise about terminology, because a naïve constitutional convention would be a non-starter.
If you imagine a one-nation-one-vote global assembly where every country has equal say over the future of AI, no superpower would participate — and rightly so. Over three billion people remain illiterate or without internet access. A purely idealistic global democracy, however appealing in the abstract, would produce a process that no major power could accept and a treaty that no one could enforce. It would be theater.
What is feasible — and what has worked repeatedly in history — is a realist constitutional convention. And the entire difference comes down to who designs the voting rules and how.
The Mechanism: Pre-Negotiated Mandate
Here is the core innovation. Before the convention begins, the two AI superpowers together with a majority of middle powers negotiate a mandate — the rules governing the entire process. This mandate defines:
Vote weighting based on GDP, technological leadership, and population. The US and China together hold roughly 25-30% of votes — enough to lead, never enough to dictate. Middle powers like the EU, India, Japan, the UK, Brazil, and South Korea collectively hold the balance.
Hard deadlines: 12-18 months maximum. When the clock runs out, voting proceeds — first seeking consensus, then supermajority, and then, if necessary, simple majority. No open-ended negotiations. Silicon Valley speed, not Geneva pace.
No vetoes: Supermajority rules mean no single nation — not even the United States — can block the outcome. The subsidiarity principle keeps authority local where possible, global only where necessary.
Scope constraints: The mandate defines what the convention can and cannot agree to, ensuring that both the process and its range of possible outcomes are acceptable to the nations that matter most.
The mandate is agreed before anyone sits down to negotiate substance. No one walks in blind. And because the two superpowers and middle powers jointly design these rules, every participant enters the convention knowing that the process reflects actual power realities — not utopian fiction.
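To make the arithmetic of the mandate concrete, here is a toy sketch of how weighted voting with deadline escalation could work. Every numeric share, the equal-thirds blend, and the 30% superpower cap are illustrative assumptions for this sketch, not figures from the proposal itself:

```python
# Toy sketch of mandate-style weighted voting with deadline escalation.
# All shares, the equal-thirds blend, and the 30% cap are illustrative
# assumptions, not the proposal's actual formula.

def vote_weight(gdp_share, tech_share, pop_share):
    """Blend GDP, technological leadership, and population shares equally."""
    return (gdp_share + tech_share + pop_share) / 3.0

# Hypothetical world shares; each factor column sums to 1.0 across blocs.
raw = {
    "US":     vote_weight(0.26, 0.35, 0.04),
    "China":  vote_weight(0.17, 0.30, 0.18),
    "EU":     vote_weight(0.17, 0.12, 0.06),
    "India":  vote_weight(0.03, 0.05, 0.18),
    "Others": vote_weight(0.37, 0.18, 0.54),
}

# Cap the two superpowers' combined weight at the top of the 25-30% band,
# redistributing the surplus proportionally among everyone else.
CAP = 0.30
weights = dict(raw)
combined = weights["US"] + weights["China"]
if combined > CAP:
    for k in ("US", "China"):
        weights[k] *= CAP / combined
    rest = [k for k in weights if k not in ("US", "China")]
    rest_total = sum(weights[k] for k in rest)
    for k in rest:
        weights[k] *= (1.0 - CAP) / rest_total

def passes(coalition, threshold):
    """Does a coalition of blocs clear the given weighted threshold?"""
    return sum(weights[b] for b in coalition) >= threshold

# Deadline escalation: consensus first, then supermajority, then majority.
stages = [("consensus", 1.0), ("supermajority", 2.0 / 3.0), ("majority", 0.5)]

coalition = ["US", "China", "EU", "India"]
for stage, threshold in stages:
    print(f"{stage}: {passes(coalition, threshold)}")
```

In this toy run, a US-China-EU-India coalition clears a simple majority but not a two-thirds supermajority — illustrating the intended balance: the superpowers can lead a winning coalition but cannot dictate alone, and no single bloc holds a veto.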
Why This Model — and Only This Model — Meets Every Requirement
Consider what an adequate global AI treaty requires:
Speed. ASI timelines may be 3-10 years. Only a time-bound process with hard deadlines can match this pace. The constitutional convention model delivers: Philadelphia 1787 produced the US Constitution in four months.
Legitimacy. A treaty must be accepted by enough of the world to be enforceable. Club models and bilateral expansions fail here. A convention where every nation participates, with transparent weighted voting, produces the broad legitimacy that sustains compliance.
Realism. Superpowers won't join a process where they can be outvoted by a coalition of small states. Weighted voting solves this: the US and China retain outsize influence, but cannot dictate. Middle powers hold the balance, preventing a US-China duopoly.
Enforceability. A treaty needs teeth. The convention model produces binding agreements by supermajority — not the voluntary pledges and consensus-dependent statements that characterize UN processes. And because the superpowers helped design the rules, they have ownership of the outcome.
Protection against value lock-in. A treaty designed only by two superpowers risks locking in their values and biases for all of humanity. Including at least 8-10 key states with full decision-making rights — and all others with formal weighted voting — diversifies the perspectives shaping AI's future.
No other treaty-making model satisfies all five requirements simultaneously. UN multilateralism fails on speed and enforceability. Club models fail on legitimacy. Bilateral expansion fails on legitimacy and value lock-in. Voluntary frameworks fail on everything.
The Historical Track Record
This isn't theory. The constituent assembly model has produced some of the most consequential and durable governance frameworks in history — and each time, it worked because the most powerful participants helped design the rules first:
The 1787 US Constitutional Convention: Fifty-five delegates, four months, thirteen fractious states with wildly different economies and populations. The result was the most consequential governance document in modern history. It worked because the large states (Virginia, Pennsylvania) and the small states settled the representation rules — the Great Compromise — inside the convention itself, before the final text was drafted.
The Swiss Confederation (1848), the German Federal Republic (1949), the European Union (from Rome 1957 to Maastricht 1993), the African Union (2002) — all emerged from constituent assembly processes where powerful actors shaped the rules, but no single actor could veto the outcome.
As OpenAI CEO Sam Altman said in March 2023, an intergovernmental constituent assembly modeled on the 1787 Convention would be the "platonic ideal" for AI governance. He was right then. The question is whether we act on it before the window closes.
The Window
Four Trump-Xi summits are planned for 2026. 77% of US voters support a strong international AI treaty. Over 400 leading scientists demand binding agreements by end of 2026. Xi has repeatedly called for global AI governance. The political, scientific, and public opinion conditions for action are aligned as never before.
If a US-China emergency deal materializes, the next question will be immediate and urgent: how do we turn a temporary bilateral agreement into a permanent global treaty? Every conventional answer to that question leads to failure. The only process that can deliver — on time, with legitimacy, with teeth — is a Realist Constitutional Convention for AI.
The alternative is not "more time to negotiate." The alternative is no treaty at all — and an uncontrolled race to build the most powerful technology in human history, governed by nothing but competitive pressure and hope.
We know how that ends.
Learn more at cbpai.org. Read our 356-page Strategic Memo for the full analysis of treaty-making models, enforcement mechanisms, and influencer convergence scenarios.