A Credible Path to
Global AI Governance
Leading AI labs assign a 10-25% probability to catastrophic outcomes from advanced AI. Governance has not kept pace. The current trajectory, an unregulated US-China race, increases risk for everyone, raising the prospect of loss of control, nuclear conflict, or immense concentrations of power and wealth.
We're building a coalition of key potential influencers of Trump's AI policy to change that trajectory.
(If you are an AI safety expert, click here for a donation case tailored to you.)
Our Approach
We target the critical leverage point: key influencers of US AI policy in the current administration. We've identified eight figures with outsized influence—JD Vance, Elon Musk, Peter Thiel, Sam Altman, Steve Bannon, Tucker Carlson, Joe Rogan, and Pope Leo XIV—and developed tailored persuasion strategies for each.
While most AI governance efforts focus on technical research, voluntary commitments, or EU-style regulation, we focus on the hardest target: a binding international treaty with verification mechanisms. History suggests transformative dual-use technologies ultimately require hard governance.
"This Will Never Work—Serious Treaties Don't Happen"
You are right to be skeptical. But the historical record offers more hope than cynics admit.
The lesser precedents. Several treaties have successfully constrained dangerous technologies through superpower cooperation: The Antarctic Treaty (1959) demilitarized an entire continent at the height of the Cold War. The Outer Space Treaty (1967) banned nuclear weapons in orbit when the space race was at its most intense. The Montreal Protocol (1987) reversed ozone depletion within a decade. These succeeded because both sides recognized that unconstrained competition served no one's interests.
But the greater precedent is the Baruch Plan itself. And here the lesson is not failure but near-success.
In June 1946, the United States—holding a nuclear monopoly—proposed to surrender that advantage to international control. The plan passed the UN Atomic Energy Commission 10-0 (with two abstentions). It failed only at the Security Council, where the Soviet veto killed it.
Why did it fail? Not because the idea was wrong:
The treaty-making process was too slow. By the time serious negotiations began, the Soviets were months from their own bomb.
Verification technology didn't exist. 1946 offered no way to confirm compliance without intrusive inspections.
The political coalition was too narrow. Truman supported it, but key advisors quietly undermined it.
Every one of these failure modes is addressable today. Verification technology has transformed: satellite imagery, compute monitoring, AI-assisted inspection. Leaders can move faster than 1946 diplomats. And the coalition-building we propose specifically targets the failure mode of insufficient political alignment.
If Truman could propose surrendering America's greatest strategic advantage at the peak of its nuclear power, Trump could propose sharing the burden of AI's greatest strategic risk.
(Refer to our new 354-page Strategic Memo 2.5 for details)
"Won't This Lead to Authoritarian Dystopia?"
This is Peter Thiel's stated primary concern, and likely the reason behind most AI lab leaders' reluctance to support strong governance. We take it seriously.
The paradox of mutual distrust. Trump, Xi, and other leaders who would negotiate this treaty are not paragons of democratic governance. But their distrust of each other is a feature, not a bug. For a treaty to be credible to them, it must include enforcement mechanisms that none of them can circumvent unilaterally.
This requires extreme transparency in monitoring, self-enforcing mechanisms that don't depend on any single party's goodwill, and distributed protocols requiring consensus across multiple nations. To trust a system that constrains their rivals, each leader must accept that it also constrains them.
China's paradoxical interest in democratic global governance. Beijing would never accept a treaty governed by a single leader-for-life model—that would threaten Chinese sovereignty. China has strong incentives to push for governance structures that prevent any single nation (including the US) from dominating.
Structural safeguards we advocate:
Subsidiarity principles limiting treaty scope to genuine global necessities
Federated oversight preventing any single point of control
Anti-bureaucracy mechanisms (drawing on Thiel/Trump/Sacks critiques) ensuring competence
Pope Leo XIV's moral authority keeping human dignity at the center of the framework
The alternative is worse. A world dominated by unaligned ASI or by China's surveillance state model would offer no democratic safeguards at all. The treaty path is risky. The no-treaty path is catastrophic.
(Refer to our new 354-page Strategic Memo 2.5 for details)
"Won't This Fail to Prevent ASI Anyway?"
Perhaps. No governance system offers certainty. But the question is comparative: does coordinated action improve our odds versus the status quo?
The current trajectory—multiple actors racing with minimal coordination, insufficient investment in safety, verification impossible across borders—is the worst of all worlds for preventing catastrophic AI outcomes.
A treaty framework enables: shared safety standards that don't disadvantage compliant actors, international verification of compute and training runs, coordinated response to emergent risks, and legitimate enforcement mechanisms against defectors.
77% of US voters support a strong international AI treaty (University of Maryland, 2024). The public understands what many elites resist: that unconstrained competition serves no one's interests when the downside is extinction.
(Refer to our new 354-page Strategic Memo 2.5 for details)
"Won't This Kill AI's Potential for Human Flourishing?"
This is the "hyper-Luddite" concern—that governance will strangle the golden goose.
We are not anti-AI. We are pro-human-controlled AI.
The goal is not to prevent AI development but to ensure it remains under meaningful human oversight. The Baruch Plan didn't propose eliminating nuclear energy—it proposed international control to share benefits equitably while preventing catastrophe.
Similarly, a proper AI treaty would:
Preserve innovation within safety boundaries
Distribute AI benefits more equitably than the current winner-takes-all race
Actually accelerate beneficial applications by removing the "move fast and break things" pressure
Ensure that AI serves humanity rather than a handful of corporations or states
The uncontrolled race is not producing optimal flourishing—it's producing anxiety, arms-race dynamics, and underinvestment in safety. Coordinated governance is the path to sustainable abundance.
(Refer to our new 354-page Strategic Memo 2.5 for details)
"But Can You Actually Get Access?"
This is the right question. Our answer rests on credibility transfers, not cold outreach.
We're not sending emails into the void. We've mapped the introducer networks for each target influencer—people with direct personal relationships, relevant domain expertise, and political acceptability to the Trump administration.
Our introducer categories include:
AI lab policy executives (heads of safety, policy, and government affairs at frontier labs who maintain independent relationships with policymakers)
National security experts with credibility on treaty verification
Religious leaders with Vatican connections (opening the Pope Leo XIV pathway)
Trump-aligned media voices and congressional members
AI safety researchers whose warnings provide political cover
We've already secured:
Seed funding from Jaan Tallinn via Survival and Flourishing Fund ($60K)
100+ coalition members, with advisors from the UN, NSA, WEF, Yale, and Princeton
Active field engagement at OpenAI, Anthropic, and DeepMind headquarters
Testimonials from senior figures in AI governance, diplomacy, and security
The fundamental insight: we're not seeking mere connections but credibility transfers. An introduction from a trusted source pre-validates the seriousness of our proposal before the influencer reads a single word.
The Window
The 2025-2026 window is narrow:
New administration still forming policy. Key appointments (David Sacks, Michael Kratsios) have backgrounds sympathetic to strategic engagement.
Anticipated Trump-Xi engagement creates a natural venue for treaty discussion.
Shifting elite opinion. There is growing recognition that an unregulated race benefits no one.
Once policy hardens, changing course becomes exponentially harder.
Funding Needs: $50K-$400K
Every dollar goes to the mission: no fancy offices, no bloated staff. We operate at ~$7.5K/month, a fraction of what typical DC policy organizations spend.
$50,000 — Extend and Amplify
Extend our runway into mid-2026. Add one outreach support staff member. Engage part-time consultants to expand and sharpen the Memo. Launch highly targeted op-eds and podcast appearances in Washington. Fund critical travel.
$150,000 — Building Momentum
Sustain twelve months of growth. Hire two part-time consultants. Organize closed-door dinners in Washington, DC, and at Mar-a-Lago. Support targeted media framing the treaty as “peace through strength.” Expand NGO and think tank partnerships. Prepare a more extensive second persuasion tour.
$400,000 — Breakthrough Scale
Transform our capacity. Build unstoppable momentum for Trump to champion the Deal of the Century. Move from one to three full-time staff. Launch a highly targeted, discreet, multi-channel campaign across podcasts, op-eds, and direct outreach. Deepen the scientific and diplomatic scope of new versions of the Strategic Memo.
Donate Now
Email us: cbpai@trustlesscomputing.org
Donate directly:
Via credit card or PayPal on Patreon
Via credit card, wire transfer or crypto on the AI safety funding portal Manifund
Via wire transfer:
Recipient: Coalition for a Baruch Plan for AI ETS
IBAN: BE17 9054 0557 7821
Swift/BIC: TRWIBEB1XXX
Bank: WISE, Rue du Trône 100, 3rd floor, Brussels, 1050, Belgium
For donations above $50,000, reach out directly to discuss how your contribution maps to specific deliverables.
Every contribution matters. $25 helps cover research costs. $100 supports travel to key meetings. $500 funds a month of strategic communications.
Questions? Want to learn more? Contact us at rufo@trustlesscomputing.org