An Open Letter to Drafters and Signers of Recent Calls for a Global AI Treaty
By Rufo Guerreschi, Executive Director, Coalition for a Baruch Plan for AI (CBPAI)
February 5th, 2026
In Fall 2025, over 400 prominent figures, including many of the world's top AI scientists, two dozen Nobel and Turing laureates, and dozens of leading NGOs, signed multiple calls for the urgent establishment of global AI treaties to prevent imminent and severe AI safety risks, while mostly embracing a similar humanist worldview:
The Global Call for AI Red Lines demanded an “international agreement on red lines for AI — ensuring they are operational, with robust enforcement mechanisms — by the end of 2026,” able to prevent the worst risks of AI, including loss of control.
The Statement on Superintelligence states that its signatories “call for a prohibition on the development of superintelligence, not lifted before there is (1) broad scientific consensus that it will be done safely and controllably, and (2) strong public buy-in.”
The Global Appeal for Peaceful Human Coexistence and Shared Responsibility, coordinated by Paolo Benanti, lead AI advisor to Pope Leo XIV, declares that the challenge “demands moral courage, meaningful accountability mechanisms, farsighted leadership from all sectors of society and [a] binding international treaty establishing red lines and an independent oversight institution with enforcement powers.”
Drafters of the Vatican-led Global Appeal for Peaceful Human Coexistence and Shared Responsibility
These calls build on earlier calls for a bold global AI treaty, led by volunteer-based European micro-NGOs. It started in 2023 with AItreaty.org (led by Tolga Bilge), followed in 2024 by the Trustless Computing Association’s Open Call for the Harnessing AI Risk Initiative (led by Rufo Guerreschi), and in December 2024 by the Open Call for a Coalition for a Baruch Plan for AI, promoted by both of these organizations and four more.
From Calls to Action: Praiseworthy Efforts, but a Missing Link
These calls are not just sitting on paper. Several serious initiatives are working to translate them into institutional reality. All are praiseworthy — and all share the same structural limitation.
The United Nations has moved faster than usual. In August 2025, the General Assembly established a 40-member Independent International Scientific Panel on AI and a Global Dialogue on AI Governance, rooted in the recommendations of Secretary-General Guterres's High-Level Advisory Body. Guterres has said since 2023 that he would favor an AI agency "inspired by what the IAEA is today" — but he also acknowledged that "only member states can create it, not the Secretariat of the United Nations." The UN can provide the global architecture. What it cannot do is compel the two nations that are madly racing to build superintelligence to participate. As Michael Kratsios, speaking for the Trump administration at the UN Security Council, stated, the US "totally rejects all efforts by international bodies to assert centralized control and global governance of AI," adding that "the path to this world is found" in the "prudence and cooperation of statesmen."
Coalitions of non-superpower nations are forming. DeepMind CEO Demis Hassabis has called for the UK, France, Canada, and Switzerland to band together as a "counterweight to the two global superpowers" on AI governance. The Future of Life Institute — the world's leading AI safety advocacy organization — is working with governments on international AI governance frameworks and advising multiple nations on treaty design. These are important groundwork efforts, and they could become essential complements to a superpower-led initiative.
China has moved most aggressively. In July 2025, Premier Li Qiang formally proposed the World AI Cooperation Organization (WAICO), a Shanghai-based international body to coordinate global AI governance. President Xi further championed the initiative at the November 2025 APEC summit. WAICO is ambitious in scope, but without US participation, it would be a governance body that excludes the nation with the most advanced AI labs in the world.
AI Safety Summits (Bletchley, Seoul, Paris, soon Delhi) have fostered valuable international dialogue. But they produce communiqués, not enforcement mechanisms. At the February 2025 Paris Summit, the US and UK refused to sign even a non-binding communiqué on AI safety.
Each of these initiatives is doing valuable work. But they all face the same bottleneck: without joint US-China leadership, a global AI treaty is either unenforceable or irrelevant. The compute, the talent, and the frontier labs are concentrated in two countries. Any governance framework that doesn't bind both is a framework that governs nothing that matters.
Worse, some of these initiatives carry a paradoxical risk. A publicly launched coalition of willing states pursuing a global AI treaty — however well-intentioned — could actually steal the thunder from Trump and Xi, making it politically harder for either leader to champion a treaty that appears to have originated from others. As our Strategic Memo v2.6 argues: coalitions of non-superpower nations launched as substitutes for superpower leadership are counterproductive. Launched as complements following a US-China declaration of intent, they become invaluable.
The Single Path — and the One Uncomfortable Truth
So the logic is straightforward: a binding AI treaty requires US-China co-leadership. Xi Jinping is already there. Since launching the Global AI Governance Initiative in October 2023, China has signed the Bletchley Declaration acknowledging AI risks, proposed WAICO with a 13-point action plan, and agreed with Biden that humans, not AI, must control nuclear weapons; at Davos 2025, Vice-Premier Ding Xuexiang warned that unregulated AI is a "grey rhino." Unlike many Western governments, China has already implemented binding domestic AI regulations, including pre-deployment safety assessments and watermark mandates. The evidence suggests Beijing would genuinely engage in a serious treaty — if the United States offered credible partnership.
The bottleneck is Donald Trump.
This is where most people stop. The assumption — understandable, widely held — is that Trump's radical unilateralism, his anti-regulatory instincts, and his style make this impossible. Most AI governance organizations have simply written off this path. They focus instead on what seems achievable within existing institutional channels.
We believe this is a mistake — not because convincing Trump is easy, but because every other path leads to a dead end without US leadership.
And convincing Trump doesn't start with public pressure. His voter base is already there: 63% of US voters believe it's likely that "humans won't be able to control AI anymore." 77% of all US voters support a strong international AI treaty. Among Republicans specifically, 78% believe AI could eventually pose an existential threat. The political soil is already fertile.
What's missing is someone working the people around Trump — on pragmatic grounds, in language he responds to, with arguments tailored to each influencer's specific worldview. That's the gap. And almost nobody is filling it.
That's precisely what we're doing.
It's More Feasible Than You Think
Most people in the AI safety community assume that Trump leading a global AI treaty is a fantasy. We've spent fifteen months pressure-testing that assumption. The evidence points the other way.
Trump is a pragmatist, not an ideologue. He has shown opposition to domestic AI regulation, but has never explicitly opposed a deal with China on AI. His historically low approval ratings — hovering around 35-40% in early 2026 — create hunger for a transformative achievement. His foreign policy record — the Abraham Accords, the Kim Jong-un summit, pressuring European allies on defense spending — demonstrates that he can and does pursue deals that seemed impossible beforehand.
His psychology — the appetite for big, legacy-defining moves, the instinct for unpredictable pivots, the negotiating style that thrives on ambiguity before dramatic reveals — actually favors rather than precludes bold treaty-making. He can walk into his meeting with President Xi this April with maximum ambiguity, then pivot dramatically to claim a "historic deal" on his own terms.
In this blog post, we detail 22 specific reasons why Trump could be persuaded: from his pragmatic character, to the political opportunity created by voters' fears, to China's demonstrated willingness to engage, to the fact that a bold AI treaty could help him contain rivals like Musk, whose xAI firm increasingly challenges government authority.
But here's our most counterintuitive finding: the key influencers around Trump are far more aligned with treaty objectives than they appear.
After eleven months of intensive analysis — reviewing thousands of pages of interviews, speeches, podcasts, and writings across a dozen key figures — we've discovered that most of them are motivated more by philosophy, values, and legacy than by wealth or power per se. This creates openings that pure interest-based lobbying would miss.
JD Vance has publicly deferred to Pope Leo XIV on AI ethics, stating, "The American government is not equipped to provide moral leadership… I think the Church is." He genuinely believes in protecting human dignity from technological displacement — a core treaty objective.
Steve Bannon has delivered the most explicit MAGA endorsement of AI governance to date, calling for immediate AI treaties on his War Room podcast: "I would have treaties, and stop this immediately." He frames unregulated Silicon Valley as both neoliberal globalism and dystopian post-humanism — language that resonates deeply with Trump's base.
Sam Altman has called for international AI governance and acknowledged the need for "world governance" of superintelligence. Dario Amodei has warned consistently about extinction risks of 10-20% and written extensively about the unacceptability of deploying AI systems we cannot understand.
Pope Leo XIV chose his papal name explicitly to signal that AI governance will be central to his papacy. His top AI advisor, Paolo Benanti, conceived and led the Coexistence Appeal — one of the very calls this letter addresses.
These figures fall into three philosophical camps that are more compatible than they appear:
Conservative humanists (Vance, Bannon, Pope Leo XIV, Gabbard, Rubio) — already philosophically aligned with treaty objectives. They see uncontrolled AI as a threat to human dignity, meaningful work, and social cohesion.
Techno-humanists (Altman, Amodei, Hassabis, Suleyman) — trapped in a race they privately don't want. They share genuine safety concerns but feel locked into competitive dynamics. Show them the humanist alliance has momentum and that the treaty architecture preserves innovation space, and their calculus shifts.
Trans/post-humanists (Thiel, Musk) — harder to persuade, but not hopeless. Musk was the loudest voice calling for AI regulation in 2023-2024 before pivoting to racing. That wasn't a philosophical conversion — it was a probability reassessment that could shift again.
The first two camps include a majority of key influencers. A shift in even a few of the eight key AI predictions — on the feasibility of enforcement, the probability of loss of control, the viability of treaty architecture — could cascade into an informal alliance that sways Trump. As our Strategic Memo documents: demonstrating convergence accelerates convergence. No one wants to be first, but everyone wants to be part of a winning coalition.
The political window is now. Trump's anticipated meetings with Xi in 2026, starting in April, create a once-in-a-generation opportunity.
The Deal of the Century
The Coalition for a Baruch Plan for AI (CBPAI) is — to our knowledge — the only initiative actively pursuing this path: a precision persuasion campaign targeting key influencers of Trump's AI policy to champion a US-China-led global AI treaty before artificial superintelligence outpaces governance.
We call it The Deal of the Century. The name is deliberate. Trump responds to deals, not declarations.
Our approach draws on the precedent of the 1946 Baruch Plan, which proposed placing all nuclear weapons research and arsenals under exclusive international control. Presented to the United Nations on President Truman's behalf by Bernard Baruch — barely one hour after the birth of Donald Trump — it was approved 10-0 by the UN Atomic Energy Commission before being vetoed by the Soviet Union.
Rather than the product of a visionary president, it was the product of political circumstances: a few key influencers of Truman's nuclear policy — led by Robert Oppenheimer and Dean Acheson — convinced a pragmatic, skeptical president of its irrefutable logic. The key reasons the Baruch Plan failed — the lack of proportionate diplomatic bandwidth, of a fitting treaty-making model, and of mutually trusted enforcement — are now addressable.
Over the past 15 months, we've built:
A 356-page Strategic Memo (v2.6) — a treasure trove synthesizing 667+ sources with detailed persuasion profiles for each target influencer: their interests, philosophies, psychology, and key AI predictions. It includes hyper-personalized "primary hooks" — the 2-3 arguments most likely to shift each influencer's calculus — and maps of coalition synergies showing how persuading one figure accelerates the next. No other organization has assembled anything comparable.
85+ direct pathway contacts to influencers, established during our October 2025 US Persuasion Tour across the Bay Area and Washington DC. This includes 23 AI lab officials at OpenAI, Anthropic, and DeepMind, 18 national security contacts, and direct introducer pathways to 2 of our 10 primary targets — including the top AI advisor or head of policy of two targeted influencers. (2025 Achievements)
A coalition of 40+ expert advisors from the UN, NSA, World Economic Forum, Yale, Princeton, and 10 international NGOs — plus 24 contributors to the Strategic Memo. (Team and Contributors)
All on $72,000 total funding — roughly $180 per high-value meeting. We operate at ~$7,500/month, a fraction of a typical DC policy NGO.
Our 2026 Roadmap targets 150+ introducer engagements across four hubs (Bay Area, DC, Mar-a-Lago, Rome), 30+ direct engagements with influencers or their key staff, 5-8 substantive influencer meetings, Strategic Memo updates timed to the summit window, and potential closed-door convenings in Rome to catalyze a humanist AI alliance — uniting figures concerned about AI's threat to human dignity with Vatican leadership and sympathetic tech leaders, leveraging the Vatican's unique moral authority and the Pope's explicit AI governance mandate.
What We Ask of You
If you signed the AI Red Lines call, the Statement on Superintelligence, the Global Appeal for Peaceful Human Coexistence and Shared Responsibility, or any of the earlier calls, you already agree on the destination. We're building the road. Here's how you can help pave it.
Leave a testimonial. Your endorsement strengthens the credibility of the only initiative we know of that is actively pursuing treaty-making with the people who hold the power to make it happen. A sentence from a recognized signer of these appeals carries weight with influencers, introducers, and funders alike.
Sign the Open Call for a Coalition for a Baruch Plan for AI. Over 100 experts have already endorsed this call. Adding your name — especially as a signer of AI Red Lines or the Coexistence Appeal — signals growing convergence across the AI safety community.
Apply to join as an advisor. We're looking for expertise in AI policy, diplomacy, national security, and treaty design — or access to influencer networks. Advisors contribute to Strategic Memo refinements, help shape convergence strategies, and open doors we can't reach alone.
Join our Rome convenings. In June 2026, we're planning closed-door meetings in Rome — and potentially the Vatican — bringing together key introducers, advisors, and sympathetic influencers around the Pope's explicit AI governance mandate. These gatherings will leverage the Vatican's unique moral authority and convening power to catalyze the humanist AI alliance at the heart of our strategy. Participation is possible both in person and remotely. See our 2026 Roadmap for details.
Donate — or introduce us to funders. We need $100,000–$400,000 to scale through the critical 2026 political windows: 2-3 hires to execute coordinated outreach across DC, Bay Area, Rome, and Mar-a-Lago. Our capital efficiency is exceptional — ~$7,500/month, ~$180 per high-value meeting, $72,000 total to build everything we've built so far. Every dollar goes directly to the mission. Donate here | Case for AI Safety Experts | Case for Concerned Citizens
Apply for any of the above here.
Calls and appeals establish moral authority. Scientific consensus shifts the Overton window. But the next step isn't another signature — it's a strategy to reach the handful of people who will decide whether humanity governs AI or AI governs humanity.
The Deal of the Century won't negotiate itself.