The Deal of the Century

This far into the race, only an extraordinarily bold and well-thought-out AI treaty, led by Trump and Xi, can prevent immense concentrations of power, superpower conflict, human replacement, or extinction.

Executive Summary

(Last Updated on April 22nd, 2026)

Elevator Pitch

An initiative by a coalition of 10 NGOs, 20 advisors, and 24 contributors to persuade key potential influencers of Trump's AI policy to champion a bold, timely, and proper US-China-led AI treaty. Not just any treaty, but one that reliably prevents loss of human control and grave misuse, prevents unilateral dominance and global concentration of power, future-proofs US and Chinese economic leadership, and, within the US, enacts durable measures to fairly share AI wealth and power and to protect liberty.

One-Pager

The race to Artificial Superintelligence is forcing humanity toward one of three irreversible outcomes: loss of control to AI, entrenched global authoritarianism, or a safe future secured by a bold, timely, and well-designed AI treaty. Intermediate results are highly unlikely.

Given the extreme global concentration of AI power, Xi's consistent calls for global AI governance, and ever-shorter ASI timelines — like it or not, President Trump's future AI policy will largely determine humanity's future.

Trump's AI policy is currently dominated by an ultra-libertarian, hands-off, and de facto post-humanist stance toward the race to ASI, largely inspired by Peter Thiel and key US officials close to him (Kratsios, Sacks, Helberg, Vance). This approach is seemingly supported by half of AI lab leaders (Page, Brin, Huang, Zuckerberg). Thiel is fiercely against an AI treaty, warning that it risks leading to global authoritarianism. While this is a very real concern, he ignores that if AI remains unregulated — and does not result in extinction, which he is unconcerned about — it would almost certainly lead to a global autocracy.

Yet this approach is a distinct minority view among citizens and other power brokers. By now, 63% of US voters believe it's likely that "humans won't be able to control it anymore," 53% believe "AI will destroy humanity" is somewhat or very likely, and 77% of all US voters support a strong international AI treaty. Most potential influencers of Trump's AI policy are increasingly concerned or calling for a treaty — including Bannon, Carlson, Rogan, and most leaders of frontier labs (Altman, Amodei, Hassabis, Suleyman) — and, in part, Sacks and Musk. Others, like Rubio and Gabbard, are likely to join out of conviction and political expediency.

Trump himself is far from this radical ideology; he is the ultimate pragmatic politician. Xi has repeatedly called for global AI governance. Four Trump-Xi summits are planned for 2026, starting in May. Trump's approval ratings have been consistently low, in the thirties. This political liability can be turned into the opportunity of a lifetime.

What's missing is not public support, but a concerted pitch by a critical mass of people Trump trusts, standing up for the silent majority while addressing Thiel's stated concentration-of-power concerns head-on and in fine detail.

We identified an actionable path to such a treaty that we believe can succeed and best appeal to Trump:

  1. Mitigating the Safety Risks. An initial, extensible bilateral US-China deal to mitigate globally the most imminent AI safety and security risks in the chemical and nuclear domains, and to frame a wider process — led by a "cooperation of statesmen," as Trump called for at the UN Security Council via his OSTP director. Not a typical, inconclusive UN treaty-making process, but a bilateral-first approach similar to the one Secretary of War Stimson proposed to President Truman in September 1945, which would have avoided the failure of the June 1946 Baruch Plan.

  2. US Leadership and Sharing AI Wealth and Power. An extension of Trump's February 2025 U.S. Sovereign Wealth Fund and his November 2025 Genesis Mission will ensure not only that the US continues to lead, but also that every citizen fairly and durably shares in the power and wealth that AI will create. The latter has been loudly called for by US voters, MAGA leaders, OpenAI, Anthropic, One Project, The Human Movement, and even AI lab investors afraid of social unrest. Making such terms durable will require their inclusion in an international treaty that licenses AI labs.

Trump has a chance to future-proof US economic leadership, prevent Chinese dominance, avoid immense risks to his life and his family, and achieve unparalleled prestige and grandeur.

Under a strikingly similar political context, another highly pragmatic US president, Harry Truman, presented to the UN (on the very day Trump was born, June 14th, 1946) what is still by far history’s boldest treaty proposal for nuclear weapons and energy.

Trump has an opportunity to fulfill his destiny, succeed where Truman failed and leave a legacy worth more than 100 Nobel Peace Prizes, leading him to be regarded as one of the greatest statesmen when he retires in January 2029.

Our mission is to help a critical mass of these potential influencers come together, by helping them deepen their resolve and providing actionable pathways. We aim to facilitate their convergence toward a shared pitch to Trump.

We execute via direct outreach toward them, their relevant staff, advisors, and envoys, and introducers to them; an Appeal (draft), Open Letter, and 356-page Strategic Memo (v2.6) for President Trump; as well as a planned closed-door roundtable to be held in Washington, DC on September 15-16 (tentative), 2026 (draft event page).

Our Existential Predicament with AI

An immense consolidation of wealth and influence is currently converging into a few ultra-billionaires, threatening to politically and economically marginalize the vast majority of the population.

At the same time, the overt and unchecked pursuit of Superintelligence is steering human civilization toward a critical three-way fork: (1) an irreversible loss of sovereignty to AI, (2) a permanent state of global authoritarianism controlled by an elite few, or (3) a successful future secured by prompt, equitable, and secure AI oversight. Intermediate results are deemed highly improbable.

Little recognition has gone to the fact that neither superpower can win such a race—one bound to end in nuclear conflict or loss of human control over AI. Furthermore, the safety concerns, and the need for international agreements to tackle them, have also been largely ignored.

Not Just Any Treaty and Treaty-Making Process

A bad treaty-making process would be worse than none, and perceived so by most of those key potential influencers. Persuading a critical mass of them to decisively act and pitch Trump requires a treaty-making process that is very well thought out.

Many AI leaders increasingly think that an ASI gamble may be less risky than a treaty that results in an authoritarian dystopia or completely forecloses the astounding prospects of flourishing for humans and sentient beings.

Therefore, the treaty-making process must be designed to durably prevent ASI and serious misuse, lessen the concentration of power and wealth worldwide, and sustainably leave room for future flourishing, autonomy, and private innovation, all under the guiding principles of an open but cautious humanism.

In our memo v2.6 (pp. 124-139), we detail how and why a US-China-led treaty (while led by two "strong executive" leaders and requiring extensive surveillance to reliably prevent ASI) can be made to substantially reduce the concentration of power — and why it is most likely to do so given certain inherent, foreseeable dynamics.

Historical Precedents

In late 1945 and early 1946, a pragmatic US president, Harry Truman, was progressively led by a few key advisors to present to the United Nations what remains by far history's boldest treaty proposal: the Baruch Plan for the international control of nuclear weapons and energy.

The historical parallels with today’s predicament with AI are striking: ever-louder warnings from most top scientists and AI lab leaders, plummeting presidential approval ratings, mounting concerns and calls for a treaty by a large majority of US voters. Remarkably, the Plan was presented on Donald Trump's very date of birth, June 14th, 1946. 

Many scholars agree that if the US had sought an agreement first with the Soviets, and not with its western allies, the deal would have succeeded. This approach was embodied in the September 1945 proposal of Secretary of War Stimson, which was backed by former VP Henry Wallace and Deputy Secretary of State Dean Acheson. 

If a critical mass of key potential influencers of Trump's AI policy rise to history's calling and learn from Truman's mistakes, Trump could co-lead with Xi such a global AI treaty.

In doing so, Trump would contain China, future-proof US economic leadership, and prevent catastrophic risks that would spare no one. He would succeed where Truman failed and secure an unparalleled legacy.

Another positive example is the Organisation for the Prohibition of Chemical Weapons (OPCW). Created in 1997 and ratified by 193 nations, the OPCW has succeeded in durably restraining the large-scale research, production, and use of chemical weapons, preventing large-scale accidents or misuse to date.

Who We Are

The Coalition comprises 10 international NGOs and 20+ exceptional advisors, including former staff from the UN, National Security Agency, World Economic Forum, UBS, Yale, and Princeton, plus 24 contributors to its 356-page Strategic Memo for the Deal of the Century (v2.6). The Coalition launched in July 2024 and was seed funded by Jaan Tallinn's Survival and Flourishing Fund in February 2025.

What We Built So Far (April 2026)

Since our seed funding in February 2025, we have deeply researched key potential influencers of Trump’s AI policy and executed a targeted persuasion campaign towards them (and their advisors, staff, and introducers to them) with the purpose of swaying Trump on purely pragmatic terms. Our work consisted of two main activities:

(1) Developed an evolving 356-page Strategic Memo

  • The memo contains detailed analysis of treaty-making pathways, enforcement mechanisms, and convergence scenarios, plus over 150 pages of deeply researched "persuasion profiles" of each key potential influencer of Trump's AI policy (their interests, philosophies, psychology, and key AI predictions).

  • Targeted influencers include — in revised order following our Q2 2026 pivot — Rubio, Gabbard, Bannon, and DeSantis (Catholic/conservative humanists); Carlson and Rogan (humans-first media); Altman, Amodei, Hassabis, Suleyman, and Sutskever (humans-first techno-humanists); with Pope Leo XIV and Vance now secondary; and trans/post-humanists (Musk, Thiel) and administration officials (Kratsios, Sacks, Helberg) tracked as structural constraints.

(2) Engaged selected key potential influencers of Trump's AI policy

  • So far, we have engaged 85+ relevant staff and advisors of influencers in SF and DC, held group private dinners in SF and DC, engaged 23+ AI lab officials directly and 28+ DC-based AI safety experts. In three cases (Dario Amodei, Pope Leo XIV, Marco Rubio), the engagement was with officials or advisors one step removed from the influencers.

  • Since Q4 2025, we have reached out to and engaged with them (and their advisors, staff, and introducers to them) directly or through our network, via email and via a three-week October 2025 Persuasion Tour in DC and the Bay Area.

  • In Q1 2026, counting on a potential key role for a Pope-Vance alignment following Vance's deferment to the Pope on AI ethics and safety, we engaged extensively with Vatican AI leaders and gathered much interest around a June Roundtable in Rome with exceptional participants. However, Thiel's March 2026 lectures in Rome — depicting anyone fostering a treaty to prevent ASI as an Antichrist — and a brutal clash between Trump, Vance, and the Pope over the war in Iran mean that the "Caesar-Popist fusion" feared by Thiel has been averted (or pre-empted). Hence, we decided to refocus away from the Pope and Vance as targets for now, and to merge our planned June Rome event with one being planned in Washington, DC in September, with several US- and DC-based co-hosting NGOs.

Roadmap (May 2026 - May 2027)

  • Follow up with and reach out to key potential influencers, their advisors and staff, and their introducers.

  • Draft or update documents:

    • The Appeal to DJT (draft), Open Letter to DJT (draft), and Strategic Memo for DJT (3.0).

    • Open or Direct Letters from key potential influencers.

    • An Internal Strategy Plan to share with members, partners, and advisors.

  • Hold closed-door meetings in Washington, DC and in SF, according to budget, including:

    • The September 15-16, 2026 Roundtable in Washington, DC, titled "The Cooperation of Statesmen" (draft gdoc of event web page). It is framed to attract suitable US officials, State Department staff, national-security think tanks, and humans-first/pro-human advocates.

  • Reach out directly to White House officials and the Office of the President.

  • Move the Coalition and its director from Rome to Washington, DC, due to reduced emphasis on the Vatican, and increased focus on DC-based figures.

A Treaty-Making Process That Can Succeed

Should such an alliance succeed in convincing Trump, negotiations would start during one of his four 2026 meetings with Xi, beginning this May. Once Trump and Xi are committed to a process, we foresee a facilitating role for a nation perceived as equally neutral by both China and the US, such as Singapore, whose president has made uniquely forceful statements about global AI coordination.

PHASE 1. Trump’s and Xi’s initial emergency treaty and framing.

In our vision, Trump and Xi should start by fast-tracking a temporary, emergency bilateral treaty focused on transparency for civilian and military frontier AI and the most egregious and accessible AI risks (e.g., biological threats, nuclear integration, or recursive self-improvement). Concurrently, they should launch a "global Apollo Program" to jointly build, at wartime speed, a mutually-trusted socio-technical infrastructure for ultra-high-bandwidth diplomatic communications and trustworthy treaty enforcement mechanisms that ensure accountability, subsidiarity, and checks and balances.

PHASE 2. Expanding to ensure compliance with safety bans.

Midway through Phase 1, the US and China will negotiate with most middle powers the scope and rules of a global treaty-making process for AI based on the constitutional convention model. This model—inspired by the 1787 US Constitutional Convention, as suggested by Sam Altman in 2023—is the only one that can prevent fatal use of the veto, deliver an extremely wide-scoped and fair treaty within a short and predictable timeframe, and ensure resilient subsidiarity terms. The model will be amended to be realistic: voting will initially be weighted by GDP to secure and future-proof US and Chinese leadership, while still preventing a global duopoly.

Substantial functional roles should be granted to world religious traditions, security agencies, top AI labs, independent AI scientists, and citizen assemblies to ensure the necessary moral authority, scientific knowledge, operational expertise, and democratic legitimacy the scope of this treaty demands.

Towards a Humans-First Humanist AI Alliance

Given the prevalence of secular or Judeo-Christian humanism among MAGA opinion leaders, US voters, and most of the key potential influencers of Trump's AI policy, a humans-first Humanist AI Alliance among a critical mass of those influencers is within reach.

In the next 6 months, through closed-door meetings, we will contribute to the emergence of such an alliance, which will develop a successful pitch for Trump that: (1) demonstrates vast practical benefits for Trump and US voters and (2) establishes a set of minimal core ethical principles underlying the scope and methods of a global AI treaty-making process.

The alliance's ethical framework will be developed through an open, non-doctrinal, dialectical approach, grounded in human dignity while engaging in honest dialogue with moderate transhumanist aspirations for human flourishing through technology. This is designed to win the hearts of influential Silicon Valley techno-optimist humanists and outcompete the growing hardcore accelerationist post-humanist camp.

These principles will be conceived from the start to integrate the perspectives of other key global stakeholders: Chinese leadership, middle powers, and most of humanity during the treaty-making process.

Operation Capacity & Funding

We have achieved all of this with minimal funding, operating on a $7,500/month burn rate. Now we need to switch gears. If we can make 2-3 AI-skilled junior hires, we can easily transform our 356-page treasure trove of tailored intelligence and our overflow of open influencer pathways into a highly-tailored, high-bandwidth, targeted persuasion campaign and successful convergence meetings in SF and DC. This will enable us to 10x our impact in just months.

With only $75,000, we were able to activate 2,100 hours of professional pro-bono work and achieve astounding results in 2025. We are now seeking $100,000–$400,000 to move to the next stage (and an urgent $10–30k in bridge funding). We are also looking to diversify our funding sources with some more aligned with our humans-first humanist AI focus. Every dollar goes directly to the mission—no fancy offices, no high staff costs—at a fraction of the cost of a typical DC policy NGO. (See Donate or Funding So Far.)

Ways You Can Help

  • Introductions to the influencers profiled in our Memo or their close advisors;

  • Funding to move to the next stage or maintain operations;

  • Contributors to the Memo with expertise in AI policy or diplomacy, or access to target networks.

https://publicconsultation.org/governance/ai_2024/