The Deal of the Century


An Initiative to Persuade Key Potential Influencers of Trump's AI Policy to Jointly Champion a Bold, Timely and Proper US-China-led Global AI Treaty


We believe there is still a realistic—if narrow—path to convince President Trump to co-lead with Xi Jinping the boldest treaty in history: a global agreement to prevent loss of control over AI while avoiding techno-authoritarianism. This is The Deal of the Century.

1) Our Aim: Privately persuade a critical mass of key potential influencers of Trump's AI policy—including Vance, Altman, Bannon, Musk, Thiel, Amodei, Pope Leo XIV and others—to jointly champion a bold US-China-led global AI treaty that prevents both catastrophe and authoritarianism, while securing American economic leadership and realizing AI's astounding potential for humanity. (See full chapter below)

2) Our Predicament: The race toward Artificial Superintelligence (ASI) is bringing humanity to a three-way hard fork: (a) immense concentration of power in unaccountable entities, (b) loss of control over AI—and likely extinction, or (c) humanity's triumph via sane, fair and durable global AI governance. Middle outcomes are increasingly unlikely. The decisions made in the next few months will shape the future of all conscious life. (See full chapter below)

3) The Uncomfortable Truth: Our future rests overwhelmingly on whether Trump will be persuaded to co-lead a bold AI treaty with Xi. That's due to short ASI timelines, the time needed for a treaty of such scope, and Xi's consistent calls for global AI governance. Yet no other AI advocacy organization has fully recognized this reality and acted accordingly. We have. (See full chapter below)

4) A Glimmer of Hope: Many believe it is impossible, but they are wrong. Consider Trump's utterly non-ideological and pragmatic approach, his craving for a "big win," his aversion to weak multilateral institutions, his reliance on loyal advisors, his courage in rapidly shifting policy, and his instinct for self-preservation. Meanwhile, public concern has been skyrocketing:

  • In 2023, 55% of citizens surveyed in 12 countries were fairly or very worried about "loss of control over AI".

  • By 2025, 78% of US Republican voters believed artificial intelligence could "eventually pose a threat to the existence of humanity".

  • In March 2025, 37% of Americans were very or somewhat concerned about AI "causing the end of the human race", rising to 43% by July 2025.

  • In October 2025, 63% of US citizens believed it likely that "humans won't be able to control it anymore", and 53% believed it somewhat or very likely that "AI will destroy humanity".

All this makes it more likely that Trump could eventually pivot 180° on AI, much as Truman did in 1946 when he presented the Baruch Plan. (See full chapter below)

5) Our Means: We're executing a precision persuasion campaign with two components: (a) a 350-page Strategic Memo containing deeply researched "persuasion profiles" of each influencer—their interests, philosophies, psychology, and key AI predictions—plus tailored direct letters; and (b) direct engagement through Persuasion Tours across Washington DC, Bay Area, Mar-a-Lago, Rome/Vatican, and the New Delhi AI Action Summit. (See full chapter below)

6) Key Intuitions: Trump is pragmatic, not ideological on AI. He follows whoever makes the most compelling case. Persuading influencers requires understanding what actually drives them—and after eleven months of analysis, we've discovered something counterintuitive: most are motivated more by philosophy, values, and legacy than by wealth or power per se. A shift in even a few key AI predictions of some influencers could cascade into an informal alliance that sways Trump. (See full chapter below)

7) Feasibility: The path is narrow but real. 77% of all US voters support a strong international AI treaty. Trump needs a big win for his low ratings. Xi has consistently called for global AI cooperation. The April 2026 Trump-Xi summit creates a natural focal point. The Baruch Plan nearly succeeded even at the height of Cold War tension—and today's failure modes are addressable. (See full chapter below)

8) Who We Are: The Coalition comprises 10 international NGOs and 40+ exceptional advisors and team members—including former officials from the UN, NSA, WEF, UBS, Yale, and Princeton—plus 24 contributors to the Strategic Memo. Led by its only full-time staff member, Rufo Guerreschi, the Coalition has activated over 2,100 hours of professional pro-bono work. Seed-funded by Jaan Tallinn's Survival and Flourishing Fund. (See full chapter below)

9) The Opportunity: The political moment, our 350-page treasure trove of tailored intelligence, and our extreme capital efficiency (~$7,500/month, ~$180 per high-value meeting) enable us to 10x our impact with moderate funding. The constraint is no longer strategy or positioning—it's operational capacity. With 2-3 dedicated hires, we can transform our arsenal into personalized outreach at scale. (See full chapter below)

10) 2025 Achievements: In ten months, starting from a $72,000 seed grant, we built: a 350-page Strategic Arsenal with more actionable intelligence than any document we're aware of; 85+ proven contacts from our October US Tour (vs. 15-20 projected); direct introducer pathways to 2 of 10 primary targets; 23 AI lab official engagements; and strategic positioning across Bay Area, DC, Rome/Vatican, and Mar-a-Lago. (See full chapter below)

11) 2026 Roadmap: The window is now. Trump's anticipated meeting with Xi in late April 2026 creates a once-in-a-generation opportunity. Our targets: 150+ introducer engagements across four hubs, 30+ direct engagements with influencers or their key staff, 5-8 substantive meetings with influencers themselves, two Strategic Memo updates timed to the summit window, and potential Vatican convenings to catalyze a humanist AI alliance. (See full chapter below)

12) Funding: After 10 months of primarily volunteer work, we received $75,000 in seed donations and now seek $50,000–$400,000 to move to scale or maintain operations. Every dollar goes to the mission—no fancy offices, no high staff costs. We operate at ~$7,500/month, a fraction of typical DC policy organizations. (See full chapter below)

13) Ways You Can Help: We need: Introductions to the influencers profiled in our Memo or their close advisors; Funding to move to the next stage or maintain operations; Contributors with expertise in AI policy, diplomacy, or access to target networks. (See full chapter below)

Read The Strategic Memo | Donate | Join

Full Prospectus

1) Our Aim

We aim to privately persuade a critical mass of key potential influencers of Trump's AI policy to champion a bold and timely US-China-led global AI treaty-making process.

Pursuing such a treaty entails risks as enormous as those of not pursuing one. We therefore foster not just any treaty, but one that can reliably:

  • Prevent loss of control, grave misuses, and major powers conflict over AI

  • Secure and future-proof American economic leadership while substantially increasing superpower leaders' political standing and legacy

  • Durably reduce global concentration of power and wealth

  • Realize AI's astounding potential for abundance, agency, and flourishing for all

Our target influencers include: JD Vance, Sam Altman, Steve Bannon, Peter Thiel, Elon Musk, David Sacks, Dario Amodei, Demis Hassabis, Mustafa Suleyman, Tulsi Gabbard, Marco Rubio, Pope Leo XIV, Joe Rogan, Tucker Carlson, and others profiled in detail in our Strategic Memo.

The initiative draws direct inspiration from the 1946 Baruch Plan—the proposal by President Truman to the United Nations for exclusive, veto-free international control of dangerous nuclear technologies. That plan passed the UN Atomic Energy Commission 10-0 and failed only at the Security Council by a single Soviet veto. We believe a similarly bold initiative—driven, as the Baruch Plan was, by a few key advisors to another pragmatic US president—represents the best path forward.

Learn more: Theory of Change | Open Letters to Influencers | Strategic Memo v2.6 (PDF): "Not Just Any AI Treaty" (pp. 20–22), "What the Humanists' Proposal Must Deliver" (pp. 35–36)

2) Our Predicament

As we face mounting and interlocking global risks, the race toward Artificial Superintelligence (ASI) is bringing humanity to what are undoubtedly the most consequential months and years in its history.

It's a three-way hard fork, with increasingly unlikely middle outcomes:

(a) Immense concentration of power in one or a few unaccountable entities—whether state or corporate. As Putin stated, "Whoever becomes the leader in this sphere will become the ruler of the world." AI-driven surveillance and planetary-scale infrastructure could enable de facto global authoritarianism.

(b) Definitive loss of control over AI—and consequent (near) human extinction or some AI-governed human utopia. Amodei, Musk, and Hinton assign roughly 20% probability to catastrophic outcomes. A 2023 survey of AI scientists found a median 5% extinction risk and a mean of ~16%.

(c) Humanity's triumph via sane, fair and durable global governance of AI—unleashing unimaginable human flourishing, unprecedented happiness, freedom, agency, and well-being.

We stand at this fork now. OpenAI, xAI, Meta, and NVIDIA have openly declared their intent to build ASI. SoftBank's CEO announced the same goal at the White House during the $500 billion Stargate launch. Predicted timelines have collapsed from decades to months: Anthropic CEO Amodei predicts AI matching a "country of geniuses" by 2026.

The psychological and political dynamics that prevent proportionate action are well-studied—and must be overcome. The situation parallels the one depicted in the film Don't Look Up.

Learn more: A Baruch Plan for AI? | Strategic Memo v2.6 (PDF): "The Three-Way Fork: Extinction, Enslavement, or Mastery" (pp. 48–50), "Our Shocking AI Predicament" (pp. 17–18, 50–52), "Three Possible Futures" (pp. 51–52)

3) The Uncomfortable Truth

We must face a truth that is too uncomfortable for many to admit: our future rests on whether Trump will be persuaded to co-lead a bold AI treaty with Xi.

This is due to three factors:

Short timelines to ASI. Major AI labs estimate ASI could emerge within 2-5 years. Musk stated in June 2025: "If it doesn't happen this year, next year for sure."

The time required for a treaty of such scope. Even under the most optimistic scenarios, establishing meaningful global AI governance requires months of intensive negotiation. The window is narrowing rapidly.

Xi's consistent position. Xi and his government—including Premier Li Qiang—have repeatedly called for global AI cooperation, offering a ready partner. The anticipated April 2026 Trump-Xi summit creates a natural focal point.

No other AI advocacy organization has fully recognized this reality and acted accordingly. Many focus on traditional multilateral channels, European regulations, or general public awareness—all valuable, but none addressing the decisive chokepoint.

We have chosen to focus our limited resources on the intervention point that actually matters: the handful of individuals who can influence Trump's thinking on AI.

Learn more: Strategic Memo v2.6 (PDF): "The Message to Trump" (pp. 34–35), "A Dozen People Who Hold Our Future" (pp. 18–20), "Collapsing Timelines" (pp. 49–50)

4) A Glimmer of Hope

Many believe persuading Trump is impossible. They are wrong. It is very difficult—but that is very different.

To comprehend this, we must view the strategic context with ruthless realism and identify hidden leverage where others find only excuses.

Why Trump could shift 180°:

  • Utterly non-ideological and pragmatic approach to foreign policy—he follows whoever makes the most compelling case for American advantage

  • Craving for a very "big win" to raise his historically low ratings of 36%—he needs a signature achievement

  • Aversion to weak and useless multilateral institutions—a US-led treaty framed as "peace through strength" could appeal to him

  • Reliance on loyal or admired advisors—the humanists in his circle (Vance, Bannon, Gabbard) are just as numerous as the accelerationists but haven't yet made their case coherently

  • Courage in rapidly shifting policy—Trump has demonstrated willingness to pivot dramatically when convinced

  • Instinct of self-preservation—when briefed on the true scale of AI risks, his survival instincts could activate

The Truman precedent. Many of these same dynamics played out in early 1946, when shifting circumstances and forceful advice from Oppenheimer and Acheson led pragmatic President Truman to present the extraordinarily bold Baruch Plan—barely an hour after the birth of Donald Trump in Queens, New York.

The risks' enormity and the AI pie's immensity are fast becoming clear to all, creating strong win-win incentives for everyone except a currently-influential faction of post-humanists comfortable with the ASI gamble.

Learn more: The Baruch Plan: A Model for AI? | Experts' Calls for a Baruch Plan | Strategic Memo v2.6 (PDF): "Experts Calling for a Baruch Plan for AI" (pp. 27–29), "The Message to Trump" (pp. 34–35), "From Persuasion to Convergence, to Treaty Making" (pp. 24–26)

5) Our Means

We're executing a deeply researched, precision persuasion campaign with two components:

A. The Strategic Arsenal

Our Strategic Memo (v2.6, published December 30, 2025) is a 350-page document—the fruit of reviewing 667+ sources (517 articles/videos, 150+ academic papers). Developed with 20+ contributors from the UN, NSA, WEF, Yale, and Princeton. Seed-funded by Jaan Tallinn's Survival and Flourishing Fund.

It contains:

  • Detailed "persuasion profiles" of each influencer's interests, philosophies, psychology, and key AI predictions—along with strategies to persuade them

  • In-depth analysis of every influencer's relevant public statements, mapped to their philosophical commitments and persuadable points

  • A path to foster a humanist AI alliance to outmaneuver powerful post-humanist influencers comfortable with the ASI gamble

  • Treaty-making models and enforcement provisions that can both prevent ASI and reduce global authoritarianism—addressing Peter Thiel's "Antichrist or Armageddon" dilemma

  • Tailored direct/open letters and strategic briefs for each target

B. Direct Engagement

Our Persuasion Tours in 2026 bring us face-to-face with introducers and influencers across Washington DC, Mar-a-Lago, the Bay Area, Rome/Vatican, and the New Delhi AI Action Summit.

This builds upon our October 2025 Persuasion Tour, which generated 85+ contacts, 23 AI lab official engagements, and direct introducer pathways to multiple primary targets.

Learn more: Strategic Memo page | Contributors | 2025 Achievements | Strategic Memo v2.6 (PDF): "About this Document" (pp. 2–3), Influencer Profiles (pp. 150–300), "Treaty-Making Roadmap" (pp. 140–142), "A Treaty Enforcement that Prevents both ASI and Authoritarianism" (pp. 130–139)

6) Key Intuitions

After eleven months of intensive analysis, we've discovered insights that reshape the persuasion challenge:

1. Trump is pragmatic, not ideological on AI. He follows whoever among his circle makes the most compelling case for American advantage and political success. His current accelerationist posture emerged from the trans/post-humanist faction's influence—not engrained personal conviction. The humanists in his circle are just as numerous and powerful (and much more aligned with the MAGA voter base on AI!) but haven't yet made their case coherently. We aim to change that.

2. Philosophy drives influencers more than power or wealth. In their actions about AI governance, most key figures are not primarily motivated by wealth or power per se. They're often motivated more by the actualization of philosophical ideas, values, personal aspirations, and legacy ambitions within what they predict are the most likely AI scenarios.

3. Persuasion must happen on three concurrent dimensions:

  • Interests: How the AI treaty advances their personal, political, or business position

  • Philosophy: Finding minimum common ground on fundamental worldviews as they relate to AI

  • Key AI predictions: Their probability estimates on questions like whether ASI will be controllable, whether a treaty can prevent both catastrophe and authoritarianism, or whether AI will have consciousness

4. A shift in even one key prediction could cascade. Several key figures have already voiced direct support for strong AI treaties in the past: Altman called for a global "Constitutional Convention," Musk was the loudest voice for AI regulation until 2024, Bannon stated he wants treaties "immediately," Pope Francis initiated the Rome Call for AI Ethics. Their change wouldn't be a drastic reversal—it would be a tipping back of their scales under conditions of deep uncertainty.

Learn more: Strategic Memo v2.6 (PDF): "Why Influencers Can Be Substantially Swayed and What Are the Low-Hanging Fruits" (pp. 39–47), "The Core Discovery: Philosophy Over Power" (pp. 39–40), "Three Dynamics Working in Our Favor" (pp. 40–41), "The Coalition Cascade" (pp. 45–46), "Why a Shift on Even One Prediction Matters" (pp. 46–47)

7) Feasibility

The path is narrow but real. Here's why:

Broad public support:

  • 77% of all US voters support a strong international AI treaty

Political alignment:

  • Trump's approval ratings are at historical lows—he needs a signature foreign policy win

  • Xi has consistently called for global AI cooperation

  • The anticipated April 2026 Trump-Xi summit creates a natural focal point

  • David Sacks as AI Czar and Michael Kratsios as OSTP Director understand the policy landscape

The Baruch Plan precedent: The Baruch Plan's near-success at the peak of Cold War tension—when the US had every incentive to maintain its monopoly—remains the most relevant precedent. It passed 10-0 at the UN Atomic Energy Commission. It failed only because: (1) the treaty process was too slow, (2) verification technology didn't exist, and (3) the political coalition was too narrow.

Every one of these failure modes is addressable today. We can move faster than 1946 diplomats. Verification technology has transformed: satellite imagery, compute monitoring, and AI-assisted inspection make compliance confirmation feasible. And our coalition-building specifically targets the failure mode of insufficient political alignment.

Experts calling for a Baruch Plan for AI: Ian Hogarth (UK AI Safety Institute Chair), CSET Georgetown, Jack Clark (Anthropic), Yoshua Bengio and Jaan Tallinn, and many others.

Learn more: A Baruch Plan for AI? | Experts' Calls for a Baruch Plan | Strategic Memo v2.6 (PDF): "Experts Calling for a Baruch Plan for AI" (pp. 27–29), "The Global Oligarchic Autocracy Risk—And How It Can Be Avoided" (pp. 124–129), "Why an AI Treaty Must Include Military, Intelligence, and Nuclear Sectors" (pp. 121–123)

8) Who We Are

The Coalition for a Baruch Plan for AI was convened in July 2024 by the Trustless Computing Association and launched in December 2024 by six founding NGO partners.

The Coalition comprises:

  • 10 international NGOs including PauseAI Global, PauseAI USA, AITreaty.org, World Federalist Movement's Transnational Working Group on AI, and the European Center for Peace and Development (UN)

  • 40+ exceptional advisors and team members including former officials from the UN, NSA, WEF, UBS, Yale, and Princeton

  • 24 contributors to the Strategic Memo

Key credentials include:

  • Mark Barwinski, former Group Head of Cybersecurity Operations at UBS and the NSA (TAO unit)

  • Richard Falk, Emeritus Professor at Princeton University

  • Amb. Muhammadou Kah, Chair of the UN Commission on Science and Technology for Development

  • Jennifer Blanke, former Chief Economist of the World Economic Forum

  • Chase Cunningham, Co-founder of Zero Trust, former Chief Cryptologist at NSA

Led by its only full-time staff member, Rufo Guerreschi, the Coalition has activated over 2,100 hours of professional pro-bono work.

Learn more: People and Members | Contributors | Funders and Volunteers | Strategic Memo v2.6 (PDF): "About the Author" (p. 10), "Contributors" (pp. 11–13)

9) The Opportunity

The political moment, our strategic arsenal, and our extreme capital efficiency create an unprecedented opportunity.

The treasure trove is built. The 350-page Strategic Memo contains more actionable intelligence on AI policy influencers than any document we're aware of. Every profile, every argument, every convergence scenario is ammunition for persuasion. With the right team, even junior staff can use AI tools to transform this material into highly personalized engagements.

Proven pathways exist. The October tour established 85+ contacts with demonstrated interest. These aren't cold leads—they're relationships ready to be activated.

Extreme capital efficiency:

  • Monthly burn rate: ~$7,500

  • Cost per high-value meeting: ~$180

  • We've achieved more analytical depth, direct engagement, and strategic positioning than organizations with 10x our budget

The constraint is no longer strategy or positioning—it's operational capacity. With $150,000–$400,000 and 2-3 dedicated hires, we can easily 10x our outreach power:

  • The memo becomes a living arsenal rather than a static document

  • Every contact from October gets systematic follow-up

  • Every introducer pathway gets worked toward influencer access

  • Multiple hubs operate simultaneously rather than sequentially

The next six months represent the most critical window for influencing American AI policy. We're positioned to seize it.

Learn more: 2025 Achievements | 2026 Roadmap | Donate (Case for Concerned Citizens) | Strategic Memo v2.6 (PDF): "Executive Summary" (pp. 16–33), "From Persuasion to Convergence, to Treaty Making" (pp. 24–26)

10) 2025 Achievements

In ten months, starting from a $72,000 seed grant, we built:

A Strategic Arsenal:

  • 350-page Strategic Memo with more actionable intelligence than any comparable document

  • 667+ sources analyzed (517 articles/videos, 150+ academic papers)

  • Three week-long in-person analysis sprints

  • 20+ expert contributors engaged

Proven Pathways:

  • 85+ contacts from October US Tour (vs. 15-20 projected)

  • 23 AI lab policy/strategy officials at OpenAI, Anthropic, DeepMind

  • 18 national security establishment contacts in DC

  • Direct introducer pathways to 2 of 10 primary targets

Coalition Building:

  • 100+ members, advisors, and supporters

  • Geographic concentration in Bay Area, DC, Rome/Vatican, Mar-a-Lago

  • 14 new testimonials from October tour

  • 12 new volunteers recruited

Vatican Connections:

  • Relationships with Pope Leo XIV's inner circle and top Vatican AI advisors

  • Potential to convene unprecedented private gatherings catalyzing the humanist AI alliance

Learn more: Full 2025 Achievements Report | Testimonials | Strategic Memo v2.6 (PDF): "Contributors" (pp. 11–13), Influencer Profiles with field intelligence (pp. 150–300)

11) 2026 Roadmap

The window is now. Trump's anticipated meeting with Xi in late April 2026 creates a once-in-a-generation opportunity.

2026 Targets:

  • 150+ introducer engagements across four hubs (Bay Area, DC, Mar-a-Lago, Rome)

  • 30+ direct engagements with influencers or their key AI policy staff/advisors

  • 5-8 substantive meetings with influencers themselves

  • Strategic Memo v3.0 by end of January; v3.5 by mid-April

  • Potential Rome/Vatican convenings catalyzing the humanist AI alliance

The Campaign Architecture:

Phase 1 (Jan–Feb): Foundation + First Push

  • Rome/Vatican: Finalize funding, deepen Vatican engagement, publish Memo v3.0

  • Washington DC: Intensive briefings with national security establishment

  • Mar-a-Lago Area: Private gatherings, one-to-one meetings with introducers

  • Bay Area: Systematic follow-up with all 23 AI lab contacts

  • New Delhi AI Action Summit (Feb 19-20): Direct engagement with attending AI lab CEOs

Phase 2 (Mar–Apr): Deepening + Catalyzing

  • Rome/Vatican: Private convenings with introducers and potentially influencers

  • Washington & Mar-a-Lago: Converting introducer relationships into influencer access

  • Final positioning before the summit window

Phase 3 (May–Dec): Summit Response + Sustained Campaign

  • Post-summit reassessment and strategy recalibration

  • Continued engagement based on political evolution

Learn more: Full 2026 Roadmap | Strategic Memo v2.6 (PDF): "Treaty-Making Roadmap" (pp. 140–142), "A Proposed Trump's Executive Order on a US-China-led AI Treaty" (pp. 143–147)

12) Funding

After 10 months of primarily volunteer work and $75,000 in seed donations, we seek $50,000–$400,000 to move to scale or maintain operations.

Current funders:

  • $60,000 from Jaan Tallinn's Survival and Flourishing Fund

  • $10,000 from Ryan Kidd

  • Additional contributions from Jens Groth, Troy Davis, Roberto Savio

Funding tiers:

$50,000 — Extend and Amplify: Extend runway into mid-2026. Add one outreach support staff. Engage part-time consultants. Launch targeted op-eds. Fund critical travel.

$150,000 — Building Momentum: Sustain twelve months of growth. Hire two part-time consultants. Organize closed-door dinners in DC and Mar-a-Lago. Expand NGO partners. Prepare extensive 2nd and 3rd Persuasion Tours.

$400,000 — Breakthrough Scale: Transform capacity. Build unstoppable momentum. Move from one to three full-time staff. Launch multi-channel campaign. Deepen scientific depth and operational value of new Memo versions.


Learn more: Case for Concerned Citizens | Case for AI Safety Experts | Funders and Volunteers

13) Ways You Can Help

Introductions: We need warm introductions to the influencers profiled in our Memo—including Vance, Altman, Bannon, Thiel, Musk, Amodei, Pope Leo XIV—or their close advisors and key staff.

Funding: We need bridge funding to operate for the next month, so even a small but timely donation can make a huge difference. We also need $100,000–$400,000 to move to the next stage.

Contributors: We seek experts in AI policy, diplomacy, national security, or those with access to our target networks. Even 10-20 hours of volunteer work can significantly advance our mission.

Learn more: Join the Coalition | Donate | Strategic Memo v2.6 (PDF): Specific influencer profiles for your network—see Table of Contents (pp. 3–9) for the full list of profiles by name

Read The Strategic Memo | Donate | Join

Contact: cbpai@trustlesscomputing.org