Theory of Change

Executive Summary

The future of humanity rests overwhelmingly on the choices of a small number of individuals in the next 12-24 months. Leading AI labs are now openly racing toward Artificial Superintelligence (ASI) — systems potentially billions of times smarter than humans. Whether this ends in unprecedented abundance or human extinction depends largely on whether the United States and China can agree to govern this technology before it governs us.

Our theory of change is simple but unconventional: persuade the handful of people who will shape Trump’s AI policy to advocate for a bold, US-led global AI treaty. These influencers — Vance, Altman, Bannon, Thiel, Musk, Sacks, Pope Leo XIV, and others — already share crucial concerns about uncontrolled AI development, even if they don’t yet recognize their potential for convergence. By demonstrating that a proper treaty can prevent both ASI catastrophe and authoritarian global governance, we can build the coalition needed to shift Trump’s calculus.

This isn’t fantasy. The 1946 Baruch Plan shows that American presidents can propose surrendering strategic advantages to international control — and nearly succeed. Truman did it for nuclear weapons. Trump can do it for AI. The Baruch Plan failed not because the idea was wrong, but because the treaty-making process was too slow, verification technology didn’t exist, and the political coalition was too narrow. Every one of these failure modes is addressable today.

Our approach: superb content tailored to each influencer’s worldview, direct engagement through trusted introducers, and physical presence in Washington, Mar-a-Lago, Silicon Valley, and Rome during critical windows. With extreme capital efficiency (~$7.5K/month burn rate), we operate as a precision instrument rather than a bureaucracy — targeting maximum impact where traditional approaches cannot reach.

The window is narrow. Trump’s anticipated meeting with Xi in April 2026 creates a deadline. If a critical mass of influencers can align by Q1 2026, the “Deal of the Century” becomes possible. If not, the ASI race continues toward outcomes no one can predict or control.


———

Detailed Theory of Change

The Core Problem: A Race No One Can Win

The world’s leading AI labs are no longer merely developing increasingly capable AI systems — they are explicitly racing toward Artificial Superintelligence. In October 2024, Dario Amodei (CEO of Anthropic) described the imminent arrival of systems like “a country of geniuses in a data center,” potentially by 2027. He has separately warned of a “10-20% chance of extinction” from AI — yet continues racing forward.

This paradox — brilliant people building systems they believe could destroy humanity — reflects a deeper strategic trap. Each leading lab and nation fears that if they don’t build ASI first, someone else will, and that someone might be less careful or more hostile to their interests. The result is a coordination failure of civilizational proportions.

Three possible outcomes:

(a) A proper treaty prevents ASI and unleashes unimaginable, human-controlled abundance

(b) A poorly designed treaty entrenches authoritarian global dystopia

(c) No treaty — the race continues to extinction or disempowerment

The challenge isn’t just stopping ASI. It’s doing so without creating governance worse than the risks it prevents. This is why most AI lab leaders, despite their warnings, have grown skeptical of international coordination — they fear authoritarian lock-in as much as extinction. Our theory of change addresses both risks simultaneously.

The Pathway: Influencer Persuasion → Coalition Formation → Treaty Initiative

Unlike conventional policy advocacy, we don’t target lawmakers or mobilize public opinion as primary strategies. We’ve concluded, after deep analysis, that the decisive variable is a small number of individuals who have Trump’s ear on AI policy.

These aren’t random advisors. They are people whose philosophical commitments, business interests, and risk assessments could align — under the right framing — with support for a bold AI treaty. Our Strategic Memo profiles each in exhaustive detail, analyzing thousands of pages of their interviews, writings, and speeches.

The key insight: most influencers are not primarily motivated by wealth or ego. Thiel, Vance, Altman, and Musk already have more money and influence than they could ever use. What drives them is the realization of their philosophical commitments — and those commitments create openings that pure interest-based lobbying would miss.

Vance genuinely believes in protecting human dignity from technological displacement. Bannon truly fears technofeudal oligarchy. Altman authentically worries about the catastrophic risks he’s helping create. Even Musk’s accelerationism stems from his cosmic vision of multi-planetary humanity — a vision that could be better served by treaty frameworks that guarantee AI doesn’t destroy civilization before Mars is colonized.

The Three-Stage Convergence Strategy

Our coalition-building operates in three sequential-but-overlapping phases:

Stage 1: Unite the Humanist Alliance (Q4 2025 – Q1 2026)

Primary targets: Vance, Pope Leo XIV, Bannon, Gabbard, Carlson

These influencers share sufficient philosophical alignment — rooted in Christian or traditional humanist values — to form an initial critical mass. They all recognize, at some level, that an uncontrolled race to ASI serves no one: not American workers, not national security, not humanity’s future.

The key is getting them to recognize their coalition potential. No one wants to be first — but everyone wants to be part of a winning coalition. Demonstrating convergence accelerates convergence.

Stage 2: Bridge to Techno-Humanists (Q1 – Q2 2026)

Primary targets: Altman, Amodei, Hassabis, Sacks

These figures are more cautious about governance but fundamentally share safety concerns. Bridging requires demonstrating that treaty architecture preserves innovation space, showing that the humanist alliance has momentum, and addressing their specific technical objections.

Altman’s prior statements supporting “world governance” make him particularly tractable. Amodei’s genuine safety concerns can be leveraged despite his current public positioning. Jack Clark (Anthropic’s Head of Policy) has already advocated for a Baruch Plan — we’re making that vision actionable.

Stage 3: Engage Trans/Post-Humanists (Q2 2026)

Primary targets: Thiel, Musk

The hardest but not impossible. Thiel’s primary concern — preventing authoritarian global governance — can be addressed through treaty architecture emphasizing decentralization and subsidiarity. Musk’s stated fear that regulation leads to authoritarianism must be countered with evidence that failure to achieve a treaty is the most likely path to authoritarianism — via global domination by whichever nation or firm builds ASI first.

If direct persuasion fails, the humanist alliance must be prepared for confrontation. The terrain favors them: polls show MAGA voters increasingly aligned with Bannon’s position. The trans/post-humanist vision appeals to a tiny elite; it terrifies everyone else.

Why This Can Work: Historical Precedent

Skeptics will ask: when has anything like this succeeded? The honest answer is that nothing exactly like this has been attempted — because nothing exactly like ASI has existed. But the historical record offers more hope than cynics admit.

The lesser precedents demonstrate that superpowers can cooperate on dangerous technologies: the Antarctic Treaty (1959) demilitarized an entire continent at the height of the Cold War; the Outer Space Treaty (1967) banned nuclear weapons in orbit when the space race was most intense; the Montreal Protocol (1987) reversed ozone depletion within a decade through binding commitments from rivals.

But the greater precedent is the Baruch Plan itself — and here the lesson is not failure but near-success. In June 1946, the United States — holding a nuclear monopoly — proposed to surrender that advantage to international control. The plan passed the UN Atomic Energy Commission 10-0 (with two abstentions). It failed only at the Security Council, where the Soviet veto killed it.

The Baruch Plan’s failure modes were specific and identifiable:

1. The treaty-making process was too slow. By the time serious negotiations began, the Soviets were months from their own bomb. The window closed.

2. Verification technology didn’t exist. In 1946, there was no way to confirm compliance without intrusive inspections that neither side would accept.

3. The political coalition was too narrow. Truman supported it, but key advisors quietly undermined it.

Every one of these failure modes is addressable today. We have months, not years — but we also have leaders who can move faster than 1946 diplomats. Verification technology has transformed: satellite imagery, compute monitoring, and AI-assisted inspection make compliance confirmation feasible. And the coalition-building this Memo proposes specifically targets the failure mode of insufficient political alignment.

If Truman could propose surrendering America’s greatest strategic advantage, Trump can propose sharing the burden of AI’s greatest strategic risk.

Expert Endorsement of the Baruch Plan Model

The idea isn’t fringe. Some of the most credentialed voices in AI safety and governance have explicitly invoked the Baruch Plan:

Ian Hogarth, now Chair of the UK AI Safety Institute, argued in 2018 that the Baruch Plan is “a necessary model for the governance of AI.”

Allan Dafoe, then President of Oxford’s Centre for the Governance of AI, now Head of Long-Term AI Strategy at Google DeepMind, co-authored a 70-page paper exploring international control of powerful technology through the Baruch Plan lens.

Jack Clark, Cofounder and Head of Policy at Anthropic, suggested the Baruch Plan for global AI governance, as reported by The Economist.

Jaan Tallinn, cofounder of the Future of Life Institute, suggested the Baruch Plan as a potential solution in December 2023.

Nick Bostrom referenced the Baruch Plan as a positive scenario in his foundational Superintelligence (2014) — the book that shaped how Silicon Valley thinks about existential risk.

The Deal-Making Architecture

This isn’t another failed UN approach producing toothless accords over years or decades. We propose treaty mechanics that are grounded in historical precedent, technically feasible, and matched to the speed and stakes of the AI challenge:

Phase 1: Ultra-High-Bandwidth Bilateral Negotiations (Early 2026)

Hundreds of negotiators and two special envoys work 3 weeks per month in secure facilities — potentially in Singapore or another bridge nation — with parallel teams in DC and Beijing. This isn’t traditional diplomacy; it’s intensive, continuous negotiation at the speed the moment requires.

Phase 2: Constitutional Convention Model (Mid-2026)

A time-bound multilateral assembly — modeled on Philadelphia 1787 — to lock in broader legitimacy. Unlike endless UN processes, this has a hard deadline and weighted voting that realistically accounts for asymmetries of power.

From Bilateral to Global

The US-China framework serves as the anchor. Once the superpowers agree, other nations join on terms that protect their interests while preserving the core constraints. Middle powers provide venues, infrastructure, and legitimacy; NGOs provide advocacy and citizen mobilization; technical experts ensure the architecture actually works.

Our Methodology: Maximum Impact, Minimal Overhead

We achieve our goals through four pillars:

Superb Content — The Strategic Memo is exhaustive and dense with insights, covering each influencer’s philosophy, psychology, interests, and specific persuasion hooks. Open Letters tailored to each figure analyze their public positions, anticipate objections, and make cases calibrated to their worldview.

Direct Engagement — We focus outreach on key influencers and their trusted introducers, not general audiences. Priority is given to persons of rare relevant expertise: national security experts, tech safety researchers, religious leaders, Trump-aligned media voices, and relevant members of Congress.

Tailored Communications — Different framings for different audiences: “peace through strength” for Trump’s circle, Catholic social teaching for religious figures, democratic governance for tech leaders, economic nationalism for populist voices.

Physical Presence — Maintaining a footprint in Washington, Mar-a-Lago, Silicon Valley, and Rome during critical moments. Some conversations can only happen in person.

This methodology enables extreme capital efficiency (~$7.5K/month burn rate, a small fraction of what typical DC policy organizations spend). We operate as a precision instrument, not a bureaucracy.

Timeline and Critical Windows

Q4 2025 — Stage 1: Humanist Core (Vance, Bannon, Pope, Gabbard). Coalition recognizes itself through religious/philosophical alignment.

Q1 2026 — Stage 2: Techno-Humanist Bridge (Altman, Amodei, Hassabis, Sacks). Critical mass forming through moderate transhumanism framework.

Q2 2026 — Stage 3: Trans/Post-Humanist Engagement (Thiel, Musk). Trump’s calculus shifts through East-West political philosophy convergence.

Q2-Q3 2026 — Compound persuasion via coalition momentum. 3-4 aligned influencers → Trump persuaded.

Q3-Q4 2026 — Coalition presents unified proposal. US-China bilateral framework emerges.

2027 — Global coalition expands. Treaty-making process launched with 12+ powerful nations.

The critical window centers on Trump’s anticipated meeting with Xi in April 2026. If a critical mass of influencers aligns by Q1 2026, the coalition can present a unified recommendation that Trump pursue the “Deal of the Century.” The historical parallel: Oppenheimer, Acheson, and a handful of others shifted Truman’s calculus through persistent, calibrated advocacy. The technical and philosophical challenges were no less daunting in 1946.

What Success Looks Like

A successful outcome isn’t just a signed treaty — it’s a treaty that achieves five objectives in a timely, reliable, and durable way:

1. Secure a safe, optimistic, shared pathway for human flourishing via AI-driven innovations

2. Globally ban ASI, exit the dangerous race, and prevent grave dual-use misuse

3. Secure future American and Chinese economic leadership, protecting each against dominance by the other

4. Reduce global concentration of power and wealth while making every person and nation substantially better off through AI-enabled abundance

5. Safeguard the future of leading AI firms and the freedom of innovation within wide safety constraints

The treaty must prevent both extinction and authoritarian dystopia. That’s why the process matters as much as the substance — and why we’ve invested so heavily in understanding how to design treaty-making that can actually succeed.

The Stakes

As citizens, we must do everything we can to help build this coalition. We must do it for ourselves, our children, and future generations.

As daunting as this task may be, if we face it head-on with clarity and fearlessness, we have a chance to realize a radical and durable improvement in human well-being for generations to come. If we succeed, ever more advanced safe AI will bring extraordinary benefits to humanity. The precedent of a successful, sweeping democratic AI treaty will also establish an extensible model to combat other civilizational risks.

The challenge is enormous, and success is highly uncertain. It may be tempting to succumb to powerlessness, to stick our heads in the sand, or to watch doom unfold from the sidelines.

But how can we find true peace or look our children in the eyes if we do not at least try? We have the rare privilege of agency in the most consequential years of human history.