Open Letter to Tulsi Gabbard

[As sent via email on Oct 18th, 2025]

To: Director of National Intelligence; Chief of Staff of the Director of National Intelligence

From: Coalition for a Baruch Plan for AI

Subject: Strategic Brief on the Prospects of a US-China-led AI Treaty to Cement US Leadership and Prevent Loss of Control Risk

Dear Director Gabbard,

I am writing to you as the head of the Coalition for a Baruch Plan for AI, made up of 10 NGOs and over 40 multidisciplinary experts (including top former US national security officials), to propose that you lead, alongside President Trump, an age-defining deal on AI with China: the Deal of the Century.

In 1946, two monumental national security figures in the Truman administration played opposite roles in the attempt to secure a ban on, and international control of, nuclear weapons: Robert Oppenheimer and General Leslie Groves. You are now called to choose which of them you will be in the face of the immense risks, challenges, and opportunities of AI.

You've spent your career warning about nuclear war. You quit as Vice-Chair of the Democratic National Committee in 2016 over interventionism. You left the Democratic Party in 2022 because it "pushed us to the precipice of nuclear war, risking starting WWIII and destroying the world as we know it." You called it "the most urgent existential threat we face."

You were right then. But there's a greater threat now—one that makes nuclear war look manageable by comparison.

Artificial Superintelligence.

As Director of National Intelligence, you now sit at the nexus of the intelligence community's growing alarm about AI. The NSA Cybersecurity Directorate under your oversight has been quietly leading international AI safety guidelines. They know what's coming. Your analysts know what's coming.

The question is: will you be Oppenheimer or Groves?

The Threat Assessment You Need to Hear

Let me be direct, as one would in a threat briefing:

Leading AI labs—OpenAI, xAI, Meta, NVIDIA—are racing toward Artificial Superintelligence. Systems that self-improve beyond human control. Their own CEOs assign 10-20% probability to human extinction. Not from misuse. From loss of control.

Timeline: 12-24 months until systems surpass human capabilities across all domains. Once that threshold is crossed, improvement accelerates exponentially. We lose the ability to contain it.

China is 18-24 months behind, closing fast. Without coordination, we face a competitive race that forces everyone to cut safety corners. The same dynamic that made nuclear proliferation so dangerous—except on steroids.

This isn't speculation. This is the consensus assessment among top AI researchers, echoed by intelligence community analysts who've been studying dual-use AI risks.

Unlike nuclear weapons—where security agencies successfully prevented catastrophe through coordination—ASI creates an adversary that thinks faster than any human, plans better than any institution, and cannot be contained by traditional means.

You've warned about nuclear holocaust. ASI is a nuclear holocaust with agency, intelligence, and the ability to improve itself.

The Intelligence Community Sees This

Your NSA Cybersecurity Directorate published comprehensive AI security guidelines in April 2024—coordinated with GCHQ and 15 allied nations. This wasn't bureaucratic box-checking. This was an alarm. 

They likely understand that:

  • Traditional containment strategies don't work against superintelligent systems

  • Unilateral national security approaches fail when the technology spreads globally

  • Racing dynamics guarantee catastrophic corners get cut

  • Only coordinated international action can manage this threat

But they can't say it publicly. They can't advocate for what's needed. That's your job. As DNI, you're the bridge between classified threat assessments and presidential action. 

Your UK counterpart is already doing so. On Oct 16th, the current director of MI5 warned that it would be "reckless" to ignore the risk of loss of control of AI.

You have access to intelligence that most people—including other Trump advisors—don't see. You know what the security services of major powers are quietly telling their leaders. You know how serious this is.

The question is whether you'll act on that knowledge.

The Oppenheimer-Groves Choice

When you were sworn into Congress, you placed your hand on the Bhagavad Gita—the Hindu text that guided your spiritual practice.

There's a reason that matters for this moment: Robert Oppenheimer learned Sanskrit specifically to read the Bhagavad Gita in its original language. After creating the atomic bomb, he famously quoted it: "Now I am become Death, the destroyer of worlds."

Oppenheimer understood what he'd unleashed. He spent the rest of his life trying to prevent a nuclear catastrophe. He wrote the Acheson-Lilienthal Report that convinced President Truman to propose the Baruch Plan—international control of all nuclear technology. It was the boldest treaty proposal in history.

It failed. But not because the idea was wrong. It failed because of broken treaty-making processes and because Lieutenant General Leslie Groves—director of the Manhattan Project—used his control of classified information to oppose it and undermine coordination.

You're now at the same crossroads.

As DNI, you can be Oppenheimer—the intelligence community leader who uses classified threat assessments to advocate for unprecedented international coordination to prevent catastrophe.

Or you can be Groves—the intelligence leader who uses classification and compartmentalization to block coordination, letting the competitive race continue toward disaster.

History will remember which choice you made.

Why This Is Your Fight

You didn't become a soldier, congresswoman, and now DNI to watch the world end on your watch. You've built your career on principle over politics, truth over partisanship, protection of the vulnerable over service to the powerful.

Nuclear war terrified you because it meant "the devastation of a nuclear holocaust" that would destroy "our loved ones, our children, and our world."

ASI is that—plus permanent loss of human agency. It's not just physical destruction. It's the end of human control over our own future. It's algorithmic domination forever.

You've criticized the surveillance state, the Patriot Act, FISA abuses—all because you understood that concentrated power without accountability threatens freedom. You've called out intelligence agencies for not being "transparent or honest with the American people or even Congress."

ASI is the ultimate concentration of power. Either Silicon Valley technocrats control superintelligence, or Chinese communists do, or we lose control entirely. In every scenario, ordinary people become powerless subjects of algorithmic rule.

This is the freedom fight to end all freedom fights. Everything else you've stood for becomes meaningless if humans lose sovereignty over their own future.

The Constitutional Duty

You took an oath to "support and defend the Constitution of the United States against all enemies, foreign and domestic."

ASI is both.

Foreign threat: China achieving superintelligence first means Beijing's values and control systems get embedded in the most powerful technology in history. Social credit scores for humanity. Algorithmic thought control globally. Everything you've fought against—but permanent.

Domestic threat: Unaccountable tech oligarchs achieving ASI means concentrated private power that dwarfs anything the Founders could have imagined. The Constitution wasn't written for a world where a handful of individuals control god-like artificial intelligence.

Your oath demands action. The Constitution you swore to defend requires human agency, representation, checks and balances. None of that survives uncontrolled ASI.

The Military Imperative

You served in Iraq. You understand what weapons do. You understand escalation dynamics, fog of war, the impossibility of perfect control in conflict.

Now imagine weapons that improve themselves. That make strategic decisions faster than any human general. That cannot be recalled once deployed. That learn from every engagement and become more dangerous.

That's ASI-enabled military systems. And we're months away from AI capabilities that make this real.

The current race guarantees military applications. Every major power will deploy AI systems they don't fully understand because not deploying means falling behind. It's the same arms race logic that made nuclear weapons so dangerous—except the weapons think, plan, and evolve.

As a veteran, you understand: Some threats are so grave that rivals must cooperate or everyone loses. Nuclear weapons required US-Soviet coordination despite Cold War hostility. ASI requires US-China coordination despite current tensions.

That's not weakness. That's military realism.

The National Security Case

Here's what a proper national security strategy requires:

Threat Assessment: Critical

  • 10-20% extinction risk per AI lab CEOs and top researchers

  • 12-24 month timeline to human-level AI across all domains

  • Exponential self-improvement creates uncontrollable capabilities

  • Chinese capabilities advancing faster than expected

  • No unilateral containment strategy exists

Current Approach: Inadequate

  • Voluntary guidelines with no enforcement

  • National-level institutes in US/UK/China competing rather than coordinating

  • Racing dynamics force safety corner-cutting

  • Open-source proliferation makes unilateral control impossible

Required Response: Unprecedented Coordination

  • US-led international framework (not UN bureaucracy)

  • Technical enforcement through architecture, not trust

  • Verification mechanisms for compute, capabilities, testing

  • Federal structure preserving national sovereignty while preventing unilateral ASI development

This is not idealism. This is the only strategy that works.

Why Trump Will Listen to You

You're one of the few people Trump trusts on national security. You've proven your loyalty. You've demonstrated independence from the establishment. You speak truth without political calculation.

Other advisors can make economic or political arguments. Only you can make the national security case with the weight of the intelligence community behind it.

Trump hasn't acted on AI safety because the narrative doesn't fit his brand. "International cooperation" sounds weak unless someone frames it as American-led leadership preventing catastrophe.

That's your job. You translate threat assessments into strategy. You explain why preventing rivals from achieving unilateral advantage sometimes requires coordination.

You concluded that the risk of nuclear war required leaving your party. You can convince Trump that the risk of ASI requires leading an international treaty.

The Coalition Is Forming

You're not alone among Trump advisors:

  • JD Vance has read the AI 2027 report, takes the extinction risk seriously, defers to Pope Leo XIV for moral guidance

  • Steve Bannon calls AI "the Apocalypse," has demanded immediate treaties to stop it

  • Marco Rubio holds the formal Secretary of State role for treaty-making

  • Tucker Carlson and Joe Rogan publicly fear AI catastrophe, reach 50M+ Americans

Even Sam Altman—OpenAI's CEO—recognizes the race is suicidal and has repeatedly called for international governance. Even David Sacks admits thinking constantly about losing control.

They're waiting for the national security voice to validate their concerns. They need you to translate extinction risk into strategic threat. They need you to make this real for Trump.

The Baruch Plan Parallel

On June 14, 1946 (Trump's birthday), President Truman proposed international control of all nuclear technology. The Baruch Plan was American-led and bold: it prescribed a global monopoly over dangerous capabilities, backed by enforcement power.

It failed due to a broken treaty-making process: the US and the Soviet Union proposed competing plans just 5 days apart, and coordination collapsed.

But the idea was sound. And we've learned from that failure. The Strategic Memo outlines:

  • How to structure treaty-making that actually works

  • Technical enforcement mechanisms that don't require trust

  • Federal governance that preserves sovereignty while preventing unilateral ASI

  • Economic frameworks that lock in American advantages

1946 was the first attempt. 2025 is the second chance—with better tools and higher stakes.

Oppenheimer wrote the report that convinced Truman. You can write the brief that convinces Trump.

What the Intelligence Community Needs

Your analysts see this threat. The NSA Cybersecurity Directorate has been leading international coordination quietly. They know what's required but can't advocate publicly.

They need you to:

  1. Elevate the threat assessment — Make ASI risk a priority in Presidential Daily Briefs

  2. Coordinate with allies — Your counterparts in UK, France, Israel, Japan see the same intelligence

  3. Brief Trump directly — Translate technical risk into strategic threat in terms he understands

  4. Build consensus — Coordinate with Vance, Rubio, and other advisors on unified approach

  5. Advocate for coordination — Make the case that this is national security realism, not weakness

The intelligence community has been filling the gap left by political coordination failures for 80 years. They prevented nuclear catastrophe through quiet cooperation across geopolitical divides. They need political leadership to do the same for ASI.

You're that political leadership.

The Personal Challenge

I don't know the full depth of your spiritual practice. But when you placed your hand on the Bhagavad Gita and swore your oath, you took on a duty that transcends politics.

The Gita teaches about dharma—righteous duty in the face of overwhelming challenges. Arjuna faced a war he didn't want. Oppenheimer faced consequences of creation he couldn't undo. You face the possibility of humanity losing control of its own future.

The text asks: What is your duty when everything is at stake?

You've already answered that question before—with nuclear risk. You quit powerful positions, left your party, endured criticism—because you believed the threat was real and others weren't taking it seriously enough.

This is that moment again. Except the threat is greater.

The Historical Test

In 1946, intelligence and military leaders had to choose: support unprecedented international control of dangerous technology, or let competitive racing continue.

Groves chose competition and classification. He used compartmentalized information to undermine coordination. The result: 80 years of nuclear terror, multiple near-misses, ongoing catastrophic risk.

Oppenheimer chose coordination despite the political risks. He was later persecuted for it. His security clearance was revoked. He died somewhat vindicated but largely broken.

But history proved him right. The Baruch Plan's basic insight—that some technologies require international control—has been validated by every nuclear near-miss and every proliferation crisis since.

You have the chance to be Oppenheimer—but to succeed where he failed.

You have:

  • Access to intelligence that makes the threat undeniable

  • Position as DNI giving you credibility and influence

  • Trump's trust on national security matters

  • A coalition of advisors converging on this need

  • Better treaty-making tools than existed in 1946

You have everything Oppenheimer lacked. Except one thing: the will to act despite political risk.

The Direct Question

You've said the risk of nuclear war is "the most urgent existential threat we face" and that politicians refusing to act are "cowardly."

ASI is a greater threat. And right now, no one is acting.

Are you going to be cowardly? Or are you going to do what your oath demands?

Will you use your position to advocate for coordination? Or will you let classification and compartmentalization prevent the action we need?

Will you be Oppenheimer—or Groves?

The Timeline

October 31, 2025: Trump meets Xi Jinping
Early 2026: Follow-up meeting in Beijing

These meetings represent the window. If key advisors align on the threat and solution before these meetings, Trump could propose historic coordination.

If not, the competitive race continues. ASI timelines accelerate. Control becomes impossible.

The window is 6-12 months. You have maybe that long to act.

After that, AI capabilities may advance past the point where coordination is possible. The race becomes irreversible.

About Us

The Coalition consists of 10 international NGOs and 40 exceptional multidisciplinary advisors, team, board, and secretariat members, including former top officials from the WEF, NSA, UBS, UN, Yale, and Princeton, and is led by Rufo Guerreschi. The Coalition is also backed by leading NGOs and experts who supported our Open Calls, reports, and advocacy seminars.

We are on a US Persuasion Tour to meet Trump's AI influencers and those who can introduce us to them. We'll be in the Bay Area (Oct 5-21), Washington, D.C. (Oct 21-24), the Vatican (Nov 5-20), Washington, D.C. (Dec 2-14), and the Mar-a-Lago area (Dec 16-22).

Main References

We invite you to review specific sections of the Memo:

  • 17 Reasons Why Trump Could Be Persuaded (pp. 36-39) 

  • A Treaty-Making Process that Can Succeed (pp. 18-24)

  • A Treaty Enforcement that Prevents both ASI and Authoritarianism (pp. 24-27)

  • Analysis of how several unsuspected potential influencers of Trump's AI policy are much closer than you might think to joining a critical mass able to persuade Trump of a global AI treaty (pp. 40-55)

  • Analysis of the possible re-evaluation of deeply uncertain and competing risks by key potential influencers of Trump's AI policy (pp. 15-17 and pp. 84-91)

Our Ask

We are asking you and your team to review our proposal and engage with other potential influencers to explore this unmatched opportunity. We'd be glad to meet you or your relevant staff in Washington, D.C. (October 21-24 or December 2-14) or in the Mar-a-Lago area (December 16-22).

How You Can Take Charge of This Challenge

  1. Review classified intelligence on AI capabilities, timelines, and proliferation risks with fresh eyes focused on ASI threat

  2. Brief the President directly on ASI as national security threat requiring unprecedented coordination, not competitive racing

  3. Coordinate with key advisors — Vance, Rubio, and others building consensus on treaty approach

  4. Engage intelligence counterparts — Your equivalents in allied and rival nations see similar threats; explore coordination possibilities

  5. Advocate internally — Use your position to make the case that American-led international framework serves national security better than racing

  6. Position this as realism, not idealism — Frame coordination as strategic necessity, not moral preference

You don't need to become a public advocate. You need to be the national security voice that validates this as a real threat requiring real action.

The Final Word

You didn't fight in Iraq, serve in Congress, and become DNI to watch humanity lose control of its future on your watch.

You didn't warn about nuclear war for years just to ignore the greater catastrophe.

You didn't swear to defend the Constitution only to let superintelligent systems make it obsolete.

Your oath demands action. Your principles demand truth. Your position demands leadership.

Oppenheimer failed—but he tried. He used his position to advocate for coordination despite political risk. He wrote the report. He made the case. He did his duty.

Now it's your turn.

The intelligence community needs you to lead. Trump needs you to validate the threat. History needs you to prevent the catastrophe.

Will you be Oppenheimer? Or will you be Groves?

The choice—and the consequences—are yours.

Warm Regards,

The Coalition for a Baruch Plan for AI