How Vance and Pope Leo XIV Could Lead Trump to a Historic Global AI Treaty

In 1946, amidst mounting fears of nuclear armageddon, a small circle of advisors persuaded a pragmatic President Truman to propose—on the very day Donald Trump was born—what remains by far history's boldest treaty proposal: exclusive international control of all dangerous nuclear activities, with the benefits of nuclear energy shared with all nations in return for compliance.

Today, rapidly mounting fears about AI—and statements by JD Vance, Pope Leo XIV, and other key potential influencers of Trump's AI policy—signal that the same dynamics are ripe to play out for AI and beyond. Trump could succeed where Truman failed: finish what he started, preempt China, secure US leadership, and prevent catastrophe—inking a Deal of the Century worth more than 100 Nobel Peace Prizes.

The Astounding Historical Parallel

Leading up to early 1946, nuclear scientists were warning the public ever more loudly—and President Truman privately—of the enormous proliferation risks of increasingly potent nuclear weapons.

US voters' fears of nuclear war were high and rising. While support for international control oscillated, partly because key facts were withheld from the public, 70% of Americans said atomic weapons should never be used again.

The approval ratings of President Truman—a highly pragmatic president who had dropped the Bomb, was deeply anti-Soviet, and was certainly no globalist—fell 40% between June 1945 and February 1946 due to economic woes.

At that exact time, a handful of key advisors—especially Acheson, Oppenheimer, and Baruch—gained Truman's ear and produced strategic memos convincing him that preventing nuclear technology's immense risks (and raising his approval ratings) required a treaty far bolder than any ever conceived, and demonstrating its political and technical feasibility.

The outcome: on June 14th, 1946, Truman proposed what remains by far the boldest treaty proposal in the history of mankind, the Baruch Plan—exclusive, veto-less international control of all dangerous nuclear activities and other dangerous technologies, paired with a sharing of the upsides of nuclear energy. It passed the UN Atomic Energy Commission 10–0, with only the Soviet Union and Poland abstaining—then died at the Security Council, blocked by the Soviet veto.

Most scholars concluded that the Baruch Plan's failure stemmed from the Soviets' fear of a perpetual minority in the proposed authority and from unresolved disagreements about the treaty's phasing. While the leading academic text on the plan judged its failure virtually inevitable, we argue these differences could have been overcome, given the enormous win-win at stake, had the primary impediment been removed: the exceedingly insufficient bandwidth of diplomatic communications relative to the complexity of the agreement and the timelines required.

While a direct secure US-UK voice line had been active since 1943, from Bohr's first proposal in 1944 to the UN vote in December 1946 the world's most consequential negotiations—those between the US and the USSR—relied on sparse meetings, diplomatic couriers taking days, and coded telegrams constrained by laborious encryption and decryption, all amidst post-war chaos and extreme secrecy. That same academic text admitted that secrecy, "muddled policymaking," and poor-quality information plagued the process on both sides.

Some of the most influential AI experts and leaders have pointed to the Baruch Plan as a model for AI governance, including Yoshua Bengio (the most cited AI scientist), Ian Hogarth (UK AI Safety Institute), Jack Clark (Head of Policy at Anthropic), and Jaan Tallinn (Future of Life Institute).

A Second Chance

Eighty years later, in early 2026, another US president—more unconventional and unpredictable, but just as pragmatic—finds himself in a remarkably similar situation with AI. Most AI scientists and leaders are issuing ever more urgent warnings.

In October 2025, 63% of US citizens believed it was "somewhat or very likely" that "humans won't be able to control [AI] anymore," and 53% believed that "AI will destroy humanity"—both up roughly 20% from six months earlier. Trump is at his lowest approval ratings, around 35–40%, and needs a big political win. Xi has been calling repeatedly for global AI governance since China's Global AI Governance Initiative of 2023, and likely genuinely so. And 77% of US voters supported a strong international AI treaty.

Yet Trump has not acted. Public outcry has yet to peak—something a notable AI accident could accelerate. Furthermore, while concerns about AI safety and calls for an AI treaty are increasing among key potential influencers of Trump's AI policy—and even among MAGA opinion leaders like Bannon, Carlson, Rogan, Beck, and DeSantis—we have yet to see the kind of concerted effort by key advisors and influencers that produced the 1946 Acheson-Lilienthal Report and the lobbying around it.

Just as with the Baruch Plan, such a treaty would necessarily extend to both civilian and military domains and to other dangerous technologies. Given the treaty's huge scope and the enforcement mechanisms that would need to be put in place, it would likely eventually extend to nuclear weapons, where the leading edge of capabilities is increasingly tied to AI dominance.

So there is a unique opportunity for the few individuals positioned to influence Trump's AI policy to shape history at its most crucial juncture, by convincing him to pursue such a treaty on purely pragmatic grounds: preventing risks to his life and his family's, averting the risk of China prevailing, and securing and future-proofing US economic advantage. By inking the "Deal of the Century," Trump would build a legacy worth 100 Nobel Peace Prizes before retiring in 2029, cherished by most Americans and by much of the world.

In a fateful coincidence, the Baruch Plan was proposed on the very day Donald Trump was born in Queens—a fact that, given the psychological forces at play, could prove surprisingly consequential.

Won’t an AI Treaty Lead to Global Authoritarianism?

Some—especially US AI leaders—oppose the idea, fearing that a global AI treaty co-led by two "strong-executive" leaders, and requiring extreme means of surveillance to enforce, would end up increasing the concentration of power and reducing freedom, entrenching an oligarchic authoritarianism—or would simply fail to prevent the emergence of ASI altogether.

While that is a concrete and severe risk, such a treaty can be designed to reduce rather than increase the concentration of power—and under plausible scenarios is most likely to do so—for several reasons. Autocratic leaders and their security agencies are the least interested in delegating too much power to the global level, precisely to preserve their national sovereignty, and will push for firmly federalist institutions.

More than half of the key potential Trump AI policy influencers sympathetic to the idea of a treaty—among them Suleyman, Hassabis, Altman, Pope Leo XIV, Amodei, and Rogan—hold democratic and/or Catholic humanist worldviews, while the Pope and Altman care deeply about the subsidiarity principle (as even Thiel does, at least rhetorically).

We already live in an extremely surveilled world—largely unaccountable and shrouded in secrecy, justified mainly by a constant global state of anarchy and (cold) war. By carefully accruing such powers to the global level, it would be in every nation's (and AI firm's) interest to render those surveillance powers accountable, again to preserve their sovereignty and, indirectly, that of their citizens. The joint fast-tracked creation of mutually trusted treaty enforcement technologies would also force substantially increased global transparency and accountability towards nations, non-state stakeholders, and citizens.

The Vance-Pope Bridge and Treaty-Making that Can Succeed

While the initiative for such an alliance could come from other key potential influencers of Trump's AI policy, we believe an undeclared, ironclad partnership between Vance and the Pope could ultimately prove key. Here's why.

Pope Leo XIV chose his name in reference to the AI revolution and has dedicated his papacy to it. His top AI advisor, Paolo Benanti, has gathered top AI scientists around a call for a bold and humanist AI treaty—one explicitly called for by Pope Francis. The Vatican has gathered major world religions and US big tech around shared AI ethics. When asked about the seriousness of AI extinction risks, JD Vance said he had read the AI 2027 report, adding that the American government is "not equipped to provide moral leadership" on AI and that "the Church is." Most notably and revealingly, these positions were starkly opposed—as a dangerous "Caesar-Papist fusion"—by Peter Thiel, Vance's mentor and arguably the main driver of Trump's current accelerationist and de facto post-humanist AI policies.

Should such an alliance succeed in convincing Trump, he could begin at the first of his four 2026 meetings with Xi, this April. At that stage, the Vatican's role would give way to a key role for Singapore, given its uniquely neutral position between China and the US and its president's uniquely forceful statements about global AI coordination.

Trump and Xi should start by inking an emergency bilateral treaty focused on (a) transparency for frontier AI and (b) mitigating AI-driven biological risks. In parallel, they should launch a "global Apollo Program" to jointly build, at wartime speed, a mutually trusted socio-technical infrastructure for ultra-high-bandwidth diplomatic communications and treaty verification—laying the basis for extraordinarily effective treaty-making and for sufficiently trusted and trustworthy treaty enforcement.

To prevent the fatal use of vetoes, this second phase should be based on the constitutional convention model—a realist one, with vote weighting based on GDP and population—extensively involving religions, security agencies, and leading AI firms, to succeed where Truman and Stalin failed in 1946.

The window is narrow. The April Trump-Xi meeting is fast approaching. The question is not whether a Baruch Plan for AI is feasible—it is whether anyone with access to these leaders will make the case before the window closes.


Rufo Guerreschi is the president of the Rome-based Coalition for a Baruch Plan for AI. He leads its Deal of the Century initiative to persuade Trump to co-lead with Xi a bold, timely, and proper global AI treaty, and is the lead author of its 356-page Strategic Memo.

APPENDIX: Go Deeper into the Arguments Behind This Post

This blog post summarizes key arguments developed in depth across the 356-page Strategic Memo of The Deal of the Century (v2.6). Here's where to find the detailed analysis:

  • Deep profiles of over a dozen key potential influencers of Trump's AI policy — 150 pages of detailed analysis (pp. 170–324).

  • Why Trump can be persuaded on purely pragmatic grounds — including 17 specific reasons, from approval ratings to voter concerns about AI extinction risk — is laid out in "Why and How Trump Could Be Persuaded" (pp. 60–67).

  • How a handful of key influencers shaped the original Baruch Plan — and why the same dynamics apply today — is analyzed in "When the Impossible Became Policy: Lessons from 1946" (pp. 27–30) and "1946: How Close We Came—And Why We Failed" (p. 69).

  • The Vance profile — his growing autonomy from Thiel, the "Pope Card," and his pivotal evolution on AI — is examined in "J.D. Vance" (pp. 178–181).

  • Pope Leo XIV's unique positioning — from his choice of papal name to the Benanti-led AI ethics network and alliance potential with key influencers — is detailed in "Pope Leo XIV" (pp. 199–215).

  • The Thiel-Vance tension on AI and Thiel's framing of the "Antichrist vs. Armageddon" trilemma — which Vance's Vatican turn directly challenges — is unpacked in "Why Our Three-Way Fork is Best Framed by Peter Thiel" (pp. 81–85) and "Peter Thiel (Friend or Foe?)" (pp. 285–299).

  • The treaty-making architecture — ultra-high-bandwidth bilateral negotiations, a realist constitutional convention model, and the April 2026 window — is detailed in "A Treaty-Making Process that Can Succeed" (pp. 103–119) and the "Treaty-Making Roadmap" (pp. 140–152).

  • Why a global AI treaty need not lead to authoritarianism — and how mutual distrust between superpowers paradoxically creates transparency — is argued in "The Global Oligarchic Autocracy Risk—And How It Can Be Avoided" (pp. 124–129).

Rufo Guerreschi

I am a lifetime activist, entrepreneur, and researcher in the area of digital civil rights and leading-edge IT security and privacy – living between Zurich and Rome.

https://www.rufoguerreschi.com/