The Deal of the Century:
A Third Way Between
Machine Supremacy and Human Tyranny
With a winner-take-all race to Superintelligence spiraling out of control, and four Trump-Xi summits planned for 2026, the US and China may soon pursue an extraordinarily bold AI treaty — just as Truman and Stalin suddenly did for nuclear technology in early 1946, as nations started racing to build atomic and hydrogen bombs.
Convened by the Coalition for a Baruch Plan for AI, as part of its Deal of the Century initiative
A by-invitation symposium and roundtables (25–35 participants) in Rome, June 18–19, 2026, convened by the Coalition for a Baruch Plan for AI. Leveraging a 356-page Strategic Memo of The Deal of the Century, developed with 24 experts, the convenings will explore whether a consensus can emerge among a critical mass of influential humanist leaders on the core requirements of a credible US-China-led AI treaty-making process — one that can be expected to reliably prevent both catastrophic loss of control and global authoritarianism. Participants include AI safety and governance experts, Vatican AI ethics and governance leaders, treaty-making scholars, US-China policy bridge figures, influential US and MAGA media figures, US politicians, and relevant advisors, staff, or envoys of top US AI lab leaders and key US administration AI officials. The goal is a confidential synthesis document for private circulation to senior decision-makers, to be developed into a strategic memo for presentation to the White House in September 2026.
A race that neither superpower wanted — and neither can win
The US and China are locked in an all-important, accelerating, ungoverned, winner-takes-all race for AI dominance — one that neither wanted, and that neither can win. So far, the US administration has aggressively fostered AI acceleration while opposing any federal or state safety regulation, to protect US and Western values from the risk of being radically outcompeted or subjugated by another superpower. This has served its purpose: the US has stayed ahead.
Yet, the strategic context is radically changing. China has followed largely the same script, despite its globalist rhetoric, and is rapidly closing the gap. Leading US and Chinese AI scientists, most US AI lab leaders, and a large majority of US voters are increasingly alarmed by clear and present risks of ungoverned AI: concentration of power, grave misuse, major conflicts, loss of control, and human replacement — on timelines of a few years, or less.
Given the pace of progress and current AI architectures, the race will almost inevitably produce AI beyond human control — Artificial Superintelligence — as most leading AI labs publicly acknowledge.
We are at a three-way fork. Continuing the current ungoverned race to ASI carries immense risks of concentration of power, loss of control, human disempowerment, and even replacement. Yet creating a treaty strong enough to reliably prevent those risks could produce a global authoritarian dystopia. Most are working to prevent one or the other, while we should be building a third way that reliably avoids both, is grounded in universal humanist values, and realizes AI's astounding potential for humanity and all sentient beings.
While China has been calling for multilateral global AI governance, the United States has stated that it totally rejects any role for the UN and multilateral institutions, calling instead on "the prudence and cooperation of statesmen" — a position surprisingly in line with that of UN Secretary-General António Guterres, who in 2023 said, "only member states can create it, not the Secretariat of the United Nations."
Only a timely, bold and proper US-China-led AI treaty — perceived as suitable by middle powers, large AI labs and their investors — can prevent these risks from materializing, and in doing so, unlock the astounding promises of AI.
Yet this direly needed policy shift is held back by a few influential informal advisors, some driven by anti-science sentiments, blinding short-term greed, or radical post-humanist visions, and by fears of a global treaty turning into authoritarianism that, while understandable, are exaggerated and addressable. These positions are not only catastrophically dangerous but held by a stark minority of US voters, and they increasingly create an enormous political liability for Trump and for 2028 US presidential candidates.
It is high time for Trump and the silent majority among his advisors (and Xi in parallel) to come together to draft a credible and resilient treaty-making process — just as Truman, Oppenheimer, Acheson and others did in the early months of 1946 leading up to the Baruch Plan — and turn AI into humanity's greatest invention and Trump's greatest legacy.
Building convergence on a credible treaty-making process
The immensity of the challenge turns it into a second historic opportunity — after 1946 — to sanely govern an emerging technology rather than be governed by it, and so realize the astounding promises of AI.
The Symposium and Roundtables will seek to answer the following question: What would the core requirements be for a credible US-China-led AI treaty-making process — one capable of maintaining human control, preventing destructive competition, future-proofing US economic leadership, ensuring governance subsidiarity, and grounding itself in shared humanist ethics? And under what conditions might such a process become politically viable?
The events will explore whether a consensus on a proper US-China AI treaty-making process may emerge among key humanist figures with the potential to inform the US President's AI policy (and their advisors and envoys) — as it did around Truman in early 1946. This effort draws on evidence of a widely shared situational awareness and humanist worldview among these key figures — and the influence that the Vatican's moral guidance and interfaith ethics work has had on many of them.
It will also seek to engage leading scientists, policy experts, AI lab representatives, and national security leaders from the US (and possibly China), as well as leading accelerationist and trans/post-humanist thinkers, and US administration officials.
What are actionable paths toward a credible treaty-making process? How can an AI treaty reliably constrain dangerous AI development and use? And how can it do so while reducing — rather than increasing — concentrations of power and wealth? Are there ethical questions that will need global answers and cannot be delegated under the subsidiarity principle?
A deliberately small group
Typically 25–35 individuals — selected for direct relevance to the questions at hand.
- Advisors, staff, or envoys of key figures with decisive potential to inform future US international AI policy.
- Vatican AI ethics leaders — officials and advisors within the Pontifical Academies, the Renaissance Foundation, and interfaith AI ethics initiatives.
- Treaty-making and governance scholars — experts in nuclear, biological, and AI arms control, international law, and constitutional convention design.
- National security and intelligence professionals — current and former officials from the US, China, and allied nations, with expertise in cybersecurity, verification, and enforcement.
- AI scientists and lab representatives — researchers at the frontier of capabilities, safety, and alignment.
- US-China policy bridge figures — scholars and practitioners with operational knowledge of both nations' AI governance ecosystems.
Participants are drawn primarily from the United States and the Vatican, but also possibly from China. The full participant list is confidential.
A three-way fork — with unlikely middle outcomes
Among leading AI thinkers, Peter Thiel has framed the challenge that humanity faces with AI better than anyone else, albeit in (heretical) theological terms. We are at a three-way fork with unlikely middle outcomes.
Yet Thiel has spent far more time denouncing the risk of an Antichrist in original ways than detailing his proposal for a third way — except for strong hints at post-humanist and elitist elements, positions currently held by only a small minority in the US and around the world.
Much remains underexplored in any public vision for a third way — particularly its governance architecture, enforcement mechanisms, and safeguards against power concentration.
What are the core requirements of such a third way? What are the key aspects of a global governance of AI, of a global treaty? How can the US administration best promote such a third way? Should the US and China lead the way?
The Vatican is uniquely positioned to catalyze what no secular body can.
Pope Leo XIV has made AI governance central to his papacy, and a recent initiative by his leading AI advisor Paolo Benanti — in line with a 2024 New Year's message by Pope Francis — has called for a bold AI treaty to prevent the most significant risks and ensure an ethical human future in the age of AI.
The Pontifical Academy for Life and its Renaissance Foundation have brought eleven world religious traditions to converge on the Rome Call for AI Ethics.
Senior US officials have recognized this unique role.
"The United States has an opportunity to lead on AI within the ethical framework of Western civilization — speech, privacy, and respect for human rights. The Church brings an essential human and moral dimension to that innovation."
— Brian Burch, US Ambassador to the Holy See, February 2026

Vice President JD Vance has gone further, publicly entrusting moral leadership on AI to the Church in May 2025:
"The American government is not equipped to provide moral leadership, at least full-scale moral leadership, following all the changes that are going to come along with AI. I think the Church is."
— JD Vance, Vice President of the United States, May 2025

Confirming the crucial importance of these debates within and around the US administration, and the central role of the Vatican, Peter Thiel has starkly criticized such statements by Vance and has even decided to hold four days of lectures on AI and the Antichrist in Rome on March 15–18, 2026.
Rome offers what no other venue can: neutral moral ground, interfaith credibility, and direct channels to consciences that no diplomatic cable can reach. For the Vatican, this convening offers a concrete channel from moral leadership to policy influence — connecting its ethical frameworks directly to the advisors and envoys who shape US AI policy. The presence of national security professionals, AI scientists, and US-China policy experts ensures that Vatican contributions move beyond declarations toward co-determining the all-important AI treaty-making processes to come.
A narrow opening. A rare alignment. The Ultimate Debate.
Multiple Trump-Xi summits are planned for 2026, and public opinion has shifted decisively.
What's missing are politically actionable paths for the US administration, supported and validated by a critical mass of trusted advisors, experts and key stakeholders — as it happened for Truman in early 1946, leading him to present history's boldest treaty proposal, the Baruch Plan.
The stakes of this ultimate debate are already playing out in Rome. Peter Thiel will deliver lectures on "AI and the Antichrist" in Rome on March 15–18, 2026 — a direct response to Vance's public deference to the Church on AI ethics. The intellectual battle over the direction of US AI policy is converging on this city.
What the convening seeks to produce
- A confidential synthesis document: mapping areas of convergence on treaty architecture, enforcement mechanisms, and the Vatican's potential institutional role — intended for private circulation to senior advisors and decision-makers in the US administration and the Vatican.
- A shared situational assessment of the ASI race: bridging US national security, Chinese strategic, Vatican ethical, and AI lab perspectives into a common framework that does not yet exist anywhere.
- A preliminary blueprint for a credible treaty-making process: identifying minimum requirements that both Washington and Beijing could accept, for an initial bilateral emergency deal and for their possible convening of a sort of realist (GDP-weighted) "constitutional convention for AI" to onboard sufficient middle powers to ensure the treaty's enforceability.
- A network of trusted interlocutors: across the Vatican, US policy, AI lab, and US-China diplomatic communities — capable of sustained, private coordination beyond the convening itself.
- A roadmap for Vatican engagement: clarifying how the Vatican's moral authority and interfaith infrastructure could be operationalized in support of a US-China-led treaty process, building on the Rome Call, the Coexistence Appeal, and Pope Leo XIV's anticipated encyclical.
The Coalition for a Baruch Plan for AI
The Coalition for a Baruch Plan for AI comprises 10 international NGOs and 40+ advisors — including former officials of the UN, the National Security Agency, the World Economic Forum, Yale, and Princeton — plus the 24 contributors to the Strategic Memo. It was launched in July 2024 and seed-funded by Jaan Tallinn's Survival and Flourishing Fund.
The initiative draws its name from the Baruch Plan — history's boldest treaty proposal, presented by President Truman to the United Nations on June 14, 1946, for the international control of nuclear weapons and energy — on the very day Donald Trump was born.
Other Organizations & Participants
Additional organizations and individuals are participating in or exploring engagement with the Consultations and Convening.
The full participant list is confidential. These meetings are by invitation only.
If your organization is interested in participating, or for any inquiries or introductions: