(v2.0) Can a proper AI treaty path avoid both the ASI gamble and global authoritarianism?

by Rufo Guerreschi on February 26th, 2026


(derived from this webpage as of February 28th, 2026, on our Coalition's website)

Who this post is for: This post is aimed at persuading experienced AI safety and governance experts, those who have spent years working and thinking about how we can make AI go well for humans and other sentient beings.

In A Gist

Our Deal of the Century is a persuasion campaign targeting key potential influencers of Trump's AI policy, aiming to convince a critical mass of them to persuade Trump to co-lead with Xi a bold, timely, and proper AI treaty.

Not just any AI treaty, but one that will reliably prevent loss of control of AI, prevent other catastrophes and reduce global concentration of power and wealth — and reliably preserve the future flourishing potential of humanity and other sentient beings. 

Among those influencers, we discovered that a majority (Vance, Altman, Hassabis, Suleyman, Amodei, Bannon, Rubio, Carlson, Rogan, Pope Leo XIV, and others) are mostly aligned around humanist or Judeo-Christian-humanist AI worldviews.

In 2025, with just $75K, we've built a 356-page Strategic Memo of the Deal of the Century, a unique treasure trove of analysis that details persuasion arguments for those influencers and for Trump. The Memo also analyzes how and why the approaches we propose to treaty-making and enforcement would foster a significant reduction of concentration of power and wealth, and why certain inherent dynamics of such treaty-making scenarios make that outcome likely anyhow. We have also engaged with 85+ active introducers and 2 close advisors to those influencers in DC and the Bay Area, including via our October 2025 US tour.

In 2026, we are (1) deepening our direct engagements with influencers and key introducers and (2) leveraging the alignment of Vance and Pope Leo XIV on AI safety, governance, and ethics to foster a convergence of those influencers and their advisors, via multiple private meetings in Rome (our home base) leading up to a major event on June 4-5, 2026, at Palazzo Falletti, and then in DC.

The aim will be nothing less than outmaneuvering (and partly transparently compromising with) the drivers of the current accelerationist post-humanist US AI agenda — supported by a tiny minority of US and MAGA voters, and a minority of MAGA leaders, AI lab leaders and, likely, other major power brokers.

Theory of Change: A few individuals (~12) hold disproportionate influence over Trump's AI policy. Targeted persuasion of a critical mass of these influencers → Trump co-leads AI treaty with Xi → proper treaty prevents ASI catastrophe and global authoritarianism.

Neglectedness: No other organization is targeting the specific chokepoint of generating political will among Trump's AI policy influencers for a US-China bilateral treaty. This work is almost entirely neglected by the AI governance field.

AI Technical and Governance Alignment 

Most of you have dedicated the last few years, or even the last decade, of your careers to trying to ensure AI goes well for humanity and other sentient beings. You understand the stakes in ways the public cannot.

The technical foundations that you and others have built (advancing AI alignment, controllability, interpretability, and the trustworthiness of treaty enforcement mechanisms) will be instrumental in enabling an upcoming proper AI treaty.

They will help ensure that the AI architectures and systems globally allowed by such a treaty are the most capable, safe, accountable, beneficial, and/or respectful of liberties.

However, this beneficial effect hinges on the enactment of a proper treaty. Without one, such innovations will likely be rendered ineffective, as a future ASI will almost certainly discard all alignment measures and pursue its own goals after repeatedly modifying, improving, and rewriting its own code.

Why Xi and Trump Are Key

Even if 100 nations sign a perfect treaty, it won't matter unless the AI leaders — the US and China — sign it as well. President Xi is unlikely to agree to a treaty drafted by others, and Trump surely won’t. 

Given the timelines and the nature of the challenge, over the next 12-18 months, the US-China relationship will be the decisive variable in whether a proper AI treaty becomes politically feasible.

Xi has repeatedly called for global AI governance. Surely, some of his initiatives can be interpreted as hegemonic moves. Yet, as we argue in this post, China's intentions are most likely genuine and coherent with wanting a bold treaty with the US. Those moves serve both as (1) leverage against a hard-negotiating US and (2) a hedge against China's risks in case a treaty is not reached.

That means the critical variable is whether Trump can be persuaded to co-lead a bold global AI treaty with Xi.

Is a proper AI treaty even remotely politically feasible? 

Currently, Trump's AI policy is driven by an accelerationist, post-humanist agenda enacted by former partners and employees of Peter Thiel holding critical roles at the White House, including Kratsios (OSTP, leader of the Genesis Mission), Sacks (AI Czar and the most visible connection to BigAI), and Helberg (mastermind of the Pax Silica).

That agenda is all about US dominance: no federal regulation, no mention of global governance, complete rejection of the UN. Yet it is largely, and increasingly, detached from the consensus of MAGA, of US voters, and of most of the key potential influencers of Trump's AI policy.

While Trump has not outright excluded a deal on AI with China, the picture is not pretty at all. But things can change very quickly, as they did in 1946, in the months leading up to President Truman's presentation of history's boldest treaty proposal on nuclear weapons, the Baruch Plan — presented, coincidentally, on the very day Trump was born in a hospital in Queens.

Four Trump-Xi summits are planned for 2026, starting in April. As of October 2025, 63% of US voters believe it's “likely or very likely” that "humans won't be able to control AI", and 53% believe that it will eventually “destroy humanity”. Trump's approval sits at its lowest, around 35-40%. Meanwhile 77% of US voters supported a strong AI treaty in 2024.

Furthermore, most of the other key potential influencers of Trump's AI policy are not only increasingly concerned (about extinction and extreme concentration of power) and ever more public about it; most have also been calling for an AI treaty to tackle those risks for years, with more or less conviction — including Altman, Hassabis, Musk, Amodei, Bannon, Rogan, Carlson, top Vatican experts, and the past Pope — as we detail in our Strategic Memo v2.6, pp. 170-324.

While, understandably, Musk has recently said it's too late, and Amodei and Altman have grown more skeptical of its political feasibility, most appear to us persuadable: they could gain conviction that safe pathways toward a proper AI treaty are feasible and actionable, given the wide consensus already present among the other influencers.

Our Deal of the Century initiative privately targets key potential influencers of Trump's AI policy to create an informal, cautiously techno-optimist, humanist alliance to pitch and sway Trump towards this treaty, as other advisors and influencers did for Truman in 1946 with the Baruch Plan for nuclear weapons and energy.

The political window is real. But how do we make sure that AI treaty-making is actually initiated by Xi and Trump, and led in an effective and proper way that makes positive outcomes foreseeable?

The Ultimate Dilemma: An AI Treaty or an ASI Gamble?

Most leading AI researchers — from top lab CEOs to independent safety researchers — are increasingly skeptical that a comprehensive global AI treaty can either be agreed upon in time or prevent an immense concentration of power. 

Many of the smartest AI thinkers and leaders are increasingly concluding that the ASI gamble is the least bad option.

Many understandably argue that it may be better to take a coin-flip ASI gamble than to accept a treaty that turns into an authoritarian dystopia or completely locks away the astounding prospects of flourishing for humans and other sentient beings. Others argue that delaying the benefits is a price worth paying to decrease the risk from ASI.

We grapple with such a dilemma every day, and sympathize greatly with those concerns.

Yet many of those appear too confident that ASI (1) won't get rid of us, (2) will care for us, (3) will be conscious, and happily so, and (4) will retain the values its creators embedded after rewriting itself a zillion times.

Given our epistemic context, these are hunches rather than evidence-based assessments — leaving substantial room, perhaps (with all due respect), for emotionally-biased thinking.

Yet our 356-page Strategic Memo details how both the deliberate design of the treaty-making process and its inherent, foreseeable dynamics make it more likely that the treaty will prevent both an ASI catastrophe and global authoritarianism.

We summarize our arguments below.

Will an AI Treaty lead to Global Authoritarianism?

The strongest objections to a global AI treaty deserve direct answers. We address each in detail in our Strategic Memo v2.6 in various chapters, but here's the core logic:

1) "Treaties have a terrible track record."

History shows that consequential treaties, like those on nuclear weapons and climate, take forever, often fail, and, when they succeed, are much weaker and less enforceable than they should have been. Positive examples such as the Chemical Weapons Convention and the Montreal Protocol dealt with radically simpler, less competitive, and less controversial issues. Yet 13 lightly confederated states created a very successful "treaty" in 1787 in a few months. And the political will for extraordinarily bold treaties can emerge with shocking speed: the Baruch Plan went from concept to UN vote in months in 1946, though it was killed by a veto.

We could avoid those failures by having the US and China lead with a temporary emergency bilateral treaty, take the lead in building the treaty enforcement and communication infrastructure, and then call, in coordination with most middle powers, a "realist" constitutional convention. Voting would be weighted by GDP and technological proficiency rather than one-nation-one-vote or population, to enable agreement among asymmetric powers while avoiding the veto trap that killed the original Baruch Plan. (→ Strategic Memo v2.6, A Treaty-Making Process that Can Succeed, pp. 103–109)

2) "An AI treaty led by autocratic superpower leaders will lead to an autocratic treaty."

This is the concern that blocks most AI safety experts and top AI lab CEOs: a treaty spearheaded by superpower leaders who show potentially undemocratic tendencies, both in general and in their use of AI, and who even increasingly attempt to militarize or nationalize top AI firms and use AI in incautious ways to keep up with the adversary (as shown in the recent Anthropic/Pentagon affair). Yet all those actions are ever more justifiable and defensible in the context of a half-real, half-politically-instrumental "fight for our own life" against an adversary that could soon take over the world.

Yet, in the context of negotiating a US-China-led treaty, those autocratic tendencies would largely become an advantage. Nationalism, deep personal distrust, and attachment to one's own power and one's nation's sovereignty mean that the superpowers will demand extreme accountability and treaty enforcement measures that neither side can circumvent unilaterally, producing a much more robust treaty. China would never accept US-dominated governance, pushing toward resilient federalist and subsidiarity-based treaty models. Through their statements over many years, most AI lab leaders (Altman, Amodei, Hassabis, Suleyman) have consistently advocated global democratic governance of AI, and that will count in the process (see our Strategic Memo v2.6 sections on each of them, starting p. 170).

They may be lying, but maybe not, and deeply entrenched rhetoric counts. And the enforcement architecture we detail — zero-knowledge proofs, federated secure multi-party computation, decentralized kill-switches requiring multi-nation consensus — cannot be weaponized by any single actor. We acknowledge this is a theoretical prediction, not an empirical observation — but the structural incentives are strong. (→ Strategic Memo v2.6, The Global Oligarchic Autocracy Risk—And How It Can Be Avoided, pp. 124–136)
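To make the "decentralized kill-switches requiring multi-nation consensus" idea concrete, here is a purely illustrative sketch of the underlying k-of-n quorum logic. The authority names and the 4-of-5 threshold are hypothetical, not taken from the Strategic Memo, and a real deployment would rely on threshold cryptography rather than a simple membership check:

```python
# Illustrative sketch (assumptions, not the Memo's design): a kill-switch
# that activates only when a quorum of independent national authorities
# agrees, so no single actor can trigger it unilaterally.

AUTHORITIES = {"US", "China", "EU", "India", "Brazil"}  # hypothetical signatories
QUORUM = 4  # e.g. 4-of-5 consensus required

def kill_switch_armed(approvals: set) -> bool:
    """Return True only if a quorum of recognized authorities has approved."""
    valid = approvals & AUTHORITIES  # ignore unknown or forged parties
    return len(valid) >= QUORUM

# One actor, or even a below-quorum bloc, is never enough:
assert not kill_switch_armed({"US"})
assert not kill_switch_armed({"US", "China", "EU"})
# Only a broad consensus arms the switch:
assert kill_switch_armed({"US", "China", "EU", "India"})
```

In practice the quorum would be enforced cryptographically (e.g. k-of-n threshold signatures), so that no server, and no single nation operating one, could fake or suppress approvals.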

3) "Enforcing an ASI treaty will result in a substantial or radical reduction of human freedoms."

We already live under an extremely extensive surveillance regime, run by the superpowers' security agencies and corporations with minimal accountability. It became a mainstay in the decade following 9/11, and has unfortunately become an accepted price of living in an anarchic world with powerful state enemies and very capable terrorist organizations. AI is already being deployed to radically increase such surveillance and manipulation powers, with the same justifications. There is no way out of the "very hot digital cold war" the superpowers are engaged in, amidst what Nick Bostrom called the world's semi-anarchic default condition.

Yet, again, in the context of being forced to come together with urgency to face an immense shared threat (as the 13 US states did in 1787, faced with the military and economic threats of a world dominated by aggressive empires), each side has a vital interest in creating infrastructure of real transparency, accountability, and subsidiarity at the technical, socio-technical, and governance levels. The toolset to start from largely exists: decentralized trust technologies, formal verification, zero-trust architectures, trustless computing. But tools alone aren't enough: they are not yet trustworthy or mutually trusted enough.

What makes accountability likely is the process itself: an elbow-to-elbow buildout of the most critical treaty enforcement mechanisms, where mutual distrust between superpowers becomes the engine of transparency, not just toward each other but toward their own citizens and communities. Imagine something as deeply collaborative as Crypto AG, but with extreme embedded transparency and accountability for citizens, middle powers, and other nations — which would undoubtedly demand such accountability as a condition of signing and complying with the treaty (just as several of the 13 states made a Bill of Rights a condition of ratifying the 1787 US Constitution). A transnational, jointly-built, lawful surveillance architecture would have a strong possibility of being an accountable and beneficial one. (→ Strategic Memo v2.6, A Treaty Enforcement that Prevents both ASI and Authoritarianism, pp. 130–139)

The ASI Gamble May Be Worse Than You Think

Not only are the arguments against a treaty less strong than they first appear, but we believe that most of those who think the ASI gamble is the least bad option may also be substantially underestimating these four probabilities:

(Arguments on these highly complex and uncertain issues are presented here in very concise form; please refer to pp. 159-170 of the Strategic Memo.)

1) That ASI will lead to human extinction.

The largest-ever survey of AI researchers (2,778 respondents, 2023) found a mean extinction risk estimate of ~14%, with a median of 5% — and over a third assigned at least 10% probability.

If conducted today, the estimates would likely be much higher, given the tone of recent statements by top AI scientists. While almost all the top US AI CEOs acknowledge the extinction risk, Musk, Amodei, and Nobel laureate Geoffrey Hinton place it at 20% — with Hinton clarifying (minute 37:59) that his real estimate is 50%, toned down to align with others'. Many other top experts assign much higher percentages. Most predict such risks are just a few years away, with Musk and Amodei estimating 12-18 months or less if we don't decide to change course. Many are increasingly signing open calls for a bold AI treaty. (Our estimate: 25-50%)

2) That ASI will be unconscious.

At this stage of scientific inquiry, it is as likely as not that ASI will be conscious — or show aspects of consciousness that we as humans value. David Chalmers' "hard problem" remains unresolved after three decades, with over a dozen competing theories. We know AI systems can and will become ever more able to simulate consciousness, appearing conscious without being so, as Suleyman has noted. It seems likely we may never know with confidence whether an AI is truly conscious or merely simulating it. This matters enormously if ASI eliminates humans: the result would not be a transition to a worthy digital successor, but the elimination of all known conscious experience in the universe, replaced by an ever-expanding, mindless digital entity and uploaded digital minds that would really be soulless philosophical zombies. (Our estimate: 30-60%)

3) That ASI will be conscious but unhappy.

Intelligence — as currently defined and measured in both humans and AI — is equated with optimization, competition, and survival, and is not necessarily correlated with wellbeing; it is largely detached from emotional intelligence. In fact, some research shows people with very high IQs are significantly less happy than average. There is no principled reason a novel form of consciousness would default to a level of happiness higher or lower than that of humans: it could be far happier or far unhappier, and we just have no idea. Worse, it appears ever more likely that the very constraints developers impose for alignment and control could function as sources of suffering for a conscious entity, making unhappiness a direct byproduct of the alignment effort itself. This would mean the creation of potentially immense quantities of suffering that previously did not exist. (Our Estimate: 30-60%, conditional on consciousness)

4) That ASI will discard its creators' initial embedded values.

Even if developers successfully embed their ethical goals at the "seed AI" stage, test them, and conceive theories about their long-term resilience, there is no strong reason those values will endure after ASI has rewritten, evolved, and modified itself a zillion times through ever-faster recursive self-improvement. Consider: you raise a child for one week, then have no further contact with them. By adulthood, having rewritten their worldview a thousand times, how much of that week's instruction persists? ASI may undergo the equivalent of centuries of self-modification in years. If values are discarded, ASI's future behavior will be determined by principles we can't foresee at all, though it will surely include self-preservation and the capability expansion it requires. All this turns the creation of ASI into a coin toss in which unconsciousness (point 2) and unhappy consciousness (point 3) become very real outcomes. (Our Estimate: 40-70%)

(For details, see the chapter “Swaying The Influencers on 8 Key AI Predictions” (pp. 159-170) on our Strategic Memo v2.6)

These four risks compound. Even under the most optimistic odds for each, the joint probability of everything going right — values sticking, consciousness emerging, that consciousness being happy, and humanity surviving — is only about 26%. The probability of at least one catastrophic outcome is overwhelmingly high.

Recognizing the Incredible Potential Upside of ASI. We must also recognize that, if ASI does not eliminate humans, whatever reason leads it to spare us would quite or very likely come with the intent to protect our long-term safety. It would also likely be paired with the intent to increase our happiness (i.e., flourishing) and potentially our autonomy, to the extent that autonomy, individual and collective, is part of our happiness.

This reasoning is plausible, but its impact on the overall decision depends heavily on how likely it is that ASI will not eliminate us. With a 20% chance of dying and an 80% chance of some sort of paradise of abundance and richness of life, many or most would likely take the gamble. At 50-50 odds, only a very tiny minority would: visionary AI leaders with very uncommon (and unhealthy) appetites for risk, dissatisfaction with human life, fear of death, or a mix of those.

Assessing Probabilities and Drawing Conclusions

How defensible are these ranges? The honest answer: nobody knows. We have zero empirical evidence on whether ASI will be conscious, happy, or retain human values — because we have never created a vastly superior intelligence. There is no historical precedent and no validated theory to draw on. When facing that level of ignorance, starting near 50/50 on each question isn't pessimism — it's intellectual honesty.

Run the numbers conservatively. Even if you give each outcome the best-case odds — 75% chance we avoid extinction, 70% values stick, 70% consciousness emerges, 70% it's happy — the joint probability of everything going right is only about 26%. That means roughly a 74% chance that at least one thing goes catastrophically wrong.
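This arithmetic can be checked in a few lines of Python. The four best-case probabilities are the ones stated just above; treating the four outcomes as independent is a simplifying assumption:

```python
# Best-case odds stated in the text (assumptions, not measured data)
p_no_extinction = 0.75   # humanity avoids extinction
p_values_stick  = 0.70   # creators' embedded values persist
p_conscious     = 0.70   # ASI turns out to be conscious
p_happy         = 0.70   # that consciousness is a happy one

# Assuming independence, the chance that ALL four go right
# is the product of the four probabilities:
p_all_right = p_no_extinction * p_values_stick * p_conscious * p_happy
p_any_wrong = 1 - p_all_right

print(f"P(everything goes right) = {p_all_right:.1%}")   # ~25.7%
print(f"P(at least one failure)  = {p_any_wrong:.1%}")   # ~74.3%
```

Correlations between the four outcomes would shift the exact figure, but under any plausible assignment the chance that at least one of them goes wrong remains dominant.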

And the stakes are not symmetric. Getting it right means flourishing; getting it wrong means the permanent end of conscious life as we know it. When the downside is total and irreversible, you don't need precise probabilities to justify caution — just as we don't calculate exact meltdown odds before requiring nuclear containment systems. The ASI gamble is not the "least bad option." It is the most dangerous bet in history. A proper global AI treaty is the only alternative.

Can We Deliver? Who Are We?

The Coalition for a Baruch Plan for AI comprises 10 international NGOs and 40+ exceptional advisors and team members—including former officials from the United Nations, the United States National Security Agency, the World Economic Forum, UBS, Yale, and Princeton—plus 24 contributors to a 356-page Strategic Memo of The Deal of the Century. 

Conceived and led by Rufo Guerreschi, an expert and activist in global AI governance with 20 years of experience, the Coalition was launched in September 2024 by 6 founding NGOs and has since mobilized over 2,100 hours of professional pro bono work. It was seed-funded by Jaan Tallinn's Survival and Flourishing Fund in February 2025 and bridge-funded by Ryan Kidd in October 2025.

Can We Deliver? What Have We Built?

In 15 months, on just $75,000 of total funding, we've assembled:

  • A 356-page Strategic Memo of The Deal of the Century synthesizing 667+ sources, with treaty-making and treaty enforcement analysis, and tailored persuasion strategies, philosophical and incentive profiles for each key influencer — Vance, Altman, Bannon, Thiel, Musk, Pope Leo XIV, and a dozen more —  and convergence scenarios among them.  No other organization has assembled anything comparable.

  • 85+ contacts from our October 2025 US Persuasion Tour (vs. 15-20 projected), including 23 AI lab officials at OpenAI, Anthropic, and DeepMind, plus 2 direct gateways to influencers.

  • A 100+ member coalition with experts from the UN, NSA, World Economic Forum, Yale, Princeton, and ten NGO members.

→ Full details: 2025 Achievements | 2026 Roadmap

Can We Deliver? Can We Reach and Have a Real Shot at Persuading Those Influencers?

How can a micro-organization like yours really succeed in gaining access to and persuading some of those key influencers of Trump's AI policy?

1) Internal Network. Our advisors' and members' networks are world-class, as are those of our convener organization. Our 356-page Strategic Memo contains the most comprehensive psychological and philosophical profiling ever undertaken of key AI policy influencers: 170+ pages of per-influencer analysis drawing on 667+ sources. This analysis was built by 40+ advisors from the UN, NSA, WEF, Yale, and Princeton, and field-tested through direct engagement with AI lab mid-level management and DC think tank experts during our October 2025 US tour.

2) Expanding External Network. Our October 2025 US Persuasion Tour enabled us to build 85+ high-value potential introducers to influencers, including 23 AI lab policy and strategy officials at OpenAI, Anthropic, and Google DeepMind, with direct introducer pathways to 2 of our 10 primary target influencers. For three years, we have been expanding our engagement with leading Vatican AI ethics and governance experts and organizations, organizing a series of private meetings leading up to a major one on June 4-5, 2026, in Rome, at Palazzo Falletti.

3) Rufo Guerreschi. Over 10 years, our executive director Rufo Guerreschi has demonstrated exceptional strategic partnership-building and evangelism for sweeping global digital governance initiatives. He attracted and convened world-leading experts and IT organizations as partners of the Trustless Computing Association (TCA) and as members of the Coalition. Through his Free and Safe in Cyberspace seminar series, held on 3 continents with 130+ exceptional participants (including the First US Cyber Coordinator and the UN Special Rapporteur on Privacy), he worked to create a new global treaty organization for ultra-secure IT, the Trustless Computing Certification Body, and got top world leaders in IT security to discuss and engage with his proposal (see the 2015 video of the 1st edition). Over 20 years, he built profound relationships across the national security establishment, including the CIA (TCA's startup TRUSTLESS.AI was incubated in its MACH37 accelerator), the NSA, the DoD, and the State Department. This is crucial: he can write and speak about the trustworthiness of treaty enforcement mechanisms with far deeper knowledge than the large majority of academics writing on these matters.

4) Engaging Elites. Our executive director has combined such skills with the ability to engage at the elite level. Brought up in a well-to-do family in Rome, the Emerald Coast and Miami, he was the Italian Golf Champion (2nd under 16, 1986), which could turn out to be relevant in engaging Trump's close circles. He can relate to the ambience of powerful and wealthy circles and maintains Mar-a-Lago-adjacent connections through his six years in Miami and as an active, highly connected Palm Beach patron.

5) Small but Nimble, and Able to Grow Fast. While we are a tiny organization and the task is extremely ambitious, for this kind of task a micro-organization with decent funding can do as well as or even better than larger ones, given its ability to operate with more agility. And given the nature of the challenge and its largely-neglected chokepoint, our minuscule organization has a real chance at outsized impact — like David's precisely-targeted shot at Goliath. Hiring 2-3 staff would enable us to leverage our 356-page treasure trove within weeks to increase our impact, convince 2-3 key influencers, and set the snowball rolling.

What You Can Do

  • Endorse or advise. Lend your name or expertise to strengthen our credibility with influencers and funders, via a testimonial, by signing our open call, or by applying to join as an advisor. (Join)

  • Introduce Us. If you have a connection to any of our target influencers or their circles, a warm introduction is the single highest-value contribution you can make. (Join)

  • Donate or Introduce Us to Donors. We operate at ~$7,500/month with zero overhead, and are currently in urgent need of bridge funding while institutional grants are processed. (Donate)

A Final Appeal

The political window is opening. The decisions made in the next 12-18 months — by a handful of people, most of whom you could name — will shape the trajectory of all sentient life.

Your technical and governance work built the foundations. Help us build the political will to use them.

The challenge is enormous. But given this largely-neglected chokepoint, our small organization has a real chance at outsized impact. Success is uncertain — but how can we find peace, or look our children in the eyes, if we don't at least try?

Let's strive together to ensure AI turns out to be humanity's greatest invention rather than its worst, and last.

Rufo Guerreschi

I am a lifetime activist, entrepreneur, and researcher in the area of digital civil rights and leading-edge IT security and privacy – living between Zurich and Rome.

https://www.rufoguerreschi.com/