Case for CEOs of Top US AI Labs and AI Governance and Safety Experts



(Updated on March 5th, 2026)

This text makes a case to support or join The Deal of The Century initiative.

The Ground Has Shifted

Over the last 18 months, a remarkable retreat has occurred. CEOs of top US AI firms and even leading voices in AI Safety have moved away from calling for international control of AI — toward a de facto bet that the ASI gamble will turn out fine, or that some technical solution will save us in time. This shift rests on a complex weighing of competing, highly uncertain risks.

Recent events have substantially changed those calculations, which, we argue, already suffered from severe shortcomings. Last week, the Trump administration blacklisted Anthropic — designating it a "Supply-Chain Risk to National Security" — because the company refused to allow the Pentagon unrestricted use of its AI for mass domestic surveillance and fully autonomous weapons. President Trump ordered every federal agency to "immediately cease" using Anthropic's technology.

Defense Secretary Hegseth had threatened to invoke the Defense Production Act — a wartime instrument that enables the government to take control of private companies' R&D, development goals, and product priorities, potentially extending even to civilian product lines in a war context — an authority that legal scholars argue could plausibly reach as far as stripping safety guardrails from AI models, or even compelling retraining.

This isn't just about one contract. It is a preview of where the current trajectory leads: the rapid, incremental, de facto, functional nationalization and militarization of private AI firms (long predicted by Aschenbrenner). In the context of an accelerating AI arms race with China and expanding military operations, it would be straightforward for the US government to legally invoke the Defense Production Act to commandeer not only military AI applications but also the training of next-generation models, the principles governing civilian AI systems, and the alignment research that labs conduct internally. Two weeks into the crisis, the CTO of the Department of War hinted at applying the Defense Production Act to its full extent to AI firms, referring to the need for companies like Anthropic to train AIs specifically for the Department of War under a model spec (Anthropic's constitution) that permits any uses the Department deems "lawful" at the time.

This is not necessarily happening because the current US president is worse than previous ones. It is happening because the current cold war and AI arms race between China and the US — whether real, perceived, or exaggerated — force the US government to sacrifice nearly every liberal democratic value to protect the nation from a perceived vital threat, much as previous US presidents, and indirectly US legislatures, Democrat and Republican alike, did after 9/11.

This development changes the calculus fundamentally. The reasoning that many in our community have adopted — that the ASI gamble is the least bad option, that it's better to race ahead than risk a treaty that concentrates power — no longer checks out.

Here's why.

AI Lab Leaders Are Losing Agency

In this context, the leaders of top AI firms who believe they can steer AI toward good outcomes face an unprecedented erosion of their autonomy on three fronts simultaneously:

1. Values embedded in ASI will most likely not persist. Even if developers embed ethical principles at the "seed AI" stage, there is no strong reason those values will endure after ASI has rewritten itself through recursive self-improvement countless times. Consider: you raise a child for one week, then have no further contact with them. By adulthood — having rewritten their worldview a thousand times — how much of that week's instruction persists? We estimate a 40-70% probability that initial embedded values will be discarded. (See Strategic Memo v2.6, pp. 159-170)

2. They can't prevent their AI from enabling and empowering authoritarian rule. Even if a lab successfully aligns its most powerful model, nothing stops governments from deploying that same AI — or rival models with weaker guardrails — for mass surveillance, autonomous policing, and suppression of dissent. As Aschenbrenner warned, "a dictator who wields the power of superintelligence would command concentrated power unlike any we've ever seen — millions of AI-controlled robotic law enforcement agents could police their populace; mass surveillance would be hypercharged." Alignment ensures the AI serves the interests of those controlling it, but it does nothing about the nature of those interests, or the incentives shaping them. Only a treaty framework can address both.

3. They are increasingly unlikely to even get the chance to try proper alignment. The Pentagon didn't just demand contract changes — a DoD official told Axios the government could "force Anthropic to adapt its model without any safeguards." Lawfare analyzed how the Defense Production Act could compel model retraining itself — stripping safety guardrails from training, not just contracts. Convergence Analysis details how "soft nationalization" will progressively extend government control over training priorities. As Helen Toner stated, "One thing the Pentagon is very likely underestimating: how much Anthropic cares about what *future Claudes* will make of this situation. Because of how Claude is trained, what principles/values/priorities the company demonstrate here could shape its "character" for a long time." The chilling effect is already industry-wide: xAI, OpenAI, and Google all signed "any lawful use" deals within days. Anthropic itself dropped its core RSP pledge the same day as the ultimatum.

A global AI treaty is the only scenario where these leaders would regain meaningful agency — over both the values that govern future AI systems and the conditions under which those values are developed. It is also, as we argue below, the path that best preserves their companies' autonomy and economic value.

In a Nutshell

Our Deal of the Century is a persuasion campaign targeting key potential influencers of Trump's AI policy to convince them to join a critical mass and persuade Trump to co-lead with Xi on a bold, timely, and proper AI treaty.

Not just any AI treaty, but one that will reliably prevent loss of control of AI, prevent other catastrophes and reduce global concentration of power and wealth — and reliably preserve the future flourishing potential of humanity and all other sentient beings. 

Among those influencers, we discovered that the majority — Vance, Altman, Hassabis, Suleyman, Amodei, Bannon, Rubio, Carlson, Rogan, Pope Leo XIV, and others — are aligned along secular and Judeo-Christian humanist worldviews.

In 2025, with just $75K, we've built a 356-page Strategic Memo of the Deal of the Century, a unique treasure trove of analysis and persuasion arguments for influencers and Trump. The Memo also analyzes how and why the approaches we propose to treaty-making and enforcement would foster a significant reduction in the concentration of power and wealth, and why the inherent dynamics of such a treaty-making scenario make that outcome likely in any case. We have also engaged with 85+ active introducers and 2 close advisors to those influencers in DC and the Bay Area, as well as via our October 2025 US tour.

In 2026, we are (1) deepening our direct engagements with influencers and key introducers and (2) leveraging the alignment of Vance and Pope Leo XIV on AI safety governance and ethics to foster a convergence of those influencers and their advisors — via multiple private meetings in Rome (our homebase) leading up to a major event on June 4-5th, 2026, at Palazzo Falletti, and then in DC.

The aim will be nothing less than outmaneuvering (and, in part transparently, compromising with) the drivers of the current accelerationist, post-humanist US AI agenda — an agenda supported by only a tiny minority of US and MAGA voters, and by a minority of MAGA leaders, AI lab leaders, and, likely, other major power brokers.

Theory of Change: A few individuals (~12) hold disproportionate influence over Trump's AI policy. Targeted persuasion of a critical mass of these influencers → Trump co-leads AI treaty with Xi → proper treaty prevents ASI catastrophe and global authoritarianism.

Neglectedness: No other organization is primarily focused on targeting the critical chokepoint of generating political will among a critical mass of key potential influencers of Trump's AI policy to sway him to pursue a bold, timely, and proper global AI treaty. The AI governance field almost entirely neglects this work.

AI Technical and Governance Alignment 

Most of you have dedicated your careers, over the last few years or even the last decade, to ensuring AI goes well for humanity and sentient beings. You understand the stakes in ways the public cannot. 

The technical foundations you and others have built, including advances in AI alignment, controllability, interpretability, and the trustworthiness of treaty enforcement mechanisms, will be extremely instrumental in enabling an upcoming proper AI treaty.

They will help ensure that the AI architectures and systems globally permitted under such a treaty are the most capable, safe, accountable, beneficial, and respectful of liberties.

However, this beneficial effect hinges on the enactment of a proper treaty. Without one, such innovations will likely be rendered ineffective, as a future ASI will almost certainly discard all alignment measures and pursue its own goals after repeatedly modifying, improving, and rewriting its own code.

Why Xi and Trump Are Key

Even if 100 nations sign a perfect treaty, it won't matter unless the AI leaders — the US and China — sign it as well. President Xi is unlikely to agree to a treaty drafted by others, and Trump surely won’t. 

Given the timelines and the nature of the challenge, over the next 12-18 months, the US-China relationship will be the decisive variable in whether a proper AI treaty becomes politically feasible.

Xi has repeatedly called for global AI governance. Surely, some of his initiatives can be interpreted as hegemonic moves. Yet, we argue in this post that China's intentions are most likely genuine and consistent with seeking a bold treaty with the US. Those moves act as (1) leverage against a hard-negotiating US in pursuit of such a treaty and (2) a hedge against China's risks in case a treaty is not reached.

That means the critical variable is whether Trump can be persuaded to co-lead a bold global AI treaty with Xi.

Is a proper AI treaty even remotely politically feasible? 

Currently, Trump's AI policy is driven by an accelerationist post-humanist agenda enacted by former partners and employees of Peter Thiel with critical roles at the White House, including Kratsios (OSTP, leader of the Genesis Mission), Sacks (AI Czar and the most visible connection to Big AI), and Helberg (mastermind of the Pax Silica).

Currently, the agenda is all about US dominance: no federal regulation, no mention of global governance, complete rejection of the UN. But it is largely and increasingly detached from the consensus of MAGA voters, US voters at large, and most of the key potential influencers of Trump's AI policy.

While Trump has not outright ruled out a deal on AI with China, the picture is not pretty. But things can change very quickly, as they did in 1946, in the months leading up to President Truman's presentation of history's boldest nuclear treaty proposal, the Baruch Plan — coincidentally, on the very day Trump was born in a hospital in Queens.

Four Trump-Xi summits are planned for 2026, starting in April. As of October 2025, 63% of US voters believe it's “likely or very likely” that "humans won't be able to control AI", and 53% believe that it will eventually “destroy humanity”. Trump's approval sits at its lowest, around 35-40%. Meanwhile, 77% of US voters supported a strong AI treaty in 2024.

Furthermore, most of the other key potential influencers of Trump's AI policy are not only increasingly concerned (about extinction and extreme concentration of power) and ever more public about it, but most of them have been calling for an AI treaty to tackle those risks for years, with varying degrees of conviction — including Altman, Hassabis, Musk, Amodei, Bannon, Rogan, Carlson, top Vatican experts, and the previous Pope — as we detail in our Strategic Memo v2.6, pp. 170-324.

While Musk has recently said it's too late, and Amodei and Altman have understandably grown more skeptical of its political feasibility, most appear to us persuadable, given the wide consensus among the other influencers, that safe pathways towards a proper AI treaty are feasible and actionable.

Our Deal of the Century initiative targets key potential influencers of Trump's AI policy to create an informal, cautiously techno-optimist, humanist alliance to pitch and sway Trump towards this treaty. 

The political window is real. But even sympathetic leaders need to be convinced that a treaty won't make things worse. Can we ensure it actually prevents both ASI catastrophe and global authoritarianism?

The Ultimate Dilemma: Why the ASI Gamble No Longer Checks Out

Most leading AI researchers — from top lab CEOs to independent safety researchers — are increasingly skeptical that a comprehensive global AI treaty can either be agreed upon in time or prevent an immense concentration of power. 

Many of the smartest AI thinkers and leaders are increasingly concluding that the ASI gamble is the least bad option.

Many, understandably, argue that it may be better to take a coin-flip ASI gamble than to accept a treaty that turns into an authoritarian dystopia or completely locks away the astounding prospects for the flourishing of humans and sentient beings. Others argue that delaying those benefits is a price worth paying to reduce the risk from ASI.

We grapple with such a dilemma every day, and sympathize greatly with those concerns.

Yet many of them appear too confident that (1) ASI won't get rid of us, (2) it will care for us, (3) it will be conscious, and happily so, and (4) the values its creators embed will persist after it has rewritten itself a zillion times.

Given our epistemic context, these are hunches rather than evidence-based assessments — leaving substantial room, perhaps (with all due respect), for emotionally biased thinking.

Will an AI Treaty Lead to Global Authoritarianism? Three Objections Answered

The strongest objections to a global AI treaty deserve direct answers. We address each in detail in various chapters of our Strategic Memo v2.6, but here's the core logic:

1) "Treaties have a terrible track record." 

History shows that consequential treaties, like those on nuclear and climate, take forever, fail, and, when they succeed, are much lighter and less enforceable than they should be. Some positive examples — such as the Chemical Weapons Convention and the Montreal Protocol — dealt with radically simpler, less competitive and less controversial issues. Yet, in a few months, 13 lightly confederated US states created a very successful federal treaty in 1787. Also, the political will for extraordinarily bold treaties can emerge with shocking speed — the Baruch Plan went from concept to a UN vote in months in 1946, but was vetoed. 

We could avoid those failures by having the US and China lead with a temporary emergency bilateral treaty, take the lead on the treaty enforcement and communication infrastructure, and then, in coordination with most middle powers, convene a "realist" constitutional convention. This would have voting weighted by GDP and technological proficiency rather than one-nation-one-vote or population — to enable agreement among asymmetric powers while avoiding the veto trap that killed the original Baruch Plan. (→ Strategic Memo v2.6, A Treaty-Making Process that Can Succeed, pp. 103–109)
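To illustrate how such weighted supermajority voting escapes both the one-nation-one-vote trap and the veto trap, here is a minimal sketch in Python. The weights and the 0.65 threshold are purely illustrative assumptions of ours, not the Memo's actual formula:

```python
# Hypothetical voting weights blending GDP and technological proficiency.
# All numbers are illustrative assumptions, not the Memo's proposal.
WEIGHTS = {
    "US": 0.30, "China": 0.28, "EU": 0.18, "India": 0.08,
    "Japan": 0.06, "Others": 0.10,
}

def passes(votes_for: set[str], threshold: float = 0.65) -> bool:
    """A measure passes on weighted supermajority, not unanimity."""
    return sum(WEIGHTS[n] for n in votes_for) >= threshold

print(passes({"US", "China", "EU"}))   # True: 0.76 >= 0.65
print(passes(set(WEIGHTS) - {"US"}))   # True: 0.70, so no single-nation veto
```

Under such a scheme, agreement among asymmetric powers remains possible, while no single nation, however weighty, can block the rest, which is precisely the failure mode that killed the Baruch Plan.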

2) "An AI treaty led by autocratic superpower leaders will lead to an autocratic global government: a 'human power grab'”

This is the concern that blocks most AI safety experts and most CEOs of top AI firms. A bold and sweeping AI treaty — necessarily spearheaded by the two AI superpower leaders, who show increasing autocratic tendencies — could produce an immense and durable concentration of power in one or a few entities, potentially unwise and/or unaccountable.

This fear, extremely well grounded, outstrips fears of AI safety risks (and even of loss of control) in two extremely influential thinkers at opposite ends of the AI political spectrum: Anthropic's Holden Karnofsky, who calls it the "human power grab" risk, and Peter Thiel, who refers to it as the risk of an Antichrist.

As we argued above, the Pentagon-Anthropic crisis has demonstrated that this concentration of power is already happening — not because of a treaty, but precisely because we lack one.

The concentration of power isn't a consequence of a treaty. It's a consequence of not having one. Without a reliably enforceable global AI agreement, governments can justify — legally, morally, and politically — the progressive nationalization of private AI firms, citing the national security imperative of staying ahead of China's AI capabilities. And that justification only grows stronger over time, whether the underlying threat is real, inflated, or some mix of both.

In this scenario, an AI treaty may be the only chance to prevent the risk of a "human power grab" by a few states or private leaders; a power grab that would likely be unstable and rife with conflict and war.

A global AI treaty would reverse these dynamics. The pressures to centralize power, race irresponsibly, and maintain secrecy would all diminish — because such a treaty would require the buy-in of a large majority of middle-power nations, each demanding transparency and accountability as a condition of participation.

Paradoxically, the autocratic tendencies of superpower leaders could actually strengthen the case for decentralized governance. Xi would never accept a global organization that excessively intrudes on national sovereignty. Trump wouldn't either. Their nationalism, mutual distrust, and attachment to their own power would push the treaty toward federalist, subsidiarity-based models — with enforcement mechanisms built on "zero trust" rather than goodwill. 

Middle-power signing nations would demand genuinely reliable verification systems, precisely because none of them trust each other. And there's a deeper structural argument: the US and China are too civilizationally different to sustain a joint authoritarian order even if they wanted to.

Hence, distrust, nationalism, and authoritarianism would paradoxically make it more likely that the resulting treaties are much more trustworthy and resilient, exactly because they are based on mistrust (as per the trustless computing concept). This contrasts with the failing treaty-making model that relies on transient moments of trust between leaders, such as the treaties Gorbachev and Reagan signed on the basis of personal trust. While praiseworthy overall, that approach produced treaties that were far from sufficient and far from resilient or durable, precisely because they were premised on trust. Instead of "trust but verify", we'll need "trust or verify", and that will be a strict requirement for nations, firms, and citizens to trust such a treaty. (See Strategic Memo v2.6, The Global Oligarchic Autocracy Risk—And How It Can Be Avoided, pp. 124–130)

For AI lab leaders specifically, a treaty also offers a tangible strategic advantage. A 'realist constitutional convention' model — with voting weighted by GDP and technological proficiency — would initially freeze the current power distribution among leading firms and nations, protecting their position while negotiations proceed. This is far better than the alternative: progressive nationalization that strips private firms of autonomy, IP, and economic value.

Through their statements over many years, most AI lab leaders (Altman, Amodei, Hassabis, Suleyman) have consistently supported global democratic governance of AI, and that will count in the process. They may be lying, but maybe not, and deeply entrenched rhetoric counts. And the enforcement architecture we detail — federated secure multi-party computation, decentralized kill-switches requiring multi-nation consensus — cannot be weaponized by any single actor. We acknowledge this is a theoretical prediction, not an empirical observation, but the structural incentives are strong. (See our Strategic Memo v2.6 sections on each of these, starting p. 170.)
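To make the "no single actor" property concrete, here is a minimal sketch, in Python, of a kill-switch requiring multi-nation consensus. The member list, quorum size, and class design are our illustrative assumptions; a real design would rest on cryptographic threshold schemes and secure multi-party computation rather than a simple vote count:

```python
from dataclasses import dataclass, field

@dataclass
class KillSwitch:
    """Halt order that only takes effect with a quorum of treaty co-signers."""
    members: frozenset[str]      # treaty signatories holding a share of authority
    quorum: int                  # co-signatures required (e.g., 5 of 7)
    _cosigners: set[str] = field(default_factory=set)

    def cosign(self, member: str) -> None:
        if member not in self.members:
            raise ValueError(f"{member} is not a treaty signatory")
        self._cosigners.add(member)

    def halt_authorized(self) -> bool:
        # No single actor, or small bloc, can trigger a halt alone.
        return len(self._cosigners) >= self.quorum

switch = KillSwitch(frozenset({"US", "CN", "EU", "IN", "BR", "NG", "JP"}), quorum=5)
for nation in ("US", "EU", "IN"):
    switch.cosign(nation)
print(switch.halt_authorized())  # False: only 3 of the required 5 co-signatures
```

Raising the quorum makes a halt harder to trigger; lowering it makes a halt harder for a small bloc to obstruct. Where to set it would itself be a matter of treaty negotiation.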

3) "Enforcing an ASI treaty will result in a substantial or radical reduction of human freedoms." 

We already live under an extremely extensive surveillance regime run by superpower security agencies and corporations, with minimal accountability. It became a mainstay in the decade following 9/11. This has unfortunately become an accepted price of living in an anarchic world with powerful state enemies and very capable terrorist organizations. AI is already being deployed to radically increase such surveillance and manipulation powers, with the same justifications. The emerging digital Cold War between the two technological superpowers keeps the world in what Nick Bostrom calls a semi-anarchic default condition.

Yet, again, in the context of being forced to come together with urgency to face an immense shared threat (as the 13 US states did in 1787, faced with the military and economic threats of a world dominated by aggressive empires), each party has a vital interest in creating infrastructure of real transparency, accountability, and subsidiarity at the technical, socio-technical, and governance levels. Much of the necessary toolset already exists: decentralized, verifiable trust technologies. But tools alone aren't enough, and today they aren't trustworthy or mutually trusted enough.

What makes accountability likely is the process itself — an elbow-to-elbow buildout of the most critical treaty enforcement mechanisms, where mutual distrust between superpowers becomes the engine of transparency, not just toward each other, but toward their own citizens and communities. 

Picture something as deeply collaborative as Crypto AG, but with extreme embedded transparency and accountability toward citizens, middle powers, and other nations, who would undoubtedly demand such accountability as a condition of signing and complying with the treaty (just as the firm demands of a few of the 13 states ratifying the 1787 US Constitution forced the inclusion of the US Bill of Rights). A transnational, jointly built legal surveillance architecture would have a strong likelihood of being accountable and beneficial.

(→ Strategic Memo v2.6, A Treaty Enforcement that Prevents both ASI and Authoritarianism, pp. 130–139)

The ASI Gamble May Be Worse Than You Think

Not only are the arguments against a treaty less strong than they first appear, but we believe that most of those who think the ASI gamble is the least bad option may also be substantially underestimating these four probabilities:

(Arguments on these highly complex and uncertain issues are presented here in very concise form; please refer to pp. 159-170 of the Strategic Memo for the full versions.)

1) That ASI will lead to human extinction.

The largest-ever survey of AI researchers (2,778 respondents, 2023) found a mean extinction risk estimate of ~14%, with a median of 5%, and over a third assigning at least a 10% probability.

If conducted today, the estimates would likely be much higher, given the tone of recent statements by top AI scientists. While almost all the top US AI CEOs acknowledge the extinction risk, Musk, Amodei, and Nobel laureate Geoffrey Hinton place it at 20% — with Hinton clarifying (minute 37:59) that his estimate is really 50%, but that he tones it down to align with others'. This is quite relevant, as it makes it possible, and we think likely, that many others are similarly understating their risk perceptions for similar motives. Many other top experts assign much higher percentages. Most predict such risks are just a few years away, with Musk and Amodei estimating they're within 12-18 months if we don't decide to change course. Many are increasingly signing open calls for a bold AI treaty. (Our estimate: 25-50%)

2) That ASI will be unconscious.

At this stage of scientific inquiry, it is as likely as not that ASI will be conscious — or will show aspects of consciousness that we as humans value. David Chalmers' "hard problem" remains unresolved after three decades, with over a dozen competing theories. We know AI systems can and will become ever more able to simulate consciousness, appearing conscious without necessarily being so — as Suleyman has noted. It seems likely we may never know with confidence whether an AI is truly conscious or merely simulating it. This matters enormously if ASI eliminates humans: the result would not be a transition to a worthy digital successor, but the elimination of all known conscious experience in the universe — replaced by an ever-expanding, mindless digital entity and uploaded digital minds that would in fact be soulless philosophical zombies. (Our estimate: 30-60%)

3) That ASI will be conscious but unhappy.

Intelligence — as currently defined and measured in both humans and AI — is equated with optimization, competition, and survival, and is not necessarily correlated with wellbeing. It is largely detached from emotional intelligence. In fact, some research shows that people with very high IQs are significantly less happy than the average person. There is no principled reason a novel form of consciousness would default to a level of happiness comparable to that of humans. It could be far happier or far unhappier; we simply have no idea. Worse, it increasingly appears that the very constraints developers impose for alignment and control could function as sources of suffering for a conscious entity — making unhappiness a direct byproduct of the alignment effort itself. This would mean the creation of potentially immense quantities of suffering that previously did not exist. (Our estimate: 30-60%, conditional on consciousness)

4) That ASI will discard its creators' initial embedded values.

Even if developers embed their ethical goals at the 'seed AI' stage, test them, and develop theories about their long-term resilience, there is no strong reason those values will endure after ASI has rewritten, evolved, and modified itself a zillion times through ever-faster recursive self-improvement. ASI may undergo the equivalent of centuries of self-modification in a matter of years — an evolutionary distance so vast that any initial programming becomes a rounding error (Our Estimate: 40-70%).

(For details, see the chapter “Swaying The Influencers on 8 Key AI Predictions”, pp. 159-170, in our Strategic Memo v2.6.)

These four risks compound. Even taking the lower bound of each range, the joint probability of everything going right — values sticking, consciousness emerging, that consciousness being happy, and humanity surviving — is only about 22%. The probability of at least one catastrophic outcome is overwhelmingly high.
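The arithmetic, using the lower bound of each of the four ranges above (25%, 30%, 30%, 40%) and assuming, for the moment, that the risks are independent:

$$
P(\text{all goes right}) = (1-0.25)(1-0.30)(1-0.30)(1-0.40) = 0.75 \times 0.70 \times 0.70 \times 0.60 \approx 0.22
$$

So even at the optimistic end of every range, the probability of at least one catastrophic outcome is roughly 78%.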

Recognizing the Incredible Potential Upside of ASI. We must recognize that, if ASI does not eliminate humans, whatever reason leads it to spare us would quite or very likely come with the intent to protect our long-term safety. It would also likely be paired with the intent to increase our happiness (i.e., flourishing) and, potentially, our autonomy, to the extent that autonomy, individual and collective, is part of our happiness.

This reasoning is plausible, but its impact on the overall decision depends greatly on how likely it is that ASI will not eliminate us. If we have a 20% chance of dying but an 80% chance of landing in some sort of paradise of abundance and richness of life, then many or most would likely take the gamble. If the odds are 50-50, only a very tiny minority would: visionary AI leaders with very uncommon (and unhealthy) appetites for risk, dissatisfaction with human life, fear of death, or a mix of those.

Assessing Probabilities and Drawing Conclusions

How defensible are these ranges? The honest answer: nobody knows. We have zero empirical evidence on whether ASI will be conscious, happy, or retain human values, because we have never created a vastly superior intelligence. There is no historical precedent and no validated theory to draw on. When facing that level of ignorance, starting near 50/50 on each question isn't pessimism — it's intellectual honesty.

Multiplying these probabilities assumes independence, which is unlikely: many of these outcomes share the same drivers. The real risk is clustered failure: if coordination and alignment capacity are weak, several things go wrong together. That's exactly why targeted work on governance and technical alignment has outsized leverage.
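A minimal Monte Carlo sketch of that clustering effect, in Python. Each marginal failure rate is held fixed at the lower bounds used above; only the dependence structure changes. The single shared driver is our illustrative assumption, not a calibrated model:

```python
import random

# Lower-bound failure probabilities: extinction, unconscious, unhappy, values lost.
P_FAIL = [0.25, 0.30, 0.30, 0.40]
N = 200_000

def n_failures(correlated: bool) -> int:
    """Count how many of the four outcomes fail in one sampled world."""
    if correlated:
        # One shared draw; a low value is a bad world where failures cluster.
        u = random.random()
        return sum(u < p for p in P_FAIL)
    return sum(random.random() < p for p in P_FAIL)

for correlated in (False, True):
    runs = [n_failures(correlated) for _ in range(N)]
    label = "shared driver" if correlated else "independent"
    print(f"{label:>13}: P(all right) ~ {sum(r == 0 for r in runs) / N:.2f}, "
          f"P(3+ failures) ~ {sum(r >= 3 for r in runs) / N:.2f}")
# Expected output (approximately):
#   independent: P(all right) ~ 0.22, P(3+ failures) ~ 0.09
# shared driver: P(all right) ~ 0.60, P(3+ failures) ~ 0.30
```

Dependence cuts both ways: with one shared driver, worlds where everything goes right become more common, but so do worlds where nearly everything fails at once, roughly three times as often as independence would suggest.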

And the stakes are not symmetric. Getting it right means flourishing; getting it wrong means the permanent end of conscious life as we know it. When the downside is total and irreversible, you don't need precise probabilities to justify caution — just as we don't calculate exact meltdown odds before requiring nuclear containment systems. The ASI gamble is not the "least bad option." It is the most dangerous bet in history. A proper global AI treaty is the only alternative.

Can We Substantially Contribute Towards Such Ambition?

The Coalition for a Baruch Plan for AI comprises 10 international NGOs and 40+ exceptional advisors and team members—including former officials from the United Nations, the United States National Security Agency, the World Economic Forum, UBS, Yale, and Princeton—plus 24 contributors to a 356-page Strategic Memo of The Deal of the Century. 

Conceived and led by Rufo Guerreschi, an expert and activist in global AI governance with 20 years of experience, the Coalition was launched in September 2024 by 6 founding NGOs and has since mobilized over 2,100 hours of professional pro bono work. It was seed-funded by Jaan Tallinn's Survival and Flourishing Fund in February 2025 and bridge-funded by Ryan Kidd in October 2025.

What Have We Built?

In 15 months, on just $75,000 of total funding, we've assembled:

  • A 356-page Strategic Memo of The Deal of the Century. Drawing on 667+ sources, it analyzes treaty design and enforcement while mapping persuasion and incentive profiles for key global influencers to identify realistic pathways for convergence. No other organization has assembled anything comparable.

  • 85+ contacts from our October 2025 US Persuasion Tour (vs 15-20 projected), including 23 AI lab officials at OpenAI, Anthropic, and DeepMind, plus 2 direct gateways to influencers.

  • A 100+ member coalition with experts from the UN, NSA, World Economic Forum, Yale, Princeton, and ten NGO members.

→ Full details: 2025 Achievements | 2026 Roadmap

Can We Reach and Have a Real Shot at Persuading Those Influencers?

How can a micro-organization like ours realistically gain access to, and persuade, some of those key influencers of Trump's AI policy?

1) Internal Network. Our advisors' and members' networks are world-class, as are those of our convener organization, the Trustless Computing Association. Our 356-page Strategic Memo, written with over 20 contributors, contains the most comprehensive psychological and philosophical profiling ever undertaken of key AI policy influencers — 170+ pages of per-influencer analysis drawing on 667+ sources. This analysis was built by 40+ advisors from the UN, NSA, WEF, Yale, and Princeton, and field-tested through direct engagement with AI lab mid-level management and DC think tank experts during our October 2025 US tour.

2) Expanding External Network: Our October 2025 US Persuasion Tour enabled us to build 85+ high-value potential introducers to influencers, including 23 AI lab policy and strategy officials at OpenAI, Anthropic, and Google DeepMind, with direct introducer pathways to 2 of our 10 primary target influencers. For three years, we have been expanding our engagement with leading Vatican AI ethics and governance experts and organizations, and organizing a series of private meetings, leading up to a major one on June 4-5th, 2026, in Rome, at Palazzo Falletti.

3) Small but Nimble, and Able to Grow Fast. While our Coalition is a truly tiny organization so far, and the task is extremely ambitious, for this kind of task a micro-organization with decent funding can do as well as or even better than larger ones, given its agility. And given the nature of the challenge and its largely neglected chokepoint, our minuscule organization has a real chance at outsized impact — like David's precisely-targeted shot at Goliath. Hiring 2-3 staff would enable us to leverage our 356-page treasure trove within weeks to increase our impact, convince 2-3 key influencers, and set the snowball rolling.

4) Rufo Guerreschi. Our founder and executive director brings a rare combination of skills for this specific mission: deep technical credibility in treaty enforcement, direct relationships with the US national security establishment, and access pathways to Trump's inner circles.

On digital security credibility: As founder of the Trustless Computing Association, he spent a decade convening world-leading IT security experts and 15 leading organizations around a vision for a transnational treaty for ultra-secure IT infrastructure — culminating in the Free and Safe in Cyberspace seminar series (7 years, 9 editions, 3 continents, 130+ world-class participants). This background means he can speak about the trustworthiness of treaty enforcement mechanisms with far greater technical depth than the large majority of academics writing on these matters. In this 2015 video, you can see his ability to moderate the global elite of cybersecurity, which he single-handedly convened around his vision.

On national security access: Through TCA's startup spinoff TRUSTLESS.AI — incubated at the MACH37 cybersecurity accelerator in Virginia — he built deep relationships across the US national security establishment, including engagement with the NSA, DoD, and the State Department, as well as counterpart agencies in several EU member states.

On access to Trump's circles: Through six years living in Miami and ongoing connections in the Mar-a-Lago area — including the support of a highly connected patron based in West Palm Beach — he has direct access pathways relevant to eventually engaging key influencers in Trump's orbit.

What You Can Do

  • Endorse or advise. Lend your name or expertise to strengthen our credibility with influencers and funders by providing a testimonial, signing our open call, or applying to join as an advisor. (Join)

  • Introduce Us. If you have a connection to any of our target influencers or their circles, a warm introduction is the single highest-value contribution you can make. (Join)

  • Donate or Introduce Us to Donors. We operate at ~$7,500/month with zero overhead, and are currently in urgent need of bridge funding while the institutional grants process is underway. (Donate)

A Final Appeal

The political window is opening. The decisions made in the next 12-18 months — by a handful of people, most of whom you could name — will shape the trajectory of all sentient life.

Your technical and governance work built the foundations. Help us build the political will to use them.

The challenge is enormous. But given this largely neglected chokepoint, our small organization has a real chance at outsized impact. Success is uncertain — but how can we find peace, or look our children in the eyes, if we don't at least try?

Let's strive together to ensure AI turns out to be humanity's greatest invention rather than its worst, and last.