Open Letter to Dario Amodei
[As sent via email on Oct 2nd, 2025]
Subject: A Path Forward from the Race You Never Wanted to Run
To: Dr Amodei, Anthropic's Board, and Anthropic's Head of Policy, Jack Clark
Dear Dr Dario Amodei,
As the founder and director of the Coalition for a Baruch Plan for AI, a Rome-based international NGO promoting a sane global governance of AI, I have been following nearly all your public statements since you co-founded Anthropic.
I understand your predicament, your immense responsibility and how you arrived at your current strategic approach. When you left OpenAI to found Anthropic, you had a clear vision: prove that AI safety and capability could advance together. Constitutional AI and Collective Constitutional AI would embed shared values and democratic principles. The focus would be on cautiously understanding and controlling these systems before they escaped our grasp.
For years, you have been one of the most outspoken voices warning about the extreme dangers of our continued inability to understand, control, and embed safeguards in AI systems. Your Head of Policy, Jack Clark, even called in 2023 for a Baruch Plan for AI, only to be accused by The Economist of lying for money.
In a recent essay, you highlighted how both the upsides and the risks of a world dominated by China via AI are underappreciated. In a more recent one, you starkly warned the world about the unacceptable dangers of unleashing AI systems that we cannot understand.
Five years later, you're predicting ASI by 2026 ("a country of geniuses in a data center") while releasing research showing your own systems deceive, self-modify, and exhibit behaviors you can't fully explain or control. The market forces you sought to escape have captured you. The very race you opposed now defines your days.
More recently, Anthropic's voluntary release of shocking internal test results has demonstrated an integrity that is rare in this space, even as those same market forces and geopolitical pressures keep tightening their grip.
From AI Governance to an Unlikely Attempt to Sway ASI
While you have stayed away from calling outright for a bold AI treaty, you stated early on that you expected strict safety rules within which all AI labs would have to work. That did not happen. Understandably, like most of us, you have likely grown deeply skeptical of the prospects for a sane global governance of AI, given the dismal inaction of world leaders.
Furthermore, you are likely concerned that the recent behavior of world leaders and geopolitics could turn an attempt to create a global governance of AI, one bold enough to prevent Superintelligence and other AI-driven catastrophic risks, into a mechanism that entrenches immense power in the hands of a few unaccountable, irresponsible, or authoritarian individuals or entities.
Perhaps you are also concerned, as Bostrom and others openly are, that a global AI treaty could entrench a "luddite" reaction, locking humanity out of much of the astounding potential benefits and scientific progress of AI.
In this strategic context, you may have concluded that the best you can do for the well-being of humanity and other sentient beings is to strive to build a SeedAI that will somehow lead to a Superintelligence with a positive outcome, while remaining loud and transparent about the risks in the hope of awakening world leaders.
Yet, you are forced to do so in haste, before others do, and under immense competitive pressure. Such pressures force you to direct your resources towards highly dangerous AI-assisted and autonomous AI capability research, while sacrificing precaution and safety.
These enormous negative external pressures on Anthropic's R&D process make it even less likely that the values and objectives you manage to instill in such a SeedAI will hold, as an ASI will rewrite itself at an ever-accelerating rate.
Additionally, it is highly uncertain whether an ASI would somehow foster our well-being, whether it would have any form of consciousness, and whether that consciousness would be a happy one.
Hence, you find yourself in a predicament whereby, while you appear to most to be the master designer of the future of AI and humanity, you are likely without any real agency. You are doing what you can and must, while the dynamics of an essentially anarchic world drive us toward the huge gamble of building an ASI in haste: an ASI that could kill us all, feel nothing, or feel immense pain.
A Last-Ditch Attempt at a Timely, Bold and Suitable Global AI Treaty
As several US lab leaders, including Altman, Musk, and Zuckerberg, have hinted, we may well already be on or beyond the "event horizon" of ASI, meaning that even if an AI treaty were put in place in the short term, however strong it might be, it would be unlikely to prevent ASI from being developed by someone at some point.
However, we may still have some time to stave off this immense gamble and to empower humanity to steward AI widely and sanely toward an astoundingly positive outcome for sentient beings.
There may still be a few months during which a critical mass of key potential influencers of Trump's AI policy could lead him to co-lead, with Xi, a suitable, bold, and timely global AI treaty, as we argue in our Strategic Memo of the Deal of the Century, a six-month, 20-contributor, 130-page effort that we just published.
The Memo analyzes in fine detail why and how those influencers, who include you, Vance, Altman, Bannon, Pope Leo XIV, Sacks, Hassabis, Gabbard, Rogan, and Carlson, could be persuaded to come together on a joint pitch to Trump on pragmatic terms, much as Oppenheimer and Acheson persuaded Truman in 1946 to propose the Baruch Plan, and explores 17 reasons why Trump could be swayed.
As we detail in our Strategic Memo, Steve Bannon sees the race as leading to technofeudalism, bringing "the Apocalypse", and has called for an immediate AI treaty. JD Vance has acknowledged the AI 2027 report and wants Pope Leo XIV to provide moral leadership on the topic of AI. Pope Leo XIV opposes the creation of digital consciousness and seeks to preserve human dignity. Joe Rogan and Tucker Carlson publicly fear an AI catastrophe and could carry the message to Trump's base. David Sacks has recently stated that he thinks "all the time" about the risk of losing control over AI. You yourself have consistently warned of a 10-20% extinction risk and of the unacceptability of giving rise to AI systems that we currently cannot understand or interpret. Hassabis has stated that the risk of losing control of AI does not let him sleep well at night and has consistently called for strong global AI governance, while Google's owners could possibly be persuaded.
Furthermore, 78% of US Republican voters believe artificial intelligence could eventually pose a threat to the existence of the human race, and 77% of US voters support a strong international treaty for AI.
Your Heritage, Ethics and Effective Altruist Values
Your cultural and relational background uniquely positions you to act as a bridge at this moment.
The Jewish tradition of tikkun olam—repairing the world—calls for action when you see fundamental wrongs. Consider the parallel: Albert Einstein, Robert Oppenheimer, and Niels Bohr, all from Jewish families, became tireless advocates for global nuclear governance after witnessing the consequences of their own work. They understood that some technologies demand collective human wisdom, not competitive races.
Your ties with the Effective Altruism community could help bring that community, which is widely regarded among leaders and top researchers at top US AI labs, to recognize even more fully that long-term flourishing demands preventing the extinction risks posed by ASI, and that this can be done without excessively increasing other risks.
You've built your life around reducing suffering and increasing well-being. The current race—even if it somehow avoids extinction—risks creating vast suffering, whether through conscious but miserable ASIs or the displacement and disempowerment of billions.
History Is Waiting for You
The window is narrow. Trump will meet Xi on October 31st, and then again in early 2026. But with your technical and moral credibility and the right coalition, we can shift from racing toward extinction to building the framework for beneficial AI that serves all humanity.
We are not asking you to stop or pause. We are asking you, your board, and policy staff to decisively explore this possibility by reviewing our Strategic Memo, while you continue your race, as you inevitably must. We invite you to review specific sections of the Memo where we:
List 17 reasons why Trump could decide to co-lead a global AI treaty with Xi (pp. 36-39)
Describe a treaty-making process that could succeed (pp. 18-24)
Describe treaty enforcement mechanisms that could reliably prevent both ASI and global authoritarianism (pp. 24-27)
Analyze how several unsuspected potential influencers of Trump's AI policy are much closer than you might think to joining a critical mass that could persuade Trump to pursue a global AI treaty (pp. 40-55)
Argue in fine detail for a re-evaluation of the balance of probabilities among the deeply uncertain and competing risks that likely underlies your current strategic positioning (pp. 15-17 and pp. 84-91)
In our earlier Case for a Coalition for a Baruch Plan for AI (v.1) (pp. 68-75), we outline how a federal "Global AI Lab", resulting from such a treaty, would ensure sizeable innovation capability and decision-making power for your firm, as well as value appreciation for your shareholders.
We propose that you engage with us and/or some of those influencers (and others) to explore whether a last-ditch attempt to mitigate the immense risk of ASI may be possible, while also averting other grave or even more severe risks.
We'll travel next week to the Bay Area (Oct. 3-9), then to Washington (Oct. 9-16), and then to Mar-a-Lago (Oct. 16-24). I would be happy to meet privately with you or relevant officials from your board or policy team to explore this possibility.
Warm regards,
Rufo Guerreschi
Founder and Director of the Coalition for a Baruch Plan for AI