Open Letter to Sam Altman

[As sent via email on Oct 3rd, 2025]

Subject: A Possible Better Alternative to the Enormous Gamble of ASI

To: Mr. Sam Altman, OpenAI's Board, OpenAI's Policy Team, Mr. Jack Clark

Dear Mr. Altman,

As the founder and director of the Coalition for a Baruch Plan for AI, an international NGO promoting a sane global governance of AI, I have been following nearly all your public statements since November 2023.

I understand your predicament, your immense responsibility, and how you arrived at your current strategic approach. Until last year, louder than anyone, you warned of the existential risks of unregulated AI. You called for a bold global governance of AI based on the subsidiarity principle, and even proposed a democratic global AI convention inspired by the US Constitutional Convention.

Since then, given the dismal inaction of world leaders, you have grown deeply and understandably skeptical of the prospects of a sane and effective global governance of AI.

Furthermore, the behavior of those same world leaders, and of geopolitics at large, has heightened concerns that a global governance regime bold enough to prevent Superintelligence and other AI-driven catastrophic risks could concentrate immense power in the hands of a few unaccountable, irresponsible, or authoritarian individuals or entities.

Perhaps you are also concerned that a global AI treaty could entrench a "luddite" reaction, locking humanity away from many of the astounding potential benefits and much of the scientific progress of AI.

In this context, you have concluded that the best you can do for the well-being of humanity and sentient beings is to build a SeedAI that will, somehow, lead to a Superintelligence that turns out positive.

Yet you are forced to do so in haste, before others do, and under immense competitive pressure. That pressure pushes you to direct your resources toward extremely dangerous AI-assisted and autonomous AI capability research, while sacrificing precaution and safety.

These enormous negative external pressures on OpenAI's R&D process make it even less likely that whatever values and objectives you manage to instill in such a SeedAI will stick, as an ASI will rewrite itself at an ever-accelerating rate.

Additionally, it is highly uncertain whether an ASI would somehow foster our well-being, whether it would have any form of consciousness, and whether that consciousness would be a happy one.

Hence, you find yourself in a predicament: while you appear to most to be the master designer of the future of AI and humanity, you are likely left with little real agency. You are doing what you can and must, while the dynamics of an essentially anarchic world drive us toward the huge gamble of building an ASI in haste: an ASI that could kill us all, feel nothing, or feel immense pain.

A Last-Ditch Attempt at a Timely, Bold and Suitable Global AI Treaty

As you have pointed out, we may well already be beyond the "event horizon" of ASI, meaning that even an AI treaty put in place in the short term, however strong, would be unlikely to prevent ASI from being developed by someone at some point.

However, we may still have some time to avert this immense gamble and empower humanity to steward AI, widely and responsibly, toward an astoundingly positive outcome for sentient beings.

There may still be a few months during which a critical mass of key potential influencers of Trump's AI policy could lead him to co-lead a suitable, bold, and timely global AI treaty with Xi, as we argue in our Strategic Memo of the Deal of the Century, a six-month, 20-contributor, 130-page effort that we just published.

The Memo analyzes in fine detail why and how those influencers, who include you, Vance, Amodei, Bannon, Pope Leo XIV, Sacks, Hassabis, Gabbard, Rogan, and Carlson, could come together on a joint, pragmatic pitch to Trump, much as Oppenheimer and Acheson persuaded Truman in 1946 to propose the Baruch Plan, and it explores 17 reasons why Trump could be swayed.

As we detail in our Strategic Memo, Steve Bannon sees the race as leading to technofeudalism, bringing "the Apocalypse", and has called for an immediate AI treaty. JD Vance has acknowledged the AI 2027 report and wants Pope Leo XIV to provide moral leadership on AI. Pope Leo XIV opposes the creation of digital consciousness and seeks to preserve human dignity. Joe Rogan and Tucker Carlson publicly fear an AI catastrophe and could carry the message to Trump's base. David Sacks has stated recently that he thinks "all the time" about the risk of losing control over AI. Amodei has consistently warned of a 10-20% risk of extinction and of the unacceptability of creating AI systems that we currently cannot understand or interpret. Hassabis has stated that the risk of losing control of AI does not let him sleep well at night, and he has consistently called for strong global AI governance, while Google's owners could possibly be persuaded.

Furthermore, 78% of US Republican voters believe artificial intelligence could eventually pose a threat to the existence of the human race, and 77% of US voters support a strong international treaty for AI.

History Awaits You

The window is narrow. Trump will meet Xi on October 31st, and then again in early 2026. But with your technical and moral credibility and the right coalition, we can shift from racing toward extinction to building a framework for beneficial AI that serves all humanity.

We are not asking you to stop or pause. We are asking you, your board, and your policy staff to decisively explore this possibility by reviewing our Strategic Memo, even as you continue your race, as you inevitably must. We invite you to review the specific sections of the Memo where we:

  • List 17 reasons why Trump could decide to co-lead a global AI treaty with Xi (pp. 36-39) 

  • Describe a treaty-making process that could succeed (pp. 18-24)

  • Describe treaty enforcement mechanisms that could reliably prevent both ASI and global authoritarianism (pp. 24-27)

  • Analyze how several unsuspected potential influencers of Trump's AI policy are much closer than you might think to joining a critical mass that could persuade Trump to pursue a global AI treaty (pp. 40-55)

  • Argue in fine detail for a re-evaluation of the balance of probabilities among the deeply uncertain and competing risks that likely underlies your current strategic positioning (pp. 15-17 and pp. 84-91)

In our earlier Case for a Coalition for a Baruch Plan for AI (v.1) (pp. 68-75), we outline how a federal "Global AI Lab", resulting from such a treaty, would preserve sizeable innovation capability and decision-making power for your firm, as well as value appreciation for your shareholders.

We propose that you engage with us and/or some of the influencers named above, or others, to explore whether a last-ditch attempt to mitigate the immense risk of ASI may be possible, while also averting other grave or more severe risks.

We will travel next week to the Bay Area (Oct. 3-9), then to Washington (Oct. 9-16), and then to Mar-a-Lago (Oct. 16-24). I would be happy to meet privately with you or with relevant members of your board or policy team to explore this possibility.

Warm regards,

Rufo Guerreschi,
Founder and Director of the Coalition for a Baruch Plan for AI
