Open Letter to Sam Altman
[As sent via email to Altman and OpenAI on Oct 1st, 2025]
To the Board of OpenAI,
To the Policy Team of OpenAI,
Dear Mr Altman,
As the founder and head of the Coalition for a Baruch Plan for AI, an NGO promoting sensible global governance of AI, I have been following nearly every one of your public statements since the launch of ChatGPT in November 2022.
I understand your predicament, your immense responsibility, and how you arrived at your current strategic approach. Until last year, you warned louder than anyone of the existential risks of unregulated AI, called for global governance of AI based on subsidiarity, and even proposed a democratic global AI convention. Since then, you have grown deeply and understandably skeptical of the prospects of a sane and effective global governance of AI, given the inaction of world leaders.
Furthermore, the behavior of world leaders and geopolitics has increased concerns that a global governance system, bold enough to prevent Superintelligence and other AI-driven catastrophic risks, could turn into an immense power in the hands of a few unaccountable, irresponsible, or authoritarian individuals or entities.
Perhaps, you are also concerned that a global AI treaty could entrench a "luddite" reaction, locking humanity away from much of the astounding potential benefits and scientific progress of AI.
In this context, the best you can do for the well-being of humanity and sentient beings is to try your best to build a SeedAI that will lead to a Superintelligence that will somehow turn out positive.
Yet, you are forced to do so in haste, before others do, pursuing ever more powerful AI, especially one capable of scientific innovation and leading-edge AI research.
These enormous negative external pressures on the OpenAI R&D process make it even more difficult and unlikely that values and objectives instilled in such a SeedAI will stick.
Additionally, it is highly uncertain whether an ASI will, for whatever reason, foster our well-being, whether it will possess some form of valuable consciousness, and whether that consciousness will be a happy one.
Hence, you find yourself in a predicament: while you appear to most to be the master designer of the future of AI and humanity, you are likely without any real agency, doing what you must while the dynamics of an essentially anarchic world lead us into the huge gamble of building an ASI, and doing so in haste, an ASI that could kill us all, feel nothing, or feel immense pain.
As you have pointed out, we may well already be beyond the "event horizon" of ASI, meaning that even if an AI treaty were put in place in the short term, however strong, it would be unlikely to prevent ASI from being developed by someone at some point.
However, we may still have some time to at least try a more sensible last-ditch strategy.
There may still be a few months during which a critical mass of key potential influencers of Trump's AI policy could lead him to co-lead with Xi a suitable, bold, and timely global AI treaty, as we argue in our new Strategic Memo of the Deal of the Century, the product of six months of work by 20 contributors.
The Memo analyzes in fine detail why and how those influencers, who include you, Vance, Amodei, Bannon, Pope Leo XIV, Sacks, Rogan, and Carlson, could come together on a joint pitch to Trump on pragmatic terms, and it explores 17 reasons why Trump could be swayed.
We are not asking you to stop or pause, but rather to dedicate yourself to promoting such a treaty-making process. We are proposing that, even as you continue your race, as you inevitably must, you, your board, and your policy staff decisively explore this possibility.
We propose that you engage with us and/or some of those other influencers (and others) to explore whether a last-ditch attempt to mitigate the immense risk of ASI may be possible, while also averting other risks that are as grave or graver.
We will soon travel to the Bay Area (Oct 3-9) and then to Washington (Oct 9-16). I would gladly meet privately with you, or with relevant officials from your board or policy team, to explore this possibility.
Warm regards,
Rufo Guerreschi
Founder and Director of the Coalition for a Baruch Plan for AI