Open Letter to Demis Hassabis

[As sent via email on Oct 2nd, 2025]

Subject: How You Could Catalyze the Bold AI Treaty You've Been Calling For

To: Demis Hassabis, Larry Page, Sergey Brin, Sundar Pichai, and Allan Dafoe

Dear Dr. Hassabis,

As the founder and director of the Coalition for a Baruch Plan for AI, an international NGO promoting responsible global governance of AI, I have been closely following your public statements since you began warning about AI's existential risks.

I understand your predicament. You've been the most vocal among AI leaders in calling for strong global governance, and the only one still doing so in 2025. You've warned that "AGI is coming, society's not ready." You've called for a "technical UN" for AI safety. You've even urged like-minded nations to form a coalition that can influence global AI governance alongside the U.S. and China. Google DeepMind's July 2023 white paper, Exploring Institutions for Global AI Governance, laid out, early on, a detailed and sensible plan for international AI institutions as bold as they needed to be.

Yet, you find yourself trapped in a race for ASI while knowing the destination may be catastrophic.

Bringing All of Google Along with Your AI Vision

Other top executives at Google seem aligned with your concerns and your vision of a human-controllable and humanity-controlled advanced AI. Allan Dafoe, your Head of AGI Strategy - who has analyzed in detail the parallels with the 1946 attempt at global regulation of nuclear technology - has gone even further, stating recently that we do not really need breakthroughs in technical AI alignment: "we just need global coordination to restrict unsafe AIs". Sundar Pichai has acknowledged that the risk of AI destroying civilization is "pretty high."

Yet, at Google, Larry Page and Sergey Brin alone retain control over the Google DeepMind board and its strategic direction. Although they are largely absent from the public eye, their position appears relatively clear from their actions, if not from their words.

Larry Page, in particular, appears set to pursue all-powerful AI even at the cost of human annihilation, since he famously labeled as "speciesist" those overly concerned about human extinction risk from AI. Sergey Brin, while occasionally hinting at an acknowledgment of the safety risks, has recently returned to work full-time at Google, focused on ensuring it prevails in the AI race. Ray Kurzweil, a visionary who foresaw superintelligence and the singularity, was hired by Larry Page and remains a key AI advisor, speaking widely about his eagerness for immortality and the numerous benefits he expects ASI to bring. Page and Brin appear comfortable working towards a successor species, and with trying to get there first, while presumably recognizing the extinction risks.

As we detail in our recent blog post, Google's overall stance appears to be a soft public-relations hedging strategy, likely meant to buy the company time and goodwill while it races forward. While you seem very sincere, a “good cop, bad cop” dynamic appears to be at play, in which you get to play the good cop while the real decisions are, and will be, made elsewhere. Hence, your gestures toward treaties and safety appear, overall, more like positioning than commitment.

Persuading Page and Brin to Pursue a Global AI Treaty

Without a genuine pivot by Page and Brin, it appears that Google will remain mostly an accelerator or a bystander, not a decisive actor in fostering the bold and timely global AI treaty that you are calling for. The prospect of such a pivot depends on a path towards a global treaty-making process that is realistic, ensures proper technical guidance, does not overly restrict most transhumanist and post-humanist AI promises, and retains significant space for them to participate in decision-making.

I understand Page's and Brin's predicament, their immense responsibility, and how their current strategy of pushing ahead derives from an evolving balancing of immense competing risks in the foreseeable near-term strategic scenario.

The dismal inaction of world leaders following the 2023 UK AI Safety Summit has been disheartening, and it justifies strong skepticism about the prospects for sane, timely, and effective global governance of AI risks.

Furthermore, the behavior of world leaders and geopolitics has increased concerns that a global governance regime—bold enough to prevent Superintelligence and other AI-driven catastrophic risks—could turn into an immense power in the hands of a few unaccountable, irresponsible, or authoritarian individuals or entities. 

Perhaps Page and Brin, and you, Dr. Hassabis, are also concerned that a global AI treaty could entrench a "Luddite" reaction, locking humanity out of many of AI's astounding potential benefits and much of its scientific progress.

In this context, Page and Brin may have concluded that the best they can do for the well-being of humanity and sentient beings is to strive to build a SeedAI that will somehow lead to a Superintelligence with a positive outcome.

Yet, Google is forced to do so in haste, before others do, and under immense competitive pressure. Such pressures force you to direct your resources towards highly dangerous AI-assisted and autonomous AI capability research, while sacrificing precaution and safety. 

These enormous negative external pressures on Google DeepMind's R&D process make it even less likely that the values and objectives you manage to instill in such a SeedAI will stick, as an ASI will rewrite itself at an ever-accelerating rate.

Additionally, it is highly uncertain whether an ASI will somehow foster our well-being, whether it will have any form of consciousness, and whether that consciousness will be a happy one.

Hence, you find yourself in a predicament whereby - while you appear to most to be the master designer of the future of AI and humanity - you are likely left with hardly any agency. You are doing what you can and must, while the dynamics of an essentially anarchic world lead us to the huge gamble of building an ASI in haste - an ASI that could kill us all, feel nothing, or feel immense pain.

A Last-Ditch Attempt at a Timely, Bold and Suitable Global AI Treaty

As Altman, Musk, and Zuckerberg have indicated, we might already be at or past the "event horizon" of Artificial Superintelligence (ASI). If that is the case, even a robust, short-term AI treaty would not be able to stop ASI from being developed or arising eventually.

However, we may still have some time to stave off this immense gamble, and empower humanity to widely and sanely steward AI to an astoundingly positive outcome for sentient beings.

There may still be a few months during which a critical mass of key potential influencers of Trump's AI policy could lead him to co-lead with Xi a suitable, bold, and timely global AI treaty - as we argue in our Strategic Memo of the Deal of the Century, a six-month, 20-contributor, 130-page effort that we just published.

The Memo analyzes in fine detail why and how those influencers - who include you, Vance, Amodei, Altman, Bannon, Pope Leo XIV, Sacks, Gabbard, Rogan, and Carlson - could come together on a pragmatic joint pitch to Trump, much as Oppenheimer and Acheson persuaded Truman in 1946 to propose the Baruch Plan, and explores 17 reasons why Trump could be swayed.

As we detail in our Strategic Memo, Steve Bannon sees the race as leading to technofeudalism, bringing "the Apocalypse", and has called for an immediate AI treaty. JD Vance has acknowledged the AI 2027 report and wants Pope Leo XIV to provide moral leadership on the topic of AI. Pope Leo XIV opposes the creation of digital consciousness and seeks to preserve human dignity. Joe Rogan and Tucker Carlson publicly fear an AI catastrophe and could carry the message to Trump's base. David Sacks has recently stated that he thinks "all the time" about the risk of losing control over AI. Amodei has consistently warned of a 10-20% extinction risk and of the unacceptability of giving rise to AI systems that we currently cannot understand or interpret. You yourself have stated that the risk of losing control of AI does not let you sleep well at night, and you have consistently called for strong global AI governance, while Google's owners could possibly be persuaded.

Furthermore, 78% of US Republican voters believe artificial intelligence could eventually pose a threat to the existence of the human race, and 77% of US voters support a strong international treaty for AI.

History Is Awaiting You

The window is narrow. Trump will meet Xi on October 31st, and then again in early 2026. But with your technical and moral credibility and the right coalition, we can shift from racing toward extinction to building the framework for beneficial AI that serves all humanity.

We are not asking you to stop or pause. We are asking you, your board, and your policy staff to decisively explore this possibility by reviewing our Strategic Memo while you continue your race, as you inevitably must. We invite you to review the specific sections of the Memo where we:

  • List 17 reasons why Trump could decide to co-lead a global AI treaty with Xi (pp. 36-39) 

  • Describe a treaty-making process that could succeed (pp. 18-24)

  • Describe treaty enforcement mechanisms that could reliably prevent both ASI and global authoritarianism (pp. 24-27)

  • Analyze how several unsuspected potential influencers of Trump's AI policy are much closer than you might think to joining a critical mass that could persuade Trump to pursue a global AI treaty (pp. 40-55)

  • Argue in fine detail for a re-evaluation of the balance of probabilities among the deeply uncertain and competing risks that likely underlie your current strategic positioning (pp. 15-17 and pp. 84-91)

In our earlier Case for a Coalition for a Baruch Plan for AI (v.1) (pp. 68-75), we outline how a federal "Global AI Lab", resulting from such a treaty, would preserve sizeable innovation capability and decision-making power for your firm, as well as value appreciation for your shareholders.

We propose that you engage with us and/or some of those influencers (and others) to explore whether a last-ditch attempt to mitigate the immense risk of ASI may be possible, while also averting other risks that are as grave or graver.

We'll travel next week to the Bay Area (Oct. 3-9), then to Washington (Oct. 9-16), and then to Mar-a-Lago (Oct. 16-24). I would be happy to meet privately with you or with members of your board or policy team to explore this possibility.

Warm regards,

Rufo Guerreschi,
Founder and Director of the Coalition for a Baruch Plan for AI