Why and How Google Leaders Could Be Swayed to Champion a Bold US-China-Led AI Treaty
[derived from the “Google Leaders” subchapters of the Strategic Memo of the Deal of the Century]
Google leaders hold diverging views on AI, ASI, and AI governance, with notable differences between its top executives and its founders, who remain its controlling owners.
Demis Hassabis, CEO of Google DeepMind and recent Nobel laureate, has in recent months been the most vocal among AI leaders in supporting strong global governance to address both the safety risks and the concentration-of-power risks of AI. He has spoken repeatedly about the importance of safe AI, multilateral cooperation, and the potential need for international institutions. He recently called for the creation of a "technical UN" to ensure global AI safety, and more recently still, for a broad and diverse coalition of non-superpower nations, such as the UK, France, Canada, and Switzerland, to advance global regulation of dual-use AI standards.
Back in July 2023, Google DeepMind published a white paper, Exploring Institutions for Global AI Governance, a detailed "exploration" of the feasibility of creating four new IGOs for AI. These included a Frontier AI Collaborative, an "international public-private partnership" to "develop and distribute cutting-edge AI systems" or to "ensure such technologies are accessible to a broad international coalition". Yet the paper has seen no follow-up since its publication. Hassabis plays a careful balancing role, as other Google leaders, including his bosses, have not echoed his proposals and appear to hold different views.
Echoing Hassabis, Google CEO Sundar Pichai recently stated that he believes the risk of AI ending human civilization is "pretty high", yet he remains optimistic that as that risk becomes more widely perceived, humanity will come together to prevent it. Allan Dafoe, DeepMind's influential Head of AGI Strategy, who wrote a foundational paper on applying the Baruch Plan model to AI, has gone even further than Hassabis, arguing that technical AI alignment is not strictly necessary: global coordination to restrict unsafe AIs could suffice.
Yet, at Google, Larry Page and Sergey Brin still hold the real power. Though largely absent from the public eye, they retain control over Alphabet's board and strategic direction. Their position is relatively straightforward in deeds, if not in words. Page, in particular, appears set on pursuing all-powerful AI at nearly any cost: he famously labeled those overly concerned about human extinction risk as "speciesist". Brin, while occasionally hinting at an acknowledgment of the risks, has recently returned to work full-time to ensure Google wins the AI race. Ray Kurzweil, a visionary who foresaw superintelligence and the singularity, was hired by Page and remains a key AI advisor, speaking widely about his eagerness for immortality and the numerous benefits that ASI will bring. Page and Brin appear comfortable working toward a successor species, and toward being first to create it, while presumably recognizing the extinction risks. Meanwhile, Brin recently acknowledged, as Hassabis has, the possibility that we are living in a simulation, but showed little concern for its implications for precaution.
As it stands, Google's overall stance appears to be a soft public-relations hedging strategy, likely meant to buy the company time and goodwill while it races forward. While Hassabis seems sincere, a "good cop, bad cop" dynamic appears to be at play: Hassabis gets to play the good cop, while the real decisions are, and will continue to be, made elsewhere. Hence, Hassabis' gestures toward treaties and safety appear, overall, more like positioning than commitment.
Without a genuine pivot by Page and Brin, Google will likely remain an accelerator or a bystander, rather than a decisive actor in fostering the bold and timely global AI treaty that you are calling for. The prospects of such a pivot depend on a path toward a global treaty-making process that is realistic, ensures proper technical guidance, does not overly restrict most transhumanist and post-humanist AI aspirations, and preserves a significant role for Page and Brin in the decision-making process.