
Coalition for a Baruch Plan for AI
We stand on the brink of an irreversible slide towards AIs capable of subjugating or annihilating humanity, or of enabling a state or firm to do so.
While it is very late, we can still conceivably prevail and realize AI's astounding opportunities. But only if a few key potential influencers of the U.S. President's AI policy rise to their historic role - as they did in 1946 for nuclear technologies.
In recent months, OpenAI, Musk's xAI, Zuckerberg's Meta and NVIDIA have openly declared their intent to build Artificial Superintelligence (ASI), a form of self-improving AI that is by definition beyond durable human control, with unforeseeable consequences - including, as even Musk and Altman have openly admitted, human extinction.
While most leaders and journalists are still trying to digest and acknowledge the immensity of these seemingly inexorable risks, a spark of hope arises from the fact that we have been here before.
In 1946, another pragmatic U.S. President, Harry Truman, was persuaded by key influencers of his nuclear policy, and by aligned political conditions, to propose the Baruch Plan, an unprecedented treaty establishing a global monopoly over all dangerous nuclear weapons, research and materials.
We can do it again, this time for AI, and succeed via better deal-making and treaty-making methods, leading Donald J. Trump to ink The Deal of the Century.
Launched on December 18th, 2024 by:
What is the Baruch Plan?
On June 14, 1946, barely an hour after the birth of Donald Trump, President Truman proposed the Baruch Plan to the United Nations: a bold proposal to create a new global democratic agency that would bring under exclusive international control all dangerous research, arsenals, facilities and supply chains for nuclear weapons and energy - to be extended to all other globally dangerous technologies and weapons. While it ultimately failed, the Plan remained for years the official U.S. nuclear policy.
The Idea of a Baruch Plan for AI
Current AI governance initiatives by superpowers, IGOs and the UN are severely insufficient in scope, timeliness, inclusivity and participation. Nothing less than a Baruch Plan for AI can reliably tackle AI's immense risks to human safety and of unaccountable concentration of power and wealth, while realizing its astounding potential. Awareness of this need is mounting. Some of the most influential AI experts and leaders have referred to, or outright called for, the Baruch Plan as a model for AI governance, including Yoshua Bengio, the most cited AI scientist, Ian Hogarth (UK AI Safety Institute), Allan Dafoe (Google DeepMind), Jack Clark (Anthropic), Jaan Tallinn (Future of Life Institute), and Nick Bostrom.
The Deal of the Century
A Better Treaty-Making Method
Awareness of the risks and of the need for coordination is rapidly growing. But turning that consensus into a suitable and timely treaty among even a moderate number of powerful states is impossible with the dominant treaty-making methods, as shown by the utter failure of nuclear and climate treaties. We need a much more effective, high-bandwidth, time-bound, and inclusive treaty-making process: one with a set start and end date, a clear mandate, and supermajority rules to prevent any state's veto. We need something as extreme as the circumstances, but also historically proven, as we only have one shot. Fortunately, we can rely on the proven successes of the intergovernmental constituent assembly treaty-making model, which in its most successful instances led quickly to the U.S. and Swiss federations.