Coalition for a
Baruch Plan for AI
We stand on the brink of an irreversible slide towards AIs capable of subjugating or annihilating humanity, or enabling a state or firm to do so.
Though the hour is late, we can still prevail and realize AI's astounding opportunities. But only if a few key influencers of the U.S. President's AI policy rise to their historic role, as their predecessors did in 1946 for nuclear technology.
Launched on December 18th, 2024, by:
The Deal of the Century
In recent months, OpenAI, Musk's xAI, Zuckerberg's Meta and NVIDIA have openly declared their intent to build Artificial Superintelligence (ASI): a form of self-improving AI that is, by definition, beyond durable human control, with unforeseeable consequences, including human extinction, as openly acknowledged even by Musk and Altman.
While most leaders and journalists are still coming to terms with the immensity of these seemingly inexorable risks, a spark of hope arises from the fact that we have been here before.
In 1946, another pragmatic US President, Harry Truman, was persuaded by key influencers of his nuclear policy, and by aligned political conditions, to propose the Baruch Plan: an unprecedented treaty proposal to establish exclusive international control over all dangerous nuclear weapons, research and materials.
We can and should do it again for AI. And succeed this time, via better deal-making and treaty-making methods, leading Trump and Xi Jinping to co-lead the Deal of the Century.
Video Intro by our President
What is the Baruch Plan?
On June 14, 1946, barely an hour after the birth of Donald Trump, President Truman's administration proposed to the United Nations the Baruch Plan: a bold proposal to create a new global democratic agency that would bring under exclusive international control all dangerous research, arsenals, facilities and supply chains for nuclear weapons and energy, to be extended to all other globally dangerous technologies and weapons. While it ultimately failed, the Plan remained for years the official U.S. nuclear policy.
The Idea of a Baruch Plan for AI
Current AI governance initiatives by superpowers, IGOs and the UN are severely insufficient in scope, timeliness, inclusivity and participation. Nothing less than a Baruch Plan for AI can reliably tackle AI's immense risks to human safety and of unaccountable concentration of power and wealth, while realizing its astounding potential. Awareness of this need is mounting. Some of the most influential AI experts and leaders have referred to, or outright called for, the Baruch Plan as a model for AI governance, including Yoshua Bengio, the most cited AI scientist, Ian Hogarth (UK AI Safety Institute), Allan Dafoe (Google DeepMind), Jack Clark (Anthropic), Jaan Tallinn (Future of Life Institute), and Nick Bostrom.
A Better Treaty-Making Method
Awareness of AI risks and of the need for coordination is rapidly growing. But turning this consensus into a suitable and timely treaty is virtually impossible under the dominant treaty-making methods, as shown by the utter failures of nuclear and climate treaties. Building on an initial US-China agreement, we need a far more effective, high-bandwidth, time-bound, and inclusive negotiation and treaty-making process.
We need one with a set start and end date, a clear mandate, and supermajority rules that deny veto power to any single state. We need something as extreme as the circumstances, but also historically proven, since we only have one shot. Ideally, we can rely on the proven successes of the intergovernmental constituent assembly model of treaty-making, which, in its most successful instances, produced the US and Swiss federations within a few months.