Our New Strategy and Roadmap: Our Future Depends on Trump’s Key AI Advisors
After many weeks of deep strategic assessment and research, following our initial $60,000 funding from the Survival and Flourishing Fund, we have revised our strategy and Roadmap. Read it below, as of May 1st, 2025, or refer to a live, updated page here.
_____________________
Goal
"We call on all heads of state, President-Elect Trump and President Xi, and their advisors and security agencies, to engage in an open global treaty-making process for safe and fair AI of radically unprecedented scope, urgency and effectiveness."
(Abstract, Open Call for a Coalition for a Baruch Plan for AI, December 2024)
Situational Assessment and Strategy
Whether we like it or not, the future of humanity and other conscious life forms rests overwhelmingly on the choices of Donald Trump and a handful of his key AI advisors.
Preventing AI's immense short-term safety and concentration-of-power risks, while realizing its promise, depends on the leaders of the most powerful nations acknowledging the need for (1) an extraordinarily bold AI global treaty - similar to what occurred for nuclear technology in 1946, in the months leading up to the US presenting the Baruch Plan to the UN (on the same day Donald Trump was born!) - and (2) pursuing it through an extraordinarily timely and effective treaty-making model, to avoid repeating the failures of the Baruch Plan and of other treaty attempts related to nuclear issues and climate change.
Given the overwhelming expert and scientific evidence, along with deep and widespread popular concerns, most powerful states would likely join in advancing such an unprecedented treaty if the US were involved or served as an initiator or co-initiator with China. Consequently, its prospects rest almost entirely on the possibility that key AI advisors to Trump will convince him to co-lead with Xi Jinping "an open global treaty-making process for safe and fair AI of radically unprecedented scope, urgency and effectiveness", involving at least a dozen of the most powerful nations, security agencies, and top AI labs - inking the "deal of the century" and entrenching a substantial US economic advantage.
Key AI Advisors to Donald Trump
After thorough research, we believe Trump is primarily influenced by Elon Musk, Peter Thiel, JD Vance, Sam Altman, and Tulsi Gabbard, and secondarily by Google's Demis Hassabis (and Dafoe, Page, Brin), Dario Amodei, some national security officials, and media personalities (Tucker Carlson, Joe Rogan).
While widely viewed as self-interested or opaque, figures such as Musk, Altman, and Thiel articulate philosophical concerns that are internally coherent and morally ambitious, albeit controversial and evolving. These beliefs, supported by their public statements and writings, reflect deeply held convictions that can be engaged constructively. Their statements are clear evidence that they believe the bold global governance needed to prevent ASI is unachievable in time, and that, even if it were achievable, it would likely lead to some dystopia or would increase, or at best not reduce, other grave existential risks. They have a strong rational basis for these views.
Musk and Altman, who were very vocal about the existential risks of AI and the need for strong global governance of AI just a year ago, have understandably come to believe it is likely too late. They have therefore turned to focusing on making Superintelligence work well through their own recipes, beating others to the finish line, and advocating for a light-touch "referee" among AI labs. Thiel acknowledges a substantial risk of extinction but is even more concerned about an AI-driven authoritarian dystopia (or "Antichrist"), including one led by the US. Vance is likely to follow whatever Trump decides, as long as it serves his career and, perhaps, his religious and philosophical convictions, which seemingly align with Thiel's. Gabbard has consistently expressed significant public concern about the risk of nuclear war and has attempted to guide Trump on that issue, so she is likely to do the same regarding AI. Amodei, who has been consistently and deeply worried about AI safety and the concentration of power, has recently shifted his main concern from preventing China from dominating AI to the absolute need for more time to avert an unaligned or out-of-control AI with unforeseen consequences.
We believe that Google DeepMind CEO Demis Hassabis (along with Dafoe and, to a lesser extent, Page and Brin) could be a key ally due to his consistent recent calls for strong global governance of AI and a "sort of new technical UN" for AI. Eric Schmidt, Bill Gates, and others may also serve as influential allies based on their recent statements, although they are unlikely to be first movers.
Case for Key AI Advisors to Donald Trump
While our case will be presented to them on a one-to-one basis, it will be based on a shared general case, which can be summarized as:
"Promoting a global treaty-making process, and the resulting organization, that is as bold, timely, effective, and inclusive as needed to reliably prevent ASI - but also solidly based on federalism and subsidiarity by leveraging and innovating on state-of-the-art organizational, trustless computing, decentralized defensive AI, and guaranteed safe AI concepts and technologies - has high chances to succeed in both:
Sustainably preventing ASI/Superintelligence (forever, or until and unless its risks of human extinction or dystopia are reduced to acceptable levels)
Sustainably minimising the risks posed by strong global AI governance, including:
Excessive and/or unaccountable concentration of power and surveillance (Thiel's stated primary concern)
Excessive caution that could prevent the reduction of other existential and extinction risks, or hinder the "expansion and preservation of consciousness" and its "greater enlightenment" (Musk's primary concern)
A reduction of US global economic advantage (Trump's primary concern, and likely Vance's)
Long-term value lock-in
Inexpert, non-meritocratic, bureaucratic, or mob-based rule (implicit and explicit concern of all of them)"
Uniqueness and Neglectedness
Our work and strategy cover an area that is crucial and neglected:
While several NGOs have proposed global institutions for AI, none advocate for or facilitate the creation of a global federal institution for AI that is as extraordinarily bold, timely, empowered, and inclusive as it needs to be.
While several NGOs are lobbying lawmakers and political leaders for a global governance of AI, none are focused on persuading the key AI advisors to Trump.
While many NGOs and experts have suggested detailed designs for global institutions for AI, very little attention has been given to how we can promote treaty-making processes to build suitable institutions in a timely manner.
Roadmap
Foundation Stage
February-April 2025:
Identified and deeply researched key AI advisors to Trump to understand their interests, values, philosophy, psychology, and triggers. Researched their text, audio, and video statements, as well as expert opinions about them, such as books and articles. Identified in order of priority: Musk, Thiel, Vance, Gabbard, Altman, and others.
Refined our general strategy to advance our main goal best.
Set up a legal entity for CBPAI following the $60,000 grant from the Survival and Flourishing Fund.
First Stage
By June 2025:
Achieved substantial progress, with relevant internal and external experts, in developing content to attract the attention of key AI advisors to Trump and persuade them of the benefits of establishing a suitable treaty-making process for the global governance of AI. Deliverables will include:
Internal Analysis. A well-documented and detailed analysis of why and how they could be convinced, likely in the form of a paper.
Case Documents. Email templates, briefs, detailed Cases, and video content designed to attract their attention and be engaging, enticing, and convincing. Here is a summary of the case, which will be adapted to each advisor individually:
Argue in broad terms how a global governance of AI can still succeed in reliably preventing ASI while at the same time minimizing the substantial risks it would introduce, as mentioned above.
Detail the organisational, technical, and socio-technical provisions of such a treaty-making process that will achieve those goals, including:
Ways to reconcile the extremely pervasive global compliance, oversight, surveillance, and control systems required to enforce the treaty with high and durable levels of global democratic control, liberty, and the principle of subsidiarity.
(If we receive over $100k in funding) Advanced work with deep experts, advisors, team members (and relevant state representatives) on a new version of the Case for a Coalition for a Baruch Plan for AI, which will:
Analyse in depth the socio-technical and geopolitical challenges of AI global governance, and solutions to such challenges that minimise the risks involved.
Aim to play a role similar to that of the Acheson-Lilienthal Report, primarily written by Oppenheimer, in providing the scientific basis for the proposal of the Baruch Plan in 1946.
Seek collaboration with leading NGOs, states, and national security agencies on international initiatives, such as:
Various initiatives for an "IPCC for AI", such as the International Scientific Report on the Safety of Advanced AI.
The Guidelines for Secure AI System Development, led by the NSA and GCHQ, including their follow-up documents and similar documents from China.
By September 2025:
Reached out to the identified key AI advisors to Trump with a highly targeted, personalised, and customised case for a suitable, bold, and timely treaty-making process for AI.
By October 2025:
Reached out to the Director of National Intelligence, Tulsi Gabbard, and key NSA officials (current and former) - including Mr Luber, Director of the NSA Cybersecurity Directorate (in charge of AI safety), his predecessors Sager and Plunkett, and the former NSA director, Mike Rogers - via two Coalition advisors who are former NSA officials.
The NSA Cybersecurity Directorate published a document titled "Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems" in April 2024. This document builds on the previously released Guidelines for Secure AI System Development.
We prominently referred to these Guidelines (mostly ignored by the media) in our Coalition announcement on September 10th, 2024, and then in all our follow-up documents. We are seeking to do the same on the Chinese side.
Sometime after October 2025, we hope that, partly as a consequence of our work, Trump or his administration will decisively advance an exceptionally bold, timely, and effective treaty-making process for AI and actively seek an agreement with China and other key states through private or public meetings.
Second Stage
While it would already be very positive for Trump and other states to actively pursue an exceptionally bold, timely, and effective treaty-making process for AI, we believe that a treaty-making process based on the intergovernmental constituent assembly model may be necessary to make such a process as timely, effective, and inclusive as it needs to be, and more resilient and solid over the long term.
While such a second stage could take many forms, we believe an ideal second stage could start with the US and China - and/or a highly globally diverse group of states that make up at least 30% of global GDP and 30% of the world population, and include at least two states that hold a UN Security Council veto or possess nuclear weapons - agreeing on a mandate and rules for an intergovernmental constituent assembly for AI. They would then convene such an assembly, with a date, place, and budget set for six months later (or nine, or twelve), to which all states would be invited to participate.
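As a purely illustrative aid, the sketch below shows how the convening thresholds described above could be checked for a candidate group of initiating states; the world totals, state figures, and function name are rough assumptions for illustration only, not part of any agreed text.

```python
# Minimal sketch, assuming rough world totals and purely illustrative state figures.
# It checks the convening thresholds described above: at least 30% of global GDP,
# at least 30% of world population, and at least two states holding a UN Security
# Council veto or possessing nuclear weapons.

WORLD_GDP_TRN_USD = 105.0   # assumed rough world GDP, trillions of USD
WORLD_POPULATION_BN = 8.1   # assumed rough world population, billions

# (name, GDP in trillions of USD, population in billions, veto-holder or nuclear power)
candidate_group = [
    ("United States", 27.0, 0.335, True),
    ("China",         17.8, 1.410, True),
    ("Indonesia",      1.4, 0.277, False),
    ("Brazil",         2.2, 0.216, False),
    ("Nigeria",        0.4, 0.224, False),
]

def meets_convening_thresholds(states) -> bool:
    gdp_share = sum(gdp for _, gdp, _, _ in states) / WORLD_GDP_TRN_USD
    pop_share = sum(pop for _, _, pop, _ in states) / WORLD_POPULATION_BN
    veto_or_nuclear = sum(1 for _, _, _, flag in states if flag)
    return gdp_share >= 0.30 and pop_share >= 0.30 and veto_or_nuclear >= 2

print(meets_convening_thresholds(candidate_group))  # True for this sample group
```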
We believe such Mandate and Rules should:
Be approved by a simple majority based on weighted voting, after consensus has been extensively sought. Considering the vast disparity in power between states, particularly in AI and more broadly, and recognizing that three billion people are illiterate or lack internet access, we foresee voting power in such an assembly being initially weighted by population and GDP, giving considerable power to the US and China yet without creating a duopoly (an illustrative sketch of one possible weighting rule follows this list);
Be widely perceived as fair, resilient, neutral, expert, and participatory, including via transnational citizens' assemblies and a balanced mix of experts;
Ensure statutory safeguards are in place to maximize the chances that the resulting IGO will not degenerate, unduly centralize power, or be captured by one or a few nations, and that it will improve over time, via checks and balances, a solid federal structure, regular mandatory re-constituent assemblies, and other measures;
Ensure that the resulting charter is sent for ratification by the signatory states and becomes valid once at least 9 out of every 13 of them approve it - the same ratio used after the US Constitutional Convention of 1787;
Establish an open dialogue and convergence paths with competing or complementary international governance initiatives, especially those led by digital superpowers, positioning the process as a medium-term complement to possibly more urgent, albeit less multilateral, initiatives aimed at tackling pressing global safety risks;
If China, the US, and most of those other states have joined by then, participant states can exert enough positive and negative incentives on non-joining states to keep their AI capabilities under reliable control (provided that risks from the AI supply chain - and from the public availability or hackability of dangerous AI technologies - have been mitigated early on via great-power coordination).
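To make the weighting idea above concrete, here is a minimal sketch assuming an equal blend of GDP share and population share; the 50/50 formula, the figures, and the group composition below are purely illustrative assumptions, not part of any agreed mandate.

```python
# Minimal sketch of one possible population-and-GDP weighting rule and a
# simple-majority check. The 50/50 blend and all figures are assumptions for
# illustration only; the real formula would be fixed in the Mandate and Rules.

def voting_weight(gdp_share: float, pop_share: float) -> float:
    """Hypothetical weight: simple average of a state's GDP and population shares."""
    return 0.5 * gdp_share + 0.5 * pop_share

# Purely illustrative shares of world GDP and world population (fractions).
participants = {
    "United States":      (0.26, 0.04),
    "China":              (0.17, 0.17),
    "India":              (0.03, 0.18),
    "EU members (bloc)":  (0.17, 0.06),
    "Rest of the group":  (0.10, 0.20),
}

weights = {name: voting_weight(gdp, pop) for name, (gdp, pop) in participants.items()}
total_weight = sum(weights.values())

def passes_simple_majority(yes_voters: set) -> bool:
    """True if the weighted 'yes' votes exceed half of all weight present."""
    return sum(weights[s] for s in yes_voters) > total_weight / 2

# The US and China together hold considerable but not majority weight (no duopoly).
print(passes_simple_majority({"United States", "China"}))           # False
print(passes_simple_majority({"United States", "China", "India"}))  # True
```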
As decided in the agreed-upon Mandate and Rules for the intergovernmental constituent assembly for AI, six to twelve months later the Open Transnational Constituent Assembly for AI and Digital Communications will be held in Geneva or another neutral city. Once it is clear the new organization will be created, we believe at least 60 states will participate: a highly globally diverse group making up at least 50% of global GDP and 50% of the world population, and including at least three states that hold a UN Security Council veto or possess nuclear weapons:
States that do not participate in the Assembly can also ratify the resulting charter.
The charter review provisions enable non-participating states to participate in charter review conferences, initially held every two years, on an equal basis.
Early ratifying states commit large enough funds to begin setting up the three agencies, including the Global Public Benefit AI Lab.