January 1946, January 2026: The Paradox of Pragmatic US Presidents and Existential Treaties

How the president who withdrew from 66 international agreements could become the one to propose the most ambitious treaty in history—and why that's not a contradiction

I. The Current Moment

Last week, Donald Trump withdrew the United States from 66 international treaties and organizations, calling them "contrary to the interests of the United States." He is openly discussing annexing Canada, seizing Greenland, and retaking the Panama Canal. His administration has launched what critics call an illegal invasion of Venezuela. He has embraced Viktor Orbán and praised authoritarian governance models.

The list of withdrawals reads like the dismantling of the post-WWII international order: the Paris Agreement, the WHO, UNESCO, the UN Human Rights Council, and most strikingly, the UN Framework Convention on Climate Change—making the United States the first country ever to abandon that bedrock treaty, which the Senate ratified by unanimous consent in 1992.

His net approval rating stands at -12, having declined steadily from +6 when he took office. Among independents, the collapse has been catastrophic: 43 points underwater by December.

Meanwhile, the AI race accelerates. Leading labs—OpenAI, Anthropic, xAI, Google DeepMind—openly race toward artificial superintelligence (ASI). They acknowledge the risks. They say there's no alternative. Sam Altman stands next to Trump at press conferences while his company pursues systems that could, by his own admission, pose existential risks to humanity.

To most observers, Trump appears to be the last person on Earth who would champion a bold international treaty on AI—let alone one that could extend to nuclear weapons.

This perception is almost certainly wrong.

II. The Conventional Reading—And Why It Misses the Point

The standard interpretation goes like this: Trump is dismantling the liberal international order because he's ideologically opposed to multilateralism. He sees international institutions as constraints on American sovereignty and power. Therefore, he will never support strong global governance of any kind.

This reading misunderstands the man entirely.

Trump isn't dismantling international institutions because he believes in sovereignty as a principle. He's dismantling these institutions because he correctly perceives them as ineffective—legacy structures that constrain American action while delivering little real benefit. The UN Human Rights Council doesn't stop human rights abuses. The Paris Agreement didn't stop emissions. The WHO didn't stop COVID.

A pragmatic leader looking at this landscape doesn't see the choice as "international cooperation vs. national sovereignty." He sees the choice as "ineffective institutions vs. something that actually works."

The need for a big win

Trump's approval is underwater and sinking. He needs a legacy-defining achievement. Here's what most observers miss: American voters overwhelmingly support strong AI regulation and see AI as an existential threat—and this concern has been growing steadily. A president who positioned himself as the leader of global AI safety would be handing himself exactly the big win he needs. The polling is striking:

According to the AI Policy Institute, 76% of US voters believe AI will pose a threat to the "existence of humanity." This isn't a fringe view—it's mainstream American opinion.

As of November 2025, YouGov found that 45% of US voters think AI should be "much more regulated," with an additional 29% saying it should be "somewhat more regulated"—three-quarters of the country calling for stronger rules.

Most remarkably for a Republican president: as of August 2025, 78% of US Republican voters believe artificial intelligence could eventually pose a threat to the existence of humanity.

As of June 2025, 43% of US citizens are very or somewhat concerned that AI could "cause the end of the human race."

On international cooperation specifically: according to a 2024 University of Maryland survey, 77% of US voters support a strong international treaty for AI—echoing the surge of public support for international control of atomic energy in the first half of 1946.

The demand for leadership exists; what's missing is the supply. A president with declining approval who positioned himself as the leader of a global AI safety initiative would be swimming with the current, not against it.

In a striking coincidence that could appeal to a man who believes in destiny, the Baruch Plan—the boldest international treaty ever proposed—was presented to the United Nations on June 14, 1946, barely an hour after Donald Trump was born in Queens, New York. Trump can succeed where Truman failed, completing a mission that began with his first breath. For a deeper analysis of why Trump's psychology, incentives, and circumstances align with this possibility, see "TwentyTwo Reasons Why Trump Could Be Persuaded to Co-Lead a Bold Global AI Treaty with Xi."

III. Why Leaders' Distrust Is Actually an Advantage

Here's a counterintuitive point that deserves emphasis: the fact that current world leaders deeply distrust each other is not an obstacle to a successful treaty—it's actually an advantage.

The arms control achievements of the Reagan-Gorbachev era rested substantially on personal rapport between two leaders. When that personal chemistry faded or successors arrived, the treaties weakened. Trust-based agreements are inherently fragile.

A treaty negotiated between leaders who fundamentally distrust each other—Trump and Xi, for instance—would by necessity be built on verification, not trust. It would be, in the technical sense, trustless: designed from the ground up to function even when no party trusts any other party.

This is exactly what robust global governance requires. The original Baruch Plan failed partly because verification technology didn't exist in 1946—there was no way to confirm compliance without intrusive inspections that neither side would accept. Today, compute monitoring, satellite imagery, and AI-assisted inspection make "verify, don't trust" (rather than "trust but verify") operationally feasible.

Leaders who start from mutual suspicion will demand ironclad verification mechanisms. They'll insist on enforcement provisions with teeth. They won't accept symbolic gestures or gentleman's agreements. The resulting framework, if achieved, would be far more durable than anything built on personal relationships.

Trump's transactional worldview—his instinct to assume everyone is trying to cheat him—is precisely the mindset needed to design a treaty that can't be cheated.

IV. January 1946 and January 2026: The Influencer Parallel

The parallels between January 1946 and January 2026 extend beyond circumstances to the specific people who could make the difference.

The 1946 coalition

In the early months of 1946, as the world reeled from Hiroshima, a small circle of trusted experts shifted US policy toward unprecedented global coordination:

Robert Oppenheimer, the director of the Manhattan Project, provided the scientific credibility and moral authority. He had built the bomb; his voice carried unique weight when he said it must be controlled internationally.

Dean Acheson, the deputy secretary of state, brought strategic foresight and bureaucratic skill. He translated Oppenheimer's scientific concerns into policy language that Truman could act on.

Bernard Baruch, Truman's wartime economic czar, provided the political savvy and business credibility. He knew how to sell a radical idea to skeptical audiences.

Together, they convinced a pragmatic machine politician with declining approval ratings—a man who had just ordered atomic bombs dropped on civilian populations—to propose the boldest treaty in world history.

The 2026 parallel

Today, once again, the immense responsibility falls on a small, decisive group close to the US President:

Sam Altman could play the dual role of technologist and industrialist—part Oppenheimer, part Baruch. He has built the systems driving the current AI race; his voice would carry unique weight if he said they must be controlled internationally. Unlike Oppenheimer, he also commands the business credibility of a tech CEO.

JD Vance and Marco Rubio could bring the geopolitical instincts and access that Acheson provided—the ability to translate abstract risk into concrete, politically viable action for a president who thinks in terms of deals, not doctrines.

Steve Bannon could play Baruch's role in a different register: translating a bold AI treaty (which he has called for) into something the MAGA base will embrace as American strength, not globalist surrender.

Given the enormous ethical and existential stakes, Pope Leo XIV may also play a role that has no 1946 equivalent. The newly elected Pope has made AI ethics a priority, and Vance recently deferred outright to papal authority on the overall ethical framing of AI's future. A Vatican endorsement could provide moral cover that transcends partisan politics.

The pattern is clear: a coordinated pitch from figures like Vance, Altman, Bannon, and Pope Leo XIV—aligned with support from Tulsi Gabbard, Dario Amodei, and media figures like Tucker Carlson and Joe Rogan—could shift Trump's calculus in ways that no think tank report or UN resolution ever could.

This is how the 1946 Baruch Plan happened. The same can be done today.

V. Authoritarian Leaders and Democratic Global Governance: No Paradox

A common objection holds that authoritarian or authoritarian-leaning leaders would never agree to genuinely democratic global institutions. If Trump, Xi, and Putin are all primarily concerned with maintaining their own power, how could they possibly support international governance with real teeth?

This objection misunderstands what authoritarian leaders actually want.

Authoritarian leaders want to be autocratic internally—to maintain control within their own domains. But they face a fundamental problem: other powerful actors could develop technologies or capabilities that threaten their domestic control. An AI system developed by a rival that escapes human oversight doesn't respect national borders. A superintelligence aligned with American values is as threatening to Beijing as one aligned with Chinese values is to Washington.

To protect their autonomy at home, authoritarian leaders need something they cannot achieve unilaterally: assurance that no other actor will develop capabilities that could dominate them. This requires accountable supranational institutions with genuine enforcement power.

The logic is straightforward:

  • Each leader wants to prevent any other leader from capturing overwhelming power through AI

  • No leader can prevent this through bilateral deals alone (any two parties could collude against a third)

  • Only a multilateral framework with transparent governance and enforcement mechanisms provides reliable protection

  • That framework must be at least somewhat democratic—because any governance structure that gives veto power to a single nation is one that could be captured by that nation

There is no contradiction between authoritarian national leaders and democratic global structures. Democratic global governance is the only structure they would accept, because it's the only one that reliably prevents power capture by rivals.

Putin doesn't want Xi to control superintelligence. Xi doesn't want Trump to control it. Trump doesn't want anyone else to control it. The only stable equilibrium is shared control through institutions that no single actor dominates.

This is why the Baruch Plan proposed governance modeled on the UN Security Council but without any nation's veto. Truman—no globalist—understood that American interests required an institution America couldn't dominate, because that was the only institution the Soviets would accept, and Soviet acceptance was necessary to prevent a catastrophic arms race.

The same logic applies today, intensified by the stakes involved.

VI. What Success Would Mean: AI, Nuclear Safety, and the Baruch Plan's Original Vision

The Baruch Plan was never intended to address only nuclear weapons. The original proposal specified that the International Atomic Development Authority would "progressively extend its control over all existing and future advanced weapons, including biological and chemical agents."

A successful AI treaty could fulfill that original vision—and do something more: make existing nuclear arsenals dramatically safer.

Advanced AI systems developed under proper international governance—transparent, verifiable, maintained under multilateral oversight—could reduce nuclear risks that have persisted since 1946:

False alarms: There have been numerous documented incidents where early warning systems generated false signals that could have triggered nuclear war. AI systems could provide more reliable threat detection while filtering out false positives, extending the time available for human decision-making.

Verification: AI-enabled monitoring could make arms control agreements more verifiable than ever before—addressing one of the key failure modes of the original Baruch Plan.

Crisis communication: Mutually trusted AI systems could facilitate secure communication between nuclear powers during high-tension moments, reducing the risk of misunderstanding that could spiral into catastrophe.

More fundamentally, a successful AI treaty would establish a governance model that could then extend to nuclear weapons—finally achieving what the Baruch Plan attempted 80 years ago. The institutional infrastructure, verification mechanisms, and political precedent created for AI would provide the foundation for bringing all dangerous technologies under coordinated human control.

VII. The Window

On June 14, 1946, Bernard Baruch told the United Nations: "We are here to make a choice between the quick and the dead."

In a striking coincidence, the Baruch Plan was presented barely an hour after Donald Trump was born in Queens, New York.

Eighty years later, Trump can succeed where Truman failed.

And here's what most observers miss: the very traits that seem to disqualify Trump from leading a global AI treaty are precisely what could make him uniquely effective at it.

His transactional worldview? It means he'll demand verification mechanisms that actually work—no gentleman's agreements, no trust-based frameworks that collapse when leadership changes.

His hostility to existing international institutions? It means he won't settle for symbolic gestures. If he proposes something, it will have teeth.

His obsession with "winning"? A successful AI treaty that positions America as the leader of global AI governance would be the ultimate win—bigger than any real estate deal, bigger than any election.

His willingness to talk to adversaries without preconditions? It means he can negotiate with Xi directly, without the bureaucratic filters that slow traditional diplomacy.

His need for a legacy-defining achievement? This would dwarf anything any president has accomplished. Reagan ended the Cold War. Trump could be the president who prevented the AI apocalypse.

Someone put Greenland in his head. Someone put the Panama Canal in his head. Someone can put something better there.

Trump's scheduled meeting with Xi Jinping in April 2026 creates a natural deadline. The question isn't whether this can happen—Truman proved pragmatic leaders can propose radical international cooperation when their advisors convince them it serves their interests.

The question is whether the right people reach him in time.

The author is Rufo Guerreschi. For more information on efforts to advocate for a global AI treaty modeled on the Baruch Plan, see cbpai.org.

Rufo Guerreschi

I am a lifetime activist, entrepreneur, and researcher in the area of digital civil rights and leading-edge IT security and privacy – living between Zurich and Rome.

https://www.rufoguerreschi.com/