A Fork in History

[This post is derived from a chapter with the same name in the Strategic Memo of The Deal of the Century (v.2)]

In recent months, most leading AI labs - including OpenAI, Musk's xAI, and Zuckerberg's Meta, as well as AI chip companies like NVIDIA - have openly declared their intent to build Superintelligence, also referred to as Artificial Superintelligence, or ASI.

Unlike AGI, Superintelligence is a precisely defined term: an AI that self-improves at an ever-accelerating rate, leading (almost) by definition to its escaping the control of any human entity.

While some top AI leaders have in recent months distorted the term to mean human-controllable AI, an ASI would (almost) certainly lead to a technological singularity, with unforeseeable consequences, positive or negative, including a substantial risk of human extinction or near-extinction. Most leading scientists and top AI experts (including Altman and Musk) now warn - ever more urgently - of substantial and imminent risks from ASI. These risks also include an irreversible consolidation of global power in one or a few entities.

Furthermore, we have no idea whether such an intelligence would be conscious, and if it were, whether that consciousness would be in a state of suffering or well-being.

Predicted timelines for reaching ASI, or a "point of no return" in its creation, have been collapsing from decades to just a few years, with many experts not ruling out that we may be only months away. Altman thinks we have entered the "event horizon" of ASI, i.e., the point of no return. Anthropic CEO Dario Amodei predicts we'll reach AI as capable as a "country of geniuses" (i.e., ASI) by 2026. Musk has said of ASI that "if it doesn't happen this year, next year for sure", while Zuckerberg has stated that "developing superintelligence is now in sight".

Most of these AI leaders, once the loudest voices warning of safety risks and calling for global regulation, have understandably grown disillusioned with the total lack of proportionate diplomatic action. Now most of them are focused on winning the race to ASI, hoping to embed their values into the first ASI and trusting that those values will stick and that things will somehow go well.

Top AI experts are not alone in sounding the alarm: 76% of US voters believe artificial intelligence could eventually pose a threat to the existence of the human race. 62% of US voters are primarily concerned about artificial intelligence, while just 21% are primarily excited about it.

Our predicament is nothing short of mind-boggling; so much so that most state leaders and media have yet to come to terms with it, paralyzed by disbelief, confusion, and denial.

While we have somehow survived unbridled nuclear and biological innovation to date, AI advances seem to confirm the vulnerable world hypothesis: unless certain technological innovations are reliably governed and restrained globally, humanity runs an ever-greater risk of doom.

Paradoxically, at the same time, success in staving off both of those immense risks - and instead building advanced but human-controllable "tool AI" - would most likely unlock unimaginable benefits for humanity and other sentient beings. These could go far beyond great abundance, healthcare, and scientific innovation, potentially helping humans substantially or radically reduce suffering and unlock human flourishing in ways we can barely imagine.

Never before has mankind stood so clearly at a fork in history: unimaginable tragedy or triumph.

Rufo Guerreschi

I am a lifetime activist, entrepreneur, and researcher in the area of digital civil rights and leading-edge IT security and privacy – living between Zurich and Rome.

https://www.rufoguerreschi.com/