Whatever Happened to "AI Safety"?
Alan Turing saw it coming. In 1951, when artificial intelligence existed only in the darkened halls of university computing labs—staffed by brilliant geeks who had a hard time finding a date, back when "computer science" was still an exotic phrase—Turing made a prediction that would prove eerily prescient. He said that one day AI innovation would become impossible to stop. And that, he said, would be the moment when government would have to step in with guardrails, to make sure that machines designed to imitate human intelligence would prove more helpful than harmful.
That moment arrived decades later, in 2023, after the field had passed through two brutal "AI winters" when funding nearly dried up altogether. But the deeper irony cuts to the bone. Turing—a national hero for cracking the Enigma code and helping end the Second World War—was in effect killed by his own government for being gay, driven to suicide by a court-ordered chemical castration. The man who saw our future most clearly was destroyed for who he loved.
The person who stepped into the AI spotlight seventy-two years later could not have been more different in circumstance, if not in brilliance. Sam Altman—openly gay, preferring the real glamour of San Francisco's Russian Hill to the faux glamour of the Googleplex—crisscrossed the world in 2023 appealing for government regulation of artificial intelligence. It was a bravura performance. But one could be forgiven for wondering whether his true agenda was less about safety and more about activating global partners for OpenAI, whose ChatGPT was swiftly becoming the fastest-growing consumer application on Earth.
Altman was not known to be among the "doomers"—the researchers, many clustered around Oxford, who warned of the existential risk AI posed to the human species. Nor was he an accelerationist, shouting go, go, go at all costs. Instead, he did something far more commercially effective: he propelled OpenAI into becoming the fastest-growing startup in history.
"Alignment"—the principle that AI systems should be aligned with human values and ethics—turned out, in Altman's hands, to mean something closer to the alignment between business strategy and market opportunity. Key safety researchers walked out the door, one by one, as that reality became impossible to ignore.
Yet for one luminous moment, someone in power actually tried. The one leader who bucked the accelerationist tide was the young, newly installed UK Prime Minister Rishi Sunak, whose rise owed no small debt to his father-in-law, Narayana Murthy, the co-founder of Infosys and the man who put India on the map as the first major nation of the Global South to push back against the AI dominance of the Global North.
With Murthy's connections and credibility behind him, Sunak assembled twenty-nine nations at Bletchley Park—the very site where Turing had cracked Enigma—and secured their signatures on a declaration stating that AI must be proven safe before it could be released to the public. The plan was to host a series of summit meetings on AI Safety until all the world's citizens were protected, much as they had been under Cold War–era arms-control agreements like the START treaties, which put guardrails on nuclear weapons.
The symbolism was exquisite. The policy ambition was real. And for a brief season, it seemed as though the world might actually get this right.
The momentum did not last.
Altman and the CEOs of the so-called "Magnificent Seven"—the Big Tech companies remaking their entire strategies around AI—unleashed their lobbyists across every venue that mattered. AI Safety legislation failed in New York. It failed in Geneva. It failed in tech-heavy statehouses like Sacramento.
But the kill shot was delivered in a single moment, behind closed doors: a simple argument, made privately and never debated publicly, that changed everything. The next day, market forces drew the obvious conclusion. Not just the NASDAQ but the Dow Jones and the S&P 500 recognized that meaningful AI governance was never going to happen. That assumption sent markets rising, and rising again, fueling the biggest surge of wealth in world history—up $1.7 trillion in market capitalization in just three years.
AI Safety, as a political movement, was dead.
Or was it?
A prescient journalist, the New York Times columnist Thomas Friedman, predicted in September of last year that the United States and China would eventually align on AI governance. He called it co-opetition—a framework in which the two superpowers would compete in some domains and cooperate in others. It was a smart prediction. But Friedman left something out.
How could it happen? Who would make it happen? How could such a U.S.–China alignment be operationalized when neither superpower has any incentive to move first?
The answer, we believe, does not lie in Washington or Beijing. It lies in the Global South—in the markets of four hundred million people across nations like Thailand, Indonesia, Mexico, and Peru, whose governments are now asking a different question. Not "AI Safety" in the old, binary sense—safe versus unsafe, regulate versus accelerate. But something more nuanced and, we believe, more achievable: How do we reduce potential harm AND optimize benefit in a single governance model?
The AI Middle Way Coalition
A third path between U.S. market-driven and Chinese state-controlled AI—rooted in the Buddhist principle of balance between extremes, designed for the nations whose people have the most to lose and the most to gain.
This website—middleway.ai—represents that conviction. It is not too late. But it is almost too late.
The challenge now is not to resurrect "AI Safety" as it was conceived in 2023. That ship has sailed. The challenge is to build something better—a framework that combines the reduction of harm with the optimization of benefit, that gives Global South nations genuine leverage in the AI era rather than leaving them as passive consumers of technologies designed elsewhere for someone else's profit.
As Rousseau predicted centuries ago, wealth inequality unchecked will grow so extreme that the top-heavy economy will eventually collapse under the chaos and short-sightedness of its own leaders. The question is whether we build a middle way before that collapse—or after it.
The window is still open. Barely.
Craig Warren Smith is the founder of the Digital Divide Institute and Chairman of the AI Middle Way Coalition, a partnership of Global South nations pursuing a third path for AI governance. The Coalition's Bangkok Declaration is scheduled for signing on April 21, 2026.