
Fragmented AI Policy Is the Road to Autocracy — And We're Already On It

Why the US/China split is producing surveillance states across the Global South, and why the window to change course is closing

February 2026 · Craig Warren Smith


Here is a prediction that no one in Washington or Beijing wants to hear: the greatest threat to democracy in the coming decade is not a particular ideology or political movement. It is fragmented AI policy — the chaotic, uncoordinated patchwork of regulations, non-regulations, and competing jurisdictional claims that now characterizes how most nations on earth approach artificial intelligence. This fragmentation is not a temporary inconvenience. It is the mechanism by which democratic governance is being hollowed out and replaced, incrementally and often invisibly, by autocratic control.

And it is already happening.

The Anatomy of Fragmentation

When we speak of "fragmented AI policy," we do not mean the absence of policy. We mean the opposite: the proliferation of competing policies, issued by competing agencies, serving competing political interests, with no coordinating framework. The causes vary from nation to nation, but the pattern is remarkably consistent. In almost every case, fragmentation reflects three intersecting forces.

First, the US/China geopolitical split forces nations into impossible choices. The United States offers market access, venture capital, and cloud infrastructure from Google, Microsoft, Amazon, and NVIDIA — but with conditions that privilege American corporate interests and data extraction. China offers affordable hardware, Huawei telecommunications infrastructure, and AI systems that come bundled with surveillance architectures. Nations that try to engage both sides find their policy apparatus pulled in contradictory directions, with different ministries aligning with different patrons.

Second, domestic institutional competition creates internal fragmentation. In Indonesia, for example, there is currently no umbrella AI regulation. Instead, the Ministry of Communication and Digital Affairs drafts an AI roadmap, the National Research and Innovation Agency (BRIN) handles technology assessment, the Personal Data Protection Commission enforces privacy law, and thirty-nine separate ministries and agencies contributed to a national AI white paper — which sounds collaborative but actually reveals how many competing power centers exist. After the roadmap is published, each ministry will be required to draft its own AI guidelines. That is not coordination. That is fragmentation by design.

Third, the lobbying power of global technology corporations ensures that whatever policy does emerge tends to serve corporate rather than citizen interests. In nations where regulatory capacity is already stretched thin, a single well-funded lobbying campaign can shape legislation — or prevent it from passing altogether. The result is that the loudest voice in any AI policy conversation is rarely the voice of the people who will be most affected.

Why Fragmentation Leads to Autocracy

The connection between fragmented policy and rising autocracy is not abstract. It operates through a precise mechanism.

When a nation lacks coherent AI governance, it cannot negotiate effectively with the technology superpowers. It cannot set terms for data sovereignty, algorithmic transparency, or citizen protection. Instead, it accepts whatever terms are offered by whichever superpower happens to be in the room. These terms invariably include surveillance infrastructure — whether the American model of corporate data extraction or the Chinese model of state monitoring. Often both.

Both sides of the US/China split lead to citizen surveillance. The American model concentrates data in the hands of private corporations accountable to shareholders, not citizens. The Chinese model concentrates data in the hands of the state. Neither model gives citizens ownership of their own data.

The surveillance question is not "Will it happen?" It is "Who will operate it?" And in fragmented policy environments, the answer is: whoever gets there first, with whatever terms they choose, locked in forever.

Lock-In: The Irreversibility Problem

This brings us to the most dangerous aspect of fragmented AI policy: technological lock-in. When a nation adopts AI infrastructure — cloud platforms, data architectures, algorithmic systems for taxation, healthcare, education, and law enforcement — those choices become extremely difficult to reverse. The data formats, the API dependencies, the training datasets, the institutional workflows built around specific platforms — all of these create path dependencies that harden over time.

Our analysis suggests that by late 2027, approximately 55–60% of Global South AI infrastructure decisions will be locked in. By 2029, that figure rises above 80%. By 2030, the window closes entirely. The governance frameworks that nations adopt — or fail to adopt — in 2026 and 2027 will determine whether four billion people live under democratic or autocratic AI systems for a generation.
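To make that timeline concrete, here is a minimal illustrative sketch in Python that interpolates the figures above with a logistic curve. The curve shape, midpoint, and steepness are assumptions chosen only so the output roughly matches the stated projections (55–60% by late 2027, above 80% by 2029, effectively closed by 2030); it is a visualization aid, not the underlying analysis.

    # Illustrative only: a logistic curve fitted by eye to the article's
    # stated projections. The midpoint and steepness are assumptions.
    import math

    def locked_in_share(year, midpoint=2027.6, steepness=1.4):
        """Hypothetical share of Global South AI infrastructure
        decisions locked in by a given year."""
        return 1.0 / (1.0 + math.exp(-steepness * (year - midpoint)))

    for year in (2026, 2027, 2027.75, 2029, 2030):
        print(f"{year}: {locked_in_share(year):.0%} locked in")

On these assumed parameters, the share crosses 50% during 2027 and approaches saturation by 2030, which is the sense in which the window closes.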

AI lock-in will be orders of magnitude more consequential than telecom lock-in, because AI systems do not merely transmit information — they make decisions. They determine who gets loans, who gets hired, who gets flagged by law enforcement, who gets access to healthcare. When these systems are locked into autocratic architectures, the autocracy becomes self-reinforcing and, for practical purposes, permanent.

Indonesia: A Case Study in Subnational Risk

Indonesia illustrates a dimension of this problem that receives almost no attention: the subnational level. Indonesia is not merely a nation of 280 million people. It is a nation of approximately 500 districts ("kabupaten"), each governed by a bupati (regent) with significant administrative authority — and each tied to political party structures that have their own relationships with technology providers and their own incentives for surveillance.

While Jakarta debates national AI policy, the real decisions about which AI systems will govern daily life are being made at the district level. A bupati who adopts a Chinese-sourced facial recognition system for "public safety" has made a governance choice that affects millions of people — and that choice may be driven not by national policy but by party patronage, personal relationships with technology vendors, or simple cost comparisons that ignore long-term sovereignty implications.

This subnational fragmentation is where democracy dies most quietly. No one holds a press conference to announce that District X has adopted surveillance infrastructure. No international media covers the slow accumulation of AI systems across hundreds of local governments. Yet the cumulative effect is that by the time national policy catches up, the installed base of autocratic AI infrastructure may already be too large and too embedded to remove.

What is true of Indonesia is true, with local variations, across the Global South. In Mexico, state and municipal governments make technology procurement decisions that may or may not align with federal AI ambitions. In Thailand, provincial administrations interact directly with Chinese technology firms. In Peru, regional governments face the same choices with even less institutional capacity to evaluate them.

What Media and Democratic Governments Must Understand

For media organizations committed to democracy, this is not a technology story. It is the democracy story of the decade. The question of who governs AI is the question of who governs, period. And right now, the answer being written across the Global South — in the absence of coherent policy, in the gaps between fragmented agencies, in the quiet procurement decisions of district governments — is: not citizens.

Either side leads to the same destination. A nation that surrenders its data to American corporations and a nation that accepts Chinese surveillance infrastructure have both made the same fundamental choice — to cede citizen governance of AI to an external power.

A Third Path Exists — But Not for Long

The AI Middle Way Coalition — uniting Thailand, Indonesia, Mexico, Peru, and an expanding network of partners — is attempting to demonstrate that a third path is possible. One that engages both the American and Chinese AI ecosystems without subordination to either. One that insists on data sovereignty, algorithmic transparency, and citizen governance as non-negotiable conditions of technology adoption. One that addresses the subnational challenge directly, building governance frameworks that reach below national capitals to the district and provincial levels where the actual infrastructure decisions are being made.

The Bangkok Declaration, to be signed on April 21, 2026, will formalize this commitment among founding nations. But the Declaration alone is not enough. What is needed is for democratic media and democratic governments worldwide to recognize that fragmented AI policy is not a bureaucratic inconvenience — it is the mechanism of democratic erosion. And that the window to establish alternatives is measured not in decades but in months.

The infrastructure decisions being made right now — this year, this quarter, in national ministries and district offices across the Global South — are writing the governance code that four billion people will live under for a generation. If those decisions are made in fragmentation, under pressure, without coordination, the result will be autocracy by default. Not because anyone chose it, but because no one organized the alternative in time.

Craig Warren Smith is the founder of the Digital Divide Institute, Chairman of the AI Middle Way Coalition, and co-architect of the Bangkok Declaration framework. He has spent 25 years working on technology governance in the Global South, including national policy adoption in Thailand and Indonesia.
