AI and the state: why technology inherits the institutions it enters

Across the Global South, artificial intelligence is increasingly positioned as the next frontier in public sector reform. From agriculture and health to urban planning and welfare delivery, AI is framed as a force multiplier offering prediction, efficiency, and scale where state systems have long struggled with resource constraints.

The narrative is familiar and compelling: smarter systems can compensate for thin institutions. Algorithms can see patterns humans miss. Automation can fill gaps where capacity is weak. In countries grappling with stretched bureaucracies and uneven service delivery, AI is imagined not merely as a tool, but as a solution.

Yet this framing rests on a fragile assumption. It treats AI as a technological rupture, something capable of bypassing the slow, contested work of institution-building. In practice, what is emerging is less dramatic and more revealing: AI adoption in public systems rarely transforms governance. It inherits it.

From digital promise to administrative reality

Agriculture offers a telling example. Across many lower- and middle-income countries, AI-enabled pest surveillance systems are being introduced to improve early warning, standardise diagnosis, and deliver timely advisories to farmers. These systems are often presented as a clear break from earlier ICT platforms: smarter, faster, and more adaptive.

On the ground, however, the shift is more incremental than advertised. AI has improved diagnostic speed. Image recognition tools reduce the time taken to identify pests. Dashboards aggregate data more efficiently. Alerts travel faster. But the deeper structures of decision-making (who validates information, who authorises action, how advisories are framed and disseminated) remain largely unchanged.

Thresholds for intervention are still set centrally. Validation processes remain hierarchical. Feedback loops are weak or absent. Intelligence is added, but reflexivity is not. AI accelerates existing workflows without reconfiguring them. It makes institutions faster, not fundamentally different.

Why trust continues to reside with humans

This continuity matters. Governance failures are rarely about the absence of data or prediction alone. They arise from how authority is exercised, how accountability is enforced, and how systems learn or fail to learn from error. Computational sophistication cannot substitute for these functions.

A common assumption in AI policy discourse is that automation will displace human intermediaries. In practice, legitimacy in public systems continues to rest with people, not systems. Extension officers, scientists, local officials, and peer networks remain primary arbiters of credibility. AI outputs are interpreted, triangulated, and filtered through lived experience before action is taken.

This is not resistance to technology. It reflects how trust is socially constructed. In many Global South contexts, authority is earned through presence, continuity, and relational accountability rather than algorithmic opacity. AI is integrated selectively, calibrated against context and experience. What emerges is situated rationality, not blind adoption.

Human-centric governance as a democratic imperative

This persistence of human mediation points to a deeper governance principle. A genuinely human-centric approach to AI places human agency at its core. When people cannot understand, contest, or influence the systems shaping their lives, technology shifts from enabling to extractive. AI governance is therefore not merely technical; it is democratic.

As AI increasingly affects fundamental rights, from privacy and health to due process and freedom of expression, oversight, transparency, and accountability become essential. Global frameworks such as the OECD/G20 AI Principles, along with emerging judicial decisions, underscore a clear message: accountability cannot be delegated to algorithms or vendors.

At the same time, human values are not universal abstractions. Their interpretation varies across cultures and institutional contexts. Privacy, for example, is widely recognised as a fundamental right, yet its meaning differs across regions. Some societies emphasise individual data protection; others prioritise collective security or administrative efficiency. These differences strengthen the case for governance grounded in institutions capable of negotiating trade-offs legitimately.

Public infrastructure, private intelligence

Another tension sits beneath many AI deployments: ownership. Across the Global South, public systems increasingly rely on privately developed algorithms, platforms, or datasets. This enables speed, but introduces asymmetries in control, transparency, and long-term stewardship. When core intelligence layers are privately owned, the state’s capacity to govern becomes contingent. Performance alone cannot resolve this tension; governance clarity is required.

Perhaps the most underappreciated constraint facing AI-enabled public systems is institutional learning. AI generates signals, but signals matter only if institutions can respond, adapt, and recalibrate. Where feedback loops are weak and errors are not systematically analysed, intelligence becomes static. Faster decisions are not necessarily better decisions. Without learning mechanisms, speed can entrench error.

Governing intelligence before it governs us

If there is a lesson here, it is this: AI governance must begin with institutional introspection, not procurement. AI adoption is an institutional reform challenge, not merely a digital one. Human intermediaries are governance assets, not inefficiencies. Public AI systems require public governance clarity. AI will not fix weak institutions. But approached with humility and institutional awareness, it can help reveal where governance needs strengthening most. That may be its most valuable contribution.

This perspective does not reject technological ambition. It suggests sequencing: strengthening institutions alongside innovation, so that intelligence supports governance capacity, public trust, and long-term resilience rather than substituting for them.

Author

  • Vishnu Narayan

    Vishnu Narayan writes on the safe and ethical governance of artificial intelligence and emerging technologies, with a particular focus on healthcare systems.

He works in regulatory and public policy at the Medical Technology Association of India (MTaI), New Delhi, where he engages on responsible innovation and fair practices in the health technology sector. Trained as a biomedical engineer, he approaches technology governance as a regulatory systems strategist, examining how institutions can ensure that innovation evolves alongside patient safety, accountability, and public trust. Vishnu is also a Research Group Member at the Center for AI and Digital Policy (CAIDP), Washington DC, and has been part of the Commonwealth AI Consortium, London.

    He is an alumnus of the Tata Institute of Social Sciences (TISS), Mumbai.


Discover more from HealthTechAsia
