Convenience without judgement: AI health advice and the systems it enters

Across regions, the growing use of artificial intelligence for everyday health advice is often framed as a question of access.

The narrative is both familiar and persuasive.

AI reduces cost. It removes friction. It brings health information closer to individuals who are distant from formal systems. For many, particularly across parts of the Global South, this shift appears transformative. Where healthcare systems are uneven or difficult to access, AI offers something that feels immediate, responsive, and increasingly reliable.

At first glance, the value is clear. It lowers barriers. It expands reach.

But as with many technological shifts in public systems, the more important question lies elsewhere: not in what technology enables, but in what it assumes about the systems into which it is introduced.

Because this is not merely a shift in access. It is a shift in how judgment is exercised.


The economics of convenience

One way to understand this shift is through a basic economic lens.

As Ronald Coase observed, institutions and behaviours are shaped by transaction costs: the effort required to access, process, and act on information. When those costs change, patterns of behaviour adjust accordingly.

AI systems such as ChatGPT and Google Gemini dramatically reduce these costs in healthcare engagement. They compress search, interpretation, and synthesis into a single interaction.

A question asked. An answer delivered.

But as transaction costs fall, something else tends to shift alongside them: the distribution of decision-making effort.

In traditional settings, individuals bore part of that burden: searching, comparing, interpreting. AI removes much of that effort. It does not simply provide information. It resolves it.

And in doing so, it quietly relocates judgment.

From information to resolution

Earlier forms of digital health, and before that internet search, exposed individuals to plurality. Multiple sources, uneven credibility, competing interpretations. The process was inefficient, but it required engagement.

Large Language Models (LLMs) can collapse that plurality.

They synthesise across sources and present a single, coherent response. The uncertainty that exists within health knowledge is compressed into a structured answer. The process becomes near invisible; the output appears complete.

This shift aligns with a deeper observation made by Herbert Simon: “A wealth of information creates a poverty of attention.”

AI addresses that poverty by reducing the need to process information. But it also reduces the need to interrogate it.

Why access does not guarantee outcomes

Digital health has encountered this problem before.

For over a decade, policy efforts have focused on expanding access: telemedicine, mobile platforms, digital records. The underlying assumption has been consistent: make systems available, and outcomes will improve.

In practice, this has proven only partially true.

Because outcomes are shaped not only by access, but by how systems are used within institutional and social contexts. Workflows, authority structures, trust relationships, and levels of literacy all mediate how technology is interpreted and acted upon.

AI-mediated health advice enters this same landscape, but with less visibility and fewer institutional guardrails.

Health advice, by its nature, is contextual. It depends on individual history, underlying conditions, and environmental factors. AI systems, however, operate on generalisation. They produce responses that are broadly applicable, not individually precise.

This is not a flaw. It becomes one when generalised outputs are interpreted as personalised guidance.


A familiar policy pattern

What we are observing follows a well-established pattern in public policy.

As Michael Lipsky argued, the real substance of policy emerges at the point of interaction, where systems meet individuals under conditions of constraint.

AI does not bypass this reality. It enters it. And in doing so, it reflects it.

In healthcare, this means that AI advice is filtered through existing conditions: varying levels of health literacy, differing trust in institutions, and uneven access to professional care. Individuals do not engage with these systems in a vacuum; they engage with them as part of broader decision-making environments shaped by necessity and constraint.

Where the effects are amplified

These dynamics are particularly pronounced in parts of the Global South.

Not because of the technology itself, but because of the context.

In environments where access to healthcare professionals is limited, AI-mediated advice often becomes a first and sometimes primary point of engagement. In such settings, the authority of a well-articulated response can exceed its underlying reliability.

At the same time, the capacity to critically interpret these outputs is unevenly distributed.

The result is not misuse, but predictable use under constraint.

As development economists have long noted, interventions introduced into resource-limited systems often produce outcomes shaped more by context than by design intent.

Public systems, private intelligence

Governments have begun to respond.

Efforts to develop independent or locally aligned AI health systems reflect an important recognition: that reliance on external platforms alone introduces risks of control, alignment, and long-term dependency.

This is a necessary step. But it does not resolve the core issue.

Because the challenge is not only who builds the system. It is how the system is used.

From access to capability

This brings the discussion back to a familiar, but often overlooked, principle.

As Amartya Sen argued, development is best understood as the expansion of capabilities.

AI may expand access, but its value depends on how well people can interpret and act on it. Without that, access alone is not enough. And in healthcare, that difference is consequential.

The appeal of AI-driven health advice lies in its simplicity.

It reduces effort. It delivers clarity. It makes systems feel more accessible. But simplicity, in complex systems, does not remove underlying constraints.

It obscures them.

About this analysis

This article is part of HealthTechAsia’s Policy Lens series, which tracks healthcare AI governance developments across Asia and the Middle East. HealthTechAsia also provides advisory support to organisations navigating the region’s regulatory and governance landscape — including regulatory impact assessments, AI governance frameworks, policy monitoring, and market-specific regulatory briefs.

Enquiries: team@healthtechasia.co

Author

  • Vishnu Narayan

    Vishnu Narayan writes on the safe and ethical governance of artificial intelligence and emerging technologies, with a particular focus on healthcare systems.

    He works in regulatory and public policy at the Medical Technology Association of India (MTaI), New Delhi, where he engages on responsible innovation and fair practices in the health technology sector.

    Trained as a biomedical engineer, he approaches technology governance as a regulatory systems strategist, examining how institutions can ensure that innovation evolves alongside patient safety, accountability, and public trust.

    Vishnu is also a Research Group Member at the Center for AI and Digital Policy (CAIDP), Washington DC and has been part of the Commonwealth AI Consortium, London.

    He is an alumnus of the Tata Institute of Social Sciences (TISS), Mumbai.

