Beyond principles: the next phase of AI governance in healthcare

The governance of artificial intelligence in healthcare has entered a more mature phase.

Across jurisdictions, including the European Union, the United States, the United Kingdom, India, and Singapore, regulatory approaches have moved beyond abstract principles toward structured systems grounded in risk classification, data governance, and lifecycle oversight.

Frameworks such as the EU AI Act, evolving U.S. Food and Drug Administration guidance, and the World Health Organization’s recommendations on AI for health signal a clear shift: AI is no longer treated as an experimental overlay, but as a clinical system requiring institutional scrutiny.

This is meaningful progress. It reflects a recognition that systems influencing diagnosis, treatment, and access to care must be governed with a level of seriousness comparable to drugs, devices, and clinical protocols.

Globally, there is convergence on principles, even as implementation continues to vary. Across policy frameworks, certain expectations now appear consistently. AI is positioned as augmentative rather than autonomous in clinical decision-making, with human accountability retained. Patient awareness and consent are increasingly embedded within regulatory expectations.

Data governance, covering quality, traceability, and protection, has become central to regulatory design. Risk-based classification frameworks are widely adopted, ensuring that scrutiny is proportionate to potential harm. There is also growing emphasis on explainability, post-market monitoring, and incident reporting, reflecting the recognition that AI systems evolve and require continuous oversight.

Institutions such as the World Health Organization and the International Medical Device Regulators Forum have played an important role in shaping this baseline, while national frameworks have adapted these principles to local contexts.

While these approaches differ in structure, ranging from the EU’s comprehensive risk-based model to the FDA’s lifecycle-oriented approach, they reflect a shared underlying logic.

Taken together, these developments mark an important first phase in the governance of AI in healthcare: establishing safety, accountability, and trust at the level of the system.

Yet this convergence also reveals its limits. The shared focus remains largely centred on system behaviour: its outputs, risks, and immediate interactions with clinicians and patients. What is less fully addressed is the broader ecosystem within which these systems operate.

Digital infrastructure and the changing foundations of patient safety

The first gap concerns accountability in multi-actor environments. While policy frameworks emphasise responsibility, AI systems in healthcare are rarely singular entities. They are composed of layered interactions between model developers, software platforms, cloud providers, and clinical users. Each contributes to system behaviour in ways that are not easily separable. When harm occurs, responsibility can become diffuse.

Existing frameworks often assume a clarity of responsibility that, operationally, is still evolving. The challenge is not only to assign responsibility, but to design enforceable liability pathways across these interconnected actors.

A second issue relates to context and generalisability. While regulatory approaches rightly stress data quality, they do not consistently require validation across diverse clinical environments. Much of the data used to train healthcare AI systems originates in well-resourced settings.

When deployed in contexts with different disease burdens or care pathways, performance may shift in ways that are not immediately visible. This is less a failure of the technology than a consequence of incomplete validation. A more forward-looking approach would embed context-specific performance requirements into both approval and post-market evaluation.

There is also an implicit assumption across many regulatory systems that the infrastructure supporting AI is stable and uniform.

Reliable connectivity, interoperable health records, and technical support are often treated as given. In practice, these conditions vary significantly, and where they are weak or absent, AI introduces not only opportunity but new forms of fragility. Policy must therefore engage more directly with infrastructure readiness as a component of safety, rather than treating it as external.

Equally important is the increasing complexity of AI supply chains. In some cases, AI is embedded within broader platforms in ways that can limit visibility into underlying models, training data, or update mechanisms.

While regulatory attention has focused on end products, it does not consistently extend to transparency across the value chain. As systems become more modular and interdependent, there is a case for supply-chain-level disclosure standards, ensuring that trust extends beyond the interface to the system as a whole.

Another dimension that remains underdeveloped is the economic and operational sustainability of AI in healthcare.

Policy has largely focused on enabling innovation and ensuring safety, but less on how these systems are sustained over time. In many settings, the absence of reimbursement pathways results in pilot initiatives that do not scale. If AI is to move from experimentation to integration, governance must engage with how these technologies are financed and maintained.

Closely linked to this is the role of development financing. Large-scale investments, whether domestic or through international partnerships, are increasingly directed toward strengthening health infrastructure. Commitments such as Japan’s Official Development Assistance loans to India, which include substantial allocations for health, illustrate how capital is being mobilised at scale.

While not explicitly framed around AI, such investments will shape the environments in which data-driven systems operate. This raises a broader consideration. As health systems become more digitally integrated, the absence of explicit governance provisions within financing frameworks (covering data stewardship, interoperability, and accountability) risks creating infrastructure that is technologically capable but institutionally underprepared. Conversely, there is an opportunity to embed governance expectations at the point of investment.

In this context, regions undergoing rapid health system expansion and digitalisation may increasingly shape how governance models are operationalised, not through formal standard-setting alone, but through deployment choices at scale.

A further gap lies in how human interaction with AI systems is conceptualised. While most frameworks emphasise oversight, they often assume that clinicians have the time and capacity to meaningfully interrogate outputs.

In practice, decision-making is shaped by workload and institutional pressures. Without structured training and workflow integration, oversight risks becoming nominal rather than substantive. Beyond these gaps, additional considerations are emerging.

The environmental footprint of large-scale AI systems is receiving increasing attention. Similarly, accessible patient grievance mechanisms for cases of algorithmic harm remain underdeveloped. These issues may appear peripheral today, but are likely to become more central as adoption deepens.

Taken together, these gaps do not diminish the significance of current regulatory efforts. Rather, they indicate a transition point. The first phase of AI governance in healthcare has focused on establishing safety and trust at the level of the system. The next phase must address the conditions within which these systems operate.

As Peter Drucker observed, “The best way to predict the future is to create it.”

This shift is not merely technical; it is institutional. It requires moving from viewing AI as a product to understanding it as part of a living, interconnected health system shaped by infrastructure, economics, and global interdependence.

For regulation to remain effective, it must evolve accordingly: not only to keep pace with technological development, but to anticipate the conditions under which these systems are deployed.

Ultimately, the question is not whether AI in healthcare is regulated, but whether it is regulated for the world it is actually entering.

Author

  • Vishnu Narayan

    Vishnu Narayan writes on the safe and ethical governance of artificial intelligence and emerging technologies, with a particular focus on healthcare systems.

    He works in regulatory and public policy at the Medical Technology Association of India (MTaI), New Delhi, where he engages on responsible innovation and fair practices in the health technology sector. Trained as a biomedical engineer, he approaches technology governance as a regulatory systems strategist, examining how institutions can ensure that innovation evolves alongside patient safety, accountability, and public trust. Vishnu is also a Research Group Member at the Center for AI and Digital Policy (CAIDP), Washington, DC, and has been part of the Commonwealth AI Consortium, London.

    He is an alumnus of the Tata Institute of Social Sciences (TISS), Mumbai.
