Artificial intelligence governance in healthcare is progressing across multiple jurisdictions but remains fragmented, under-enforced, and uneven in its ambition, according to a global analysis presented at the ADB-WHO Forum on Harnessing AI for Health Equity in Manila last week.
Peiling Yap, Chief Scientist at HealthAI – The Global Agency for Responsible AI, drew on the organisation’s December 2025 report to outline the state of AI health governance across eight countries — Brazil, China, India, the UK, Singapore, the United States, Vietnam, and Zambia — identifying shared structural trends, persistent cross-cutting challenges, and country-specific findings.
Three structural trends
Yap identified three trends shaping the regulatory landscape globally. The first is the growing role of software as a medical device (SaMD) frameworks as the primary legal foundation for AI governance in healthcare.
Most countries are broadly aligning with IMDRF-based risk classification guidelines, she said, but a regulatory grey zone persists around in-hospital AI systems and direct-to-consumer applications, which frequently fall outside existing medical device classifications. Emerging mechanisms to address these gaps include model cards for AI-enabled medical devices, predetermined change control plans, and expanded post-market surveillance authority covering the full product lifecycle.
The second trend is the emergence of what Yap described as multi-layered governance architecture, combining national AI strategies, data protection legislation, digital health infrastructure, and sector-specific medical device regulation.
She cited the EU AI Act’s contested interaction with the Medical Device Regulation, Peru’s AI law and its limited enforcement capacity, and South Korea’s dual compliance requirement under both the Digital Medical Products Act and the AI Basic Act as illustrations of the complexity this creates in practice.
The third trend is the rise of digital sovereignty as a central concern in national health data strategy. Governments are increasingly developing dedicated health data platforms and seeking to define how data may be used in AI development.
Yap noted that data from wearables and consumer health applications frequently falls outside traditional health privacy frameworks, and that re-identification of individuals through cross-dataset analysis is an expanding risk that regulators are only beginning to address.
Three cross-cutting challenges
Across all eight jurisdictions, Yap identified three systemic challenges. The first is fragmented regulatory authority: in every country examined, multiple agencies carry overlapping mandates with limited coordination between them.
The second is the inadequacy of existing governance frameworks for adaptive AI — systems that continue to learn and change after deployment — which current regulatory models were not designed to handle.
The third is infrastructure inequity: gaps in broadband connectivity and digital literacy between urban and rural populations, a challenge Yap described as present not only in emerging markets but across all eight countries in the study.
She also noted a broader structural problem: a consistent distance between the ambitious strategic visions governments articulate in national AI strategies and their capacity to translate those visions into enforceable regulation.
China: a two-phase model
Yap devoted particular attention to China, describing its approach as a deliberate two-phase strategy. The first phase focuses on building what she termed a “digital highway” — the data platforms, governance standards, and regulatory infrastructure needed to support AI deployment at scale.
This includes a four-level national population health information platform under a Medical Big Data strategy, and an Internet Plus Healthcare programme that had reached more than 3,340 internet hospitals by December 2024.
The second phase, already under way, involves guiding AI applications onto that infrastructure. China’s National Medical Products Administration has developed specialised guidelines for AI-enabled medical devices, with mandatory adverse event reporting and periodic re-evaluations.
By October 2025, more than 110 Class III AI medical devices had received approval, with domestic firms holding over 90% of approvals. Yap also noted China’s stated intent to align with EU MDR, IVDR, and FDA GMLP standards to facilitate synchronous global product launches.
Vietnam: foundations in place, capacity to build
Vietnam, Yap said, is at an earlier stage but has the right legal foundations in place, including a Digital Technology Industry Law and a Personal Data Protection Decree. AI applications in healthcare are currently regulated under SaMD pathways.
She noted that HealthAI is actively working with Vietnamese regulators to build the enforcement and assessment capacity required to manage the incoming wave of AI deployment.
Recommendations for governments
Closing her presentation, Yap outlined HealthAI’s recommendations for national governments and health ministries: establishing formal interagency coordination mechanisms such as dedicated AI councils or cross-ministerial working groups; adopting participatory and evidence-based policy design; and prioritising investment in foundational infrastructure including electricity, broadband connectivity, digital health platforms, and society-wide AI literacy.
The full HealthAI report is available for download via the organisation’s website.