Singapore has significantly strengthened its healthcare AI governance architecture, with a series of policy updates, regulatory enhancements, and institutional developments signalling the city-state’s ambition to become a global reference authority on AI in healthcare.
The developments were outlined this week at the ADB-WHO Forum on Harnessing AI for Health Equity in Manila by Professor John CW Lim, Executive Director of the Duke-NUS Centre of Regulatory Excellence (CoRE), Core Lead (Policy) at the SingHealth Duke-NUS Global Health Institute, and Senior Advisor to the Ministry of Health, Singapore.
WHO recognition for HSA
In a significant milestone, the Health Sciences Authority (HSA) was recognised by the WHO in March 2026 as achieving the highest maturity level — Level 4 — for medical device regulation, making it the first national regulatory authority in the world to receive this certification.
The recognition positions Singapore as a global regulatory reference authority, with implications for how other markets look to the city-state when developing their own AI medical device frameworks.
Updated AI guidelines and agentic AI framework
Singapore’s Ministry of Health updated its flagship Artificial Intelligence in Healthcare Guidelines (AIHGle) to version two in March 2026, reflecting developments in generative AI and providing updated good-practice recommendations across the full AI lifecycle.
The Infocomm Media Development Authority (IMDA) has also released a new model governance framework specifically for agentic AI — autonomous decision-making systems — focused on human oversight, accountability, and safety alignment.
Governing philosophy: AI-enhanced, not AI-decided
Singapore’s Minister for Health articulated a clear governing philosophy: healthcare should be AI-enabled and AI-enhanced, but not AI-decided. Human oversight remains central to the framework, with safety, clinical efficacy, and cost-effectiveness as the Ministry’s cornerstones for AI adoption.
Roundtable process shapes regulatory direction
CoRE convened three expert roundtables on healthcare AI governance — two in 2024 and one in August 2025 — bringing together stakeholders from across the sector.
The process concluded that new legislation was not required, but that existing regulatory tools needed reinforcement through additional guidelines, sandboxes, and stakeholder education.
As a result, HSA has introduced a pre-market consultation scheme to provide earlier regulatory guidance for developers, while areas including algorithmic trustworthiness and data labelling accountability have been identified for further study.
From regulating tools to governing systems
Perhaps the most significant strategic shift Professor Lim outlined was Singapore’s move away from regulating individual AI tools toward governing AI-enabled healthcare systems as a whole.
The regulatory journey has progressed from foundations built on software-as-a-medical-device frameworks and the Healthcare Services Act, through national AI strategy and generative and agentic AI governance, toward a more complex frontier ahead.
That next frontier, as Professor Lim described it, encompasses adaptive learning systems that evolve with real-world data, robust real-world monitoring mechanisms, and cross-domain regulatory convergence — where the boundaries between medical devices, clinical services, and AI systems increasingly blur and require integrated oversight.
The underlying principle, he emphasised, is that regulation must be adaptive and evidence-led: responsive to real-world impact rather than reactive to every innovation cycle.