The modern discourse on artificial intelligence is often framed as a recent phenomenon driven by advances in machine learning, large-scale data, and computational power.
Yet the foundations of this field are much older, rooted in earlier attempts to formalise reasoning, logic, and decision-making into systems that could be replicated by machines. By the mid-20th century, these ideas had begun to take shape in more concrete ways, culminating in early experiments that sought not just to compute, but to emulate aspects of human thought.
What followed was a period of considerable optimism, arguably ahead of its institutional and technical realities. Early researchers envisioned systems capable of reasoning and learning at levels comparable to human intelligence within relatively short timeframes. While those expectations proved overly optimistic, the underlying ambition to translate intelligence into structured, programmable systems has persisted.
Today, that ambition has materialised in ways that are both more powerful and more complex than originally anticipated.
Artificial intelligence is no longer a theoretical construct or a laboratory pursuit. It is embedded within economic systems, public infrastructure, and institutional decision-making processes. Its development is global, its deployment increasingly widespread, and its implications extend well beyond the domains in which it is immediately applied.
It is within this context that the question of governance has taken on renewed urgency.
Over the past decade, the governance of artificial intelligence has moved from abstraction to something closer to architecture. What began as a largely normative conversation anchored in ethics and high-level principles has gradually evolved into a dense and layered ecosystem of global frameworks, regional strategies, and national regulatory approaches.
From the OECD AI Principles and UNESCO’s Recommendation on the Ethics of AI, to more binding efforts such as the EU AI Act, the contours of governance are now more visible than they were even a few years ago.
At the level of principles, there is now a noticeable convergence.
Across jurisdictions, ideas such as fairness, transparency, accountability, safety, and human oversight appear with increasing consistency. These are no longer contested concepts; they have, in many ways, become the shared vocabulary of AI governance.
Whether articulated through intergovernmental principles, multilateral recommendations, or national strategies, the language of “trustworthy” and “human-centric” AI is now widely understood.
Yet this convergence is, to some extent, misleading.
While principles have aligned, the systems through which they are implemented remain far less consistent.
A closer look suggests that global AI governance is not a single framework, but a layered one. At one level sit soft-law instruments: non-binding principles and recommendations that shape expectations without prescribing enforcement.
At another are regional efforts, which attempt to adapt these ideas to local priorities and institutional realities. And at the national level, more concrete regulatory approaches begin to emerge, in some cases moving toward binding obligations.
These layers are not necessarily in conflict. But they are not always aligned.
What begins to emerge is a form of coherence gap: a condition in which there is broad agreement on what responsible AI should look like, but less clarity on how that responsibility is operationalised across different systems.
This becomes more apparent when one considers how AI systems actually function in practice.
Take a hypothetical, but increasingly plausible scenario. An AI system used for clinical decision support is developed by a company operating across multiple jurisdictions. The underlying model is trained on datasets sourced from different regions, hosted on cloud infrastructure located elsewhere, and deployed within a healthcare system governed by its own national regulatory framework.
The principles guiding its design may reflect global norms of fairness, transparency, and accountability, but the obligations governing its use differ depending on where and how it is deployed.
In such a setting, responsibility is not absent. But it is distributed.
And not always in ways that are clearly defined.
Part of this divergence can be traced to differences in normative force. Soft-law frameworks carry influence, but rely on voluntary adoption. Binding regulations impose obligations, but are limited in geographic scope. As a result, the same principle may exist as guidance in one context and as an enforceable requirement in another.
At the same time, institutional capacity plays a defining role. Governance is not only about what is written into policy, but what can be implemented in practice. Jurisdictions with more developed regulatory ecosystems are better positioned to operationalise complex requirements such as audits, conformity assessments, or lifecycle monitoring. Others may adopt more flexible or incremental approaches, often prioritising capacity-building alongside governance.
Economic orientation further shapes these differences.
AI is increasingly seen not only as a governance challenge, but as a strategic asset. For some, regulation is a mechanism to manage risk and safeguard rights.
For others, it is balanced against the need to foster innovation, attract investment, or accelerate digital transformation. These positions are not necessarily contradictory, but they do contribute to divergence in how governance is approached.
At the same time, much of the current regulatory focus remains concentrated on how AI systems are deployed and used.
This is understandable. It is at the point of application that risks become visible, and where accountability is most readily assigned.
However, it also means that relatively less attention is directed toward upstream dynamics, such as data concentration, compute infrastructure, and the role of a limited number of global technology providers, which increasingly shape how AI systems are developed and controlled.
This creates a subtle imbalance.
While principles are articulated at a global level, and regulations are implemented at a national level, key elements of the AI ecosystem operate across borders, often beyond the direct reach of any single regulatory authority.
In many ways, this reinforces the coherence gap.
Taken together, these dynamics suggest that the challenge of global AI governance is not the absence of frameworks. It is the absence of alignment across them.
The question is no longer how to define responsible AI.
It is how to ensure that responsibility can be exercised in systems that are inherently interconnected, yet institutionally uneven.
This points toward a shift in how governance might need to be understood.
Rather than seeking full harmonisation, there may be greater value in coordination: ensuring that different systems can operate alongside one another without creating fragmentation or regulatory blind spots. This includes greater attention to interoperability across definitions, risk classifications, and compliance approaches.
It also suggests the need to look beyond the application layer. Questions of data access, infrastructure control, and cross-border dependencies are likely to play an increasingly central role in shaping how governance functions in practice.
At the same time, institutional capacity cannot be treated as secondary. Without the ability to interpret, implement, and enforce frameworks, even well-articulated principles remain aspirational.
Finally, governance must be understood not only as a regulatory matter, but as an operational one. It is shaped by how organisations interpret frameworks, how systems are integrated into workflows, and how accountability is distributed across complex, multi-actor environments.
This is where the next phase of AI governance is likely to unfold.
Not in the creation of new principles, but in the alignment of systems that can give those principles practical effect.
Increasingly, this calls for approaches that move between policy, implementation, and system design, bridging the space between global norms and local realities.
Entities working in this space, including HealthTechAsia, are beginning to engage with this challenge more directly: supporting stakeholders in interpreting, aligning, and operationalising governance across diverse contexts.
As AI continues to evolve, the measure of governance may not be the number of frameworks that exist, but the extent to which they are able to function together in practice.
Because ultimately, the challenge is not defining what responsible AI looks like.
It is ensuring that responsibility can be exercised consistently, credibly, and across borders in the systems where AI actually operates.