UN consultation calls for enforceable AI governance as Global Dialogue approaches

HealthTechAsia Contributing Policy Editor & Non-Resident Fellow Vishnu Narayan joined the United Nations Office for Digital and Emerging Technologies’ third virtual stakeholder consultation on 12 May, ahead of the first session of the Global Dialogue on AI Governance.

The consultation, co-chaired by H.E. Egriselda López, Permanent Representative of El Salvador to the United Nations, and H.E. Rein Tammsaar, Permanent Representative of Estonia to the United Nations, brought together governments, civil society organisations, academic institutions, and industry representatives to help shape the agenda and substance of the upcoming Dialogue.

Discussions were structured around four thematic clusters: AI opportunities and implications; bridging AI divides; safe and trustworthy AI; and human rights, transparency, and oversight.

The Global Dialogue itself represents a significant institutional development. Established by UN General Assembly resolution following the adoption of the Global Digital Compact at the 2024 Summit of the Future, it is designed to serve as the UN’s permanent, inclusive platform for AI governance cooperation, and the first mechanism to give every country a formal seat at the table.

Its first session will be held on 6 and 7 July 2026 in Geneva, back-to-back with the ITU AI for Good Global Summit, with a second session to follow in New York in May 2027.

A central tension running through the consultation was the gap between the pace of AI deployment and the capacity of institutions, particularly in the Global South, to govern it effectively. Participants broadly agreed that voluntary, self-regulatory models driven by dominant private actors are structurally inadequate to meet the moment.

What is needed, the consultation heard, is enforceable governance with meaningful accountability mechanisms, not frameworks that exist on paper but carry no consequence.

Vishnu brought a perspective grounded in the realities of developing country contexts. In many parts of the world, he argued, AI is not being deployed into mature institutional environments. It is being asked to compensate for institutional scarcity: filling gaps in healthcare delivery, public administration, and social services where human capacity is already stretched.

That context changes the governance calculus significantly. The risks are different, the populations affected are more vulnerable, and the consequences of getting it wrong are more severe.

Bridging the AI divide

On infrastructure, the consultation moved beyond familiar debates about skills and training. Participants argued that the deeper divide is structural: sovereign access to compute, digital public infrastructure, and reliable connectivity. These were framed not as development aspirations but as prerequisites for meaningful participation in the AI economy.

Linguistic inclusion emerged as a specific and underappreciated dimension of inequality. AI systems continue to prioritise dominant global languages, leaving smaller languages, and the communities that speak them, increasingly marginalised as AI becomes more deeply embedded in public services and economic life. Capacity building, participants argued, must address this directly, including through support for local language AI development.

Safety standards and the case for convergence

On safety, the consultation surfaced a fundamental problem: there is no shared definition of AI safety across stakeholder groups. Governments, companies, civil society organisations, and academic institutions are working from different premises, producing fragmented and sometimes contradictory approaches.

Several participants pointed to the EU AI Act, specifically its systemic risk provisions and the emerging Code of Practice, as a candidate baseline for global convergence. The argument was pragmatic as much as principled: aligning with a common standard would allow resource-constrained states to draw on existing evaluation infrastructure, third-party auditors, and shared intelligence networks rather than building expensive independent capacity from scratch.

The consultation also heard that safety testing needs to broaden beyond computer science. Participants called for continuous expert red-teaming, with contributions from the humanities, social sciences, and other disciplines capable of identifying harms that technical evaluation alone may miss.

Accountability and human oversight

Accountability was a consistent thread throughout the day. Speakers argued that oversight mechanisms must remain anchored close to citizens, not abstracted into global frameworks that are difficult to enforce at the national level. Final decision-making authority, the consultation heard, must remain with humans. Efficiency cannot be permitted to supersede dignity.

Several participants flagged the judicial context as an area of immediate concern. In parts of Latin America and the Global South, generative and agentic AI tools are already being used to assist in drafting legal rulings, in environments where transparency is limited and the cultural and linguistic diversity of affected populations is not reflected in the systems being used.

The risk, participants argued, is that AI does not merely automate existing processes: it automates existing inequalities.

The question of output-level governance also drew scrutiny. Judging AI safety solely by final outputs, participants warned, creates an illusion of compliance while leaving the reasoning processes and value conflicts embedded in AI systems entirely opaque.

Looking ahead

The Geneva session in July will be held alongside the ITU AI for Good Global Summit, one of the year’s most significant gatherings on technology governance. Together, the two events are likely to set the terms of the international AI governance conversation for the remainder of 2026.

The stakeholder consultation process is designed to ensure that the Dialogue reflects genuinely diverse perspectives, including those of nations and communities that have so far had limited influence over how global AI governance frameworks are being shaped. Expressions of interest to co-chair thematic discussions remain open to Member States and relevant stakeholders until 22 May 2026.

Author

  • Matthew Brady

    Matt Brady is an award-winning storyteller and strategic communications advisor.

    A native Englishman with global experience spanning China, Hong Kong, Iraq, Malaysia, Saudi Arabia, and the UAE, he founded HealthTechAsia and co-founded the non-profit Pul Alliance for Digital Health and Equity.

    He has led social media and communications initiatives for world leaders, corporations, and NGOs, and spearheaded editorial strategy for a portfolio of leading healthcare events and year-round publications — transforming coverage from print to digital — including Arab Health, Asia Health, Africa Health, FIME, and others. Earlier in his career, he held editorial roles at Microsoft and Johnson & Johnson.

    He received the 2021 Medical Travel Media Award from the Malaysia Healthcare Travel Council and a Guardian Student Media Award in 2000.

    Connect with Matt on LinkedIn: https://www.linkedin.com/in/matt-brady-0764992/

