Saudi Arabia’s data and artificial intelligence regulator is in the final stretch of a public consultation on a draft responsible AI policy, with submissions closing on 3 May — giving stakeholders less than two weeks to have their say on rules that would govern AI across every sector of the Kingdom’s economy.
The Saudi Data and Artificial Intelligence Authority (SDAIA) opened the consultation in early April via the Istitlaa Platform, the country’s unified electronic portal for public and government consultations. The draft policy applies to government bodies, private companies, non-profit organisations, and individuals involved in developing, deploying, or publishing AI applications within Saudi Arabia.
The draft sets out concrete technical obligations for developers, including requirements to embed watermarks in AI outputs for traceability, integrate content tracking mechanisms, mitigate bias by diversifying training data, and build interpretability features that make model outputs and decision-making processes intelligible to users.
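The draft does not prescribe a particular traceability mechanism. Purely as an illustrative sketch (all names and structures here are hypothetical, not taken from the policy), one simple approach pairs each AI output with a verifiable provenance record:

```python
import hashlib
import json
from datetime import datetime, timezone

def tag_output(text: str, model_id: str) -> dict:
    """Attach a hypothetical provenance record to AI-generated text.

    The record pairs the output with a content hash and basic metadata,
    so downstream consumers can check that the text was not altered.
    """
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    return {
        "content": text,
        "provenance": {
            "model_id": model_id,
            "sha256": digest,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

def verify(record: dict) -> bool:
    """Return True if the content still matches its recorded hash."""
    expected = record["provenance"]["sha256"]
    actual = hashlib.sha256(record["content"].encode("utf-8")).hexdigest()
    return actual == expected

record = tag_output("Example model output.", model_id="demo-model-v1")
print(json.dumps(record["provenance"], indent=2))
print(verify(record))  # True
```

Real-world schemes (for example, content credentials or statistical text watermarks) are considerably more sophisticated; this only illustrates the general idea of making outputs traceable and tamper-evident.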
The consultation reflects a broader tension in Saudi Arabia’s AI ambitions: the Kingdom has moved aggressively to deploy AI across public services and industry under Vision 2030, while formal binding regulation has lagged behind. SDAIA has to date issued guidance on a voluntary basis; the responsible AI policy, once finalised, would represent a more formal governance baseline.
Saudi Arabia has designated 2026 as its Year of Artificial Intelligence, with SDAIA framing the initiative as a nationally coordinated programme spanning innovation, awareness, and implementation — making the governance consultation a timely complement to that ambition.
Submissions can be made via the Istitlaa Platform before the 3 May deadline.