As artificial intelligence accelerates at unprecedented speed, healthcare leaders gathered today at MedTech World Middle East to address a critical question: how can AI move beyond experimentation and into safe, scalable, hospital-ready deployment?
The panel, titled “AI in Healthcare: From Hype to Hospital-Ready Solutions,” was moderated by Dr Avi Mehra, Clinical Safety Officer at IBM. Speakers included Andrew Schroen, Manager of Digital Health at Mediclinic; Dr Anushka Patchava, Chief Clinical & Innovation Officer at Cignpost and Global Expert Advisor to the UN; and Dr Omar Najim, Life Sciences Lead at Hub71.
The discussion centred not on whether AI will transform healthcare, but on how it can be implemented responsibly, sustainably and at scale.
From artificial to augmented intelligence
Opening the session, Dr Mehra highlighted the mounting pressures facing global health systems: workforce shortages, rising demand, widening access gaps and financial strain. Against this backdrop, AI is frequently positioned as a solution. However, he cautioned that healthcare does not operate in a “sandbox” environment, where experimentation can occur without consequence. Safety, regulation and clinical trust remain paramount.
Dr Patchava reframed the AI debate by arguing that in healthcare, AI should be understood not as artificial intelligence, but as augmented or assisted intelligence.
“We must move beyond the idea of ‘do no harm’ as meaning zero risk,” she said, noting that human clinicians themselves are not infallible. Instead, policy frameworks should recognise AI as a tool operating alongside clinicians, with appropriate guardrails and oversight mechanisms in place.
She emphasised that regulation must evolve at pace with technological development, rather than inadvertently stifling innovation by holding AI to unrealistic standards detached from real-world clinical practice.
Hospital adoption: Governance before algorithms
From the provider perspective, Schroen outlined Mediclinic’s structured approach to AI adoption. Before evaluating solutions, organisations must assess their digital maturity, internal expertise and governance frameworks.
“You need the right people at the table,” he said, pointing to the importance of cross-functional committees spanning IT, clinical leadership, operations and finance. Without strong governance and clear business cases, hospitals risk becoming trapped in cycles of pilots that never scale.
Schroen stressed the importance of being “brutally honest” about data readiness and infrastructure. Many AI initiatives falter because underlying data systems are fragmented or poorly structured.
To avoid “pilot fatigue”, Mediclinic evaluates scalability from the outset — including vendor roadmaps, technical compatibility with cloud infrastructure, and long-term commercial viability.
Interestingly, Schroen suggested that non-clinical AI use cases can serve as gateways to broader adoption. For example, voice AI systems supporting appointment booking can improve patient experience and clinician scheduling efficiency. Demonstrating value in lower-risk operational areas can help build trust before expanding into higher-risk clinical decision support applications.
Equity and global access
Dr Najim widened the lens to consider AI’s potential impact on global health equity. He described AI as a potential “democratising force” capable of expanding access to expertise in underserved regions.
Tools that enable remote interpretation of imaging, digital triage or telemedicine consultations could help bridge gaps in specialist access, particularly in lower-resource settings. However, he acknowledged concerns that AI could also exacerbate inequalities if access to infrastructure, connectivity and digital literacy remains uneven.
Regulation, he argued, should not become an obstacle to innovation. Instead of rigid frameworks that risk becoming obsolete as technology evolves, regulators should focus on principles that protect patients while enabling experimentation and adaptation.
Starting with the problem, not the technology
A recurring theme throughout the panel was the importance of defining the problem before deploying AI.
Dr Patchava cautioned against adopting technology for its own sake. “Start with why,” she said, urging organisations to benchmark existing workflows and identify specific bottlenecks before introducing AI solutions.
Without clear baseline metrics — such as reporting turnaround times or workforce gaps — it becomes difficult to demonstrate return on investment or clinical impact. Poorly defined pilots, she noted, often collapse when commercial models are introduced because measurable outcomes were never established.
Integration into clinical workflows was also identified as critical. AI tools that add complexity or administrative burden are unlikely to gain clinician adoption, particularly in already overstretched systems.
Regulations and realistic standards
The panel also explored whether AI is being held to an impossibly high standard. Although today's healthcare systems are themselves imperfect and subject to human error, AI failures often attract disproportionate scrutiny.
Speakers agreed that while safety must remain non-negotiable, evaluation frameworks should be proportionate and grounded in comparative performance — assessing whether AI improves outcomes relative to current standards of care, rather than expecting flawless performance.
From hype to practical implementation
The overarching message from the session was pragmatic: AI’s impact in healthcare will depend less on technological sophistication and more on governance, culture, data quality and workflow integration.
As health systems confront mounting structural pressures, the promise of AI is significant. Yet panellists made clear that sustainable transformation will require disciplined implementation, regulatory evolution and sustained clinician engagement.
In short, moving from hype to hospital-ready solutions demands not just smarter algorithms, but smarter systems.