Paris, France – As artificial intelligence rapidly integrates into healthcare systems around the world, a growing chorus of voices is calling for a fundamental ingredient too often overlooked in the rush toward innovation: trust.
At a recent international panel on AI in healthcare, experts from law, ethics, medicine, and data science agreed on one thing: the promise of AI will only be realized if patients and clinicians alike can trust how it’s developed, deployed, and explained.
“Everyone wants transparency,” said Artur Olesch, a health journalist and founder of Health Algorithms, opening the discussion. “But not all transparency is useful—or even possible.”
Transparency: Between Ethics and Implementation
That paradox sits at the heart of the debate. Noemi Conditi, a lawyer and research fellow at the University of Bologna, argued that while transparency is critical for building trust, it must be thoughtfully balanced with other priorities.
“How much do we want to know about how AI systems work?” she asked. “More transparency can mean less intellectual property protection. And on the patient side, too much detail can overwhelm or confuse. But without any transparency, how can we trust the system at all?”
Dr. Jessica Morley of Yale University’s Digital Ethics Center added a cautionary note about superficial efforts. “Transparency has become a tick-box exercise. Just putting a privacy statement on a website doesn’t mean you’re actually being transparent. It has to be meaningful. You need to ask: What is the purpose of this transparency? Is it helping someone understand, evaluate, or challenge the system?”
The Need for Auditability, Not Just Explainability
Rather than demanding a line-by-line explanation of how an algorithm works—a technical impossibility in many deep-learning systems—Morley emphasized the need for “auditability”: clear documentation of who built the model, with what data, how it was trained, and how it’s monitored in real time.
That sentiment was echoed by Professor Eric Wong, Group Chief Digital Health Officer at Singapore’s National Healthcare Group. “Expecting full transparency is like asking for a randomized controlled trial on parachutes,” he said. “Some things just work. But that doesn’t mean we skip safeguards.”
Wong described Singapore’s approach: always deploying AI with human oversight, and ensuring clinicians remain in the loop. “Studies show that algorithms sometimes outperform doctors—and vice versa. But together, they do better than either alone.”
Bias: The Unseen Clinical Risk
Panelists also tackled the critical issue of bias—how it creeps into algorithms, and how much of it healthcare systems can tolerate.
“You’ll never have a completely unbiased model,” Morley said. “Even our drugs aren’t unbiased. What we need is to know where the bias is, and how it might affect patients.”
She distinguished between “positive bias”—such as training models to recognize biological sex differences—and “negative bias,” which can lead to discrimination or poor outcomes for certain populations. “Bias can enter at any stage,” she noted, “from flawed data to real-world deployment where hospital workflows differ.”
For Steffen Hess, head of the Health Data Lab at Germany’s Federal Institute for Drugs and Medical Devices (BfArM), transparency about training data is key. “If an algorithm is trained on biased data, it will make biased decisions. We must make that process visible, not just the algorithm itself.”
Data Access, Privacy, and Regulation: The Balancing Act
Access to high-quality, representative data remains one of the field’s most persistent challenges. Germany’s healthcare system, for instance, holds data from 74 million citizens. But as Hess explained, that data can only be used under strict legal and ethical guardrails.
“The question is: When is it necessary to share big datasets to train unbiased models? And how can we do it responsibly?” he said. “Secure processing environments and clear regulatory frameworks are essential.”
Singapore offers one model. During the COVID-19 pandemic, the country leveraged AI to scan millions of chest X-rays, improving turnaround times and enabling rapid triage. Crucially, Wong said, this was done with built-in guardrails—secure systems, clinician oversight, and patient consent.
“People often see regulation as a barrier,” Wong said. “But good rules can create space for innovation.”
Building Trust from the Ground Up
So how can trust be earned—not just between systems and regulators, but between AI and the clinicians expected to use it?
“It’s a communications challenge,” said Conditi. “AI needs to be seen as a companion, not a competitor. Clinicians need to feel they can control it, question it, and understand it.”
That starts with training, said Wong. In Singapore, AI is now part of the medical school curriculum. “We’re preparing our doctors and nurses from day one. Because by the time a regulation is written, the tech has already moved on.”
Morley put it even more simply: “Build tools clinicians actually want to use. Don’t throw AI at them just because you can.”
And as Hess noted, trust is a two-way street. “Doctors are skeptical when AI doesn’t work. But the only way it can work is if doctors input better data. Trust must grow on both sides.”
The Future: Not Just Smart Machines, but Smart Deployment
What emerged from the panel wasn’t a blueprint, but a mindset: build with humility, regulate with purpose, and never forget the human beings at the center—both the doctors using the tools, and the patients whose lives may depend on them.