Europe is rolling out AI in health care without sufficient patient protections, WHO warns

By Gabriela Galvin

Just four out of 50 surveyed countries have national strategies for AI in health care.

Artificial intelligence (AI) is transforming Europe's health sector, promising better patient care and reduced workforce pressure – but it's being deployed without adequate safeguards for patients or health workers, according to a new report from the World Health Organization (WHO).

The analysis found that AI adoption, regulation, and funding in health care vary considerably across 50 countries spanning Europe and Central Asia.

Half of the surveyed countries have introduced AI chatbots for patients, while 32 are using AI-powered diagnostic tools, particularly for imaging and detection. Countries are also exploring AI for patient screening, pathology, mental health, data analytics, administrative tasks, and workforce planning.

Spain, for example, is testing AI tools to boost early disease detection, Finland is using AI to train health workers, and Estonia is applying it to data analysis, the report found.

But while 26 countries have settled on their priorities for AI in health care, just 14 have allocated funding for those efforts. Only four countries – Andorra, Finland, Slovakia, and Sweden – have a national strategy specifically for AI in health.

Dr Hans Kluge, head of the WHO's Europe office, warned that without "clear strategies, data privacy, legal guardrails and investment in AI literacy," these tools could worsen Europe's health disparities.

"AI is on the verge of revolutionising health care, but its promise will only be realised if people and patients remain at the centre of every decision," Kluge said in a statement.

One major challenge: AI tools are trained on massive datasets that can be faulty, biased, or incomplete, meaning the tools' outputs can reflect those flaws and lead to medical mistakes, such as missed diagnoses or incorrect treatments.

The WHO report said countries should clarify who is responsible for such AI-driven errors.

The report recommends that Europe ensure its AI strategies align with public health goals, train health workers to use AI effectively, strengthen laws and ethical guidelines around AI, and maintain transparency with the public about how AI is used in health care.

The lack of clear standards may already be making health workers reluctant to use AI tools, said Dr David Novillo Ortiz, who works on AI, data, and digital health initiatives at the WHO's Europe office.

European countries should "ensure that AI systems are tested for safety, fairness and real-world effectiveness before they reach patients," he added.
