WORLD

Stronger safeguards needed as AI healthcare grows, WHO Europe warns

By AFP/RSS
COPENHAGEN, Nov 19: The growing use of artificial intelligence in healthcare necessitates stronger legal and ethical safeguards to protect patients and healthcare workers, the World Health Organization's Europe branch said in a report published Wednesday.

 

That is the conclusion of a report on AI adoption and regulation in healthcare systems in Europe, based on responses from 50 of the 53 member states in the WHO's European region, which includes Central Asia.

 

Only four countries, or eight percent, have adopted a dedicated national AI health strategy, and seven others are in the process of doing so, the report said.

 

"We stand at a fork in the road," Natasha Azzopardi-Muscat, the WHO Europe's director of health systems, said in a statement.

 

"Either AI will be used to improve people's health and well-being, reduce the burden on our exhausted health workers and bring down healthcare costs, or it could undermine patient safety, compromise privacy and entrench inequalities in care," she said.

 

Almost two-thirds of countries in the region are already using AI-assisted diagnostics, especially in imaging and detection, while half of countries have introduced AI chatbots for patient engagement and support.

 

The WHO urged its member states to address "potential risks" associated with AI, including "biased or low-quality outputs, automation bias, erosion of clinician skills, reduced clinician-patient interaction and inequitable outcomes for marginalised populations".

 

Regulation is struggling to keep pace with the technology, WHO Europe said, noting that 86 percent of member states cited legal uncertainty as the primary barrier to AI adoption.

 

"Without clear legal standards, clinicians may be reluctant to rely on AI tools and patients may have no clear path for recourse if something goes wrong," said David Novillo Ortiz, the WHO's regional advisor on data, artificial intelligence and digital health.

 

WHO Europe said countries should clarify accountability, establish redress mechanisms for harm, and ensure that AI systems "are tested for safety, fairness and real-world effectiveness before they reach patients".


