The World Health Organization (WHO) recently issued a report setting out its views on the ethics and governance of artificial intelligence (AI) in healthcare.
In its report, the WHO called for increased collaboration between governments and emphasized the need for designated agencies to oversee AI in healthcare. Recognizing the risks associated with AI in clinical care and diagnosis, the organization stressed the importance of addressing these risks through robust regulatory measures.
One of the key topics addressed in the report was the accountability of healthcare providers using AI platforms. The WHO stated that when an AI platform deviates substantially from its original model in ways beyond the developer's control, the provider deploying it should be held accountable. The report also emphasized providers' responsibility to engage with the public as they implement AI platforms and develop policies governing their use.
Another concern raised by the WHO is the potential for increased self-diagnosis and erosion of the doctor-patient relationship when AI platforms are used in "wellness applications." These applications are often subject to less stringent regulation than traditional healthcare services, raising concerns about the accuracy and reliability of the self-diagnoses they may encourage.
The full report can be accessed here.
Subscribe to Taylor English Insights by topic here.