| 1 minute read

Who's to Blame? The Ethical Quandary of AI-Enabled Medical Errors

Significant concerns have been raised regarding product liability as AI-enabled medical devices become increasingly complex and autonomous. While there are many areas of AI product liability that warrant discussion, let’s focus on three for this post.

Shifting Liability: One of the most pressing challenges in AI product liability is the potential shift in liability from the human decision-maker to the AI-enabled medical device. When an AI device makes treatment recommendations or monitors vital signs during a procedure, who is ultimately responsible for an adverse outcome? If the device provides faulty or misleading information, should the healthcare provider who relied on it be held liable, or the manufacturer of the device? Courts and regulators are still grappling with this question.

Failure to Warn: Another significant challenge is the potential for failure-to-warn claims. Plaintiffs' attorneys may be more likely to bring failure-to-warn claims against manufacturers of AI-enabled medical devices than design- or manufacturing-defect claims, because defects in design or manufacturing can be difficult to identify in complex AI systems. Knowing this, manufacturers of AI-enabled medical devices will want to provide adequate warnings and instructions to healthcare providers and patients regarding the limitations and potential risks of these devices.

Lack of Transparency: A third challenge in AI product liability is the lack of transparency in how AI systems and algorithms reach their decisions. It can be difficult to interpret the reasoning behind an AI system's recommendations, especially when multiple AI systems are working together. This opacity can make it challenging to determine whether an error originated with the AI system or with a human.

One key theme that emerges from these challenges is the importance of explainability and accountability. Companies must be able to document how the AI-enabled medical device was developed and what guidance and data sets informed the AI's decision-making. While this documentation won't prevent product liability suits, it could well strengthen a company's defense of one.
