
AI and Machine Learning: Have You Created Your Policies Yet?

Pharmaceutical companies and healthcare providers are increasingly incorporating AI and machine learning (ML) into their research and patient care processes. As these technologies continue to advance, regulators are starting to pay attention to ensure their ethical and safe use.

In fact, the European Medicines Agency (EMA) recently published a draft reflection paper setting out its position on ethical and safety standards and regulations related to AI and ML across all lifecycle stages of human and veterinary medicines. Additionally, one key recommendation from the American Medical Association (AMA) is the need for human oversight of AI and ML processes and products. These opinions carry weight, as the AMA and the EMA will play significant roles in shaping AI and ML regulations in the US and the EU, respectively.

However, pharmaceutical companies and healthcare providers should not wait for regulations to be enacted. Instead, they should develop their own stance on AI and ML, specifically regarding ethics, safety, patient care, data privacy, and human oversight, and then incorporate that stance into their operational policies. My fellow partner, Mitzi Hill, recently published the recommended steps below for creating AI policies, which are a great starting point for discussing AI and ML within your organization:

  • Update vendor contracts to address IP issues and to ensure that you know what tools underlie the platforms and technologies that you are licensing for use by your company
  • Update customer contracts to let customers know of any AI output that will or might be present in the deliverables or other goods or services you make available to them
  • Audit AI tools already in use in your company
  • Develop a list of prohibited AI tools and permitted AI tools
  • Develop a list of any prohibited uses of AI
  • Develop procedures to identify any AI-assisted code your employees generate, and rules for logging its use
  • Appoint a person or group empowered to review requests to add AI tools to employee workflow, and make sure employees understand the approvals process
  • Update privacy policies, acceptable use policies, terms of use, and other website agreements (consumer-facing) to make clear whether your platform/website uses AI, how it is used, what user information will be collected and processed, and what rights you do or don’t intend to enforce
  • Draft an internal Use of AI policy and update any internal policies that may be affected by use of AI. Remember to notify employees of the rules you have set regarding acceptable tools, uses of AI, and internal procedures for evaluating new tools/uses
  • Train employees to understand what AI can and cannot do for them
  • Encourage transparency at every level about use of AI


The reflection paper highlights that a human-centric approach should guide all development and deployment of AI and ML. The use of AI in the medicinal product lifecycle should always comply with existing legal requirements, consider ethics, and ensure due respect for fundamental rights.

