
Artificial Intelligence in Life Sciences: An Evolving Risk Landscape

This second article in our Artificial Intelligence in Life Sciences series considers some of the potential risks created by the use of AI in the life sciences sector, and their legal implications.

While artificial intelligence (AI) technologies bring tremendous benefits and opportunities to life sciences businesses, the threats associated with them are wide-ranging and complex. These threats can be exacerbated if companies move too quickly to adopt sophisticated AI systems without implementing appropriate safeguards. Key risks include the following:

Technology and performance failures

AI-based risks can be inherent in the technology being adopted or in the environment in which it is deployed. They may also arise during implementation or the AI procurement process. Products integrating AI technologies, such as robotics, may malfunction, giving rise to quality and performance issues, which could result in economic loss or even physical harm. Those malfunctions could have a number of causes, ranging from latent defects to incorrect use by the operator, such as failure to carry out maintenance or to implement software updates or patches.

Certain types of AI, such as unsupervised machine learning (ML) and deep neural networks, which are often utilised in genomic sequencing and precision medicine, can produce unintended consequences, such as inaccurate clinical outcomes, owing to faulty assumptions by developers or the use of poor-quality, disorganised data sets.

Falling prey to bias

As AI technologies are underpinned by algorithms developed by humans, the success of their output depends on their being fed complete and accurate data that is not subject to bias. However, human decisions are often subject to bias, in many cases unconscious. This is of particular significance in the life sciences sector, where personal data is collated and analysed for use in personalised drug discovery and development, and where outcomes are influenced by patient factors such as gender, age, and ethnicity.

There could be significant adverse implications for a manufacturer that develops a medicinal product using an algorithm trained on non-representative patient data, if the result is poor patient outcomes. For example, a system designed to diagnose cancerous skin lesions that is trained on data from one skin colour may not generate accurate results for patients with a different skin colour, increasing the risk to their health.[1] A simple illustration of this mechanism is sketched below.
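The following minimal sketch is entirely hypothetical: it uses synthetic data in place of real clinical images, and the subgroup sizes and feature shift are assumptions chosen purely for demonstration. It shows how a classifier trained on a non-representative dataset can perform markedly worse on an under-represented subgroup.

```python
# Hypothetical sketch: a subgroup performance gap caused by
# non-representative training data. Synthetic features stand in for,
# e.g., skin-lesion images; "shift" mimics subgroup-specific
# presentation of the same condition. All numbers are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_subgroup(n, shift):
    # The decision-relevant signal depends partly on feature 1,
    # weighted by the subgroup-specific `shift`.
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

# Training set: 95% subgroup A, 5% subgroup B (non-representative).
Xa, ya = make_subgroup(1900, shift=0.0)
Xb, yb = make_subgroup(100, shift=2.0)
model = LogisticRegression().fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb])
)

# Held-out evaluation per subgroup exposes the accuracy gap.
for name, shift in [("subgroup A", 0.0), ("subgroup B", 2.0)]:
    X_test, y_test = make_subgroup(1000, shift)
    print(name, "accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

In this sketch the accuracy gap arises purely from the composition of the training data; a training set representative of both subgroups would close it.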

Cyber security

AI systems are at risk of exploitation and misconfiguration owing to the high volumes of complex data they utilise. In particular, AI systems face a multitude of cyber security threats, such as system manipulation, data corruption and poisoning, transfer-learning attacks, and data privacy breaches. This risk is particularly prevalent in medical devices, where AI-integrated software components can be vulnerable to security breaches and cyberattacks, potentially impacting the efficacy and safety of a device. To secure their AI applications, organisations must adopt security solutions that provide strongly isolated, confidential computing environments.

Intellectual property (IP) risks

The current intellectual property (IP) system was designed to incentivise human creation and innovation. As AI operates more autonomously, this raises fundamental questions for the IP system across all IP rights. For example, when AI inevitably develops new IP, how will a company detect it and establish ownership? The rapid evolution of AI-related technologies and the move towards more autonomous systems highlight the need to consider new ways of creating and enforcing IP safeguards. Recognising the importance of the interaction between AI and IP, the UK government has recently consulted on how AI should be dealt with in the patent and copyright system.

As AI applications, such as ML, can evolve by learning from experience and improving performance over time, questions arise concerning liability for patent infringement where no infringement existed before the ML evolved. In addition to the difficulty of identifying the potentially liable entity, it may be hard to prove that infringement occurred at all. For example, the infringement may only have occurred for a short period, owing to the evolutionary nature of AI. And because AI processing often takes place in the cloud, it may be difficult, or even impossible, for the patentee to collect evidence of infringement. Fear of liability for copyright infringement might also deter AI researchers from releasing the data on which the AI was trained, reducing AI explicability and transparency.

Regulatory compliance risks

AI technologies are moving fast, and regulators are seeking to keep pace. Because AI and its various applications involve new and rapidly changing computing technologies and vast amounts of training data, they pose significant new regulatory challenges. Regulators are working to create adaptive, real-time regulations:

  • In April 2021, the European Commission published a “landmark proposal” for a regulation laying down harmonised rules on AI, also referred to as the ‘Artificial Intelligence Act’. The proposal will have a major impact on the ongoing use and development of AI in the life sciences sector, particularly in the med-tech industry, where medical devices and in vitro diagnostic medical devices are expressly covered. The new rules have very broad application: as well as providers and users of AI systems based or established in the EU, they cover providers in third countries placing AI systems on the market or into service in the EU, and providers and users in third countries whose AI systems produce outputs that are used in the EU.
  • In October 2021, the US Food and Drug Administration, Health Canada, and the UK’s Medicines and Healthcare products Regulatory Agency jointly published 10 guiding principles to inform the development of Good Machine Learning Practice for medical devices that use AI and ML.

Life sciences businesses operating across multiple territories must carefully review the relevant regulations and guiding principles and ensure compliance with them.

Liability risks

In the coming years, with an increasingly complex risk landscape and growing potential for damage, we expect to see a rising number of disputes arising from the use of AI technologies. The number and interdependency of AI system components, such as hardware, software, and data, could make it difficult to identify the source of a problem and the responsible party. This will be compounded where the development and operation of the AI system involve a number of participants, including system operators, developers, data suppliers, and platform providers. In the life sciences sector, product liability claims may arise where a patient has suffered injury as a result of an AI-powered device, while the use of voluminous, sensitive medical data could also give rise to privacy litigation. AI investors, developers, system operators, data suppliers, and platform providers must all consider whether, and on what basis, they may be held liable.

In the next article in our Artificial Intelligence in Life Sciences – Revolution, Risk, and Regulation series, we will delve deeper into the legislative frameworks governing AI and the liability risks.


[1] World Health Organization, Ethics and Governance of Artificial Intelligence for Health: WHO Guidance (2021), available at https://www.who.int/publications-detail-redirect/9789240029200

Meet the authors


Jenny Yu

Marsh UK Industries - Chemicals & Life Sciences Industries Leader

  • United Kingdom


Paula Margolis

Corporate Affairs Lawyer, Kennedys

  • United Kingdom


Samantha Silver

Partner, Kennedys

  • United Kingdom