AI in life sciences: future-proofing strategies

In the final piece of the 'AI in life sciences' series, Jenny Yu, Marsh’s Life Sciences industry practice leader, and experts from law firm Kennedys, Paula Margolis (Corporate Affairs Lawyer) and Samantha Silver (Partner), emphasize essential steps for life sciences companies to develop and implement a successful AI business strategy. They delve into effective management of outcomes from AI technologies and strategies to mitigate potential risks.

As artificial intelligence (AI) continues to gain considerable momentum across the life sciences sector, companies must take steps to understand how AI technologies operate and the risks they present, assess the extent to which they can transform and add value to their business, and implement a framework for their effective incorporation into the organisation.

In the final article in our series, ‘Artificial intelligence in life sciences’, we highlight some of the key steps that life sciences companies should consider adopting in order to build and deploy a successful AI business strategy, effectively manage outcomes generated by AI technologies, and safeguard against the potential risks arising.

Key steps:

  1. Implementation of a robust and effective data strategy: As data is the fuel that powers AI processes and decision making, the quality, volume, and integrity of that data are fundamental to achieving unbiased and reliable outcomes. In the life sciences sector, bias in product design, testing, and clinical trials may result in some healthcare products being less effective for certain patient groups. A robust and effective data strategy is therefore critical to ensure that complete and accurate data sets are collated and maintained.
     
  2. Re-evaluation of privacy and cybersecurity risks: With the EU and UK focusing hard on evaluating and updating existing product safety laws and regulations to encompass a legislative framework for AI, life sciences companies should take steps to evaluate the safety of their AI-powered products, particularly from a cyber and privacy perspective. This would include assessing the safety of products both in isolation and when connected to other products, ensuring all parties in the supply chain are aware of, and trained on, the obligations that will be imposed on them, and adopting current European and/or national standards for assessing the cybersecurity of products.
     
  3. Development of a modern governance and risk management framework: In view of the risk profile, the existing regulatory and legal framework, and the speed and depth at which AI-driven technologies are being utilised, life sciences organisations must adapt their existing governance and risk management frameworks to harness the power of AI. Historically, and notwithstanding the demands of evolving regulatory change, organisations have typically depended upon relatively static risk management frameworks and systems, which relied upon key individuals within the organisation updating risk registers according to their responsibilities.

    The use of AI technologies provides an opportunity for a step change in risk management through connectivity between AI and key risk indicator information, such as complaints and adverse events data. Collaborative discussions and appropriate planning between risk management and information technology specialists, engineers, and other key stakeholders are crucial to reducing the risks these technologies introduce.
     
  4. Management of employee skill sets and adapting the company workforce: Employees are a key consideration in making AI an integral part of business operations. Life sciences businesses will need to invest in robust learning and development programmes so that existing and future employees can acquire the skill sets needed to develop and integrate AI-linked solutions. Businesses may also consider creating new roles to manage the risks arising from the increasing adoption of AI technologies within business operations. For example, with the life sciences sector becoming increasingly vulnerable to privacy and cybersecurity related risks, companies may wish to deploy personnel with expertise in these areas in order to safeguard their products.

Conclusion

Companies play a crucial role in implementing appropriate safeguards to prevent and respond to the potentially negative consequences of AI technologies. While the role of risk professionals within the life sciences industry will undoubtedly change, combining AI-enabled risk management with subject matter direction and oversight will create a future in which questions such as “how much risk should I take?” can be answered and updated in real time.

With changes to existing regulatory and liability regimes on the horizon, life sciences companies, and manufacturers in particular, should ensure that products incorporating AI-based technologies have undergone rigorous testing and safety checks and comply with existing laws and regulations before release to market.

This article was originally posted by Marsh.
