
Self-Regulation Of Artificial Intelligence In Biopharma: Redefining Business Ethics

  • Writer: Dinesh Agaram
  • Mar 30, 2021
  • 7 min read

Updated: Apr 8, 2021



Self-regulation to eliminate bias in AI and ensure its ethical application has been at the heart of much recent public discourse and social media discussion, following the publication of Karen Hao’s article on Facebook’s addiction to misinformation and the heavy-handed and unfair treatment of Dr. Timnit Gebru by Google.


While some exposure to the harmful effects of AI via Facebook and other social media is voluntary and avoidable (I have deleted my Facebook account, for example, and gained much peace and time as a result), in the realm of Medicine, experiencing the effects or after-effects of AI looks unavoidable, as long as regulatory sophistication and coverage evolve more slowly than the industry’s penchant for applying the technology to its innovation and delivery methods.


People fall ill; people therefore need treatment, and they have no choice but to undergo one of the treatment options physicians recommend to them. Patients may not be privy to the use of AI in the invention of a drug prescribed to them. Not just patients: the clinicians themselves may not be aware of it.


That said, drug discovery is a sophisticated scientific endeavour, and its robustness and effectiveness have only grown over the years. Rightly, patients (apart from contributing their disease condition data and treatment feedback) have hardly had any direct say in the formulation of drugs and their prescription specifics in the long and glorious history of allopathic medicine. That decision-making is better left to the experts. Mandatory, intensive, phased clinical trialling has ensured, and should continue to ensure, that by the time a clinician confidently prescribes a drug, all safeguards and precautions have been taken to avert severe adverse effects on the patient and to deliver a reasonable level of treatment efficacy.


Why then is the use of AI in drug development and patient care a cause of concern for the patient, clinician, and for public health at large? What could make AI unsafe, despite the stringent regulatory criteria to be met for obtaining permissions to market new drugs and treatments?


THE NATURE OF BIAS IN LIFE SCIENCES DATA & METHOD

Almost all bias in AI algorithms arises for two reasons: first, bias within the data itself, which algorithms simply inherit into their decision logic; and second, insufficient scrutiny and detection of bias and of false positive (and negative) correlations during the initial stages of model-building (feature selection and extraction). The bias then propagates and is accentuated until the model is deployed in production.
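The second failure mode can be made concrete with a toy sketch. Below, a hypothetical "assay batch" feature leaks the label because of how the data was collected; a naive association-based feature selection then prefers it over the weaker true biological signal. All names and numbers here are illustrative assumptions, not drawn from any real dataset or company pipeline.

```python
import random

random.seed(0)

def make_sample(n, leak_strength=0.9):
    """Toy compound records: (true_signal, assay_batch, active).

    'true_signal' weakly drives activity; 'assay_batch' spuriously
    tracks the label because active compounds were mostly measured
    in one batch (biased data collection, not biology)."""
    rows = []
    for _ in range(n):
        active = random.random() < 0.5
        true_signal = 1 if (active and random.random() < 0.7) else 0
        assay_batch = 1 if (active and random.random() < leak_strength) else 0
        rows.append((true_signal, assay_batch, int(active)))
    return rows

def agreement_with_label(rows, idx):
    # Crude association score: fraction of rows where feature == label.
    return sum(1 for r in rows if r[idx] == r[2]) / len(rows)

train = make_sample(1000)
scores = {"true_signal": agreement_with_label(train, 0),
          "assay_batch": agreement_with_label(train, 1)}
chosen = max(scores, key=scores.get)
print(chosen)  # the leaky proxy outscores the real signal
```

The point is not the arithmetic but the mechanism: without a deliberate check for how a feature came to correlate with the outcome, the selection step faithfully amplifies the collection bias.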


In drug target selection, where proteins involved in disease through abnormal function or levels of expression are identified and validated, a discovery-based approach through exploration of available data may lead to a promising new target. The explored space could combine a variety of datasets including gene sequence data, protein-protein interaction data and gene expression data.


Although several relevant data sources could be used for selection and validation, understanding of and insight into the biology of disease are still incomplete, which limits the robustness of decisions. Additionally, the labelling of biological data generated through experiments is inadequate and inaccurate, because experiments cannot be set up to perfectly mimic in vivo conditions.


Drug companies therefore rely on proxy measures of in vivo efficacy and safety at most stages of drug discovery for their decision-making. Computational models built on such proxy measures and other datasets can only be expected to be prone to bias, with negligible ability to pinpoint or explain that bias, leading in turn to a lower hit rate of drugs performing well at the clinical trial stage.


Worse, bias in the AI algorithm could lead to the wrong target being selected. While in most cases this will simply produce unsuccessful clinical trials, by the time those trials complete, the company may already have lost a few billion dollars in R&D costs pursuing a red herring.


In some cases, this bias could slip through clinical trials due to favourable subject selection, which means a drug for a less-than-optimal target, tested on a very partial selection of human subjects, could reach the market and be prescribed or administered to a much larger group of people whose diversity of genetic profiles and disease conditions was never taken into consideration in the development of the drug itself. That is a recipe for poor performance, which will unfold over a period after the drug is on the market.


The threat of such adverse reactions arising from bias is, of course, much greater in more complex diseases, in real-time, life-critical applications of therapy, and in drugs intended to tackle diseases prevalent globally.


In complex diseases, seeking a single target whose malfunction or over-expression is responsible for the anomalous condition is not useful. With multiple unknown targets involved, more than one bias could pass uncaught when a drug simplistically chases one target.


In the case of emergency medicine, a drug needs both to produce its therapeutic effect and to be safe, with a great degree of accuracy in both good action (efficacy) and absence of bad action (safety: not affecting anything other than the target). Without that accuracy, a patient could die or be left disabled for life, an irreversible harm.


With respect to drugs targeted at the global population with the same generic disease condition, a lack of biological variation across races and ethnic groups in the data used for target and ligand validation modelling could make clinical trial results inapplicable to much of the global population. The Oxford-AstraZeneca Covid vaccine, for example, was trialled in the UK and Brazil. Will it have exactly the same efficacy and safety profile in, say, Asian populations? That is highly unlikely if data capturing the different strains of coronavirus found in people in those countries, and the variations in the genetic makeup of those populations, were not factored into the machine learning models used during target identification and validation.
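The generalisation problem above is, in machine learning terms, distribution shift. A minimal sketch, assuming a made-up biomarker whose baseline level differs between two populations: a decision threshold tuned on the trial population loses accuracy when applied unchanged to a population whose biomarker distribution is shifted. The distributions, threshold, and shift size are all illustrative assumptions.

```python
import random

random.seed(1)

def accuracy(pop_mean, threshold, n=5000):
    """Accuracy of a fixed-threshold classifier on a population whose
    healthy biomarker levels centre on pop_mean and whose diseased
    levels centre one unit higher (toy model, Gaussian noise)."""
    correct = 0
    for _ in range(n):
        diseased = random.random() < 0.5
        level = random.gauss(pop_mean + (1.0 if diseased else 0.0), 1.0)
        predicted = level > threshold
        correct += predicted == diseased
    return correct / n

# Threshold of 0.5 is tuned to the trial population (baseline mean 0).
trial_acc = accuracy(pop_mean=0.0, threshold=0.5)
# The same frozen threshold applied to a population with a shifted baseline.
shifted_acc = accuracy(pop_mean=1.5, threshold=0.5)
print(round(trial_acc, 2), round(shifted_acc, 2))
```

Nothing about the model "breaks" visibly; it simply starts misclassifying the healthy majority of the new population, which is exactly the kind of silent degradation that only diverse validation data can expose before deployment.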

In short, AI incorporated into the current method could, in some cases, increase the unpredictability of the output as much as it could improve its precision.


SELF-REGULATION – SIMPLY A MATTER OF BEHAVING WITH COMPLETE RESPONSIBILITY

Regardless of the scientific limitations cited above, self-regulation in medicine, with or without AI, and regardless of the organisation’s role in it, needs to start with the realisation that for far too long, governments, law and regulation have been burdened with a disproportionate share of the responsibility for ensuring that safe and effective drugs reach the market. By allowing themselves to rely heavily on proxy indicators and correlations during drug discovery, companies have not exhibited a similar seriousness in their responsibility for getting the science of disease prevention and cure fully right. It could be argued that this approach has incentivised the production of biased data to substantiate poor early decisions. Nor has it been cost-effective, as the failure rates of clinical trials attest. Nevertheless, this has come to be the way the industry conducts its business.


Another realisation is key from an ethical standpoint: the powerlessness of the patient (touched upon earlier in this article). While the prevalent notion of patient-centricity focuses on the importance of a “deep understanding of their medical conditions, experiences, needs and priorities” and of “putting the patient first,” those aspects alone will not provide sufficient conviction in the commitment to “doing no wrong” while applying AI to drug discovery and treatment pathways. A clearly worded pledge (published on the company website for transparency) affirming a commitment to being responsible with AI, along with a description of how the company intends to discharge that responsibility to the patient, will go a long way towards AI ethics being taken seriously by employees in their work.


Thirdly, to generate the right data for input to AI models, and to improve the method of selecting targets and ligands, biopharma discovery processes need to stop relegating the true measurement of desired properties (of target and drug) to post-discovery stages (preclinical, clinical, real-world use). There needs to be a more stringent set of minimum validation criteria for efficacy, safety, ADME, PK, toxicity, and heterogeneity at every stage of drug discovery, from early disease characterisation to ligand optimisation and final in vitro and in silico testing. The motto needs to be: “do everything it takes to ensure false positives don’t slip through to the next stage of drug discovery and design.” Some companies have made high-level efforts in this direction (e.g., AstraZeneca with its 5Rs); nevertheless, much remains to be done in defining minimum data and “proof” requirements: accounting for diversity, coverage of human body systems (metabolic, neural, pathological), cross-validation with historical data from clinical trials and pharmacovigilance, and minimum statistical significance in experimental results before a target or a ligand moves to the next stage in the process. The minimum in all of these needs to derive from a maximum goal: maximum efficacy, maximum safety.
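The stage-gate idea reduces to a simple rule: a candidate advances only if it meets every minimum criterion, so a single unexamined weakness (a false positive in the making) cannot ride through on strong scores elsewhere. A minimal sketch follows; the criteria names and thresholds are hypothetical illustrations, not an industry or regulatory standard.

```python
# Illustrative minimum criteria for advancing a candidate one stage.
# Names and thresholds are assumptions for the sketch only.
MIN_CRITERIA = {
    "efficacy_score": 0.7,       # in vitro potency proxy
    "safety_margin": 10.0,       # fold separation from toxic dose
    "population_coverage": 0.8,  # fraction of genetic subgroups in the data
    "statistical_power": 0.8,    # power of the supporting experiments
}

def gate(candidate: dict) -> tuple[bool, list[str]]:
    """Return (advance?, failed criteria). Every criterion must pass;
    a missing measurement counts as a failure, never a free pass."""
    failures = [name for name, minimum in MIN_CRITERIA.items()
                if candidate.get(name, 0.0) < minimum]
    return (not failures, failures)

ok, failed = gate({"efficacy_score": 0.9, "safety_margin": 25.0,
                   "population_coverage": 0.6, "statistical_power": 0.85})
print(ok, failed)  # blocked on population coverage alone
```

The design choice worth noting is that the gate is conjunctive: excellent potency cannot compensate for thin population coverage, which is precisely the trade the current proxy-driven process allows.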


R&D in large biopharma consumes several billions of dollars per project and per year of operation. Spending more effort on understanding disease at a systemic level, rather than focusing purely on identifying singular proteins to target and spending the rest of the effort on justifying that selection, will give higher returns on investment in the long run.


The proof of owning full responsibility and putting the patient first manifests itself in the power vested in Quantitative Biology teams (present in almost every biopharma company) in decision-making across all stages of discovery and design. The QB team must also be empowered to implement and enforce the minimum data and proof criteria proposed above, to ensure that the right biology, and the right extent of biology, has been applied in drug inventions.


AI ETHICS IS THE SAME AS HUMAN ETHICS. WELL, ALMOST!

The Asilomar AI Principles cover safety, verifiability, explainability and human control in critical situations, amongst others. While these principles are entirely appropriate to take to regulators, to debate and convert into law, they apply just as much to self-regulation within biopharma companies.

The bio version of those principles can (and must) be held to a higher ethical standard: one that emphasises relevance over risk-transfer, continuous validation over deferment of the true study of properties, traceability of logic over disconnected discovery stages, and the higher moral responsibility of human actors over any other entity or method.


Biopharma’s conflict between shareholder and stakeholder (the end-customer: the patient) benefit is harder to deal with than in other industries, as it is literally about life and death. While life for the shareholder and death for the stakeholder is, figuratively speaking, not a viable business model (at least in this industry), an admirable model of business in the future will involve being a step ahead of regulation in redefining ethics.


Regulators, applying the same principles, can then focus on consolidating practices into a common standard and ensuring they are followed, acting on behalf of the public. In short, being responsible (ethical) with AI in drug discovery simply means continuously improving its relevance and the rigour of its testing and validation. And if such an implementation of ethics can be achieved in biopharma, other industries can follow the example with relative ease, their needs being far less tied to life-saving missions.



©2021 by Precision Health Future.
