
On Big Pharma's Commitment to Ethical AI

  • Writer: Dinesh Agaram
  • Apr 5, 2021
  • 9 min read

Updated: Apr 8, 2021



As we inch towards personalised medicine, making use of Artificial Intelligence and its power to shed better light on both the biology and chemistry of drug discovery, we must recognise that much damage could ensue if the technology is applied in an unregulated manner – that is, without proper controls and restraints. Following the article on self-regulation of AI, let us look at how the big biopharmaceutical companies are faring to date with respect to communicating their commitment to upholding ethical principles in their use of Artificial Intelligence. I believe that such communication, explicitly published on a company’s corporate website, would go a long way towards keeping its R&D leaders consciously following the relevant principles, as well as providing comfort and reassurance to the clinicians, patients and other stakeholders who use or prescribe its products.


Making an explicit commitment around AI ethics

Before AI became mainstream, the focus of data ethics was purely on privacy and security. Accordingly, almost every professionally run organisation put in place measures to safeguard those two principles, especially around customer data. Many also published a statement of their commitment and compliance in their main communication channels.


By publishing a company’s commitment to upholding ethics in AI explicitly, all interested stakeholder groups – including tech executives, bioethicists, lawyers, public activists, regulators, patients and clinicians – can be given assurance and can come to know the exact principles put in place to avert undesirable consequences, including life-threatening situations, inequity and injustice. The importance of making this commitment explicit in the main communication channel – the corporate website – cannot be emphasised enough.


A framework for Ethical AI in Biopharma

While no single AI ethics standard has yet been accepted by the industry as a benchmark to aspire to, the following parameters may sufficiently cover all aspects of ethical behaviour and the public expression of commitment. This is a condensed list drawn from the common and recurring themes of the Asilomar principles from the Future of Life Institute’s Beneficial AI initiative, the European Commission’s guidelines for trustworthy AI, and the FDA’s and EMA’s early views on good ethical practices for using AI in pharma.


Proposing the MERTA framework

The following list of condensed ethics themes for Artificial Intelligence captures the common elements of the key sources referenced above:

Mitigation of Bias: inclusion, eliminating discrimination, experimental rigour (sufficiency criteria for validation of results)

Explainability: traceability of decisions, audit of systems

Responsiveness: continuous improvement, regular engagement with stakeholders, acting promptly on adverse information

Transparency: voluntary public disclosure, truthfulness (especially revealing the truth upon harm)

Accountability: safety & security (covers privacy and risk to patient), policies (formalisation, responsible use), sustainability and human rights/values alignment



While Mitigation of Bias and Explainability are themes that have emerged largely in the context of AI, Responsiveness, Transparency and Accountability are generic themes of corporate management that take on a specific character when applied to AI. In the analysis below, I have looked for evidence of companies recognising those specific ethical needs particular to AI.


Companies chosen for the exercise

The ten biggest pharmaceutical companies by revenue in 2020 have been considered for the exercise: Johnson & Johnson, Pfizer, Novartis, Roche, Merck, GlaxoSmithKline, Sanofi, AbbVie, Takeda and Bristol Myers Squibb. AstraZeneca – ranked just below the top 10 – has been included as well, owing to the company’s prominence following its successful COVID-19 vaccine.


Disclaimer

The analysis below only takes stock of the public expression of commitment to the above principles, based on information on the companies’ websites. For an independent analyst, it is not easy to arrive at a definitive statement about the internal workings of a corporate entity – therefore, the objective of the exercise is not to prove the presence or absence of any commitment to ethics in a categorical sense.


Assessment rules

Presence or absence of clear communication of commitment to a principle or theme has been construed based on the following rules (a minimal coded sketch of this decision logic follows the list):

i. Commitment to a principle has been acknowledged when there is an explicit mention of it (in words amounting to the meaning conveyed by the principle) in corporate website content, as part of the company’s stated principles of AI use

ii. Commitment to a generic principle applicable to AI (e.g., upholding data privacy) is assumed if it is explicitly mentioned (in words amounting to the meaning conveyed by the principle) on the corporate website, even without specific reference to AI, provided there is no dedicated content page affirming commitment to the ethical use of AI

iii. If a content page on the corporate website is dedicated to upholding ethics in AI but a specific ethical principle is not called out explicitly within it, then that principle is deemed not to be consciously and officially pursued

iv. Articles mentioning ethical AI that are published by company personnel on websites not maintained by the company are not considered for assessment
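
For concreteness, here is how rules i–iii could be encoded; rule iv is applied upstream by excluding third-party sources from the evidence. This is a minimal sketch of the rubric described above – the function and its inputs are illustrative assumptions of mine, not part of any standard assessment tool.

```python
def deemed_committed(mentioned_as_ai_principle: bool,
                     has_dedicated_ai_ethics_page: bool,
                     mentioned_generically: bool) -> bool:
    """Apply assessment rules i-iii to one (company, principle) pair.
    Inputs describe evidence found on the company's own website only;
    per rule iv, third-party articles are excluded before this point."""
    if mentioned_as_ai_principle:
        return True   # rule i: explicit mention as a principle of AI use
    if has_dedicated_ai_ethics_page:
        return False  # rule iii: a dedicated AI ethics page omits it
    return mentioned_generically  # rule ii: a generic mention suffices
```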

Summary of findings

· All the companies assessed have published at least one communication on their websites citing an internal project as being driven by AI.

· Only two of the 11 companies assessed – Novartis and AstraZeneca – have taken the effort to create a page on their websites explicitly stating their intent to follow ethical principles in using AI. In these, they touch upon most of the principles covered in the MERTA framework (read further for exceptions to this).

· Other than Novartis and AstraZeneca, only Sanofi has stated that it is developing policies that support the fair use of AI.

· Merck and AbbVie have published notes saying they have discussed AI ethics and/or set up an AI ethics board, but have not published the actual principles they commit to upholding.

· Overall, efforts in the industry to affirm (or re-affirm) commitment to ethical AI are exceptionally low, implying greater risk to patients with the entry of AI (especially with regulation not yet one step ahead).


Let us look at the findings at a more detailed level, against each of the MERTA framework themes.


Mitigation of Bias

Novartis’s statement around this is strong and complete, and adequately covers the principles of inclusion and non-discrimination:

“We are committed to mitigating the risk of bias throughout the process, from data gathering, model creation and application of the model. To that end, we will strive to:

  • Design, develop, test, train and operate AI algorithms based on inclusive and representative data to eliminate possible biases and known discriminatory aspects such as race, gender, ethnicity, sexual orientation, political or religious beliefs;

  • Use data samples that are representative of the studied and analyzed population to eliminate or prevent unconscious bias;

  • Perform a risk impact assessment on the AI systems before their use in production to eliminate the risk of bias or discrimination;

  • Develop and use AI systems in ways that reflect the social and cultural diversity of Novartis;

  • In the short-term, assess, acquire or develop tools and establish techniques to assess statistical bias in data-sets from external sources – mitigating bias in all data sourced from outside of Novartis;

  • Ensure the responsible use of AI when applied to the real world, as outlined in our ‘Empower Humanity’ Principle.”


AstraZeneca’s statement is equally effective:

“We endeavour to use robust, inclusive datasets in our Data and AI systems.

  • We seek to ensure our use of AI is sensitive to social, socio-geographic and socio-economic issues, and protect against discrimination or negative bias to the best of our ability.

  • We will continually adapt and improve our AI systems and training methods to drive inclusiveness.

We treat people and communities fairly and equitably in the design, process, and outcome distribution of our AI systems.

  • We are aware of the limits of our AI systems. We strive to apply their outputs in the right context and in a non-discriminatory fashion.

  • We monitor our AI systems to maintain fairness throughout their lifecycle.

  • We acknowledge all data sources and human effort in our Data and AI Systems, while protecting our intellectual property.”

One way of eliminating bias and ensuring that AI models and algorithms work in an inclusive and non-discriminatory way is to ensure sufficient experimental rigour. The repeated discovery of safety and efficacy issues during clinical trials can point directly to a lack of such rigour; with AI coming into the picture, it is even more important to make a commitment to this principle. None of the top companies makes such a commitment explicitly in its pledges. A minimal sketch of one such rigour check follows.
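
As an illustration of the kind of experimental rigour meant here, the sketch below checks whether a model’s accuracy holds up across demographic subgroups – one basic statistical-bias test of the sort the Novartis pledge alludes to. The function, data and group labels are illustrative assumptions, not drawn from any company’s published tooling.

```python
import numpy as np

def subgroup_accuracy(y_true, y_pred, groups):
    """Compare a model's accuracy across demographic subgroups.
    A large gap between groups is a red flag for bias that the
    validation (sufficiency) criteria should catch before deployment."""
    return {g: float(np.mean(y_true[groups == g] == y_pred[groups == g]))
            for g in np.unique(groups)}

# Toy example: the model looks fine overall but fails one subgroup.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 1, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(subgroup_accuracy(y_true, y_pred, groups))
# {'A': 1.0, 'B': 0.25} -- a disparity that warrants mitigation
```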


Explainability

Again, Novartis’s commitment towards this uniquely AI-driven necessity is very clearly worded:

“Novartis strives to create transparency around the design and use of AI systems to explain how such systems work through:

  • Short term: Openly disclosing / informing end-users when they are interacting with an AI system

  • Mid-term: Enabling the auditability and traceability of the decision pathways taken by AI systems using IT tools and infrastructure,” and

“Ensuring the use of AI systems has a clear purpose that is accurate, truthful, not misleading, and appropriate for their intended context”


It is worth noting that Novartis has considered the practicality of implementing transparency from the beginning and has chosen to commit to traceability, auditability and explainability only in the mid-term (no definition of the timeframes is published). While this is honest, it also increases the risk of not being able to meet its commitment to “proactive monitoring” (see Responsiveness below) until such measures are put in place. An illustrative sketch of a decision audit trail follows.
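
For illustration, the sketch below shows one simple way the traceability Novartis commits to could eventually be implemented: an append-only log recording the input, model version and output of every AI decision. The wrapper and its fields are assumptions of mine, not Novartis’s actual infrastructure.

```python
import hashlib, json, time

def audited_predict(model, features, model_version, log_path):
    """Run a prediction and append an audit record, so the decision
    pathway can later be traced and audited. Assumes a scikit-learn
    style model exposing .predict(); purely illustrative."""
    prediction = model.predict([features])[0]
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        # fingerprint of the input: traceable without storing raw data
        "input_sha256": hashlib.sha256(
            json.dumps(features).encode()).hexdigest(),
        "prediction": str(prediction),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")  # append-only decision log
    return prediction
```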


AstraZeneca has made a succinct attempt at describing its approach to Explainability:

“We will ensure our assumptions are clear, we will ensure algorithms are appropriately documented, decisions are explainable as needed, and processes are in place to deal with unanticipated impacts.

  • We can demonstrate our data sources, and how models are trained and maintained.

  • We have the ability to explain processes, data and algorithms when required to do so while protecting our intellectual property.”

It is known that making AI systems explainable is a tough task. This is especially true of systems based on more complex models, as is typically the case with Artificial Neural Networks: the greater the complexity, the harder it is to build in explainability. Therefore, while it is relatively easy to explain processes, data and algorithms, it may not be as easy to explain a result or outcome produced by a complex machine-learning system. Companies will need to invest more effort to ensure this is reasonably achieved; a minimal sketch of one common post-hoc technique follows.
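
As a concrete illustration of the gap between explaining a process and explaining an outcome, the sketch below applies permutation importance – one common post-hoc technique, here via scikit-learn – to a black-box model. It yields global feature importances but still does not explain any individual prediction. The dataset and model are synthetic stand-ins, not anything from the companies discussed.

```python
# Measure how much the model's score drops when each feature is shuffled:
# a global, model-agnostic explanation of which inputs drive behaviour.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {importance:.3f}")
```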



Responsiveness

Novartis affirms its commitment to “Review, Learn and Adapt” through these actions:

“Proactively monitoring and mitigating potential negative AI consequences”

“If the AI systems are deployed in relation to products and manufacturing environments, we are committed to reporting adverse events within 24 hours of discovery to the Novartis Safety Department and quality complaints to Quality Assurance, and then transparently communicating the risks of our medicines and devices to regulatory authorities.”


Although there is no explicit mention of continuous improvement, Novartis’s wording covers the spirit of the principle, and it also calls out its efforts to continuously train its people in ethical AI in the short term.


AstraZeneca’s website states:

“We recognise and address unforeseen consequences resulting from our AI usage appropriately, and ensure that lessons are learned,” and

“(We will ensure…) processes are in place to deal with unanticipated impacts”

Regular stakeholder engagement – continuously receiving and incorporating feedback into processes, policies and implemented principles – is a necessary investment, since with AI “a small flame could spread into a forest fire in no time,” so to speak. None of the companies has made a public commitment to this.


Transparency

Novartis’s commitment to transparency in AI reads as:

“Openly disclosing / informing end-users when they are interacting with an AI system,” and

“Mid-term: Transparently communicating and explaining the limitations, purpose, decisions and capability of AI systems as new visualization models are developed”


AstraZeneca’s commitment to transparency is worded unambiguously as well:

“We can demonstrate our data sources, and how models are trained and maintained.

We have the ability to explain processes, data and algorithms when required to do so while protecting our intellectual property.

We are transparent about the use of AI to build trust and credibility in all our endeavours,” and

“We are open about the use, strengths and limitations of our data and AI systems.

  • We explain to people if they are interacting with an AI system and whether interactions are recorded.

  • We are able to explain when and how AI is used to aid a decision that impacts humans.

  • We will ensure appropriate levels of explainability and transparency in line with our legal obligations.”

It is worth noting that AstraZeneca’s voluntary-disclosure commitment covers only “interacting with an AI system”; the ability to explain “how AI is used to aid a decision” is framed as a capability to be exercised upon request.


Accountability

Novartis has made its commitment comprehensive:

“Ensuring the use of AI systems has a clear purpose that is accurate, truthful, not misleading, and appropriate for their intended context,”

“As an accountable organization, Novartis is committed to establishing robust governance over the design and use of AI. Such rigorous governance includes appropriate leadership and oversight, risk and impact assessments, appropriate policies and procedures, transparency, training and awareness, monitoring and verification, response and enforcement” (a fuller description of this commitment is also published), and

“Empowering, educating and training associates in the short-term to have the right ethical professional awareness (knowledge, experience and required skills) as they use or operate AI systems, to ensure that ethical commitments (as laid out in the Code of Ethics) are not compromised; moving in the mid-term to a system of certification.”


AstraZeneca describes its commitment to accountability at different levels – data, system, organisation:

“We take accountability of our use of Data and AI Systems throughout their life cycle, so their use is appropriate and monitored over time.

  • We anticipate and mitigate the impact of potential unfavourable consequences of AI through testing, governance, and procedures.

  • We are accountable for our findings and the recommendations from AI systems. We govern AI-supported decisions appropriately,”

“We apply AI to contribute to a sustainable workforce, business, and planet, to help make AstraZeneca a Great Place to Work, and accelerate our contribution to society,” and

“We employ human-led governance over our AI systems. We respect human dignity and autonomy and strive to reflect this in our AI systems.”


Roche has published that it is "committed to use artificial intelligence (AI) and real-world data (RWD) in a responsible and trustworthy way". It does not expand on what exactly it will do to achieve this, except in the context of data privacy.


Regarding data privacy, all the companies surveyed have strong data protection policies. It is assumed that these would extend to the Big Data and Artificial Intelligence applications they pursue.


A fuller analysis of coverage of different principles in the top 11 biopharma companies (by revenue, 2020) is published here.


Special mention must be made of what is missing, even from the companies that have made a good attempt at describing their intent to implement ethical AI: there is no recognition of the need for a high level of experimental rigour and for continuous engagement with key stakeholder groups as prerequisites to making all the other commitments feasible and fool-proof.


Comments are invited, especially from people working for the companies covered here.





