
Mitigating Bias in AI-Enabled Medical Devices

The implementation of artificial intelligence (AI) has modified elements of nearly every industry, making processes faster, cheaper, and more accurate. The medical device industry is no exception, as AI usage in this field has greatly increased in recent years. As the industry continues to expand and deliver better medical services, it’s important for those producing and investing in AI-enabled medical devices to understand how this technology can impact healthcare and those receiving it. In this piece, we discuss the risks and benefits of AI in medical devices, where AI bias comes from, and how it can be mitigated.

State of AI-Enabled Medical Devices 

The first implementations of AI in the medical field date to the 1970s, setting the precedent for using machine learning technologies to diagnose, treat, and prevent disease. For the next several decades, AI-enabled devices were introduced at a relatively low rate. More recently, however, that flat growth has skyrocketed, with the use of AI in healthcare growing 167% from 2019 to 2021. The COVID-19 pandemic and the development of digital health have only accelerated the demand for machine learning in healthcare. Currently, 75% of AI applications are designed for radiology, though a growing number of devices are being developed for other medical fields, such as cardiology and neurology. With the uses of AI expanding quickly, it’s important to understand what implementing this technology entails.

Risks of AI in Medical Care 

Artificial intelligence simulates the processes of the human mind, and to do so, it learns from the data it’s provided. In most cases, AI bias unintentionally reflects the unconscious human biases embedded in that data. Within the medical field, this poses great risk, as biased, partial, or unrepresentative data can be amplified and reproduced in the AI algorithm. In operation, this can lead to issues of generalizability and ethical flaws. Below are a couple of instances where AI bias could lead to medical malpractice.

Skin Cancer Diagnosis 

A study published in JAMA Dermatology found an AI model to be inconsistent in diagnosing skin cancer across different skin colors. The training data was composed mostly of light-skinned subjects, so the AI struggled to identify cancerous spots in dark-skinned patients. In effect, people of color would be diagnosed at a more advanced stage, leaving more limited treatment options.
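To see how this kind of disparity can arise, here is a minimal sketch with made-up numbers (not the study’s data): a classifier tuned on a majority light-skin dataset picks the decision threshold that maximizes overall accuracy, which serves the majority group well but misses malignant cases in the under-represented group.

```python
# Hypothetical illustration of dataset imbalance: lesion "contrast" scores
# are assumed lower on darker skin, and the training pool is 95% light-skin.
import random

random.seed(0)

def make_cases(n, malignant_mean, benign_mean, tone):
    """Generate n malignant/benign pairs of toy contrast scores."""
    cases = []
    for _ in range(n):
        cases.append((random.gauss(malignant_mean, 1.0), 1, tone))  # malignant
        cases.append((random.gauss(benign_mean, 1.0), 0, tone))     # benign
    return cases

# Skewed training pool: 95% light-skin cases, 5% dark-skin cases.
train = (make_cases(95, malignant_mean=5.0, benign_mean=1.0, tone="light")
         + make_cases(5, malignant_mean=3.0, benign_mean=1.0, tone="dark"))

def accuracy(cases, thr):
    return sum((score > thr) == bool(label) for score, label, _ in cases) / len(cases)

# Pick the threshold that maximizes accuracy on the skewed training pool.
best_thr = max((t / 10 for t in range(80)), key=lambda t: accuracy(train, t))

def sensitivity(cases, thr):
    """Fraction of malignant cases the threshold actually catches."""
    malignant = [score for score, label, _ in cases if label == 1]
    return sum(score > thr for score in malignant) / len(malignant)

# Evaluate malignant-detection rate per skin tone on fresh cases.
test_light = make_cases(500, 5.0, 1.0, "light")
test_dark = make_cases(500, 3.0, 1.0, "dark")
print(f"light-skin sensitivity: {sensitivity(test_light, best_thr):.2f}")
print(f"dark-skin sensitivity:  {sensitivity(test_dark, best_thr):.2f}")
```

The threshold looks accurate overall because the training pool is dominated by the majority group, yet roughly half of the minority group’s malignant cases fall below it, which is exactly the kind of gap that representative data would surface.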

Discriminatory Clinical Algorithm 

One 2019 study found that a clinical AI system used in several hospitals was heavily biased in its decision-making. The algorithm was designed to decide which patients needed care, and one subset of the data that trained it was past healthcare spending. The rationale was that spending is an indicator of relative need for care; however, historically, Black patients have spent less on healthcare due to disparities in wealth, income, and access to care. Mistaking lower healthcare expenditure for less immediate need among Black individuals led to bias in the algorithm and an imbalance in care received by race for the same condition.
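The proxy-label problem described above can be sketched in a few lines, using hypothetical numbers: two groups have identical underlying need, but one group historically spends half as much per unit of need, so ranking patients by the spending proxy under-selects that group for a care program.

```python
# Hypothetical sketch of proxy-label bias: 'need' is the true target,
# 'spending' is the biased proxy the algorithm was actually trained on.
import random

random.seed(1)

def patients(group, n, spend_per_need):
    out = []
    for _ in range(n):
        need = random.uniform(0, 10)                 # true medical need
        out.append({"group": group, "need": need,
                    "spending": need * spend_per_need})  # biased proxy
    return out

# Identical need in both groups; group B spends half as much per unit of
# need (a stand-in for historical disparities in access and income).
pool = (patients("A", 1000, spend_per_need=1.0)
        + patients("B", 1000, spend_per_need=0.5))

TOP_K = 200  # slots in the hypothetical care-management program

def share_of_b(rank_key):
    """Fraction of program slots going to group B under a ranking rule."""
    chosen = sorted(pool, key=rank_key, reverse=True)[:TOP_K]
    return sum(p["group"] == "B" for p in chosen) / TOP_K

print(f"B's share, ranked by spending (proxy): {share_of_b(lambda p: p['spending']):.2f}")
print(f"B's share, ranked by need (true goal): {share_of_b(lambda p: p['need']):.2f}")
```

Ranking by the proxy all but excludes group B even though need is identical by construction, which mirrors how the hospital algorithm routed less care to Black patients with the same conditions.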

Solutions to Mitigate Bias 

To get the most out of AI applications in medical devices, it’s imperative to consider what can be done to mitigate the negative impacts that could afflict these devices and their ability to deliver unbiased treatment. 

Regulatory Solutions 

As of 2022, 96% of the AI-enabled medical devices on the market received 510(k) clearance, meaning they didn’t require clinical trials because the developers showed the device is equivalent to an existing one. Only three devices went through the FDA’s more thorough review process before becoming available on the market. Additionally, the FDA can’t require software developers to submit subpopulation-specific data in their device approval applications, which leaves gaps in the evaluation process for bias to creep in. Recently, there has been a strong push to create a more rigorous examination process for AI-enabled medical devices to instill “algorithmic justice and healthcare equity” in the industry. Since such a policy would be costly and require more planning, the FDA can take immediate action to mitigate the risk of bias in healthcare by mandating clear labeling when a device wasn’t tested on a diverse pool of data.

Good Machine Learning Practices 

The FDA, the UK MHRA, and Health Canada recently jointly published 10 guiding principles on good machine learning practices for AI-enabled medical devices. A few of the recommendations that would directly control bias include using clinical studies and data sets that are representative of the intended population and monitoring deployed AI medical devices for misuse or partiality. Organizations should also provide implicit bias training and oversee clinical trials to ensure healthcare equity from top to bottom.
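In practice, the monitoring recommendation amounts to tracking performance per subgroup and flagging gaps. A minimal sketch, with hypothetical field names and an assumed tolerance, might look like this:

```python
# Minimal post-market monitoring sketch (hypothetical record schema):
# compute per-subgroup accuracy from logged predictions and flag any
# subgroup that trails the best-performing one by more than a tolerance.
from collections import defaultdict

TOLERANCE = 0.05  # assumed acceptable accuracy gap between subgroups

def subgroup_accuracy(records):
    """records: dicts with 'subgroup', 'prediction', and 'label' keys."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["subgroup"]] += 1
        hits[r["subgroup"]] += r["prediction"] == r["label"]
    return {g: hits[g] / totals[g] for g in totals}

def disparity_flags(records):
    """Return subgroups whose accuracy trails the best by > TOLERANCE."""
    acc = subgroup_accuracy(records)
    best = max(acc.values())
    return {g: a for g, a in acc.items() if best - a > TOLERANCE}

# Toy prediction log: the device performs worse on one subgroup.
logs = [
    {"subgroup": "light", "prediction": 1, "label": 1},
    {"subgroup": "light", "prediction": 0, "label": 0},
    {"subgroup": "light", "prediction": 1, "label": 1},
    {"subgroup": "light", "prediction": 1, "label": 1},
    {"subgroup": "dark", "prediction": 0, "label": 1},
    {"subgroup": "dark", "prediction": 1, "label": 1},
    {"subgroup": "dark", "prediction": 0, "label": 1},
    {"subgroup": "dark", "prediction": 1, "label": 1},
]
print(disparity_flags(logs))
```

A real deployment would use clinically meaningful metrics (sensitivity, specificity) and statistically sound sample sizes, but the shape of the check is the same: disaggregate, compare, and escalate.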

Benefits of Implementation 

While those in medicine must proceed cautiously, the use of AI-enabled medical devices offers widespread advantages to both healthcare providers and those being treated. Some novel applications of this technology include remote surgery and AI-assisted surgical robots. More broadly used applications of machine learning include data management, risk control, and procedure automation, which free up healthcare providers’ time and improve patient experience, safety, and capacity. AI-enabled medical devices can also help reduce the cost of healthcare by shortening time to diagnosis and treatment. Beyond cost savings, AI that is leveraged appropriately with inclusive data can reduce the prejudice and subjectivity that vary from doctor to doctor and organization to organization. Ultimately, proper enablement of AI in medical devices can help transform healthcare into a more equitable, accessible, and accurate system.

Moving Forward with AI-Enabled Medical Devices 

This June, the FDA published a statement regarding artificial intelligence in medical devices. Most of the release dealt with predetermined change control plans (PCCPs) as a necessary element in minimizing the challenges of AI in medicine. Ultimately, AI-enabled medical devices are just one source of bias in the medical industry. By addressing related issues in clinical trials and healthcare administration, the impact of AI bias can be curtailed.

Artificial intelligence is becoming an integral part of how we function, and as organizations look for safe, beneficial, and cost-effective ways to implement AI in medical devices, Clarkston is here to support you. As AI rolls into the medical device industry, it’s important to understand the implications and best practices going forward. Our medical device consultants have the expertise and services to assist you as we navigate the future of innovation in the industry.


Contributions from Aaron Messer 

Tags: Artificial Intelligence, Medical Device Trends