
A Lack of Diverse Data Is Hurting Healthcare AI. Here’s How.

With great term sheets comes great responsibility



Look at 2020 funding for healthcare AI startups and you'll see each quarter raising the bar. In Q3, funding reached its highest level in three years.

But with great term sheets comes great responsibility. Experts worry that a lack of representation in training data could lead to false diagnoses and suboptimal patient care. These concerns are also preventing some AI products from going to market.

The good, the bad, and the risky

People put faith in machine learning models in healthcare because, at their best, they can push past human limitations, expediting drug discovery, diagnoses, and other medical breakthroughs.

The problem: An ML model is only as good as the data it’s trained on. And issues with that data—like narrow scope and existing biases—can easily compound over time.

  • For example: A model trained mostly on medical data from a predominantly white area could have trouble diagnosing Black women, as the toy sketch below shows.
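
To make that failure mode concrete, here's a toy sketch. Nothing in it comes from the article: the data is synthetic, the "biomarker" features are made up, and the group split is chosen just to show how training on a skewed population plays out.

```python
# Toy illustration: a classifier trained on data dominated by one group
# underperforms on the group it rarely saw. All numbers are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic 'biomarker' readings; the disease signal sits at a
    different baseline (shift) for each group."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 3))
    y = (X.sum(axis=1) > shift * 3).astype(int)  # same rule, shifted baseline
    return X, y

X_a, y_a = make_group(950, shift=0.0)  # well-represented group
X_b, y_b = make_group(50, shift=2.0)   # underrepresented group

# Train on the skewed mix, then test on fresh samples from each group.
model = LogisticRegression().fit(
    np.vstack([X_a, X_b]), np.concatenate([y_a, y_b])
)

Xa_test, ya_test = make_group(500, shift=0.0)
Xb_test, yb_test = make_group(500, shift=2.0)
print("accuracy, majority group:      ", accuracy_score(ya_test, model.predict(Xa_test)))
print("accuracy, underrepresented:    ", accuracy_score(yb_test, model.predict(Xb_test)))
# Expect high accuracy on the majority group and near-chance on the other.
```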

Accounting for these types of discrepancies is key, especially before an AI healthcare product goes to market.

But there are potential roadblocks to solving these issues. The healthcare sector is “notorious for having siloed data, and must also contend with patient data privacy concerns,” Deepashri Varadharajan, lead AI analyst at CB Insights, told us.

  • One potential fix: Federated learning, a more privacy-focused approach to ML, could let companies collaboratively build more representative data sets without sharing raw data (a rough sketch follows this list).
  • “We’re already seeing...partnership models in AI-enabled diagnostics and drug R&D,” says Varadharajan.
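
For a rough idea of the mechanics, here's a minimal federated-averaging sketch. The "hospitals," their data, and the update rule are all illustrative assumptions, not anything CB Insights or the article describes; the point is simply that model weights, never raw patient records, travel to the server.

```python
# Minimal federated-averaging (FedAvg-style) sketch: each site trains on
# its own data, and only weight vectors are shared and averaged.
import numpy as np

rng = np.random.default_rng(1)

def local_data(n, shift):
    """Hypothetical per-hospital data set; 'shift' stands in for
    demographic differences between patient populations."""
    X = rng.normal(loc=shift, size=(n, 4))
    y = (X @ np.array([1.0, -1.0, 0.5, 0.25]) > shift).astype(float)
    return X, y

def local_update(w, X, y, lr=0.1, steps=20):
    """A few steps of logistic-regression gradient descent on local data."""
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

hospitals = [local_data(200, shift=s) for s in (0.0, 1.0, -1.0)]
w = np.zeros(4)  # global model, shared with every site

for _ in range(10):  # communication rounds
    # Each site refines the current global weights on its own data...
    local_ws = [local_update(w.copy(), X, y) for X, y in hospitals]
    # ...and the server averages the weight vectors; no raw data moves.
    w = np.mean(local_ws, axis=0)

print("global weights after federated training:", np.round(w, 2))
```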

The trajectory

The FDA’s current regulatory framework doesn’t cover adaptive or continuously learning algorithms. So far, the FDA has only approved “locked” AI/ML algos, or ones that always provide the same output under the same conditions.
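
A hypothetical sketch of that distinction (our framing, not the FDA's): a locked model's parameters are frozen at approval time, so identical inputs always produce identical outputs, while a continuously learning model updates itself on new cases, so its answers can drift after deployment.

```python
# Illustrative contrast between a "locked" model and a continuously
# learning one. The classes and numbers are invented for this sketch.
import numpy as np

class LockedModel:
    def __init__(self, w):
        self.w = np.asarray(w)       # frozen at approval time

    def predict(self, x):
        return float(self.w @ x)     # same input -> same output, always

class AdaptiveModel(LockedModel):
    def update(self, x, y, lr=0.01):
        # Online learning step: the deployed model changes after each
        # new case, which the current framework doesn't yet cover.
        self.w = self.w + lr * (y - self.w @ x) * np.asarray(x)

x = [1.0, 2.0]
locked, adaptive = LockedModel([0.5, 0.5]), AdaptiveModel([0.5, 0.5])
print(locked.predict(x), adaptive.predict(x))  # identical today...
adaptive.update(x, y=3.0)
print(locked.predict(x), adaptive.predict(x))  # ...but not after new data
```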

Eighteen months ago, the FDA published a proposed regulatory framework for AI/ML-based software used for medical purposes, but nothing’s finalized yet.

Looking ahead: Things came to a head at a recent FDA public advisory committee meeting, when speakers raised concerns about model bias, lack of large and varied data sets, and the life-or-death implications of this tech.

  • “AI and ML algorithms may not represent you if the data do not include you,” said Terri Cornelison, a chief medical officer with the FDA.