
Three With a SME: What You Need to Know About AI in Healthcare with Lee Miller, Software Architect at Juno Health

Potential of Healthcare AI: Insights from Expert Lee Miller

By Juno Health

April 30, 2024



Lee Miller, Software Architect at Juno Health, recently sat down with the Juno Health blog editorial team to discuss the blue-sky potential of AI, what we need to do to ensure it reaches that potential, and the risks associated with implementing it before it’s ready.

In the following Q&A, Lee also discusses the future of AI in healthcare, along with AI bias and privacy concerns that need to be addressed.

The following has been edited for length and clarity.

Q: Overall, please tell us about the potential that AI holds for the healthcare arena.

Miller: AI has significant potential in healthcare. It can identify correlations in large datasets that may be too complex for humans to detect. For example, AI has demonstrated the ability to identify sepsis up to two days before a doctor might notice it, as well as predict potential mortality and optimize scheduling.

AI can also assist with scheduling patients, navigating reimbursement processes, and many other aspects of healthcare. When implemented with appropriate safeguards and risk management, AI can revolutionize patient care by providing early warnings and identifying optimal treatment paths.

Additionally, AI shows promise in drug trials, enabling better predictions and helping determine potential avenues for research. The success of these applications hinges on the amount and quality of data collected.

Q: What do we need to do to achieve the potential that AI holds for healthcare?

Miller: A key aspect of AI is the use of data – clean, qualified, and unbiased data. Ensuring data is free from implicit biases is challenging but crucial. Biased data leads to biased models, resulting in favorable outcomes for some groups while disadvantaging others.

For example, a sepsis prediction model developed for a specific hospital's patient population worked well there but failed when applied to a different hospital with a different patient demographic. To avoid such issues, collecting large datasets from diverse geographic areas and populations is essential. This means considering variables such as gender, age, demographics, and racial backgrounds.

Large-scale data collection is necessary for developing robust models. This involves gathering millions or even billions of data points and using statistical sampling to minimize bias. A major challenge for AI today is the need for vast amounts of diverse data, as some generative AI applications have exhausted the available data sources on the internet, hindering their progress.
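The stratified, statistical sampling Miller describes can be sketched in a few lines. This is a minimal illustration, not Juno Health's implementation; the record fields and group sizes are hypothetical, and the idea is simply to draw an equal number of records from each subgroup so no one population dominates the training set.

```python
import random
from collections import defaultdict

def stratified_sample(records, key, per_group, seed=0):
    """Draw an equal number of records from each subgroup (e.g. region,
    age band) so no single population dominates the training set."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for record in records:
        groups[record[key]].append(record)
    sample = []
    for members in groups.values():
        rng.shuffle(members)
        sample.extend(members[:per_group])
    return sample

# Hypothetical patient records, each tagged with a region
records = [{"id": i, "region": "urban" if i % 3 == 0 else "rural"}
           for i in range(300)]
balanced = stratified_sample(records, key="region", per_group=50)
```

In practice the stratification key would span many variables at once (gender, age, demographics, race), but the principle is the same: sample deliberately rather than taking whatever data is most abundant.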

Consider the scale of the datasets required for effective AI models. Years of data collection are necessary. One reason I'm involved in developing a new EHR system is to enable codified data collection that can be utilized for data analytics in the future. This system will allow us to extract and combine data from different facilities, building large datasets for machine learning models and AI projects.

Q: What will happen if AI is implemented too early? Or in areas where it’s not ready yet?

Miller: This brings us to the risks of AI in healthcare. President Biden's Executive Order on AI and the ONC's interpretation of it, specifically 170.315(b)(11), address these risks, particularly bias and potential disadvantages for certain groups. Underrepresentation and social determinants of health can pose challenges.

For example, when developing systems for the VA, there is an inherent bias. The VA primarily serves male veterans who are older, were once in excellent health due to their military service, and have experienced traumatic injuries. Additionally, they often come from rural areas or inner cities. This smaller dataset is not representative of the overall population, making it risky to generalize from VA data.

Another scenario involves using a scheduling or payment algorithm with data from an area that includes both wealthy and poor ZIP codes. Relying on ZIP codes could result in prioritizing patients from wealthier areas over those from poorer ones. This implicit bias can lead to unequal treatment, which may only become apparent over time.
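One simple way to surface the kind of ZIP-code bias described above is to compare how a scheduling model scores patients from different areas. The sketch below is purely illustrative: the scores, group names, and 1.25 threshold are hypothetical, and a real audit would use established fairness metrics rather than a single ratio.

```python
def disparity_ratio(scores_by_group):
    """Ratio of the highest to lowest mean priority score across groups;
    a ratio far above 1.0 suggests the model favors one group."""
    means = {g: sum(s) / len(s) for g, s in scores_by_group.items()}
    lo, hi = min(means.values()), max(means.values())
    return hi / lo if lo else float("inf")

# Hypothetical priority scores emitted by a scheduling model,
# grouped by the patients' ZIP-code area
scores = {
    "wealthier_zip": [0.9, 0.8, 0.85],
    "poorer_zip": [0.4, 0.5, 0.45],
}
ratio = disparity_ratio(scores)
if ratio > 1.25:  # threshold chosen arbitrarily for illustration
    print("Potential ZIP-code bias detected")
```

A check like this only reveals a symptom; as Miller notes, the underlying fix is removing proxies for wealth (such as ZIP code) from the features the model relies on.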

President Biden's executive order and ONC regulations aim to prevent preferential treatment based on social determinants of health. While the intention is to create equitable healthcare, the risk remains that wealthier individuals might receive better care than their less affluent counterparts.

It is crucial to eliminate disparities in healthcare and ensure that everyone receives the same quality of treatment. Data must be collected from across the entire population spectrum, regardless of socioeconomic status, racial background, or sexual identity, to avoid skewing AI models and disadvantaging certain groups.

To achieve this, it's essential to gather data from underrepresented groups and continuously monitor for potential biases. Addressing this challenge early on is vital, as biases can be insidious and may emerge from unexpected sources.

NIST provides risk management frameworks that guide how to identify and mitigate biased data throughout the process. This requires significant effort, including transparent communication with end users about how models are created, where the data comes from, and why it is considered unbiased.

Even after deployment, ongoing monitoring is necessary to detect and address biases. If biases are identified, they must be communicated to end users, along with strategies for addressing them.
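The ongoing monitoring Miller describes often takes the form of tracking, batch by batch, whether the model flags patients from different demographic groups at similar rates. The following is a minimal sketch under assumed inputs (binary flags per group); real monitoring pipelines track many metrics over time and alert end users when a gap widens.

```python
def selection_rate(flags):
    """Fraction of patients the model flagged (e.g. for early intervention)."""
    return sum(flags) / len(flags)

def parity_gap(flags_by_group):
    """Largest difference in flag rate between any two demographic groups;
    tracked over time after deployment to catch emerging bias."""
    rates = [selection_rate(f) for f in flags_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical weekly batch of model outputs (1 = flagged for follow-up)
weekly_batch = {
    "group_a": [1, 0, 1, 1],
    "group_b": [0, 0, 1, 0],
}
gap = parity_gap(weekly_batch)
```

A widening gap does not prove bias on its own, but it is the kind of signal that, per Miller, must be investigated and communicated to end users along with a mitigation plan.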

The widespread rush to adopt AI as a cure-all poses a significant risk. Hastily jumping into AI without proper preparation may lead to disappointing outcomes, with AI failing to meet expectations or deliver desired results. This may result in AI being dismissed as immature, ineffective, or doomed to fail due to negative experiences.

In addition, clinical interventions should be approached cautiously, as they directly impact people's health. Instead, focusing on areas such as scheduling and payments is a safer starting point. However, even in these areas, it is essential to remain vigilant to avoid unintended bias that may favor one group over another based on location, age, or financial ability.

To avoid these pitfalls, AI deployment should be thoughtful and deliberate. As emphasized earlier, data is crucial: large, high-quality datasets are essential, yet privacy concerns currently limit access to such curated datasets, raising questions about whether sensitive social determinants of health can be collected and used at all.

This potential for skewed outcomes is a significant concern and highlights the importance of careful planning and oversight.

For example, one dramatic and widely publicized failure can significantly impact public perception. Airplane crashes offer an analogy: they did not stop the development of air travel, which is now one of the safest forms of transportation, yet the memory of crashes still causes fear for some people, even though driving to the airport poses a greater risk than flying.

The same applies to AI in healthcare. If problems arise early on, people may fear AI-based treatments and avoid them due to a perceived risk. News outlets and social media will amplify any negative incidents, fueling conspiracy theories and fear.

Please contact us here to learn more about how Juno Health is dedicated to building smarter, more flexible digital healthcare solutions.
