Population Health

June 18, 2024

Initiative announces awardees of AI-focused population health pilot projects

The Population Health Initiative announced the award of five $100,000 artificial intelligence-focused pilot grants to support interdisciplinary teams of University of Washington researchers in developing the preliminary data or proof-of-concept needed to pursue follow-on funding to scale their respective efforts. The total collective value of these awards is roughly $600,000, which includes matching funds from different schools, colleges and units.

“The rapid advances we are seeing in technological innovation hold incredible promise for us to realize major progress in addressing some of the most pressing challenges to our health and well-being,” said Ali H. Mokdad, the university’s chief strategy officer for population health and professor of health metrics sciences. “We are delighted to be able to support these five project teams to test novel applications of large language models and generative AI in areas ranging from more effective diagnosis of tuberculosis to better assessment of brain health and pathology samples.”

The goal of this special funding call was to accelerate the application of large language models and generative AI to seemingly intractable grand challenges in population health. Details regarding the five funded projects, the project teams and the focus of each team’s work can be found below.

Customizing LLMs for Reliable Clinical Reasoning Support

Investigators
Yulia Tsvetkov, Allen School of Computer Science & Engineering
Pang Wei Koh, Allen School of Computer Science & Engineering
Jonathan Ilgen, Department of Emergency Medicine

Project abstract
Generative AI adapted to medical domains holds great promise for advancing the health and well-being of populations, including in languages and regions that have limited access to healthcare support. However, safety risks and the need for responsible data practices, as well as unrealistic assumptions made by existing approaches to automated clinical reasoning, hinder the development and deployment of models.

This proposal addresses three critical challenges in creating large language models (LLMs) enhanced with medical knowledge and trained for clinical reasoning support: (1) synthesizing realistic data that can facilitate research and model development while improving model fairness and minimizing privacy violations; (2) incorporating uncertainty estimation mechanisms into LLMs so they abstain from making low-confidence decisions, enhancing model safety; and (3) developing new methods to augment LLMs with external knowledge for rapid customization to individual users and new knowledge domains. We propose to incorporate these innovations into a novel framework that simulates patient–expert interactions.
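
To make the second innovation concrete, the following is a minimal sketch of one standard uncertainty-estimation pattern: sampling a model several times and abstaining when its answers disagree. The `query_model` stub and the agreement threshold are hypothetical placeholders, not the team's actual method.

```python
from collections import Counter

def query_model(prompt: str) -> str:
    """Hypothetical stub standing in for one sampled answer
    from a medically adapted LLM."""
    raise NotImplementedError("Replace with a real model call.")

def answer_or_abstain(prompt: str, n_samples: int = 10,
                      min_agreement: float = 0.8) -> str:
    """Sample the model repeatedly; abstain on low self-consistency.

    Agreement across samples is a simple proxy for confidence:
    scattered answers mark a low-confidence decision that should
    be deferred to a clinician rather than guessed.
    """
    answers = [query_model(prompt) for _ in range(n_samples)]
    top_answer, count = Counter(answers).most_common(1)[0]
    if count / n_samples >= min_agreement:
        return top_answer
    return "ABSTAIN: low confidence; defer to a clinician."
```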

Ultimately, this project aims to develop a proof-of-concept prototype of reliable, interactive, knowledgeable, and socially aware LLM assistants that empower patients from diverse populations, clinicians, and researchers across a wide range of clinical use cases.

PathFinder: A Multi-Modal Multi-Agent Framework for Diagnostic Decision-Making in Histopathology

Investigators
Linda Shapiro, Department of Electrical & Computer Engineering, Allen School of Computer Science & Engineering
Ranjay Krishna, Allen School of Computer Science & Engineering
Mehmet Saygin Seyfioglu, Department of Electrical & Computer Engineering
Fatemeh Ghezloo, Allen School of Computer Science & Engineering
Wisdom Ikezogwo, Allen School of Computer Science & Engineering

Project abstract
Pathologists often detect diseases by examining histopathology whole-slide images (WSIs), which are digitally scanned human pathology samples of gigapixel size. In their analysis, pathologists traverse these extensive images, gathering evidence to support their diagnoses – a time-consuming process that becomes increasingly demanding as cancer cases rise with the aging global population.

AI technology can dramatically speed up the diagnostic process, enabling doctors to help more patients efficiently. However, existing computational solutions segment large WSIs into multiple small patches that are analyzed independently. While somewhat effective, they lack efficiency, interpretability, and holistic diagnosis. We propose PathFinder, a multi-modal, multi-agent framework that mimics the natural decision-making process of expert pathologists. PathFinder will contain three AI agents that collaborate to navigate between WSI patches, gather evidence, and make a holistic final diagnosis:

1) The Navigation Agent will mimic a pathologist’s viewing behavior to find the most important regions within the WSI.
2) The Description Agent will then provide natural-text descriptions of the regions of interest (ROIs) that the Navigation Agent identifies.
3) Finally, the Diagnosis Agent will make a diagnosis based on the accumulated descriptions provided by the Description Agent.
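
To illustrate the shape of such a pipeline (a hypothetical sketch, not the team's implementation), the code below chains three placeholder agent functions so that each one's output feeds the next:

```python
from dataclasses import dataclass

@dataclass
class Region:
    x: int     # top-left pixel coordinates of a patch within the WSI
    y: int
    size: int  # edge length of the square patch, in pixels

def navigation_agent(wsi, max_regions: int = 8) -> list[Region]:
    """Placeholder: select the most informative regions, mimicking
    a pathologist's viewing path across the slide."""
    raise NotImplementedError

def description_agent(wsi, region: Region) -> str:
    """Placeholder: produce a natural-language description of one
    region of interest (ROI)."""
    raise NotImplementedError

def diagnosis_agent(descriptions: list[str]) -> str:
    """Placeholder: aggregate the ROI descriptions into a single
    holistic diagnosis."""
    raise NotImplementedError

def pathfinder(wsi) -> str:
    """Chain the three agents: navigate, describe, diagnose."""
    regions = navigation_agent(wsi)
    descriptions = [description_agent(wsi, r) for r in regions]
    return diagnosis_agent(descriptions)
```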

Our method enhances efficiency by reducing the need to examine every section of the WSI and provides human-readable diagnostic decisions through natural language descriptions of ROIs. Our integrated system promises a more intuitive and precise diagnostic process, potentially adaptable to other types of medical imaging like ultrasound and MRI, making it a versatile tool in medical diagnostics.

Standalone Smartphone Pupillometry with Machine Learning and AI for Diagnosis of Neurological Disease

Investigators
Michael R. Levitt, Department of Neurological Surgery
Suman Jayadev, Department of Neurology
Shwetak Patel, Allen School of Computer Science & Engineering
Anthony Maxin, Department of Neurological Surgery

Project abstract
The pupillary light reflex (PLR) is a non-invasive biomarker associated with brain health. It is altered in diseases and conditions such as traumatic brain injury and dementia. Most clinicians are forced to make a PLR assessment subjectively using a penlight and the naked eye – a technique known as manual pupillometry. While the literature has shown that this is unreliable, it is the only method available to the majority of first responders and clinicians in the USA and throughout the world. Quantitative pupillometry was developed in response to the inaccuracy of manual pupillometry and is a highly accurate method of assessing the PLR. Unfortunately, prevailing devices are fragile, cumbersome, and cost ~$9,000, not including repeat expenditures for disposable parts, making them unaffordable for most hospitals in the USA, let alone the rest of the world.

To address this need for a more affordable and accessible method of quantitative pupillometry, we have developed PupilScreen – a standalone smartphone application for reliable detection and quantification of the PLR using machine learning. In the proposed project, we will build upon the application’s initial development and testing to systematically generate pilot data on the reliability of its measurements and on the use of the PLR, with assistance from machine learning, to diagnose several high-impact neurological conditions, with the goal of using these preliminary data to pursue future funding.
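
For illustration only, the sketch below shows one classical computer-vision route to quantifying the PLR from smartphone video: detecting the pupil frame by frame with OpenCV's Hough circle transform and summarizing constriction. PupilScreen itself relies on machine learning, which is far more robust; the detection parameters here are hypothetical.

```python
import cv2
import numpy as np

def pupil_radii(video_path: str) -> list[float]:
    """Estimate pupil radius (in pixels) in each frame of an eye video.

    Classical-CV illustration only; a learned model (as in
    PupilScreen) handles lighting and occlusion far better.
    """
    radii = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.medianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 5)
        circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1,
                                   minDist=200, param1=80, param2=30,
                                   minRadius=5, maxRadius=60)
        if circles is not None:
            radii.append(float(circles[0][0][2]))  # radius of best circle
    cap.release()
    return radii

def percent_constriction(radii: list[float]) -> float:
    """PLR summary: percent change from maximum to minimum pupil size.
    Assumes at least one successful detection."""
    r = np.array(radii)
    return 100.0 * (r.max() - r.min()) / r.max()
```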

Using AI for Tuberculosis Classification Using Wearable Data

Investigators
Shwetak Patel, Allen School of Computer Science & Engineering, Department of Electrical & Computer Engineering
David Horne, Department of Medicine
Thomas R. Hawn, Department of Medicine

Project abstract
According to the WHO, tuberculosis (TB) is the leading infectious disease-related cause of death, killing 1.5 million individuals each year and causing disease in 10 million. With the increasing ubiquity of connected technologies in developing countries, there is an opportunity to use these tools to help diagnose and limit TB in a population.

We recently created a model that distinguishes TB from non-TB coughs in smartphone recordings. Although these results are promising, this study and others were conducted under controlled lab settings and conditions. While this is useful for comparing the performance of different models on this type of data, it does not address the many nuances of real-world data that must be handled before deployment outside controlled situations. Wearables offer continuous, unobtrusive monitoring and increased access to signals throughout the day, but they come with additional signal noise.

We propose a pilot study that aims to use wearable sensors to classify TB infection based on cough characteristics in real-world settings. The primary aim will be to collect continuous data with lab-test ground truth to create an ML model for diagnosing TB infection that is robust across situationally diverse conditions. Secondarily, we will explore the use of biometrics from Fitbit data and of generative AI as ways to create a more robust classifier.
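
As a hypothetical illustration of this kind of acoustic classifier (not the team's model), the sketch below summarizes each cough recording as mean MFCC features via librosa and fits a scikit-learn random forest; the file names and labels are placeholders.

```python
import librosa
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def cough_features(wav_path: str) -> np.ndarray:
    """Summarize a cough recording as mean MFCCs, a common
    compact acoustic representation."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

# Placeholder file lists; real data would carry lab-confirmed labels.
tb_files = ["tb_cough_001.wav"]         # label 1: TB-positive coughs
non_tb_files = ["other_cough_001.wav"]  # label 0: non-TB coughs

X = np.vstack([cough_features(f) for f in tb_files + non_tb_files])
y = np.array([1] * len(tb_files) + [0] * len(non_tb_files))

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, y)  # with real data, evaluate via cross-validation instead
```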

With a robust pipeline, our proof of concept will be a stepping stone toward real-time, community-deployable models that allow for early diagnosis and notification, decreasing TB transmission events and supporting TB control.

AI-generated characterization of landscape risk for disease emergence in Washington

Investigators
Julianne Meisner, Department of Global Health
John Y. Choe, Department of Industrial & Systems Engineering
Shwetak Patel, Allen School of Computer Science & Engineering
Peter Rabinowitz, Department of Environmental & Occupational Health Sciences
Beth Lipton, Washington State Department of Health

Project abstract
Over the last 50 years, new pathogens have emerged from wildlife and environments to cause human epidemics and pandemics at increasing frequency, with increasingly severe impacts. Enormous advances in computer science have also been achieved over this period, allowing zoonotic disease experts to use sophisticated modeling approaches to predict the sites of future emergence events, termed “hotspots.”

However, most of these efforts have produced hotspot maps with low spatial resolution, meaning large areas of entire countries or even regions are flagged as hotspots – information that is not actionable. This limitation is due, in part, to the datasets used to fit these models, which are either low-resolution or poorly suited for predicting zoonotic hotspots. For instance, many modeling efforts have treated all human-modified landscapes as risk factors for zoonotic emergence, ignoring important heterogeneities in how communities and settlements interface – or coexist – with ecosystems. Further, to our knowledge, there are no prior efforts to produce forecasted versions of these datasets, limiting hotspot mapping to current conditions.

In this project, a doctoral student in computer science will work with a multidisciplinary team of UW faculty mentors to create high-resolution, dynamic datasets of key risk factors for pandemic emergence in Washington state, validate them with members of the Washington State One Health Collaborative, and develop a computational framework for forecasting these datasets. This work will serve as key proof-of-principle for a larger grant proposal to NIH, NSF or ARPA-H for scale-up to global pandemic prediction.
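
To illustrate the general modeling pattern (purely a sketch with simulated data, not the project's framework), the code below fits a gradient-boosted classifier to hypothetical per-grid-cell landscape covariates and produces a cell-level risk surface:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical per-grid-cell covariates for a high-resolution raster:
# e.g., land-cover change, livestock density, settlement intensity.
n_cells = 5000
X = rng.random((n_cells, 3))

# Hypothetical labels: 1 where a past spillover event was recorded.
y = (X[:, 0] * 0.6 + X[:, 1] * 0.3
     + rng.normal(0, 0.1, n_cells) > 0.5).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Predicted probability of emergence per cell; mapping these values
# back onto the grid yields a high-resolution hotspot surface.
risk_surface = model.predict_proba(X)[:, 1]
print(risk_surface[:5])
```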

More information about this funding opportunity can be found by visiting its funding page.