When consumers think of artificial intelligence or augmented intelligence (AI) today, they may think of Apple’s Siri or Amazon’s Alexa. Or if they are more familiar with 2001: A Space Odyssey, they may remember HAL 9000’s quip, “I’m sorry, Dave. I’m afraid I can’t do that.” All joking aside, AI has applications beyond science fiction movies and ordering groceries from Amazon. Practitioners are applying AI to some of our most serious problems, including those that keep our healthcare system from providing the cost-effective, high-quality care Americans deserve.

Healthcare providers currently use AI-driven tools—especially machine learning—to inform clinical decisions, assist those with physical and developmental disabilities, and develop drugs. Moreover, connected health experts predict an AI-driven sea change in healthcare, which could unlock incredible efficiencies and cost savings. As these applications make their way into the American healthcare ecosystem, it is more important than ever that policymakers understand what AI is (and what it is not) and its role in the future of clinical care.

In late July, ACT | The App Association’s Connected Health Initiative (CHI) hosted a lunch briefing with the Congressional Artificial Intelligence Caucus titled “Advancing American Healthcare Using Artificial Intelligence.” The App Association’s senior director for public policy, Graham Dufault, moderated the discussion, which featured insights from Kali Durgampudi, VP of innovation and mobility at Nuance Communications; Josh New, senior policy analyst for the Center for Data Innovation (CDI); Betsy Furler, CEO of Communication Circles; and Dr. Akane Sano, assistant professor at Rice University. Representative Pete Olson (TX-22), co-chair of the AI Caucus, gave opening remarks. The panelists and Congressman Olson engaged congressional staffers and policymakers in an hour-long discussion on the implications of AI for the health sector.

Kali Durgampudi described Nuance’s platform as sitting “on top of” electronic health records (EHRs), with its main function being to flag decision points for the practitioner. For example, if the input data includes a description of two symptoms associated with a certain condition, a pop-up might appear suggesting a potential co-morbidity for the practitioner to consider. He further noted that doctors go through about 4,000 clicks each day and spend 43 percent of their time on documentation. With the voice recognition software in Nuance products, the time physicians spend on documentation has dropped by a staggering 45 percent, freeing doctors to do what they signed up for: practice medicine.
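
To make the idea concrete, here is a minimal sketch of how a decision-support layer sitting “on top of” an EHR might flag a potential co-morbidity when two associated symptoms appear in a record. The rule table, field names, and suggestions are illustrative assumptions only, not Nuance’s actual implementation, which relies on trained speech and language models rather than hand-written rules.

```python
# Hypothetical sketch of an EHR decision-support flag.
# The rule table and record fields are illustrative only; a real system
# would combine NLP over clinical notes with trained models.

COMORBIDITY_RULES = [
    # (required symptoms, suggestion to surface for the clinician)
    ({"polyuria", "polydipsia"}, "Consider screening for type 2 diabetes"),
    ({"fatigue", "weight gain"}, "Consider evaluating thyroid function"),
]

def decision_flags(ehr_record: dict) -> list[str]:
    """Return suggestions to surface as pop-ups for the practitioner."""
    documented = {s.lower() for s in ehr_record.get("symptoms", [])}
    return [
        suggestion
        for required, suggestion in COMORBIDITY_RULES
        if required <= documented  # all required symptoms are present
    ]

if __name__ == "__main__":
    record = {"patient_id": "demo-001", "symptoms": ["Polyuria", "Polydipsia"]}
    for flag in decision_flags(record):
        print(flag)  # the clinician, not the software, makes the final call
```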

Josh New detailed CDI’s policy priorities for AI. He noted that some policymakers are proposing complete transparency in how AI arrives at an outcome, while others are calling for explainability. Complete transparency would require companies to disclose their AI algorithms in their entirety so that policymakers and others could understand how they reached certain conclusions. Explainability, New argued, is not much better, as it would require that each decision of an AI algorithm be publicly explained and traced through the decision-making process. Because both complete transparency and explainability carry adverse consequences, New proposed accountability mechanisms instead. The difference is that an accountability measure would hold the user of an AI-driven algorithm responsible for the ultimate decision, whether or not the user relies solely on the AI. Accountability would therefore require algorithms to allow human intervention while incentivizing AI users to understand their own algorithms well enough to bear the risk of an adverse outcome.

Dr. Akane Sano shared her experience using machine learning in academic research, describing her work on the stress levels and mental health of undergraduate students. Sano played a video for the audience describing a study in which participants wore devices that tracked their sleep, heart rhythms, and other bioindicators. These quantitative data points were measured against self-reported moods and stress levels, and several machine learning algorithms were used to turn the data into meaningful information about which factors contribute most to student happiness. For instance, the study found that time spent indoors was negatively correlated with student happiness. The study has been widely cited, and Dr. Sano continues to build on its findings.
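
For a rough sense of the kind of analysis Dr. Sano described, the sketch below fits a simple linear model relating wearable-style features to self-reported happiness. The features, coefficients, and data are entirely synthetic and are not drawn from the actual study, which used far richer data and models.

```python
# Illustrative only: toy data standing in for wearable-derived features
# (sleep hours, heart-rate variability, time spent indoors) and
# self-reported happiness scores.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
sleep_hours = rng.normal(7, 1, n)
hrv = rng.normal(50, 10, n)
hours_indoors = rng.normal(14, 3, n)

# Synthetic "ground truth": more sleep helps, more time indoors hurts.
happiness = (0.6 * sleep_hours - 0.3 * hours_indoors
             + 0.02 * hrv + rng.normal(0, 1, n))

X = np.column_stack([sleep_hours, hrv, hours_indoors])
model = LinearRegression().fit(X, happiness)

for name, coef in zip(["sleep_hours", "hrv", "hours_indoors"], model.coef_):
    print(f"{name}: {coef:+.2f}")  # hours_indoors comes out negative
```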

The final panelist was App Association member Betsy Furler, a speech pathologist turned startup CEO who uses apps in her own practice and consults on accessibility for developers and tech companies. The apps Furler uses help non-verbal kids communicate in real time with parents, loved ones, and others. In particular, Proloquo2Go uses AI-driven software to learn a non-verbal child’s speech patterns in order to better anticipate the next word in a sentence. She also described the use of AI in new screening tools teachers are using with kindergarten-age children; teachers find the AI-driven tools help experts screen children for speech development issues more efficiently than other methods. To the extent that a program using AI might exhibit biases, Furler noted that the apps never have the final say: from a clinical standpoint, the expert will always override an app’s decision when it follows an incorrect bias or reaches the wrong conclusion, and thanks to AI, the app then learns from that correction.
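
The next-word anticipation Furler described can be illustrated with a deliberately simple bigram frequency model. This is a toy stand-in, not how Proloquo2Go actually works, and the example phrases are invented.

```python
# Minimal bigram-based next-word predictor, illustrative only.
# Real AAC apps learn from a child's own usage and much richer context.
from collections import Counter, defaultdict

def train_bigrams(sentences: list[str]) -> dict[str, Counter]:
    """Count which word tends to follow each word in the child's phrases."""
    following = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            following[current][nxt] += 1
    return following

def suggest_next(model: dict[str, Counter], word: str, k: int = 3) -> list[str]:
    """Suggest the k words most often used after `word`."""
    return [w for w, _ in model[word.lower()].most_common(k)]

phrases = ["I want juice", "I want to play", "I want more juice", "go to park"]
model = train_bigrams(phrases)
print(suggest_next(model, "want"))  # e.g. ['juice', 'to', 'more']
```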

To the everyday observer, AI may seem like dystopian fiction. However, it is real, and it is changing healthcare for the better. It was great to hear about the use cases and ramifications of AI from policy, business, and academic experts, and the Connected Health Initiative looks forward to playing a leading role in this developing conversation in the public policy realm.