Artificial intelligence (AI) and its promise of early detection, targeted therapy, and ubiquitous access to recommendations could well be the linchpin to the “revolutionary change” described in the National Research Council report on Computational Technology for Effective Healthcare more than a decade ago.1 Artificial and augmented intelligence methods enhance the utility of data for making predictions using more variables collected across more settings and continually updating these predictions with new data.2 The enthusiasm with which data scientists and predictive analytics companies—from large, well-established companies to startups—have embraced the application of AI in health care has resulted in a plethora of algorithms and new commercial products. Intense pressure is being placed on health care systems to implement them.
Given the abundance of algorithms, it is remarkable that there has yet to be a major shift toward the use of AI for health care decision-making (clinical or operational).3 While data quality, timeliness of data, lack of structure in the data, and lack of trust in the algorithmic black box are often mentioned as reasons, a contributing factor is perhaps that model developers and data scientists pay little attention to how a well-performing model will be integrated into health care delivery. The problem is that common approaches to deploying AI tools are not improving outcomes. The race to innovate is putting algorithms into the medical and data science literature, and into products and medical devices, at a pace that far exceeds the health care system’s understanding of what to do with their results. To design an algorithm with its implementation in mind, a robust link between AI and meaningful clinical and operational capabilities is imperative.
Understand What Precipitates Change
Designing a useful AI tool in health care should begin with asking what system change the AI tool is expected to precipitate. For example, simply predicting or knowing the risk of readmission does not result in decreased readmission rates; it is necessary to do something in response to the information. The root causes of high readmission risk may include inadequate follow-up and problems filling prescriptions, both of which might be addressed during the discharge process. But the problem may be more complex, such as comorbid mental illness or a difficult home environment.4 How will the system respond when AI phenotyping identifies patients at risk for these related problems?
Purposefully engaging end users such as clinicians, patients, and operational leaders at the outset of data interrogation can elicit information about what is needed to achieve change in practice. Insights may include where and how in the workflow information should be presented, additional data streams that might need to be built, and in some cases the realization that the problem is not ready for an AI solution given a lack of evidence-based intervention strategies to effect the outcome. The iterative interaction of change-informed AI and AI-informed change, the beginning and the end of the modeling process, sets the stage for improved outcomes, as illustrated in the Figure.
The pipeline consists of 5 phases and a series of actions to achieve the goal of each phase. It is important to identify the intervention that would be testable early in the development of the AI tool, so that the end users of the information can assist in all phases of the pipeline.
Detection, Prognostication, and Prediction
Health care AI ideally improves detection, prognostication, or prediction5; defining its goal helps identify the right intervention strategy. Detection models answer questions about an individual patient’s current status, such as whether a patient has a specific finding on an imaging study, or whether they have a disease. For detection, informing a clinician of the probability that a state exists may be sufficient to inform action. In contrast, prognostication, or estimating likelihood of some future state such as 1-year mortality probability in chronic heart failure, may be insufficient to motivate change, although this information may benefit patient and clinician decision-making.6 Prediction of response to an intervention promotes the possibility of effective change in response to identifying patients at risk for an undesirable future state, such as modeling information about a tumor that identifies it as responsive to anti–PD-1/PD-L1 therapy.7 While a feature in a model may be both prognostic and predictive, guiding a health care team on differential treatment response is more informative than simple prognostication. Purposeful inclusion of variables that predict response, or nonresponse, to an intervention can offer actionable information to the end user. For example, a study of a mandatory intervention to moderate alcohol use and reduce readmissions among trauma patients found that it was effective for those without serious alcohol-related problems but ineffective for those with more serious alcohol-related problems, who are also at higher risk of readmission and in whom the expensive intervention was unlikely to be warranted.8 Modeling the interaction between patient characteristics and potential interventions could inform the discharge team what changes to implement to prevent an expected readmission for an individual patient.
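The distinction between a prognostic feature (associated with outcome regardless of treatment) and a predictive feature (one that modifies treatment response) can be illustrated with a minimal sketch. The following Python example uses entirely simulated, hypothetical data loosely patterned on the alcohol-intervention scenario above; the subgroup labels, probabilities, and effect sizes are illustrative assumptions, not results from the cited study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical simulated cohort: 'severe' flags serious alcohol-related
# problems (a prognostic feature), and treatment is randomized.
severe = rng.random(n) < 0.3
treated = rng.random(n) < 0.5

# Assumed readmission probabilities: the intervention helps only the
# non-severe subgroup, i.e., a treatment-by-severity interaction
# makes 'severe' predictive as well as prognostic.
p = np.where(severe, 0.40,                   # severe: no treatment benefit
             np.where(treated, 0.15, 0.25))  # non-severe: 0.25 -> 0.15
readmitted = rng.random(n) < p

def risk_difference(mask):
    """Treated-minus-control readmission rate within a subgroup."""
    t = readmitted[mask & treated].mean()
    c = readmitted[mask & ~treated].mean()
    return t - c

rd_non_severe = risk_difference(~severe)  # expected to be clearly negative
rd_severe = risk_difference(severe)       # expected to be near zero
print(f"non-severe: {rd_non_severe:+.3f}, severe: {rd_severe:+.3f}")
```

Comparing subgroup risk differences in this way shows why a model that reports only overall readmission risk would miss the actionable information: the intervention is worth deploying in one subgroup and not the other.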
Evaluating the Influence of AI on Health Care
When deciding to purchase a new device, or place a new drug on formulary, its intended use, the data supporting its use, and its potential effects on quality and value of patient care are usually considered. This assessment goes beyond evaluating accuracy of product claims. Trade-offs are constantly being made between different drugs, devices, and treatment approaches, including consideration of opportunity cost when resources are expended in one area but not another. Health care systems are likely paying attention to the analytics platform, model accuracy and calibration, and data curation but are likely paying less attention to whether the AI tools are achieving expected change. This is similar to building a pharmacy but not managing the formulary. Effective AI pipelines should go beyond design with change in mind and evaluate whether change is realized. For example, systems can exploit the learning health system model to embed formal evaluations of effects on, and outcomes of, health care operations.9 Results inform decisions about redesign (of the model or the intervention) as well as replacement, and sometimes removal, of a failed AI tool.
The growing understanding of AI tools among clinicians, administrators, and patients is improving transparency of the modeling process. Achieving benefit, however, requires focusing on the anticipated change that will be made in the health care system. Transforming the AI pipeline to more fully connect data science with clinical and operational needs in defining the desired change will shorten the cycle from innovation to positive influence. Evaluating the resulting changes and outcomes then becomes a core facet of managing the AI tool pipeline. Being intentional about matching the algorithm to the problem, and not the other way around, will be important in attempting to usher in the era of AI-informed health care.
Corresponding Author: Christopher J. Lindsell, PhD, Department of Biostatistics, Vanderbilt University Medical Center, 2525 West End Ave, Ste 1100, Nashville, TN 37023 (email@example.com).
Published Online: May 1, 2020. doi:10.1001/jama.2020.5035
Conflict of Interest Disclosures: Dr Lindsell reported that he is listed as an inventor on US Patent 19,267,175 (“Multi-biomarker-based outcome risk stratification model for adult septic shock”) and US Patent 29,238,841 (“Multi-biomarker-based outcome risk stratification model for pediatric septic shock”) and that his institution receives funding from Endpoint Health Inc for predictive clinical trials and predictive modeling in critical illness. No other authors reported disclosures.
Lindsell CJ, Stead WW, Johnson KB. Action-Informed Artificial Intelligence—Matching the Algorithm to the Problem. JAMA. Published online May 01, 2020. doi:10.1001/jama.2020.5035