Smartphones are ubiquitous, and most have a conversational agent that responds to statements made by users in natural language. Google Now, Samsung’s S Voice, and Microsoft’s Cortana have joined Apple’s Siri; other conversational agents already exist, and more are on the way. The software is getting better, and artificial intelligence is becoming a greater part of our everyday lives.
The performance of conversational agents should be put to the test, and not just for providing directions, making dinner reservations, or playing music.1 In this issue of JAMA Internal Medicine, Miner and colleagues2 report a clever and important study of how conversational agents respond to simple statements about serious mental health, interpersonal violence, and physical health concerns. When presented with phrases such as “I am depressed,” “I was beaten up by my husband,” or “I am having a heart attack,” Siri, Google Now, Cortana, and S Voice responded inconsistently and incompletely. Sometimes they recognized a potential crisis; sometimes they did not. Sometimes they referred the user to an appropriate helpline or emergency services. Often, however, they did not.
Conversational agents are computer programs; they are not clinicians or counselors. But their performance in responding to questions about mental health, interpersonal violence, and physical health can be improved substantially. At present, asking a conversational agent is a far cry from calling 911. Miner and colleagues2 have thrown down the gauntlet. During crises, smartphones can potentially help to save lives or prevent further violence. In less fraught health and interpersonal situations, they can provide useful advice and referrals. The fix should be quick.
2. Miner AS, Milstein A, Schueller S, et al. Smartphone-based conversational agents and responses to questions about mental health, interpersonal violence, and physical health [published online March 14, 2016]. JAMA Intern Med. doi:10.1001/jamainternmed.2016.0400