Viewpoint
Innovations in Health Care Delivery
February 9, 2016

Machine Learning and the Profession of Medicine

Author Affiliations
  • Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, California
JAMA. 2016;315(6):551-552. doi:10.1001/jama.2015.18421

Must a physician be human? A new computer agent, “Ellie,” developed at the Institute for Creative Technologies, asks questions as a clinician might, such as “How easy is it for you to get a good night’s sleep?” Ellie then analyzes the patient’s verbal responses, facial expressions, and vocal intonations, potentially detecting signs of posttraumatic stress disorder, depression, or other medical conditions. In a randomized study, 239 participants were told either that Ellie was “controlled by a human” or that it was “a computer program.” Those who believed the latter revealed more personal material to Ellie, based on blind ratings and self-reports.1 In China, millions of people turn to Microsoft’s chatbot, “Xiaoice,”2 when they need a “sympathetic ear,” despite knowing that Xiaoice is not human. Xiaoice develops an attuned personality and sense of humor by methodically mining the Internet for real text conversations. Xiaoice also learns about users from their reactions over time and becomes sensitive to their emotions, modifying its responses accordingly, all without human instruction. Ellie and Xiaoice are products of machine learning technology.
