Invited Commentary
Health Informatics
July 23, 2021

Using Artificial Intelligence to Make Use of Electronic Health Records Less Painful—Fighting Fire With Fire

Richard J. Baron, MD1
Author Affiliations
  • 1American Board of Internal Medicine, Philadelphia, Pennsylvania
JAMA Netw Open. 2021;4(7):e2118298. doi:10.1001/jamanetworkopen.2021.18298

Sometimes it takes a computer to solve problems created by a computer. The travails of using electronic health records (EHRs) are well known and widely experienced. The literature identifies EHRs both as a major time sink for clinicians1 and as a major driver of burnout and frustration.2 Electronic health records have improved many things at the point of care, especially when used within the same practice or system. Being able to see trended laboratory or vital sign data in tabular or graphic format, seamless prescription ordering, and consistent access to concurrent data, such as telephone calls, generated in real time by colleagues in practice are all definite improvements over the old days of paper records. However, one thing is clearly worse: dealing with scanned copies of old records. They are unindexed, unlabeled, and, once uploaded as unidentified PDFs, completely inscrutable. One waits for the PDF to load, hoping against hope that one is not about to see a fax cover sheet or a copy of an insurance preauthorization. The time and energy their review demands are considerable, and one is tempted to ignore such records altogether.

The study by Chi et al3 in JAMA Network Open offers a ray of hope that a promising solution could be on the horizon. Using a clever research design simulating real practice, the authors took assorted bundles of scanned old records and put them in front of clinicians to review, asking the clinicians to answer 22 questions about each packet, each requiring information found somewhere in the stack. A timer started once a question was presented, so the authors were able to track how much time the clinicians spent finding the information in the records. The researchers also built an artificial intelligence (AI) tool that learned to look at the PDFs and categorize the documents within them: Is this a pathology report? A procedure? An imaging report? The tool could also extract the date of each document and combine “pages” into “documents” so that a multipage report could be recognized as such. They subjected the scanned records to the AI algorithm and presented the results to the participating clinicians using a web interface that described the document type and date and hyperlinked to the actual document within the scanned record. After obliging each clinician in the study to undergo a standardized short web-based training on how to use the output of the AI tool, they presented each clinician with 1 record that had been prescreened by the AI algorithm and 1 that had not, distributing the records among 12 willing clinicians. They compared the time it took to find the information, as well as the accuracy of the information found, using the 2 different approaches: one supported by the AI tool and the other the old-fashioned way. These first-time physician users spent 18% less time finding the information when supported by the AI tool, with comparable accuracy. Eleven of the 12 volunteering clinicians in the study said they would use this tool if it were available to them and that they would recommend it to colleagues.
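
Chi et al3 do not detail their implementation here, but the pipeline described above (label each scanned page with a document type, pull out a date, and merge consecutive pages of the same type into one multipage document) can be illustrated in a few lines. The Python fragment below is a minimal sketch under stated assumptions, not the authors' method: the keyword matcher stands in for their trained classifier, and every name in it (KEYWORDS, classify_page, group_pages) is hypothetical.

    import re
    from dataclasses import dataclass
    from typing import List, Optional

    # Hypothetical stand-in for the study's trained classifier; a real
    # system would use a learned model, not keyword matching.
    KEYWORDS = {
        "pathology report": ("specimen", "biopsy", "histolog"),
        "imaging report": ("impression", "radiolog", "ct ", "mri"),
        "procedure report": ("colonoscopy", "endoscop", "procedure"),
    }

    DATE_RE = re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b")

    @dataclass
    class Document:
        doc_type: str
        date: Optional[str]
        first_page: int
        last_page: int

    def classify_page(text: str) -> str:
        """Label one OCR'd page with a document type (crude keyword version)."""
        lowered = text.lower()
        for doc_type, cues in KEYWORDS.items():
            if any(cue in lowered for cue in cues):
                return doc_type
        return "other"

    def group_pages(pages: List[str]) -> List[Document]:
        """Merge consecutive pages of one type into a multipage document."""
        docs: List[Document] = []
        for i, text in enumerate(pages, start=1):
            doc_type = classify_page(text)
            match = DATE_RE.search(text)
            date = match.group(0) if match else None
            if docs and docs[-1].doc_type == doc_type:
                docs[-1].last_page = i  # extend the current document
                docs[-1].date = docs[-1].date or date
            else:
                docs.append(Document(doc_type, date, i, i))
        return docs

    # A clinician-facing index like the study's web interface would list
    # each document's type and date, hyperlinked to its page range.
    for doc in group_pages([
        "Specimen received 03/14/2019; biopsy of gastric antrum ...",
        "... histology continued on this page ...",
        "Colonoscopy performed 06/02/2020. Findings: ...",
    ]):
        print(f"{doc.doc_type} ({doc.date}): pages {doc.first_page}-{doc.last_page}")

Even this toy version conveys the design: the index the clinician sees carries only type, date, and page range, so the AI does the sorting while the human retains one-click access to the underlying page.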

Of course, there are many limitations to this study. It included 12 willing physicians at 1 institution; it was performed in the specialty of gastroenterology; and the tool miscategorized some documents, with uncertain clinical or patient safety consequences. However, it can and should be understood as a proof of concept: AI can be used as a first-pass technology to reduce the workload of a clinician who must wade through voluminous old records. The users had a variety of suggestions for improving the process; even so, the fact that, after only a short online training, the physicians saved such a meaningful amount of time and were positively disposed to having the tool available in real life is truly impressive.

One can imagine a variety of directions from here: the AI could be embedded in the EHR itself, someone could develop it as a commercial product, or it could be incorporated in the process of scanning records. Any of these approaches would be welcome relief to hard-pressed clinicians drowning in seas of unstructured data.

As much as we can and should celebrate the promise of this breakthrough, it is worth noting that there are several other, better ways to solve this problem. If we had truly robust standards for electronic data interchange and less anxiety about privacy, these kinds of data could be moved around more freely in a structured format; indeed, in some regional health information exchanges they already are. The data could also be created in structured format to begin with. The very gastroenterologists participating in the study are paid to perform procedures regardless of the format in which the procedure report is produced; one could imagine a world in which failure to produce a machine-readable, structured procedure report precluded being paid at all. There are glimmers of progress, and communities in which these solutions are more attainable than in others; however, as long as we live in the Babel of free-form, non-interoperable medical documentation, an AI tool that supports the humans who care for patients by wading through volumes of scanned documents can be a real contribution.

Article Information

Published: July 23, 2021. doi:10.1001/jamanetworkopen.2021.18298

Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2021 Baron RJ. JAMA Network Open.

Corresponding Author: Richard J. Baron, MD, American Board of Internal Medicine, 510 Walnut St, Ste 1700, Philadelphia, PA 19106 (rbaron@abim.org).

Conflict of Interest Disclosures: None reported.

References
1. Sinsky C, Colligan L, Li L, et al. Allocation of physician time in ambulatory practice: a time and motion study in 4 specialties. Ann Intern Med. 2016;165(11):753-760. doi:10.7326/M16-0961
2. Shanafelt TD, Dyrbye LN, Sinsky C, et al. Relationship between clerical burden and characteristics of the electronic environment with physician burnout and professional satisfaction. Mayo Clin Proc. 2016;91(7):836-848. doi:10.1016/j.mayocp.2016.05.007
3. Chi EA, Chi G, Tsui CT, et al. Development and validation of an artificial intelligence system to optimize clinician review of patient records. JAMA Netw Open. 2021;4(7):e2117391. doi:10.1001/jamanetworkopen.2021.17391