Table 1. Study Demographics
Table 2. Differences in Communication Outcomes by Degree of Clinician Computer Use in Safety-Net Encounters
Research Letter
January 2016

Association Between Clinician Computer Use and Communication With Patients in Safety-Net Clinics

Author Affiliations
  • 1Division of General Internal Medicine, University of California, San Francisco (UCSF)
  • 2UCSF Center for Vulnerable Populations at San Francisco General Hospital, San Francisco
  • 3Department of Medicine, Oregon Health & Science University, Portland
  • 4VA Portland Health Care System, Portland, Oregon
  • 5John Burns School of Medicine, University of Hawaii, Honolulu
  • 6Division of Rheumatology, UCSF
  • 7Institute for Health Policy Studies, UCSF
JAMA Intern Med. 2016;176(1):125-128. doi:10.1001/jamainternmed.2015.6186

Safety-net clinics serve populations with limited proficiency in English and limited health literacy who experience communication barriers that contribute to disparities in care and health.1 Implementation of electronic health records in safety-net clinics may affect communication between patients and health care professionals.2 We studied associations between clinician computer use and communication with patients with diverse chronic diseases in safety-net clinics.

Methods

This observational study took place from November 1, 2011, to November 30, 2013, at an academically affiliated public hospital with a basic electronic health record for reviewing test results, tracking health care maintenance, prescribing medications, and referring patients. Some clinics (internal medicine and diabetes) required typed visit documentation, which was optional in other clinics (family medicine, cardiology, and rheumatology).

Eligible adults who spoke English or Spanish had specific chronic conditions and received primary and subspecialty care (Table 1). Physicians, nurse practitioners, fellows, and residents could decline participation or designate patients as ineligible. Research assistants enrolled and interviewed patients by telephone before appointments, videotaped the subsequent visit, and interviewed patients after the visit. Clinician participants completed paper or online questionnaires. Data analysis was conducted from March 12, 2013, to September 11, 2015. All clinicians and patients provided written informed consent; patients provided verbal consent via telephone before the baseline interview. The University of California, San Francisco, Institutional Review Board approved this study.

The clinician computer use score summed the following 4 coder ratings (Cronbach α, .67): amount of review of computer data, typing or clicking the computer mouse, eye contact with patients, and noninteractive pauses.2-4 Ratings for eye contact were reversed, because more eye contact indicates less computer use; higher total scores (range, 0-12) thus indicated more computer use. Interrater reliability was 0.90 (4 videos), and we validated the score by calculating its correlation (0.66) with clinician and patient statements occurring during computer use (33 encounters). After visits, patients rated the quality of medical care received in the past 6 months (poor to excellent).
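As a hypothetical illustration of the composite score (the original ratings were assigned by trained coders; a 0-3 scale per item is an assumption consistent with the stated 0-12 range, and the function name is invented for this sketch):

```python
def computer_use_score(review, typing, eye_contact, pauses, max_rating=3):
    """Sum 4 coder ratings into a 0-12 computer use score.

    Each rating is assumed to lie on a 0-3 scale. Eye contact is
    reverse-scored: more eye contact implies less computer use, so it
    contributes (max_rating - eye_contact) to the total.
    """
    for r in (review, typing, eye_contact, pauses):
        if not 0 <= r <= max_rating:
            raise ValueError("each rating must be between 0 and max_rating")
    return review + typing + (max_rating - eye_contact) + pauses

# Maximal computer-related behavior with no eye contact yields the
# maximum score; constant eye contact with no other behavior yields 0.
print(computer_use_score(3, 3, 0, 3))  # 12
print(computer_use_score(0, 0, 3, 0))  # 0
```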

We analyzed communication using the Roter Interaction Analysis System.5 Statements were assigned 1 of 37 codes (average interrater reliability, 0.74), which were summed in categories (Table 2). Rapport building included positive (eg, laughter or agreement), negative (eg, criticism or disagreement), emotional (eg, empathy or partnership), and social (“chit-chat”) behaviors. Positive affect summed ratings for emotional tone.

We categorized computer use scores into tertiles (Table 1). Multivariable analyses controlled for length of visit and for variables with bivariate associations (P < .10) with higher computer use (lower vs higher patient educational level, poorer vs better quality of life, nurse practitioners vs physicians, fewer vs more clinician practice years, and general medicine, family medicine, and diabetes clinics vs rheumatology and cardiology clinics). We performed generalized estimating equation regression to account for within-clinician correlation (Stata/SE, version 12.1; StataCorp LP), after multilevel regression showed minimal within-patient correlation.
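A minimal sketch of the tertile split described above, assuming a rank-order division into equal thirds (the scores and the helper function below are hypothetical; the actual cut points depend on the observed distribution):

```python
def tertiles(scores):
    """Label each score "low", "moderate", or "high" by rank-order thirds."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    labels = [None] * len(scores)
    n = len(scores)
    for rank, i in enumerate(order):
        if rank < n / 3:
            labels[i] = "low"
        elif rank < 2 * n / 3:
            labels[i] = "moderate"
        else:
            labels[i] = "high"
    return labels

# Six hypothetical 0-12 computer use scores split into thirds.
print(tertiles([2, 7, 11, 4, 9, 6]))
```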

Results

We recorded 71 encounters among 47 patients (38%) and 39 clinicians (83%) (Table 1). Compared with patients in encounters with low computer use, patients in encounters with high computer use were less likely to rate care as excellent (12 of 25 [48%] vs 16 of 19 [83%] patients; P = .04) and used more social rapport building, such as “You like wearing your hair that way…” (adjusted difference in number of statements made during the encounter, 9.6; P = .04). Patients in encounters with moderate computer use used less positive rapport building, such as “Thank you” (–18.3; P < .01), but demonstrated more positive affect tone (2.4; P = .02).

Clinicians in encounters with high computer use engaged in more negative rapport building, using statements such as “No, it looks like [your specialist] filled that medication for you. It has a refill.” (2.7; P < .01). They also used more social rapport building, with statements such as “I’m looking at a few different jobs” (9.7; P < .01), and demonstrated less positive affect (–4.1; P < .01) (Table 2).

Discussion

High computer use by clinicians in safety-net clinics was associated with lower patient satisfaction and observable communication differences. Although social rapport building can build trust and satisfaction,6 concurrent computer use may inhibit authentic engagement, and multitasking clinicians may miss openings for deeper connection with their patients. Information in the electronic health record may trigger disagreement by clinicians as they detect and clarify patient misunderstandings.

The limitations of this study include possible volunteer bias, recall bias with the satisfaction measure, confounding, and effects on eye contact ratings by noncomputer tasks. Also, not all associations were consistent as computer use increased; moderate computer use by clinicians was associated with mixed patient communication differences.

Software, structural, and curricular interventions7 should support clinicians’ use of the electronic health record in ways that enhance their capacity to communicate with and care for diverse patients.

Article Information

Corresponding Author: Neda Ratanawongsa, MD, MPH, Division of General Internal Medicine, University of California, San Francisco, 1001 Potrero Ave, PO Box 1364, San Francisco, CA 94110 (neda.ratanawongsa@ucsf.edu).

Published Online: November 30, 2015. doi:10.1001/jamainternmed.2015.6186.

Author Contributions: Dr Ratanawongsa had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Study concept and design: Ratanawongsa, Yelin, Schillinger.

Acquisition, analysis, or interpretation of data: Ratanawongsa, Barton, Lyles, Wu, Martinez.

Drafting of the manuscript: Ratanawongsa, Yelin.

Critical revision of the manuscript for important intellectual content: Barton, Lyles, Wu, Yelin, Martinez, Schillinger.

Statistical analysis: Ratanawongsa, Yelin.

Obtained funding: Ratanawongsa, Schillinger.

Administrative, technical, or material support: Barton, Lyles, Wu, Martinez, Schillinger.

Study supervision: Ratanawongsa, Yelin, Schillinger.

Conflict of Interest Disclosures: None reported.

Funding/Support: This research was supported by grants 1K08HS022561 (Dr Ratanawongsa) and K99HS022408 (Dr Lyles) from the Agency for Healthcare Research & Quality and award number KL2TR000143 (Drs Ratanawongsa and Lyles) from the National Center for Advancing Translational Sciences of the National Institutes of Health and by the Health Delivery Systems Center for Diabetes Translational Research funded through grant 1P30-DK092924 from the National Institute of Diabetes and Digestive and Kidney Diseases (Dr Schillinger). Drs Ratanawongsa and Barton were fellows supported by the Pfizer Medical Academic Partnership Fellowship in Health Literacy, under the mentorship of Drs Schillinger and Yelin.

Role of the Funder/Sponsor: The funding sources had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Disclaimer: The views expressed herein are solely the responsibility of the authors and do not necessarily represent the official views of the Agency for Healthcare Research & Quality or the National Institutes of Health.

Previous Presentation: Preliminary data from this article were presented at the International Conference on Communication in Healthcare; September 30, 2013; Montreal, Quebec, Canada.

References
1.
Berkman  ND, Sheridan  SL, Donahue  KE, Halpern  DJ, Crotty  K.  Low health literacy and health outcomes: an updated systematic review.  Ann Intern Med. 2011;155(2):97-107.PubMedGoogle ScholarCrossref
2.
Frankel  R, Altschuler  A, George  S,  et al.  Effects of exam-room computing on clinician-patient communication: a longitudinal qualitative study.  J Gen Intern Med. 2005;20(8):677-682.PubMedGoogle ScholarCrossref
3.
Rouf  E, Whittle  J, Lu  N, Schwartz  MD.  Computers in the exam room: differences in physician-patient interaction may be due to physician experience.  J Gen Intern Med. 2007;22(1):43-48.PubMedGoogle ScholarCrossref
4.
Margalit  RS, Roter  D, Dunevant  MA, Larson  S, Reis  S.  Electronic medical record use and physician-patient communication: an observational study of Israeli primary care encounters.  Patient Educ Couns. 2006;61(1):134-141.PubMedGoogle ScholarCrossref
5.
Roter  D, Larson  S.  The Roter interaction analysis system (RIAS): utility and flexibility for analysis of medical interactions.  Patient Educ Couns. 2002;46(4):243-251.PubMedGoogle ScholarCrossref
6.
Mauksch  LB, Dugdale  DC, Dodson  S, Epstein  R.  Relationship, communication, and efficiency in the medical encounter: creating a clinical model from a literature review.  Arch Intern Med. 2008;168(13):1387-1395.PubMedGoogle ScholarCrossref
7.
Duke  P, Frankel  RM, Reis  S.  How to integrate the electronic health record and patient-centered communication into the medical visit: a skills-based approach.  Teach Learn Med. 2013;25(4):358-365.PubMedGoogle ScholarCrossref