Figure 1.  Percentage of Patients at Each Level of Worry, Stratified by Normal vs Not Normal Test Results
Figure 2.  Adjusted Pooled Odds Ratios (ORs) Using a Random-Effects Model of Patient Worry as a Function of Whether a Test Result Was Not Normal

Normal test result was the reference value. I2 for heterogeneity was less than 0.01%, suggesting very low intersite heterogeneity. Markers indicate ORs, with horizontal lines indicating 99% CIs; diamond indicates the pooled estimate, with outer points of the diamond indicating the 99% CI of the pooled estimate. CU Anschutz indicates University of Colorado Anschutz Medical Center; UC Davis Health, University of California, Davis Health; RE, random-effects; UTSW, University of Texas Southwestern Medical Center; and VUMC, Vanderbilt University Medical Center.

Figure 3.  Adjusted Pooled Odds Ratios (ORs) Using a Random-Effects Model Evaluating the Association Between Precounseling Patients About the Reasons for Ordering a Test and Level of Worry

I2 for heterogeneity was 36.50%, suggesting moderate intersite heterogeneity. Markers indicate ORs, with horizontal lines indicating 99% CIs; diamond indicates the pooled estimate, with outer points of the diamond indicating the 99% CI of the pooled estimate. CU Anschutz indicates University of Colorado Anschutz Medical Center; UC Davis Health, University of California, Davis Health; RE, random-effects; UTSW, University of Texas Southwestern Medical Center; and VUMC, Vanderbilt University Medical Center.

Table 1.  Survey Responses and Respondent Demographics
Table 2.  Patient Portal Preferences
2 Comments for this article
Withholding Abnormal Results Until Appointment Time?
Hojin Seo, Medical Student | Rocky Vista University
Thank you to the authors for writing about this topic. In a society with limitless information available at our fingertips, it is easy to want to know everything right away. As important as it is to respect patients' autonomy, health care practitioners need considerable tact to balance the increased worry and burden that accompany test results. Given that this study showed receiving a not normal result was associated with increased worry, would it be to a patient's benefit for providers to withhold only abnormal results until they can meet for an appointment? Or would this, in a way, encroach on a patient's right to have immediate access to their own health information?
CONFLICT OF INTEREST: None Reported
Author Response to Hojin Seo
Bryan Steitz, PhD | Vanderbilt University Medical Center
Thank you for this important comment. The 21st Century Cures Act mandates the immediate availability of electronic health information, including test results, upon patient request. This improved information availability offers numerous benefits to patients and clinicians, including allowing patients to take greater ownership of their health care and to better participate in shared decision making. Our results indicated that not normal results were associated with increased worry. However, considering the finding that 95.3% of respondents who received not normal results would like to continue to be able to access immediately released test results, it is difficult to recommend withholding all abnormal results until a later appointment, which is often scheduled weeks or months after the result is available. In our conversations with patients, we consistently hear that bad news is bad news, regardless of where and when it is received. Many patients may prefer to receive bad news in the comfort of their home or surrounded by friends and family. Allowing patients this opportunity, when desired, gives them time to process the news and develop a set of questions to guide conversations with their clinician and ensure that they have an active voice in their health care. An additional real-world consideration is that withholding only not normal results may be confusing: is a result not showing up because it is not normal or because it has not yet been finalized?

Rather than reverting to delaying or withholding test results until a clinician deems it most appropriate, there is significant opportunity to develop and refine innovative ways to deliver test results that allow patients to indicate their specific preferences. For example, our research found weak evidence that precounseling may help to reduce worry from abnormal test results, but there is no established best practice for how or when to best precounsel patients. There is also opportunity to allow patients to indicate how or when they are notified about a test result; moving from an opt-out to an opt-in model for notifications allows greater flexibility to accommodate patient preferences. Improving information sharing is a step toward promoting patient autonomy and allowing patients to engage more fully as members of their health care team.
CONFLICT OF INTEREST: None Reported
Original Investigation
Health Policy
March 20, 2023

Perspectives of Patients About Immediate Access to Test Results Through an Online Patient Portal

Author Affiliations
  • 1Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, Tennessee
  • 2Department of Emergency Medicine, UT Southwestern Medical Center, Dallas, Texas
  • 3Clinical Informatics Center, UT Southwestern Medical Center, Dallas, Texas
  • 4Department of Medicine, University of Colorado Anschutz Medical Campus, Aurora
  • 5Department of Clinical Informatics, University of California Davis Health, Sacramento
  • 6Department of Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts
  • 7Department of Medicine, Harvard Medical School, Boston, Massachusetts
  • 8Department of Pediatrics, UT Southwestern Medical Center, Dallas, Texas
  • 9Department of Insights and Operations, Vanderbilt University Medical Center, Nashville, Tennessee
  • 10Department of Ophthalmology and Visual Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
  • 11Department of Biostatistics, Vanderbilt University Medical Center, Nashville, Tennessee
  • 12Department of Medicine, Vanderbilt University Medical Center, Nashville, Tennessee
  • 13Department of Pediatrics, Vanderbilt University Medical Center, Nashville, Tennessee
JAMA Netw Open. 2023;6(3):e233572. doi:10.1001/jamanetworkopen.2023.3572
Key Points

Question  What are patient attitudes and perspectives related to viewing immediately released test results through an online patient portal?

Findings  In this survey study of 8139 respondents at 4 US academic medical centers, 96% of patients preferred receiving immediately released test results online even if their health care practitioner had not yet reviewed the result. A subset of respondents experienced increased worry after receiving abnormal results.

Meaning  In this study, most patients supported receiving immediately released test results via a patient portal, but some patients experienced increased worry, especially when test results were abnormal.

Abstract

Importance  The 21st Century Cures Act Final Rule mandates the immediate electronic availability of test results to patients, likely empowering them to better manage their health. Concerns remain about unintended effects of releasing abnormal test results to patients.

Objective  To assess patient and caregiver attitudes and preferences related to receiving immediately released test results through an online patient portal.

Design, Setting, and Participants  This large, multisite survey study was conducted at 4 geographically distributed academic medical centers in the US using an instrument adapted from validated surveys. The survey was delivered in May 2022 to adult patients and care partners who had accessed test results via an online patient portal account between April 5, 2021, and April 4, 2022.

Exposures  Access to test results via a patient portal between April 5, 2021, and April 4, 2022.

Main Outcomes and Measures  Responses to questions related to demographics, test type and result, reaction to result, notification experience and future preferences, and effect on health and well-being were aggregated. To evaluate characteristics associated with patient worry, logistic regression and pooled random-effects models were used to assess level of worry as a function of whether test results were perceived by patients as normal or not normal and whether patients were precounseled.

Results  Of 43 380 surveys delivered, there were 8139 respondents (18.8%). Most respondents were female (5129 [63.0%]) and spoke English as their primary language (7690 [94.5%]). The median age was 64 years (IQR, 50-72 years). Most respondents (7520 of 7859 [95.7%]), including 2337 of 2453 individuals (95.3%) who received nonnormal results, preferred to immediately receive test results through the portal. Few respondents (411 of 5473 [7.5%]) reported that reviewing results before they were contacted by a health care practitioner increased worry, though increased worry was more common among respondents who received abnormal results (403 of 2442 [16.5%]) than those whose results were normal (294 of 5918 [5.0%]). The result of the pooled model for worry as a function of test result normality was statistically significant (odds ratio [OR], 2.71; 99% CI, 1.96-3.74), suggesting an association between worry and nonnormal results. The result of the pooled model evaluating the association between worry and precounseling was not significant (OR, 0.70; 99% CI, 0.31-1.59).

Conclusions and Relevance  In this multisite survey study of patient attitudes and preferences toward receiving immediately released test results via a patient portal, most respondents preferred to receive test results via the patient portal even when this meant viewing results before discussing them with a health care professional. This preference persisted among patients with nonnormal results.

Introduction

The US Office of the National Coordinator for Health Information Technology’s Final Rule implementing the information-blocking portion of the 21st Century Cures Act went into effect on April 5, 2021. The Final Rule mandates the immediate electronic availability of nearly all test results, medication lists, and clinical notes to patients and care partners upon their request.1 Improved access to personal health information allows patients to manage their health care and supports coordination efforts among patients, care partners, and health care teams.2-4 However, the benefit to immediate release of test results may be offset by unintended consequences to patient well-being and confidentiality.5,6

Online patient portals have emerged as important tools for facilitating engagement and enabling patients to access health information from their medical records, review educational resources, participate in medical decision-making, and communicate with clinicians.7-9 Prior to the Cures Act, individual health systems could choose which health information to share via portals. Many health systems shared laboratory and imaging results; some also shared clinical notes.10-12 However, many health systems suppressed or delayed the release of certain results, practices collectively defined as information blocking. These delays were intended to provide health care practitioners time to review and discuss results with patients when indicated. Delays and suppression were common for results associated with misinterpretation or emotional distress (eg, HIV testing, genetic testing for Huntington disease, or tissue biopsy results concerning for malignant tumors).13 Early research suggested that immediate release of test results was associated with more patients viewing their health data. A study14 conducted after the Final Rule went into effect showed a 4-fold increase in the number of results viewed by patients prior to clinician counseling and a doubling of the number of patient-initiated messages sent to clinicians within 6 hours of viewing results.

Full access to medical records has been advocated as a strategy for strengthening patient-clinician relationships.15-17 Most patients want unrestricted access to their medical records.18,19 The OpenNotes collaborative established the immediate release of clinical notes (ie, open notes) as best practice.11,20-22 However, the practice of immediately releasing test results without context provided by clinician counseling (ie, open results) remains controversial.5 While portal users may be satisfied receiving test results online, portals provide inadequate guidance on how to interpret sensitive or abnormal results, which may contribute to negative emotions.5,23

Some patients and clinicians prefer to discuss sensitive or abnormal results synchronously to review results, answer questions, and formulate a treatment plan.24 Pilot studies suggest varied patient preferences about how and when to receive results.23,25 Result release strategies should align with patient preferences and minimize distress. To best design release strategies, we first must understand patient attitudes and preferences related to open results, which have not been widely studied. To address this gap, we surveyed a large cohort of patients and care partners receiving immediately released test results via a patient portal at 4 geographically diverse academic medical centers.

Methods
Study Setting and Participants

This survey study was fielded at 4 US academic medical centers serving diverse geographic regions, including the Pacific West (University of California, Davis Health [UC Davis Health]), Rocky Mountain Region (University of Colorado Anschutz Medical Center [CU Anschutz]), Southwest (University of Texas Southwestern Medical Center [UTSW]), and Southeast (Vanderbilt University Medical Center [VUMC]). Eligible participants included English-speaking adult patients and their designated caregivers with email addresses documented in the electronic health record (EHR) who accessed test results via a patient portal in the calendar year after the implementation of the Cures Act (April 5, 2021, to April 4, 2022). UTSW, UC Davis Health, and VUMC recruited participants from registries of patients who had previously consented to be contacted for research.26 CU Anschutz did not have a comparable registry and instead invited all eligible patients who had viewed results on the portal in the month preceding the study. All survey sites use the Epic EHR and MyChart patient portal (Epic Systems Corporation). The institutional review board at each site approved all study procedures and granted waivers of informed consent since patient identifiers were not collected. We followed the American Association for Public Opinion Research (AAPOR) reporting guideline.

Survey Instrument

We adapted a previously validated instrument designed to evaluate patient perceptions about open notes11,27 and later about the immediate release of COVID-19 test results.23 We piloted this instrument with VUMC’s Patient Advisory Council.25 The instrument included 29 questions in 6 domains: (1) demographics and portal user role, (2) test result information, (3) result review behaviors, (4) education and health care practitioner follow-up, (5) effects on health and well-being, and (6) preferences for future results. Respondents who reviewed multiple test results during the study period could select multiple result types in their response.

We implemented the survey using REDCap.28 The instrument is available in the eAppendix in Supplement 1. We built the first REDCap project at VUMC and replicated it at the other sites using REDCap’s sharing tools. This facilitated identical content except for site-specific branding.

Survey Procedure

In May 2022, we emailed eligible participants an explanation of the study and a survey link. Participants who did not initially complete the survey received 2 follow-up emails sent approximately 10 days apart. The survey remained open for 33 days. Each site managed local survey distribution and data collection. Participants were not compensated.

Statistical Analysis

We calculated descriptive statistics from each site for all survey questions. Question-level data are reported as the count and percentage of responses among available participants for each respective question. We also computed descriptive statistics, stratifying by whether patients were precounseled (eTable in Supplement 1).

We then evaluated participant-reported level of worry as a function of whether participants perceived test results as normal or not normal and whether they were precounseled (ie, the reason for the test was explained before testing). Worry was represented as an ordinal categorical variable with the following values: (1) “I was never worried,” (2) “much less worried,” (3) “less worried,” (4) “no change,” (5) “more worried,” and (6) “much more worried.” The independent variables (test result normality and precounseling) were represented as dichotomous variables. The not normal category was an aggregation of responses of “not normal,” “other,” and “unknown” on the survey question.
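To make this coding concrete, the sketch below shows one way these variables might be represented in R, the analysis language named later in this section. The column names, level labels, and toy data are illustrative assumptions based on the survey wording, not the study's actual data dictionary.

```r
# Illustrative sketch of the variable coding described above; column and
# level names are assumptions, not the study's actual data dictionary.
worry_levels <- c("I was never worried", "much less worried", "less worried",
                  "no change", "more worried", "much more worried")

toy <- data.frame(
  worry_raw  = c("no change", "more worried", "less worried"),
  result_raw = c("Normal", "Not normal", "Unknown"),
  precounsel = c("Yes", "No", "Yes")
)

# Ordinal outcome: six ordered levels of worry
toy$worry <- factor(toy$worry_raw, levels = worry_levels, ordered = TRUE)

# Dichotomous predictor: "not normal" aggregates the "not normal", "other",
# and "unknown" survey responses
toy$not_normal <- toy$result_raw %in% c("Not normal", "Other", "Unknown")

# Dichotomous predictor: reason for the test explained before testing
toy$precounseled <- toy$precounsel == "Yes"
```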

We plotted the all-site proportion of participants with each level of worry, stratified by normal vs not normal test results. We then performed a prospective meta-analysis using random-effects models to pool site-specific odds of worry as a function of test result normality and precounseling. Site-specific odds were calculated using multivariable proportional odds ordinal logistic regression models evaluating worry as a function of test result normality and whether participants were precounseled. For random-effects models, we used restricted maximum likelihood estimation for model generation and Hartung-Knapp-Sidik-Jonkman–style test statistics.29

Site-specific models were adjusted by identical covariates to account for potential confounding. We selected candidate covariates based on clinical expertise and known patient portal disparities. We collapsed covariates representing less than 2% of the study population into “other” categories. We required all sites to use identical models to facilitate meta-analysis via the random-effects model; thus, covariates with insufficient samples at any of the sites were removed from all 4 sites’ models. We evaluated for collinearity using Spearman correlation coefficients. No covariate pairs had correlation coefficients greater than 0.5, so we did not remove any covariates due to collinearity. Finally, variables were removed for missingness over 30%. We performed random forest imputation using all candidate variables (including those omitted due to missingness).30,31 The selection process yielded the following covariates: age, comorbidity count, employment status, health care worker status, ethnicity, race, language, test type, precounseling, and mode of contact regarding test results.

Race and ethnicity were assessed by participant self-report. Ethnicity categories were Spanish or Latino, and race categories were American Indian or Pacific Native, Asian, Black or African American, Native Hawaiian or Pacific Islander, White, and other (listed as an option on the survey). Continuous variables (comorbidity count and age) were modeled as restricted cubic splines with 3 knots. Restricted cubic splines are piecewise polynomials that allow models to account for nonlinear relationships and are restricted to linear functions in the tails to avoid erratic behavior at the extremes.32 Before model fitting, we examined the proportional odds assumption using univariate models plotted across outcome strata using the mean value of each variable per stratum.33
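The sketch below illustrates, in R, the general shape of a single-site analysis of this kind: random forest imputation by chained equations followed by a proportional odds ordinal logistic regression with restricted cubic splines (3 knots) for continuous covariates, using the rms and mice packages referenced in this section. The simulated data, variable names, and reduced covariate set are assumptions for illustration; this is not the authors' code.

```r
# Minimal single-site sketch, assuming simulated data and a reduced covariate set.
library(rms)    # lrm() and rcs()
library(mice)   # multiple imputation; method = "rf" uses random forests

set.seed(1)
d <- data.frame(
  worry         = factor(sample(1:6, 300, replace = TRUE), ordered = TRUE),
  not_normal    = rbinom(300, 1, 0.3),
  precounseled  = rbinom(300, 1, 0.9),
  age           = round(runif(300, 18, 90)),
  comorbidities = rpois(300, 2)
)
d$age[sample(300, 20)] <- NA   # introduce some missingness to impute

# Random forest imputation by chained equations; a single completed data set
# is used here for brevity
imp   <- mice(d, method = "rf", m = 1, printFlag = FALSE)
d_imp <- complete(imp, 1)

# Proportional odds ordinal logistic regression; continuous covariates are
# modeled as restricted cubic splines with 3 knots
dd <- datadist(d_imp)
options(datadist = "dd")
fit <- lrm(worry ~ not_normal + precounseled +
             rcs(age, 3) + rcs(comorbidities, 3), data = d_imp)
fit   # prints coefficients; exp(coef) gives adjusted odds ratios
```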

We also fit site-specific multivariable models including first- and second-order interaction terms to test for interactions between test result normality and precounseling. This was performed to assess a potential association between test result normality and precounseling, which might require the effects of each to be modeled differently given the status of the other. Pooled odds ratios (ORs) for worry as a function of test result normality and precounseling are reported along with I2 statistics for heterogeneity.34 We defined I2 less than 25% as low heterogeneity, between 25% and 50% as moderate heterogeneity, and greater than 50% as high heterogeneity.35 Analyses were performed using R, version 4.1.2 (R Project for Statistical Computing) with the rms package for single-site regression and the metafor package for meta-analysis.32,36 We set 1-sided P < .05 for likelihood ratio χ2 testing.
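The pooling step described above might look roughly like the following in R with the metafor package, using restricted maximum likelihood estimation and Hartung-Knapp-Sidik-Jonkman adjusted test statistics at the 99% confidence level. The site-level log odds ratios and standard errors shown here are hypothetical placeholders, not the study's estimates.

```r
# Sketch of the pooled random-effects step; site-level estimates are
# hypothetical placeholders, not the study's results.
library(metafor)

sites  <- c("CU Anschutz", "UC Davis Health", "UTSW", "VUMC")
log_or <- c(1.10, 0.85, 1.05, 0.95)   # illustrative adjusted log(OR) per site
se     <- c(0.20, 0.25, 0.18, 0.22)   # illustrative standard errors

# Restricted maximum likelihood estimation of between-site variance with
# Hartung-Knapp-Sidik-Jonkman adjusted test statistics and 99% CIs
res <- rma(yi = log_or, sei = se, slab = sites,
           method = "REML", test = "knha", level = 99)

summary(res)                            # pooled estimate, 99% CI, and I2
forest(res, transf = exp, refline = 1)  # forest plot on the odds ratio scale
```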

Results

Of 43 380 surveys delivered, there were 8139 participants (18.8%), of whom 5129 (63.0%) identified as female, 2895 (35.6%) as male, and 115 (1.4%) as other or unknown gender. A total of 120 (1.5%) were American Indian or Pacific Native; 250 (3.1%), Asian; 428 (5.3%), Black or African American; 23 (0.2%), Native Hawaiian or Pacific Islander; 6900 (84.8%), White; and 245 (3.0%) other race; 420 (5.2%) were Spanish or Latino. Most patients spoke English as their primary language (7690 [94.5%]). The median age of participants was 64 years (IQR, 50-72 years). Table 1 provides detailed respondent demographic characteristics.

A total of 6306 of 7856 respondents (80.3%) reported reviewing at least 1 test result in the past month, and 5767 of 6245 (92.3%) reported receiving precounseling. Most tests were blood tests (4730 of 6276 [75.4%]). Imaging or biopsies accounted for 3044 of 6276 tests (48.5%). Most respondents reported normal findings (3582 of 6246 [57.3%]) (Table 2). Among 6200 respondents who reviewed results, 5418 (87.4%) reported being contacted by a health care practitioner about the result. Most commonly, communication occurred through a patient portal message (3783 of 6200 [61.0%]), during a clinic or telemedicine visit (1157 of 6200 [18.7%]), or through a telephone call (1108 of 6200 [17.9%]). Of 5318 patients who sought additional information after reviewing their results, 2123 (39.9%) conducted an internet search.

When asked about their preferences for contacts about future test results, 7046 of 7814 respondents (90.2%) indicated that they would prefer result delivery via the patient portal. Nearly all respondents (7520 of 7859 [95.7%]) indicated that they wanted to receive results through the patient portal as soon as results were available, even if their health care practitioner had not yet reviewed a result. Furthermore, 2337 of 2453 respondents (95.3%) who received not normal test results similarly indicated that they wanted to continue to receive immediately released results through the portal.

As shown in Figure 1, few respondents (411 of 5473 [7.5%]) reported being more worried after viewing test results. Among respondents who viewed a result before being contacted by a health care practitioner, almost half (2513 of 5473 [45.9%]) reported feeling less worried after reviewing their results through the patient portal. Among those reporting not normal results, most reported less or no change in their level of worry (2039 of 2442 [83.5%]). However, respondents who viewed not normal results were more likely to report being more worried or much more worried than those who reported normal results (403 of 2442 [16.5%] vs 294 of 5918 [5.0%]) (Figure 1). Among respondents with not normal blood test and imaging results, 187 of 1168 (16.0%) and 146 of 833 (17.5%), respectively, reported more worry or much more worry compared with those with normal blood test and imaging results (123 of 3078 [4.0%] and 104 of 1791 [5.8%], respectively).

All single-site adjusted models evaluating worry as a function of test result normality had significant overall model and partial effects, suggesting an association between not normal results and increased worry. The only other covariates associated with worry were other language at UC Davis Health (OR, 0.30; 99% CI, 0.11-0.77) and precounseling at UC Davis Health (OR, 0.47; 99% CI, 0.24-0.92) and UTSW (OR, 0.64; 99% CI, 0.42-0.97).

The pooled random-effects model evaluating worry as a function of test result normality indicated that not normal results were associated with greater likelihood of worry compared with normal results (pooled OR, 2.71; 99% CI, 1.96-3.74). Figure 2 shows individual and pooled adjusted ORs. The I2 statistic for the pooled model was 0.01%, suggesting very low heterogeneity. While site-specific models from 2 sites suggested that precounseling might be associated with less likelihood of worry, results of the pooled random-effects model evaluating worry as a function of precounseling were not significant (pooled OR, 0.70; 99% CI, 0.31-1.59). The I2 for this pooled model was 36.50%, suggesting moderate heterogeneity. Figure 3 shows individual and pooled adjusted ORs. Additional site-specific models including interaction terms between test result normality and precounseling showed that the interaction was not significant.

Discussion

We surveyed a large cohort of patients and care partners at 4 geographically distributed academic medical centers who had accessed the patient portal at least once in the past year. Nearly all respondents (95.7%) wanted to continue to receive test results through the online patient portal immediately upon reporting and before being contacted by a health care practitioner. Most respondents indicated that reviewing results had either a positive effect or no effect on their level of worry. However, a subset of respondents with not normal results experienced additional worry. At 2 institutions, we observed reduced worry associated with precounseling before testing.

Few prior studies have investigated patient attitudes and preferences related to open results. Early work by Giardina et al5,37 found an association between receiving abnormal results and negative emotions, highlighting a need for more nuanced and customizable result release strategies. During the studies by Giardina et al,5,37 available test results were commonly released at tiered intervals through the patient portal based on sensitivity and perceived risk of misinterpretation.13 Since the Cures Act Final Rule, multiple studies have highlighted the risk of worry, the need to improve result interpretation by patients, and the need for medical counseling.18,23,38,39 However, these studies were conducted with small cohorts at single sites.

The open notes literature has highlighted the importance of data availability and transparency to enable patients to manage their health care.11,16,20,40,41 Our findings suggest that open results may have a similar effect, as most respondents sought additional information after viewing results. Interestingly, 39.9% of patients who sought additional information after reviewing their results conducted an internet search, highlighting potential unmet information needs. Providing patients time to review, research, and process their own test results might allow them to prepare for subsequent discussions with their health care practitioners and may lead to better shared decision-making.

A subset of respondents reported additional worry after viewing not normal results. Our modeling results support the findings reported by Giardina and colleagues5 that patients receiving not normal results are at increased risk for negative emotions, potentially due to difficulty interpreting the results in the context of their own health. Prior literature42,43 has highlighted a similar trend in worry when receiving news of abnormal results outside a patient portal, such as through a telephone call or during an in-person visit. We found that 95.3% of participants who received abnormal test results would like to continue to receive immediately released results through the portal. This finding suggests that there may be benefits to receiving abnormal results online, such as allowing patients to choose where and with whom to view such results. Additional research is necessary to better understand the nuance of worry from receiving abnormal test results, especially as it relates to release through the portal. A separate qualitative evaluation of the free-text questions in our survey is forthcoming and may provide insight into this phenomenon.

A large proportion of respondents (92.3%) reported receiving precounseling. Interestingly, we found no association between precounseling and lower levels of worry. Best practices for precounseling should be studied further. Additionally, the workflow and financial consequences of this added task for an already stressed clinical workforce warrant further consideration. Precounseling strategies might encompass both technical and sociotechnical approaches, including in-person anticipatory guidance, improved asynchronous communication, and portal-based educational materials. Other strategies include optimizing existing patient portal interfaces to give users control over their notification preferences related to sensitive or abnormal results or timing the release of test results during working hours. Additional research is necessary to further investigate the efficacy of strategies to mitigate emotional distress.

Limitations

This study has limitations. Patient portal users were surveyed at 4 large academic medical centers that were geographically distributed across the US; results may not be generalizable to patients outside these systems. Further, all sites used Epic’s MyChart patient portal. It is possible that vendor differences could influence user perceptions, though portal functionality is similar across most vendors. Our study relied on self-reported data, which may be subject to bias or inaccuracy. The response rate was modest, with variation between sites. It is possible that respondents were more enthusiastic about open results and do not represent all portal users. Similarly, only patients who accessed test results via the patient portal were included in the study cohort, which may bias our findings. Three of 4 sites used research registries for recruitment, which may have contributed to heterogeneous response rates between sites. However, the large sample size strengthens our findings and enabled robust statistical testing among subsets of respondents.

A survey question about sex was erroneously omitted from 3 of the sites (UC Davis Health, UTSW, and VUMC) but was included at CU Anschutz. We obtained aggregate sex data from the EHR for the respondent cohort for these 3 sites but were unable to link them at a participant level. Therefore, sex was not included in the multivariable models. This also explains the higher missingness in sex data reported at CU Anschutz.

Survey respondents were primarily White, female, English-speaking, and highly educated. Prior studies indicated a similar demographic profile among patient portal users, suggesting self-selection bias among portal users.44 Future studies should capture preferences among non–English-speaking patients, patients from underrepresented racial and ethnic populations, and patients across a wider range of educational levels and socioeconomic status.

Conclusions

This survey study assessed attitudes and perceptions related to immediately released test results in a large cohort of patients and caregivers at 4 geographically distributed academic medical centers. Most respondents preferred to receive test results through the patient portal even if it meant viewing results prior to discussion with a health care professional. This remained true for patients receiving not normal results. However, receiving a result that was not normal was associated with increased worry compared with receiving a normal result. As health care systems continue to navigate this new era of health information transparency, balancing patients’ expectation of immediate access to their information with the need to manage increased worry and health care practitioner burden is increasingly important.

Article Information

Accepted for Publication: January 17, 2023.

Published: March 20, 2023. doi:10.1001/jamanetworkopen.2023.3572

Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2023 Steitz BD et al. JAMA Network Open.

Corresponding Author: Bryan D. Steitz, PhD, Department of Biomedical Informatics, Vanderbilt University Medical Center, 2525 W End Ave, Ste 1475, Nashville, TN 37203 (bryan.d.steitz@vumc.org).

Author Contributions: Drs Steitz and Turer were co–first authors. Drs Rosenbloom and DesRoches were co–senior authors. Drs Steitz and Turer had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

Concept and design: Steitz, Turer, Lin, MacDonald, Salmi, Sternberg, Chen, Rosenbloom, DesRoches.

Acquisition, analysis, or interpretation of data: Steitz, Turer, Lin, MacDonald, Salmi, Wright, Lehmann, Langford, McDonald, Reese, Chen, Rosenbloom, DesRoches.

Drafting of the manuscript: Steitz, Turer, Lin, MacDonald, Salmi, Lehmann, Chen, Rosenbloom, DesRoches.

Critical revision of the manuscript for important intellectual content: Steitz, Turer, Lin, Salmi, Wright, Lehmann, Langford, McDonald, Reese, Sternberg, Rosenbloom, DesRoches.

Statistical analysis: Steitz, Turer, Lin, Langford, Reese, Chen.

Administrative, technical, or material support: Steitz, Turer, Lin, MacDonald, Salmi, Lehmann, Rosenbloom.

Supervision: Turer, Lin, Wright, Lehmann, Rosenbloom.

Conflict of Interest Disclosures: Dr Lehmann reported receiving an honorarium from Springer for a textbook outside the submitted work. Dr Sternberg reported waived meeting registration fees from Press Ganey as a member of the Physician Advisory Council during the conduct of the study. Dr DesRoches reported being the director of OpenNotes, a grant-funded research initiative that does not offer commercial products or have relationships with commercial entities. No other disclosures were reported.

Data Sharing Statement: See Supplement 2.

References
1.
21st Century Cures Act: interoperability, information blocking, and the ONC health IT certification program. Federal Register. February 2021. Accessed November 3, 2022. https://www.federalregister.gov/documents/2020/05/01/2020-07419/21st-century-cures-act-interoperability-information-blocking-and-the-onc-health-it-certification
2.
Institute of Medicine (US) Committee on Quality of Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. National Academy Press; 2001.
3.
Rigby M, Georgiou A, Hyppönen H, et al. Patient portals as a means of information and communication technology support to patient-centric care coordination—the missing evidence and the challenges of evaluation: a joint contribution of IMIA WG EVAL and EFMI WG EVAL. Yearb Med Inform. 2015;10(1):148-159.
4.
Ross SE, Lin CT. The effects of promoting patient access to medical records: a review. J Am Med Inform Assoc. 2003;10(2):129-138. doi:10.1197/jamia.M1147
5.
Giardina TD, Baldwin J, Nystrom DT, Sittig DF, Singh H. Patient perceptions of receiving test results via online portals: a mixed-methods study. J Am Med Inform Assoc. 2018;25(4):440-446. doi:10.1093/jamia/ocx140
6.
Arvisais-Anhalt S, Lau M, Lehmann CU, et al. The 21st Century Cures Act and multiuser electronic health record access: potential pitfalls of information release. J Med Internet Res. 2022;24(2):e34085. doi:10.2196/34085
7.
Carini E, Villani L, Pezzullo AM, et al. The impact of digital patient portals on health outcomes, system efficiency, and patient attitudes: updated systematic literature review. J Med Internet Res. 2021;23(9):e26189. doi:10.2196/26189
8.
Lyles CR, Nelson EC, Frampton S, Dykes PC, Cemballi AG, Sarkar U. Using electronic health record portals to improve patient engagement: research priorities and best practices. Ann Intern Med. 2020;172(11)(suppl):S123-S129. doi:10.7326/M19-0876
9.
Irizarry T, DeVito Dabbs A, Curran CR. Patient portals and patient engagement: a state of the science review. J Med Internet Res. 2015;17(6):e148. doi:10.2196/jmir.4255
10.
Antonio MG, Petrovskaya O, Lau F. The state of evidence in patient portals: umbrella review. J Med Internet Res. 2020;22(11):e23851. doi:10.2196/23851
11.
Walker J, Leveille S, Bell S, et al. OpenNotes after 7 years: patient experiences with ongoing access to their clinicians’ outpatient visit notes. J Med Internet Res. 2019;21(5):e13876. doi:10.2196/13876
12.
Salmi L, Blease C, Hägglund M, Walker J, DesRoches CM. US policy requires immediate release of records to patients. BMJ. 2021;372:n246.
13.
Steitz BD, Wong JIS, Cobb JG, Carlson B, Smith G, Rosenbloom ST. Policies and procedures governing patient portal use at an academic medical center. JAMIA Open. 2019;2(4):479-488. doi:10.1093/jamiaopen/ooz039
14.
Steitz BD, Sulieman L, Wright A, Rosenbloom ST. Association of immediate release of test results to patients with implications for clinical workflow. JAMA Netw Open. 2021;4(10):e2129553. doi:10.1001/jamanetworkopen.2021.29553
15.
Esch T, Mejilla R, Anselmo M, Podtschaske B, Delbanco T, Walker J. Engaging patients through open notes: an evaluation using mixed methods. BMJ Open. 2016;6(1):e010034-e11. doi:10.1136/bmjopen-2015-010034
16.
Bell SK, Mejilla R, Anselmo M, et al. When doctors share visit notes with patients: a study of patient and doctor perceptions of documentation errors, safety opportunities and the patient-doctor relationship. BMJ Qual Saf. 2017;26(4):262-270. doi:10.1136/bmjqs-2015-004697
17.
Denneson LM, Cromer R, Williams HB, Pisciotta M, Dobscha SK. A qualitative analysis of how online access to mental health notes is changing clinician perceptions of power and the therapeutic relationship. J Med Internet Res. 2017;19(6):e208. doi:10.2196/jmir.6915
18.
Leonard LD, Himelhoch B, Huynh V, et al. Patient and clinician perceptions of the immediate release of electronic health information. Am J Surg. 2022;224(1 Pt A):27-34. doi:10.1016/j.amjsurg.2021.12.002
19.
D’Costa SN, Kuhn IL, Fritz Z. A systematic review of patient access to medical records in the acute setting: practicalities, perspectives and ethical consequences. BMC Med Ethics. 2020;21(1):18. doi:10.1186/s12910-020-0459-6
20.
DesRoches CM, Leveille S, Bell SK, et al. The views and experiences of clinicians sharing medical record notes with patients. JAMA Netw Open. 2020;3(3):e201753-e12. doi:10.1001/jamanetworkopen.2020.1753
21.
Blease C, Salmi L, Hägglund M, Wachenheim D, DesRoches C. COVID-19 and open notes: a new method to enhance patient safety and trust. JMIR Ment Health. 2021;8(6):e29314. doi:10.2196/29314
22.
Sarabu C, Lee T, Hogan A, Pageler N. The value of OpenNotes for pediatric patients, their families and impact on the patient-physician relationship. Appl Clin Inform. 2021;12(1):76-81. doi:10.1055/s-0040-1721781
23.
Turer RW, DesRoches CM, Salmi L, Helmer T, Rosenbloom ST. Patient perceptions of receiving COVID-19 test results via an online patient portal: an open results survey. Appl Clin Inform. 2021;12(4):954-959. doi:10.1055/s-0041-1736221
24.
Grimes GC, Reis MD, Budati G, Gupta M, Forjuoh SN. Patient preferences and physician practices for laboratory test results notification. J Am Board Fam Med. 2009;22(6):670-676. doi:10.3122/jabfm.2009.06.090078
25.
Steitz BD, Langford K, Turer RW, et al. Patient attitudes about immediate access to test results. Abstract presented at: AMIA 2022 Clinical Informatics Conference; May 24-26, 2022; Houston, Texas.
26.
Kannan V, Fish JS, Mutz JM, et al. Rapid development of specialty population registries and quality measures from electronic health record data: an agile framework. Methods Inf Med. 2017;56(99):e74-e83. doi:10.3414/ME16-02-0031
27.
Wright JA, Leveille SG, Chimowitz H, et al. Validation of a brief scale to assess ambulatory patients’ perceptions of reading visit notes: a scale development study. BMJ Open. 2020;10(10):e034517. doi:10.1136/bmjopen-2019-034517
28.
Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)—a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377-381. doi:10.1016/j.jbi.2008.08.010
29.
Langan D, Higgins JPT, Jackson D, et al. A comparison of heterogeneity variance estimators in simulated random-effects meta-analyses. Res Synth Methods. 2019;10(1):83-98. doi:10.1002/jrsm.1316
30.
Slade E, Naylor MG. A fair comparison of tree-based and parametric methods in multiple imputation by chained equations. Stat Med. 2020;39(8):1156-1166. doi:10.1002/sim.8468
31.
Shah AD, Bartlett JW, Carpenter J, Nicholas O, Hemingway H. Comparison of random forest and parametric imputation models for imputing missing data using MICE: a CALIBER study. Am J Epidemiol. 2014;179(6):764-774. doi:10.1093/aje/kwt312
32.
Harrell FE. Regression Modeling Strategies: With Applications to Linear Models, Logistic and Ordinal Regression, and Survival Analysis. 2nd ed. Springer; 2015. doi:10.1007/978-3-319-19425-7
33.
Harrell FE. Regression Modeling Strategies. Springer; 2021.
34.
Higgins JPT, Thompson SG. Quantifying heterogeneity in a meta-analysis. Stat Med. 2002;21(11):1539-1558. doi:10.1002/sim.1186
35.
West SL, Gartlehner G, Mansfield AJ, et al. Comparative Effectiveness Review Methods: Clinical Heterogeneity. Agency for Healthcare Research and Quality; 2010. Accessed August 17, 2022. https://www.ncbi.nlm.nih.gov/books/NBK53310/
36.
Viechtbauer W. Conducting meta-analyses in R with the metafor package. J Stat Softw. 2010;36(3):1-48. doi:10.18637/jss.v036.i03
37.
Giardina TD, Modi V, Parrish DE, Singh H. The patient portal and abnormal test results: an exploratory study of patient experiences. Patient Exp J. 2015;2(1):148-154. doi:10.35680/2372-0247.1055
38.
Coppola KM. The promise and peril of the patient portal. JAMA Neurol. 2022;79(1):11-12. doi:10.1001/jamaneurol.2021.4453
39.
Chen KT, de Virgilio C. The patient portal: power to the people. Am J Surg. 2022;224(1 Pt A):25-26. doi:10.1016/j.amjsurg.2022.03.014
40.
Nazi KM, Turvey CL, Klein DM, Hogan TP, Woods SS. VA OpenNotes: exploring the experiences of early patient adopters with access to clinical notes. J Am Med Inform Assoc. 2015;22(2):380-389. doi:10.1136/amiajnl-2014-003144
41.
Wolff JL, Darer JD, Berger A, et al. Inviting patients and care partners to read doctors’ notes: OpenNotes and shared access to electronic medical records. J Am Med Inform Assoc. 2017;24(e1):e166-e172.
42.
Harvey JA, Cohen MA, Brenin DR, Nicholson BT, Adams RB. Breaking bad news: a primer for radiologists in breast imaging. J Am Coll Radiol. 2007;4(11):800-808. doi:10.1016/j.jacr.2007.06.009
43.
Monsonego J, Cortes J, da Silva DP, Jorge AF, Klein P. Psychological impact, support and information needs for women with an abnormal Pap smear: comparative results of a questionnaire in three European countries. BMC Womens Health. 2011;11(1):18. doi:10.1186/1472-6874-11-18
44.
Anthony DL, Campos-Castillo C, Lim PS. Who isn’t using patient portals and why? evidence and implications from a national sample of US adults. Health Aff (Millwood). 2018;37(12):1948-1954. doi:10.1377/hlthaff.2018.05117