Key Points
Question
Are eye symptoms reported differently in the electronic medical record (EMR) vs patient report on an Eye Symptom Questionnaire (ESQ)?
Findings
In this observational study of 162 patients, large inconsistencies were noted between the ESQ and the EMR, with discordant reporting of symptoms including blurry vision, glare, pain or discomfort, and redness.
Meaning
These data suggest that symptom reporting varies between methods, with patients tending to report more symptoms on self-reported questionnaires.
Importance
Accurate documentation of patient symptoms in the electronic medical record (EMR) is important for high-quality patient care.
Objective
To explore inconsistencies between patient self-report on an Eye Symptom Questionnaire (ESQ) and documentation in the EMR.
Design, Setting, and Participants
This investigation was an observational study in comprehensive ophthalmology and cornea clinics at an academic institution among a convenience sample of 192 consecutive eligible patients, of whom 30 declined participation. Patients were recruited at the Kellogg Eye Center from October 1, 2015, to January 31, 2016. Patients were eligible to be included in the study if they were 18 years or older.
Main Outcomes and Measures
Concordance of symptoms reported on an ESQ with data recorded in the EMR. Agreement of symptom report was analyzed using κ statistics and McNemar tests. Disagreement was defined as a negative symptom report or no mention of a symptom in the EMR for patients who reported moderate to severe symptoms on the ESQ. Logistic regression was used to investigate if patient factors, physician characteristics, or diagnoses were associated with the probability of disagreement for symptoms of blurry vision, pain or discomfort, and redness.
Results
A total of 162 patients (324 eyes) were included. The mean (SD) age of participants was 56.6 (19.4) years, 62.3% (101 of 162) were female, and 84.9% (135 of 159) were white. At the participant level, 33.8% (54 of 160) had discordant reporting of blurry vision between the ESQ and EMR. Likewise, documentation was discordant for reporting glare (48.1% [78 of 162]), pain or discomfort (26.5% [43 of 162]), and redness (24.7% [40 of 162]), with poor to fair agreement (κ range, −0.02 to 0.42). Discordance of symptom reporting was more frequently characterized by positive reporting on the ESQ and lack of documentation in the EMR (Holm-adjusted McNemar P < .03 for 7 of 8 symptoms except for blurry vision [P = .59]). Return visits at which the patient reported blurry vision on the ESQ had increased odds of the symptom not being documented in the EMR compared with new visits (odds ratio, 5.25; 95% CI, 1.69-16.30; Holm-adjusted P = .045).
Conclusions and Relevance
Symptom reporting was inconsistent between patient self-report on an ESQ and documentation in the EMR, with symptoms more frequently recorded on a questionnaire. These results suggest that documentation of symptoms based on EMR data may not provide a comprehensive resource for clinical practice or “big data” research.
The medical record began as a tool for physicians to document their pertinent findings from the clinical encounter and has evolved to serve clinicians, patients, health systems, and insurers. Advocates of conversion to an electronic medical record (EMR) aimed to balance the multiple functional purposes of the EMR and user accessibility.1 The percentage of office-based physicians using any EMR increased from 18% in 2001 to 83% in 2014 after the passage of the Health Information Technology for Economic and Clinical Health Act.2,3 The Institute of Medicine4 promoted the core capabilities expected from an EMR, including health information storage, electronic communication, and patient support.
Physicians have mixed views on the ability of the EMR to capture important components of the interaction with the patient.5-7 Researchers have reported inconsistencies between the EMR and patient findings, but they have not necessarily clarified if this discrepancy is because of poor documentation or poor communication between the patient and clinician.8-15 The conversion from a paper medical record to an EMR created new issues. Detractors of EMRs report interruptions in clinic work flow, interference with maintaining eye contact with patients, time-consuming data entry, longer clinic visits, and lower productivity.16 Electronic medical record shortcuts, such as the copy-and-paste function and template-based notes, create unique user errors that may diminish the quality of information.17,18
Researchers anticipate that the EMR can be used beyond clinical applications as a research tool.19 Medical record data could be analyzed by “big data” approaches, such as natural language processing and bioinformatics, which have the potential to improve health care efficiency, quality, and cost-effectiveness.20,21 However, these applications assume that the EMR has accurate patient-level data. The present study was undertaken to fully explore inconsistencies between self-report on an Eye Symptom Questionnaire (ESQ) and documentation in the EMR for patients seen at an ophthalmology clinic.
Methods
This study, approved by the institutional review board at the University of Michigan, Ann Arbor, was compliant with the Health Insurance Portability and Accountability Act and adhered to the Declaration of Helsinki. Written informed consent was obtained from all enrollees.
This investigation was an observational study in comprehensive ophthalmology and cornea clinics at an academic institution among a convenience sample of 192 consecutive eligible patients, of whom 30 declined participation. Patients were recruited at the Kellogg Eye Center from October 1, 2015, to January 31, 2016. Patients were eligible to be included in the study if they were 18 years or older. Exclusion criteria included patients in the 90-day postoperative period for ocular surgical procedures or history of complex ocular surface diseases requiring ocular surgery, which would potentially alter normal symptom reporting. Race was classified based on self-report in the EMR as white, African American, Asian, or other.
We conducted a pilot project administering an ESQ (eFigure in the Supplement) to patients of 13 physicians in comprehensive (n = 7) and cornea (n = 6) clinics to understand the association between patient-reported symptoms and diagnosed eye disease. The ESQ was administered while the patient was waiting to see the physician after the technician encounter. The ESQ contained 8 eye symptom items obtained from previously validated questionnaires and asked about the severity of each symptom in the past 7 days. Responses were reported for right and left eyes separately. Six of the 8 symptom items were obtained from the National Institutes of Health (NIH) Toolbox Vision-Related Quality of Life measure, 1 item (pain or discomfort) was taken from the National Eye Institute Visual Function Questionnaire, and 1 item (gritty sensation) was derived from the Ocular Surface Disease Index.22-24 Eye symptom items on the ESQ were reported on a 4-point Likert-type scale, including “no problem at all,” “a little bit of a problem,” “somewhat of a problem,” or “very much of a problem,” for 7 questions and on a 5-point Likert-type scale, including “none,” “mild,” “moderate,” “severe,” or “very severe,” for 1 question (pain or discomfort).
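To make the response format concrete, the following sketch (in Python, with hypothetical field names; the study administered the ESQ on paper) shows one way a single participant's responses could be represented, using the two scales above and separate entries for each eye.

```python
# Hypothetical representation of one ESQ response. The two scales match
# the questionnaire described above; all field names are illustrative.
SCALE_GENERAL = ["no problem at all", "a little bit of a problem",
                 "somewhat of a problem", "very much of a problem"]
SCALE_PAIN = ["none", "mild", "moderate", "severe", "very severe"]

esq_response = {
    "participant_id": "P-001",
    "blurry_vision": {"right": "somewhat of a problem",
                      "left": "no problem at all"},
    "pain_or_discomfort": {"right": "moderate", "left": "none"},
    # the remaining 6 items (glare, redness, burning or stinging, itching,
    # gritty sensation, sensitive to light) follow the same per-eye pattern
}
```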
A medical student (N.G.V.) abstracted eye symptoms retrospectively from the EMR corresponding to the 8 eye symptoms of the ESQ. We abstracted symptoms recorded by any person on the care team. Technicians and clinicians were not aware their documentation was to be queried. Wording differences for symptoms in the EMR were recorded and collapsed into broad categories. The EMR included radio buttons for all symptoms included in the ESQ. If the radio button was not checked or if there was no free-text documentation of the symptom, the symptom was classified as “not documented.” The EMR was reviewed to document the following: (1) if the patient reported having the symptom (positive symptom) or not having the symptom (negative symptom), (2) if a symptom was recorded as occurring within the past 7 days, (3) if the EMR indicated eye laterality for the symptom, and (4) if there was no documentation of the symptom in the EMR. Eye symptoms reported in the EMR but not included in the ESQ were also classified (eTable 1 in the Supplement). These other symptoms were not explored in depth because they were not components of the validated questionnaires from the NIH Toolbox Vision-Related Quality of Life measure or National Eye Institute Visual Function Questionnaire.
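As a rough illustration of these abstraction rules, the sketch below maps one symptom's radio-button state and free-text documentation to the study's categories. The inputs are hypothetical simplifications; in the study, abstraction was performed manually by a reviewer.

```python
# A sketch of the EMR classification described above. radio_checked is
# True (symptom endorsed), False (symptom denied), or None (button not
# used); free_text_positive is True/False for a free-text mention of the
# symptom, or None if the free text does not mention it at all.
def classify_emr_symptom(radio_checked, free_text_positive):
    if radio_checked is not None:
        return "positive" if radio_checked else "negative"
    if free_text_positive is not None:
        return "positive" if free_text_positive else "negative"
    return "not documented"  # neither radio button nor free text mentions it

# Example: no radio button and no free text -> "not documented"
print(classify_emr_symptom(None, None))
```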
Additional data were collected regarding demographic information, such as age and sex of patients, clinical diagnosis of the eye (no presence of disease, nonurgent, or urgent anterior segment disease), type of visit (new visit, return visit with no new problems, or new problem during a return visit), and characteristics of the examining physician. These physician characteristics included the number of years the physician had been in clinical practice, the physician’s mean volume of patients on a clinic day, and if the physician worked with a medical scribe.
Agreement between reporting symptoms on the ESQ vs the EMR was summarized descriptively with cross tables and included frequencies and percentages. Results were reported as eye based and participant based because of missing data for eye laterality in the EMR. At the participant level, if the symptom was reported in at least 1 eye, the participant was classified as having the symptom. Presence of an eye symptom on the ESQ was defined as a report of “somewhat of a problem” or “very much of a problem” for 7 symptoms or a report of “moderate,” “severe,” or “very severe” for pain or discomfort. Two alternative categorizations were performed as sensitivity analyses. In the first analysis, presence of a symptom on the ESQ was recategorized as any positive report regardless of severity (including “mild” or “a little bit of a problem,” depending on the symptom). In the second analysis, presence of a symptom on the ESQ was recategorized as only the highest positive symptom report categories (“severe” and “very severe” or “very much of a problem”). Symptom severity levels could not be captured in the EMR to the same extent; therefore, any positive documentation was taken as presence of that symptom. Lack of documentation of a symptom or an explicit negative report in the EMR was treated as a negative report.
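The primary definition and the 2 sensitivity analyses reduce to severity cutoffs on the ordinal response scales. A minimal sketch, assuming illustrative function and variable names:

```python
# Severity cutoffs implementing the primary definition and the two
# sensitivity analyses described above. Item wordings follow the ESQ;
# the function itself is illustrative.
GENERAL_LEVELS = ["no problem at all", "a little bit of a problem",
                  "somewhat of a problem", "very much of a problem"]
PAIN_LEVELS = ["none", "mild", "moderate", "severe", "very severe"]

# Index of the lowest response level counted as "symptom present."
CUTOFFS = {"primary": 2, "inclusive": 1, "exclusive": 3}

def esq_positive(item, response, scheme="primary"):
    levels = PAIN_LEVELS if item == "pain_or_discomfort" else GENERAL_LEVELS
    return levels.index(response) >= CUTOFFS[scheme]

# "a little bit of a problem" counts only under the inclusive scheme.
print(esq_positive("glare", "a little bit of a problem", "primary"))    # False
print(esq_positive("glare", "a little bit of a problem", "inclusive"))  # True
```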
κ Statistics were used to assess the level of agreement or concordance of symptom reporting between the ESQ and EMR at the participant level. McNemar tests were used to evaluate discordance in symptom report on the ESQ compared with the EMR. At the eye level, the most severely reported symptom on the ESQ was identified, and agreement of report was compared with the EMR.
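For a 2 × 2 participant-level table, both statistics are available in standard libraries. The sketch below uses Python packages (scikit-learn and statsmodels) on toy data rather than the authors' SAS workflow; κ measures chance-corrected agreement, while McNemar tests whether the 2 discordant cells (+ESQ and −EMR vs −ESQ and +EMR) are balanced.

```python
# Toy illustration of the agreement statistics (not the authors' SAS code).
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.contingency_tables import mcnemar

esq = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 1])  # 1 = symptom present on ESQ
emr = np.array([1, 0, 0, 0, 0, 1, 0, 1, 0, 1])  # 1 = symptom documented in EMR

kappa = cohen_kappa_score(esq, emr)

# Rows: ESQ +/-; columns: EMR +/-. McNemar compares the two off-diagonal
# (discordant) counts: +ESQ/-EMR vs -ESQ/+EMR.
table = np.array([[np.sum((esq == 1) & (emr == 1)), np.sum((esq == 1) & (emr == 0))],
                  [np.sum((esq == 0) & (emr == 1)), np.sum((esq == 0) & (emr == 0))]])
result = mcnemar(table, exact=True)
print(f"kappa = {kappa:.2f}, McNemar P = {result.pvalue:.3f}")
```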
Factors associated with disagreement in symptom reporting between the ESQ and EMR were investigated with logistic regression models. Because of low rates of positive documentation in the EMR of some symptoms, only blurry vision, pain or discomfort, and redness were investigated in greater depth. Models were aggregated to the participant level to preserve reported symptom data that would otherwise be missing at the eye level. Patients with missing eye laterality but positive symptom reporting were treated as positive documentation in the EMR. Models were based on the subset of participants with a positive symptom report on the ESQ. Disagreement was defined as a negative report or no documentation of a symptom in the EMR (−EMR) for patients who had positive self-report on the ESQ (+ESQ). Factors investigated for associations with the probability of disagreement included patient demographics, clinical diagnosis of the worse eye, physician characteristics, and type of visit. To account for multiple tests, P values were adjusted using the method by Holm.25 P < .05 was considered statistically significant, and all hypothesis tests were 2-sided. All statistical analyses were performed using a computer program (SAS, version 9.4; SAS Institute Inc).
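A minimal sketch of the disagreement model and the Holm adjustment, again in Python with toy data and illustrative variable names (the published analysis used SAS):

```python
# Among rows with +ESQ, emr_missing = 1 encodes -EMR (disagreement).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

df = pd.DataFrame({
    "emr_missing": [1, 0, 1, 1, 0, 1, 0, 0, 1, 0],
    "visit_type": ["return", "return", "return", "new", "new",
                   "return", "new", "return", "return", "new"],
    "age": [61, 45, 70, 58, 39, 66, 52, 73, 64, 48],
})

# Logistic regression with new visits as the reference category.
model = smf.logit("emr_missing ~ C(visit_type, Treatment('new')) + age",
                  data=df).fit(disp=0)
print(np.exp(model.params))  # odds ratios per predictor

# Holm step-down adjustment over a family of raw P values (illustrative).
raw_p = [0.004, 0.021, 0.380]
print(multipletests(raw_p, method="holm")[1])  # adjusted P values
```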
Results
A total of 162 patients (324 eyes) were included in the analysis. Descriptive statistics of the sample are summarized in Table 1. The mean (SD) age of participants was 56.6 (19.4) years (age range, 18.4-94 years), 62.3% (101 of 162) were female, and 84.9% (135 of 159) were of white race. Eyes had a mean (SD) logMAR visual acuity of 0.34 (0.64) (mean [SD] Snellen equivalent, 20/44 [6.4] lines) and range of −0.12 to +3.00 logMAR (Snellen equivalent, 20/15 to hand motion).
Symptom Reporting on the ESQ and in the EMR
We examined the concordance and discordance of symptom report between methods and described the directionality of discordance at the eye level (Figure and eTable 2 in the Supplement) and at the participant level (Table 2 and eTable 3 in the Supplement). The following results are at the participant level. Symptom presence was concordant (+ESQ and +EMR) for blurry vision, glare, pain or discomfort, and redness in 37.5% (60 of 160), 3.1% (5 of 162), 21.0% (34 of 162), and 14.2% (23 of 162) of participants, respectively. Symptom absence was concordant (−ESQ and −EMR) for blurry vision, glare, pain or discomfort, and redness in 28.8% (46 of 160), 48.8% (79 of 162), 52.5% (85 of 162), and 61.1% (99 of 162) of participants, respectively. Reporting blurry vision was discordant in 33.8% (54 of 160) of participants. Of these discordant participants, 46.3% (25 of 54) had +ESQ and −EMR. Reporting glare was discordant in 48.1% (78 of 162) of participants. Of these discordant participants, 91.0% (71 of 78) had +ESQ and −EMR. Reporting pain or discomfort was discordant in 26.5% (43 of 162) of participants. Of these discordant participants, 74.4% (32 of 43) had +ESQ and −EMR. Reporting redness was discordant in 24.7% (40 of 162) of participants. Of these discordant participants, 80.0% (32 of 40) had +ESQ and −EMR.
Symptom Agreement Between the ESQ and EMR
κ Statistics indicated poor to fair agreement between the ESQ and EMR for symptom reporting (κ range, −0.02 to 0.42) (Table 2). At the participant level, positive reporting of symptoms on the ESQ with no documentation or a negative report in the EMR was more prevalent than the converse for glare (with +ESQ and −EMR vs −ESQ and +EMR values of 43.8% [71 of 162] vs 4.3% [7 of 162]), pain or discomfort (19.8% [32 of 162] vs 6.8% [11 of 162]), and redness (19.8% [32 of 162] vs 4.9% [8 of 162]) but not for blurry vision (15.6% [25 of 160] vs 18.1% [29 of 160]). McNemar tests indicated imbalance in discordant symptom reporting, with more discrepancy in the direction of positive report on the ESQ and negative documentation in the EMR for all eye symptoms (Holm-adjusted McNemar P < .03 for 7 of 8 symptoms) except for blurry vision (P = .59). All percentage comparisons of +ESQ and −EMR as well as −ESQ and +EMR and their Holm-adjusted McNemar P values are listed in Table 2.
For the “inclusive” sensitivity analysis, results were predictably more discordant between the ESQ and EMR. κ Statistics remained poor to fair (κ range, −0.04 to 0.26), and McNemar test results showed stronger discordance in the direction of +ESQ and −EMR (Holm-adjusted McNemar P < .001 for all). The comparisons were as follows: 28.6% +ESQ and −EMR vs 6.2% −ESQ and +EMR for blurry vision, 64.2% +ESQ and −EMR vs 3.7% −ESQ and +EMR for glare, 39.5% +ESQ and −EMR vs 2.5% −ESQ and +EMR for pain or discomfort, 35.2% +ESQ and −EMR vs 3.1% −ESQ and +EMR for redness, 41.6% +ESQ and −EMR vs 1.2% −ESQ and +EMR for burning or stinging, 48.1% +ESQ and −EMR vs 0.6% −ESQ and +EMR for itching, 33.9% +ESQ and −EMR vs 2.5% −ESQ and +EMR for gritty sensation, and 53.7% +ESQ and −EMR vs 2.5% −ESQ and +EMR for sensitive to light. For the “exclusive” sensitivity analysis (only the most severe symptom was considered positive on the ESQ), agreement was poor (κ range, −0.05 to 0.36). Blurry vision was more frequently discordant as −ESQ and +EMR (8.7% +ESQ and −EMR vs 31.7% −ESQ and +EMR, Holm-adjusted McNemar P < .001). Other symptoms were discordant as +ESQ and −EMR for glare (24.7% +ESQ and −EMR vs 6.2% −ESQ and +EMR, Holm-adjusted McNemar P < .002) and light sensitivity (19.1% +ESQ and −EMR vs 6.8% −ESQ and +EMR, Holm-adjusted McNemar P < .01). There were no statistically significant discordant findings for symptoms of pain or discomfort, redness, burning or stinging, itching, and gritty sensation.
Agreement between the most severely reported symptom on the ESQ and EMR documentation of that symptom was also evaluated at the eye level. In 108 eyes, 1 symptom was reported on the ESQ at a higher level of severity than all other symptoms. In these 108 eyes, 25 eyes (23.1%) also had a positive documentation of that symptom in the EMR, 13 eyes (12.0%) had documentation in the EMR but no eye designation, 62 eyes (57.4%) had no indication of the symptom in the EMR, and 8 eyes (7.4%) had an explicit negative symptom report.
Agreement on at least some of the 8 symptoms between the ESQ and the EMR was seen in 46.3% (75 of 162) of participants. Exact agreement on all 8 symptoms occurred in 23.5% (38 of 162) of participants. When a patient reported 3 or more symptoms on the ESQ, the EMR never had exact agreement on the symptoms (Table 3).
Factors Associated With ESQ and EMR Disagreement
Logistic regression models predicting the probability of −EMR in the subset of participants with +ESQ for the symptoms of blurry vision, pain or discomfort, and redness are summarized in eTable 4 in the Supplement. No significant associations of age, sex, diagnosis, physician years in practice, clinic volume, or presence of a medical scribe were found with any of the 3 symptom outcomes. Type of visit showed a statistically significant association with disagreement such that return visit patients who reported symptoms on the ESQ had increased odds of those symptoms not being documented in the EMR compared with new visit patients. After adjustment for multiple testing, these results remained significant only for blurry vision (odds ratio, 5.25; 95% CI, 1.69-16.30; Holm-adjusted P = .045). Because of small sample sizes for −ESQ and +EMR, models of this disagreement could not be investigated.
Discussion
The original intent of the EMR was not for complete documentation of the clinical encounter but for physicians’ note taking on their patient interactions. The EMR was implemented to integrate many sources of medical information. We demonstrated that there is substantial discrepancy between the symptoms reported by patients on an ESQ and those documented in the EMR, as shown in previous studies26-29 in other specialties. This discrepancy can occur in the following 2 directions: positive reporting by self-report with negative or no documentation in the EMR (+ESQ and −EMR) or negative reporting by self-report and positive documentation in the EMR (−ESQ and +EMR).
We found significant imbalance in symptom reporting, with more symptoms reported through self-report on the ESQ than through the EMR, except for blurry vision. Prior studies12,27-29 have also found these types of differences, and such disconnects have been observed in both paper medical records and EMRs. Exact agreement between self-report and EMR documentation dropped to zero when patients reported 3 or more symptoms on the ESQ. When we adjusted our categorizations through sensitivity analyses, discordance in symptom report understandably shifted as well. Discordance in symptom reporting could be because of differences in terminology of symptoms between the patient and clinician or errors of omission, such as forgetting or choosing not to report or record a symptom.26,30 Perhaps a more bothersome symptom is the focus of the clinical encounter, and other less onerous symptoms (eg, glare) are not discussed (or documented). However, even for the exclusive sensitivity analysis, we show that the ESQ and the EMR are inconsistently documented. We cannot assume that self-report is more accurate than the EMR just because more symptoms are reported.
The quality of documentation is critical not only for patient care but also for assessment measurements and clinical studies.19 In ophthalmology, the Intelligent Research in Sight (IRIS) Registry can be used to evaluate patient-level data to improve patient outcomes and practice performance, but it relies on objectively collected data, such as visual acuity and billing codes.31,32 Inclusion of psychometric data, such as patient-reported outcomes (PROs), could be a future direction for the IRIS Registry. We suggest that PROs, such as those provided by the NIH Toolbox Vision-Related Quality of Life measure or the Patient-Reported Outcomes Measurement Information System (PROMIS), could be collected as standardized, self-report templates and uploaded into the EMR.23,33 Our study results suggest that integrating self-report would capture more symptoms and would be consistent across patients, enhancing the fidelity of the data. With patient self-reporting, the patient-physician interaction could shift from eliciting symptoms to focusing on symptom severity and causality. In addition, PROs are an important way to document that treatments improve quality of life, and the US Food and Drug Administration34 recommends their use in clinical trials.
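As one illustration of what a standardized, uploadable self-report template might look like, the sketch below shows a hypothetical record structure; the field names are our assumption and do not come from PROMIS, the NIH Toolbox, or the IRIS Registry.

```python
# Hypothetical structure for a standardized patient-reported outcome (PRO)
# record destined for EMR upload; every field name here is illustrative.
pro_record = {
    "instrument": "NIH Toolbox Vision-Related Quality of Life",
    "administered": "2016-01-15",  # date of self-report
    "entered_by": "patient",       # distinguishes self-report from clinician entry
    "items": [
        {"symptom": "glare", "eye": "right", "response": "somewhat of a problem"},
        {"symptom": "glare", "eye": "left", "response": "no problem at all"},
    ],
}
```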
We explored factors that may be related to the discordance between an ESQ and the EMR. Patient factors (age and sex), physician factors (years in practice, workload, and use of a medical scribe), and presence of urgent or nonurgent anterior segment eye diseases were not significantly associated with reporting disagreements. Type of visit was associated with disagreement for the symptom of blurry vision. When patients reported blurry vision on the ESQ, there was an increased probability that the physician would not document it in the EMR during return visits compared with new visits. As noted by other authors, inconsistency may instead reflect time constraints, system-related errors, and communication lapses.15
There are limitations to our study. The study was performed at a single center using a specific type of EMR, which limits generalizability. We could not assess the influence of any specific minority group on discrepancies in symptom reporting because of low representation of minorities in our sample. We did not use the entire NIH Toolbox Vision-Related Quality of Life measure survey, which may alter the survey’s validity. Recall bias can occur, in which the patient could remember symptoms when prompted by a survey but not during the clinical encounter. A symptom was considered a “negative report” when the symptom was not documented in the EMR. This method is not a perfect reflection of the clinical encounter (although it serves as such for medicolegal purposes). Therefore, we report inconsistencies and not sensitivities and specificities. The EMR coding occurred independently of, and without knowledge of, the ESQ results. In the future, 2 independent classifiers with a method of adjudication could be used to code EMR documentation.
Conclusions
This study identifies a key challenge for an EMR system, namely, the quality of the documentation. We found significant inconsistencies between symptom self-report on an ESQ and documentation in the EMR, with a bias toward reporting more symptoms via self-report. If the EMR lacks relevant symptom information, it has implications for patient care, including communication errors and poor representation of the patient’s reported problems. The inconsistencies imply caution for the use of EMR data in research studies. Future work should further examine why information is inconsistently reported. Perhaps the implementation of self-report questionnaires for symptoms in the clinical setting will mitigate the limitations of the EMR and improve the quality of documentation.
Corresponding Author: Maria A. Woodward, MD, MS, Department of Ophthalmology and Visual Sciences, University of Michigan, 1000 Wall St, Ann Arbor, MI 48105 (mariawoo@umich.edu).
Accepted for Publication: December 3, 2016.
Published Online: January 26, 2017. doi:10.1001/jamaophthalmol.2016.5551
Author Contributions: Dr Woodward had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Lee, Woodward.
Acquisition, analysis, or interpretation of data: Valikodath, Newman-Casey, Musch, Niziol, Woodward.
Drafting of the manuscript: Valikodath, Niziol, Woodward.
Critical revision of the manuscript for important intellectual content: All authors.
Statistical analysis: Musch, Niziol, Woodward.
Administrative, technical, or material support: Newman-Casey, Lee, Woodward.
Study supervision: Newman-Casey, Musch, Woodward.
Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Dr Newman-Casey reported being a consultant to Blue Health Intelligence. Dr Lee reported being a consultant to the Centers for Disease Control and Prevention. Both disclosures are outside of the submitted work. No other disclosures were reported.
Funding/Support: Ms Valikodath is supported by training grant 5TL1TR000435-09 from the National Institutes of Health. Dr Newman-Casey is supported by Mentored Clinical Scientist Research Career Development Award K23EY025320 from the National Eye Institute and by a Research to Prevent Blindness Career Development Award. Dr Lee is supported by the W. K. Kellogg Foundation and by Research to Prevent Blindness. Dr Musch is supported by the W. K. Kellogg Foundation. Dr Woodward is supported by Mentored Clinical Scientist Research Career Development Award K23EY023596 from the National Eye Institute.
Role of the Funder/Sponsor: The funding organizations had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
References
1. Raymond L, Paré G, Ortiz de Guinea A, et al. Improving performance in medical practices through the extended use of electronic medical record systems: a survey of Canadian family physicians. BMC Med Inform Decis Mak. 2015;15:27.
2. Jamoom E, Yang N, Hing E. Percentage of office-based physicians using any electronic health records or electronic medical records, physicians that have a basic system, and physicians that have a certified system, by state: United States, 2014 (table). https://www.cdc.gov/nchs/data/ahcd/nehrs/2015_web_tables.pdf. Published 2015. Accessed November 8, 2016.
3. Hsiao CJ, Hing E. Use and characteristics of electronic health record systems among office-based physician practices: United States, 2001-2013. Hyattsville, MD: National Center for Health Statistics. http://www.cdc.gov/nchs/products/databriefs/db143.htm. Published January 2014. Accessed November 8, 2016.
5. DesRoches CM, Campbell EG, Rao SR, et al. Electronic health records in ambulatory care: a national survey of physicians. N Engl J Med. 2008;359(1):50-60.
6. Chiang MF, Boland MV, Margolis JW, Lum F, Abramoff MD, Hildebrand PL; American Academy of Ophthalmology Medical Information Technology Committee. Adoption and perceptions of electronic health record systems by ophthalmologists: an American Academy of Ophthalmology survey. Ophthalmology. 2008;115(9):1591-1597.
7. Street RL Jr, Liu L, Farber NJ, et al. Provider interaction with the electronic health record: the effects on patient-centered communication in medical encounters. Patient Educ Couns. 2014;96(3):315-319.
8. St Sauver JL, Hagen PT, Cha SS, et al. Agreement between patient reports of cardiovascular disease and patient medical records. Mayo Clin Proc. 2005;80(2):203-210.
9. Fromme EK, Eilers KM, Mori M, Hsieh YC, Beer TM. How accurate is clinician reporting of chemotherapy adverse effects? a comparison with patient-reported symptoms from the Quality-of-Life Questionnaire C30. J Clin Oncol. 2004;22(17):3485-3490.
10. Beckles GL, Williamson DF, Brown AF, et al. Agreement between self-reports and medical records was only fair in a cross-sectional study of performance of annual eye examinations among adults with diabetes in managed care. Med Care. 2007;45(9):876-883.
11. Corser W, Sikorskii A, Olomu A, Stommel M, Proden C, Holmes-Rovner M. Concordance between comorbidity data from patient self-report interviews and medical record documentation. BMC Health Serv Res. 2008;8:85.
12. Echaiz JF, Cass C, Henderson JP, Babcock HM, Marschall J. Low correlation between self-report and medical record documentation of urinary tract infection symptoms. Am J Infect Control. 2015;43(9):983-986.
13. Yadav S, Kazanji N, Narayan KC, et al. Comparison of accuracy of physical examination findings in initial progress notes between paper charts and a newly implemented electronic health record [published online June 29, 2016]. J Am Med Inform Assoc. doi:10.1093/jamia/ocw067
14. Shachak A, Hadas-Dayagi M, Ziv A, Reis S. Primary care physicians’ use of an electronic medical record system: a cognitive task analysis. J Gen Intern Med. 2009;24(3):341-348.
15. Margalit RS, Roter D, Dunevant MA, Larson S, Reis S. Electronic medical record use and physician-patient communication: an observational study of Israeli primary care encounters. Patient Educ Couns. 2006;61(1):134-141.
17. Mamykina L, Vawdrey DK, Stetson PD, Zheng K, Hripcsak G. Clinical documentation: composition or synthesis? J Am Med Inform Assoc. 2012;19(6):1025-1031.
18. Bowman S. Impact of electronic health record systems on information integrity: quality and safety implications. Perspect Health Inf Manag. 2013;10:1c.
19. Weiskopf NG, Weng C. Methods and dimensions of electronic health record data quality assessment: enabling reuse for clinical research. J Am Med Inform Assoc. 2013;20(1):144-151.
22. Mangione CM, Lee PP, Pitts J, Gutierrez P, Berry S, Hays RD; NEI-VFQ Field Test Investigators. Psychometric properties of the National Eye Institute Visual Function Questionnaire (NEI-VFQ). Arch Ophthalmol. 1998;116(11):1496-1504.
23. Paz SH, Slotkin J, McKean-Cowdin R, et al. Development of a vision-targeted health-related quality of life item measure. Qual Life Res. 2013;22(9):2477-2487.
24. Schiffman RM, Christianson MD, Jacobsen G, Hirsch JD, Reis BL. Reliability and validity of the Ocular Surface Disease Index. Arch Ophthalmol. 2000;118(5):615-621.
25. Holm S. A simple sequentially rejective multiple test procedure. Scand J Stat. 1979;6(2):65-70.
27. Barbara AM, Loeb M, Dolovich L, Brazil K, Russell M. Agreement between self-report and medical records on signs and symptoms of respiratory illness. Prim Care Respir J. 2012;21(2):145-152.
28. Pakhomov SV, Jacobsen SJ, Chute CG, Roger VL. Agreement between patient-reported symptoms and their documentation in the medical record. Am J Manag Care. 2008;14(8):530-539.
29. Stengel D, Bauwens K, Walter M, Köpfer T, Ekkernkamp A. Comparison of handheld computer-assisted and conventional paper chart documentation of medical records: a randomized, controlled trial. J Bone Joint Surg Am. 2004;86-A(3):553-560.
31. Sommer A. The utility of “big data” and social media for anticipating, preventing, and treating disease. JAMA Ophthalmol. 2016;134(9):1030-1031.
33. Cella D, Yount S, Rothrock N, et al; PROMIS Cooperative Group. The Patient-Reported Outcomes Measurement Information System (PROMIS): progress of an NIH Roadmap cooperative group during its first two years. Med Care. 2007;45(5)(suppl 1):S3-S11.