To assess the validity of an algorithm for identifying patients with diabetic macular edema (DME) using International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnosis codes in administrative billing data from a convenience sample of physician offices.
A convenience sample of 12 general ophthalmologists and 10 retina specialists applied prespecified algorithms based on ICD-9-CM diagnosis codes to the billing claims of their practices and selected the associated medical records. Four ophthalmologists abstracted data from the medical records, which were then compared with the coded diagnoses. Main outcome measures were sensitivity, specificity, and the κ statistic for the DME algorithm (a combination of codes 250.xx and 362.53), treating medical record documentation of DME as the standard criterion.
The DME algorithm had a sensitivity of 0.88 and a specificity of 0.96 for identifying DME. Excellent agreement was noted between the algorithm and the medical records (κ = 0.84). The algorithm performed less well in identifying patients with a diagnosis of clinically significant DME (sensitivity, 0.86; specificity, 0.84; κ = 0.64).
The results of this pilot study suggest that patients with DME can be identified accurately in claims data using ICD-9-CM diagnosis codes. Application of this algorithm could improve investigations of disease prevalence and disease burden and provide an efficient means of assessing care and interventions.
Vision loss related to diabetic retinopathy may be caused by vitreous hemorrhage, macular edema, macular ischemia, or tractional retinal detachment. Diabetic retinopathy is the leading cause of blindness in working-age populations in developed countries, and diabetic macular edema (DME) is the leading cause of any degree of visual impairment in patients with diabetes mellitus. However, current understanding of the epidemiology and disease burden of DME is limited. Findings from the Wisconsin Epidemiologic Study of Diabetic Retinopathy suggest that 13% to 25% of patients with diabetes mellitus develop DME in 10 years, with a point prevalence ranging from 2% to 6%.1 Data from other populations are sparse.
Administrative claims databases have been used to develop reliable estimates of the incidence of several chronic ophthalmic conditions including glaucoma, macular degeneration, and diabetic retinopathy.2 However, the accuracy of claims data in identifying patients with DME has not been assessed or validated, to our knowledge, and has been limited by the lack of an International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) code for DME before October 1, 2005.
The objective of this pilot study was to assess the validity of an algorithm for identifying patients with DME using ICD-9-CM diagnosis codes in administrative billing data from a convenience sample of physician offices. If billing data from physician offices are accurate, then administrative databases can be used with confidence in efforts to assess disease prevalence and, ultimately, disease burden and the benefits of treatment of diabetes mellitus and DME.
Of 43 general ophthalmologists and retina specialists invited to participate in the study, 22 physicians (51%) agreed. We asked the participants to search their computerized billing data for patients seen between January 1, 2001, and July 31, 2003, and to randomly select 10 patients using 4 coding algorithms (Table 1). The algorithms, based on previous work by Sloan et al,2 rely on ICD-9-CM diagnosis codes available during calendar years 2001 through 2003. We hypothesized that an algorithm that combined codes for diabetes mellitus and cystoid macular edema (250.xx and 362.53), hereafter termed “the DME algorithm,” could be used to accurately identify patients with DME and that the other 3 algorithms would not be useful in identifying patients with DME. We used practice-based billing data so that we could generalize results across the mix of payers in the practices, as opposed to the approach used in previous studies in which selection was based on the payers' claims databases (eg, Medicare).
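The code-matching rule at the heart of the DME algorithm can be sketched in a few lines. The following Python fragment is an illustration under our own assumptions (the function name and the representation of a patient's claims as a set of code strings are ours), not the study's actual billing-system query:

```python
# Illustrative sketch of the DME algorithm: a patient is flagged as
# DME-positive when the billing claims contain both a diabetes mellitus
# code (250.xx) and the cystoid macular edema code (362.53).

def meets_dme_algorithm(claim_codes):
    """Return True if a set of ICD-9-CM codes satisfies the DME algorithm."""
    has_diabetes = any(code.startswith("250.") for code in claim_codes)
    has_cme = "362.53" in claim_codes
    return has_diabetes and has_cme

# Example: type 2 diabetes plus cystoid macular edema
print(meets_dme_algorithm({"250.00", "362.53"}))  # True
# Diabetes with other retinal edema, but no 362.53
print(meets_dme_algorithm({"250.50", "362.01"}))  # False
```

In a claims database, the same rule would typically be expressed as a query joining a patient's diagnosis codes within the study window rather than as an in-memory set test.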
For each of the 10 patients from each practice, we asked the physicians to collect the medical records, including all handwritten office notes and any dictation, letters, or other correspondence sent or received, with regard to (1) the billing diagnosis visit; (2) the visit immediately preceding the billing diagnosis visit, if applicable; (3) the first office visit regardless of the reason for the visit; and (4) the first visit for which a qualifying clinical diagnosis was noted in the medical record regardless of whether this visit was the billing diagnosis visit. We asked the participating physicians to remove all identifiable patient information from the selected medical records, preserving only the month and year of each visit. We compensated the physicians $75 for each complete medical record. The Institutional Review Board of the Duke University Health System approved this study.
Data abstraction and analysis
Four ophthalmologists (S.B., P.M., I.J.S., and P.P.L.) on the study team reviewed the submitted medical records and abstracted the data. These reviewers were masked to physician identity, practice type (general ophthalmology or retina subspecialty), and practice setting (academic or community practice). For each medical record, the reviewers identified whether diabetes mellitus, DME, or clinically significant DME was documented on or before the billing diagnosis date. All data were entered into a commercially available database (Access 2002; Microsoft Corp, Redmond, Washington). A coinvestigator (P.P.L.) then reviewed (1) all medical records for which the billing data were identified using the DME algorithm but in which the reviewers did not find a diagnosis of DME; (2) all medical records for which the billing data were identified using the other 3 algorithms but in which the reviewers found a diagnosis of DME; and (3) a 10% random sample of medical records for which there were no such discrepancies.
We used basic descriptive statistics to calculate response rates by diagnostic category and participating physician. We calculated sensitivity, specificity, and the κ statistic for the DME algorithm, treating medical record documentation of the presence or absence of DME as the standard criterion. Sensitivity reflects the percentage of medical records containing a diagnosis of DME that were identified correctly using the DME algorithm. Specificity reflects the percentage of medical records not containing a diagnosis of DME that the DME algorithm correctly classified as not containing DME. The κ statistic measures the agreement between the DME algorithm and the medical record and corrects for the chance agreement that would be expected if the 2 were unrelated. We also calculated sensitivity, specificity, and the κ statistic for the DME algorithm in identifying clinically significant DME. We used commercially available software (SAS version 9.0; SAS Institute, Inc, Cary, North Carolina) for all analyses.
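To make these measures concrete, the following Python sketch computes sensitivity, specificity, and Cohen's κ from a generic 2 × 2 table of algorithm results against medical record documentation. The counts in the example are invented for illustration and are not the study's data:

```python
# Sensitivity, specificity, and Cohen's kappa from a 2x2 table.
# tp: algorithm-positive, record-positive; fp: algorithm-positive, record-negative
# fn: algorithm-negative, record-positive; tn: algorithm-negative, record-negative
# The counts passed below are illustrative, not the study's data.

def diagnostic_stats(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)   # share of DME records flagged by the algorithm
    specificity = tn / (tn + fp)   # share of non-DME records correctly not flagged
    p_observed = (tp + tn) / n     # raw agreement
    # agreement expected by chance if algorithm and record were unrelated
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (p_observed - p_chance) / (1 - p_chance)
    return sensitivity, specificity, kappa

sens, spec, kappa = diagnostic_stats(tp=60, fp=8, fn=9, tn=148)
print(round(sens, 2), round(spec, 2), round(kappa, 2))  # 0.87 0.95 0.82
```

Note that κ can be substantially lower than raw agreement when one category dominates, which is why it is reported alongside sensitivity and specificity.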
We received 225 medical records from 10 retina specialists and 12 general ophthalmologists. Approximately 9% of the medical records were from a single academic institution, another 20% were from the community-based practice sites of the academic institution, and the remaining medical records were from other academic institutions (19%) and specialty eye care centers (52%).
On average, each physician provided 11 (SD, 3.6) medical records. Six physicians were unable to provide 10 medical records, and 4 physicians provided more than 10. Twenty-six medical records (12%) were identified by the participating physicians using the algorithm for diabetes mellitus and no retinopathy; 95 (42%) using the codes for background diabetic retinopathy; 28 (12%) using the algorithm for proliferative diabetic retinopathy; and 76 (34%) using the DME algorithm.
The DME algorithm had a sensitivity of 0.88 (95% confidence interval [CI], 0.78-0.94) and a specificity of 0.96 (95% CI, 0.91-0.98) for identifying medical records that contained a diagnosis of DME (Table 2). There was excellent agreement between the DME algorithm and the medical record (κ = 0.84; 95% CI, 0.77-0.92). The DME algorithm performed less well in identifying medical records that contained a diagnosis of clinically significant DME, with a sensitivity of 0.86 (95% CI, 0.75-0.94), a specificity of 0.84 (95% CI, 0.78-0.90), and κ = 0.64 (95% CI, 0.53-0.75) (Table 3). The results did not differ significantly by practice setting.
In this preliminary study, we found that patients with DME can be accurately identified using ICD-9-CM diagnosis codes submitted by physicians in billing claims to payers. Treating the medical record as the standard criterion, the DME algorithm yielded a sensitivity of 0.88 and a specificity of 0.96 for identifying DME. Excellent agreement was noted between the algorithm and the medical record. The algorithm was less specific for clinically significant DME (0.84), which suggests that the algorithm identifies DME in general, not just clinically significant DME.
To our knowledge, this is the first study to validate an algorithm to identify DME in claims data. Javitt et al3 demonstrated more than 15 years ago that Medicare claims data could be used to accurately identify patients who had undergone cataract surgery. Studies in other fields have shown that the accuracy of claims data compared with medical record documentation varies depending on the source of the data, the comorbid condition, and whether that condition is likely to affect reimbursement.4,5 Previous studies suggest that these coding errors in claims databases may be higher than 20%.6-8 This study provides preliminary evidence that the coding of DME is accurate.
With the overall prevalence of diabetes mellitus expected to increase at an epidemic rate,9 the availability of good tools for studying complications of diabetes mellitus will become increasingly important. A low false-positive rate is important when using claims data to analyze the epidemiology of a disease or treatment patterns. As new therapies become available, having confidence in the identification of patients with DME will be vital to assessing a large range of issues associated with such treatment.
Although limited in their clinical detail, administrative data sets are relatively inexpensive to use and can capture information about a large number of patients over time. The DME algorithm that we tested will be particularly useful in identifying patients with DME that developed before the October 2005 adoption of a specific diagnosis code for DME (362.07). Even in analyses of more recent administrative data, the algorithm will be useful because the adoption of new diagnosis codes by physician practices may not occur quickly.
This study has some limitations. The validity of the DME algorithm is based on the premise that the coding ophthalmologist accurately diagnoses the presence or absence of DME. There may be overappreciation or underappreciation of DME and clinically significant DME, resulting in inaccurate classification of patients. However, such errors would apply to any current method of identifying patients that depends on physician diagnostic skills. Because DME, in particular, clinically significant DME, is classically a clinical diagnosis, this potential source of error will be present in any review system absent the use of an instrument definition of the condition (eg, optical coherence tomography or fluorescein angiography). The DME algorithm also requires that the clinical diagnosis of DME be reflected in the billing data. To the extent that retinal edema rather than cystoid macular edema is coded in the medical record, the algorithm may underestimate the prevalence of DME.
Another limitation of this study is the convenience sampling of ophthalmologists, which may overstate or understate the accuracy of ophthalmologists in general. Moreover, there may be important geographic differences in the coding of DME that we could not detect in this limited convenience sample. The 22 of 43 invited physicians who agreed to participate may have been more diligent in record keeping than those who declined, adding to the potential bias. However, we attempted to reduce bias and improve generalizability by identifying medical records of patients from both general ophthalmologists and retina specialists. These patients were seen at academic institutions and private practices across the United States. Furthermore, there is no a priori reason to suspect that the sites were substantially different from other similar practices in this country.
In conclusion, we found that patients with DME can be identified from administrative data. The algorithm described selects for patients with DME with high sensitivity and specificity. While the findings of this preliminary study are strong, a larger sampling will be needed to verify the results.
Correspondence: Lesley H. Curtis, PhD, Center for Clinical and Genetic Economics, Duke Clinical Research Institute, PO Box 17969, Durham, NC 27715 (firstname.lastname@example.org).
Submitted for Publication: November 7, 2007; accepted November 9, 2007.
Author Contributions: Drs Curtis and P. P. Lee had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.
Financial Disclosure: Dr Bearelly reported receiving an honorarium for consulting from Novartis AG. Dr Suñer reported serving as a consultant for Allergan, Inc; Bausch & Lomb, Inc; Genentech, Inc; Optos PLC; OSI Pharmaceuticals, Inc; and Pfizer, Inc. Dr Curtis reported receiving research and salary support from Allergan, Inc; GlaxoSmithKline PLC; Eli Lilly & Co; Medtronic, Inc; Novartis AG; Ortho Biotech Products, LP; OSI Pharmaceuticals, Inc; Pfizer, Inc; and Sanofi-Aventis; and has made available online a detailed listing of financial disclosures (http://www.dcri.duke.edu/research/coi.jsp). Dr Schulman reported receiving research support and/or salary from Actelion Pharmaceuticals Ltd; Allergan, Inc; Amgen, Inc; Arthritis Foundation; Astellas Pharma, Inc; Bristol-Myers Squibb Co; Duke Endowment; Genentech, Inc; Inspire Pharmaceuticals, Inc; Johnson & Johnson; Kureha Corp; LifeMasters Supported SelfCare, Inc; Medtronic, Inc; Merck & Co, Inc; Nabi Biopharmaceuticals; National Patient Advocate Foundation; North Carolina Biotechnology Center; Novartis AG; OSI Pharmaceuticals, Inc; Pfizer, Inc; Hoffmann-La Roche, Inc; Sanofi-Aventis; Schering-Plough Corp; Scios, Inc; Tengion; Theravance, Inc; Thomson Healthcare; Vertex Pharmaceuticals, Inc; Wyeth; and the Yamanouchi USA Foundation; receiving personal income for consulting from Avalere Health; LifeMasters Supported SelfCare, Inc; McKinsey & Co; and the National Pharmaceutical Council; having equity in and serving on the board of directors of Cancer Consultants; having equity in Alnylam Pharmaceuticals, Inc; and having equity in and serving on the executive board of Faculty Connection LLC; and has made available online a detailed listing of financial disclosures (http://www.dcri.duke.edu/research/coi.jsp). Dr P. P. Lee reported receiving grant support from Alcon, Inc; Allergan, Inc; and Pfizer, Inc; serving as a consultant for Alcon, Inc; Allergan, Inc; Pfizer, Inc; and Genentech, Inc; receiving lecture fees from Alcon, Inc; Allergan, Inc; Pfizer, Inc; and Merck & Co, Inc; and owning stock in Pfizer, Inc; and Merck & Co, Inc.
Funding/Support: This study was supported by a research agreement between Allergan, Inc, and Duke University (Dr Curtis).
Role of the Sponsor: Allergan, Inc, had no role in the design and conduct of the study; collection, management, analysis and interpretation of the data; or preparation, review, or approval of the manuscript.
Additional Contributions: Damon M. Seils, MA, of Duke University provided editorial assistance and prepared the manuscript.
References

1. The Wisconsin Epidemiologic Study of Diabetic Retinopathy: XV, the long-term incidence of macular edema. Ophthalmology. 7-16.
2. Sloan et al. Estimates of incidence rates with longitudinal claims data. Arch Ophthalmol. 1462-1468.
3. Javitt et al. Accuracy of coding in Medicare part B claims: cataract as a case study. Arch Ophthalmol. 605-607.
4. Accuracy of ICD-9-CM codes for identifying cardiovascular and stroke risk factors. Med Care. 480-485.
5. Coding algorithms for defining comorbidities in ICD-9-CM administrative data. Med Care. 1130-1139.
6. Use of insurance claims databases to evaluate the outcomes of ophthalmic surgery. Surv Ophthalmol. 271-278.
7. The accuracy of Medicare's hospital claims data: progress has been made, but problems remain. Am J Public Health. 243-248.
8. Accuracy of diagnostic coding for Medicare patients under the prospective payment system [published correction appears in N Engl J Med. 1990;322(21):1540]. N Engl J Med. 352-355.
9. Impact of the population at risk of diabetes on the projections of diabetes burden in the United States: an epidemic on the way. Diabetologia. 934-940.