Figure 1. A, Flowchart of the electronic Early Treatment Diabetic Retinopathy Study (E-ETDRS) Electronic Visual Acuity (EVA) test for study participants in another Diabetic Retinopathy Clinical Research Network (DRCR.net) protocol. B, Flowchart of E-ETDRS EVA testing for study participants who did not participate in another DRCR.net protocol. AR-EVA score indicates EVA letter score after autorefraction; MR-EVA score, EVA letter score after manual refraction; MR-EVAsuppl score, supplemental EVA letter score after the DRCR.net manual refraction.
Figure 2. Bland-Altman plot of the difference in visual acuity (letter score) between 2 Electronic Visual Acuity tests that were performed on 878 eyes after manual refraction. The solid reference line indicates the median; the top and bottom dashed lines indicate the 5th and 95th percentiles, respectively.
Figure 3. Bland-Altman plot of the difference in visual acuity (letter score) between an Electronic Visual Acuity test that was performed on 878 eyes after manual refraction and an Electronic Visual Acuity test that was performed on 878 eyes after autorefraction. The solid reference line indicates the median; the top and bottom dashed lines indicate the 5th and 95th percentiles, respectively.
Figure 4. Bland-Altman plot of the difference in visual acuity (letter score) between an Electronic Visual Acuity test that was performed on 335 eyes after manual refraction by Topcon 8000 series machines and an Electronic Visual Acuity test that was performed on 335 eyes after autorefraction by Topcon 8000 series machines. The solid reference line indicates the median; the top and bottom dashed lines indicate the 5th and 95th percentiles, respectively.
Sun JK, Qin H, Aiello LP, et al; Diabetic Retinopathy Clinical Research Network. Evaluation of visual acuity measurements after autorefraction vs manual refraction in eyes with and without diabetic macular edema. Arch Ophthalmol. Published online December 12, 2011. doi:10.1001/archophthalmol.2011.377.
eTable 1. Autorefraction Versus Manual Refraction—Spherical Equivalent
eTable 2. Autorefraction Versus Manual Refraction—Refraction Difference
eTable 3. Comparison of Manual Refraction With Autorefraction and Supplemental Manual Refraction Visual Acuity Letter Score by Subgroups
eTable 4. Distribution of Manual Refraction, Autorefraction, and Supplemental Manual Refraction Visual Acuity Approximate Snellen Equivalent (Letter Score)
This supplementary material has been provided by the authors to give readers additional information about their work.
Sun JK, Qin H, Aiello LP, Melia M, Beck RW, Andreoli CM, Edwards PA, Glassman AR, Pavlica MR; for the Diabetic Retinopathy Clinical Research Network. Evaluation of Visual Acuity Measurements After Autorefraction vs Manual Refraction in Eyes With and Without Diabetic Macular Edema. Arch Ophthalmol. 2012;130(4):470-479. doi:10.1001/archophthalmol.2011.377
Author Affiliations: Beetham Eye Institute and Research Section, Joslin Diabetes Center (Drs Sun and Aiello), and Harvard Vanguard Medical Associates (Dr Andreoli), Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts; Jaeb Center for Health Research, Tampa, Florida (Mss Qin and Melia, Dr Beck, and Mr Glassman); Department of Ophthalmology and Eye Care Services, Henry Ford Health System, Detroit, Michigan (Dr Edwards); and Family Eye Group, Lancaster, Pennsylvania (Dr Pavlica).
Objective To compare visual acuity (VA) scores after autorefraction vs manual refraction in eyes of patients with diabetes mellitus and a wide range of VAs.
Methods The letter score from the Electronic Visual Acuity (EVA) test from the electronic Early Treatment Diabetic Retinopathy Study was measured after autorefraction (AR-EVA score) and after manual refraction (MR-EVA score), which is the research protocol of the Diabetic Retinopathy Clinical Research Network. Testing order was randomized, study participants and VA examiners were masked to refraction source, and a second EVA test using an identical supplemental manual refraction (MR-EVAsuppl score) was performed to determine test-retest variability.
Results In 878 eyes of 456 study participants, the median MR-EVA score was 74 (Snellen equivalent, approximately 20/32). The spherical equivalent was often similar for manual refraction and autorefraction (median difference, 0.00; 5th-95th percentile range, −1.75 to 1.13 diopters). However, on average, the MR-EVA scores were slightly better than the AR-EVA scores, across the entire VA range. Furthermore, the variability between the AR-EVA scores and the MR-EVA scores was substantially greater than the test-retest variability of the MR-EVA scores (P < .001). The variability of differences was highly dependent on the autorefractor model.
Conclusions Across a wide range of VAs at multiple sites using a variety of autorefractors, VA measurements tend to be worse with autorefraction than manual refraction. Differences between individual autorefractor models were identified. However, even among autorefractor models that compare most favorably with manual refraction, VA variability between autorefraction and manual refraction is higher than the test-retest variability of manual refraction. The results suggest that, with current instruments, autorefraction is not an acceptable substitute for manual refraction for most clinical trials with primary outcomes dependent on best-corrected VA.
Visual acuity (VA) is a common outcome measure in clinical research for diabetic eye disease, and as a measure of visual function, it is one of a handful of well-accepted primary end points for new drug registration with the US Food and Drug Administration.1 For many years, clinical research studies have used the Early Treatment Diabetic Retinopathy Study (ETDRS) testing method for standardizing refraction and subsequent measurement of VA.2 However, this method requires substantial investment in training and certification of refractionists, and the procedure itself can be time-consuming, with substantial costs accrued over the course of a large phase 2 or 3 study owing to the efforts of all the associated personnel. Thus, an acceptable, less time-consuming alternative to the rigorous ETDRS refraction procedure might result in substantial savings of cost and time for clinical trials in diabetic retinopathy. It might also improve the recruitment and retention of participants in clinical trials because of shorter and less technically burdensome clinic visits.
One potential alternative to the ETDRS technique of manual refraction is the technique of autorefraction,3 which utilizes a computer-controlled device to provide an objective measure of an individual's refractive error without the need for a skilled refractionist. Since first being described and validated against manual refraction in the early 1970s,4-7 autorefraction has come into widespread clinical use because of the ease and speed of semiautomated autorefractors, the lack of need for a trained refractionist, and its commercial availability.
In clinical trials, the role of autorefraction has been limited to providing starting information for subsequent manual refraction. However, results from a recent single-site study8 sponsored by the Diabetic Retinopathy Clinical Research Network (DRCR.net) suggest that, with certain devices, autorefraction may be an acceptable substitute for manual refraction in obtaining best-corrected VA in eyes of patients with diabetes mellitus. The study8 was performed at a tertiary referral center for diabetes care and enrolled 216 eyes with varying degrees of diabetic retinopathy and VA (20/16 to 20/800 measured with the Electronic Visual Acuity [EVA] test [electronic ETDRS]) using a single autorefractor type. Refractive errors measured by autorefraction and manual refraction were relatively similar, with a median vector dioptric difference (VDD) of 0.71 diopters (D) and a VDD of less than 1.00 D in 70% of eyes. On average, the EVA letter scores after manual refraction (MR-EVA scores) were slightly better than the EVA letter scores after autorefraction (AR-EVA scores), with a median difference (AR-EVA score − MR-EVA score) of −1 letter (25th-75th percentile range, −4 to 2 letters). Furthermore, the variability between the AR-EVA scores and the MR-EVA scores was similar to the test-retest variability of the MR-EVA score itself, and this similarity was present for all VA subgroups.
Although results from this pilot study8 suggested that autorefraction may be a feasible alternative to manual refraction, the study was relatively small, was performed only at a single center with a single autorefractor type, and did not focus specifically on participants with diabetic macular edema (DME), which is a common inclusion criterion for trials in diabetic retinopathy. Our study reports results from a substantially larger DRCR.net-sponsored multicenter study designed to compare VA scores obtained after autorefraction with those obtained after manual refraction in patients with and without center-involved DME across a diverse range of clinical sites, autorefractors, and certified refractionists with varying levels of experience.
Our study was conducted at 26 sites participating in the DRCR.net. The study protocol was approved by the institutional review board of each site, and each study participant gave verbal or written informed consent for participation in the study.
To be eligible for participation, a study participant was required to be at least 18 years old with type 1 or type 2 diabetes mellitus and to have at least 1 eye with (1) an optical coherence tomography central subfield thickness of 250 μm or greater, (2) a Snellen equivalent VA of 20/400 or better, and (3) DME as the primary cause of any decreased vision.
Each participant provided his or her medical history, which was also extracted from available medical records. Data collected included age, gender, race/ethnicity, diabetes type (1 or 2), and concomitant ocular conditions that might contribute to decreased VA.
All study procedures were performed during a single study visit by an experienced examiner certified by the DRCR.net for VA testing and/or refraction, and data were recorded on standardized forms. All study participants had 3 EVA tests performed on each eye: 1 EVA measurement performed using the refraction from an autorefractor and 2 EVA measurements performed using the DRCR.net manual refraction. Each EVA test was performed first on the right eye and then on the left eye. The VA technician was masked to the source of refraction in 92% and 93% of cases when the source was DRCR.net refraction and autorefraction, respectively (the technician was occasionally unmasked when only 1 technician was available to perform the testing). Tests were performed prior to dilation and prior to measurement of intraocular pressure. There was approximately a 5-minute rest period between each test.
For 112 study participants who participated in another DRCR.net protocol, the DRCR.net manual refraction and EVA measurement using the manual refraction (MR-EVAsuppl score) were performed according to the original protocol. A second refraction was then performed using an autorefractor. Two additional EVA measurements were then completed by a VA technician: (1) a second EVA measurement using the DRCR.net manual refraction (MR-EVA score) and (2) an EVA measurement using the autorefraction (AR-EVA score). The order of the 2 VA tests was randomized (Figure 1A), and study participants were masked during each VA test as to which refraction (autorefraction or manual refraction) was being used. The VA examiner was masked to the source of the refraction in 81% and 85% of cases when the source was DRCR.net refraction and autorefraction, respectively.
For 370 study participants who did not participate in another DRCR.net protocol or who participated in another DRCR.net protocol without a protocol refraction or EVA test scheduled as part of their current visit, a DRCR.net manual refraction was performed in each eye in addition to a refraction using an autorefractor. Visual acuity was measured once in each eye using the DRCR.net manual refraction (MR-EVA score) and once in each eye using the autorefraction (AR-EVA score). The order of the 2 EVA tests was randomized. The VA examiner was masked to the source of the refraction in 95% and 95% of cases when the source was DRCR.net refraction and autorefraction, respectively. A second, unmasked EVA measurement using the DRCR.net manual refraction (MR-EVAsuppl score) was completed last (Figure 1B).
Twenty-two different autorefractor models were used in our study, based on the availability at participating sites. The autorefractor models included in our study are listed in Table 1. The 3 most common types of autorefractors included those manufactured by Marco Nidek, Nidek, and Topcon. Of the Topcon models, the 8000 series autorefractors differed from the other autorefractor models included in our study in that they utilize a rotary prism technology that theoretically enables measurements of a wider retinal area through a smaller diameter pupil.
Eyes evaluated using portable autorefractors (eg, Nikon Retinomax), outdated technology that is no longer commercially available (eg, Xinyuan fa-6000 and Topcon KR3000), or missing either of the MR-EVA scores were excluded from the analyses (59 of 964 eyes [6%]). The refractive error transformation method originally described by Long9 and later modified by Harris10 and Thibos11 was used for the comparison between autorefraction and manual refraction. Refractive error data were transformed into a spherical equivalent and 2 Jackson cross-cylinder powers with axes at 180° and 45° using the Thibos method, and the VDD was calculated from these transformations with a modified formula that scales the unit vector to match that of the Harris method. In addition, autorefraction and manual refraction were compared and were considered the “same” if the spherical equivalent difference was 0.25 D or less and the cylinder difference was 0.25 D or less; “similar” if the spherical equivalent difference was greater than 0.25 D but no more than 0.50 D, the cylinder difference was 0.50 D or less, and the axis difference was 10° or less; “very different” if the spherical equivalent difference was 2.00 D or greater, the cylinder difference was 1.50 D or greater, or the axis difference was 20° or greater; and “moderately different” if none of these criteria were met.
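The transformation above can be sketched in code. This is a minimal illustration, not the study software: the function names are invented for this example, and the √2 scaling is one reading of the “modified formula that scales the unit vector to match that of the Harris method” (it makes the power-vector norm equal the Frobenius norm of the Harris dioptric power matrix).

```python
import math

def power_vector(sphere, cylinder, axis_deg):
    """Convert a sphere/cylinder/axis refraction into the Thibos power
    vector (M, J0, J45): the spherical equivalent plus 2 Jackson
    cross-cylinder components with axes at 180 and 45 degrees."""
    theta = math.radians(axis_deg)
    m = sphere + cylinder / 2.0                     # spherical equivalent
    j0 = -(cylinder / 2.0) * math.cos(2 * theta)    # cross-cylinder at 180 deg
    j45 = -(cylinder / 2.0) * math.sin(2 * theta)   # cross-cylinder at 45 deg
    return m, j0, j45

def vector_dioptric_difference(refr_a, refr_b):
    """Vector dioptric difference (VDD) between 2 refractions, each given
    as (sphere, cylinder, axis); scaled by sqrt(2) to match the Harris
    dioptric-distance norm (an assumption about the modified formula)."""
    ma, j0a, j45a = power_vector(*refr_a)
    mb, j0b, j45b = power_vector(*refr_b)
    return math.sqrt(2.0) * math.sqrt(
        (ma - mb) ** 2 + (j0a - j0b) ** 2 + (j45a - j45b) ** 2
    )

# The same correction written in minus- and plus-cylinder notation maps to
# one power vector, so its VDD is (numerically) zero.
print(vector_dioptric_difference((-1.00, -0.50, 90), (-1.50, 0.50, 180)))
```

One advantage of comparing refractions in this vector space is that it avoids the sign and axis ambiguities of raw sphere/cylinder/axis notation.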
The relationship of the differences (MR-EVAsuppl score − MR-EVA score and AR-EVA score − MR-EVA score) with VA was explored using Bland-Altman plots.12 Distributions of differences in VA and refractive error according to the refraction method are described using percentiles, rather than limits of agreement, because the differences were not normally distributed. To compute the Bland-Altman coefficient of repeatability, the standard method was used, but differences greater than 3 times the initial Bland-Altman coefficient of repeatability were truncated at this value, and the coefficient was then recalculated. A total of 5 MR-EVAsuppl score − MR-EVA score differences and 5 AR-EVA score − MR-EVA score differences were truncated. Because 2 measures of VA based on the gold standard manual refraction were available, the underlying VA was estimated by averaging the 2 MR-EVA scores rather than by averaging the MR-EVA score and the AR-EVA score. Comparisons of differences among subgroups were made using regression models with generalized estimating equations to account for the correlation of data from 2 eyes of the same participant, adjusting for underlying VA and autorefractor model as potential confounders. Absolute differences between the AR-EVA score and the MR-EVA score were compared with absolute test-retest differences between the MR-EVA score and the MR-EVAsuppl score using a paired t test with generalized estimating equations to account for correlation between eyes. A rank-based transformation for normality (van der Waerden scores) was applied to the differences prior to these generalized estimating equation analyses. For subgroup comparisons of MR-EVAsuppl score − MR-EVA score differences and AR-EVA score − MR-EVA score differences, we used the t test with generalized estimating equations, again adjusting for underlying VA and autorefractor model as potential confounders. All reported P values are 2-sided and unadjusted for multiple testing.
In view of the large number of variables evaluated in the subgroup analyses, only associations with P < .01 were considered to be unlikely due to chance. SAS version 9.1 (SAS Institute) was used for all analyses.
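The truncated coefficient-of-repeatability calculation described above can be sketched as follows. This is an illustrative reconstruction, not the study code: the use of 1.96 × SD for the standard coefficient, and symmetric clipping at ±3 × the initial coefficient, are assumptions about details the text does not spell out.

```python
import statistics

def coefficient_of_repeatability(diffs):
    """Standard Bland-Altman coefficient of repeatability, taken here as
    1.96 x the standard deviation of the paired differences (the
    half-width of the interval expected to contain 95% of them)."""
    return 1.96 * statistics.stdev(diffs)

def truncated_cor(diffs):
    """Recompute the coefficient after truncating outlying differences
    at 3 x the initial coefficient, as described in the text."""
    cap = 3.0 * coefficient_of_repeatability(diffs)
    clipped = [max(-cap, min(cap, d)) for d in diffs]   # symmetric clipping
    return coefficient_of_repeatability(clipped)
```

Truncating a handful of extreme differences (5 per comparison in this study) keeps the coefficient from being dominated by outliers while leaving the bulk of the distribution untouched.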
A total of 905 eligible eyes from 458 individuals with diabetes who enrolled in our study were included in the analyses. Of these eyes, 27 eyes (3%) underwent autorefraction that resulted in “no target” readings, leaving 878 eyes from 456 study participants included in the final analyses. The inability to obtain an autorefraction appeared to be associated with worse VA (0%, 1%, 4%, and 8%, respectively, when the manual refraction VA was 20/20 or better, 20/25 to 20/32, 20/40 to 20/80, and 20/100 or worse; P < .001). No associations were detected between age, pupil size, lens status, or autorefractor type and the ability to autorefract successfully.
The mean (SD) age of study participants was 63 (11) years, and 57% were men. The median MR-EVA Snellen equivalent was 20/32, being 20/20 or better in 158 of 878 eyes (18%), 20/25 to 20/32 in 279 of 878 eyes (32%), 20/40 to 20/80 in 307 of 878 eyes (35%), and 20/100 or worse in 134 of 878 eyes (15%). The spherical equivalent of refractive error from the manual refraction ranged from −9.38 to 6.88 D. Additional study participant characteristics and ocular characteristics are presented in Table 2.
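The approximate Snellen equivalents reported alongside letter scores follow the usual E-ETDRS convention, in which 85 letters corresponds to 20/20 and each 5-letter line is 0.1 logMAR. A minimal sketch of that conversion, assuming this standard relation (the function names are illustrative):

```python
def letters_to_logmar(letter_score):
    """Approximate conversion from an E-ETDRS letter score to logMAR:
    85 letters = logMAR 0.0 (20/20); each 5 letters = 0.1 logMAR."""
    return (85 - letter_score) / 50.0

def letters_to_snellen_denominator(letter_score):
    """Approximate Snellen denominator (the x in 20/x) for a score."""
    return 20 * 10 ** letters_to_logmar(letter_score)

# A median letter score of 74 gives logMAR 0.22, a denominator of about
# 33, consistent with the "approximately 20/32" reported in the text.
print(round(letters_to_snellen_denominator(74)))
```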
Refractive errors measured by autorefraction and those measured by manual refraction were similar (median VDD, 0.79 D; 5th-95th percentile range, 0.25-3.55 D), with the VDD differing by 1.00 D or greater in 337 eyes (38%) and by 2.00 D or greater in 122 eyes (14%) (Table 3). The VDD was larger in eyes with worse VA (P < .001) or with a larger refractive error (P < .001). The autorefraction spherical equivalent was similar to the manual refraction spherical equivalent (median difference, 0.00 D; 5th-95th percentile range, −1.75 to 1.13 D), although agreement between the spherical equivalent from autorefraction and the spherical equivalent from manual refraction was worse in eyes with worse VA and in eyes with a higher refractive error (P < .001 for both; eTable 1).
Of all the autorefractors, the Topcon 8000 series (Topcon RM 8000, KR8000, KR 8800, and KR 8900 models) generated autorefractions most similar to manual refractions. “Same” or “similar” autorefractions were generated for 55% of eyes with a Topcon 8000 series machine compared with 42% of eyes with the Marco Nidek autorefractor, 27% of eyes with the Nidek autorefractor, and 39% of eyes with other autorefractors (eTable 2).
The median difference between the 2 MR-EVA scores (the MR-EVAsuppl score − the MR-EVA score) for all 878 eyes was 0 letters (5th-95th percentile range, −5 to 7 letters), and the median absolute difference was 2 letters (5th-95th percentile range, 0-9 letters) (Figure 2 and Table 4). Thirteen percent of test-retest scores differed by 5 or more letters but less than 10 letters, 2% differed by 10 or more letters but less than 15 letters, and 2% differed by 15 or more letters (Table 4). The overall Bland-Altman coefficient of repeatability (the half-width of the interval containing 95% of test-retest differences) was 9 letters and was larger in eyes with worse VA (5-13 letters; Table 4). Greater absolute differences in test-retest scores were associated with worse VA (P < .001), although the median letter difference between MR-EVA scores (ie, MR-EVA and MR-EVAsuppl) was not statistically different among VA subgroups (P = .18).
Overall, the EVA scores obtained with manual refraction were slightly better than those obtained with autorefraction. The median difference between the 2 EVA measurements (AR-EVA scores − MR-EVA scores) was −1 letter (5th-95th percentile range, −17 to 7 letters) (Figure 3 and Table 4). Of the 878 eyes included in the analysis, 545 (62%) were within −4 to 4 letters, 247 (28%) had an EVA score that was better by 5 or more letters after manual refraction, and 86 (10%) had an EVA score that was better by 5 or more letters after autorefraction. The median absolute difference between AR-EVA scores and MR-EVA scores was 4 letters (5th-95th percentile range, 0-20 letters), with 23% of measurements differing by 5 or more letters but by less than 10 letters, 7% differing by 10 or more letters but by less than 15 letters, and 8% differing by 15 or more letters (Table 4). Greater differences and greater absolute differences between the 2 EVA measurements (ie, AR-EVA scores and MR-EVA scores) were associated with worse VAs (P < .001 and P = .02, respectively).
The absolute differences between the AR-EVA scores and the MR-EVA scores (Table 4) were slightly larger than the differences between the MR-EVA scores and the MR-EVAsuppl scores (P < .001), with the median absolute difference between the AR-EVA scores and the MR-EVA scores ranging from 3 to 4 letters compared with the median absolute difference between the MR-EVA scores ranging from 1 to 2 letters according to VA level (Figure 3 and Table 4). A greater absolute difference between the AR-EVA scores and the MR-EVA scores compared with the absolute difference between the 2 MR-EVA scores was present for all VA subgroups.
Of the 122 eyes with a VDD of 2.00 D or greater, 76 (62%) had an MR-EVA score of 5 or more letters better than the AR-EVA score, and 13 (11%) had an AR-EVA score of 5 or more letters better than the MR-EVA score, whereas 58 (48%) had an MR-EVA score of 10 or more letters better than the AR-EVA score, and 6 (5%) had an AR-EVA score of 10 or more letters better than the MR-EVA score.
Differences and absolute differences between AR-EVA scores and MR-EVA scores were not associated with study participant age, gender, race, DME severity, primary cause of vision loss, pupil size, lens status, or refractive error (eTable 3). However, both differences and absolute differences between AR-EVA scores and MR-EVA scores were highly associated with the type of autorefractor used (P < .001) (Table 5). Of all the autorefractor models included in our study, autorefractors from the Topcon 8000 series generated refractions that yielded VA results most similar to those found using manual refraction. In general, the results with the Topcon 8000 series models were better than the results with other autorefractor models but still worse than the test-retest variability of the MR-EVA scores alone. The median difference between the AR-EVA scores and the MR-EVA scores in the Topcon 8000 series group was 0 letters (5th-95th percentile range, −10 to 7 letters), and the median absolute difference between the AR-EVA scores and the MR-EVA scores in this group was 3 letters (5th-95th percentile range, 0-13 letters) (Figure 4). Test-retest absolute differences for MR-EVA scores of 10 or more letters but less than 15 letters and of 15 letters or more were present in 2% and 2% of eyes, respectively. In the Topcon group, absolute differences of 10 or more letters but less than 15 letters and of 15 or more letters between the AR-EVA scores and the MR-EVA scores still were present for 5% and 4% of eyes, respectively, compared with 8% and 10% of eyes tested with other autorefractor models. The results obtained with the Topcon 8000 series machines were generally consistent between the 8 clinical sites that used these models, although one of these sites appeared to have a greater variability than the others between AR-EVA scores and MR-EVA scores, accounting for 4 of the 5 observations with an AR-EVA and MR-EVA absolute difference of 30 letters or more.
The training and certification of examiners to accurately refract study participants for the determination of best-corrected VA is both time-consuming and expensive. The rigorous manual refraction protocol currently in use also is time-consuming to perform and lengthens the duration of visits for study participants. Thus, the ability to substitute an automated refraction for manual refraction could streamline study visits and result in substantial savings of time and cost for clinical staff. However, our results, obtained across a broad range of sites with a large variety of autorefractors, do not support the general substitution of autorefraction for a standardized manual refraction for most clinical trial protocols at this time.
Although differences in performance were noted between autorefractor models, with the Topcon 8000 series autorefractors performing on average most similarly to manual refraction, there still was wide variability among results generated by the certified VA examiners using these models. Thus, even with this model, the discrepancies between the MR-EVA scores and the AR-EVA scores do not support using the AR-EVA score in lieu of the MR-EVA score routinely for clinical trial purposes. Topcon 8000 series autorefractors differ from the other autorefractor models included in our study in that they utilize a rotary prism technology enabling evaluation of a wider retinal area through a smaller diameter pupil. It is conceivable that the differences in autorefractor performance seen in this study are tied to this difference in autorefractor hardware and that further improvements in autorefraction are possible. It is unlikely that differences between autorefractors were due to differences in performance across clinical sites because, overall, there was no substantial difference in variability of VA measurements among the individual sites that used the Topcon 8000 series machines.
If substitution of autorefraction for manual refraction is desired, these results suggest that some types of clinical trials with specific outcomes or treatment algorithms may lend themselves more readily than others to this approach. To some extent, the increased variability of VA measurements associated with autorefraction could be accounted for in study design by increasing the sample size (eTable 4). However, the wide range of differences seen between vision tested after autorefraction and vision tested after manual refraction would make autorefraction a poor substitute for manual refraction in clinical trials in which treatment decisions for individual patients are driven by relatively small changes in VA (eg, 5 letters). Autorefraction might be a more feasible substitute for manual refraction in studies in which VA is evaluated solely for determination of outcomes across a large study population or in which VA is a secondary or tertiary outcome variable. Even in these studies, however, careful consideration would have to be given to the value of VA for assessing adverse events.
The results from our study also confirm that there is a high level of test-retest reliability of EVA measurements in eyes with DME. The EVA test was performed twice with the same manual refraction in our study to serve as the benchmark for comparison of AR-EVA scores and MR-EVA scores. The test-retest variability of the MR-EVA scores for eyes with DME in our study was similar to that for eyes without DME. Furthermore, in our study, the results for the 81% of eyes with DME (half-width of 2-sided CIs: 9 letters, with 83% and 96% of differences within 5 and 10 letters, respectively) were comparable to those of a previous study13 in which only a small proportion (10%) of participants had DME (half-width of 2-sided CIs: 8 letters, with 89% and 98% of differences within 5 and 10 letters, respectively). The limitations of our study include the fact that a small percentage of VA examinations were performed by examiners who were not masked to the refraction source. However, the use of the electronic ETDRS testing protocol to measure VA minimizes the potential bias that could be attributed to an examiner, and no statistically significant difference was seen in absolute differences between VA based on autorefraction and VA based on manual refraction for the results from the more than 90% of VA examinations that were performed in a masked fashion vs those that were unmasked. In addition, although a relatively low percentage of eyes in our study had decreased VA, we were still able to determine a significant difference between AR-EVA scores and MR-EVA scores as measured in different VA subgroups. We were able to enroll substantial numbers of eyes to assess the 3 most commonly available autorefractor models (Marco Nidek, Nidek, and Topcon), but for the other, generally older autorefractor models, we were limited by their relatively small numbers. Finally, although our study evaluated the test-retest reliability of MR-EVA scores, it did not assess the test-retest reliability of AR-EVA scores.
It is possible that autorefraction, if highly repeatable, might be useful for following changes in VA over time.
In summary, these results demonstrate that VA after autorefraction tends to be slightly worse than VA after manual refraction. Variability between VA after autorefraction and VA after manual refraction is substantially higher than the test-retest variability of manual refraction. We also observed important differences among autorefractor models. Although, in general, autorefraction may not be an acceptable substitute for manual refraction, specific elements of study design, including an increased sample size and nonreliance of the treatment algorithm on small differences in VA, may allow limited substitution of autorefraction for manual refraction in some studies.
Correspondence: Haijing Qin, MS, Jaeb Center for Health Research, 15310 Amberly Dr, Ste 350, Tampa, FL 33647 (firstname.lastname@example.org).
Submitted for Publication: June 17, 2011; final revision received October 6, 2011; accepted October 14, 2011.
Published Online: December 12, 2011. doi:10.1001/archophthalmol.2011.377
Financial Disclosure: None reported.
Funding/Support: This study was supported by a cooperative agreement from the National Eye Institute and the National Institute of Diabetes and Digestive and Kidney Diseases (National Institutes of Health, Department of Health and Human Services grants EY14231, EY14229, and EY018817).
Clinical Sites That Participated in This Protocol
Sites are listed in order by number of participants enrolled in the study. The number of participants enrolled is noted in parentheses, preceded by the site location and the site name. Personnel are listed as I for study investigator, C for coordinator, V for visual acuity tester, and P for photographer.
Joslin Diabetes Center, Boston, Massachusetts (163): Jennifer K. Sun (I), Sabera T. Shah (I), Timothy J. Murtha (I), Paul G. Arrigg (I), Lloyd Paul Aiello (I), George S. Sharuk (I), Deborah K. Schlossman (I), Christopher M. Andreoli (I), Margaret E. Stockman (C, V), Troy Kieser (C, V), Julie A. Barenholtz (C, V), Sharon M. Eagan (V), Dorothy Tolls (V), John C. BuAbbud (V), Jamy Borbidge (V), Jerry D. Cavallerano (V), William Carli (V), Mary Ann Robertson (V), Mathew M. Coppola (V), Anna Fagan (V), Leslie L. Barresi (P); Florida Retina Consultants, Lakeland, Florida (70): Oren Z. Plous (I), Scott M. Friedman (I), Kelly A. Blackmer (C), Karen Sjoblom (C, V), Jolleen S. Key (C, V), Damanda A. Fagan (V), Allen McKinney (P), Kimberly A. Williamson (P); Family Eye Group, Lancaster, Pennsylvania (50): Michael R. Pavlica (I), Noelle S. Matta (C, V), Sara Weit (V), Cristina M. Brubaker (P); Elman Retina Group, PA, Baltimore, Maryland (24): Michael J. Elman (I), JoAnn Starr (C), Theresa M. Butcher (C), Dena Y. Salfer-Firestone (V), Pamela V. Singletary (V), Nancy Gore (V), Teresa Coffey (V); Department of Ophthalmology and Eye Care Services, Henry Ford Health System, Detroit, Michigan (24): Paul Andrew Edwards (I), Uday Desai (I), Janet Murphy (C, V), Alexa M. Lipman (C, V), Julianne Hall (C, V), Melanie A. Gutkowski (C, V), Dorena F. Wilson (V); Midwest Eye Institute, Indianapolis, Indiana (22): Raj K. Maturi (I), Laura A. Bleau (C, V), Carolee K. Novak (C, V); Department of Ophthalmology, Loma Linda University Health Care, Loma Linda, California (20): Joseph T. Fan (I), Mukesh Bhogilal Suthar (I), Michael E. Rauser (I), Cara L. Davidson (C, V), Kara E. Rollins (C, V), Blen D. Eshete (C, V), Gisela Santiago (V), William H. Kiernan (V); Charlotte Eye, Ear, Nose, and Throat Associates, PA, Charlotte, North Carolina (12): David Browning (I), Andrew Nicholas Antoszyk (I), Danielle R. Brooks (C, V), Angela K. Price (C, V), Sarah A. Ennis (V), Angella S. 
Karow (V); Retina and Vitreous of Texas, Houston, Texas (12): H. Michael Lambert (I), Pam S. Miller (C), Valerie N. Lazarte (V), Debbie Fredrickson (V); Penn State College of Medicine, Hershey, Pennsylvania (11): Ingrid U. Scott (I), Susan M. Chobanoff (C, V); The New York Eye and Ear Infirmary/Faculty Eye Practice, New York, New York (10): Ronald C. Gentile (I), Estuardo Alfonso Ponce (I), Anita Ou (C, V), Catiria Guerrero (V), Julie A. Paa (V), Violete Perez (V); Wilmer Eye Institute at Johns Hopkins, Baltimore, Maryland (8): Sharon D. Solomon (I), Susan Bressler (I), Diana V. Do (I), Adrienne Williams Scott (I), Mary Frey (C, V), Sandra West (C, V), Deborah Donohue (V); Valley Retina Institute, McAllen, Texas (8): Victor Hugo Gonzalez (I), Nehal R. Patel (I), Marcos Silva (C), Melody Cruz (C), Monica R. Cantu (V), Marlene Lopez (V), Rachel Rodriguez (V); John-Kenyon American Eye Institute, New Albany, Indiana (7): Howard S. Lazarus (I), Debra Paige Bunch (C, V), Angela D. Ridge (C), Kelly Booth (V); Department of Ophthalmology, University of North Carolina, Chapel Hill, North Carolina (6): Seema Garg (I), Travis A. Meredith (I), Odette M. Houghton (I), Cassandra J. Barnhart (C, V), Nabeel Barakat (C, V), Harpreet Kaur (C, V); Wake Forest University Eye Center, Winston-Salem, North Carolina (6): Craig Michael Greven (I), M. Madison Slusher (I), Joan Fish (C, V), Lori N. Cooke (C, V), Cara Everhart (C, V); University of Illinois at Chicago Medical Center, Chicago, Illinois (5): Jennifer I. Lim (I), Michael P. Blair (I), Marcia Niec (C), Yesenia Ovando (V), Tametha Johnson (V); Paducah Retinal Center, Paducah, Kentucky (5): Carl W. Baker (I), Tracey M. Caldwell (C), Lynnette F. Lambert (C, V), Tracey R. Martin (V), Mary J. Palmer (V); Medical College of Wisconsin, Milwaukee, Wisconsin (4): Judy E. Kim (I), Kimberly E. Stepien (I), Dennis P. Han (I), Vesper V. 
Williams (C), Vicki Barwick (V), Judy Flanders (V); University of Rochester, Rochester, New York (4): David Allen DiLoreto (I), George Ogara (C, V), Malinda M. Goole (V), Terrance Schaefer (V); Medical Associates Clinic, PC, Dubuque, Iowa (3): Michael H. Scott (I), Philomina M. Wiegman (C), Thomas R. Dvorak (V), Marcia J. Moyle (P), Brenda L. Tebon (P); Southeastern Retina Associates, PC, Kingsport, Tennessee (2): Howard L. Cummings (I), Deanna Jo Long (C), Stacy Carpenter (V); Retina Vitreous Consultants, Pittsburgh, Pennsylvania (2): Karl R. Olsen (I), Tara L. Wilson (C), Kim Whale (V), Pamela Rath (I), Christina Schultz (V), David Steinberg (P), Heather Shultz (P); Retinal Consultants of San Antonio, San Antonio, Texas (2): Calvin E. Mein (I), Moises A. Chica (I), Lita Kirschbaum (C, V), Christopher Sean Weineke (P); Ophthalmic Consultants of Boston, Boston, Massachusetts (1): Trexler M. Topping (I), Lesley-Anne Freese (C), Jennifer L. Stone (V); Barnes Retina Institute, St. Louis, Missouri (1): Rajendra S. Apte (I), Kevin J. Blinder (I), Carolyn L. Walters (C, V), Lynda K. Boyd (V).
DRCR.net Coordinating Center
Jaeb Center for Health Research, Tampa, Florida (staff as of June 8, 2011): Adam R. Glassman (director and principal investigator), Roy W. Beck (executive director), Talat Almukhtar, Bambi J. Arnold, Brian B. Dale, Alyssa Baptista, Sharon R. Constantine, Simone S. Dupre, Allison R. Edwards, Meagan L. Huggins, Paula A. Johnson, Lee Anne Lester, Brenda L. Loggins, Emily B. Malka, Shannon L. McClellan, Michele Melia, Kellee M. Miller, Pamela S. Moke, Haijing Qin, Rosa Pritchard, Eureca Scott, Cynthia R. Stockdale.
Fundus Photograph Reading Center
University of Wisconsin–Madison, Madison, Wisconsin (staff as of June 8, 2011): Matthew D. Davis (director emeritus), Sapna Gangaputra (codirector), Ronald P. Danis (director and principal investigator), Larry Hubbard (associate director), James Reimers (lead color photography evaluator), Pamela Vargo (lead photographer), Ericka Moeller (digital imaging specialist), Dawn Myers (lead optical coherence tomography evaluator), Kristjan Burmeister (project manager), Vonnie Gamma (data management).
DRCR.net Operations Center
Johns Hopkins University School of Medicine, Baltimore, Maryland (staff as of June 8, 2011): Neil M. Bressler (network chair and principal investigator), Connie Lawson, Peggy R. Orr, Beth Wellman.
Susan B. Bressler (2009-present), Scott Friedman (2009-present), Carl W. Baker (2011-present), Ingrid U. Scott (2009-2010).
National Eye Institute
Eleanor Schron (2009-present), Donald F. Everett (2003-2006, 2007-2009), Päivi H. Miskala (2006-2007).
Raj K. Maturi (2009-present; chair 2010), Neil M. Bressler (2006-present; chair 2006-2008), Lloyd Paul Aiello (2002-present; chair 2002-2005), Carl W. Baker (2009-present), Roy W. Beck (2002-present), Susan B. Bressler (2009-present), Alexander J. Brucker (2009-present), Kakarla V. Chalam (2009-present), Ronald P. Danis (2004-present), Matthew D. Davis (2002-present), Michael J. Elman (2006-present; chair 2009), Frederick L. Ferris III (2002-present), Scott Friedman (2007-present), Adam R. Glassman (2005-present), Joseph Googe Jr (2009-present), Eleanor Schron (2009-present), JoAnn Starr (2009-present), Jennifer K. Sun (2009-present). Prior members: Andrew N. Antoszyk (2009), Abdhish Bhavsar (2007-2008), David M. Brown (2006-2007), David J. Browning (2005-2006), Donald F. Everett (2002-2009), Joan Fish (2008-2009), Andreas Lauer (2007-2008), Kim McLeod (2002-2006), Päivi H. Miskala (2005-2007), Cynthia J. Grinnell (2006-2007), Ingrid U. Scott (2009-2010).