Figure 1. Kaplan-Meier estimate of the cumulative probability of cataract extraction (CE) from the time of randomization for the 2 initial treatment groups. The error bars represent 95% confidence intervals for each time-specific estimated probability of CE. Log-rank test P = .001.
Figure 2. Clinical and vision-specific quality-of-life information gathered in the 0- to 2.5-year period leading up to study eye cataract extraction and the 0.1- to 1.5-year period after CE for the 99 patients who underwent cataract extraction. A, Mean deviation score. B, Visual acuity score. C, Visual Activities Questionnaire (VAQ) score.
Musch DC, Gillespie BW, Niziol LM, Janz NK, Wren PA, Rockwood EJ, Lichter PR, for the Collaborative Initial Glaucoma Treatment Study (CIGTS) Group. Cataract Extraction in the Collaborative Initial Glaucoma Treatment Study: Incidence, Risk Factors, and the Effect of Cataract Progression and Extraction on Clinical and Quality-of-Life Outcomes. Arch Ophthalmol. 2006;124(12):1694-1700. doi:10.1001/archopht.124.12.1694
To study the incidence of and predictors for cataract extraction (CE) in patients with newly diagnosed glaucoma, the impact of CE on visual function, and changes in clinical and quality-of-life measures in the period surrounding CE.
A total of 607 patients were randomized to medical or surgical treatment for glaucoma at 14 centers and followed up for a median of 7.7 years. Vision-specific quality of life (VS-QOL) data were collected by telephone interview during follow-up. The occurrence of CE was the signal event. Risk factors were evaluated using survival analyses; changes from before to after CE were evaluated by paired t tests; and trends were estimated by loess regression.
During follow-up of 607 patients, CE took place in 99 study eyes. Initial surgery, older age, a more negative spherical equivalent, and a diagnosis of pseudoexfoliative glaucoma conferred a higher risk of CE. Visual field testing before and after CE showed the mean deviation improved but the pattern standard deviation worsened. The VS-QOL improved on most subscales.
Initial surgery places a patient with glaucoma at a higher risk of CE. The impact of CE on visual field indexes is mixed—the mean deviation improved but the pattern standard deviation worsened. Most, but not all, VS-QOL subscales were responsive both to the worsening of vision as cataract progressed before CE and to the acute improvement in vision after CE.
Reports on the effect of cataract on visual function have mainly focused on the impact that cataract extraction (CE) has on visual acuity (VA) and visual field (VF) indexes. While a beneficial VA impact is an expected and very consistent outcome, the reported effects on VF have been inconsistent. Some studies have concluded that VF indexes do not change substantially,1,2 although most studies have found that global VF indexes, like the mean deviation, show significant improvement after CE.3-11 The underlying severity of VF loss at the time of CE may lessen the extent of VF improvement that can be expected.4,10
The aims of this study were to evaluate (1) the incidence of and predictors for CE in patients with newly diagnosed glaucoma, (2) the short-term impact of CE on VA, VF, and vision-specific quality of life (VS-QOL), and (3) the extent to which these clinical and VS-QOL measures change in the years preceding and following CE.
The 607 patients enrolled in the Collaborative Initial Glaucoma Treatment Study (CIGTS) made up the cohort of patients included in this study. These newly diagnosed, previously untreated patients with phakic eyes and open-angle glaucoma were enrolled at 14 clinical centers, where institutional review board approval was obtained for the study. On obtaining written informed consent, patients were randomized to initial medication (n = 307) or initial trabeculectomy (n = 300) in the period from October 1993 to April 1997. Since some patients had only 1 eye eligible for treatment, a study eye was selected for all patients prior to randomization. Follow-up data used for this report were collected through December 2004, when patients had been followed up for a median of 7.7 years. At 6-month intervals, patients returned to their CIGTS clinical center for a comprehensive ophthalmologic examination that included automated perimetry with the Humphrey 24-2 full-threshold VF test protocol. Patients were also contacted at 6-month intervals by telephone and asked a comprehensive battery of quality of life measures, including the Visual Activities Questionnaire (VAQ),12 a vision-specific measure assessing patients' perception of their visual function. We computed a VAQ total score based on the mean response to 33 questions and scores for the 8 VAQ subscales.
During the course of follow-up, CE was allowed under the study protocol if the patient's VA had worsened by 20 letters (4 lines) or more on the Early Treatment of Diabetic Retinopathy Study acuity chart. If the patient's visual function deficit led the patient and his/her ophthalmologist to conclude that CE was indicated despite the protocol's VA criterion not being met, CE was allowed on approval by the CIGTS chairman.
Statistical comparisons of the characteristics of those who required CE with those who did not were made using independent 2-sample procedures for continuous (t test) and categorical (χ2 test) outcomes. The cumulative probability of CE over time since study entry was estimated by the Kaplan-Meier method; evaluative analyses of factors associated with time to CE were conducted using Cox regression analysis, with the backward selection procedure. Comparisons of clinical and VS-QOL variables measured before and after CE were made using paired t tests. Modeling of the data collected on VF and VS-QOL indexes over the time leading up to and following CE used the loess regression technique.13 Clinical data collected on each patient's study eye were used in the analyses. Data were analyzed using SAS version 9.1 statistical software (SAS Institute Inc, Cary, NC).
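The cumulative probability of CE plotted in Figure 1 is one minus the Kaplan-Meier survival estimate. As a minimal illustrative sketch (plain Python with hypothetical data, rather than the SAS procedures actually used in the study), the product-limit estimator reduces to multiplying, at each event time, the fraction of at-risk eyes that did not undergo CE:

```python
def cumulative_incidence(times, events):
    """Kaplan-Meier-based cumulative incidence of an event.

    times:  follow-up time for each patient (years since randomization)
    events: 1 if CE occurred at that time, 0 if the patient was censored
    Returns a list of (time, cumulative probability of CE) pairs.
    """
    surv = 1.0  # product-limit survival estimate S(t)
    curve = []
    # step through the distinct event (CE) times in order
    for t in sorted({ti for ti, ei in zip(times, events) if ei == 1}):
        n = sum(1 for ti in times if ti >= t)                        # still at risk
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei)  # CEs at t
        surv *= 1.0 - d / n
        curve.append((t, 1.0 - surv))  # cumulative probability of CE
    return curve


# Hypothetical mini-cohort: CE at years 1 and 2, one censoring at year 2,
# and a final CE at year 3.
print(cumulative_incidence([1, 2, 2, 3], [1, 0, 1, 1]))
```

Censored patients (such as the 7 initial-medicine patients whose follow-up was censored at crossover to trabeculectomy) still contribute to the at-risk denominators before their censoring time, which is how the estimator handles unequal follow-up.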
During follow-up after initiation of glaucoma treatment with medications or trabeculectomy, 162 CEs were performed in 121 patients; of these patients, 58 had CE performed in the study eye only, 41 had bilateral CE, and 22 had CE performed in the fellow eye only. Cataract extraction in the 99 study eyes occurred a mean (SD) of 3.7 (2.0) years after randomization, with a range from 0.3 to 8.4 years. Study participants who had CE in their study eye differed significantly from those who retained their natural lenses over the follow-up period (n = 486) on several baseline characteristics (Table 1). They were on average 9.5 years older (65.0 vs 55.5 years), more frequently had diabetes mellitus (25% vs 15%) and hypertension (46% vs 35%), and included a higher proportion of patients who had pseudoexfoliative open-angle glaucoma (12% vs 3%). Those who underwent CE also more frequently had undergone initial surgical treatment: 61% (n = 60) of the 99 patients who underwent CE in the study eye had undergone initial trabeculectomy vs 47% (n = 230) of the 486 patients who did not undergo CE.
A Kaplan-Meier estimate of the cumulative probability of CE from the time of randomization for the 2 initial treatment groups is shown in Figure 1. If an initial medicine group patient crossed over to trabeculectomy prior to CE, as occurred in 7 patients, their follow-up was censored at that time. It is evident that CE occurred earlier and more frequently in patients randomized to initial trabeculectomy than initial medical treatment (P = .001, log-rank test). The probability of CE 1 year after randomization was 8 times greater in the initial surgery group (2.4% vs 0.3%). By 5 years after randomization, the cumulative probability of CE in those who had initial surgery (19.0%) was almost 3-fold higher than in the initial medicine group (6.5%). The rate of CE slowed considerably in the surgery group after the first 5 years.
Cox regression modeling of time to CE showed that initial surgical treatment, older age, higher myopia, glaucoma diagnosis (pseudoexfoliative open-angle glaucoma), and diabetes were predictive of CE. Hazard ratios and associated 95% confidence intervals for these risk factors are shown in Table 2. Initial trabeculectomy treatment conferred a 3.8-fold increased risk (95% confidence interval, 2.2-6.5) of CE relative to initial medications during the first 5 years after randomization. The increased risk from surgery was only evident during the first 5 years after treatment initiation, after which the 2 groups' risks did not differ significantly. Older age at baseline also placed patients at an increased risk that was nonlinear in shape. The hazard ratios for older ages (all relative to a 50-year-old) increased rapidly from a 2.7-fold increased risk for a 55-year-old to an 8.8-fold increased risk for a 65-year-old, but the increased risk leveled off with ages older than 65 years. Patients with a self-reported history of diabetes had a risk of CE that was 56% higher than those without diabetes, but this difference only approached significance (P = .08). A diagnosis of pseudoexfoliative glaucoma was associated with a 2-fold greater risk of CE relative to primary open-angle glaucoma (POAG), whereas the risk of CE for patients with pigmentary glaucoma did not significantly differ from that of patients with POAG. Spherical equivalent (SphEq) was associated with the risk of CE in a manner best approximated by a piecewise-linear (V-shaped) function, with the lowest risk of cataract at a SphEq near zero. The CE risk increased for patients with increasing myopia (hazard ratio = 1.86 per 3 diopters below zero; P<.001), and the CE risk increased less strongly (and nonsignificantly) for patients with hyperopia (hazard ratio = 1.21; P = .61).
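The V-shaped SphEq term can be made concrete with a short sketch. Under the assumption (not stated explicitly in the article) that the hyperopic hazard ratio of 1.21 is likewise expressed per 3 diopters, the relative hazard for a given refraction, compared with emmetropia, would be:

```python
def ce_hazard_ratio_spheq(spheq_diopters):
    """Illustrative piecewise-linear (V-shaped on the log-hazard scale)
    relative hazard of CE as a function of spherical equivalent,
    using the reported per-3-diopter hazard ratios.
    Lowest risk at SphEq near zero, as reported."""
    if spheq_diopters < 0:
        # myopia: HR 1.86 per 3 D below zero (significant, P < .001)
        return 1.86 ** (-spheq_diopters / 3.0)
    # hyperopia: HR 1.21, assumed per 3 D (nonsignificant, P = .61)
    return 1.21 ** (spheq_diopters / 3.0)


# A -6 D myope would carry roughly a 1.86^2 ~ 3.5-fold relative hazard.
print(ce_hazard_ratio_spheq(-6.0))
```

Because the myopic and hyperopic slopes differ so sharply, fitting a single linear term in SphEq would miss this asymmetry, which is the article's point about modeling myopia and hyperopia separately in future studies.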
Other factors that were evaluated and not found to be significantly related to risk for CE included sex, race, education, clinical center, surgeon type (clinical center vs community), hypertension, smoking status, and baseline measures of intraocular pressure (IOP), optic disc hemorrhage, and central corneal thickness.
Clinical and VS-QOL information gathered in the 0- to 2.5-year period leading up to study eye CE and the 0.1- to 1.5-year period after CE for the 99 patients who had CE are shown in Figure 2. Loess regression lines show the average change in these variables in the period prior to and after CE. As expected, VA scores decreased (ie, worsening VA) as the time of CE approached, and on CE, the VA scores increased abruptly and remained constant for the 1.5-year period thereafter. A less striking but evident decrease (ie, worsening) in the mean deviation from VF testing occurred in the pre-CE period. After CE, a stepped increase in the mean deviation (ie, improved VF) was evident. The VAQ total score and most VAQ subscale scores (results not shown in Figure 2) increased as the time of CE approached, consistent with decreasing ability to perform everyday visual tasks, and then decreased after CE, reflecting the beneficial impact of CE on self-reported visual functioning.
Contrasts of average scores obtained in the 0- to 6-month period before and after CE of the study eye (Table 3) show significant improvement in VA and also in 2 measures of VF depression, the CIGTS VF score and the mean deviation. The extent of improvement in these 2 global VF indexes varied in accord with VF severity at baseline. Lesser change occurred as the VF severity at randomization increased (P = .04, analysis of variance). Those with mild VF loss at baseline (mean deviation>−5 dB) showed a mean improvement of 3.66 dB (SD, 3.47 dB); those with moderate VF loss (−5 to −9.9 dB) showed a mean improvement of 1.85 dB (SD, 2.87 dB), and those with substantial VF loss (≤−10 dB) showed a mean improvement of 0.89 dB (SD, 4.92 dB). Significant increases (worsening) were found in the pattern standard deviation (PSD) and corrected PSD (CPSD). No significant change was observed in the short-term fluctuation from Humphrey 24-2 VF testing.
Significant improvements after CE were reported in the VAQ total score and in 6 of the 8 visual function subscales (Table 3). Initial treatment (surgery or medicine) did not impact the extent of improvement. Changes over time in the light/dark adaptation and color discrimination subscales did not reach statistical significance. Since the fellow eye's status might have influenced these person-specific measures of visual functioning, we evaluated subsets in which we deleted those with moderate visual loss (20/50 or worse best-corrected VA) in the fellow eye (n = 2) and those who had fellow eye CE in the 0- to 6-month period before (n = 3) or after (n = 2) the study eye's CE. The analyses of VAQ total and subscale scores before and after CE showed the same significant findings within these subsets.
Intraocular pressure was measured prior to CE and 1 week and 1 month after CE, with before and after data on 96 patients. The mean change at 1 week, +0.5 mm Hg (SD, 6.4 mm Hg) was not significant (P = .42), whereas the mean change at 1 month, +1.0 mm Hg (SD, 5.2 mm Hg), approached statistical significance (P = .06). The distribution of IOP change from pre-CE to 1 month post-CE shows 3 patients with a 10-mm Hg or higher increase in IOP and 1 patient with a 10-mm Hg or higher decrease in IOP.
Table 4 presents changes from baseline to 1 of 2 times: 12 to 18 months after CE or month 60 for those who did not undergo CE. Month 60 was chosen for those who did not undergo CE because the average timing of CE was 42 months postenrollment. Both groups experienced a 3-letter loss of VA, on average. Visual field changes from baseline to 12 to 18 months post-CE showed worsening in those who had CE but not in the non-CE group. The mean deviation worsened 1.4 dB from baseline (P = .03), with a similar change in the CIGTS VF score. Two VAQ subscale scores showed slight worsening from baseline to post-CE: the depth perception subscale (P = .047) and the color discrimination subscale (P = .10). No such worsening was observed in those who did not undergo CE. In fact, that group showed small but significant improvements over 5 years in glare disability, acuity/spatial vision, and visual processing speed (P<.01). Contrasts of the mean change from baseline in these 2 groups yielded 1 significant difference—the 0.2-unit worsening of depth perception in the CE group vs stability (0.0-unit average change) of that VAQ subscale in the non-CE group over a 5-year period.
Longitudinal follow-up of patients with newly diagnosed open-angle glaucoma in the CIGTS permitted an assessment of the impact initial treatment had on the incidence of CE. It also allowed us to evaluate how both clinical and vision-specific measures changed as the time of CE approached, the immediate impact of CE, and how post-CE measures compared with what they were at baseline. We expected an increase in VA resulting from CE, and we found an average 20-letter (4-line) increase from before to after CE. We also found a quite striking 2.6-dB improvement in the mean deviation from the Humphrey 24-2 VF test, which went from an average of −9.15 dB prior to CE to −6.53 dB after CE (P<.001). The generalized VF depression caused by cataractous change is removed by extraction of the cataract, thereby improving global measures of VF depression. However, significant worsening of the PSD and CPSD VF indexes occurred after CE, which Hayashi et al6 reported for patients with glaucoma who had dense scotomata but not for those who lacked dense scotomata. These findings are likely due to an unmasking of the true PSD and CPSD values on eliminating the generalized depression from cataract. Given that the primary outcome in the CIGTS was VF progression, our findings were deemed important enough to warrant an adjustment for cataract in analyses that addressed VF change. Previous studies of patients with glaucoma who underwent CE vary in the reported amount of mean deviation change, ranging from 2 small studies1,2 (involving 24 and 26 patients) that reported no significant change, to larger studies3,4,6,10 (involving from 41 to 140 patients) that found significant improvement in the mean deviation.
Our finding that CE was more frequent in those patients randomized to initial surgery is consistent with what we reported in the interim analysis,14 but increased follow-up diminished the extent to which the incidence differed from those treated medically. Cox regression modeling of the time from randomization to CE indicated that the increased risk induced by surgery over the first 5 years differed from that observed over subsequent follow-up. This indicates that initial surgery likely places a patient at increased risk of cataract for a moderate amount of time, perhaps about 4 to 5 years, after which other factors, such as aging, likely play more important roles. If glaucoma medications do increase the risk of cataract, which was an association of borderline significance in a recent report from the Australian Blue Mountains Eye Study,15 any impact in our study was overwhelmed by the influence of trabeculectomy on CE risk.
The association of age with CE risk was expected, but the association of myopia, type of glaucoma, and diabetes with the risk of CE warrants discussion. More negative SphEq refractive values recorded at baseline, indicative of higher myopia (since patients at enrollment were excluded if they had substantial cataractous lens changes), placed patients at a higher risk of CE. This finding is consistent with results from large epidemiologic studies.16-20 The impact of myopia on the risk of cataract development and CE in these studies varied by subtype of cataract, with the most consistent associations reported for posterior subcapsular and nuclear opacification. Our lack of specificity in recording the type of cataract prevents a more detailed look at the myopia association reported herein. The lack of linearity in the SphEq association with CE risk, in particular the observation that risk increases slightly as hyperopia increases, implies that modeling this relationship in future studies should evaluate if patients with myopia differ from patients with hyperopia.
The CIGTS eligibility criteria permitted patients with 3 forms of open-angle glaucoma to be enrolled—POAG, pseudoexfoliative, and pigmentary. Primary open-angle glaucoma predominated (n = 550 [90.6%]), with the latter 2 types, pseudoexfoliative (n = 29 [4.8%]) and pigmentary (n = 28 [4.6%]), about equally represented. Of the 29 patients with pseudoexfoliative glaucoma, 12 (41.4%) underwent CE during follow-up, whereas 85 (15.5%) of the patients with POAG and 2 (7.1%) of the patients with pigmentary glaucoma did so. One recent case-control study21 identified pseudoexfoliation syndrome as a risk factor for cortical cataract. The fact that patients with pseudoexfoliative glaucoma were at a higher risk of CE is consonant with the literature on pseudoexfoliative syndrome. Deposition and accumulation of extracellular material on the lens and other ocular tissues has been associated with pseudoexfoliation syndrome,22 and recent studies23,24 have identified aqueous humor factors that may lead to cataract in patients with pseudoexfoliation syndrome, such as increased oxidative stress.
Patients with diabetes were at an increased risk of CE, but in relation to the other risk factors, the association was not strong. Diabetes in CIGTS patients was assessed by self-report at baseline, which may have led to some extent of underdetection, thereby weakening the association. In addition, patients with diabetes were excluded from participating in the CIGTS if they had proliferative retinopathy, macular edema, or nonproliferative retinopathy with more than 10 microaneurysms. If the risk of CE is influenced by severity of diabetes, this exclusion criterion may have limited our ability to detect an association.
Unique to CIGTS is the availability of longitudinal VS-QOL information on patients with glaucoma as they developed cataract, underwent CE, and continued to be followed up for their glaucoma condition. The loess regression plots (Figure 2) reveal worsening of a number of clinical and VS-QOL measures as the time of CE approaches. Similar trends were seen for most VS-QOL subscales. These VS-QOL measures are person specific, rather than eye specific, and assess aspects of visual functioning other than that measured by distance VA testing. The amount of scatter observed in the loess regression plot of the VAQ total scores reflects the multiple factors that contribute to a person's appraisal of their binocular visual function, such as past visual function, expectations, coping, and adaptation. In terms of magnitude, the largest pre-CE to post-CE change (Table 3) occurred in the VAQ's acuity/spatial vision subscale, but other subscales showed highly significant improvements, such as depth perception, visual processing speed, and peripheral vision. Interestingly, when the clinical and VS-QOL measures assessed well after CE (12 to 18 months) were contrasted to the early (0 to 6 months) post-CE measures, the amount of change was negligible. Likewise, when the baseline, pretreatment measures were contrasted to those taken well after CE, the only clinical measure that had worsened significantly was the VF test's mean deviation (1.4-dB worsening), and the only VS-QOL measure that had worsened was the depth perception subscale score (0.2-unit worsening). These changes may reflect progression of glaucoma, but the fact that these changes were unique to the CE group and not seen in the larger non-CE group may indicate an impact of CE or factors that coexist with the need for CE on worsening VF and depth perception. The magnitude of these changes was not large, however, and thus may not be clinically meaningful.
Cataract extraction was performed more frequently in patients with glaucoma whose initial treatment was trabeculectomy than in those treated initially with topical medications. Risk factors other than initial surgery included older age, higher myopic refractive error, diabetes, and a diagnosis of pseudoexfoliative glaucoma. Cataract extraction had a significant impact on both clinical and VS-QOL indicators in CIGTS patients. Marked effects were shown for Humphrey 24-2 VF indexes, including the mean deviation, which improved, and the PSD and CPSD, which worsened. Many measures of VS-QOL that had worsened prior to CE showed significant improvement after CE, and the surgical impact brought patients' VS-QOL scores, on average, to levels approaching those found at initial diagnosis.
Correspondence: David C. Musch, PhD, MPH, Department of Ophthalmology and Visual Sciences, University of Michigan Medical School, Kellogg Eye Center, 1000 Wall St, Ann Arbor, MI 48105 (firstname.lastname@example.org).
Submitted for Publication: March 27, 2006; final revision received July 6, 2006; accepted July 8, 2006.
Financial Disclosure: None reported.
Funding/Support: The CIGTS was supported by grant EY09148 from the National Institutes of Health National Eye Institute from 1993 to 2004; continued follow-up of CIGTS patients was supported by an unrestricted grant from Allergan, Inc (Irvine, Calif) from 2004 through 2005. The analyses conducted for this study were supported by grants EY015860 and EY015700 from the National Institutes of Health National Eye Institute.
Previous Presentations: This study and its findings were presented in part at the annual meeting of the Association for Research in Vision and Ophthalmology; May 5, 2002; Fort Lauderdale, Fla.