Wiggins RE, Etz R. Assessment of the American Board of Ophthalmology’s Maintenance of Certification Part 4 (Improvement in Medical Practice). JAMA Ophthalmol. 2016;134(9):967–974. doi:10.1001/jamaophthalmol.2016.1848
Copyright 2016 American Medical Association. All Rights Reserved. Applicable FARS/DFARS Restrictions Apply to Government Use.
Importance
There is disagreement as to whether Maintenance of Certification is creating value for physicians and their patients. To our knowledge, this report provides the first measures of the effectiveness of Part 4 of this activity in assisting ophthalmologists with quality improvement in their practices.
Objective
To evaluate the effectiveness of the American Board of Ophthalmology’s quality improvement program—Maintenance of Certification Part 4 (Improvement in Medical Practice)—in assisting its diplomates with quality improvement in their practices.
Design, Setting, and Participants
A retrospective analysis was conducted of the performance of 1046 American Board of Ophthalmology diplomates on Practice Improvement Modules between September 1, 2012, and December 31, 2014. The mean scores for each process or outcome measure on medical record abstraction were calculated before and after the practice improvement activity. Paired t tests were used to compare performance before and after the activity. Diplomates’ comments and ratings of the usefulness of the activity in assisting them with quality improvement were also analyzed.
Main Outcome Measures
Diplomate performance on process and outcome measures before and after a performance improvement activity and diplomate satisfaction that the activity met the stated goals of assistance with quality improvement in their practices.
Results
The 1046 American Board of Ophthalmology diplomates completed 1408 Practice Improvement Modules. When measures with participation by at least 20 diplomates were analyzed, there was improvement in 24 of 30 individual process measures (80.0%; 95% CI, 61.4%-92.3%) and in 7 of 18 individual outcome measures (38.9%; 95% CI, 17.3%-64.3%) chosen for improvement by diplomates. Analysis of the mean results for each diplomate on process measures chosen for improvement showed gains occurring in 9 of 12 modules and, for outcomes chosen for improvement, in 6 of 12 modules with at least 20 participants. A total of 826 of 1115 modules (74.1%) assessed by diplomates were rated from good to excellent; positive comments outnumbered negative ones by a ratio of 5:1.
Conclusions and Relevance
Quantitative and qualitative analyses, while limited by self-report that has not been validated, suggest that the American Board of Ophthalmology’s Maintenance of Certification Part 4 can help diplomates improve quality on process, and to a lesser extent, outcome measures. Findings of this study may provide a basis to improve this activity.
The US health care system is facing significant challenges, foremost among which is how to improve the value of health care delivered.1 Reports have highlighted quality problems in the delivery of health care and outlined fundamental changes required to address poor outcomes2,3; as a result, many initiatives have been undertaken to define, measure, and improve quality. One program designed to assist physicians in these efforts is Part 4 of the American Board of Medical Specialties’ Maintenance of Certification (MOC) program. It is intended to help physicians improve performance through “ongoing assessment and improvement activities to improve patient outcomes” and use of “evidence and best practices compared with peers and national benchmarks.”4 The American Board of Ophthalmology (ABO), a member board of the American Board of Medical Specialties, launched its Part 4 program of Practice Improvement Modules (PIMs) in 2012. This online activity consists of abstracting data on process and outcome measures from patient medical records, analyzing the data, developing and implementing a plan to improve the results, and abstracting new records to assess for improvement. Although physicians agree on the importance of quality improvement, there is disagreement as to whether Part 4 of MOC is creating value for physicians or their patients. In February 2015, the American Board of Internal Medicine responded to diplomates’ comments that the requirements for MOC Part 4 “are not meeting the needs of physicians” and suspended portions of that requirement pending further review.5
The ABO is currently undergoing a performance review of its MOC program. This report evaluates the results of the program through December 31, 2014, on a quantitative (success in improving processes and outcomes) and qualitative (diplomate reports of the value of the activity) basis and identifies opportunities to improve the effectiveness of MOC Part 4 activities based on these results.
Question Is the American Board of Ophthalmology’s Maintenance of Certification Part 4 effective in assisting diplomates with quality improvement in their practices?
Findings In this analysis of diplomates’ performance before and after 1408 performance improvement activities, performance improved on 80% of individual process measures and 38.9% of individual outcome measures. Positive comments outnumbered negative ones by a ratio of 5:1.
Meaning This study suggests that the American Board of Ophthalmology’s Maintenance of Certification Part 4 can help diplomates improve quality on both process and outcome measures and may provide a basis for opportunities to improve this activity.
All diplomates certified by the ABO since 1992 are issued time-limited certification and must participate in MOC to maintain board certification. Part 4 of MOC consists of online PIMs that involve abstracting data from 30 consecutive patient medical records among 1 to 3 clinical modules (eg, 30 records from 1 module or 10 records each from 3 modules). Data are collected on both process and outcome measures related to each module. Next, the diplomate selects specific process or outcome measures on which to focus improvement efforts, conducts an improvement activity, and reports on all measures (including those not chosen for improvement) for another 10 records per module following that activity. By February 2014, the ABO offered 28 PIMs to their diplomates. These PIMs were developed in collaboration with the American Academy of Ophthalmology Practicing Ophthalmologist Curriculum Panels and linked to American Academy of Ophthalmology Preferred Practice Patterns.6 Several changes to the program were introduced in 2014. First, a pilot project to increase the flexibility of the program was initiated that allowed diplomates to participate in self-directed quality improvement projects. The ABO also granted diplomates credit for substantive participation in an institutional Part 4 activity through the American Board of Medical Specialties multispecialty MOC portfolio program.7 In addition, a process change to the PIMs required a mandatory period of reflection before new medical records could be abstracted after the initial abstraction. A complete explanation of the PIM process can be found on the ABO website.8
The ABO launched 24 of the 28 PIMs in September 2012; these PIMs comprise more than 250 process and outcome measures. The following is an example of a process measure from the Primary Open-Angle Glaucoma (POAG) module: the measure was the percentage of patients who underwent a gonioscopy to differentiate other forms of glaucoma; the numerator was the number of patients with POAG who underwent a gonioscopy, while the denominator was the total number of patients diagnosed with POAG. There were no exclusions.
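The arithmetic behind such a measure is a simple proportion. The sketch below is purely illustrative: the records and field names are hypothetical, not the ABO’s actual abstraction schema.

```python
# Hypothetical medical-record abstractions; field names are illustrative only.
records = [
    {"diagnosis": "POAG", "gonioscopy_performed": True},
    {"diagnosis": "POAG", "gonioscopy_performed": False},
    {"diagnosis": "POAG", "gonioscopy_performed": True},
    {"diagnosis": "angle-closure glaucoma", "gonioscopy_performed": True},
]

def process_measure_score(records):
    """Percentage of POAG patients (denominator) with gonioscopy documented (numerator)."""
    denominator = [r for r in records if r["diagnosis"] == "POAG"]
    numerator = [r for r in denominator if r["gonioscopy_performed"]]
    return round(100.0 * len(numerator) / len(denominator), 1)

print(process_measure_score(records))  # 2 of 3 POAG records -> 66.7
```

Note that only POAG records enter the denominator; records with other glaucoma diagnoses are excluded from the measure entirely.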
The deidentified data of the diplomates’ performance on PIMs between September 1, 2012, and December 31, 2014, as well as their survey responses, were provided by the ABO. The mean scores for each process or outcome measure were calculated for each module and its associated process and outcome measures before and after the practice improvement activity. Statistical analyses were conducted on PIMs for which the number of diplomates was at least 20. Paired t tests were used to compare the baseline mean results for each process and outcome measure within each module on the initial medical record abstraction with results on the subsequent abstraction after the quality improvement activity. These results were calculated for all measures within each module and separately for the measures selected by the diplomate for improvement. In addition, a set of means was calculated for each diplomate for the process and/or outcome measures selected for improvement within a given module; a positive value indicated improvement, while a negative value indicated deterioration.
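The paired comparison described above can be sketched as follows. This is a minimal, stdlib-only illustration with made-up scores, not the authors’ actual analysis code; the p value would then be read from a t distribution with n − 1 degrees of freedom.

```python
import math
from statistics import mean, stdev

def paired_t(before, after):
    """Paired t statistic for per-diplomate measure scores before/after the activity.

    A positive t indicates improvement (after > before on average).
    Returns (t statistic, degrees of freedom).
    """
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    se = stdev(diffs) / math.sqrt(n)  # standard error of the mean difference
    return mean(diffs) / se, n - 1

# Hypothetical measure scores (%) for 5 diplomates on one module:
before = [70, 80, 60, 90, 75]
after = [85, 82, 78, 95, 80]
t, df = paired_t(before, after)
print(round(t, 2), df)  # 2.86 4
```

In practice a library routine such as SciPy’s `ttest_rel` would also return the two-sided p value directly.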
Diplomates were asked a series of evaluation questions following participation in the PIMs, including 3 opportunities to provide open-ended responses. Diplomates were asked to reflect on the importance of changes made in their practice, to identify the most useful lesson learned from the process, and to offer any additional comments. Diplomates were also asked to rate the clinical usefulness of the activity.
Analysis of open-ended comments was performed by one of us (R.E.). First, all comments were grouped into 3 categories: positive feedback, negative feedback, and neutral. Comments were considered positive if they included statements of enjoyment, thankfulness, or helpfulness. Comments were considered negative if they included statements of resentment, wastefulness, or uselessness. Next, a grounded theory approach9 was used to identify keywords. After all keywords were identified (n = 117), the keywords were grouped into 13 meaningful clusters. For the qualitative analysis, only PIMs with more than 10 open-ended comments were included in our findings (comprising 14 modules with between 11 and 441 diplomate comments).
A total of 1408 PIMs were completed by 1046 diplomates. “Cataract (Surgical Management)” was the most frequently performed module, accounting for 499 (35.4%) of the total modules. Twelve self-directed quality improvement projects were completed. One diplomate participated in an institutional quality improvement project. Self-directed and institutional quality improvement projects were not amenable to quantitative analysis and did not include follow-up surveys and, therefore, are not included in the results.
Only PIMs with at least 20 participating diplomates choosing either process or outcome measures for improvement were analyzed for mean improvement. A detailed example of one such analysis (the POAG module) is shown in Table 1. Among 30 process measures across all modules chosen for improvement by at least 20 diplomates, improvement (range, 9.1-53.9 points; median, 24.7 points) was found in 24 measures (80.0%; 95% CI, 61.4%-92.3%) (21 measures, P < .001; 2 measures, P = .001; and 1 measure, P = .004) and was at least 10 points in 23 measures (76.7%; 95% CI, 57.7%-90.1%) (20 measures, P < .001; 2 measures, P = .001; and 1 measure, P = .004). Among 18 outcome measures chosen for improvement by at least 20 diplomates, improvement (range, 3.0-29.7 points; median, 9.5 points) was found in 7 measures (38.9%; 95% CI, 17.3%-64.3%) (5 measures, P < .001; 1 measure, P = .002; and 1 measure, P = .004), and was at least 10 points in 3 measures (16.7%; 95% CI, 3.6%-41.4%) (1 measure, P < .001; 1 measure, P = .002; and 1 measure, P = .004).
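The 95% CIs reported for these proportions appear consistent with exact (Clopper-Pearson) binomial intervals, a common choice for small-sample proportions. As an illustrative, stdlib-only sketch (not the authors’ code), the following computes such an interval for the 24 of 30 process measures that improved, by bisection on the binomial tail probabilities.

```python
from math import comb

def binom_tail(n, x, p, upper):
    """P(X >= x) if upper else P(X <= x) for X ~ Binomial(n, p)."""
    ks = range(x, n + 1) if upper else range(0, x + 1)
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in ks)

def clopper_pearson(x, n, alpha=0.05):
    """Exact binomial CI: each bound solves a tail-probability equation in p."""
    def bisect(f, target):
        # f is monotone increasing in p; find p with f(p) = target.
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2
            if f(mid) < target:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # Lower bound: P(X >= x | p) increases with p.
    lower = 0.0 if x == 0 else bisect(lambda p: binom_tail(n, x, p, True), alpha / 2)
    # Upper bound: P(X <= x | p) decreases with p, so negate to make it increasing.
    upper = 1.0 if x == n else bisect(lambda p: -binom_tail(n, x, p, False), -alpha / 2)
    return lower, upper

lo, hi = clopper_pearson(24, 30)
print(round(100 * lo, 1), round(100 * hi, 1))  # expected to match the reported 61.4, 92.3
```

A Wald (normal-approximation) interval for 24/30 would instead give roughly 66%-94%, which does not match the reported bounds; this is one reason exact intervals are preferred for proportions of this size.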
The mean process and outcome scores for all diplomates completing each module and the mean process and outcome scores for the measures chosen for improvement by each diplomate before and after the improvement activity are reported in Table 2 and Table 3. An analysis of the mean results for each diplomate on process measures chosen for improvement showed gains occurring in 9 of 12 modules (8 modules, P < .001; and 1 module, P = .001). Improvement ranged from 12.8 points for exudative age-related macular degeneration to 37.6 points for POAG (median, 21.6 points). An analysis of the mean results for each diplomate on outcome measures chosen for improvement showed gains in 6 of 12 modules (3 modules, P < .001; 2 modules, P = .002; and 1 module, P = .003). Improvement ranged from 4.7 points for cataract to 23.1 points for POAG (median, 12.0 points).
The 1046 diplomates entered 2413 open-ended comments that were assigned to 1 or more clusters (N = 2910); 551 of these comments (19%) were assigned to a cluster that cited improvement in patient outcomes as a meaningful result of this activity. Furthermore, the 353 diplomate responses coded with the “Agree” cluster (12%) reflected the diplomates’ impression that their baseline performance was already high but that they nonetheless supported the activity and the importance of the standards it promoted.
The largest cluster shared through open-ended comments related to documentation (609 of the coded comments [21%]). Many of these comments referred to the need for and benefits of improved processes of documentation, including the use and tailoring of electronic health records for quality improvement. An additional 334 comments (11%) suggested that diplomates appreciated the process of self-evaluation, with some noting they had previously wanted to do this kind of activity but did not know a good approach. The general frequencies of clusters as found within the data are presented in Table 4.
Of the 2413 comments analyzed, 553 were coded as generally positive or negative. Among these comments, 463 (83.7%) were positive, outnumbering negative comments by a ratio of 5:1. The frequency of positive, open-text comments supports ratings offered by diplomates regarding overall clinical usefulness: 826 of 1115 modules assessed by diplomates (74.1%; 95% CI, 71.4%-76.6%) were rated from good to excellent (Table 5). Those who criticized the activity most commonly described it as having low educational value not commensurate with its time and cost, and as an additional burden given other quality initiatives. Data entry was often described by these individuals as tedious and time consuming. Others thought some measures were flawed and did not capture quality well, or that the measures were inflexible and did not allow for creativity in improving quality relative to the individual practitioner’s situation. The limited size of data sets resulting from abstraction of medical records was thought by some diplomates to limit their ability to draw meaningful conclusions from the results. In those cases, diplomates reported that benchmarks and other feedback would have been more useful.
The goals of the ABO for MOC Part 4 are to assist ophthalmologists with evaluation of practice standards, identify areas for self-improvement through self-assessment, and promote reduction in gaps of quality of care through improved patient outcomes.8 This study addresses the ABO’s success at meeting these goals through a quantitative and qualitative analysis.
Baseline mean scores in which the diplomate met the criteria for individual measures were commonly high for process measures for the entire cohort. Among measures chosen for improvement, there were greater gains on process than outcome measures. These findings are similar to those reported by the Centers for Medicare & Medicaid Services in 2015 for quality measures used across a broad spectrum of US health care facilities.10 Process measures, which test how well health care professionals meet clinical guidelines, tend to perform at a higher baseline and reach high levels more quickly than do outcome measures, which assess the patient’s health status. Although health outcomes are the ultimate goal, they require more time to achieve and are affected by factors beyond the delivery of health care, such as poverty, comorbidities, and patient adherence to treatment.11 One diplomate captured this concern about outcomes measurement: “Glaucoma is a progressive disease that often coexists with significant ocular and systemic morbidities. Despite a clinician’s best efforts, vision can decline due to glaucoma, diabetes, macular degeneration, mental decline, or cataract progression.” Although process measures are sensitive and direct measures of the quality of health care delivered, they “may have a limited lifespan, since performance benchmarks are more rapidly achieved.”10(p14) Despite high baseline results for many process measures for the group as a whole, the improvement observed in both process and outcome measures suggests that, in aggregate, diplomates were able to evaluate practice standards, identify areas for self-improvement, and reduce gaps in quality of care. However, high baseline mean scores for many process measures highlight the difficulty in identifying gaps in care.
Qualitative findings suggest that most diplomates found MOC Part 4 to be clinically useful. Their comments identified both a direct benefit from the activity (“surprised by the paucity of gonioscopy findings documented”) and an indirect benefit (“most useful thing was taking the time to think about how I could improve my results”). However, about one-fourth of diplomates did not find the activity helpful. Diplomate feedback suggests that at least some of the latter participants may have already established a high standard in their practices for the chosen modules, a finding confirmed by some of the high baseline scores noted in the quantitative analysis. The comments from this group and the results of quantitative analysis provide a basis for opportunities to improve this activity.
An analysis of diplomates’ comments suggests that use of a registry may have the single largest effect in addressing the greatest weaknesses of the activity while capitalizing on their recognition of the value of standardized documentation and technology in quality improvement. The largest number of negative comments related to tedious data entry and a desire for larger, more meaningful data sets for benchmarking on a variety of outcomes, including identification of uncommon but clinically meaningful complications, such as posterior capsule rupture or endophthalmitis rates. Each of these issues could be addressed by a registry, which can automatically extract large amounts of data and address the small numbers of patients and time frames of the current activity, allow for internal and external benchmarking, and provide the flexibility for the diplomate to create his or her own performance improvement study. Furthermore, the qualitative analysis showed that the cluster that received the largest number of comments (ie, documentation) related in large part to using electronic health records to efficiently document and abstract data for quality purposes. Finally, registries can also assure integrity of the data, tabulate use of resources (eg, costs for bundles of care), assess the patient experience of care, incorporate the effect of comorbidities in outcomes measurements, and allow for coordination with other organizational quality improvement programs and thereby reduce measurement fatigue.12,13
Another opportunity for improvement of this activity involves a thorough review of the quantitative analysis. Certain measures within the PIMs should be reviewed, including the measures with scores near 100%; with more than 2 years of data in hand, attention should be focused on measures in which a gap in quality care is identified. For example, gonioscopy was frequently cited by diplomates as a gap in practice. In addition, measures in which very low scores are identified should also be reviewed. Are these low scores related to a gap in care or, rather, is there a problem with the measure? For example, diplomates noted that one POAG outcomes measure did not take into account an important factor, the reliability of the visual field. Similarly, attention should be focused on review of modules with low satisfaction scores.
Other areas for improvement suggested by this report include provision of an educational module on quality improvement to the diplomates, many of whom have had no formal training in the science of quality improvement, and continuation of the self-directed quality improvement projects, which would allow for projects that study low-frequency but high-risk conditions (such as the diagnosis and management of giant cell arteritis) and those outside the scope of the formal PIMs.14,15 Finally, another opportunity for group work in quality improvement recognizes that teams and systems are at the core of many substantive quality improvement efforts. Many malpractice cases result from a breakdown of processes in the practice or operating facility. Systems-based practice, an American Board of Medical Specialties core competency, offers many opportunities for improvement, including systems involving care delivered via the telephone; ordering, reviewing, and communicating test results and requests for consultations; appointment scheduling and missed appointments; adherence to treatment recommendations; requests for refills; and follow-up for high-risk conditions, such as retinopathy of prematurity.16-18
This report studies one approach to quality improvement using a limited, self-reported data set per ophthalmologist that has not been validated. Furthermore, although this activity promotes the incorporation of quality improvement theory into practice, there is no assurance that it results in sustained improvements in quality among the specific measures chosen for improvement or whether the diplomate incorporates these ideas in a systematic way in other areas of clinical practice. We discussed earlier in this article how this activity could be modified to address these limitations. In addition, not every diplomate contributed open-ended comments. Additional information could have been captured through diplomate interviews. Finally, diplomates may have been concerned that negative comments might have adversely affected their MOC status.
There is a mandate to improve the quality of health care delivered in the United States.19-22 To our knowledge, this report provides the first measures of the effectiveness of MOC Part 4 in assisting ophthalmologists with this objective. The high baseline scores, particularly for processes, reveal the difficulty in development of meaningful measures that show gaps in care over time. Nonetheless, diplomates were able to identify opportunities for improvement within most modules. The most common theme among diplomate comments related to the key role of standardized documentation (including the use of electronic health records) in quality improvement. Most diplomates found the activity useful, but about one-fourth did not. Although some recommendations proposed, such as use of a registry, may offer greater opportunities to improve this activity, it is likely that one size may not fit all, reflective of the unique backgrounds and needs of ophthalmologists attempting to improve quality in their practices.
Submitted for Publication: December 18, 2015; final revision received April 22, 2016; accepted May 1, 2016.
Corresponding Author: Robert E. Wiggins Jr, MD, MHA, Asheville Eye Associates, 8 Medical Park Dr, Asheville, NC 28803 (email@example.com).
Published Online: July 21, 2016. doi:10.1001/jamaophthalmol.2016.1848.
Author Contributions: Dr Wiggins had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Both authors.
Acquisition, analysis, or interpretation of data: Both authors.
Drafting of the manuscript: Both authors.
Critical revision of the manuscript for important intellectual content: Both authors.
Administrative, technical, or material support: Etz.
Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Dr Wiggins reported serving as the Senior Secretary for Ophthalmic Practice for the American Academy of Ophthalmology, as a board member of the Ophthalmic Mutual Insurance Company, and as an oral examiner for the American Board of Ophthalmology. The authors had no involvement with the development of Maintenance of Certification Part 4. No other disclosures were reported.
Funding/Support: The American Board of Ophthalmology provided funding for the qualitative analysis to Dr Etz.
Role of the Funder/Sponsor: The American Board of Ophthalmology provided the data requested by the authors and provided financial support for the quantitative (statistical) and qualitative analyses in anticipation that a manuscript would be developed and submitted for publication. The American Board of Ophthalmology had no role in the design and conduct of the study; management, analysis, and interpretation of the data; and preparation, review, or approval of the manuscript.
Additional Contributions: Antoine Messiah, MD, PhD, DrSc, research scientist and consultant in Biostatistics and Epidemiology, conceived and performed the statistical analyses in this study. He was compensated for his contribution.