Figure legend: The blue dotted line indicates the percentage of abstracts reporting more than half of the evaluated items.
eTable 1. Search Strategy and Search Results
eTable 2. Guidance on the Interpretation of Items
eTable 3. Examples of Complete Reporting of Items Among ARVO Abstracts
eReferences. List of Included Abstracts (n=126)
Korevaar DA, Cohen JF, de Ronde MWJ, Virgili G, Dickersin K, Bossuyt PMM. Reporting Weaknesses in Conference Abstracts of Diagnostic Accuracy Studies in Ophthalmology. JAMA Ophthalmol. 2015;133(12):1464–1467. doi:10.1001/jamaophthalmol.2015.3577
Conference abstracts present information that helps clinicians and researchers to decide whether to attend a presentation. They also provide a source of unpublished research that could potentially be included in systematic reviews. We systematically assessed whether conference abstracts of studies that evaluated the accuracy of a diagnostic test were sufficiently informative.
We identified all abstracts describing work presented at the 2010 Annual Meeting of the Association for Research in Vision and Ophthalmology. Abstracts were eligible if they included a measure of diagnostic accuracy, such as sensitivity, specificity, or likelihood ratios. Two independent reviewers evaluated each abstract using a list of 21 items, selected from published guidance for adequate reporting. A total of 126 of 6310 abstracts presented were eligible. Only a minority reported inclusion criteria (5%), clinical setting (24%), patient sampling (10%), reference standard (48%), whether test readers were masked (7%), 2 × 2 tables (16%), and confidence intervals around accuracy estimates (16%). The mean number of items reported was 8.9 of 21 (SD, 2.1; range, 4-17).
Conclusions and Relevance
Crucial information about study methods and results is often missing in abstracts of diagnostic studies presented at the Association for Research in Vision and Ophthalmology Annual Meeting, making it difficult to assess risk for bias and applicability to specific clinical settings.
Diagnostic accuracy studies evaluate how well a test distinguishes diseased from nondiseased individuals by comparing the results of the test under evaluation (“index test”) with the results of a reference (or “gold”) standard. Deficiencies in study design can lead to biased accuracy estimates, suggesting a level of performance that can never be reached in clinical practice. In addition, because of variability in disease prevalence, patient characteristics, disease severity, and testing procedures, accuracy estimates may vary across studies evaluating the same test.1 For example, in one Cochrane review, the sensitivity of optical coherence tomography in detecting clinically significant macular edema in patients with diabetic retinopathy ranged from 0.67 to 0.94 across included studies and specificity ranged from 0.61 to 0.97.2
Given these potential constraints, readers of diagnostic accuracy study reports should be able to judge whether the results could be biased and whether the study findings apply to their specific clinical practice or policy-making situation.3,4
Conference abstracts often are short reports of actual studies, presenting information that helps clinicians and researchers to decide whether to attend a presentation. They also provide a source of unpublished research that could potentially be included in systematic reviews.5 These decisions should be based on an early appraisal of the risk for bias and applicability of the abstracted study. We systematically evaluated the informativeness of abstracts of diagnostic accuracy studies presented at the 2010 Annual Meeting of the Association for Research in Vision and Ophthalmology (ARVO).
Understanding the informative value of ophthalmology abstracts might lead to improved content in the future.
Abstracts of diagnostic accuracy studies presented at the 2010 Annual Meeting of the Association for Research in Vision and Ophthalmology were evaluated.
A minority reported inclusion criteria (5%), clinical setting (24%), patient sampling (10%), the gold standard used (48%), and masking (7%).
Reporting was better for study design (87%), the test under evaluation (100%), number of participants (82%), and disease prevalence (80%).
This study shows how reporting deficiencies in abstracts make it difficult to assess risk for bias and applicability to specific clinical settings.
The online abstract proceedings from ARVO were searched for diagnostic accuracy studies presented in 2010 (eTable 1 in the Supplement). One reviewer (D.A.K.) assessed identified abstracts for eligibility. Abstracts were included if they reported on the diagnostic accuracy of a test in humans and stated that they calculated 1 or more of the following accuracy measures: sensitivity, specificity, predictive values, likelihood ratios, area under the receiver operating characteristic curve, or total accuracy.
For each abstract, one reviewer (D.A.K.) extracted the research field, commercial relationships, support, study design, sample size, and word count (Table 1). Extraction was independently verified by a second reviewer (J.F.C. or M.W.J.dR.).
The informativeness of abstracts was evaluated using a previously published list of 21 items, selected from existing guidelines for adequate reporting (Table 2; eTable 2 in the Supplement).6 The items focus on study identification, rationale, aims, design, methods for participant recruitment and testing, participant characteristics, estimates of accuracy, and discussion of findings. Two reviewers (D.A.K. and J.F.C./M.W.J.dR.) independently scored each abstract. Disagreements were resolved through discussion.
Of 6310 abstracts accepted at ARVO 2010, we identified 126 as reporting on diagnostic accuracy studies (eReferences in the Supplement). Abstract characteristics are provided in Table 1. The most common target condition was glaucoma (n = 51); corresponding studies mostly (n = 39) evaluated imaging of the retinal nerve fiber layer, other retina and choroid structures, or optic disc morphology. Ocular surface and corneal disease (keratoconus and dry eye) and common chorioretinal diseases (diabetic retinopathy and age-related macular degeneration) were targeted in 16 and 15 studies, respectively, followed by various types of uveitis and optic nerve diseases in 9 and 7 studies, respectively.
The reporting of individual items is presented in Table 2; examples of complete reporting per item are provided in eTable 3 in the Supplement. Several elements that are crucial when assessing risk for bias or applicability of the study findings were rarely reported: inclusion criteria (5%), clinical setting (24%), patient sampling (10%), reference standard (48%), masking of test readers (7%), 2 × 2 tables (16%), and confidence intervals around accuracy estimates (16%). None of the abstracts reported all of these items. Reporting was better for other crucial elements: study design (87%), test under evaluation (100%), number of participants (82%), and disease prevalence (80%).
On average, the abstracts reported 8.9 of the 21 items (SD, 2.1; range, 4-17). Twenty-four abstracts (19%) reported more than half of the items (Figure). The mean number of reported items was significantly lower in abstracts of case-control studies compared with cohort studies (P = .001) and in abstracts with sample sizes (number of eyes) below the median (P = .03) (Table 1).
The informativeness of abstracts of diagnostic accuracy studies presented at the 2010 ARVO Annual Meeting was suboptimal. Several key elements of study methods and results were rarely reported, making it difficult for clinicians and researchers to evaluate the methodological quality of the studies.
Differences in patient characteristics and disease severity are known sources of variability in accuracy estimates, and nonconsecutive sampling of patients can lead to bias.1,4 Therefore, readers want to know where and how patients were recruited,3 yet less than a quarter of abstracts reported inclusion criteria, clinical setting, and sampling methods.
Risk for bias and applicability largely depend on the appropriateness of the reference standard.4 However, the reference standard was not reported in half of the abstracts. Agreement between 2 tests is likely to increase if the reader of one test is aware of the results of the other test1,4; however, information about masking was available in only 7%.
About half of all conference abstracts are never published in full.5 It is only possible to include the results of a conference abstract in a meta-analysis if the numbers of true-positive, true-negative, false-positive, and false-negative test results are provided; however, 2 × 2 tables were available in only 16%. Although it is widely recognized that point estimates of diagnostic accuracy should be interpreted with measures of uncertainty, confidence intervals were reported in only 16%.
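The four cell counts of a 2 × 2 table are all a meta-analyst needs to recompute accuracy estimates and their uncertainty. A minimal sketch of that calculation follows; the counts and the simple Wald-type interval are illustrative assumptions, not data from any abstract in the review:

```python
import math

def accuracy_from_2x2(tp, fp, fn, tn):
    """Compute sensitivity and specificity with 95% Wald confidence
    intervals from the four cells of a 2x2 diagnostic accuracy table."""
    def proportion_ci(successes, total):
        p = successes / total
        se = math.sqrt(p * (1 - p) / total)  # standard error of a proportion
        return p, max(0.0, p - 1.96 * se), min(1.0, p + 1.96 * se)

    sens = proportion_ci(tp, tp + fn)  # sensitivity: TP / (TP + FN)
    spec = proportion_ci(tn, tn + fp)  # specificity: TN / (TN + FP)
    return {"sensitivity": sens, "specificity": spec}

# Hypothetical counts, for illustration only
result = accuracy_from_2x2(tp=45, fp=10, fn=5, tn=90)
print(result["sensitivity"])  # point estimate 0.90 with its 95% CI
print(result["specificity"])  # point estimate 0.90 with its 95% CI
```

When only a point estimate such as “sensitivity 90%” is reported, none of this is recoverable, which is why the absent 2 × 2 tables and confidence intervals bar most of these abstracts from meta-analysis.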
Other crucial elements were more frequently provided. The study design, reported by 87%, is important because case-control studies can produce inflated accuracy estimates owing to the extreme contrast between participants with and without the disease.1,7 Disease prevalence, which influences diagnostic accuracy and is an important determinant of the applicability of study findings, was reported by 80%.
Suboptimal reporting in conference abstracts is not only a problem for diagnostic accuracy studies.8 A previous evaluation of the content of abstracts of randomized trials presented at the ARVO Annual Meeting also found important study design information frequently unreported.9 However, the authors concluded that missing information was often available in the corresponding ClinicalTrials.gov record. Because diagnostic accuracy studies are rarely registered,10 complete reporting of conference abstracts is even more critical for these studies.
Using the same list of 21 items, we previously evaluated abstracts of diagnostic accuracy studies published in high-impact journals.6 The overall mean number of items reported there was 10.1; crucial items about design and results were similarly lacking. One previous study assessed elements of reporting in conference abstracts of diagnostic accuracy studies in stroke research.11 In line with our findings, 35% reported whether the data collection was prospective or retrospective, 24% reported on masking, and 11% reported on test reproducibility. Incomplete reporting is not only a problem for abstracts. Five previous reviews evaluated the reporting quality of full-study reports of ophthalmologic diagnostic accuracy studies, all of them pointing to important shortcomings.12
Crucial study information is often missing in abstracts of diagnostic accuracy studies presented at the ARVO Annual Meeting. Suboptimal reporting impedes the identification of high-quality studies from which reliable conclusions can be drawn. This is a major obstacle to evidence synthesis and an important source of avoidable research waste.13
Our list of 21 items is not a reporting checklist; we are aware that word count restrictions make it impossible to report all items in an abstract, and some items are more important than others. Reporting guidelines have been developed for abstracts of randomized trials and systematic reviews,8,14 and a similar initiative is currently under way for diagnostic abstracts.15 The scientific community should encourage informative reporting, not only for full-study reports, but also for conference abstracts.
Corresponding Author: Daniël A. Korevaar, MD, Department of Clinical Epidemiology, Biostatistics, and Bioinformatics, Academic Medical Center, University of Amsterdam, Meibergdreef 9, 1105 AZ Amsterdam, the Netherlands (email@example.com).
Submitted for Publication: April 9, 2015; final revision received August 13, 2015; accepted August 17, 2015.
Published Online: October 8, 2015. doi:10.1001/jamaophthalmol.2015.3577.
Author Contributions: Dr Korevaar had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Korevaar, Cohen, Bossuyt.
Acquisition, analysis, or interpretation of data: All authors.
Drafting of the manuscript: Korevaar.
Critical revision of the manuscript for important intellectual content: Cohen, de Ronde, Virgili, Dickersin, Bossuyt.
Statistical analysis: Korevaar, Bossuyt.
Study supervision: Bossuyt.
Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest and none were reported.
Funding/Support: Dr Dickersin is the principal investigator of a grant to the Johns Hopkins Bloomberg School of Public Health from the National Eye Institute (U01EY020522), which contributes to her salary.
Role of the Funder/Sponsor: The National Eye Institute had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.