Creation and Validation of Classification Criteria for Discoid Lupus Erythematosus | Dermatology | JAMA Dermatology | JAMA Network
Figure.  Area Under the Receiver Operating Characteristic Curve (AUC) for the Final Model of Discoid Lupus Erythematosus Classification Criteria
Table 1.  Findings Associated With DLE
Table 2.  Final Model Statistics for Identification of DLE Based on Clinical Variables
Table 3.  Final Model for DLE Classification Criteria
Table 4.  Test Characteristics for Points-Based System for DLE Classification
References
1. Merola JF, Nyberg F, Furukawa F, et al. Redefining cutaneous lupus erythematosus: a proposed international consensus approach and results of a preliminary questionnaire. Lupus Sci Med. 2015;2(1):e000085. doi:10.1136/lupus-2015-000085
2. Durosaro O, Davis MDP, Reed KB, Rohlinger AL. Incidence of cutaneous lupus erythematosus, 1965-2005: a population-based study. Arch Dermatol. 2009;145(3):249-253. doi:10.1001/archdermatol.2009.21
3. Elman SA, Nyberg F, Furukawa F, et al. Developing classification criteria for discoid lupus erythematosus: an update from the World Congress of Dermatology 2015 meeting. Int J Womens Dermatol. 2016;2(2):44-45. doi:10.1016/j.ijwd.2015.12.002
4. Singh JA, Solomon DH, Dougados M, et al; Classification and Response Criteria Subcommittee of the Committee on Quality Measures, American College of Rheumatology. Development of classification and response criteria for rheumatic diseases. Arthritis Rheum. 2006;55(3):348-352. doi:10.1002/art.22003
5. Fabbri P, Cardinali C, Giomi B, Caproni M. Cutaneous lupus erythematosus: diagnosis and management. Am J Clin Dermatol. 2003;4(7):449-465. doi:10.2165/00128071-200304070-00002
6. Haber JS, Merola JF, Werth VP. Classifying discoid lupus erythematosus: background, gaps, and difficulties. Int J Womens Dermatol. 2016;2(1):8-12. doi:10.1016/j.ijwd.2016.01.001
7. Elman SA, Joyce C, Nyberg F, et al. Development of classification criteria for discoid lupus erythematosus: results of a Delphi exercise. J Am Acad Dermatol. 2017;77(2):261-267. doi:10.1016/j.jaad.2017.02.030
8. Fransen J, Johnson SR, van den Hoogen F, et al. Items for developing revised classification criteria in systemic sclerosis: results of a consensus exercise. Arthritis Care Res (Hoboken). 2012;64(3):351-357. doi:10.1002/acr.20679
9. van den Hoogen F, Khanna D, Fransen J, et al. 2013 classification criteria for systemic sclerosis: an American College of Rheumatology/European League against Rheumatism collaborative initiative. Arthritis Rheum. 2013;65(11):2737-2747. doi:10.1002/art.38098
10. Diamond IR, Grant RC, Feldman BM, et al. Defining consensus: a systematic review recommends methodologic criteria for reporting of Delphi studies. J Clin Epidemiol. 2014;67(4):401-409. doi:10.1016/j.jclinepi.2013.12.002
11. Graham B, Regehr G, Wright JG. Delphi as a method to establish consensus for diagnostic criteria. J Clin Epidemiol. 2003;56(12):1150-1156. doi:10.1016/S0895-4356(03)00211-7
12. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)—a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377-381. doi:10.1016/j.jbi.2008.08.010
13. Steyerberg EW, Harrell FE Jr, Borsboom GJJM, Eijkemans MJC, Vergouwe Y, Habbema JDF. Internal validation of predictive models: efficiency of some procedures for logistic regression analysis. J Clin Epidemiol. 2001;54(8):774-781. doi:10.1016/S0895-4356(01)00341-9
14. Sullivan LM, Massaro JM, D'Agostino RB Sr. Presentation of multivariate data for clinical use: the Framingham Study risk score functions. Stat Med. 2004;23(10):1631-1660. doi:10.1002/sim.1742
15. Johnson SR, Goek ON, Singh-Grewal D, et al. Classification criteria in rheumatic diseases: a review of methodologic properties. Arthritis Rheum. 2007;57(7):1119-1133. doi:10.1002/art.23018
    Original Investigation
    June 17, 2020

    Creation and Validation of Classification Criteria for Discoid Lupus Erythematosus

    Author Affiliations
    • 1Department of Dermatology, Brigham and Women’s Hospital, Harvard Medical School, Boston, Massachusetts
    • 2Department of Public Health Sciences, Loyola University, Chicago, Illinois
    • 3Department of Dermatology, University of Missouri, Columbia
    • 4Department of Dermatology, The University of Texas Southwestern Medical Center, Dallas
    • 5Department of Dermatology, Cleveland Clinic Foundation, Cleveland, Ohio
    • 6Department of Dermatology, Takatsuki Red Cross Hospital, Takatsuki, Japan
    • 7Division of Medicine, Department of Dermatology, Faculty of Medical Sciences, University of Fukui, Fukui, Japan
    • 8Department of Dermatology, Gachon Gil Medical Center, Gachon University College of Medicine, Incheon, South Korea
    • 9Department of Dermatology, Venereology and Allergology, University of Medicine, Wroclaw, Poland
    • 10Corporal Michael J. Crescenz Veterans Affairs Medical Center, Philadelphia, Pennsylvania
    • 11Department of Dermatology, University of Pennsylvania, Philadelphia
    • 12Division of Rheumatology, Department of Medicine, Brigham and Women’s Hospital, Harvard Medical School, Boston, Massachusetts
    JAMA Dermatol. 2020;156(8):901-906. doi:10.1001/jamadermatol.2020.1698
    Key Points

    Question  Can a broadly applicable set of classification criteria be developed to classify a lesion or a rash as discoid lupus erythematosus for research purposes?

    Findings  In this diagnostic study, previously defined candidate criteria items were applied prospectively to 215 patients at international academic dermatology centers who were identified as having discoid lupus erythematosus or a disease mimicker. Features were compared between groups, and candidate models were identified using best subsets logistic regression analysis to select 1 final model yielding a points-based system for discoid lupus erythematosus classification.

    Meaning  This diagnostic study presents the initial validation of classification criteria for discoid lupus erythematosus for use in clinical research.

    Abstract

    Importance  Classification criteria are the standardized definitions that are used to enroll uniform cohorts for research studies. They emphasize high specificity and are distinct from diagnostic criteria. No universally recognized classification criteria currently exist for discoid lupus erythematosus (DLE), which has led to problematic heterogeneity in observational and interventional clinical studies across the field.

    Objective  To create and validate classification criteria for DLE using 12 previously defined candidate criteria items.

    Design, Setting, and Participants  For this diagnostic study, candidate criteria items were prospectively applied by dermatologists and dermatopathologists at clinical visits of patients with DLE or a condition that could be confused with DLE, termed a DLE mimicker, at academic dermatology practices across the United States, Poland, Japan, and South Korea. Data were collected from December 1, 2017, to February 1, 2019, and analyzed from March 1 to September 19, 2019.

    Main Outcomes and Measures  Clinical features among these 2 groups were calculated and compared with χ2 or Fisher exact tests. Candidate models were identified using best subsets logistic regression analysis. Improvement tests, fit statistics, and discrimination were considered to choose a final model.

    Results  Nine sites contributed 215 patients, 15 of whom had missing or incomplete data. The final model for DLE classification criteria includes only clinical variables: atrophic scarring (3 points), location in the conchal bowl (2 points), preference for the head and neck (2 points), dyspigmentation (1 point), follicular hyperkeratosis and/or plugging (1 point), and erythematous to violaceous in color (1 point), with an area under the receiver operating characteristic curve of 0.91 (95% CI, 0.87-0.95). A score of at least 5 points yields a sensitivity of 84.1% and a specificity of 75.9% in the classification of DLE, with increasing scores yielding higher specificity.

    Conclusions and Relevance  These findings provide the initial validation of classification criteria for DLE for use in observational and clinical trials.

    Introduction

    Cutaneous lupus erythematosus (CLE) remains an ill-defined set of disorders that are often grouped together based on common clinical features, histopathological findings, laboratory abnormalities, association with underlying systemic lupus erythematosus, or combinations thereof.1 Inadequate definitions of CLE and lack of classification criteria of its variants have led to problematic heterogeneity in observational and interventional trials. These sentiments are shared by many international CLE experts; a 2015 survey showed that 91.6% of CLE experts believed that a “single international classification scheme is needed.”1(p3)

    We decided to begin by developing classification criteria for discoid lupus erythematosus (DLE) because it is the most prevalent form of CLE and places a high burden on quality of life. Furthermore, interest in understanding DLE disease burden as well as the treatment of recalcitrant disease is increasing, with many novel therapeutics in the pipeline.2 To our knowledge, no uniform definition of DLE currently exists on which to base a study population for observational and interventional trials. Variability in disease classification impedes comparison of findings between studies and limits the ability to pool results and address questions of treatment efficacy.

    It is important to make the distinction that classification criteria are the standardized definitions primarily intended to enroll uniform cohorts for research and emphasize high specificity, whereas diagnostic criteria reflect a broader and more variable set of features of a given disease and therefore require high sensitivity.3,4 Although classification criteria are not synonymous with diagnostic criteria, they typically mirror the list of criteria that are used for diagnosis.4 Diagnostic criteria for DLE were proposed by Fabbri et al5 in 2003; although not formally adopted by the expert community,6 this initial work serves as a good framework for our efforts.

    The working group of Elman et al7 has previously described our efforts to identify a list of candidate criteria items. This potential list includes 7 clinical items (erythematous to violaceous in color, atrophic scarring, dyspigmentation, follicular hyperkeratosis/plugging, scarring alopecia, location in the conchal bowl, and preference for the head and neck) and 5 histopathological items (interface/vacuolar dermatitis, perivascular and/or periappendageal lymphohistiocytic infiltrate, follicular keratin plugs, mucin deposition, and basement membrane thickening). The specific objective of this study is to develop a points-based system for classifying DLE and to provide test characteristics (sensitivity and specificity) of this points-based system. Our method was adapted from guidance provided by the American College of Rheumatology guidelines committee, particularly their work on the American College of Rheumatology-European League Against Rheumatism classification criteria for systemic sclerosis.8,9

    Methods

    Previous methods to derive a candidate item list of classification criteria are described elsewhere in full.7 To summarize, the Delphi technique is a method of consensus building using a series of iterative questionnaires to collect data from a panel of experts and stakeholders in a given area of interest. The iterative nature of the process, together with controlled anonymous feedback at each questionnaire stage, participant anonymity, and a predefined stop criterion, allows convergence toward a consensus answer. Using this technique, a potential classification criteria item set was narrowed from 48 to 12 items in 2 rounds of voting as well as an intermediary step of nominal group techniques.3,7,10,11 These candidate criteria items were moved forward into this classification development and validation process. This study was approved by the Partners Healthcare System institutional review board, which waived the need for informed consent for use of deidentified data.

    Selection of Patients for Validation

    Domestic and international experts in the field of CLE were identified. At each expert’s clinical site, each dermatologist was asked to prospectively identify patients with cutaneous morphologic features suggestive of DLE vs a disease mimicker. Disease mimickers constitute a set of conditions from which DLE needs to be distinguished based on morphologic and/or histopathological features to obtain the study population of interest.

    Patients were included if they carried a diagnosis of DLE at a single visit, either with the presence of distinct morphologic features that suggested DLE, as determined by an international CLE expert dermatologist at each international academic center (including K.B., B.F.C., A.P.F., F.F., M.H., H.J.K., J.C.S., V.P.W., and J.F.M.), or if a diagnosis of DLE had been made previously by morphologic and/or histopathological features. A virtual investigator meeting was held wherein dermatologists were provided with reference photographs and a collection of images depicting representative clinical images of each morphologic descriptor agreed on by the DLE Steering Committee to ensure standardization of definitions.

    Patients were also included if they carried a diagnosis of an entity that morphologically could be confused with DLE. These disease mimickers, as defined by the expert panel, included dermatomyositis, subacute CLE, other cutaneous lupus subsets (acute cutaneous lupus, chilblains, lupus panniculitis), psoriasis, lichen planus, lichen planopilaris, rosacea, sarcoidosis, and other scarring alopecias. Individuals were excluded who lacked clinical and/or histopathological findings suggestive of DLE or a mimicker.

    After each patient encounter, dermatologists were asked to identify the presence or absence of each candidate item with regard to morphologic characteristics of DLE. Dermatologists then recorded their diagnosis of DLE or another relevant mimicker disease diagnosis. Clinicians were also asked to rank their diagnostic certainty as very certain, certain, neutral, uncertain, or very uncertain.

    At each site, if a biopsy specimen of a relevant clinical lesion was available (obtained previously or obtained at this encounter but not for the purposes of the study), 1 central dermatopathologist at each site was asked to review the case. The site dermatopathologist was asked to determine whether the candidate histopathological criteria items were present. As above, dermatopathologists were asked to determine whether the diagnosis was consistent or not consistent with DLE and, if not consistent, to provide an alternative diagnosis. Dermatopathologists were also asked to rank their diagnostic certainty using the same scale from very certain to very uncertain. Data were entered into a secure database created and maintained through Partners Healthcare System’s Research Electronic Database Capture (REDCap) by 1 representative at each site without the transmission of patient identifying data.12

    Statistical Methods

    Data were analyzed from March 1 to September 19, 2019. Diagnoses by clinical and dermatopathological features were tabulated and presented as counts and percentages. Clinical features among those with and without diagnoses of DLE were calculated and compared with χ2 or Fisher exact tests as appropriate. Among those with both clinical and dermatopathological features, agreement was measured with the Cohen κ statistic. Candidate models were identified using best subsets logistic regression analysis to predict DLE. Adjusted odds ratios (AORs) and 95% CIs were computed for multivariable models. Differences in the area under the receiver operating characteristic curve (AUC) were tested for statistical significance among models using a nonparametric contrast estimation procedure. Improvement tests, fit statistics, discrimination, and clinical relevance were considered to choose a final model. As a measure of internal validation, the 95% CI for the AUC was calculated using bootstrap simulations, including a deflation factor to account for performance optimism.13 A points-based scoring system was developed using the β coefficients from the logistic regression model, with higher scores indicating higher likelihood of DLE diagnosis.14 Cut points in the scoring system were evaluated to identify optimal sensitivity and specificity. Two-sided statistical tests were performed and P < .05 indicated significance. Analyses were performed using SAS, version 9.4 (SAS Institute Inc).
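    The bootstrap step described above can be illustrated in miniature. The sketch below (an assumption-laden illustration, not the published SAS analysis, and omitting the deflation factor for performance optimism) computes the AUC by the Mann-Whitney pairwise method and a percentile bootstrap confidence interval:

```python
import random

def auc(labels, scores):
    """Area under the ROC curve by the Mann-Whitney pairwise method;
    ties count as half a concordant pair."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_ci(labels, scores, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for the AUC. The published analysis
    additionally deflated the interval for optimism (reference 13)."""
    rng = random.Random(seed)
    n = len(labels)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        ys = [labels[i] for i in idx]
        ss = [scores[i] for i in idx]
        if 0 < sum(ys) < n:  # resample must contain both classes
            stats.append(auc(ys, ss))
    stats.sort()
    lo = stats[int(alpha / 2 * len(stats))]
    hi = stats[int((1 - alpha / 2) * len(stats)) - 1]
    return lo, hi
```

The percentile interval simply reads off the 2.5th and 97.5th percentiles of the resampled AUC distribution; with the optimism correction applied, the interval would be shifted slightly toward lower values.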

    Results
    DLE Criteria

    From December 1, 2017, to February 28, 2019, 9 sites contributed data from 215 patients (178 [82.8%] in the United States, 24 [11.2%] in Japan and South Korea, and 13 [6.0%] in Poland), with 94 patients (43.7%) evaluated as having a diagnosis consistent with DLE. Overall, 189 of 214 patients (88.3%) had findings that were erythematous to violaceous in color; 146 of 215 (67.9%), dyspigmentation; 107 of 214 (50.0%), atrophic scarring; and 81 of 213 (38.0%), scarring alopecia. One hundred forty-one of 215 patients (65.6%) had preference for head and neck, and 57 of 212 (26.9%) had lesions in the conchal bowl. All findings were more prevalent among patients with DLE compared with DLE mimickers, with greatest differences observed for atrophic scarring (78 of 94 [83.0%] vs 29 of 120 [24.2%]; P < .001), location in the conchal bowl (45 of 92 [48.9%] vs 12 of 120 [10.0%]; P < .001), scarring alopecia (56 of 92 [60.9%] vs 25 of 121 [20.7%]; P < .001), and location in the head and neck (82 of 94 [87.2%] vs 59 of 121 [48.8%]; P < .001). All comparisons were statistically significant except for differences in erythematous-violaceous in color (87 of 94 [92.6%] vs 102 of 120 [85.0%]; P = .09) (Table 1).
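    As a worked check of one comparison above (illustrative only; the authors used χ2 or Fisher exact tests as appropriate), the Pearson χ2 statistic for the atrophic scarring 2×2 table can be computed directly:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (df = 1, no continuity correction)
    for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Atrophic scarring (Table 1): 78 of 94 DLE vs 29 of 120 mimickers
stat = chi2_2x2(78, 94 - 78, 29, 120 - 29)  # ≈ 72.9
```

The statistic far exceeds 10.83, the df = 1 critical value for P < .001, consistent with the reported significance.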

    Features from dermatopathological evaluation were reported for 86 patients, of whom 76 (88.4%) had perivascular and/or periappendageal lymphohistiocytic infiltrate; 56 of 85 (65.9%), interface/vacuolar dermatitis; 46 of 86 (53.5%), mucin deposition; 31 of 86 (36.0%), follicular keratin plugs; and 28 of 86 (32.6%), basement membrane thickening. Dermatopathological findings were similarly more common among patients with DLE compared with mimickers, with statistically significant differences for basement membrane thickening (21 of 37 [56.8%] vs 7 of 49 [14.3%]; P < .001), follicular keratin plugs (21 of 37 [56.8%] vs 10 of 49 [20.4%]; P < .001), mucin deposition (27 of 37 [73.0%] vs 19 of 49 [38.8%]; P = .002), and interface/vacuolar dermatitis (30 of 36 [83.3%] vs 26 of 49 [53.1%]; P = .004) (Table 1).

    The most common mimickers included dermatomyositis (n = 31), subacute CLE (n = 15), other forms of cutaneous lupus (n = 8), lichen planopilaris and other lichenoid disorders (n = 21), and psoriasis (n = 10). Clinical findings agreed with histopathological findings in 79 of 86 patients in the sample (91.9%), and the Cohen κ of 0.83 (95% CI, 0.72-0.95) suggests strong agreement.
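    The agreement statistic can be reproduced mechanically. The article reports only the totals (86 paired cases, 79 agreements, 37 DLE diagnoses by dermatopathology), so the 2×2 split below is a hypothetical one chosen to be consistent with those totals, not the actual data:

```python
def cohen_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table:
    a = both call DLE, d = both call mimicker, b and c = disagreements."""
    n = a + b + c + d
    p_obs = (a + d) / n
    p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical split consistent with the reported totals
# (86 paired cases, 79 agreements, 37 DLE by dermatopathology):
kappa = cohen_kappa(34, 4, 3, 45)  # ≈ 0.83
```

Kappa discounts the observed agreement (79/86) by the agreement expected from the marginal frequencies alone, which is why it is preferred over raw percentage agreement.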

    Model Development

    Fifteen patients who had missing values for any clinical variables, whose diagnoses were uncertain or very uncertain as determined by the dermatologist and/or dermatopathologist, or who had disagreement in diagnosis between the dermatologist and dermatopathologist were removed. This yielded 200 total cases for inclusion into model identification. The removal of these patients did not significantly alter the geographical representation of our cohort (166 [83.0%] for the United States, 22 [11.0%] for Japan and South Korea, and 12 [6.0%] for Poland).

    Candidate models were compared against a full model that included all main clinical effects. Although some reduced models provided similar discrimination, we weighed parsimony against the ability of our model to encompass the heterogeneous presentation of DLE, including location differences, and to incorporate important features identified from the previous Delphi study of Elman et al.7 The relatively small sample size of cases with dermatopathology (86 of 215 [40.0%]) precluded meaningful use of these items in our final model. Our final 6-item model includes atrophic scarring (AOR, 8.70; 95% CI, 3.56-21.23), location in the conchal bowl (AOR, 6.80; 95% CI, 2.31-20.05), preference for head and neck (AOR, 9.41; 95% CI, 3.33-26.56), dyspigmentation (AOR, 3.23; 95% CI, 1.21-8.62), follicular hyperkeratosis/plugging (AOR, 2.94; 95% CI, 0.98-8.82), and erythematous to violaceous in color (AOR, 3.44; 95% CI, 0.97-12.18), and features excellent discrimination (AUC, 0.91; 95% CI, 0.87-0.95) (Table 2). Scarring alopecia had a moderate association with DLE in the full model (AOR, 1.46; 95% CI, 0.54-3.96) and did not lead to higher discrimination compared with the chosen 6-variable model (AUC, 0.91; 95% CI, 0.87-0.95; P = .43 for model comparison) (Figure). This final model was selected based on model fit statistics with predictive value in line with our group's goals for classification criteria. The final model with β coefficients is provided in the eFigure in the Supplement.
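    The conversion from model coefficients to integer weights follows the Framingham-style approach of Sullivan et al14: each β is divided by a base constant (here, the smallest β) and rounded. A rough sketch using β = ln(AOR) from the values above follows; note that the published points were derived from the actual β coefficients (eFigure) together with clinical judgment, so this naive rounding approximates but does not exactly reproduce the published weights:

```python
import math

# Adjusted odds ratios from the final model (Table 2); beta = ln(AOR)
aors = {
    "atrophic scarring": 8.70,
    "conchal bowl": 6.80,
    "head and neck": 9.41,
    "dyspigmentation": 3.23,
    "follicular hyperkeratosis/plugging": 2.94,
    "erythematous to violaceous": 3.44,
}
betas = {item: math.log(aor) for item, aor in aors.items()}
base = min(betas.values())                 # smallest effect maps to 1 point
points = {item: round(beta / base) for item, beta in betas.items()}
```

On these AORs the naive rounding assigns 2 points to atrophic scarring rather than the published 3, illustrating why the authors worked from the fitted β coefficients rather than from the AORs alone.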

    Model Evaluation

    The internally validated model performance was similar to the apparent model performance measured by the C statistic (AUC, 0.89; 95% CI, 0.85-0.93) via 5000 bootstrap samples. The β coefficients were converted to a points-based system as follows: 3 points for atrophic scarring, 2 points each for location in the conchal bowl or preference for head and neck, and 1 point each for dyspigmentation, follicular hyperkeratosis/plugging, and erythematous to violaceous in color. Based on a cut score of at least 5 points, sensitivity and specificity were 84.1% and 75.9%, respectively. However, as the cut score increases (with a maximum score of 10 points), the specificity of the classification criteria increases. The final model is displayed in Table 3, and the test characteristics for the full points-based system are displayed in Table 4.
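    The points-based system described above can be applied mechanically; a minimal sketch follows (the snake_case item names are shorthand invented here, and the weights and cut score are those reported in the text):

```python
# Published weights from the final points-based model
POINTS = {
    "atrophic_scarring": 3,
    "conchal_bowl": 2,
    "head_and_neck": 2,
    "dyspigmentation": 1,
    "follicular_hyperkeratosis": 1,
    "erythematous_violaceous": 1,
}

def classify_dle(findings, cut_score=5):
    """Sum the points for the findings present; a total of at least
    cut_score (default 5: sensitivity 84.1%, specificity 75.9%)
    classifies the lesion as DLE for research purposes."""
    score = sum(POINTS[f] for f in findings)
    return score, score >= cut_score

# Example: atrophic scarring plus head/neck preference -> 5 points, DLE
score, is_dle = classify_dle({"atrophic_scarring", "head_and_neck"})
```

Raising cut_score trades sensitivity for specificity, mirroring the pattern of test characteristics reported for the full range of scores.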

    Discussion

    Herein we present the initial validation of classification criteria for DLE. Based on our proposed model, a score of at least 5 yields classification as DLE with sensitivity of 84.1% and specificity of 75.9%, with increasing points yielding higher specificity. These classification criteria can be applied to both localized (lesions above the neck) and generalized (lesions above and below the neck) disease and do not require a biopsy to apply successfully.

    A few items, such as scarring alopecia, were not included in the criteria because their inclusion did not substantially change our test characteristics or receiver operating characteristic curve. Furthermore, dermatopathology was not included in the final model. There are several reasons for this. First, only 86 patients (40.0%) had pathological findings available, which limited our ability to meaningfully incorporate these items into classification criteria models. In addition, if a main aim of classification criteria is for enrollment into clinical trials or observational studies, we believe that the feasibility of using our classification criteria is enhanced by not requiring a biopsy for classification. We plan to use our histopathological data to devise a separate DLE histopathological classification criteria set; this work is ongoing.

    Strengths and Limitations

    Strengths of our cohort include relatively large numbers for an uncommon disease, as well as diverse geographic representation and involvement of expert stakeholders from North America, Europe, and Asia. We believe that the classification criteria have good face validity, because nearly all proposed items in the list can be frequently used to diagnose DLE in clinical practice. Furthermore, our data for model development included 9 sites, and we calculated measures of internal validation using bootstrap estimation on the entire sample. This approach was previously demonstrated to be superior to split-sample analyses, in which performance is tested on held-out sites after model development on training sites.13

    Limitations of our study include that most patients came from specialized referral centers, most of which are in the United States, and that demographic data of patients were not evaluated, because this study sought primarily to assess morphologic and histopathological characteristics alone. These factors may reduce the generalizability of our criteria. Moreover, testing and validating a classification system for DLE is difficult because there is no criterion standard test or criterion apart from expert opinion. We relied on expert opinion from a group of internationally recognized clinical and research leaders in connective tissue disease, including dermatologists, dermatopathologists, and rheumatologists, as our criterion standard, which is similar to the process used in the development of other classification criteria.9,15

    Conclusions

    Importantly, many of the features used in our classification criteria represent features of DLE disease damage rather than disease activity. This is understandable because early active lesions of CLE may be difficult to distinguish from other inflammatory dermatoses, with specificity being largely driven by damage characteristics. If one of the purposes of classification criteria is to enroll patients in clinical trials, relying on disease damage may yield a group of patients whose disease is more advanced. It is likely that limiting trials to DLE as opposed to CLE would not allow inclusion of patients with early disease or with more than 1 subtype of CLE. These proposed criteria may be most helpful in defining the subtype of CLE, when possible, for a given patient enrolled in CLE or systemic lupus erythematosus/CLE trials. This would have the added benefit of defining whether an intervention might be effective for DLE relative to early disease or another subtype of CLE. We propose that for clinical trials, additional metrics focused on disease activity, such as the activity score of the Cutaneous Lupus Erythematosus Disease Area and Severity Index, be used to define activity of the underlying DLE as appropriate. It is also worth noting that many patients with CLE may have more than 1 subtype present at any time.

    Overall, the importance of DLE classification is highlighted by the need to ensure that patients categorized as having DLE for inclusion in studies do indeed have the disease based on defined characteristics. We hope that classification criteria will provide investigators with a foundation on which to base observational and interventional clinical trials and a common language with which to communicate effectively about this patient population. Classification of DLE is part of a larger process to classify other subsets of CLE as well as other connective tissue diseases, with active efforts under way in dermatomyositis and morphea. All of this is with the aim of advancing knowledge and treatment of these diseases to better care for our patients in the future.

    Back to top
    Article Information

    Accepted for Publication: April 14, 2020.

    Corresponding Author: Joseph F. Merola, MD, MMSc, 221 Longwood Ave, Boston, MA 02115 (jfmerola@bwh.harvard.edu).

    Published Online: June 17, 2020. doi:10.1001/jamadermatol.2020.1698

    Author Contributions: Drs Elman and Joyce served as co-first authors of this article. Drs Werth and Merola served as co-senior authors. Dr Merola had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

    Concept and design: Elman, Furukawa, Lian, Werth, Merola.

    Acquisition, analysis, or interpretation of data: Elman, Joyce, Braudis, Chong, Fernandez, Hasegawa, Kim, Li, Lian, Szepietowski, Werth, Merola.

    Drafting of the manuscript: Elman, Joyce, Fernandez, Furukawa, Li, Merola.

    Critical revision of the manuscript for important intellectual content: Braudis, Chong, Fernandez, Furukawa, Hasegawa, Kim, Lian, Szepietowski, Werth, Merola.

    Statistical analysis: Elman, Joyce, Merola.

    Obtained funding: Lian.

    Administrative, technical, or material support: Elman, Fernandez, Hasegawa, Kim, Li.

    Supervision: Elman, Furukawa, Lian, Werth.

    Conflict of Interest Disclosures: Dr Chong reported receiving personal fees from Viela Bio, Inc, and Beacon Bioscience, Inc, grants from Daavlin Corporation, and other from Pfizer, Inc, and Biogen, Inc, outside the submitted work. Dr Fernandez reported receiving personal fees from AbbVie, Inc, and UCB, grants and personal fees from Novartis International AG and Mallinckrodt Pharmaceuticals, and serving as principal investigator for a phase 2b clinical trial from Pfizer, Inc, and Corbus Pharmaceuticals outside the submitted work. Dr Fernandez reported receiving research support from Mallinckrodt Pharmaceuticals, Novartis International AG, Pfizer, Inc, and Corbus Pharmaceuticals; serving as a consultant for AbbVie, Inc, Novartis International AG, Mallinckrodt Pharmaceuticals, and UCB; and serving as a speaker/teacher for AbbVie, Inc, Novartis International AG, and Mallinckrodt Pharmaceuticals. Dr Szepietowski reported serving as a consultant for AbbVie, Inc, Biogenetica International Laboratories, Leo Pharma A/S, Merck-Serono, Novartis International AG, Pierre Fabre, Sandoz, Inc, and Toray Corporation and a speaker for AbbVie, Inc, Astellas Pharma, Inc, Actavis Generics, Adamed Pharma SA, Berlin-Chemie Mennarini, Bioderma Laboratories, Fresenius SE & Co, Janssen-Cilag BV, Leo Pharma A/S, Takeda Pharmaceutical Company Limited, and Vichy Laboratories outside the submitted work. 
Dr Werth reported serving as a consultant for Celgene Corporation, Incorporated, Medimmune, LLC, Resolve Therapeutics, LLC, Genentech, Inc, Idera, Inc, Janssen Pharmaceutica, Eli Lilly and Company, Pfizer, Inc, Biogen, Inc, Bristol-Myers Squibb, Gilead Sciences, Inc, Amgen, Inc, Medscape, Nektar Therapeutics, Incyte Corp, EMD Serono, CSL Behring, Principia Biopharma, Inc, Crisalis BioTherapeutics, Viela Bio, Inc, argenx SE, Kyowa Kirin, Inc, Regeneron Pharmaceuticals, AstraZeneca plc, Octapharma AG, and GlaxoSmithKline and receiving grants from Celgene Corporation, Incorporated, Janssen Pharmaceutica, Pfizer, Inc, Biogen, Inc, Gilead Sciences, Inc, Corbus Pharmaceuticals, Genentech, Inc, AstraZeneca plc, Viela Bio, Inc, CSL Behring, and Syntimmune, Inc. Dr Merola reported serving as a consultant and/or investigator for Merck Research Laboratories, AbbVie, Inc, Dermavant Sciences, Eli Lilly and Company, Novartis International AG, Janssen Pharmaceutica, UCB, Celgene Corporation, Incorporated, Sanofi Regeneron Pharmaceuticals, Almirall SA, Sun Pharmaceutical Industries, Ltd, Biogen, Inc, Pfizer, Inc, Incyte Corp, Aclaris Therapeutics, Inc, EMD Serono, Avotres Therapeutics LLC, and Leo Pharma A/S outside the submitted work. No other disclosures were reported.

    Funding/Support: This study was supported by the Department of Dermatology, Brigham and Women’s Hospital.

    Role of the Funder/Sponsor: The sponsor had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; or decision to submit the manuscript for publication.

    Additional Contributions: We thank the International Conference on Cutaneous Lupus Erythematosus Steering Committee, including Filippa Nyberg, MD, Karolinska University Hospital, Mark Goodfield, MD, Leeds General Infirmary, Branka Marinovic, MD, University Hospital Center Zagreb, and Jan Dutz, MD, FRCPC, University of British Columbia. We also thank Arash Mostaghimi, MD, MPP, MPH, Brigham and Women's Hospital, Ruth Ann Vleugels, MD, MPH, Brigham and Women's Hospital, Meera Tarazi, MD, University of Pennsylvania, and Joyce Okawa, RN, University of Pennsylvania, for their input and assistance with subject recruitment. None of these contributors was compensated for their work. Finally, we thank the Department of Dermatology, Brigham and Women's Hospital, Boston, Massachusetts, for their administrative assistance.
