According to a 2011 definition from the Institute of Medicine, clinical practice guidelines provide “recommendations intended to optimize patient care,” and “are informed by a systematic review of evidence and an assessment of the benefits and harms of alternative care options.” It has been known for years that the quality of guidelines varies considerably. The guidelines that best serve the interests of patients and physicians are those that are formulated by independent experts without funding from industry or others with self-interest in the outcome and that are based on rigorous but readily understood methods.1
In this issue of JAMA Internal Medicine, Gionfriddo and colleagues discuss the 2013 diabetes management recommendations of the American Association of Clinical Endocrinologists. They identify substantial shortcomings both in the process by which the recommendations were developed and in the financial conflicts of interest of most panel members, including the chair. It should not have been difficult for a leading specialty society to prepare management recommendations through a process devoid of such shortcomings. Clinicians and patients are most likely to pay attention to recommendations that are incontrovertibly based on evidence and free of bias.
Medical specialty societies and professional associations are the most common sponsors of guidelines. Of the approximately 2500 guidelines listed in the National Guideline Clearinghouse (http://www.guideline.gov), approximately two-fifths were issued by a medical specialty society and one-fifth by a professional association. These organizations have leading roles in improving the quality and consistency of guidelines.
There are many readily available resources to help. In 2011, the Institute of Medicine issued 2 authoritative reports: “Clinical Practice Guidelines We Can Trust” and “Finding What Works in Health Care: Standards for Systematic Reviews.” The framework of the Grading of Recommendations Assessment, Development and Evaluation Working Group, more commonly known as GRADE (http://www.gradeworkinggroup.org), can be used to rate the strength of the evidence that supports specific management recommendations. Recently, an ad hoc working group issued recommendations for assessing bias in the constitution of guideline panels.2 Among the “red flags” that should raise “substantial skepticism” are guidelines sponsored by professional societies that receive substantial industry funding or by a proprietary company, sponsorship that is “undeclared or hidden,” financial conflicts of the committee chair or of multiple panel members, and the lack of substantial involvement of an expert in methodology or the evaluation of evidence.
There remain frequent examples, from many fields of medicine, of clinical practice guidelines or similar statements that are marred by weak methods and financial conflicts of interest. There is no reason why this should continue. All groups that develop guidelines should adhere to standards that are at least as rigorous as those of the Institute of Medicine and GRADE. Clinicians and patients should be very skeptical of guidelines that do not meet these standards.
Steinbrook R. Improving Clinical Practice Guidelines. JAMA Intern Med. 2014;174(2):181. doi:10.1001/jamainternmed.2013.7662