This week, a Health Affairs blog post titled "Fixing Clinical Practice Guidelines" echoed several concerns I've discussed previously: practice guidelines are being produced in abundance but often have variable methodological quality, financial conflicts of interest that threaten objectivity, and/or poor applicability to the clinicians and populations for whom they are intended. To address these problems, the authors reasonably suggested restoring funding for AHRQ's National Guideline Clearinghouse and giving this centralized guideline repository the authority to require that guidelines meet a universal, rigorous methodology standard (including policies to avoid conflicts of interest) for inclusion.
My only real quibble with the commentary is its title: clinical practice guidelines have problems, but they're not broken. I am currently a volunteer panel member for three guidelines in various stages of development, sponsored or co-sponsored by three different medical specialty societies. Each guideline is following the National Academy of Medicine's (formerly Institute of Medicine's) standards for trustworthy guideline development and is on track to produce practical recommendations for clinicians that are consistent with the best evidence on each topic. If I didn't think that these guidelines were worthwhile endeavors, I wouldn't have agreed to spend so many hours reviewing and discussing studies, systematic reviews, and meta-analyses, and drafting the text of the recommendations.
Drs. Benjamin Djulbegovic and Gordon Guyatt recently argued in a JAMA Viewpoint that we should not make false distinctions between evidence-based and consensus-based guidelines, since "evidence alone never speaks for itself" and interpretation of evidence by guideline panelists via a consensus process is always required. Therefore, a consensus-based approach does not necessarily imply weak or insufficient evidence; rather, "the crucial difference between evidence-based medicine and non-evidence-based medicine methods is that the former necessitates that judgments are consistent with underlying evidence, whereas the latter do not."
To me, "non-evidence-based" or "expert consensus" calls to mind an outdated process for developing guidelines (though some groups still use it): assemble a group of distinguished subject matter experts, ask them to formulate some recommendations based on their own practices (which, since they're the experts, must be the most effective and efficient ways to manage patients with the condition), find some published references to support what the experts already know, then write up a report. Bonus points if the guideline panel has an authoritative-sounding name such as the Joint National Committee (whose hypertension guidelines, until JNC 8 at least, largely followed an expert consensus process).
Applying the evidence-based paradigm to primary care guidelines, then, what is the appropriate role of experts? Since a well-conducted systematic review ought to retrieve all relevant research evidence, and guideline panelists should already have expertise in evidence interpretation and grading of recommendations, what more can experts bring to the table? In a BMJ analysis, Dr. Holger Schunemann and colleagues make a useful distinction between "expert evidence" and "expert opinion": evidence is factual, while opinion is a judgment that may (or may not) be based on facts:
For example, a patient might say: “I had prostate cancer detected by prostate specific antigen (PSA) screening and I am alive 10 years later.” That is evidence. It is not the same as saying: “PSA screening saved my life.” That is an opinion. Similarly, a clinical expert might say: “I operated on 100 patients with prostate cancer and none of them died from prostate cancer.” That is evidence. It is not the same as saying: “Prostatectomy is effective.” That is an opinion. In both cases, the opinions might be based on that evidence, but the evidence is clearly not the same as the conclusion.
Schunemann and colleagues review several pitfalls of expert evidence and opinion: not distinguishing between the two; untimely introduction of expert evidence; inadequate disclosure or management of financial and intellectual conflicts of interest; and inadequate appraisal of expert evidence. To make the influence of expert evidence on guidelines more transparent, they advise (and I agree) that it be collected systematically and appraised using the same methodology as for research evidence, which gives more weight to experimental studies or systematically collected observations that are less likely to be biased than a subspecialist physician's personal experiences.