Thursday, August 22, 2019

Vaping and health: some answers, more questions

As the Centers for Disease Control and Prevention actively investigates a cluster of severe lung illnesses in 14 states that may be linked to e-cigarette use among adolescents and young adults, an article in the August 15 issue of American Family Physician discusses common questions and answers about vaping and health. Since my colleague Dr. Jennifer Middleton's 2016 blog post on the promise and perils of e-cigarettes, more data have accumulated about the potential harms and benefits of this increasingly common activity. In 2017, one in five high school students reported e-cigarette use in the previous year, leading U.S. Surgeon General Jerome Adams to issue an advisory last year that labeled e-cigarette use in youth a "public health epidemic." More recent data from the Monitoring the Future survey suggested that this epidemic shows no signs of slowing:

Put in historical context, the absolute increases in the prevalence of nicotine vaping among 12th-graders and 10th-graders are the largest ever recorded by Monitoring the Future in the 44 years that it has continuously tracked dozens of substances. These results indicate that the policies in place as of the 2017–2018 school year were not sufficient to stop the spread of nicotine vaping among adolescents.

Although a nationally representative survey of parents of middle and high school students found that nearly all were aware of e-cigarettes, only 44% accurately identified an image of the "pod mod" device Juul; fewer than one-third reported concerns about their own child's use of e-cigarettes; and nearly three-quarters had received no communication from their child's school regarding the dangers of e-cigarettes. To help clinicians counsel parents and adolescents about vaping and Juuls, a patient education handout accompanying the AFP article highlights important discussion points.

It remains unclear whether e-cigarettes can help adults who are trying to quit smoking. E-cigarettes are not approved by the U.S. Food and Drug Administration as smoking cessation devices; however, a recent randomized trial in the U.K. National Health Service found that in smokers receiving weekly behavioral support, the 1-year abstinence rate in the e-cigarette group was superior to that of smokers using traditional nicotine replacement products. Notably, 80 percent of the e-cigarette group were still vaping after 1 year, compared with only 9 percent of the nicotine-replacement group, a troubling secondary finding given the unknown long-term health consequences of e-cigarette use.

In addition, the AFP article cautions that "unlike nicotine replacement therapy, the advertised nicotine dose on the labeling of e-cigarettes is not always consistent with laboratory analysis of the e-cigarette liquid, and the device and user behavior may affect the dose of nicotine received." Consequently, the authors recommend that clinicians first counsel patients to quit using evidence-based smoking cessation guidelines such as those from the U.S. Preventive Services Task Force, and only discuss using e-cigarettes if these methods are ineffective. In my own practice, I've yet to meet a patient who has successfully quit smoking by switching to e-cigarettes.


This post first appeared on the AFP Community Blog.

Tuesday, August 6, 2019

What is the appropriate role of experts in primary care guidelines?

This week, a Health Affairs blog post titled "Fixing Clinical Practice Guidelines" echoed several concerns I've discussed previously: practice guidelines are being produced in abundance but often suffer from variable methodological quality, financial conflicts of interest that threaten objectivity, or poor applicability to the clinicians and populations for whom they are intended. To address these problems, the authors reasonably suggested restoring funding for AHRQ's National Guideline Clearinghouse and giving this centralized guideline repository the authority to require that guidelines meet a universal, rigorous methodology standard (including policies to avoid conflicts of interest) for inclusion.

My only real quibble with the commentary is its title: clinical practice guidelines have problems, but they're not broken. I am currently a volunteer panel member for three guidelines in various stages of development, sponsored or co-sponsored by three different medical specialty societies. Each guideline is following the National Academy of Medicine's (formerly Institute of Medicine's) standards for trustworthy guideline development and is on track to produce practical recommendations for clinicians that are consistent with the best evidence on each topic. If I didn't think that these guidelines were worthwhile endeavors, I wouldn't have agreed to spend so many hours reviewing and discussing studies, systematic reviews, and meta-analyses, and drafting the text of the recommendations.

Drs. Benjamin Djulbegovic and Gordan Guyatt recently argued in a JAMA Viewpoint that we should not make false distinctions between evidence-based and consensus-based guidelines, since the "evidence alone never speaks for itself" and interpretation of evidence by guideline panelists via a consensus process is always required. Therefore, consensus-based does not necessarily imply weak or insufficient evidence; rather, "the crucial difference between evidence-based medicine and non-evidence-based medicine methods is that the former necessitates that judgments are consistent with underlying evidence, whereas the latter do not."

To me, "non-evidence-based" or "expert consensus" calls to mind an outdated process for developing guidelines (though some groups still use it): assemble a group of distinguished subject matter experts, ask them to formulate some recommendations based on their own practices (which, since they're the experts, must be the most effective and efficient ways to manage patients with the condition), find some published references to support what the experts already know, then write up a report. Bonus points if the guideline panel has an authoritative-sounding name such as the Joint National Committee (whose hypertension guidelines, until JNC 8 at least, largely followed an expert consensus process).

Applying the evidence-based paradigm to primary care guidelines, then, what is the appropriate role of experts? Since a well-conducted systematic review ought to retrieve all relevant research evidence, and guideline panelists should already have expertise in evidence interpretation and grading of recommendations, what more can experts bring to the table? In a BMJ analysis, Dr. Holger Schunemann and colleagues make a useful distinction between "expert evidence" and "expert opinion": evidence is factual, while opinion is a judgment that may (or may not) be based on facts:

For example, a patient might say: “I had prostate cancer detected by prostate specific antigen (PSA) screening and I am alive 10 years later.” That is evidence. It is not the same as saying: “PSA screening saved my life.” That is an opinion. Similarly, a clinical expert might say: “I operated on 100 patients with prostate cancer and none of them died from prostate cancer.” That is evidence. It is not the same as saying: “Prostatectomy is effective.” That is an opinion. In both cases, the opinions might be based on that evidence, but the evidence is clearly not the same as the conclusion.

Schunemann and colleagues review several pitfalls of expert evidence and opinion: not distinguishing between the two; untimely introduction of expert evidence; inadequate disclosure or management of financial and intellectual conflicts of interest; and inadequate appraisal of expert evidence. To make the influence of expert evidence on guidelines more transparent, they advise (and I agree) that it be collected systematically and appraised using the same methodology as for research evidence, which gives more weight to experimental studies or systematically collected observations that are less likely to be biased than a subspecialist physician's personal experiences.