Tuesday, September 10, 2019

Draft USPSTF statement on screening for illicit drug use requires major revisions

It may surprise some observers that for its first quarter century, the U.S. Preventive Services Task Force did not post draft research plans, recommendation statements, or systematic reviews online for public comment. Instead, these documents were developed and discussed on private conference calls and voted on at invitation-only Agency for Healthcare Research and Quality meetings, which I attended as a medical officer from 2006 through 2010. This policy changed after the media uproar over the USPSTF's 2009 mammography recommendations, which included criticism of the Task Force's lack of transparency in guideline development. Reluctant to open their meetings to the public out of fear that it would stifle candid debates about politically sensitive subjects, the USPSTF chose instead to institute a one-month public comment period on draft documents before finalizing their recommendations.

For the first few years, public comments resulted in few significant changes to draft statements. However, there are now examples of the public comment period leading to substantial changes in recommended testing options and letter grades in high-profile topics such as screening for colorectal cancer and cervical cancer. That's a good thing, since the USPSTF draft statement on screening for illicit drug use, which recently closed to public comments*, requires major revisions.

In 2008, the USPSTF concluded that "the current evidence is insufficient to assess the balance of benefits and harms of screening adolescents, adults, and pregnant women for illicit drug use." What specific evidence gap prevented them from making a recommendation?

The most significant research gap identified by the USPSTF is the lack of studies to determine if interventions found effective for treatment-seeking individuals with symptoms of drug misuse are equally effective when applied to asymptomatic individuals identified through screening.

Consequently, the research plan finalized by the USPSTF in October 2016 to update their 2008 statement focused on summarizing evidence of the benefits and harms of counseling interventions to reduce drug use in "screen-detected persons." Focusing the systematic review on this population recognized that their willingness and motivation to change their drug use behavior in response to an intervention likely differs from those who actively seek medical treatment.

The draft review produced by the team that carried out this research plan found that a great deal more applicable evidence had been published in the past decade: 27 randomized, controlled trials with a total of 8,705 participants. The studies' findings, however, were disappointing for advocates of screening:

Across all 27 trials, in general, there was no consistent effect of the interventions on rates of self-reported or biologically confirmed drug use at 3- to 12-month followup. Likewise, across 13 trials reporting the effects of the interventions on health, social, or legal outcomes, none of the trials found a statistically significant difference between intervention and control groups on any of these measures at 3- to 12-month followup.

In other words, interventions for persons who had illicit drug use detected by screening didn't reduce drug use, improve physical health, or lead to fewer brushes with the law. No benefit + no harm (though only 4 studies reported on potential screening harms) = no net benefit. So the appropriate response to the evidence would be either to recommend against primary care screening for illicit drug use (since it adds burden to practices without benefiting patients), or, if the studies were considered too heterogeneous to support so definitive a conclusion, to declare the evidence insufficient to determine the balance of benefits and harms.

Here, though, is where the Task Force appears to have gone off the rails. Rather than draw one of these two evidence-based conclusions, they instead commissioned a second systematic review from a completely different team (without posting a new research plan for public comment) seeking evidence on interventions in treatment-seeking populations. This draft review concluded that psychosocial interventions increase the likelihood of abstinence from drug use for up to 12 months, and that there are effective medications for opioid use disorder in persons who desire treatment (nice to confirm, but hardly a novel finding). The USPSTF relied on this second review (and apparently ignored the first one) to support their draft "B" recommendation to screen for illicit drug use in adults age 18 years or older.

Don't primary care clinicians already ask their patients about illicit drug use? We certainly do, as part of taking the social history of a new patient, but not in the methodical, intensive way that the USPSTF is now recommending. Perhaps the Task Force felt compelled by the pressure of the opioid epidemic to offer something more in terms of clinical prevention than an "I" statement or a politically unpalatable "D" recommendation against screening. Regardless of their rationale, by bypassing their published methods and processes to produce a statement that the evidence clearly doesn't yet support, the USPSTF has ventured onto dangerous ground, raising questions about their scientific credibility at a time when evidence-based institutions need to be defended more than ever.

--

* - A summary of my assertions in this post was submitted to the USPSTF during the public comment process.

Thursday, September 5, 2019

What we choose to name a disease matters

A few years ago around this time, I was dealing with a series of minor health problems. I developed a sinus infection that took several weeks to resolve. I twisted one of my knees ice skating, and for a while I feared that I had torn a meniscus. Occasionally after eating a heavy meal, I had the sensation that food was getting stuck on the way to my stomach - so along with an x-ray and MRI for my knee, my doctor also sent me for an upper GI series. Finally, my blood tests for a new life insurance policy came back with a slightly high hemoglobin A1c level. The A1c test was once used only to monitor glucose control in patients with established diabetes, but in 2010 the American Diabetes Association changed their diagnostic criteria to classify an A1c level of 6.5% or greater as consistent with diabetes, 5.7% to 6.4% as prediabetes, and 5.6% or lower as normal. So on top of knee tendinitis and gastroesophageal reflux disease (GERD), I also found out that I had prediabetes.

Intellectually, I knew that there was no evidence that screening for prediabetes is beneficial (the life insurance company, not my doctor, had ordered the test), and that a screen-and-treat approach to diabetes prevention leads to lots of overdiagnosis. Emotionally, it was a different story. I had recently turned 40 and was feeling old. It had been years since I had gotten the recommended amount of physical activity for adults, and now I was doing even less because my knee hurt. It didn't help that the afternoon I found out about my A1c level, my wife called and asked me to pick up some Burger King sandwiches and fries to bring home for dinner. Not exactly what a pre-diabetic adult with GERD should be eating.

Would I have felt less sick if I had instead been told that I had "slightly high blood sugar"? In recent years, oncologists have recommended re-naming slow-growing lesions that we currently call cancer, such as "ductal carcinoma in situ" of the breast, as "indolent lesions of epithelial origin" (IDLE), hoping that a less scary term will discourage patients from pursuing unnecessarily aggressive (and potentially harmful) treatment. Similarly, a study showed that telling patients they have a "chest cold" rather than "acute bronchitis" helps them feel more satisfied when they don't receive an antibiotic prescription.

A systematic review published in BMJ Open supported the notion that what clinicians choose to name a disease influences patients' management preferences. Some study examples: women who were told they had "polycystic ovary syndrome" were more likely to want a pelvic ultrasound than those who were told they had a "hormone imbalance." Women were more likely to want surgery if they had "pre-invasive breast cancer cells" versus "abnormal cells" or a "breast lesion." Patients were more likely to expect surgery or casting of a "broken bone" or "greenstick fracture" than a "hairline fracture" or "crack in the bone." In each of these cases, the use of a more medicalized or precise term led patients to prefer invasive management options that were no better than more conservative choices.

How will I apply this knowledge to my daily practice? Although I already use the term "prediabetes" sparingly (preferring "increased risk for diabetes"), I'm going to start telling more patients with A1c levels similar to mine that they have high blood sugar instead. That they have heartburn rather than GERD. That they have overuse knee strains instead of tendinitis. And certain medical terms, such as "advanced maternal age" (i.e., pregnancy after the age of 35, or my wife's age when she gave birth to 3 of our 4 children), I will strive to eliminate from my vocabulary entirely.

**

A slightly different version of this post first appeared on Common Sense Family Doctor on October 5, 2017.

Thursday, August 22, 2019

Vaping and health: some answers, more questions

As the Centers for Disease Control and Prevention actively investigates a cluster of severe lung illnesses in 14 states that may be linked to e-cigarette use among adolescents and young adults, an article in the August 15 issue of American Family Physician discusses common questions and answers about vaping and health. Since my colleague Dr. Jennifer Middleton's 2016 blog post on the promise and perils of e-cigarettes, more data have accumulated about the potential harms and benefits of this increasingly common activity. In 2017, one in five high school students reported e-cigarette use in the previous year, leading U.S. Surgeon General Jerome Adams to issue an advisory last year that labeled e-cigarette use in youth a "public health epidemic." More recent data from the Monitoring the Future survey suggested that this epidemic shows no signs of slowing:

Put in historical context, the absolute increases in the prevalence of nicotine vaping among 12th-graders and 10th-graders are the largest ever recorded by Monitoring the Future in the 44 years that it has continuously tracked dozens of substances. These results indicate that the policies in place as of the 2017–2018 school year were not sufficient to stop the spread of nicotine vaping among adolescents.

Although a nationally representative survey of parents of middle and high school students found that nearly all are aware of e-cigarettes, only 44% accurately identified an image of the "pod mod" device Juul; less than one-third reported concerns about their own child's use of e-cigarettes; and nearly three-quarters had received no communication from their child's school regarding the dangers of e-cigarettes. To help clinicians counsel parents and adolescents about vaping and Juuls, a patient education handout accompanying the AFP article highlights important discussion points.

It remains unclear whether e-cigarettes can help adults who are trying to quit smoking. E-cigarettes are not approved by the U.S. Food and Drug Administration as smoking cessation devices; however, a recent randomized trial in the U.K. National Health Service found that in smokers receiving weekly behavioral support, the 1-year abstinence rate in the e-cigarette group was superior to that of smokers using traditional nicotine replacement products. Notably, 80 percent of the e-cigarette group was still vaping after 1 year, compared with only 9 percent of the nicotine-replacement group - a troubling secondary finding given the unknown long-term health consequences of e-cigarette use.

In addition, the AFP article cautions that "unlike nicotine replacement therapy, the advertised nicotine dose on the labeling of e-cigarettes is not always consistent with laboratory analysis of the e-cigarette liquid, and the device and user behavior may affect the dose of nicotine received." Consequently, the authors recommend that clinicians first counsel patients to quit using evidence-based smoking cessation guidelines such as those from the U.S. Preventive Services Task Force, and only discuss using e-cigarettes if these methods are ineffective. In my own practice, I've yet to meet a patient who has successfully quit smoking by switching to e-cigarettes.

**

This post first appeared on the AFP Community Blog.

Tuesday, August 6, 2019

What is the appropriate role of experts in primary care guidelines?

This week, a Health Affairs blog post titled "Fixing Clinical Practice Guidelines" echoed several concerns I've discussed previously: practice guidelines are being produced in abundance but often have variable methodological quality, financial conflicts of interest that threaten objectivity, and/or poor applicability to the clinicians and populations for whom they are intended. To address these problems, the authors reasonably suggested restoring funding for AHRQ's National Guideline Clearinghouse and giving this centralized guideline repository the authority to require that guidelines meet a universal, rigorous methodology standard (including policies to avoid conflicts of interest) for inclusion.

My only real quibble with the commentary is its title: clinical practice guidelines have problems, but they're not broken. I am currently a volunteer panel member for three guidelines in various stages of development, sponsored or co-sponsored by three different medical specialty societies. Each guideline is following the National Academy of Medicine's (formerly Institute of Medicine's) standards for trustworthy guideline development and on track to produce practical recommendations for clinicians that are consistent with the best evidence on each topic. If I didn't think that these guidelines were worthwhile endeavors, I wouldn't have agreed to spend so many hours reviewing and discussing studies, systematic reviews, and meta-analyses, and drafting the text of the recommendations.

Drs. Benjamin Djulbegovic and Gordan Guyatt recently argued in a JAMA Viewpoint that we should not make false distinctions between evidence-based and consensus-based guidelines, since the "evidence alone never speaks for itself" and interpretation of evidence by guideline panelists via a consensus process is always required. Therefore, consensus-based does not necessarily imply weak or insufficient evidence; rather, "the crucial difference between evidence-based medicine and non-evidence-based medicine methods is that the former necessitates that judgments are consistent with underlying evidence, whereas the latter do not."

To me, "non-evidence-based" or "expert consensus" calls to mind an outdated process for developing guidelines (though some groups still use it): assemble a group of distinguished subject matter experts, ask them to formulate some recommendations based on their own practices (which, since they're the experts, must be the most effective and efficient ways to manage patients with the condition), find some published references to support what the experts already know, then write up a report. Bonus points if the guideline panel has an authoritative-sounding name such as the Joint National Committee (whose hypertension guidelines, until JNC 8 at least, largely followed an expert consensus process).

Applying the evidence-based paradigm to primary care guidelines, then, what is the appropriate role of experts? Since a well-conducted systematic review ought to retrieve all relevant research evidence, and guideline panelists should already have expertise in evidence interpretation and grading of recommendations, what more can experts bring to the table? In a BMJ analysis, Dr. Holger Schunemann and colleagues make a useful distinction between "expert evidence" and "expert opinion": evidence is factual, while opinion is a judgment that may (or may not) be based on facts:

For example, a patient might say: “I had prostate cancer detected by prostate specific antigen (PSA) screening and I am alive 10 years later.” That is evidence. It is not the same as saying: “PSA screening saved my life.” That is an opinion. Similarly, a clinical expert might say: “I operated on 100 patients with prostate cancer and none of them died from prostate cancer.” That is evidence. It is not the same as saying: “Prostatectomy is effective.” That is an opinion. In both cases, the opinions might be based on that evidence, but the evidence is clearly not the same as the conclusion.

Schunemann and colleagues review several pitfalls of expert evidence and opinion: not distinguishing between the two; untimely introduction of expert evidence; inadequate disclosure or management of financial and intellectual conflicts of interest; and inadequate appraisal of expert evidence. To make the influence of expert evidence on guidelines more transparent, they advise (and I agree) that it be collected systematically and appraised using the same methodology as for research evidence, which gives more weight to experimental studies or systematically collected observations that are less likely to be biased than a subspecialist physician's personal experiences.

Sunday, July 28, 2019

Deliberate clinical inertia protects patients from low value care

Clinical inertia is usually considered to be a negative term, used to refer to situations in which clinicians do not appropriately initiate or intensify therapy for uncontrolled chronic conditions. For example, a recent study in JAMA Internal Medicine found that less than one-quarter of patients with chronic hypercalcemia in the Veterans Affairs health system received recommended parathyroid hormone level testing, and only about 13 percent of patients who met diagnostic criteria for primary hyperparathyroidism underwent parathyroidectomy.

However, clinical inertia has also been described as a "clinical safeguard" against aggressive consensus guideline prescriptions that do not account for patient preferences and/or potential harms of intensifying treatment. For example, an analysis of the incremental benefits and harms of the 2017 American College of Cardiology / American Heart Association guideline that redefined hypertension as a sustained blood pressure of >= 130/80 mm Hg concluded:

For most adults newly classified as having high blood pressure under the ACC/AHA guideline (the 80% of those newly diagnosed who have <10% 10-year risk), there is no incremental benefit in CVD risk reduction, but potential incremental harms from disease labeling, and, for those who meet the threshold for drug treatment, from adverse drug effects.

In this instance, a large number of patients with systolic blood pressures between 130 and 140 mm Hg could potentially benefit from clinical inertia by avoiding a hypertension diagnosis, additional testing, or prescription medications.

In a 2011 JAMA commentary, Drs. Dario Giugliano and Katherine Esposito observed that clinical inertia "also may apply to the failure of physicians to stop or reduce therapy no longer needed," but that "this neglected side of clinical inertia does not seem to generate as much concern among physicians or scientific associations." A review of polypharmacy in the July 1 issue of American Family Physician noted that regular use of at least five medications is associated with decreased quality of life, increased mobility problems and falls, greater health system use, and increased long-term care placement. Judicious deprescribing can help reduce polypharmacy and improve patient outcomes.

Another (sometimes better) strategy is not starting nonbeneficial medications for unclear reasons in the first place. In a 2018 article in Emergency Medicine Australasia, Dr. Gerben Keijzers and colleagues defined "deliberate clinical inertia" as "the art of doing nothing as a positive response." Arguing that doctors generally have a bias to intervene with diagnostic tests, drugs, or procedures, they suggested reframing the typical decision-making approach:

In clinical practice, 'risk versus benefit' is usually considered in terms of missing a diagnosis rather than potential risks of treatment, so a better approach to care may be to ask, 'Is this intervention more likely to cause harm than the underlying condition with its possible harm or risk?' There are many reasons why 'doing nothing' is difficult, but doing what we can to provide excellent care while preventing medical harm from unnecessary interventions must become one of the pillars of modern holistic healthcare.

Health professionals may readily grasp the rationales behind campaigns to avoid harms and costs of low value care such as Choosing Wisely and Right Care, but patients may be skeptical. Dr. Keijzers and colleagues suggested several ways to support deliberate clinical inertia in practice: empathy and acknowledgment; symptom management; clinical observation; explanation of the natural course of the condition; managing expectations; and shared decision-making ("communicating rather than doing").

**

This post first appeared on the AFP Community Blog.

Tuesday, July 23, 2019

Admissions straight talk from ... me!

Many thanks to Linda Abraham for interviewing me on Admissions Straight Talk about my path in medicine and recent blog post critiquing the U.S. News & World Report's medical school rankings. You can either listen to the full podcast episode embedded below or read a summary of the high points on her Accepted website.