Wednesday, January 12, 2022

Clinical prevention shorthand: Do, Don't Do, or Don't Know

Dr. Ned Calonge, the chairman of the U.S. Preventive Services Task Force (USPSTF) during my time as a medical officer at the Agency for Healthcare Research and Quality (AHRQ), liked to say that USPSTF recommendation letter grades boiled down to one of three things: Do, Don't Do, or Don't Know. Doctors like actionable guidelines. Do, or Don't Do. We don't like seeing Don't Know. When I left AHRQ at the end of 2010, about one-third of recommendation statements were rated "I," or insufficient evidence to determine the balance of benefits and harms. In other words, a whole lot of clinical prevention Don't Know.

A provision of the Affordable Care Act stipulated that the USPSTF submit an annual report to Congress identifying evidence gaps and recommending priority areas for prevention research. Its first report, which came out the year after I left, identified eleven clinical topics that had critical evidence gaps (resulting in "I" statements) "that if filled are likely to result in new recommendations." These topics were:

1. Screening for coronary heart disease

2. Screening for colorectal cancer with fecal DNA testing and CT colonography

3. Screening for hepatitis C

4. Screening for hip dysplasia

5. Obesity: moderate- to low-intensity counseling

6. Interventions to prevent child abuse and neglect

7. Screening for illicit drug use

8. Screening for osteoporosis in men

9. Screening for depression in children

10. Screening and counseling for alcohol misuse in adolescents

11. Aspirin use in adults ages 80 years and older

Just over a decade later, four of these topics are no longer "I" statements: #2, 3, and 7 are now Do's ("A" and "B" grades), and #11 is a Don't Do ("D" grade). That's 36 percent, which is something of a disappointment, since the USPSTF called these "critical" evidence gaps, not evidence gaps that it felt researchers could fill at some indistinct future date. Moreover, two of these Do's were, in my opinion, highly questionable calls; no trials have shown that fecal DNA testing, CT colonography, or illicit drug use screening improves patient-oriented outcomes. Arguably, these changes were driven by changes in the composition of the Task Force rather than by new evidence.

A recent study examined the characteristics of the evidence and funding support behind 11 USPSTF "I" statements that changed to a definitive letter grade (most commonly a "B") between 2010 and 2019. The National Institutes of Health (NIH), the largest federal funder of biomedical research, supported a sizable share of the critical studies: 28.8 percent, to be exact. Were those researchers reading the USPSTF's annual reports to Congress when they wrote their grant proposals? Perhaps a few were, but it seems unlikely to me. And for every completed study that filled a prevention evidence gap, many more studies were never done because NIH program officers didn't know what other gaps needed to be filled.

A new National Academy of Medicine report, "Closing Evidence Gaps in Clinical Prevention," aims to make it easier for the USPSTF and AHRQ to communicate unanswered research questions to the agencies that fund prevention research. An ad hoc committee developed a clinical prevention taxonomy and workflow designed to classify different types of evidence gaps (Foundational Issues, Analytic Framework, Dissemination and Implementation) and prioritize them into a research agenda that can be more easily shared. This is such a good idea that it's a wonder no one thought of it before or, if someone did, managed to get decision-makers to pay attention. When 2032 rolls around, hopefully a much higher percentage of the evidence gaps identified by the Task Force in its Eleventh Report to Congress (which, as a reflection of our times, involve health equity and disparities in cardiovascular disease and cancer topics) will be Do or Don't Do rather than Don't Know.