Wednesday, September 14, 2016

Drowning in a sea of redundant or flawed systematic reviews

As a medical officer for the U.S. Preventive Services Task Force from 2006 through 2010, I authored or co-authored several systematic reviews of the effectiveness of screening tests. Lately I have wanted to assemble a team of colleagues to perform a systematic review of a research question that, to my knowledge, has not been satisfactorily answered in at least a decade, when the last assessment found insufficient evidence to answer it. But I keep putting it off because I don't have the time: a high-quality systematic review can require countless hours of work, which as a physician / medical teacher / editor / blogger I have been unable to find in my schedule.

Clearly many others do find the time, though. In the current issue of The Milbank Quarterly, my one-time collaborator John Ioannidis, a prolific dean of evidence-based medicine who is best known for his 2005 paper "Why Most Published Research Findings Are False," takes on the problem of "The Mass Production of Redundant, Misleading, and Conflicted Systematic Reviews and Meta-analyses." Ioannidis discusses the implications of an astounding 2700% increase in the number of systematic reviews appearing in the indexed medical literature between 1991 and 2014, a period during which the number of all PubMed-indexed items increased by only 150%. He argues that this massive increase is not explained by the need to "catch up" with the older published literature; rather, only a small percentage of studies are being included in these reviews, and so many systematic reviews are cataloging the same bodies of evidence that "it is possible that nowadays there are more systematic reviews of randomized trials being published than new randomized trials."

For example, between 2008 and 2012, 11 meta-analyses were published on statins for the prevention of atrial fibrillation after cardiac surgery. The second of these reported a sizeable and statistically significant benefit, and the next 9 reached similar findings. Case closed? Apparently not: 10 more meta-analyses of the same topic appeared between 2013 and 2015! In some cases, the excessive production of systematic reviews seems to serve a marketing, rather than a knowledge-advancing, purpose. Redundancy as stealth marketing is particularly pronounced for certain drugs, such as antidepressants, where financially conflicted authors produced 80% of the 185 meta-analyses published between 2007 and 2014.

Finally, Ioannidis points out that reviews may be original and methodologically well done but still clinically useless: some are deliberately never published (and so cannot inform practice); some pool studies of outdated genetic approaches (candidate gene studies with small sample sizes and fragmented reporting, a favorite of Chinese reviewers); and some simply don't find enough consistent evidence to draw conclusions. In all, he estimates that only 3% of currently produced meta-analyses are "decent and clinically useful," meaning, of course, that the other 97% are not.

There are many possible solutions to this problem, including stricter standards for the publication of reviews; changing the "publish or perish" incentives that drive biomedical researchers; and establishing single, authoritative, publicly accessible systematic reviews that serve as living documents, updated periodically by teams of researchers (think Wikipedia for systematic reviews). After reading Ioannidis's article, I have decided that if and when I do find the time to work on a systematic review again, I will do everything in my power to make it one of the 3% that are worth doing.