Wednesday, May 7, 2014

Cancer epidemiology 101

I've had many Twitter conversations with cancer screening advocates who fear that the U.S. Preventive Services Task Force's "D" (don't do it) recommendation against PSA-based screening for prostate cancer will lead to a dramatic spike in prostate cancer deaths as primary care physicians screen more selectively, or perhaps stop screening altogether. I seriously doubt these apocalyptic forecasts (for one thing, prostate cancer causes only 3% of deaths in men, and the decline in the U.S. prostate cancer mortality rate since 1990 hasn't had any appreciable effect on overall life expectancy). However, I recognize that reasonable people disagree with my review and the USPSTF's interpretation of the evidence. The American Cancer Society, for example, continues to support screening if men are adequately informed of the known risks and uncertain benefits. But it's one thing to argue from evidence, and quite another to argue from ignorance. Many of the tweets I've seen from urologists, unfortunately, represent the latter.

When I went to medical school, biostatistics and epidemiology was the course that no one took seriously. Things have changed for the better over the years - today I teach a rigorous course in evidence-based medicine and population health at Georgetown University School of Medicine - but there's still an appalling lack of basic knowledge about these topics among practicing physicians of all specialties. Below are a few key concepts in cancer epidemiology that anyone should understand before getting into a dispute about the value of prostate cancer screening.

Lead-time bias - "Prostate cancer survival has improved since we started PSA screening, so screening must work!" The first clause of this sentence is absolutely true: in the 1970s, about 70 percent of men diagnosed with prostate cancer were still alive 5 years after diagnosis, while today that figure is closer to 99 percent. The second clause could be true, but does not invariably follow from the first. By finding prostate cancers in men long before they become symptomatic (if they ever become symptomatic), screening advances the time of diagnosis, but may have no impact on mortality. In other words, 5-year survival always increases when a screening test is implemented, but the only effect may be to give patients an earlier cancer diagnosis without changing the course of their disease. See chest x-ray screening for lung cancer (which was unfortunately a common practice for years) for an example of this phenomenon.
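
If the arithmetic here seems slippery, a toy simulation makes it concrete. The sketch below (in Python, using made-up numbers that aren't drawn from any actual study) gives every man a fixed date of death and lets screening do nothing but move his diagnosis earlier; the 5-year survival statistic improves dramatically even though not a single death is postponed.

```python
# A minimal sketch of lead-time bias, with invented numbers purely for
# illustration (they are not taken from any prostate cancer study).
# Each simulated man has a fixed date of death from his cancer; screening
# only moves the date of diagnosis earlier, so 5-year survival "improves"
# even though no one lives a day longer.
import random

random.seed(0)
N = 100_000
LEAD_TIME = 7  # assumed years by which screening advances the diagnosis

five_year_survival = {"clinical diagnosis": 0, "screen detection": 0}
for _ in range(N):
    # Assume death occurs 2-12 years after the tumor would have caused symptoms.
    years_from_symptoms_to_death = random.uniform(2, 12)
    # Without screening, diagnosis happens at symptoms; with screening, earlier.
    survival_after_clinical_dx = years_from_symptoms_to_death
    survival_after_screen_dx = years_from_symptoms_to_death + LEAD_TIME
    five_year_survival["clinical diagnosis"] += survival_after_clinical_dx >= 5
    five_year_survival["screen detection"] += survival_after_screen_dx >= 5

for label, count in five_year_survival.items():
    print(f"5-year survival after {label}: {100 * count / N:.0f}%")
# Both groups die at exactly the same ages; only the survival statistic changes.
```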


A variation on the above statement is the observation made by older urologists that "Before PSA testing, we used to see men with prostate cancer only when they came in with metastatic disease; today we see them with much less advanced tumors so that they can be cured." Much of this clinical experience reflects the effect of lead time, as well as overdiagnosis of cancer that didn't need to be found in the first place. In the words of urologic oncologist Willet Whitmore, "For a patient with prostate cancer, if treatment for cure is necessary, is it possible? If possible, is it necessary?"

Association does not equal causation - "Prostate cancer mortality has declined by 30 percent since 1990, which must be due to PSA screening." It's tempting to draw sweeping conclusions from observational data - the press has been doing this forever, linking caffeine use to cancer today, then reversing itself when a new headline is needed tomorrow. It's certainly possible that some of the decline in mortality is due to screening, but it's just as likely that some other factor is responsible. For example, men who smoke are less likely than male non-smokers to die from prostate cancer. Does that mean that tobacco use has a protective effect? Of course not; these men are dying prematurely from heart attacks and chronic obstructive pulmonary disease, and therefore not dying of prostate cancer. Also, the temporal association between the mortality decline and PSA screening doesn't make any sense, since the only randomized trial to show that PSA screening reduces prostate cancer mortality (more on this later) found that it takes at least 9 years to do so. Since PSA screening was not widespread in the U.S. until the early 1990s, any benefit of screening couldn't have changed the mortality statistics until around 2000. Yet the decline began in the early 1990s - far too soon for screening to take the credit.
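
The smoking example is really a story about competing risks, and it too can be illustrated with a little simulation. The sketch below uses invented annual death rates (not real figures) and gives smokers and non-smokers the exact same prostate cancer risk; smokers still come out with lower prostate cancer mortality, simply because heart disease and COPD get to them first.

```python
# A toy competing-risks sketch of the smoking example above, with invented
# rates chosen only to make the point: if smokers die earlier of heart and
# lung disease, fewer of them live long enough to die of prostate cancer,
# even though smoking does nothing to protect the prostate.
import random

random.seed(1)
N = 100_000

def simulate(annual_other_cause_mortality):
    """Follow a cohort from age 60 to 90 with an identical annual prostate
    cancer death rate and a varying rate of death from other causes."""
    prostate_deaths = 0
    for _ in range(N):
        for age in range(60, 90):
            if random.random() < annual_other_cause_mortality:
                break  # died of a heart attack, COPD, etc.
            if random.random() < 0.001:  # same prostate cancer risk for everyone
                prostate_deaths += 1
                break
    return prostate_deaths / N

print(f"Prostate cancer mortality, non-smokers: {simulate(0.02):.2%}")
print(f"Prostate cancer mortality, smokers:     {simulate(0.05):.2%}")
# Smokers show lower prostate cancer mortality only because competing causes
# of death get to them first.
```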

The most favorable study represents "the truth" - "Prostate cancer screening reduces prostate cancer mortality by 20 percent." If you only speak to urologists, you might come away thinking that there's only been one randomized trial of PSA screening: the European Randomized Study of Screening for Prostate Cancer (ERSPC), which reported this result in 2009 and again in 2012 after 11 years of follow-up. The usual description of the ERSPC as a single "trial" is problematic (it's actually a combined analysis of screening results from 7 European countries), but even allowing for that, it's only one of 5 randomized trials of screening, and (guess what?) the only one of the 5 to show a benefit.

In the old days, the process of writing reviews and guidelines went as follows: write up recommendations that you already know to be correct from clinical experience, then go to the literature to select evidence that supports your positions. A less biased approach is to evaluate all of the available evidence, regardless of one's preexisting beliefs, which is what my colleagues and I did and what independent reviewers did for both the Cochrane Collaboration and the BMJ. Here's what the Cochrane reviewers concluded: "Prostate cancer screening did not significantly decrease all-cause or prostate cancer-specific mortality in a combined meta-analysis of five RCTs."
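
For readers who want to see what a "combined meta-analysis" means mechanically, here is a back-of-the-envelope inverse-variance (fixed-effect) pooling in Python. The five risk ratios and confidence intervals are entirely hypothetical stand-ins, not the actual trial results; the point is only that a single favorable trial gets weighted and averaged with the null ones, and the pooled estimate can easily straddle 1.0.

```python
# A back-of-the-envelope, inverse-variance fixed-effect meta-analysis using
# entirely hypothetical risk ratios and confidence intervals for five trials
# (not the actual published results). The method, not the numbers, is the point.
import math

# (risk ratio, lower 95% CI, upper 95% CI) - invented numbers for illustration
hypothetical_trials = [
    (0.80, 0.65, 0.98),  # the one "positive" trial
    (1.09, 0.87, 1.36),
    (1.01, 0.76, 1.33),
    (1.04, 0.64, 1.68),
    (1.15, 0.83, 1.59),
]

weights, weighted_log_rrs = [], []
for rr, lo, hi in hypothetical_trials:
    log_rr = math.log(rr)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE back-calculated from CI
    w = 1 / se**2                                     # inverse-variance weight
    weights.append(w)
    weighted_log_rrs.append(w * log_rr)

pooled_log_rr = sum(weighted_log_rrs) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
pooled_rr = math.exp(pooled_log_rr)
ci = (math.exp(pooled_log_rr - 1.96 * pooled_se),
      math.exp(pooled_log_rr + 1.96 * pooled_se))
print(f"Pooled risk ratio: {pooled_rr:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
# With these made-up inputs the pooled estimate straddles 1.0 - no significant
# reduction in prostate cancer mortality, echoing the Cochrane conclusion.
```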

Specialist-authored guidelines are superior to generalist-authored guidelines - "The USPSTF guidelines are invalid because there were no urologists on the panel." Let's put aside for the moment the fact that more prostate cancer diagnoses invariably lead to more business for urologists, and that guidelines authored by specialty societies tend to be of lower quality than those authored by generalists. Dismissing the USPSTF recommendation on the basis of its primary care membership is still nonsense, pure and simple. The vast majority of prostate cancer screening occurs in primary care settings, and therefore primary care clinicians are the most appropriate experts to evaluate and weigh the evidence about screening. (The same can be said about mammography guidelines and radiologists.) Several urologists were, of course, consulted at multiple stages during and after the writing of our evidence review to make sure that no important studies were missed.

As I've said, I welcome debates with well-informed advocates of PSA screening, who tend to view this imperfect test as a glass half-full rather than half-empty (or shattered beyond repair). For the less-informed, I get that you don't have time to go back to medical school for remedial epidemiology lessons. So consider this post your Cliff's Notes.

**

This post originally appeared on Common Sense Family Doctor on December 12, 2012.