Monday, June 17, 2019

For most, an aspirin a day won't keep the doctor away

A daily low-dose (81 mg) aspirin was once considered an essential component of cardiovascular disease (CVD) prevention for middle-aged and older adults. In 2006, the National Commission on Prevention Priorities ranked "discussing aspirin use in high-risk adults" the highest priority preventive service based on clinically preventable burden and cost effectiveness, and two years ago, in an updated set of rankings, it still rated aspirin use as the fifth highest priority for improving utilization. However, in 2018 the results of three large randomized trials suggested that the harms of aspirin taken to prevent a first CVD event outweigh its benefits for most persons. In an editorial in the June 1 issue of American Family Physician, Dr. Jennifer Middleton and I reviewed the latest evidence and concluded:

The new data do not exclude the possibility that aspirin may still benefit adults at very high CVD risk (e.g., 20% or more over 10 years) or those at lower risk who are unable to tolerate statins, but the data otherwise suggest that the risks of low-dose aspirin therapy for primary prevention outweigh any potential benefits. For most patients, we should be deprescribing aspirin for primary prevention of CVD. To prevent heart attacks and strokes, family physicians should focus instead on smoking cessation and lifestyle changes, controlling high blood pressure, and prescribing statins when indicated.

In a 2019 clinical practice guideline, the American College of Cardiology / American Heart Association largely concurred, recommending against prescribing aspirin for primary prevention of CVD in adults older than age 70 and downgrading its role in other adults at high risk to "may be considered" on a case-by-case basis.

Although aspirin is still strongly recommended to prevent recurrent CVD events, its rise and fall in primary prevention seems to have become another case of medicine reversing itself. Unlike other notable examples of medical reversal such as menopausal hormone therapy and tight glucose control in type 2 diabetes, the effectiveness of aspirin was supported by many well-conducted randomized, controlled trials. Aspirin worked ... until it didn't. In a recent commentary in the Journal of General Internal Medicine, Palmer Greene and colleagues suggested that it may be a good idea to consider established evidence-based practices as having an "expiration date":

An “evidentiary statute of limitations” would require the occasional reassessment of accepted therapies to consider which might no longer be of use—possibly because of changes in the population as a whole, a changing understanding of whom the treatment is appropriate for, or evolving therapies for the prevention or treatment of the disease in question. Not only should we consider if older data still applies, we should also strive to anticipate the factors to which the results of a newly published positive study might be sensitive. For instance, is there an event rate in the control group below which the harms of the therapy might outweigh the benefit? Is there a treatment success rate that, when achieved, would make screening inefficient?

Not starting aspirin is relatively straightforward, but patients who have taken aspirin for many years without adverse effects or CVD events may resist discontinuing it. After making sure that we are appropriately treating all of their risk factors (e.g., high blood pressure, high cholesterol, diabetes, tobacco use), I have taken a shared decision-making approach to these deprescribing discussions, emphasizing the small additional benefit of aspirin compared to the increased risk of serious bleeding events.

**

This post first appeared on the AFP Community Blog.

Monday, June 10, 2019

The problems with using population-level data to estimate prostate screening benefits

Almost any debate about the effectiveness (or lack thereof) of PSA-based screening for prostate cancer in the U.S. will usually involve whether the results of the two largest randomized screening trials or national mortality statistics more accurately represent the effects of intensive screening from the early 1990s to the late 2000s. Putting aside the conflicting results of the U.S. Prostate, Lung, Colorectal, and Ovarian Cancer Screening (PLCO) trial and the European Randomized Study of Screening for Prostate Cancer (ERSPC) - for which there are many plausible explanations - even the most optimistic statistical interpretation of these trials suggests that PSA screening reduces prostate cancer mortality by 25-30% at best, which does not fully account for the observed 40% decline in prostate cancer mortality from 1991 to 2008. Since the effectiveness of standard prostate cancer therapy did not change significantly during this time frame, PSA screening advocates have suggested that the discrepancy is probably due to flaws in the trials, rather than issues with "real-world evidence" derived from population-level mortality data.
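The arithmetic behind this discrepancy can be sketched in a few lines. The 25-30% relative reduction comes from the trials discussed above; the screening uptake fraction below is an illustrative assumption, not a measured figure:

```python
# Back-of-the-envelope sketch: how much of a population-level mortality
# decline could screening plausibly explain? The 30% relative risk
# reduction is the most optimistic trial-based estimate; the uptake
# fraction is an illustrative assumption.

def max_population_decline(relative_risk_reduction, screening_uptake):
    """Upper bound on the population mortality decline attributable to
    screening, assuming the relative risk reduction applies only to the
    screened fraction of the population."""
    return relative_risk_reduction * screening_uptake

# Even with (unrealistic) 100% uptake, the attributable decline
# cannot exceed the trial-based relative reduction itself.
print(max_population_decline(0.30, 1.0))  # 0.3

# With a more plausible partial uptake (assumed here to be 60%),
# the ceiling drops further, well short of the observed 40% decline.
print(round(max_population_decline(0.30, 0.60), 2))  # 0.18
```

Under any reasonable assumptions, the attributable ceiling falls short of the observed 40% decline, which is why other explanations (treatment changes, cause-of-death attribution, population composition) must be doing much of the work.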

However, in a thoughtful commentary recently published in Mayo Clinic Proceedings, Drs. Joaquin Chapa, Alyson Haslam, and Vinay Prasad provide lots of good reasons to question the validity of prostate cancer mortality trends. First, as any clinician who has filled out a death certificate knows, determining the underlying cause of death can be difficult in a patient with several serious health conditions. Patients with metastatic prostate cancer may die with incurable cancer, but not of it. Then, the algorithm used by the Mortality Medical Data System may introduce error, noise, and bias because prostate cancer is accepted as an underlying cause of death for many conditions (e.g., cirrhosis, bacterial endocarditis) that could be related to the cancer but could also simply co-exist.

In addition, studies show that patterns in attribution of causes of death often change over time due to factors other than actual changes in underlying causes. Changes in population composition (e.g., increases in the Hispanic and Asian proportion of the population relative to whites and African Americans) can also result in different overall prostate cancer mortality rates by increasing the percentages of populations who have lower cancer mortality.

In contrast, the methods used to determine causes of death in PLCO and ERSPC were much more rigorous; the cause listed on the death certificate was double-checked by 1 to 3 independent, blinded reviewers. These processes demonstrated that assigning a cause of death is potentially fraught with error and subject to human bias. As Chapa and colleagues observe:

Even with more rigorous processes for determining COD in the PLCO and ERSPC trials, COD determination remains difficult and is subject to uncertainty. Of all deaths in the PLCO study, 28% required additional human review because of discordance between the death certificate and the initial human reviewer. Of reviewed cases, 3% required a conference call to resolve discordance among 3 reviewers.

I've always thought that crediting PSA screening for the historical decline in U.S. prostate cancer mortality made little sense; for one thing, one wouldn't expect a mortality difference to be visible for at least 7-8 years after screening became common in clinical practice, the earliest point in the ERSPC trial when the survival curves separate. That would have been 1997 or 1998 at the earliest, not 1991. Other studies have observed that prostate cancer mortality also began falling in the U.K. in the 1990s, even though PSA screening was uncommon. This new analysis provides even more reason to doubt that there is a straightforward cause-and-effect relationship - if, indeed, there is any relationship at all.

By the way, I'd like to give a shout-out to the terrific medical podcast Plenary Session, hosted by Dr. Prasad. An interview with Dr. Chapa in a recent episode was the reason I knew about his paper in the first place. Plenary Session is too new to have made my most recent list of favorite podcasts, but you can bet that it will be on the next one.

Wednesday, May 29, 2019

Reforming medical school rankings and admissions criteria to meet urgent national needs

If you read my curriculum vitae, you might assume that I must have a high opinion of the U.S. News & World Report higher education rankings. I earned my bachelor's degree from Harvard University, #2 behind Princeton in the "Best National Universities" category. My Master of Public Health degree is from Johns Hopkins, the #1 public health school. And my medical degree is from NYU, tied with Cornell and the Mayo Clinic as the 9th ranked research medical school, and likely to move up due to its decision to go tuition-free last fall (though whether NYU will improve its middling primary care ranking is uncertain at best). To top it all off, I even wrote a blog for U.S. News for a year called "Healthcare Headaches."

I admit that when I applied to college and medical school, I placed a great deal of stock - far too much - in these rankings. (I ended up at Johns Hopkins for public health because it was local, offered a part-time/online option, and I already had connections there.) But as my formal education recedes into the rearview mirror of my career, I find, instead, that I agree with Northwestern University professor William C. McGaghie's renewed critique of the U.S. News rankings published recently in Academic Medicine.

Dr. McGaghie observed that "the methods used by U.S. News & World Report to rank medical schools are based on factors that can be measured easily but do not reflect the quality of a medical school from either a student or patient perspective." For example, 20% of the research and primary care ranking reflects "student selectivity," a combination of incoming students' mean undergraduate grade point averages (GPAs), Medical College Admission Test (MCAT) scores, and acceptance rates. These criteria may modestly predict academic performance in preclinical courses, but have virtually no impact on the quality of doctors schools produce. They also have real downsides. As Dr. Arthur Kellermann, dean of the Herbert School of Medicine at Uniformed Services University, wrote in explaining his school's 2016 decision to stop participating in the U.S. News medical school rankings:

Schools have a perverse incentive to boost their rank at the expense of applicants and the public. Based on the methodology used by U.S. News, a medical school that wants to boost its rank should heavily favor applicants with super-high MCAT scores and grade point averages and ignore important attributes such as character, grit, and life experiences that predict that a student will become a wonderful doctor. A school might also encourage applications from large numbers of people with little or no chance of acceptance simply to boost its “selectivity” score.

This isn't to say that prospective medical students can't have stellar test scores and GPAs and great character and life experiences - I interview several every year. But I wonder how many outstanding future physicians we also prematurely weed out by our slavish devotion to the former metrics. I write from personal experience: my overall undergraduate GPA was 3.4, and my GPA in science prerequisite courses was closer to 3.2, which caused my applications to be automatically rejected at several medical schools I applied to - including the one where I'm now a full Professor.

From the perspective of a patient or a community, the outcome that matters most for a medical school is how well it fulfills its social mission: to produce physicians who improve the health of the communities it serves, including an optimal mix of generalists and subspecialists; urban, suburban, and rural physicians; practicing physicians, teachers, and researchers. In 2010, Dr. Fitzhugh Mullan and colleagues published the first ranking of medical schools based on social mission, which eventually evolved into the Robert Wood Johnson Foundation-supported Social Mission Metrics Initiative, a national survey that enables dental, medical, and nursing school deans to receive confidential feedback on their performance in 18 social mission areas.

As Dr. Eric Topol wrote in Deep Medicine, the forthcoming integration of artificial intelligence (AI) into medical care over the next few decades is another good reason to change the way we evaluate medical school applicants:

Are we selecting future doctors on a basis that can be simulated or exceeded by an AI bot? ... Knowledge, about medicine and individual patients, can and will be outsourced to machine algorithms. What will define and differentiate doctors from their machine apprentices is being human, developing the relationship, witnessing and alleviating suffering. Yes, there will be a need for oversight of the algorithmic output, and that will require science and math reasoning skills. But emotional intelligence needs to take precedence in the selection of future doctors over qualities that are going to be of progressively diminished utility.

In another Academic Medicine commentary, Dr. Melanie Raffoul (a former Georgetown Health Policy Fellow) and colleagues offered a starting point for medical and other health professions schools to "meet the needs of tomorrow's health care system." Among other things, they proposed 1) incorporating emotional intelligence testing into admissions criteria; 2) specifically recruiting from rural and underserved settings; 3) "consciously reaching out to disadvantaged and underrepresented students at the primary and secondary education levels"; 4) establishing community partnerships to develop pools of eligible trainees; 5) bridging gaps between health care and public health; and 6) supporting health professions education research. Ironically, the most effective way to motivate schools to make these wide-ranging changes might be for U.S. News to weigh these factors heavily in next year's rankings. If that happened, my current dim view of the rankings would change dramatically.

Saturday, May 25, 2019

Reducing medication cost burden in primary care: challenges and opportunities

Earlier this month, the Centers for Medicare & Medicaid Services (CMS) finalized a new rule requiring that pharmaceutical companies disclose drug list prices in direct-to-consumer television advertisements for drugs that cost more than $35 for a month's supply or usual course. A fact sheet further explaining the rule noted that "the 10 most commonly advertised drugs have list prices ranging from $488 to $16,938 per month or usual course of therapy." Although pricing transparency could push patients to select more affordable or non-pharmacologic alternatives, and help clinicians improve high-value prescribing, it unfortunately does not make these drugs any less expensive.

In an editorial in the April 1 issue of American Family Physician, Dr. Randi Sokol discussed four strategies for helping patients with type 2 diabetes mellitus afford insulin while providing evidence-based care: 1) relax A1c goals to 8% or less; 2) switch to human insulins instead of insulin analogues; 3) use Health Resources and Services Administration-certified 340B pharmacies and patient assistance programs; and 4) join advocacy efforts to reduce the high cost of insulin and other drugs, such as the Lown Institute's Right Care Alliance and the American Medical Association's Truth in Rx.

Family physicians can take a systematic approach to reducing prescription costs for all of their patients. In an article published in FPM, Dr. Kevin Fiscella and colleagues described the approach taken by 7 primary care practices in New York, Georgia, and California. Office staff screen patients for prescription cost concerns by privately asking them, "Is the cost of any of your medications a burden for you?" For patients who answer yes, clinicians briefly explore the circumstances (e.g., unmet deductible, use of brand name drugs) and employ several cost-reducing strategies, including deprescribing unnecessary medications, using extended (90-day) prescriptions, and substituting lower-cost medications or referring patients to large chain pharmacy discount programs (e.g. "$4 lists").

In a preliminary study published in a supplement to the Annals of Internal Medicine, Dr. Fiscella's team found that a single 60-minute training for clinicians and staff on cost-of-medication importance, team-based screening, and cost-saving strategies increased the frequency of cost-of-medication conversations from 17% to 32%. Other helpful articles in the same supplement supported by the Robert Wood Johnson Foundation included "The 7 Habits of Highly Effective Cost-of-Care Conversations" and "Tools to Help Overcome Barriers to Cost-of-Care Conversations." The American College of Physicians offers several additional cost-of-care conversation resources on its website.

**

This post first appeared on the AFP Community Blog.

Tuesday, May 21, 2019

Is Common Sense Family Doctor a professional liability?

In the many talks I've given about blogging and social media over the years, one question that almost always comes up is some variation of, "can being opinionated on social media hurt my career?" My usual response is no, provided that you don't do unprofessional things like post photos of identifiable patients or insult current or former supervisors. And even if some readers have been turned off by my less-is-more medical philosophy (which my friend and cardiologist John Mandrola recently termed "being a medical conservative"), for me any negative consequences of blogging are greatly outweighed by the positives. These include many speaking and writing invitations, positive recognition in the family medicine and medical conservative communities, and appointments to practice guideline and advisory panels such as the Advisory Committee on Breast Cancer in Young Women. At the Society of Teachers of Family Medicine conference in Toronto, I was humbled by how many people introduced themselves to me the way I imagine one would approach a celebrity or high-ranking dignitary, simply because they counted themselves among my ten thousand or so Twitter followers.

That was my view, anyway, until this year. Now I wonder if my nearly 10-year commitment to blogging for Common Sense Family Doctor and other outlets (e.g., Medscape) is more of a professional liability than I believed.

Before getting into that, I want to make clear that I recognize how fortunate I've been in my career path to this point. I divide my time between teaching, editing, writing, and patient care so that these activities often complement each other, and I greatly appreciate the flexibility and support that the family medicine department at Georgetown/Medstar has provided for the past several years since I returned to academic practice. Being deputy editor of the most-read medical journal in primary care is a great privilege. I have a terrific relationship with the editor-in-chief, who goes out of her way to acknowledge the value of my contributions and has been extremely understanding when I have pursued other opportunities that could significantly reduce the time I have to devote to American Family Physician.

That said, last month I experienced a crushing professional disappointment. I had the opportunity to interview for a senior science position at an organization I greatly respect, and for which I have volunteered hundreds of hours of time over the past 5 years. The position would have involved moving my family across the country, and frankly, I could not imagine a more qualified candidate being willing to do so. In short, I thought that I had the inside track on the job. So I was shocked to receive a form e-mail from their Human Resources department just two days after my on-site interview, informing me that they had decided to move forward with another candidate. A more personal follow-up e-mail the next day explained that they wanted to fill the position more quickly than I was willing to leave my current institution and clinical practice.

Fair enough. Except this: the position is now being widely advertised again on social media and multiple listservs to which I subscribe. It clearly has not been filled by a competing candidate, and if negotiations with that candidate unexpectedly fell through, they haven't reached out to me to extend a backup offer. Did I really bomb the interview that badly? Were my shirt buttons misaligned, or was there something hanging out of my nose? It got me thinking about something that vaguely bothered me about the interview and the telephone interview that preceded it: their repeatedly asking me if I would be willing to publicly support organizational positions that I personally disagreed with. I repeatedly answered yes, explaining that I understood the nature of the position required it, and as long as I had input in coming up with any scientific stance (as they assured me I would), that would be fine by me. Maybe they didn't buy my assurances. Maybe they didn't believe that an opinionated social media star could suppress his ego in order to toe the party line. (They would have been wrong. Had I accepted this position, I fully intended to stop writing this blog and substantially tone down my Twitter feed.)

So is Common Sense Family Doctor sometimes a professional liability? It probably is; I can't say for certain either way. But as a dear friend consoled me after learning that I was not offered this position that I coveted, "they are truly missing out and we get to retain a fabulous family doc and educator at Georgetown." I hope she's right. I am grateful to my colleagues, students, and patients for making this latest disappointment sting a little bit less than it could have.

Monday, May 6, 2019

Making the case for primary care-led, federally funded clinical practice guidelines

Talk about throwing down the gauntlet. In a provocative editorial published last year in Circulation: Cardiovascular Quality and Outcomes, Dr. John Ioannidis, who in 2005 shocked the scientific research community with his article "Why Most Published Research Findings Are False," took aim at medical professional societies authoring clinical practice guidelines and disease definition statements. He observed that despite notable progress in improving the trustworthiness of guidelines since the 2011 Institute of Medicine report Clinical Practice Guidelines We Can Trust, guideline panels continue to be plagued by financial conflicts of interest, lack of methodologist involvement, and domination by specialists "who have overt preferences (even without overt conflicts)."

Recent studies support Dr. Ioannidis's points. One study found that more than half of authors of gastroenterology guidelines received industry payments between 2014 and 2016. Another study of the top 10 highest-revenue medications of 2016 determined that more than half of authors of related guidelines had financial conflicts of interest, many of which were not disclosed in the journal publications. Finally, a study evaluating levels of evidence supporting U.S. and European cardiology guidelines from 2008-2018 found that only 8 to 14% of recommendations were supported by evidence from multiple randomized, controlled trials (RCTs) or a single, large RCT, while 42% and 55% of U.S. and European recommendations, respectively, were based on expert opinion only. In sum, even when guideline authors weren't on the take, eminence-based medicine trumped evidence-based medicine.

Poorly conducted professional society guidelines don't benefit front-line clinicians, but Ioannidis noted that they do have other benefits:

Guidelines writing activities are particularly helpful in promoting the careers of specialists, in building recognizable and sustainable hierarchies of clan power, in boosting the impact factors of specialty journals and in elevating the visibility of the sponsoring organizations and their conferences that massively promote society products to attendees. However, do they improve medicine or do they homogenize biased, collective, and organized ignorance?

A way to move beyond the production of clinical practice guidelines that are essentially "industry-friendly opinion pieces" is to centralize development efforts within government health agencies, or publicly supported independent panels such as the U.S. Preventive Services Task Force (USPSTF). A recent review of 421 clinical practice guidelines for noncommunicable diseases in primary care concluded that guidelines developed or financed by governments were substantially more likely to be rated high-quality according to the AGREE-II tool than those developed by others. Dr. Michael LeFevre, a family physician colleague and former USPSTF chairman, suggested in a 2017 editorial that public investment is "essential" to producing trustworthy guidelines:

A substantial and consistent funding stream should be available for the development of clinical practice guidelines and should be awarded competitively through a process similar to research grant funding. The logical place for this funding to occur is through the Agency for Healthcare Research and Quality (AHRQ). ... The topic, guideline development panel, and methodology would be part of a competitive grant proposal. ... Proposals receiving funding would be assigned an evidence-based practice center (EPC) to work with the guideline development panel to provide an independent systematic review of the literature. The [EPC] program would need additional funding, but the focus of the efforts would shift to be channeled to producing reviews that would be assured of being used in the development of a clinical practice guideline we can trust.

Unfortunately, funding for AHRQ has always been politically precarious, and the closure of the National Guideline Clearinghouse last year does not bode well for starting a major new program to support guideline development and assessment, even as AHRQ-supported researchers continue to break new ground with the National Guideline Clearinghouse Extent of Adherence to Trustworthy Standards (NEATS) instrument.

And what about the problem of intellectual bias - being unable to see beyond the scope of one's own limited clinical experience to evaluate evidence impartially? Dr. Ioannidis first proposed having methodologists and patients write guidelines, with content experts serving as non-voting reviewers. Alternatively,

another possibility is to recruit also to the writing team medical specialists who are unrelated to the subject matter. Involvement of such outsiders (eg, family physicians involved in cardiology guidelines) could be refreshing. These people may still have strong clinical expertise, but no reason to be biased in favor of the specialized practices under discussion. They may scrutinize comparatively what is proposed, with what supporting evidence, and at what cost. Devoid of personal stake, they can compare notes to determine if this makes sense versus what are typical trade-offs for evidence and decisions in their own, remote specialty.

As a family physician who has served on guideline panels for cardiology (Pharmacologic management of newly detected atrial fibrillation) and otolaryngology (Cerumen impaction) topics, I find a great deal of merit in the latter approach, and in a similar effort led by Dr. Ray Moynihan and primary care colleagues to reform disease definitions so that potential harms of expanding diagnostic criteria are considered along with the benefits for chronic conditions such as hypertension. It's no accident that the USPSTF has long been considered an exemplar of guideline development: the panel's members are all primary care clinicians or methodologists, and it maintains one of the strictest conflict-of-interest policies in the field. Their recommendations don't make everyone happy or anyone wealthy, and that's most likely a good thing for patients.

**

This post first appeared on The Daily Physician.