Monday, February 25, 2013

The role of whistleblowing in health care

The first instinct of a bureaucracy is self-preservation, and health care bureaucracies are no exception. This rule applies not only to government agencies, but to academic and industry settings as well. This was the conclusion I came to after listening to a panel of scientist and physician "whistleblowers" at last week's Selling Sickness 2013 conference in Washington, DC. One by one, they described their painful discoveries that widely prescribed medications such as the diabetes drug Avandia and the antibiotic Ketek were causing serious harm, and sometimes death, in thousands of patients. They talked about passing this knowledge on to supervisors and being ignored, rebuked, or ostracized. Protecting patients took a back seat to avoiding the bureaucratic sin of rocking the boat.

For example, one academic researcher was fired by her institution, which seemed more interested in protecting the funding it received from a drug's manufacturer than in evaluating evidence from her clinical trial that the drug was jeopardizing children's lives. A former Food and Drug Administration safety reviewer who determined that Avandia caused heart failure and heart attacks was verbally reprimanded and told to keep that conclusion to herself. (The FDA finally issued a safety alert more than a year later, after a study appeared in the New England Journal of Medicine supporting the reviewer's determination, but the agency did not restrict access to the drug until four years after her initial finding.)

I suppose that I consider myself a whistleblower, too, though what I did doesn't begin to compare with the courageous stories I heard last week. As readers of this blog know, I resigned from the Agency for Healthcare Research and Quality in November 2010 after senior health officials in the Obama Administration forced the cancellation of a U.S. Preventive Services Task Force meeting that was set to recommend against the use of prostate-specific antigen screening for prostate cancer because it led to more harm than good. The key vote was scheduled for the day of the midterm elections that the President's party would lose in a landslide, despite its desperate attempt to keep accusations of health care "rationing" off voters' minds. The cover-up included persuading Dr. Ned Calonge, the then-chairman of the USPSTF, to take personal responsibility for calling off the meeting, and circulating internal talking points that attributed the cancellation to logistical issues rather than Democratic Party politics. Nothing about this episode was illegal, but lying to the Task Force, its clinical partners, and the press was definitely unethical. And from my perspective as a physician, withholding critical information from patients was immoral. Thousands more uninformed men received the PSA test before the USPSTF finally released its recommendation statement a year later. Many of them have already experienced anxiety, pain, and more permanent adverse effects of interventions such as prostate surgery that may not extend their lives by a single day.

The prolonged debate over who should pay for health care in the U.S. has obscured the more important question of why the costs are much too high in the first place. They are so high partly because doctors like me order far too many tests and interventions that the scientific evidence shows are useless or potentially harmful, and partly because health care institutions can get away with charging uninsured patients 20 or more times what these services actually cost. Time Magazine's Steven Brill recently did some whistleblowing of his own by exposing routine travesties such as a $21,000 emergency room heartburn bill and an $87,000 bill for outpatient surgery, inflated by charges such as $108 for a tube of Bacitracin or $1.50 for a single tablet of generic Tylenol. (Yes, a hot dog can cost $5 or more at ballgames and amusement parks, but you choose to go there and pay those prices. In health care, you may be too sick to shop around for the best deal on an appendectomy, and even if you know you will experience a predictable health event such as having a baby, no one can tell you in advance how much it will cost.) People may rail against the multi-million dollar compensation packages of investment banking CEOs, but executives at supposedly nonprofit hospitals are often paid on the same salary scale as professional sports stars.

To improve the health of all Americans, we need to stop the political posturing over the pros and cons of "Obamacare" and instead empower more whistleblowers to pull back the curtains on the unscrupulous or corrupt practices of drug companies, government regulators, and venerable academic and health care institutions, which collectively constitute a "medical-industrial complex" that dwarfs anything else in our economy. To quote my friend and family medicine colleague Pat Jonas (the blogosphere's Dr Synonymous), we need to keep pushing the Beast back. Common Sense Family Doctor is proud to be part of that effort.

Thursday, February 21, 2013

Should screening mammography always be a shared decision?

In the February 15th issue of American Family Physician, Dr. Maria Tirona reviews areas of agreement and disagreement in major organizational guidelines on screening for breast cancer. There is widespread consensus that annual or biennial mammography should be offered to women 50 to 74 years of age, and that teaching breast self-examination does not improve health outcomes. For women 40 to 49 years of age, in whom the risks and benefits of mammography are closely balanced at the population level, the U.S. Preventive Services Task Force and the American Academy of Family Physicians recommend shared decision making, taking into account each patient's individual risk and her values regarding the benefits and harms of screening.

In an accompanying editorial, however, Drs. Russell Harris and Linda Kinsinger argue that shared decision making regarding breast cancer screening need not be limited to younger women:

More and more, the goal for breast cancer screening is not to maximize the number of women who have mammography, but to help women make informed decisions about screening, even if that means that some women decide not to be screened. ... The goal of improving patient decision making should be expanded to all women eligible for breast cancer screening (i.e., those 40 to 75 years of age who are in reasonable health), because the benefits and harms of screening are not very different among these age groups.

The primary benefit of screening mammography is an estimated 15 percent relative reduction in deaths from breast cancer; harms of mammography include false positive results, overdiagnosis, and overtreatment. A recent study published in BMJ explored the impact of overdiagnosis on attitudes toward mammography in several focus groups of Australian women 40 to 79 years of age. Few women had ever been informed about overdiagnosis as a potential harm of screening. Most women continued to feel that mammography was worthwhile if overdiagnosis was relatively uncommon (30 percent or less of all breast cancers detected). However, a higher estimate of overdiagnosis (50 percent) "made some women perceive a need for more careful personal decision making about screening."

Notably, a 2011 Cochrane Review estimated that 30 percent of breast cancers detected through screening are overdiagnosed:

This means that for every 2,000 women invited for screening over 10 years, one will have her life prolonged, and 10 healthy women who would not have been diagnosed if there had not been screening will be treated unnecessarily. Furthermore, more than 200 women will experience important psychological distress for many months because of false-positive findings.

Given this information, what approach should doctors take to screening mammography? Do you believe that this test should be routinely provided to women of eligible ages, a shared decision for some, or (as Drs. Harris and Kinsinger advocate) a shared decision for women of any age? Why is it often difficult to promote such shared decision making in clinical practice?

**

The above post was first published on the AFP Community Blog.

Tuesday, February 19, 2013

The Massachusetts Avenue of health reform

In contrast to the personality-driven path that Lyndon Johnson took to navigate legislative obstacles to Medicare and Medicaid, former management consultant Mitt Romney charted a decidedly different course toward expanding health insurance when he became governor of Massachusetts in 2003. This month's Georgetown University Health Policy Seminar explored the politics of "Romneycare," the state-level health reform that in many ways made the future Affordable Care Act possible. Both readings for this session, a New Yorker article and a Health Affairs paper, portrayed Romney as a "data wonk" who viewed the issue of the uninsured as a problem-solving challenge rather than a grand moral imperative. Yet much like LBJ's inspirational leadership, Romney's data-crunching approach produced tangible results.

In an Op-Ed about his nascent reform plan that appeared in the Boston Globe in November 2004, Governor Romney proposed applying "carrots and sticks" to persons who could afford private health insurance but had chosen not to purchase it. At that point, he had not yet committed to the individual health insurance mandate that made his reforms possible but later became a political liability during his Presidential campaigns. According to New Yorker columnist Ryan Lizza:

Romney and his aides had a lengthy debate about the merits of the mandate, which evolved into a broader philosophical discussion. Personal responsibility was important, some aides argued, but what about the libertarian view that the government had no business requiring people to buy something? It was one thing to ask drivers to buy car insurance. Owning a car is a choice. But the health-insurance mandate demanded the purchase of a product just for being alive.

Once he made the decision to incorporate the individual mandate into his reform plan, Governor Romney found an unlikely ally in Senator Ted Kennedy, whom he had tried unsuccessfully to unseat in 1994. Together, Romney and Kennedy approached the George W. Bush Administration and reached an agreement to redirect a multimillion-dollar fund for Massachusetts hospitals toward subsidized health insurance for lower-income workers. Romney also alternately courted and cajoled the Democratic leaders of the Massachusetts legislature, whose support was essential to passing his plan.

Health reform in Massachusetts has been judged a mixed success. On the one hand, the percentage of state residents who were uninsured fell from 6.4% in 2006 to 1.9% in 2010, even as the national average rose from 15.2% to 16.3%. On the other hand, Romney's hope that insurance expansion would help control costs has not been fulfilled: the share of the state budget spent on health services has risen from 29 to 43 percent.

Seminar participants identified some key advantages that Romney enjoyed compared with the policy environment that later confronted President Barack Obama in passing the Affordable Care Act: Massachusetts's already low uninsurance rate provided a "fertile environment" for reform, and he could focus his attention on health care without having to simultaneously manage financial crises and war. Although Romney "had little choice as governor about grappling with health care," wrote Martha Bebinger in Health Affairs, "for the most part he embraced the issue. Aides say Romney was enticed by the challenge of solving a complex problem, one that had eluded politicians for decades." How critical do you think Romney's public embrace of health reform was to the law's eventual passage? Was it as important, for example, as LBJ's advocacy for Medicare?

**

The above post first appeared on The Health Policy Exchange.

Wednesday, February 13, 2013

Lessons from the passage of Medicare

"Don't let dead cats stand on your porch." This famous quotation, attributed to President Lyndon Johnson during his strenuous and ultimately successful efforts to pass the 1965 bills that established the Medicare and Medicaid programs, embodied his approach to arguably the most important U.S. health care legislation until the 2010 Affordable Care Act. Translated, it meant that the best strategy for passing health care (and other potentially controversial) legislation was to act quickly and move bills along in the Congressional process before political opponents or outside advocacy groups had time to organize themselves.

The legislative passage of Medicare was the subject of the first in a series of monthly one-hour health policy seminars for Family Medicine fellows and residents at Georgetown University School of Medicine. The goal of the series is for participants to gain a better understanding of the policy process at the federal, state, and local levels by reading and discussing real-life examples in a small group. The seminars will be led by me and the current Robert L. Phillips, Jr. Health Policy Fellow, as well as selected guest faculty. Participants complete one or two short readings prior to each seminar (this inaugural session's assignment was "The Secret History of Medicare" from David Blumenthal and James Morone's The Heart of Power).

Remarkably, Medicare was fully implemented only 11 months after the bill's signing, overcoming obstacles such as hospital segregation in the South, resistance from physician organizations such as the American Medical Association, and the logistics of issuing insurance cards to 18 million eligible seniors. As Medicare approaches its 50th anniversary, it faces huge budgetary challenges driven by the rising costs of health care and the demographics of the enormous "Baby Boom" generation, whose first members became eligible for Medicare benefits in 2011. A short video produced by the Kaiser Family Foundation summarizes the changes that have occurred in the program in the intervening years.

Liberal legislators saw Medicare as the first step toward enacting federally administered universal health insurance for all Americans. Others saw it as a program whose benefits, like those of the health programs for active-duty military, veterans, and Native Americans, were appropriately limited to specific groups and therefore had to be defended against encroachment by future wide-ranging health reforms. Princeton professor Paul Starr has called this resistance to change by protected groups the "policy trap" that contributed to the defeat of the Clinton health reform proposal in 1994 and the near-defeat of the Affordable Care Act 16 years later.

Other points raised during the seminar included the book's observation that "an honest economic forecast would have very likely sunk Medicare." Like every federally financed health insurance initiative to come, Medicare ended up costing substantially more than initially projected. (In fact, most provisions of the ACA, passed in 2010, were delayed until 2014 so that the Congressional Budget Office - which didn't exist in 1965 - could artificially score the law as deficit-reducing over a 10-year period.) Ethical or not, Lyndon Johnson's decision to "lowball" the estimated costs of Medicare was essential to getting it through Congress.

Was President Johnson - the last President to have previously served as Senate Majority Leader - a political anomaly, or can lessons from his deft management of the Congressional process be applied to national health care policy today? What do you think about Blumenthal and Morone's lessons for future Presidents, listed below?

1. Presidents must be deeply committed to health reforms.
2. Speed is essential. Waiting makes reforms a lot harder to win.
3. Presidents should concentrate on creating political momentum.
4. Presidents must actively manage the Congressional process.
5. Know when to compromise and know when to push.
6. Pass the credit.
7. Muzzle your economists. First expansion, then cost control.

**

The above post was first published on The Health Policy Exchange.

Friday, February 8, 2013

Concerns about calcium supplements

Until recently, the idea that calcium-containing supplements, which more than half of older adults in the U.S. consume regularly, could be harmful would have seemed absurd. Primary care clinicians have long recommended calcium supplements to reduce the risk of osteoporotic fractures in adults who are unable to meet the Institute of Medicine's Dietary Reference Intakes through diet alone. However, a large prospective study published this week in JAMA Internal Medicine demonstrated a statistically significant association between supplemental calcium (as opposed to dietary calcium) intake and a 20 percent higher relative risk of death from cardiovascular disease in men.

This troubling finding adds to the evidence base that suggests harmful cardiovascular effects of calcium-containing supplements. A timely pair of editorials in the February 1st issue of American Family Physician debates the population-level risk of widespread calcium supplementation. Arguing that this potential risk should be a serious concern, Drs. Ian Reid and Mark Bolland review the results of their previous randomized trial and meta-analysis that found 20 to 30 percent increases in the incidence of acute myocardial infarction in adults taking calcium supplements. In their view, these adverse effects are not worth the potential benefits to bone health:

In both of our meta-analyses, calcium supplementation was more likely to cause vascular events than to prevent fractures. Therefore, the bolus administration of this micronutrient should be abandoned in most circumstances, and patients should be encouraged to obtain their calcium intake from an appropriately balanced diet. For those at high risk of fracture, effective interventions with a fully documented safety profile superior to that of calcium are available. We should return to seeing calcium as an important component of a balanced diet and not as a low-cost panacea to postmenopausal bone loss.

In the second editorial, Dr. Rajib Bhattacharya points out that the Women's Health Initiative and other randomized trials did not find that calcium supplements increased cardiovascular risk. He argues that secondary analyses of trials designed with other primary outcomes in mind may be prone to unforeseen bias, and that there is "no compelling evidence" that calcium supplements at usual doses pose dangers to heart health.

Notably, a draft recommendation statement released by the U.S. Preventive Services Task Force last June stated that there was insufficient evidence that vitamin D and calcium supplementation prevent fractures or cancer in otherwise healthy older adults. Although the only adverse effects of supplements mentioned in the Task Force's evidence review were renal and urinary tract stones, none of the reviewed studies were specifically designed to assess cardiovascular harms. Is it time to abandon routine calcium supplementation in healthy adults? If not, what additional evidence do we need?

**

The above post first appeared on the AFP Community Blog.

Tuesday, February 5, 2013

Unintended consequences of "pregnancy prevention"

A provocative essay published a few days ago in the Wall Street Journal argued that America's falling fertility rate (which the author called the "baby bust") will make it difficult, if not impossible, to address the challenges of anemic economic growth, an immense federal budget deficit, and the care needs of an exploding population of retiring "baby boomers." This piece has already inspired a good deal of back-and-forth debate - one columnist, for example, labeled it "a bunch of baloney" - and I don't intend to adjudicate that debate here. However, it is ironic that, as a national conversation ensues about the pros and cons of having fewer children, the Obama Administration is struggling to placate religious and other employers that have objected to the Affordable Care Act's provision requiring them to provide and fully finance medications to prevent pregnancies.

Full disclosure: I am a practicing Catholic and father of three children. And I don't believe for a moment that our President intended to wage a secular "holy war" against institutional Catholicism, any more than I subscribe to the bogus liberal myth that faith-based groups with moral qualms about hormonal contraception are bound and determined to block non-believers from accessing it. (If that were really the case, they'd be leading boycotts of Target and Walmart, which both sell a month's supply of birth control pills for $9, according to the Reproductive Access Project.) But the overheated rhetoric about what some simply term "the HHS Mandate" has, in my mind, obscured a critical point: "pregnancy prevention" is fundamentally different from the prevention of diseases.

The Department of Health and Human Services web page summarizing preventive services covered by the ACA lists a long series of conditions that no one would ever want, or wish on their worst enemies: cancer, heart attacks, strokes, hip fractures, diabetes, depression, and a host of infectious diseases. And then there's pregnancy. "Unintended" pregnancy, to be sure, but its inclusion should be a bit jarring even to health advocates who believe that delaying or declining childbearing is associated with health benefits. But when the Institute of Medicine's Committee on Clinical Preventive Services for Women recommended that FDA-approved methods of contraception be called preventive, it effectively defined pregnancy as a disease.

Defining pregnancy as a disease to be prevented is not just a matter of semantics. I've written before about how an overly interventionist approach to pregnancy is largely responsible for the current U.S. rate of one in three babies being born by Cesarean section, and for predictions that the rate may soon approach 50 percent. In most countries, prenatal care and labor are primarily managed by midwives - pregnancy generalists, if you will. In the U.S., most pregnant women are instead attended by obstetrician-gynecologists: specialists in surgical delivery. Imagine if every person with garden-variety back pain were advised to seek care from a spine surgeon, or every person with a sinus infection first consulted an otolaryngologist. Would you be surprised if the result were many more back and sinus surgeries? A recent article in Harvard Magazine encapsulated this problem of perspective:

Risk perception and tolerance help determine professional standards of care, influence hospital protocols, mold the media’s telling of stories, and even influence laws. All these forces interact in complex ways. ... Saying that a certain percentage of C-sections are unnecessary is fairly simple. But weighing risks and knowing whether surgery is necessary in a particular case—or even whether a surgery was necessary in retrospect—is much more complex, and fraught with emotion. The obstetrician sees C-sections as generally safe, and if the outcome he or she wants to avoid is dire, even devastating—such as a baby’s becoming stuck and deprived of oxygen, which could lead to cerebral palsy—why wait to find out what will happen, however unlikely that outcome may be?

Make no mistake: a zero percent rate of C-sections is neither achievable nor desirable. A small proportion of pregnancies are complicated by health risks to the mother and baby, and interventions are necessary to prevent bad outcomes. But much lower Cesarean rates can be achieved without sacrificing safety, simply by approaching pregnancy as a normal, healthy condition rather than a disease. A recent study in the Annals of Family Medicine reported a 4 percent Cesarean rate and a 95 percent successful VBAC (vaginal birth after Cesarean) rate at a Wisconsin birth center for Amish women over a 17-year period, with no maternal deaths and a neonatal death rate similar to those of Wisconsin and the U.S. Lest this result be chalked up to a miracle of Amish genetics, an Indian Health Service hospital in New Mexico where I spent a month-long elective during my family medicine residency attributed its 7 percent Cesarean rate to a conservative approach to labor (managed exclusively by family physicians) and cultural attitudes that favored vaginal deliveries.

We can agree that in general, unmarried teenagers should not be conceiving babies, and that a few pregnancies do expose mothers and infants to serious complications. But classifying contraceptives as preventive services and treating pregnant women as if they have fatal diseases is not a rational way to go about improving women's and maternal health outcomes.