Monday, October 15, 2012

Why don't comparative effectiveness studies change practice?

The October 1st issue of American Family Physician features the third article in the "Implementing Effective Health Care Reviews" series, a summary of the Agency for Healthcare Research and Quality's comparative effectiveness report on treatments for gastroesophageal reflux disease. Notably, the report found no differences in efficacy among proton pump inhibitors; better symptom relief from continuous daily dosing compared with on-demand dosing; and limited data on endoscopic treatments. What are the chances that results from this and other high-quality comparative effectiveness studies will quickly change clinical practice? Not very good, unfortunately. As I wrote in an editorial that introduced the series:

To date, the track record of translating comparative effectiveness research findings into clinical practice has been mixed, at best. For example, several years after a landmark randomized controlled trial demonstrated the superiority of thiazide diuretics compared with other first-line medications for hypertension, prescribing of thiazide diuretics had increased only modestly. An evaluation of diabetes practice guidelines produced after the publication of an Effective Health Care review of oral treatments found numerous inconsistencies between guideline recommendations and evidence-based conclusions. Despite extensive evidence that initial coronary stenting provides no advantages over optimal medical therapy for stable coronary artery disease, more than one-half of patients who undergo stenting in the United States have not had a prior trial of medical therapy.

In the current issue of Health Affairs, Justin Timbie and colleagues propose five reasons that scientific evidence is slow to change how physicians practice:

1) Misalignment of financial incentives - e.g., fee-for-service payment systems tend to reward invasive therapies, such as surgery for back pain, that may be no better than conservative management.

2) Ambiguity of results - "Without consensus on evidentiary standards prior to the release of comparative effectiveness results, ambiguous results become fuel for competing interpretations, making it difficult for providers, insurers, and policy makers to act on the evidence."

3) Cognitive biases in interpreting new information - e.g., a tendency to reject evidence that contradicts strongly held prior beliefs, such as the superiority of atypical over conventional antipsychotics.

4) Failure to address the needs of end users - e.g., designing a study to compare the benefits of two therapeutic strategies, but not the harms.

5) Limited use of decision support - e.g., poorly designed electronic or paper patient decision aids that do not fit into the workflow of primary care practices.

Do these reasons sound about right to you? How do you think these obstacles could be overcome so that front-line family physicians and specialists can rapidly incorporate the best scientific evidence into their practices?

**

The above post was first published on the AFP Community Blog.