An insightful commentary published in JAMA last month took this point one step further by asserting that narratives deployed to support evidence-based guidelines should include not only patients' stories, but also the story of the guideline developers themselves:
Typically, experts present a “clean” version of their findings without any narrative about how they made sense of the data. This fulfills the scientific virtues of objectivity, coherence, and synthesis. When the USPSTF released its report on screening mammography to much controversy, it included no narrative about the process. Only later was the story of the task force deliberations revealed. This narrative, with multiple characters operating within the context of historical precedents, timing mandates, and a messy political milieu, created a substantially more compelling perspective. But the account came too late to engage a confused and angry public with the task force's conclusions.
Guideline developers could include as part of their reports the narrative of their internal workings: We started with what we knew, we looked at the evidence, we revisited our hypotheses, we argued about the findings, and ultimately we acted here and now because it was prudent, but there are more data to come, and here is what we plan to do as we learn more. Such stories could increase trust and therefore improve the translation of evidence for individual use and public policies.
I attended both of the Task Force's 2008 meetings at which screening mammography was debated, and the difference between them spoke volumes. At the first meeting, the panel deadlocked multiple times over whether to recommend for ("B") or against ("C") routine mammograms for women in their 40s. Both sides made impassioned arguments, and after running hours beyond the time allotted for discussion, they finally admitted that they were unable to reach a consensus. In contrast, at the second meeting, when the results of a new decision analysis were presented, there was, to everyone's great relief, near-unanimity that the benefits and harms of screening were closely balanced in this age group. (Incidentally, the Canadian Task Force on Preventive Health Care recently concurred with the USPSTF's 2009 recommendations.)
Given the potential for narratives to humanize guidelines for the public, it was disappointing that the USPSTF's first Report to Congress offered a thoroughly sanitized description of the lengthy and challenging process by which it identified and prioritized research gaps in clinical preventive services. This process, in which I participated as a medical officer, consisted of a series of spirited debates over more than two years about thorny questions such as: 1) Is there an objective, defensible way to prioritize some preventive services over others? 2) Is it more important to support research on services with insufficient evidence that are already in widespread practice (e.g., PSA tests), or on less commonly provided services with potentially large benefits (e.g., CT scans for lung cancer)? Unfortunately, the Report doesn't even begin to hint at how we grappled with these and other contentious issues, much less the multiple impasses that were reached and eventually overcome.
Consequently, I couldn't agree more with the elegantly stated conclusion of JAMA commentators Drs. Zachary Meisel and Jason Karlawish:
Stories help the public make sense of population-based evidence. Guideline developers and regulatory scientists must recognize, adapt, and deploy narrative to explain the science of guidelines to patients and families, health care professionals, and policy makers to promote their optimal understanding, uptake, and use.