In almost every large hospital in this country, there are at least two types of patient beds: regular and intensive care. Intensive care beds are designed for the sickest of the sick - patients who require continuous monitoring, specialized respiratory or cardiovascular support, the most knowledgeable consultants, the most powerful drugs. Intensive care units (ICUs) have long been accepted as a necessary innovation in inpatient care, leading to better outcomes than these patients would otherwise have had with a hospital's "ordinary" resources.
In his oft-cited New Yorker article, "The Hot Spotters," Harvard surgeon Atul Gawande reviewed medical outreach programs to the sickest, costliest five percent of outpatients, programs that he termed "intensive outpatient care." It was the first time I had seen this term, and it got me thinking. While hospital ICUs have become the domains of subspecialist critical care physicians (often called "intensivists"), intensive outpatient care's natural leaders are primary care clinicians. So when Gawande described family physician Jeffrey Brenner's innovative program to improve care coordination and reduce hospitalizations in Camden, New Jersey, what he was describing was really intensive primary care:
If he [Dr. Brenner] could find the people whose use of medical care was highest, he figured, he could do something to help them. If he helped them, he would also be lowering their health-care costs. And, if the stats approach to crime was right, targeting those with the highest health-care costs would help lower the entire city’s health-care costs. His calculations revealed that just one per cent of the hundred thousand people who made use of Camden’s medical facilities accounted for thirty per cent of its costs. That’s only a thousand people—about half the size of a typical family physician’s panel of patients.
As Josh Freeman pointed out on his blog Medicine and Social Justice, the reason that attempts to constrain health care spending by increasing co-payments for drugs and other services (described by supporters as giving patients more "skin in the game") inevitably fail is that these interventions target the 90 percent of patients who hardly utilize the health care system at all. Meanwhile, the 5 to 10 percent whose illnesses drive health care expenditures - the sickest of the sick - cut back on essential care, their conditions spiral rapidly out of control, and hospitalizations and costs keep rising.
The programs described in Gawande's New Yorker article aren't the only models of intensive primary care out there. Some have been around for quite a few years, mostly targeting elderly patients with multiple chronic conditions and funded through Medicare. These include the national Program of All-Inclusive Care for the Elderly (PACE), covering more than 23,000 people in 29 states; Johns Hopkins University's Guided Care nurse-coordinator program; and old-fashioned house calls, which family physician Steven Landers has dubbed "The Other Medical Home" and believes are key to revitalizing the specialty of family medicine.
Intensive primary care isn't for everyone, of course. For one thing, it costs too much. And for most patients with acute or simple health conditions, the 15-minute office visit model still works just fine. Intensive primary care should be reserved for the sickest of the sick - patients who require frequent monitoring, specialized social support, the most knowledgeable consultants, the most complicated drugs. So how can we design criteria to identify patients who should be transferred from regular to intensive primary care - criteria that will improve the health of the sickest patients, be acceptable to payers, and result in lower health care costs?
**
This post originally appeared on Common Sense Family Doctor on February 24, 2011.
Sunday, January 19, 2014
Are drugs the best medicine for children with ADHD?
Data from the Centers for Disease Control and Prevention document a steady rise in diagnoses of attention deficit hyperactivity disorder (ADHD) since its first national survey in 1997. Since stimulant medications are widely considered to be first-line therapy for ADHD, it is not surprising that by 2011, more than 3.5 million U.S. children were taking these medications. Guidelines for ADHD, such as the one from the American Academy of Pediatrics, favor prescription drugs over behavioral interventions, due in part to the results of an influential 1999 study sponsored by the National Institute of Mental Health that compared these treatments and declared drugs to be superior.
However, a recent article published in the New York Times reported that some of the original study investigators are now openly questioning this conclusion. Since the primary outcomes were short-term impulsivity and inattention symptoms, rather than academic and social outcomes that may be affected more by behavioral skills training, the study's design inherently favored drug therapies. And the manufacturers of these drugs were happy to promote the results to boost sales:
Just as new products ... were entering the market, a 2001 paper by several of the study’s researchers gave pharmaceutical companies tailor-made marketing material. For the first time, the researchers released data showing just how often each approach had moderated A.D.H.D. symptoms: Combination therapy did so in 68 percent of children, followed by medication alone (56 percent) and behavioral therapy alone (34 percent). Although combination therapy won by 12 percentage points, the paper’s authors described that as “small by conventional standards” and largely driven by medication. Drug companies ever since have reprinted that scorecard and interpretation in dozens of marketing materials and PowerPoint presentations. They became the lesson in doctor-education classes worldwide.
There are, of course, practical challenges to providing behavioral therapy for ADHD, including a lack of resources in many communities and high costs that, unlike those of drug therapies, are often not covered by health insurance. One way family physicians may facilitate therapy is to integrate behavioral health specialists into their practices.
**
This post first appeared on the AFP Community Blog.
Thursday, January 9, 2014
Guest Post: Medical schools are no place to train physicians (Part 2 of 2)
This is the second of two guest posts by Dr. Josh Freeman. Part 1 is available here.
**
The fact is that most doctors who graduate from medical school will not practice in a tertiary academic health center (AHC), but rather in the community; the other fact is that a disproportionate number of them will choose specialties that are of little or no use in many of the communities that need doctors. They will, if they can (i.e., if their grades are high enough), often choose subspecialties that can be practiced only in the high-tech setting of the AHC or in the relatively small number of very large metropolitan hospitals, often those with large residency training programs. As they look around at the institution in which they are being educated, they see an enormously skewed mix of specialties. For example, 10% of the doctors may be anesthesiologists, and there may well be more cardiologists than primary care physicians. While this is not the mix in the world of practice, and still less the mix we need for an effectively functioning health system, it is the world in which they are being trained.
The extremely atypical mix of medical specialties in the AHC is not “wrong”; it reflects the atypical mix of patients who are hospitalized there. It is time for another look at the studies of the “ecology of medical care,” done first by Kerr White in 1961 and replicated by the Robert Graham Center of the American Academy of Family Physicians in 2003, and represented by the graphic reproduced here. The biggest box (1,000) is a community of adults at risk, the second biggest (800) is those who have symptoms in a given month, and the tiny one, representing less than 0.1%, is those hospitalized at an academic teaching hospital. Thus, the population that students mostly learn on is atypical, heavily skewed toward the uncommon; it is not representative even of all hospitalized people, not to mention the non-hospitalized ill (and still less the healthy but needing preventive care) in the community.
Another aspect of educating students in the AHC is that much of the medical curriculum is determined by non-physician scientists who are primarily researchers. They not only teach medical students; they (or their colleagues at other institutions) write the questions for USMLE Step 1. They are often working at the cutting edge of scientific discovery, but the knowledge that medical students need at this stage of their education is much more basic: it is much more about understanding the scientific method and what constitutes valid evidence. There is relatively little need for students to learn about the current research that these scientists are doing. Even the traditional memorization of many details of basic cell structure and function is probably unnecessary; after five years of non-use, students likely retain only 10% of what they learn, and even if they do need 10% (or more) in their future careers, there is no likelihood that it will be the same 10%.
We have to do a better job of determining what portion of the information currently taught in the “basic sciences” is crucial for all future doctors to know and memorize, and we also need to broaden the definition of “basic science” to include the key social sciences of anthropology, sociology, psychology, and communication, and even many areas of the humanities, such as ethics. This is not likely to happen in a curriculum controlled by molecular biologists.
Medical students need a clinical education in which the most common clinical conditions are the most common ones they see, the most common presentations of those conditions are the most common ones they see, and the most common treatments are the ones they see implemented. They need to work with doctors who are representative, in skills and focus, of the doctors they will be (and need to be) in practice. Clinical medical education seems to operate on the implicit belief that the ability to take care of patients in an intensive care unit necessarily means one is competent to take care of patients elsewhere in the hospital, or that the ability to care for people in the hospital means one can care for ambulatory patients, when in fact these are dramatically different skill sets.
This is not to say that we do not need hospitals and health centers that can care for people with rare, complicated, end-stage, tertiary and quaternary disease. We do, and they should have the mix of specialists appropriate to them, more or less the mix we currently have in AHCs. And it is certainly not to say that we do not need basic research that may someday yield better treatments for disease. We do, and those research centers should be generously supported. But their existence need not be tied to the teaching of medical students. The basic science, social science, and humanities that every future doctor needs to learn can be taught by a small number of faculty members focused on teaching, and do not need to be tied to a major biomedical research enterprise.
Our current system is not working; we produce too many doctors who do narrow rescue care and not enough who provide general care. We spend too much money on high-tech care and not enough on addressing the core causes of disease. If we trained doctors in the right way and in the right place, we might have a better shot at getting the health system, and even the health, that our country needs.
Wednesday, January 8, 2014
Guest Post: Medical schools are no place to train physicians (Part 1 of 2)
Dr. Josh Freeman is Professor and Chair of the Department of Family Medicine at the University of Kansas. His research interests include medical education, faculty development and curricular innovation, and health care for underserved populations. This is the first of two guest posts that were originally published on his blog, Medicine and Social Justice. Part 2 is available here.
**
Doctors have to go to medical school. That makes sense. They have to learn their craft, master skills, and gain an enormous amount of knowledge. They also, and this is at least as important, need to learn how to think and how to solve problems. And they need to learn how to be life-long learners because new knowledge is constantly being discovered, and old truths are being debunked. Therefore, they must learn to un-learn, and not to stay attached to what they once knew to be true but no longer is. They also need, in the face of drinking from this fire-hose of new information and new skills, to retain their core humanity and their caring, the reasons that (hopefully) most of them went into medicine.
Medical students struggle to acculturate to the profession, to learn the new language replete with eponyms, abbreviations, and long abstruse names for diseases (many are from Latin, and while they are impressive and complicated, they are also sometimes trite in translation, e.g., “itchy red rash”). They have to learn to speak “medical” as a way to be accepted into the guild by their seniors, but must be careful that it does not block their ability to communicate with their patients; they also need to continue to speak English (or whatever the language is that their patients speak). “Medical” may also offer a convenient way of obscuring and temporizing and avoiding difficult conversations (“the biopsy indicates a malignant neoplasm” instead of “you have cancer”). But there needs to be a place for them to learn.
So what is wrong with the places where we are teaching them now? Most often, allopathic (i.e., “MD”) medical schools are part of an “academic health center” (AHC), combined with a teaching hospital. They have large biomedical research enterprises, with many PhD faculty who, if they are good and lucky, are externally funded by the National Institutes of Health (NIH). Some or many of them spend part of their time teaching the “basic science” material (biochemistry, anatomy, physiology, microbiology, pharmacology, pathology) that medical students need to learn.
By “need to learn” we usually mean “what we have always taught them” or “what they need to pass the national examination (USMLE Step 1) that covers that material.” This history goes back 100 years, to the Flexner Report of 1910. Commissioned by the Carnegie Foundation at the request of the AMA, educator Abraham Flexner evaluated the multitude of medical schools, recommended closing the many that were little more than apprenticeship programs without a scientific basis, and recommended that medical schools be based on the model of Johns Hopkins: part of a university (in the German tradition), grounded in science, and built on a core curriculum of the sciences. This has been the model ever since.
However, 100 years later, these medical schools and the AHCs of which they are a part have grown to enormous size, concentrating huge basic research facilities (Johns Hopkins alone receives over $300 million a year in NIH grants) and tertiary and quaternary medical services – high-tech, high-complexity treatment for rare diseases or complex manifestations of more common ones. They have often lost their focus on the health of the actual community of which they are a part.
This was a reason for two rounds of creating “community-based” medical schools, which use non-university, or “community”, hospitals: the first in the 1970s and the second in the 2000s. Some of these schools have maintained a focus on community health, to a greater or lesser degree, but many have largely abandoned those missions as they have sought to replicate the Hopkins model and become major research centers. The move of many schools away from community was the impetus for the “Beyond Flexner” conference held in Tulsa in 2012 and for a number of research studies focused on the “social mission” of medical schools.