Tuesday, February 28, 2023

Individualizing drug therapy for adults with major depressive disorder

Managing patients with depressive disorders constitutes a significant portion of the typical family physician’s practice. A serial cross-sectional study in Health Affairs estimated that the percentage of U.S. adult primary care visits that addressed mental health concerns rose from 10.7% in 2006 to 15.9% by 2018. Previous AFP Community Blog posts have discussed the primary care evidence base for psychologic and pharmacologic treatments and screening and treatment guidelines from the U.S. Preventive Services Task Force and the American College of Physicians (ACP). In the February issue, Dr. Heather Kovich and colleagues provided an update on pharmacologic treatment and tapering strategies to minimize the risk of discontinuation syndrome. When initiating medication, the authors recommend using shared decision-making, considering factors such as “prior treatment and response, comorbidities, costs, and risk of adverse effects.”

An accompanying editorial by Drs. Andrew Buelt and John McQuaid compared the three major U.S. clinical guidelines for major depressive disorder from the ACP, the American Psychological Association (APA), and the U.S. Department of Veterans Affairs and Department of Defense (VA/DoD). (The ACP released an update to its 2016 guideline while the article and editorial were in press.) All of the guidelines recommend initial treatment with evidence-based psychotherapy or pharmacotherapy. Pharmacogenetic tests such as GeneSight Psychotropic “[have] not been shown to improve patient-oriented outcomes and [are] not recommended to assist in drug choice.” Most patients will not experience additional benefit from combining psychotherapy and pharmacotherapy; however, the VA/DoD suggests that this combination is appropriate for patients with severe, persistent (more than two years), or recurrent (two or more episodes) depression.

A Canadian group recently developed a visual evidence-informed decision support tool based on a literature review and the Canadian Network for Mood and Anxiety Treatments depression treatment guidelines. The tool consists of two figures that guide primary care clinicians in antidepressant selection based on specifiers (sleep disturbance, cognitive dysfunction, anxious distress, somatic symptoms), comorbid conditions, adverse effects, drug interactions, and administration. Physicians using this tool should note that the costs of antidepressants in Canada are considerably lower than those in the U.S., even for Medicare beneficiaries. Another helpful decision tool for antidepressants and other psychiatric drugs is the Waco Guide to Psychopharmacology in Primary Care, which is available as a downloadable app for Apple users.
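Conceptually, the figures boil down to filtering a list of drugs by their attributes. Here is a minimal Python sketch of that kind of rule-based shortlist; the drug-attribute pairings are hypothetical placeholders invented for illustration, not the tool's actual content, and certainly not clinical advice.

```python
# Hypothetical, illustrative rules only -- not the CANMAT tool's content or clinical advice.
# Each entry lists specifiers an agent might suit and comorbidities that would argue against it.
ANTIDEPRESSANTS = {
    "mirtazapine": {"favors": {"sleep disturbance", "somatic symptoms"}, "avoid_with": {"obesity"}},
    "bupropion": {"favors": {"cognitive dysfunction"}, "avoid_with": {"seizure disorder"}},
    "escitalopram": {"favors": {"anxious distress"}, "avoid_with": {"prolonged QT"}},
    "duloxetine": {"favors": {"somatic symptoms"}, "avoid_with": {"hepatic impairment"}},
}

def shortlist(specifiers, comorbidities):
    """Return agents matching at least one specifier and no contraindicating comorbidity."""
    return sorted(
        name for name, profile in ANTIDEPRESSANTS.items()
        if profile["favors"] & set(specifiers) and not profile["avoid_with"] & set(comorbidities)
    )

if __name__ == "__main__":
    print(shortlist({"anxious distress", "sleep disturbance"}, {"obesity"}))
    # -> ['escitalopram']  (mirtazapine is filtered out by the comorbidity rule)
```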

**

This post first appeared on the AFP Community Blog.

Tuesday, February 21, 2023

Diagnosing ovarian and other cancers with human and artificial intelligence

The Ovarian Cancer Research Alliance recently released a consensus statement that encourages women who have completed childbearing to consider having their fallopian tubes removed if they are having pelvic surgery for benign conditions. This surgical strategy for reducing ovarian cancer mortality acknowledges the negative results of the UK Collaborative Trial of Ovarian Cancer Screening, which Dr. Jennifer Middleton previously discussed. Studies have found that early ovarian cancer frequently causes constitutional, abdominal, urinary, and pelvic symptoms, which can be used in combination with a cancer antigen 125 level to estimate ovarian cancer risk. However, the low sensitivity and specificity of this “symptom index” mean that it misses early cancers and leads to unnecessary cancer evaluations for benign conditions.
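A quick calculation shows why modest test characteristics perform so poorly when a cancer is rare. The sensitivity, specificity, and prevalence below are assumptions chosen for illustration, not the published figures for any symptom index.

```python
# Illustrative only: assumed sensitivity, specificity, and prevalence, not published values.
def predictive_values(sensitivity, specificity, prevalence):
    """Compute PPV and NPV from test characteristics and disease prevalence (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    true_neg = specificity * (1 - prevalence)
    return true_pos / (true_pos + false_pos), true_neg / (true_neg + false_neg)

# Suppose a symptom index plus CA-125 had 70% sensitivity and 90% specificity in a
# population where 1 in 1,000 women has early ovarian cancer (all assumed figures).
ppv, npv = predictive_values(0.70, 0.90, 0.001)
print(f"PPV: {ppv:.1%}  NPV: {npv:.3%}")
# PPV lands well under 1%: most positives trigger unnecessary workups,
# while the 30% of cancers the test misses are false reassurance.
```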

A population-based United Kingdom (UK) cohort study assessed the underlying cancer risk of patients aged 30 years and older who presented to primary care with new-onset fatigue and co-occurring vague symptoms. After excluding persons with anemia or alarm symptoms (e.g., dysphagia), researchers followed patients for up to 9 months for a diagnosis of cancer. Cancer risk increased with age, reaching 3% or more in patients in their mid-60s. For all age groups combined, cancer risk was highest for women with fatigue and abdominal bloating and men with fatigue and weight loss, constipation, dyspnea, or abdominal pain.

A Danish research group developed an artificial intelligence (AI)-based model that used results from common blood tests (complete blood count, electrolytes, and/or liver function tests) to generate a risk score that predicted cancer within 90 days. This laboratory data is often readily available; another UK study of patients who were diagnosed with cancer found that 41% received common blood tests in primary care as part of their diagnostic process. However, relying solely on blood tests neglects the value of primary care physicians’ non-analytical “gut feelings” that the patient has a benign or serious condition.
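For readers curious what this kind of model looks like under the hood, here is a toy sketch of the general approach (fit a classifier to routine lab values so that it outputs a short-term cancer probability). It is not the Danish group's model; the synthetic data, features, and coefficients are all invented for illustration.

```python
# Toy sketch of a lab-based cancer risk score (not the Danish group's model).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
# Synthetic "lab panel": hemoglobin, platelets, sodium, ALT, albumin (invented distributions).
X = np.column_stack([
    rng.normal(13.5, 1.5, n),   # hemoglobin, g/dL
    rng.normal(260, 60, n),     # platelets, 10^9/L
    rng.normal(139, 3, n),      # sodium, mmol/L
    rng.normal(28, 12, n),      # ALT, U/L
    rng.normal(4.1, 0.4, n),    # albumin, g/dL
])
# Synthetic outcome loosely tied to low hemoglobin, high platelets, and low albumin.
logit = -4.0 - 0.4 * (X[:, 0] - 13.5) + 0.01 * (X[:, 1] - 260) - 1.0 * (X[:, 4] - 4.1)
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]      # 90-day cancer risk score per patient
print("AUROC on held-out synthetic data:", round(roc_auc_score(y_test, risk), 2))
```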

In an observational study of 155 general practitioners in Spain, a “sense of alarm,” present in 22% of consultations for new symptoms, had a sensitivity of 59% for cancer and other serious diseases and a negative predictive value of 98%. Thus, AI may also assist in cancer diagnosis by imitating the intuitive behavior of groups of family physicians. In an editorial in Annals of Internal Medicine, Dr. Gary Weissman and colleagues proposed that AI clinical decision support systems (CDSSs) utilize a “wisdom of crowds” approach that, like the best chess-playing AI systems, “reli[es] on imitation learning and collective intelligence” rather than set rules:

Averaging the judgments of many clinicians may outperform even the best clinician in the group. Training models to learn these consensus behaviors could lead to clinically significant improvements in accuracy. Furthermore, most diagnostic errors are the result of overlooking common diagnoses rather than very rare ones. … An AI CDSS that offers human-like suggestions may improve the reliability of clinical care by helping to avoid these clinical blunders. … Having an AI system that acts more like a thoughtful human guide rather than a black-box arbiter of truth may be the best next move.
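A toy simulation makes the "wisdom of crowds" point concrete: averaging many noisy, independent probability estimates tends to land closer to the truth than relying on any single estimate. All numbers below are assumptions chosen for illustration.

```python
# Minimal illustration of "wisdom of crowds": the average of many noisy, independent
# probability judgments tracks the truth better than a typical single judge.
import numpy as np

rng = np.random.default_rng(1)
true_risk = 0.30                      # the "correct" probability for a case (assumed)
n_cases, n_clinicians = 2000, 50

# Each clinician's estimate = truth + individual noise (purely illustrative assumptions).
estimates = np.clip(true_risk + rng.normal(0, 0.15, (n_cases, n_clinicians)), 0, 1)

single_error = np.abs(estimates[:, 0] - true_risk).mean()          # one clinician
crowd_error = np.abs(estimates.mean(axis=1) - true_risk).mean()    # average of 50

print(f"Mean absolute error, single clinician: {single_error:.3f}")
print(f"Mean absolute error, crowd average:    {crowd_error:.3f}")
```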

**

This post first appeared on the AFP Community Blog.

Friday, February 17, 2023

Health care heroism

It's been nearly three years since the COVID-19 pandemic began in the U.S., and the public health emergencies that were declared by the Trump Administration and renewed several times by the Biden Administration are scheduled to end on May 11, 2023. At least 1.1 million Americans have perished from coronavirus infections, with excess deaths data indicating that absent the pandemic, the vast majority would still be living. Early on, stories of doctors and nurses having to wear garbage bags as personal protective equipment and reusing the same N95 mask for days on end drove a narrative that health care professionals were heroes, wading into battle against the virus like soldiers under enemy fire or firefighters rescuing people from a blazing home. We were working longer hours under unusually stressful circumstances, and despite precautions, many front-line health care workers became infected on the job, particularly before the first vaccines became available in December 2020. But how many U.S. physicians made the ultimate sacrifice for their service, like the late Li Wenliang, the Chinese ophthalmologist who sounded the alarm during the early days of the Wuhan outbreak?

Although we may never have a precise answer, a Research Letter in JAMA Internal Medicine recently shed light on this question. Researchers used information from the American Medical Association Masterfile and Deceased Physician File to compare expected with observed deaths among U.S. physicians aged 45 to 84 years from March 2020 through December 2021. Results were stratified by age group, practicing vs. non-practicing, and provision of direct patient care. Overall, among an average of 785,000 physicians, 4,511 deaths occurred over the period of analysis, representing 622 more deaths than otherwise would have been expected in the absence of the pandemic. In my group (age 45-64, active physician providing direct patient care), 652 deaths occurred, 81 more than expected. Notably, no excess physician deaths occurred after April 2021, when vaccines for adults had become widely available.
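The core arithmetic of an excess-mortality analysis is straightforward: subtract the deaths expected from pre-pandemic baselines from the deaths actually observed. Here is a back-of-the-envelope version using the totals reported above; the published analysis models the expected count from historical trends rather than taking it as a given.

```python
# Back-of-the-envelope version of the study's central calculation.
# Totals are the figures reported above; the real analysis estimates the expected
# count from pre-pandemic mortality trends rather than deriving it by subtraction.
observed_deaths = 4511                  # U.S. physicians aged 45-84, Mar 2020 - Dec 2021
excess_deaths = 622                     # reported excess
expected_deaths = observed_deaths - excess_deaths

physicians = 785_000                    # average population at risk
excess_per_100k = excess_deaths / physicians * 100_000

print(f"Expected deaths over the period: {expected_deaths}")
print(f"Excess deaths per 100,000 physicians: {excess_per_100k:.0f}")
```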

Some study findings were unsurprising: for example, excess mortality was higher among older than younger physicians. Nonactive physicians had slightly more excess deaths than active physicians (providing direct patient care or not). In all age groups, excess death rates were substantially lower than those among the U.S. general population. While the study was not designed to determine the reasons for these disparities, it's easy to understand why: active physicians tend to be wealthier and healthier than inactive ones and the general population, and because they took COVID-19 seriously from the start, they were more likely to get vaccinated and boosted and (during the time period of the study) to wear masks in public places.

That physicians and other health professionals had advantages over other "front-line" workers doesn't detract from the heroism that I witnessed in those pre-vaccine days, when every time I went to work, part of me was terrified of catching a potentially lethal virus with no effective treatment and inadvertently bringing it home. Although the end of the public health emergency doesn't mean the end of the pandemic (today more than 1,000 infected patients are hospitalized in Pennsylvania alone), we are certainly in a much better place than we were. Health care works best when health care workers don't have to be heroes.

Friday, February 10, 2023

AI like ChatGPT will not make family physicians obsolete

Several years ago, I was speaking on the phone with Dr. Roland Grad, a family physician at McGill University and co-author of "Look It Up! What Patients, Doctors, Nurses, and Pharmacists Need to Know about the Internet and Primary Health Care." We were discussing the (to us, preposterous) notion that there would be no future for primary care physicians because we will all be replaced by cognitive computing / artificial intelligence (AI) systems such as IBM's Watson and OpenAI's ChatGPT. Roland told me that whenever someone asks him about this, he points out that Star Trek clearly shows that there will be human doctors well into the 24th century. Even the holographic Doctor on the U.S.S. Voyager is only pressed into service after the entire human medical staff is killed in an accident.

Many of the prospective medical students I interview have asked me about how AI will influence how I practice family medicine in the future. A 2017 Perspective article on machine learning in the New England Journal of Medicine asserted that "the complexity of medicine now exceeds the capacity of the human mind." The authors argued that since doctors can no longer keep all relevant medical knowledge in their heads, and "every patient is now a 'big data' challenge," we will soon need to rely on massive computer-generated algorithms to avoid diagnostic and treatment paralysis.

It's no surprise that neither author of this piece was a family physician. Since I began my residency 22 years ago, and well before that, I knew that no matter how much I learned, it wouldn't be possible to keep everything I needed in my head. I never had to. In medical school I carried around a variety of pocket-sized print references, and in residency and clinical practice I had several generations of Palm Pilots and, eventually, smartphones that allowed me to look up what I didn't know or couldn't recall. The same goes for keeping up with the medical literature. Although I regularly read more journals than the average generalist (nine*), I know that there's no way that I can possibly read, much less critically appraise, every new primary care-relevant study. Drs. David Slawson and Allen Shaughnessy have argued that rather than pursue that hopeless (even for a super-subspecialist) task, clinicians should be taught information management skills, which consist of foraging (selecting tools that filter information for relevance and validity), hunting ("just in time" information tools for use at the point of care), and "combining the best patient-oriented evidence with patient-centered care."

And although Watson made short work of the previously invincible Ken Jennings on Jeopardy! (and its IBM predecessor Deep Blue had already vanquished the reigning world chess champion), it had a much harder time cracking medicine. Although IBM started selling Watson for Oncology as a "revolution in cancer care" to hospital systems worldwide in 2014 and has spent millions of dollars lobbying Congress to exempt its software from FDA regulation, a STAT investigation found that the system fell far short of its hype:

At its heart, Watson for Oncology uses the cloud-based supercomputer to digest massive amounts of data - from doctor's notes to medical studies to clinical guidelines. But its treatment recommendations are not based on its own insights from these data. Instead, they are based exclusively on training by human overseers, who laboriously feed Watson information about how patients with specific characteristics should be treated.

AI will no doubt play a supporting role in the future of health care, alongside smartphone physicals and precision medicine and many other promising innovations borrowed from other industries. But based on past experience, I'm not convinced that any of these innovations will be as revolutionary as advertised. In my own career, doctors have gone from using paper charts that were time-consuming to maintain and couldn't communicate with each other to electronic health records that are even more time-consuming to maintain and still can't communicate with each other. You get my point. Even if IBM or OpenAI eventually harnesses AI to improve primary care practice, here's what Roland and his colleagues have to say in Look It Up!:

Some might wonder whether this new automated world of information will create a medical world that is dominated by artificial intelligence, where doctors - if we even need them anymore - will just repeat what the machines say. On the contrary, as more information becomes readily available, doctors, nurses, pharmacists, and allied health professionals will become more important as the interpreters of that information in accordance with the specific clinical and social history, values, and preferences of the patient and her or his family. 

Right. I couldn't have said it better myself.


* - American Family Physician, Annals of Family Medicine, Annals of Internal Medicine, Health Affairs, JAMA, JAMA Internal Medicine, Journal of the American Board of Family Medicine, Journal of Family Practice, New England Journal of Medicine

**

A previous version of this post appeared on Common Sense Family Doctor on November 17, 2017.