Monday, March 27, 2023

Carried by deer ticks, babesiosis is spreading in the northeastern U.S.

The Centers for Disease Control and Prevention reported last week that the incidence of babesiosis rose substantially in 10 northeastern states from 2011 to 2019, including Maine, New Hampshire, and Vermont, where it was not previously considered to be endemic. Maps from a previous American Family Physician article on tickborne diseases illustrate the geographic distribution of babesiosis compared to other tickborne diseases such as Lyme disease. Babesiosis is usually transmitted to humans by the bite of an infected deer tick (Ixodes scapularis), though rare cases of transfusion-associated and perinatal transmission have been reported.

The Environmental Protection Agency has concluded that ongoing climate change has influenced the expansion of disease-carrying ticks' range into northern latitudes:

Deer ticks are mostly active when temperatures are above 45˚F, and they thrive in areas with at least 85-percent humidity. Thus, warming temperatures associated with climate change are projected to increase the range of suitable tick habitat. … Because tick activity depends on temperatures being above a certain minimum, shorter winters could also extend the period when ticks are active each year. … Unlike some other vector-borne diseases, tick-borne disease patterns are generally less influenced by short-term changes in weather (weeks to months) than by longer-term climate change.

After an incubation period of one to nine weeks, patients with babesiosis can experience nonspecific flulike symptoms, including fever, generalized weakness, and myalgias. More severe complications may develop, including acute respiratory distress syndrome, congestive heart failure, and disseminated intravascular coagulation. The diagnosis can be made by polymerase chain reaction (PCR) or microscopic identification of intraerythrocytic organisms on a Giemsa-stained peripheral blood smear.

First-line treatment for mild to moderate babesiosis is oral atovaquone and azithromycin for 7 to 10 days. More severe infections should be treated with intravenous clindamycin and oral quinine. Exchange transfusions are “reserved for patients who are extremely ill – with blood parasitemia of more than 10 percent, massive hemolysis and asplenia.” Since co-infection with Lyme disease and ehrlichiosis can occur, clinicians can consider starting oral doxycycline while awaiting the results of serologic testing. Babesiosis may persist for more than two months after effective treatment and for months to years in patients with unrecognized infections.

A previous AFP editorial provided advice on use of effective insect repellents to prevent diseases carried by ticks and mosquitoes. A patient education handout reviewed strategies for preventing tick bites and safely removing attached ticks. Finally, readers interested in mitigating infectious and other health impacts of warming temperatures in their clinics and communities can consult a curated collection of articles on environmental health and climate change.

**

This post first appeared on the AFP Community Blog.

Wednesday, March 15, 2023

Aspirin, preeclampsia, and heart disease in later life and children

Preeclampsia affects an estimated 4 million pregnancies worldwide each year and has lifelong health consequences for women and children. The U.S. Preventive Services Task Force (USPSTF) recommends screening for preeclampsia with blood pressure measurements throughout pregnancy; last month it released an updated draft statement that expands the screening indication to identify all hypertensive disorders of pregnancy. Exercise during pregnancy is recommended to reduce the risk of gestational hypertension and preeclampsia. Additionally, the USPSTF and the American College of Obstetricians and Gynecologists recommend that pregnant patients at high risk for preeclampsia start taking daily low-dose (81 mg) aspirin at 12 weeks of gestation and continue until delivery. The high prevalence of preeclampsia risk factors has made preventive aspirin use increasingly common in the U.S. An analysis of 2019 birth certificate data found that low-dose aspirin was indicated in more than half of all pregnancies and could have been considered in more than 85 percent based on USPSTF criteria.

Since aspirin may increase peripartum bleeding risk, an open-label, noninferiority randomized trial in Spain compared discontinuing aspirin at 24 to 28 weeks' gestation with continuing it until 36 weeks (the standard of care in Europe) in pregnant patients judged to be at lower risk of preeclampsia based on second-trimester biomarkers. The incidence of preterm preeclampsia, the primary outcome, was similar between the groups. Of note, the aspirin dose was 150 mg daily, and high-risk individuals were identified in the first trimester based on a screening algorithm that combined clinical factors with objective measurements such as the mean uterine artery pulsatility index and serum placental growth factor. Differences in the European approach to preeclampsia prevention make it difficult to determine this study's implications for U.S. practice.

A 2017 systematic review and meta-analysis of 22 studies found that preeclampsia is associated with a 4-fold increase in future heart failure risk and 2-fold increases in the risks of heart disease, stroke, and cardiovascular death. Should a history of adverse pregnancy outcomes be considered in atherosclerotic cardiovascular disease (ASCVD) risk assessments? To shed light on this question, Swedish researchers conducted a cross-sectional study of a population-based cohort of 10,000 women with one or more deliveries in 1973 or later who underwent coronary computed tomography angiography at age 50 to 65 years as part of a study performed from 2013 to 2018. Patients with histories of gestational hypertension and preeclampsia were more likely to have coronary atherosclerosis and significant stenosis even if their predicted ASCVD risk was low. Whether intensive primary prevention with statin therapy would improve outcomes in these patients is not known.

Finally, maternal preeclampsia has been associated with increased cardiovascular risks in children. In a population-based cohort study of individuals born in Denmark, Finland, and Sweden from 1973 to 2016, offspring of pregnancies with preeclampsia had increased risks of ischemic heart disease (adjusted hazard ratio, 1.33) and stroke (aHR, 1.34), independent of preterm or small for gestational age birth.

**

This post first appeared on the AFP Community Blog.

Sunday, March 12, 2023

Springing forward and building my Substack

Today marks the first day of daylight saving time in 2023. Many U.S. health reporters and bloggers have devoted newsprint or digital space to the pros and cons of shifting our clocks one hour forward so that there is more late-day sunshine during the warmer months. In 2019, my colleague Jen Middleton discussed how to minimize sleep disruptions caused by the time change. This year, Rita Rubin wrote a terrific news article in JAMA that highlighted the contrast between public support for making daylight saving time year-round and the positions of major medical organizations, including the American Academy of Sleep Medicine and the American Medical Association, that support abolishing daylight saving time and sticking with permanent standard time.

For nine months in 1974, the U.S. actually instituted year-round daylight saving time in the hope of reducing energy consumption during the OPEC oil embargo. Unfortunately, noted Rubin, "the shift to daylight saving time in the middle of winter meant that many schoolchildren had to go to school in the dark," contributing to the publicized deaths of 8 Florida students in early-morning car accidents. Public support for the change waned rapidly, and President Gerald Ford signed a law that reverted to standard time that fall.

Switching topics, I've been working to build Common Sense Family Doctor's presence on Substack, where I started cross-posting in January after I was briefly locked out of Twitter. Although my Twitter access was eventually restored, that platform is going rapidly downhill, with qualified health professionals fleeing in droves while purveyors of misinformation have been emboldened by Elon Musk's "anything goes as long as it can be monetized" stance. So I appreciated a recent shout-out from fellow blogger Hans Duvefelt, MD, whose long-running A Country Doctor Writes features thoughtful and absorbing observations on the pleasures and pains of practicing family medicine in rural Maine, and is now available as a subscription on Substack. Hans has also written three books based on his blog writings and videos, the first of which entertained my family for many hours as we drove back from Salt Lake City to Washington, DC in the summer of 2021. If you haven't previously visited his blog or Substack, they are well worth a few minutes of your day, regardless of your feelings on daylight saving.

Wednesday, March 8, 2023

"I want to be a regular doctor" - making primary care the norm

Today I spent a few hours updating my "Introduction to the U.S. health care system" lecture for the first-year medical student course I directed before my 2020-21 Salt Lake City sabbatical and last year's move to Lancaster, Pennsylvania. (I continue to hold a Georgetown faculty appointment as a guest lecturer for this course and a health policy elective for 4th year students and residents.) The last several slides are taken from a series of reports from the Commonwealth Fund illustrating that as U.S. health care spending has accelerated in comparison to spending in peer countries, key health outcomes, such as infant and maternal mortality and average life expectancy, have fallen farther and farther behind. I then ask the students: why are our outcomes worse than those of other countries that spend much less?

There isn't a single correct answer to this question. Culprits include high administrative costs, poor continuity of care due to lack of insurance portability, and the fact that too many people (insured and uninsured) can't access routine health care services because they are not affordable or not convenient. But the explanation that resonates with me most, as a family physician who has worked in public health, is that public health and primary care have been systematically undervalued and have insufficient resources to do their jobs well. Consider the latest evidence: a primary care scorecard developed by the Robert Graham Center shows that primary care's share of overall U.S. health care expenditures fell from 6.2% in 2013 to a paltry 4.6% in 2020.

Providing primary care is generally inexpensive, and no one is arguing that it should have a 50% or even 25% share, but achieving even the 8% average share among Organization for Economic Co-operation and Development countries would be transformative for American medicine. Absent new investments, the primary care workforce will continue to shrink, and fewer and fewer adults will be able to see a primary care clinician without waiting for weeks to months. Efforts to date to improve income equity between generalists and subspecialists have been anemic; a recent study found that adjustments to the Medicare Physician Fee Schedule designed to increase the value of "cognitive work" (activities that don't involve performing procedures or using technological tools) that went into effect in 2021 narrowed the payment gap by only 2%.

Other well-intentioned efforts to prime the primary care pipeline may yield modest gains. Several, like Texas Tech University's Family Medicine Accelerated Track, condense medical school into 3 years for students who commit early to family medicine or primary care careers. This approach eliminates one year of tuition payments and allows the medical school graduate to start earning an attending physician's salary one year sooner. A less conventional path to primary care is switching medical specialties mid-career. One of my friends, a longtime colleague and previous personal physician, began her career as a radiation oncologist and later re-trained in family medicine, where she practiced until her retirement. Doing so required that she complete a second residency, with long hours and relatively low pay, and the strain that such an arrangement can put on significant others and families is considerable. Thus, even if artificial intelligence eventually reduces demand for some subspecialties (e.g., pathology and radiology), it's unlikely that enough doctors will migrate into primary care to address future workforce shortages.

A recent episode of the Society of Teachers of Family Medicine (STFM) podcast featured Dr. Margot Savoy, one of the most talented family physicians I know and the Senior Vice President of Education for the American Academy of Family Physicians. Asked to describe the origins of her interest in a family medicine career, she spoke about wanting to be a "regular doctor," the health professional you saw when you needed a checkup or had an acute injury or illness. Innocent of the divisions that existed in medicine, she had to be educated that this type of "regular doctor" was called a primary care physician, and about the differences among physicians who cared for children only, physicians who cared for adults only, and family physicians. Countless others have begun medical school considering primary care careers to be the norm before being seduced by the siren song of higher-paid subspecialties with narrower bodies of knowledge to master. We need schools to continue producing subspecialists, of course, but to bring U.S. health outcomes back to par with the rest of the world, we need primary care physicians more.

Tuesday, February 28, 2023

Individualizing drug therapy for adults with major depressive disorder

Managing patients with depressive disorders constitutes a significant portion of the typical family physician’s practice. A serial cross-sectional study in Health Affairs estimated that the percentage of U.S. adult primary care visits that addressed mental health concerns rose from 10.7% in 2006 to 15.9% by 2018. Previous AFP Community Blog posts have discussed the primary care evidence base for psychologic and pharmacologic treatments and screening and treatment guidelines from the U.S. Preventive Services Task Force and the American College of Physicians (ACP). In the February issue, Dr. Heather Kovich and colleagues provided an update on pharmacologic treatment and tapering strategies to minimize the risk of discontinuation syndrome. When initiating medication, the authors recommend using shared decision-making, considering factors such as “prior treatment and response, comorbidities, costs, and risk of adverse effects.”

An accompanying editorial by Drs. Andrew Buelt and John McQuaid compared the three major U.S. clinical guidelines for major depressive disorder from the ACP, the American Psychological Association (APA), and the U.S. Department of Veterans Affairs and Department of Defense (VA/DoD). (The ACP released an update to its 2016 guideline while the article and editorial were in press.) All of the guidelines recommend initial treatment with evidence-based psychotherapy or pharmacotherapy. Pharmacogenetic tests such as GeneSight Psychotropic “[have] not been shown to improve patient-oriented outcomes and [are] not recommended to assist in drug choice.” Most patients will not experience additional benefit from combining psychotherapy and pharmacotherapy; however, the VA/DoD suggests that this combination is appropriate for patients with severe, persistent (more than two years), or recurrent (two or more episodes) depression.

A Canadian group recently developed a visual evidence-informed decision support tool based on a literature review and the Canadian Network for Mood and Anxiety Treatments depression treatment guidelines. The tool consists of two Figures that guide primary care clinicians in antidepressant selection based on specifiers (sleep disturbance, cognitive dysfunction, anxious distress, somatic symptoms), comorbid conditions, adverse effects, drug interactions, and administration. Physicians using this tool should note that the costs of antidepressants in Canada are considerably lower than those in the U.S., even for Medicare beneficiaries. Another helpful decision tool for antidepressants and other psychiatric drugs is the Waco Guide to Psychopharmacology in Primary Care, which is available as a downloadable app for Apple users.

**

This post first appeared on the AFP Community Blog.

Tuesday, February 21, 2023

Diagnosing ovarian and other cancers with human and artificial intelligence

The Ovarian Cancer Research Alliance recently released a consensus statement that encourages women who have completed childbearing to consider having their fallopian tubes removed if they are having pelvic surgery for benign conditions. This surgical strategy for reducing ovarian cancer mortality acknowledges the negative results of the UK Collaborative Trial of Ovarian Cancer Screening, which Dr. Jennifer Middleton previously discussed. Studies have found that early ovarian cancer frequently causes constitutional, abdominal, urinary, and pelvic symptoms, which can be used in combination with a cancer antigen 125 level to estimate ovarian cancer risk. However, the low sensitivity and specificity of this “symptom index” mean that it misses early cancers and leads to unnecessary cancer evaluations for benign conditions.

A population-based United Kingdom (UK) cohort study assessed the underlying cancer risk of patients aged 30 years and older who presented to primary care with new-onset fatigue and co-occurring vague symptoms. After excluding persons with anemia or alarm symptoms (e.g., dysphagia), researchers followed patients for up to 9 months for a diagnosis of cancer. Cancer risk increased with age, reaching 3% or more in patients in their mid-60s. For all age groups combined, cancer risk was highest for women with fatigue and abdominal bloating and men with fatigue and weight loss, constipation, dyspnea, or abdominal pain.

A Danish research group developed an artificial intelligence (AI)-based model that used results from common blood tests (complete blood count, electrolytes, and/or liver function tests) to generate a risk score that predicted cancer within 90 days. These laboratory data are often readily available; another UK study of patients who were diagnosed with cancer found that 41% had received common blood tests in primary care as part of their diagnostic process. However, relying solely on blood tests neglects the value of primary care physicians’ non-analytical “gut feelings” that the patient has a benign or serious condition.
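To make the idea concrete, here is a minimal sketch (in Python) of how routine laboratory values could feed a short-horizon cancer risk score. This is not the Danish group's published model; the features, simulated data, and example patient below are hypothetical, chosen only to illustrate the general approach of training a classifier on common blood test results.

```python
# Minimal sketch of a lab-based cancer risk score (hypothetical data, not the Danish model).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated cohort: hemoglobin (g/dL), platelets (10^9/L), sodium (mmol/L), ALT (U/L)
n = 1000
X = np.column_stack([
    rng.normal(13.5, 1.5, n),  # hemoglobin
    rng.normal(250, 60, n),    # platelet count
    rng.normal(140, 3, n),     # sodium
    rng.normal(25, 10, n),     # alanine aminotransferase
])
# Simulated outcome: cancer diagnosed within 90 days (about 3% of patients)
y = rng.binomial(1, 0.03, n)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Estimated 90-day risk for a hypothetical patient with mild anemia and thrombocytosis
patient = np.array([[10.8, 480.0, 138.0, 30.0]])
risk = model.predict_proba(patient)[0, 1]
print(f"Estimated 90-day cancer risk: {risk:.1%}")
```

A real model of this kind would be trained on linked laboratory and cancer registry data, validated in a separate population, and calibrated before its output informed patient care.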

In an observational study of 155 general practitioners in Spain, a “sense of alarm,” present in 22% of consultations for new symptoms, had a sensitivity of 59% for cancer and other serious diseases and a negative predictive value of 98%. Thus, AI may also assist in cancer diagnosis by imitating the intuitive behavior of groups of family physicians. In an editorial in Annals of Internal Medicine, Dr. Gary Weissman and colleagues proposed that AI clinical decision support systems (CDSSs) utilize a “wisdom of crowds” approach that, like the best chess-playing AI systems, “reli[es] on imitation learning and collective intelligence” rather than set rules:

Averaging the judgments of many clinicians may outperform even the best clinician in the group. Training models to learn these consensus behaviors could lead to clinically significant improvements in accuracy. Furthermore, most diagnostic errors are the result of overlooking common diagnoses rather than very rare ones. … An AI CDSS that offers human-like suggestions may improve the reliability of clinical care by helping to avoid these clinical blunders. … Having an AI system that acts more like a thoughtful human guide rather than a black-box arbiter of truth may be the best next move.
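As a back-of-the-envelope illustration of the "wisdom of crowds" idea in that editorial, averaging several clinicians' independent risk estimates often lands closer to the truth than most of the individual estimates. The numbers in this sketch are invented purely for illustration.

```python
# Toy "wisdom of crowds" example: average several clinicians' independent estimates
# of a patient's probability of serious disease (all values hypothetical).
true_probability = 0.20
clinician_estimates = [0.05, 0.10, 0.15, 0.30, 0.45]  # five hypothetical clinicians

crowd_estimate = sum(clinician_estimates) / len(clinician_estimates)
crowd_error = abs(crowd_estimate - true_probability)
individual_errors = [abs(e - true_probability) for e in clinician_estimates]

print(f"Crowd estimate: {crowd_estimate:.2f} (error {crowd_error:.2f})")
print(f"Clinicians the crowd outperformed: "
      f"{sum(err > crowd_error for err in individual_errors)} of {len(clinician_estimates)}")
```

An AI system trained to imitate the consensus judgments of many experienced clinicians is, in effect, trying to capture this averaging advantage at scale.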

**

This post first appeared on the AFP Community Blog.

Friday, February 17, 2023

Health care heroism

It's been nearly three years since the COVID-19 pandemic began in the U.S., and the public health emergencies that were declared by the Trump Administration and renewed several times by the Biden Administration are scheduled to end on May 11, 2023. At least 1.1 million Americans have perished from coronavirus infections, with excess deaths data indicating that, absent the pandemic, the vast majority would still be living. Early on, stories of doctors and nurses having to wear garbage bags as personal protective equipment and reusing the same N-95 mask for days on end drove a narrative that health care professionals were heroes, wading into battle against the virus like soldiers under enemy fire or firefighters rescuing people from a blazing home. We were working longer hours under unusually stressful circumstances, and despite precautions, many front-line health care workers became infected on the job, particularly before the first vaccines became available in December 2020. But how many U.S. physicians made the ultimate sacrifice for their service, like the late Li Wenliang, the Chinese ophthalmologist who sounded the alarm during the early days of the Wuhan outbreak?

Although we may never have a precise answer, a Research Letter in JAMA Internal Medicine recently shed light on this question. Researchers used information from the American Medical Association Masterfile and Deceased Physician File to compare expected with observed deaths among U.S. physicians aged 45 to 84 years from March 2020 through December 2021. Results were stratified by age group, practicing vs. non-practicing status, and provision of direct patient care. Overall, among an average of 785,000 physicians, 4,511 deaths occurred over the period of analysis, representing 622 more deaths than would have been expected in the absence of the pandemic. In my group (age 45 to 64, active physician providing direct patient care), 652 deaths occurred, 81 more than expected. Notably, no excess physician deaths occurred after April 2021, when vaccines for adults had become widely available.

Some study findings were unsurprising: for example, excess mortality was higher among older than younger physicians. Nonactive physicians had slightly more excess deaths than active physicians (whether or not they provided direct patient care). In all age groups, excess death rates were substantially lower than those in the U.S. general population. While the study was not designed to determine the reasons for these disparities, they are easy to understand: active physicians tend to be wealthier and healthier than inactive physicians and the general population, and because they took COVID-19 seriously from the start, they were more likely to get vaccinated and boosted and (during the study period) to wear masks in public places.

That physicians and other health professionals had advantages over other "front-line" workers doesn't detract from the heroism that I witnessed in those pre-vaccine days, when every time I went to work part of me was terrified of inadvertently catching and bringing back home a potentially lethal virus with no effective treatment. Although the end of the public health emergency doesn't mean the end of the pandemic (today more than 1,000 infected patients are hospitalized in Pennsylvania alone), we are certainly in a much better place than we were. Health care works best when health care workers don't have to be heroes.

Friday, February 10, 2023

AI like ChatGPT will not make family physicians obsolete

Several years ago, I was speaking on the phone with Dr. Roland Grad, a family physician at McGill University and co-author of "Look It Up! What Patients, Doctors, Nurses, and Pharmacists Need to Know about the Internet and Primary Health Care." We were discussing the (to us, preposterous) notion that there would be no future for primary care physicians because we will all be replaced by cognitive computing / artificial intelligence (AI) systems such as IBM's Watson and OpenAI's ChatGPT. Roland told me that whenever someone asks him about this, he points out that Star Trek clearly shows that there will be human doctors well into the 24th century. Even the holographic Doctor on the U.S.S. Voyager is only pressed into service after the entire human medical staff is killed in an accident.

Many of the prospective medical students I interview have asked me about how AI will influence how I practice family medicine in the future. A 2017 Perspective article on machine learning in the New England Journal of Medicine asserted that "the complexity of medicine now exceeds the capacity of the human mind." The authors argued that since doctors can no longer keep all relevant medical knowledge in their heads, and "every patient is now a 'big data' challenge," we will soon need to rely on massive computer-generated algorithms to avoid diagnostic and treatment paralysis.

It's no surprise that neither author of this piece was a family physician. Since I began my residency 22 years ago, and well before that, I knew that no matter how much I learned, it wouldn't be possible to keep everything I needed in my head. I never had to. In medical school I carried around a variety of pocket-sized print references, and in residency and clinical practice I had several generations of Palm Pilots and, eventually, smartphones that allowed me to look up what I didn't know or couldn't recall. The same goes for keeping up with the medical literature. Although I regularly read more journals than the average generalist (nine*), I know that there's no way that I can possibly read, much less critically appraise, every new primary care-relevant study. Drs. David Slawson and Allen Shaughnessy have argued that rather than pursue that hopeless (even for a super-subspecialist) task, clinicians should be taught information management skills, which consist of foraging (selecting tools that filter information for relevance and validity), hunting ("just in time" information tools for use at the point of care), and "combining the best patient-oriented evidence with patient-centered care."

And although Watson made short work of the previously invincible Ken Jennings on Jeopardy! (much as IBM's earlier Deep Blue vanquished the reigning world chess champion), it had a much harder time cracking medicine. Although IBM started selling Watson for Oncology as a "revolution in cancer care" to hospital systems worldwide in 2014 and has spent millions of dollars lobbying Congress to exempt its software from FDA regulation, a STAT investigation found that the system fell far short of its hype:

At its heart, Watson for Oncology uses the cloud-based supercomputer to digest massive amounts of data - from doctor's notes to medical studies to clinical guidelines. But its treatment recommendations are not based on its own insights from these data. Instead, they are based exclusively on training by human overseers, who laboriously feed Watson information about how patients with specific characteristics should be treated.

AI will no doubt play a supporting role in the future of health care, alongside smartphone physicals and precision medicine and many other promising innovations borrowed from other industries. But based on past experience, I'm not convinced that any of these innovations will be as revolutionary as advertised. In my own career, doctors have gone from using paper charts that were time-consuming to maintain and couldn't communicate with each other to electronic health records that are even more time-consuming to maintain and still can't communicate with each other. You get my point. Even if IBM or OpenAI eventually harnesses AI to improve primary care practice, here's what Roland and his colleagues have to say in Look It Up!:

Some might wonder whether this new automated world of information will create a medical world that is dominated by artificial intelligence, where doctors - if we even need them anymore - will just repeat what the machines say. On the contrary, as more information becomes readily available, doctors, nurses, pharmacists, and allied health professionals will become more important as the interpreters of that information in accordance with the specific clinical and social history, values, and preferences of the patient and her or his family. 

Right. I couldn't have said it better myself.


* - American Family Physician, Annals of Family Medicine, Annals of Internal Medicine, Health Affairs, JAMA, JAMA Internal Medicine, Journal of the American Board of Family Medicine, Journal of Family Practice, New England Journal of Medicine

**

A previous version of this post appeared on Common Sense Family Doctor on November 17, 2017.

Sunday, January 29, 2023

Integrating AI into family medicine education and practice

In a 2021 editorial, Drs. Winston Liaw, Ioannis Kakadiaris, and Zhou Yang asserted that embracing artificial intelligence (AI) is “the key to reclaiming relationships in primary care.” For example, AI tools can efficiently identify patients at high risk for poor outcomes, perform triage, provide clinical decision support, and assist with visit documentation. On the other hand, AI “could just as easily make things worse by leading to endless alerts, nonsensical notes, misdiagnoses, and data breaches.” To avoid having AI reenact the cautionary tale of electronic health records and cause more problems than it solves, Dr. Liaw and colleagues encouraged family physicians to partner with researchers, participate on health information technology committees, and lend their primary care expertise to computer scientists developing AI tools.

In the future, medical students, residents, and practicing clinicians will need to meet basic competencies for the effective deployment of AI-based tools in primary care. In a recent special report in the Annals of Family Medicine, Drs. Liaw, Kakadiaris, and colleagues proposed six competency domains for family medicine:

(1) foundational knowledge (what is this tool?), (2) critical appraisal (should I use this tool?), (3) medical decision making (when should I use this tool?), (4) technical use (how do I use this tool?), (5) patient communication (how should I communicate with patients regarding the use of this tool?), and (6) awareness of unintended consequences (what are the “side effects” of this tool?)

The report provided examples of AI competencies within each domain based on learner roles (student, resident and faculty) and noted that primary care team members other than physicians would also benefit from additional training in AI.

AI-enabled chatbots, which can be trained to write intelligible text and complete essays in response to specific queries, are already changing the way universities assess students and have the potential to distort the scientific literature. This month, the World Association of Medical Editors released a preliminary statement advising medical journals that chatbots cannot be authors, and that authors who use chatbots for writing assistance remain fully responsible for their work and should be transparent about how chatbots were used. (The journal Cureus has taken a different approach, inviting the submission of case reports written with the assistance of the chatbot ChatGPT and asking that the AI tool be named as an author.)

In August 2022, the U.S. Department of Health and Human Services (DHHS) announced its intention to confront health care discrimination resulting from the application of biased clinical algorithms and tools through a proposed rule that could hold clinicians liable for clinical decisions made by relying on flawed AI-based tools. A JAMA Viewpoint recommended that DHHS shield clinicians from liability if they are following the accepted standard of care (e.g., utilizing the American College of Cardiology / American Heart Association Pooled Cohort Equations, which generate higher cardiovascular risk estimates for patients who identify as Black) and work closely with the U.S. Food and Drug Administration to determine how to best assess algorithmic software for bias.

Clinical algorithms are not the only way that AI could worsen health inequities in primary care. In a systematic scoping review in Family Medicine and Community Health, Dr. Alexander d’Elia and colleagues identified 86 publications that discussed potential negative effects of AI on access (the “digital divide”), patient trust, dehumanization / biomedicalization, and agency for self-care. The review also described approaches to improving equity in AI implementation, including prioritizing community involvement and participation and considering system-wide effects beyond the primary care setting.

**

This post first appeared on the AFP Community Blog.

Tuesday, January 17, 2023

New clinical recommendations on osteoporosis treatment

The American College of Physicians (ACP) has updated its 2017 clinical practice guideline on treatment of primary osteoporosis or low bone mass to prevent fractures in adults. The previous version, which was endorsed by the American Academy of Family Physicians, recommended treating women with osteoporosis for 5 years with alendronate, risedronate, zoledronic acid, or denosumab to reduce the risk of hip fractures and vertebral fractures. It also suggested that men with clinically recognized osteoporosis be offered bisphosphonates to reduce the risk of vertebral fractures. Treating older women with low bone mass (osteopenia) at high risk for fracture was deemed to be optional based on patient preferences and medication benefits, harms, and costs. The publication of additional studies on existing therapies and the availability of new therapies such as abaloparatide prompted this guideline update.

An independent evidence review team performed a systematic review and network meta-analysis of osteoporosis treatments that analyzed 34 randomized, controlled trials and 36 observational studies. The review confirmed the effectiveness of bisphosphonates and denosumab in reducing hip, vertebral, and other clinical fractures. In older postmenopausal females at very high fracture risk, abaloparatide, teriparatide, and sequential romosozumab followed by alendronate appeared to be more effective at reducing clinical fractures over 24 months than bisphosphonates. Harms of therapies included an increased risk of adverse events with abaloparatide and teriparatide and a small absolute increase in the risks of atypical femoral fractures and osteonecrosis of the jaw in persons taking bisphosphonates for 36 months or more.

In the updated guideline, the ACP now recommends that clinicians use bisphosphonates as first-line therapy in women and men with osteoporosis, with the exception of women at very high risk of fracture; in this group, either romosozumab or teriparatide can be used, followed by a bisphosphonate. Denosumab is endorsed as second-line therapy for adults who have contraindications to bisphosphonates or experience adverse effects from them. Similar to the 2017 guideline, the ACP suggests an individualized approach to prescribing bisphosphonates in women over age 65 with low bone mass. Rather than revisiting the recommendations on a fixed schedule, the ACP plans to perform quarterly literature surveillance and maintain this topic as a “living” guideline that will be updated when new evidence becomes available.

The U.S. Preventive Services Task Force currently recommends screening for osteoporosis in all women age 65 years and older and postmenopausal women younger than 65 years at increased risk using a clinical risk assessment tool. It found insufficient evidence to assess the balance of benefits and harms of screening for osteoporosis in men. The American College of Obstetricians and Gynecologists recently made similar screening recommendations. A 2020 report from the Women’s Health Initiative study found that repeating bone mineral density (BMD) testing after 3 years did not provide more clinical information than a baseline measurement. Additional information on osteoporosis diagnosis and treatment is available in AFP By Topic, including a Lown Right Care article on making decisions about fracture prevention in older adults.

**

This post first appeared on the AFP Community Blog.

Tuesday, January 10, 2023

A world without Twitter

Sometime during the week after Christmas, my Twitter account was hacked. Someone with an IP address in Quebec stole my password, logged in as me, and changed my e-mail recovery address so that I would not be able to reset my password. I have contacted Twitter Support three times since December 29 and have not received a response. Given Elon Musk's reported "firing frenzy" following his acquisition of Twitter, it isn't clear to me when, or if, I ever will.

In the meantime, if you don't subscribe to Common Sense Family Doctor by e-mail or an RSS feed reader, you can also find notifications of new posts at the following websites:

Facebook: https://www.facebook.com/commonsensefamilydoc/

Substack: https://commonsensemd.substack.com/

LinkedIn: https://www.linkedin.com/in/kennylinafp/recent-activity/shares/

Friday, January 6, 2023

Improving early cancer diagnosis: it's (mostly) not about screening

In a recent medical news item that you may have missed, an analysis from NORC at the University of Chicago determined that only 14% of cancers in the U.S. are diagnosed by a recommended screening test for breast, cervical, colorectal, or lung cancer.  An additional 11% represent prostate cancers detected through PSA screening, which isn't technically recommended. Adding these two percentages together and subtracting from 100% means that 75% of cancers are either detected incidentally or after patients develop symptoms that cause them to seek medical care. Notably, the study was funded by GRAIL, which sells the Galleri blood test for screening for many types of cancer at once (most without currently recommended tests), and the company no doubt plans to use the results to increase demand for its unproven $949 test. 

However, there is another way to respond to this analysis. If three-quarters of cancers are detected after symptoms develop, the medical community should focus on improving outcomes by reducing the time from symptoms to cancer diagnosis. In a 2022 JAMA Viewpoint and a more detailed paper in Cancer Prevention Research, Dr. Elizabeth Sarma and colleagues made the case for this approach, arguing that symptom detection should be viewed as a "partner to screening" in primary care. People with possible cancer symptoms don't always seek timely care; a mixed-methods systematic review of 80 studies suggested that older adults often initially attribute symptoms to normal aging, but when they do recognize them as potentially serious, they are quicker to see a doctor than younger persons. Another study found that patients with more than two chronic conditions had a longer diagnostic interval (time from primary care presentation to cancer diagnosis) and a higher likelihood of seeking emergency care, possibly because clinicians incorrectly attributed the symptom to the pre-existing condition rather than cancer.

The diagnostic challenge that family physicians face is that most patients with common symptoms that could be due to cancer don't have cancer. If I ordered a CT scan or a gastroenterology referral for every adult who presented to my office with abdominal pain, many patients would endure unnecessary procedures to identify the few with colorectal cancer. A 2019 review found few electronic clinical decision support tools for cancer diagnosis in primary care. However, a Veterans Affairs health system study concluded that visiting a primary care clinician at least annually is associated with substantially lower risks of metastatic disease at the time of diagnosis and cancer-related death. So what factors influence our decisions to perform tests or refer patients with symptoms that could represent cancer? A systematic review found that the only factors that consistently prompted more diagnostic workups and referrals were alarm symptoms (e.g., fever or unexplained weight loss) and a "gut feeling" that a serious cause was responsible. This isn't good enough. Alarm symptoms are generally obvious, and gut feelings may overestimate or underestimate risk depending on the physician's training and experience.

Although clinicians in the United Kingdom (UK) perform less cancer screening than we do in the U.S., the UK is well ahead of us in refining systems for early diagnosis. Dr. Sarma observed that UK researchers used data from their national health system to "generate symptom lists and corresponding positive predictive values ... [that] were used to develop interactive calculators for primary care practice to predict an individual's risk of cancer." She endorsed a three-pronged research agenda: describing pre-diagnostic care pathways for symptomatic cancers; identifying signs and symptoms that can be used to identify patients at higher risk for specific cancers; and improving diagnostic pathways for symptomatic patients by increasing patient awareness and improving point-of-care tests in primary care. Can the U.S. successfully emulate the UK model of improving early cancer diagnosis?
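For readers unfamiliar with how those symptom-based calculators work, the core arithmetic is a positive predictive value (PPV) computed from a symptom's sensitivity and specificity and the background cancer risk among symptomatic patients. Here is a minimal sketch with invented numbers (not actual UK data):

```python
# Toy positive predictive value (PPV) calculation for a single cancer symptom.
# All inputs below are invented for illustration; they are not UK registry data.
def positive_predictive_value(sensitivity: float, specificity: float, prevalence: float) -> float:
    """PPV = true positives / all positives, via Bayes' theorem."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Hypothetical symptom: present in 30% of patients with cancer (sensitivity) and
# absent in 95% of patients without cancer (specificity), in a primary care
# population where 1% of symptomatic patients have an underlying cancer.
ppv = positive_predictive_value(sensitivity=0.30, specificity=0.95, prevalence=0.01)
print(f"PPV of this symptom for cancer: {ppv:.1%}")  # about 5.7%
```

Real calculators combine many symptoms, test results, and demographic factors, but each ultimately answers the same question: given what this patient reports, how likely is an underlying cancer?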

Monday, January 2, 2023

A "hot take" on screening colonoscopy

Screening for colorectal cancer is an important preventive health practice that saves lives. But is colonoscopy really the "gold standard" for colorectal cancer screening? In Episode 172 of the American Family Physician podcast, I provided my "hot take" on a recent randomized trial that was designed to inform the answer to this question. You can listen to it in the embedded player starting at 22:50 or read the transcript below. Health care professionals may also be interested in a more in-depth discussion that I participated in for Medscape.



Hi, I’m Kenny Lin, deputy editor of AFP and an expert in cancer screening guidelines.

In 2002, the U.S. Preventive Services Task Force first recommended colonoscopy as a primary screening test for colorectal cancer (CRC) in adults. This was an uncharacteristic decision, since the first randomized trial of colonoscopy would not be published for another 20 years.

Since then, flexible sigmoidoscopy has virtually disappeared as a screening option, and colonoscopy has become the primary screening method in the U.S. Gastroenterologists call it the “gold standard” and portray stool-based tests as an inferior alternative to be offered only to patients who refuse.

Of course, colonoscopy is less convenient and has serious risks that stool tests don’t: perforations, bleeding, and infections. The Task Force and others have assumed that colonoscopy saves more lives than stool tests, which reduce CRC mortality by around 15%, or flex sig, which lowers it by 25 to 30%. So the first trial results were surprising, even shocking. After 10 years, the group invited to undergo screening colonoscopies developed fewer cancers, but there was no change in CRC mortality.

Some have argued that a longer follow-up period, higher adherence in the intervention group, and better trained endoscopists might have produced better results. But at a minimum, this landmark trial suggests that it is not accurate to inform patients that colonoscopy is the best test for CRC screening or ethical to recommend it preferentially.

Instead, we should explain to patients that stool-based tests and colonoscopy have different benefits, harms, and screening intervals; that either test is better than none; and then let them decide. On a health system level, it may be worth taking another look at flex sig, an office procedure that older family physicians like me were trained to perform before the promotion of screening colonoscopy got out ahead of the evidence.