Sunday, January 29, 2023

Integrating AI into family medicine education and practice

In a 2021 editorial, Drs. Winston Liaw, Ioannis Kakadiaris, and Zhou Yang asserted that embracing artificial intelligence (AI) is “the key to reclaiming relationships in primary care.” For example, AI tools can efficiently identify patients at high risk for poor outcomes, perform triage, provide clinical decision support, and assist with visit documentation. On the other hand, AI “could just as easily make things worse by leading to endless alerts, nonsensical notes, misdiagnoses, and data breaches.” To keep AI from reenacting the cautionary tale of electronic health records and causing more problems than it solves, Dr. Liaw and colleagues encouraged family physicians to partner with researchers, serve on health information technology committees, and lend their primary care expertise to computer scientists developing AI tools.
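To make the first of those use cases concrete, here is a minimal sketch of how a risk-stratification tool might flag patients for proactive outreach. The features, outcome, data, and 20% threshold are all illustrative assumptions, not from the editorial:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for routine EHR features: age, number of
# chronic conditions, and emergency department visits in the past year.
X = np.column_stack([
    rng.normal(55, 15, 500),   # age
    rng.poisson(2, 500),       # chronic condition count
    rng.poisson(0.5, 500),     # ED visits, past year
])
# Synthetic outcome: 1 = hospitalized within 12 months (made up so the
# example runs; a real tool would train on historical patient data).
y = (rng.random(500) < 0.10 + 0.02 * X[:, 1]).astype(int)

model = LogisticRegression().fit(X, y)

# Flag patients whose predicted risk exceeds an assumed 20% threshold
# so the care team can follow up proactively.
risk = model.predict_proba(X)[:, 1]
high_risk = np.flatnonzero(risk > 0.20)
print(f"{high_risk.size} of {len(X)} patients flagged for outreach")
```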

In the future, medical students, residents, and practicing clinicians will need to meet basic competencies for the effective deployment of AI-based tools in primary care. In a recent special report in the Annals of Family Medicine, Drs. Liaw, Kakadiaris, and colleagues proposed six competency domains for family medicine:

(1) foundational knowledge (what is this tool?), (2) critical appraisal (should I use this tool?), (3) medical decision making (when should I use this tool?), (4) technical use (how do I use this tool?), (5) patient communication (how should I communicate with patients regarding the use of this tool?), and (6) awareness of unintended consequences (what are the “side effects” of this tool?).

The report provided examples of AI competencies within each domain based on learner roles (student, resident, and faculty) and noted that primary care team members other than physicians would also benefit from additional training in AI.

AI-enabled chatbots, which can be trained to write intelligible text and complete essays in response to specific queries, are already changing the way universities assess students and have the potential to distort the scientific literature. This month, the World Association of Medical Editors released a preliminary statement advising medical journals that chatbots cannot be authors, and that authors who use chatbots for writing assistance remain fully responsible for their work and should be transparent about how chatbots were used. (The journal Cureus has taken a different approach, inviting the submission of case reports written with the assistance of the chatbot ChatGPT and asking that the AI tool be named as an author.)

In August 2022, the U.S. Department of Health and Human Services (DHHS) announced its intention to confront health care discrimination resulting from the application of biased clinical algorithms and tools through a proposed rule that could hold clinicians liable for clinical decisions that rely on flawed AI-based tools. A JAMA Viewpoint recommended that DHHS shield clinicians from liability if they follow the accepted standard of care (e.g., utilizing the American College of Cardiology / American Heart Association Pooled Cohort Equations, which generate higher cardiovascular risk estimates for patients who identify as Black) and work closely with the U.S. Food and Drug Administration to determine how best to assess algorithmic software for bias.
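To illustrate why race-stratified equations like these draw scrutiny, the sketch below mimics the general Cox-model form of the Pooled Cohort Equations, risk = 1 − S₁₀^exp(Xβ − mean). The coefficients are hypothetical placeholders, not the published ACC/AHA values; the point is only to show how separate per-group parameter sets yield different estimates for otherwise identical clinical inputs:

```python
import math

# HYPOTHETICAL parameters in the general form of a sex- and
# race-stratified risk equation: each group gets its own coefficients,
# cohort-mean linear predictor, and 10-year baseline survival. These
# numbers are placeholders for illustration, NOT the published values.
PARAMS = {
    ("female", "black"): {"ln_age": 3.0, "ln_chol": 0.8, "mean_lp": 16.0, "s10": 0.950},
    ("female", "white"): {"ln_age": 2.7, "ln_chol": 0.8, "mean_lp": 15.3, "s10": 0.966},
    ("male", "black"):   {"ln_age": 2.5, "ln_chol": 0.3, "mean_lp": 11.6, "s10": 0.900},
    ("male", "white"):   {"ln_age": 2.4, "ln_chol": 0.5, "mean_lp": 12.4, "s10": 0.910},
}

def ten_year_risk(sex: str, race: str, age: float, total_chol: float) -> float:
    """10-year risk in the Cox-model form: 1 - S10 ** exp(lp - mean_lp)."""
    p = PARAMS[(sex, race)]
    lp = p["ln_age"] * math.log(age) + p["ln_chol"] * math.log(total_chol)
    return 1.0 - p["s10"] ** math.exp(lp - p["mean_lp"])

# Identical clinical inputs, different race label -> different estimate,
# which is the mechanism behind the bias concern described above.
for race in ("black", "white"):
    print(f"60-year-old woman, {race}: "
          f"{ten_year_risk('female', race, age=60, total_chol=200):.1%}")
```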

Clinical algorithms are not the only way that AI could worsen health inequities in primary care. In a systematic scoping review in Family Medicine and Community Health, Dr. Alexander d’Elia and colleagues identified 86 publications discussing potential harms of AI related to access (the “digital divide”), patient trust, dehumanization / biomedicalization, and patients’ agency for self-care. The review also described approaches to improving equity in AI implementation, including prioritizing community involvement and participation and considering system-wide effects beyond the primary care setting.

**

This post first appeared on the AFP Community Blog.