This year, I did a lot of reading about current and future applications of artificial intelligence (AI) in health care - for example, how it will reduce the grunt work of selecting future physicians; become a required competency in medical education; provide relief from overflowing primary care electronic in-baskets; and provide clinical decision support for treating patients with depression. I've read pessimistic commentaries about chatbots and large language models being a "Pandora's box" and more optimistic pieces arguing that generative AI can overcome the "productivity paradox" of information technology: that is, unlike the implementation of electronic health records, it won't take decades to produce large gains in health care quality and efficiency. Meanwhile, regulatory authorities are still struggling to catch up, seeking to ensure the safety of AI products without discouraging technological innovation. (And while I was retrieving these articles online, Microsoft Bing's AI-enabled search engine kept trying to take over writing this blog post.) But the most interesting article I read about AI this year had nothing to do with health care. It was about the U.S. Air Force.
"AI brings the robot wingman to aerial combat," declared the science fiction-sounding headline of this August 2023 New York Times story. It discussed the XQ-58A Valkyrie, a pilotless "collaborative combat aircraft" described as "essentially a next-generation drone." Eying a seemingly inevitable armed conflict with China over the disputed island of Taiwan, U.S. Air Force war planners hope that these robot wingmen (wingAIs?) will not only be far less expensive to produce than conventional piloted warplanes, but also spare the lives of many human pilots who would otherwise be shot down by China's vast antiaircraft apparatus. Why expect our flying servicemen and women to become casualties while performing exploits of derring-do when a fearless AI can complete the same mission at a fraction of the risk?
Military AI raises ethical dilemmas, of course. Behind every drone attack on suspected terrorists is a human being who has judged (rightly or wrongly) that the target is indeed a wartime adversary and fair game. But "the autonomous use of lethal force" - the idea that AI could be making kill decisions without any human sign-off - makes many people uneasy. The Pentagon, naturally, dodged a reporter's question about whether the Valkyrie aircraft could eventually have this capability.
Similarly, I can imagine that within the next decade or two (before the end of my career), AI could be developed to perform many of the basic functions of a physician assistant in primary care: ordering recommended screening tests and vaccines, titrating medications for hypertension and diabetes, and deciding whether or not to prescribe antibiotics or antiviral drugs for patients with acute respiratory illnesses. Physician supervision would probably consist of reviewing charts and signing off on them at the end of a clinical session rather than double-checking the AI's decisions in real time. Would that mean that AI would be autonomously practicing health care? Sure it would. Would this application be easier or harder to adjust to than formations of armed Valkyries using machine algorithms to identify enemy personnel and shoot to kill?