The values underlying new healthcare AI regulations and algorithms will mean more — or less — access to care for those who are now under-treated, experts say. Credit: Dreamstime/TNS

A majority of Americans would feel “uncomfortable” with their doctor relying on AI in their medical care, according to recent polling. But despite those misgivings, it is likely you have already encountered the results of artificial intelligence in your doctor’s office or local pharmacy.

The true extent of its use “is a bit dependent on how one defines AI,” said Lloyd B. Minor, dean of the Stanford University School of Medicine, but he said some uses have been around for years.

Most large health care providers already use automated systems that verify dosage amounts for medications and flag possible drug interactions for doctors, nurses and pharmacists.

“There’s no question that has reduced medication errors, because of the checking that goes on in the background through applications of AI and machine learning,” Minor said.
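In broad strokes, checks like these are rule-based lookups against curated drug databases, layered under the clinician’s order entry. The sketch below is a minimal illustration of the idea, not any vendor’s actual system; the drug names, dose ranges, and interaction pairs are invented for demonstration.

```python
# Illustrative sketch of a rule-based medication-safety check.
# The drug names, dose ranges, and interaction pairs are invented for
# demonstration; real systems query curated clinical databases.

SAFE_DOSE_RANGES_MG = {
    "acetaminophen": (325, 1000),   # hypothetical per-dose limits, in mg
    "warfarin": (1, 10),
}

KNOWN_INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}),  # hypothetical interacting pair
}

def check_order(drug, dose_mg, current_meds):
    """Return warnings for a proposed medication order."""
    warnings = []
    limits = SAFE_DOSE_RANGES_MG.get(drug)
    if limits and not (limits[0] <= dose_mg <= limits[1]):
        # Catches errors like a misplaced decimal point (10 mg entered as 100 mg).
        warnings.append(
            f"{drug}: {dose_mg} mg is outside the typical {limits[0]}-{limits[1]} mg range"
        )
    for med in current_meds:
        if frozenset({drug, med}) in KNOWN_INTERACTIONS:
            warnings.append(f"{drug} may interact with {med}")
    return warnings

print(check_order("warfarin", 100, ["aspirin"]))
```

Real deployments combine such hard-coded rules with machine-learning models trained on dispensing records, but the background-check pattern Minor describes is the same.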

Hundreds of AI-enabled devices have been approved by the FDA in recent years, mostly in radiology and cardiology, where algorithms have shown promise at detecting abnormalities and early signs of disease in X-rays and diagnostic scans. But despite new applications for AI being touted every day, a science-fiction future of robot practitioners taking your vitals and diagnosing you isn’t coming soon to your doctor’s office.

With the recent public launch of large language model chatbots like ChatGPT, debate over how the health care industry can ethically and safely use artificial intelligence has intensified. Among the concerns is that AI could deepen inequities that already exist in health care.

Narrow demographic group 

AI and algorithms “could replicate and amplify disparities … unless they’re recognized and responsibly addressed,” Minor said. For example, he cited data collected from the clinical trials that the FDA uses to approve drugs, the participants of which have “typically been whites of European descent.”

“If you train AI on a narrow demographic group, you are going to get results that really only apply to that narrow group,” Minor said.

Stanford is one of many institutions tackling the challenges and promises of artificial intelligence in the health care industry, and many of its researchers and experts have been at the forefront of these discussions for years.

Sonoo Thadaney Israni was co-chair of the Working Group of Artificial Intelligence in Healthcare for the National Academy of Medicine (NAM) when the group published a report on AI in 2019.

Start with 'real' problems 

“The wisest guidance for AI is to start with real problems in health care,” Israni and her colleagues wrote, like the lack of access to providers for the poor and uninsured, or the ballooning costs of care.

Will AI deliver better health, lower costs, a better experience for patients, improved well-being for clinicians and more equity in health care? Those are the key questions, Israni says.

But there is no central regulatory agency overseeing the boom in AI, and as a World Health Organization report points out, the laws and policies around the use of AI for health “are fragmented and limited.”

“The real question becomes not what should be the regulation, but what should be the values underlying those regulations,” Israni said. The Biden administration has said addressing the effects and future of AI is a “top priority.”

Systems that detect and prevent errors — checking for a misplaced decimal point in a dosage of medication, or automatically checking possible adverse drug interactions — are “clearly making health and health care safer,” Minor said.

But even those who embrace the new technology express concern about privacy and security, data quality and who is represented in the data.

Many experts and reports have warned that bias and disparities can be reflected and amplified when algorithms are used without careful consideration.

Black patients' needs

For example, a 2019 paper published in Science found that a commonly used algorithm to identify the sickest patients in hospitals underestimated the care needs of Black patients even when they were sicker than white patients.

“The bias arises because the algorithm predicts health care costs rather than illness,” the authors wrote, “but unequal access to care means that we spend less money caring for Black patients than for white patients.”
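The mechanism the authors describe is easy to see in miniature. In the toy sketch below, which uses synthetic data and invented numbers rather than anything from the study, two groups of patients are equally sick, but one has historically generated lower costs; a model that ranks patients by predicted cost will push that group down the priority list.

```python
# Toy illustration of proxy-label bias (synthetic data, invented numbers).
# A model that ranks patients by predicted *cost* under-ranks a group that
# is equally sick but has historically received less (and cheaper) care.

patients = [
    # (true illness burden, historical cost in $, group)
    (8, 12_000, "A"),   # group A: higher spending at the same illness level
    (8,  7_000, "B"),   # group B: same illness, lower historical spending
    (5,  9_000, "A"),
    (5,  5_000, "B"),
]

# The "cost predictor" stands in for the trained model: it orders patients
# by the cost label it learned, not by illness itself.
by_cost = sorted(patients, key=lambda p: p[1], reverse=True)
by_illness = sorted(patients, key=lambda p: p[0], reverse=True)

print("Ranked by predicted cost:", [(p[2], p[0]) for p in by_cost])
print("Ranked by actual illness:", [(p[2], p[0]) for p in by_illness])
# The cost ranking places every group A patient above the equally sick
# group B patients, even though illness burdens are identical.
```

The remedy the paper points toward is label choice: train the model on a direct measure of illness rather than on spending, so that unequal historical access to care is not baked into the prediction target.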

As AI applications spread throughout health care, Israni and her colleagues hope to avoid concentrating the benefits among those who already have the best access to care.

“Whatever we create has to work very well for the person with the least privilege,” she said. “We should not be building technologies that exacerbate inequities.”
