We must build trust in AI for use in health care
Of all the potential applications of artificial intelligence, its use as an agent of change in the health care industry could be the most profound. AI has the potential to streamline drug development, bringing essential medicines to pharmacy shelves in a fraction of the time it now takes.
Likewise, its deep-learning algorithms can interpret radiological scans such as ultrasounds and X-rays, yielding more precise diagnoses. AI also can help in the early detection of conditions such as heart failure and stroke, alerting medical teams to potential emergencies.
So why, then, was news of the first fully AI-designed drug reaching clinical trials met with expressions of both hope and apprehension from those who follow the health care sector?
Developed by Hong Kong-based startup Insilico Medicine, the drug, INS018_055, is seen as a potential remedy for idiopathic pulmonary fibrosis, a chronic disease that causes scarring in the lungs and can lead to death within five years if not treated. It currently affects about 100,000 people in the United States, according to the National Institutes of Health.
My work in biomedical informatics sits at the intersection of technology and health care. I have seen how computers, working around the clock, can speed the collection and processing of information and dramatically shorten drug development times. Our team at bukharilab.org studies algorithms that make medical information more accessible and builds trustworthy AI models for clinical applications.
I am sensitive to the wariness with which some approach greater reliance on technology in health care. The inequality between economies that can afford to deploy AI in health care and those that cannot is a legitimate public concern. So is data protection. As health care companies acquire and transmit large amounts of sensitive patient data via AI, they will inevitably become targets of cybercriminals. Public doubts about the industry’s ability to anonymize data and protect it from bad actors represent a significant (and legitimate) impediment to AI’s broad acceptance in the health care sector.
For those reasons, AI’s future in the industry depends almost wholly on the industry’s ability to cultivate public trust.
Unlike agriculture and auto manufacturing, two industries that have adopted AI with little public pushback, health care faces skeptics who see the technology as part of the continued erosion of the doctor-patient relationship. Computers, for all their capabilities, cannot offer the compassion of, say, an experienced fertility doctor guiding a couple through the challenges of conception.
Nor are computers likely to assuage the anxiety of a young child in the operating room for the first time, or of patients hospitalized for depression or other mental illness. In fact, “robotic” medicine could provoke in such patients reactions so extreme that they undermine treatment.
AI offers perhaps its greatest promise in the field of drug discovery. Even so, once AI has identified the compounds for a potential drug, manufacturers will need to take care to recruit the most diverse clinical trial population possible. If the testing population is biased or unrepresentative, the resulting predictions about the drug’s effectiveness could be inaccurate or unfair.
It would seem, then, that the best hope for the continued development of AI in health care is technology guided by a human touch. The public must trust in the ability of AI tools to produce accurate and reliable information, and believe that such information, gathered without prejudice, enhances a physician’s ability to care for patients.
This guest essay reflects the views of Syed Ahmad Chan Bukhari, assistant professor and director of healthcare informatics at St. John's University.