Lab tests and scans interpreted by AI? These Penn doctors are researching the good — and bad — ways to use AI in health care
Can patients depend on sites like ChatGPT and Google Bard for medical advice?
Artificial intelligence is already forging a path in health care: Robocalls and texts now remind you of upcoming appointments, virtual nursing assistants submit requests for insurance pre-authorization, and bills are automatically generated.
But can patients depend on sites like ChatGPT and Google Bard for medical advice? That’s what Samiran Mukherjee, chief fellow in gastroenterology at the Perelman School of Medicine at the University of Pennsylvania, wanted to find out.
In February 2023, about three months after ChatGPT’s launch, he was part of a team that tested whether ChatGPT was an effective tool for providing medical information. The group, led by Michael L. Kochman, a gastroenterology professor, published their results in the medical journal Gastro Hep Advances in July.
The Inquirer spoke with Mukherjee about the potential ways AI can benefit health care professionals and patients.
How can AI be an effective educational tool?
ChatGPT draws on a wealth of data, so vast that its ability to produce answers that mimic human conversation is astounding, Mukherjee said.
“We wanted to see how lifelike the answers were and if they were accurate,” he said.
Their questions about colonoscopies got mixed results. The answers were easy to understand and read as if they were coming from a human, but some of the information was incorrect. AI did well with simple, single-objective questions but struggled to answer two-part questions accurately.
“To a patient who doesn’t understand what a procedure might be, it would be a good starting point to gather some general information,” he said.
When is it better to ask ChatGPT instead of Google?
When people search for information about a series of symptoms, Dr. Google often offers up an alarmist (and unlikely) diagnosis, which can feed anxiety. For instance, when someone looks up abdominal pain, Google spits out “peritonitis,” a surgical emergency, as the second option. ChatGPT does a better job of itemizing the possible causes, such as a stomach virus or constipation.
ChatGPT and Google Bard may be good alternatives because they are more likely to put the most relevant information first, he said.
Still, it’s best to check in with your doctor if you have questions about your health.
A report from Harvard found AI could reduce treatment costs by 50% and improve health outcomes by 40%. What’s the outlook for AI being used to read and interpret tests such as CT scans or MRIs?
Trials are currently exploring this question, he said. One of AI’s strengths is recognizing patterns, and computers can process information much faster than humans, so using AI to read these types of scans is plausible.
“Though you can never replace the gut feeling that comes with years of experience, AI could provide a very good collaborative tool,” said Mukherjee.
For example, he can imagine AI reading an EKG, a test that records the electrical signal from the heart. EKG results appear as a line on a piece of paper, and that line’s pattern suggests different underlying heart issues.
Will AI ever be able to operate independently in health care, or will it always need to work alongside a health care professional?
Although machines can make very accurate predictions, they can never replace human professionals, he said. Their usefulness lies in corroborating human judgment, not supplanting it.
“The counseling that comes along before or after [a diagnosis] are things only humans can do,” he said. “Empathy cannot be replicated, and is the hallmark of the patient-physician relationship.”
What are the risks of AI that patients and medical workers should be aware of?
With any new technology, encouraging use is good, but patients and providers should be careful not to overestimate the results, he said. In Google’s early days, the search engine often responded to a query about medical symptoms with a list of potential diagnoses (some of them scary) without any context.
“Humans are emotional creatures and whatever is the scariest will stick,” he said. “A similar situation may happen if these tools are used unchecked, without the proper education and counseling.”
Robust rules being developed by the FDA, the World Association of Medical Editors, and other organizations are helping providers understand appropriate uses for AI.
“Adding new technologies into healthcare, where more patients can potentially benefit by lower barriers to access information and diagnoses, is the best way forward,” said Mukherjee. “With human collaboration, we can better understand the bugs and fixes, and take care of those things in future renditions.”