
Dr. Loh: ChatGPT and your healthcare — distinguishing facts from fiction

ChatGPT is the hottest item in technology and seems to be ubiquitous. It is also a bit overhyped, so I thought it might be worthwhile to pause and help sort out some facts from fiction, which, as it happens, is a primary issue with ChatGPT itself.

First, a bit on validation. Most readers know that I am a cardiovascular disease management physician, which is another way of saying I am a cardiologist who focuses on the assessment and management of complex cardiovascular disorders. I’ve been doing that for over four decades in private practice. You also likely know that I have been a clinical researcher since my days at the NIH, a role that carried through my faculty position at Cedars and my part in founding the Ventura Heart Institute. I’ll discuss all that soon in another column, but this one is about ChatGPT.

Fewer of you know that for the last decade, I’ve also been the chief medical officer and a co-founder of an artificial intelligence in healthcare company incorporated in the U.S., but based in the European Union. We are deployed in 30 countries and operate in 20 languages, and our focus is on clinical decision support. Again, off topic, but the point is that over the years I have given lectures on artificial intelligence and the future of medicine at national meetings, to specialty organizations, and for hospitals.

At the end of last year, ChatGPT was released, and it was apparent to me that this was a threshold to a new world, warts and all. So I have incorporated a discussion of generative artificial intelligence and large language models into my core lecture for healthcare professionals, but realized that the public might benefit from an overview. This will necessarily be very short and overly simplified, but it will hopefully provide a framework for you when you read about it.


GPT stands for generative pre-trained transformer. Generative AI means that it creates new text (or whatever else) when given a “prompt,” or request. Pre-trained means that it draws on huge sources of information, basically the open internet and Wikipedia. ChatGPT, released at the end of November 2022, was built on a model with 175 billion parameters, but its training data stops in 2021. GPT-4, just released in mid-March, has reportedly been trained on trillions of data points and has a few other tricks, but the company that created it is being more secretive about exactly how it was trained. The transformer is a very special type of neural network, an architecture for something called deep learning, which in turn is a strategy for machine learning. Google published the transformer design in 2017, and the machine learning world changed in an instant.

A transformer operates like a “fill in the blank” game more powerful than you can probably imagine. Based on its enormous training, and paying “attention” to the context, it guesses the next word in a sequence. Because it has been trained on essentially everything, it gets very good at figuring out the mathematical relationships between words in a given setting. It is very fluid and fluent in its output, which is part of its charm, and central to its danger. It does not understand what it is putting out; it just seems as if it does. If it doesn’t have a good match, it makes one up, or “hallucinates,” becoming a bit delusional, but it sounds good while doing it. In that regard, it simulates some politicians, and some people we all know. But I digress.

It also learns bigotry, racism, and intolerance, because all of that is out there on the internet. Microsoft pulled its early chatbot, Tay, a few years ago because it had learned those negative behaviors from its users. Facebook pulled its own model for the same reason. But they’re all getting better, and, of course, the newest versions from these tech companies have been released with ostensibly better guardrails for evaluation and improvement.

The CEO of OpenAI, the company that created ChatGPT, has from the beginning been very honest about this technology. He has stated again and again that ChatGPT is imperfect and will make mistakes, and that it should not be trusted implicitly for things that really matter. It’s one thing to get a fact wrong, especially since it can be prompted to argue both sides of a question with equal alacrity and conviction. But in healthcare, the wrong answer can be fatal. I give examples of that in my lectures, but will not here. You should simply believe OpenAI’s CEO and their technology folks: it’s a fantastic technology that we as humans will have to help improve. The just-released GPT-4 is an iterative improvement over GPT-3.5, with new capabilities and, at times, jaw-dropping output.

In healthcare, these large language models will be the interface through which not only clinicians, but patients themselves, will interact with the huge amount of healthcare data that exists online. Unlike a Google search, the results are not lists of websites; the output is delivered in a conversational format and structured to be compelling. But depending on the questions asked, the results may be conflicting. That’s one of my biggest worries. These technologies can be used to spread misinformation and conspiracies even more effectively than those that have already been created by people who seek to deceive for power or profit. Remember, these technologies are not alive or thinking; they just fake it.

The caveat, and it’s a big one, is that in healthcare, your life may be on the line. So use these new tools for information gathering, but do not act on that information until you check with your doctor, whom you presumably trust.

Irving Kent Loh, M.D., is a preventive cardiologist and the director of the Ventura Heart Institute in Thousand Oaks. Email him at drloh@venturaheart.com.
