
Welcome to the era of the chatbot

By Dr Muiris Houston - 02nd Apr 2023

For now, it is best to treat ChatGPT as an incomplete tool and to be aware of its dangers

I’m sure you have been subjected to the hype around the latest artificial intelligence (AI) tool to hit the headlines. ChatGPT is a natural language processing model that has been creating quite a stir since the US start-up OpenAI made the text-based dialogue system accessible to the public in November 2022. ChatGPT stands for Chat Generative Pre-trained Transformer. I am certainly not going to pretend to understand the computer science behind it, but let’s take a look at how it might impact on our professional lives.

Chatbot technology has the potential to help alleviate some of the causes of burnout by handling routine tasks, such as scheduling appointments and answering frequently asked questions. It can also automate patient triage and provide patients with self-help resources. Additionally, chatbot technology can provide physicians with more efficient communication and coordination between healthcare providers, allowing them to make more informed decisions and improve the overall patient experience.

Guess what? The entire paragraph above was written by ChatGPT, following a prompt offered by Alok Patel, a Medscape contributor. Aside from the most obvious implication – that my days as a medical journalist could be numbered as the technology is further finessed – how might it play out for doctors in our various roles?

Patel, a hospital paediatrician, spent some time exploring creative ways to use ChatGPT in a work environment. Typing phrases such as, “Provide instructions for giving albuterol at school,” and “Explain how to monitor blood glucose in type 1 diabetes”, produced thorough and credible instructions in each case.

As well as providing complete sentences and thoughts, ChatGPT provides academic references to back up its answers. Now this could be a slippery slope on one’s career path: On the one hand, it offers a way to put together genuine abstracts more quickly; on the other, there is the temptation to use the technology to create false, but real-sounding, research, complete with credible-looking references, in your own name.

Thilo Hagendorff, a post-doctoral researcher in machine learning at the University of Tübingen, Germany, made it clear at a recent press briefing that ChatGPT brings an array of ethical challenges in its wake, although he considers the “heavy focus on the negative aspects” to be “difficult”. Among the ethical issues already identified with the technology are discrimination, toxic language, and stereotypes. These arise because the language model is trained on real human language, and so reproduces the discrimination, toxic language, and stereotypes it contains.

Then there are information risks. ChatGPT may be used, for example, to find private, sensitive, or dangerous information; the model could be asked about the best ways to commit certain crimes. And there is no guarantee that these models generate only correct information. They can also deliver nonsensical output, since they work by calculating the probability of the next word.

ChatGPT has even been listed as a co-author of scientific articles – something editors of specialist journals have already clamped down on. Nature announced that AI will not be accepted as an author, and other journals, such as JAMA, followed suit. But they have not moved to ban the tool completely: Its use must instead be disclosed in the methodology section of a study.

Dr Marcel Scharth, an expert from the University of Sydney, says we can get better results from AI by developing our “prompt skills” – the skill of crafting an input to deliver a desired result from generative AI. “Despite being trained on more data and computational resources than ever before, generative AI models have limitations. For instance, they’re not trained to produce content aligned with goals, such as truth, insight, reliability, and originality,” he says.

Since it’s a chatbot, you may be inclined to engage with it conversationally, he observes. But this isn’t the best approach if you want tailored results. Instead, adopt the mindset that you’re programming the machine to perform a writing task for you and prompt accordingly, Scharth advises.
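To make that advice concrete, here is a minimal sketch of what “prompting as programming” might look like in practice, written against OpenAI’s Python client as it stood at the time of writing. The model name, prompt wording, and placeholder API key are illustrative assumptions for this sketch, not an example from Scharth himself:

```python
# A minimal sketch of "prompting as programming" rather than chatting.
# Assumes the openai Python package (v0.27-era API) and a valid API key;
# the model and prompt wording are illustrative, not prescriptive.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder - use your own key

# Conversational prompt: vague, leaves the model to guess audience and format.
chatty_prompt = "Can you tell me a bit about monitoring blood glucose?"

# Programmatic prompt: states the task, audience, format, and constraints,
# as if specifying a writing job rather than making small talk.
programmed_prompt = (
    "Write patient-facing instructions for home blood glucose monitoring "
    "in type 1 diabetes. Audience: newly diagnosed adults. "
    "Format: five numbered steps, plain English, under 150 words. "
    "Flag anything a clinician must verify before use."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": programmed_prompt}],
)
print(response.choices[0].message["content"])
```

The second prompt works like a task specification: it pins down audience, format, and length, leaving the model far less room to improvise than the conversational version would.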

Like most new technologies, it will take time to bed down and for sensible ways of working with it to emerge. In the meantime, it’s probably best to treat ChatGPT as an incomplete tool. At its worst, it’s capable of complete fabrication. One doctor described asking it to repeat a literature review she had already carried out herself; it came up with fake, but real-looking, references and fake, but real-sounding, information. This could be especially dangerous if you asked it for information to give to patients.

At this point, it is clearly a case of caveat emptor where ChatGPT and its use in healthcare is concerned.
