
AI in medicine – innovation and challenges

By Priscilla Lynch - 14th Aug 2024

Artificial intelligence holds immense potential to revolutionise healthcare, yet it also presents significant challenges as regulators race to keep pace with its rapid development. Priscilla Lynch reports

Dr Guido Giunti

“Artificial intelligence [AI] won’t replace doctors, but doctors who use AI will replace those who don’t,” Dr Guido Giunti, Digital Therapeutic Lead at Trinity College Dublin, and Adjunct Professor and Senior Researcher at the University of Oulu, Finland, told the Irish Neurology Association Annual Meeting earlier this year.

His comment was adapted from an original quote from the ‘medical futurist’ Dr Bertalan Mesko on AI potentially replacing radiologists. It neatly encapsulates the current view about the rapidly burgeoning role of AI in healthcare and medicine. In the last year, most major Irish medical conferences have featured sessions on the rise of AI in medicine, such is the interest level among the profession.

There are almost endless possibilities for AI in medicine, from diagnosing diseases, developing personalised treatment plans, and assisting clinicians with decision-making, to reducing the administrative burden, and accelerating research and drug discovery. But there are plenty of potential pitfalls too. These include concerns over accuracy, patient privacy, blurred boundaries, and ethical issues.

So how do we make the most of the transformative positive potential of AI in healthcare while ensuring patient safety and addressing ethical and privacy concerns?

Pandora’s box

AI is here to stay and is well on its way to transforming how healthcare is delivered for both staff and patients, “which we need to acknowledge and accept.” That is according to Prof Erwin Loh from the Monash Centre for Health Research and Implementation, Monash University, Australia, an international expert on the role of AI in medicine. He addressed the Irish Society for Rheumatology Spring Meeting earlier this year on the topic.

“Like all new major discoveries there is no closing the Pandora’s Box when it comes to AI. It is going to evolve and improve. We don’t know where it is going to end, and we’re now caught up in trying to grapple with that,” he told the Medical Independent (MI).

While AI has multiple “huge” potential positive consequences for the field of medicine, accountability and regulation are key, and countries need to catch up rapidly on this, he maintained.

“The main issue for governments and regulators is governance. How do you catch up and regulate something that everyone is already using without asking for permission?” Prof Loh said. He pointed out that AI applications are already being employed in hospitals and healthcare facilities in a number of different modalities, from diagnostics to healthcare administration. AI is also increasingly being incorporated into surgical and procedural tools, for example in endoscopy and robotic surgical systems.

“We need to make sure that AI systems are introduced in a safe way. How do we leverage their potential without creating patient safety issues? Regulators are now looking at AI software like a medical device… so there are existing processes to introduce new technologies and AI is going to have to be part of that, and incorporated into frameworks,” he said.

He added that this is what the newly published EU Artificial Intelligence Act (AI Act) aims to achieve.

One major challenge is ensuring the accuracy and reliability of the information provided by AI chatbots and large language models. He said these platforms are only as good as the data they are trained on; if disease prediction models are trained only on large datasets drawn from a single race or gender, for example, they will not be accurate for other populations.

“One reason people implement AI systems is the assumption that a machine does not have bias. Humans have bias – conscious, unconscious, and systemic. But AI systems have to learn from something to function, so they are learning from a set of data that could have bias in it. So, AI models are only as good as the data they are learning from and, unfortunately, the Internet has a lot of sources whose data may be skewed or focused on a population that cannot be generalised.”

Dr Giunti concurs with this view.

“We need to be aware of the underlying technologies and how they work to avoid this kind of thinking,” he told MI.

“There is this saying in research, ‘garbage in, garbage out’, which is very much true for AI as well. There are hidden biases that we are building into these algorithms that will affect the results we get. We need to approach AI with the same careful anticipation with which we approach getting into the sea – testing the waters with a tentative foot and being mindful of every step.”
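
The ‘garbage in, garbage out’ point can be made concrete with a small sketch. The following is a purely illustrative Python example on synthetic data (all names, groups, and numbers are invented for illustration, not drawn from any real system): a disease model trained on a cohort dominated by one group can look accurate overall while failing the underrepresented group it rarely saw during training.

```python
# Minimal sketch: a model trained mostly on one group can score well
# on that group while failing badly on an underrepresented one.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, slope):
    """Synthetic cohort (hypothetical): disease status depends on a
    biomarker, but the relationship differs between groups."""
    x = rng.normal(size=(n, 1))
    y = (slope * x[:, 0] + rng.normal(scale=0.5, size=n)) > 0
    return x, y.astype(int)

# Skewed training data: 95 per cent group A, 5 per cent group B
xa, ya = make_group(950, slope=1.0)
xb, yb = make_group(50, slope=-1.0)   # opposite relationship in group B
model = LogisticRegression().fit(np.vstack([xa, xb]),
                                 np.concatenate([ya, yb]))

# Evaluate on balanced held-out cohorts for each group
xa_t, ya_t = make_group(1000, slope=1.0)
xb_t, yb_t = make_group(1000, slope=-1.0)
print("accuracy, group A:", model.score(xa_t, ya_t))  # high
print("accuracy, group B:", model.score(xb_t, yb_t))  # near chance or worse
```

The specific numbers do not matter; the mechanism does. The model faithfully learns whatever pattern dominates its training data, bias included.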

Another challenge is ensuring the privacy and security of sensitive medical information, Prof Loh acknowledged, which is a key element enshrined in the new EU AI Act.

A further emerging issue is that, while humans currently have the final say in decision-making based on AI tools, what happens if AI systems are allowed to make autonomous decisions in the future? With robotic surgical systems, for example, what happens if a system detects a tumour and decides to remove it, but something goes wrong?

Prof Loh said these questions have yet to be answered: “Who is accountable? Who do you sue if something goes wrong?”

Dr Giunti agrees that the regulation of AI is a complex yet essential endeavour.

“Regulating AI is a bit of a double-edged sword. On the one hand, you need to protect the population from unethical use of the technology, as this is an area where we, the people as a whole, are not really prepared. On the other hand, AI is right now part of an ‘arms race’ where lagging behind could mean ending up at a severe disadvantage,” he told MI.

“To put it in perspective, in the early 2000s sequencing your own genome became available to consumers. You could take a swab and then find out your ancestry and whether you had risks for high blood pressure, etc. Many people found lost relatives and sought care because of this information. Now, 20 years later, that information has been used to create targeted attacks against minorities. The dangers are very real on both paths, since we cannot predict how technology will evolve.”

Prof Loh also described the rapid pace of development in AI technology as “an arms race”, which necessitates countries adhering to the same set of general rules and having appropriate “safety guardrails”. He noted that the United Nations (UN) has now waded into the discussion. In March, the UN General Assembly adopted a landmark resolution on the promotion of “safe, secure, and trustworthy” AI systems that will also benefit sustainable development for all, emphasising the need for the respect, protection, and promotion of human rights in the design, development, deployment, and use of AI.



Regulation

It is clear that traditional policy instruments, such as legislation and guidance, are struggling to keep pace with rapid advancements in highly innovative AI technologies. It is equally clear that they need to catch up.

On 1 August 2024, the world’s first comprehensive AI law, the aforementioned EU AI Act, came into force. The overarching aims of the Act are to address potential risks to citizens’ health, safety, and fundamental rights, and to provide developers and deployers with clear requirements and obligations regarding specific uses of AI, while reducing administrative and financial burdens for businesses. The AI Act applies a risk-based approach, dividing AI systems into four risk levels: unacceptable, high, limited, and minimal. The requirements of the Act are being phased in over the next two years, with bans on AI applications deemed to pose an unacceptable risk taking effect within six months.

Much remains unknown about the Act’s final interpretation and implications, as ways must be found to apply it alongside pre-existing single-sector legislation. The European AI Office, established in February 2024 within the Commission, is overseeing the AI Act’s enforcement and implementation with member states.

A report prepared for the European Parliament, while the details of the Act were being thrashed out, identified seven main risks of AI in medicine and healthcare: 1) patient harm due to AI errors; 2) misuse of medical AI tools; 3) bias in AI and the perpetuation of existing inequities; 4) lack of transparency; 5) privacy and security issues; 6) gaps in accountability; and 7) obstacles to implementation.

So does the Act mitigate these risks, ensuring patient safety while safeguarding innovation?

Mr Thomas Regnier, Spokesperson for Digital Economy, Research and Innovation, European Commission, told MI that the regulation aims to establish a harmonised internal market for AI in the EU, encouraging the uptake of this technology and creating a supportive environment for innovation and investment, including within the health sector.

“The European Health Data Space will facilitate non-discriminatory access to health data and the training of AI algorithms on those data sets, in a privacy-preserving, secure, timely, transparent and trustworthy manner, and with appropriate institutional governance,” he said.

Mr Regnier explained that while most AI systems will pose low-to-no risk, certain AI systems create risks that need to be addressed to avoid undesirable outcomes. AI systems identified as high risk will be required to comply with strict requirements, including risk-mitigation systems, high-quality data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy, and cybersecurity.

The healthcare sector is significantly affected by the Act, particularly due to the inclusion of several ‘high-risk’ healthcare-related categories. These include systems to evaluate a person’s eligibility for public healthcare services, systems to determine the pricing of health insurance, triage systems to analyse emergency calls, and AI systems for biometric categorisation.

The AI Act introduces specific transparency obligations to ensure that humans are informed, when necessary, about the presence or use of AI (ie, when using AI systems such as chatbots, people should be made aware that they are interacting with a machine so they can make an informed decision to continue or step back). Providers also have to ensure that AI-generated content is identifiable.

Of note, the Act does not apply to AI systems developed and used solely for the purpose of scientific research or for personal, non-professional activities. The Act also ensures that AI systems used as medical devices comply with both the AI Act and existing medical device regulations. The European Medicines Agency and the Heads of Medicines Agencies have begun preparing to support the implementation of the Act and are developing guidance this year on the use of AI across the medicines lifecycle, including for specific domains such as pharmacovigilance. They will also establish an AI observatory.

Further legislation

The benefits and challenges of using AI in pharmaceutical discovery and development are also a hot global topic, one that will see further developments, as well as further legislation and guidelines, in the coming years.

In March this year, the World Health Organization (WHO) published a discussion paper examining the expanding application of AI to each step of the development and deployment of medicines and vaccines.

The WHO said that while it recognises that AI holds great promise for pharmaceutical development and delivery, it also presents risks and ethical challenges that must be addressed if societies, health systems and individuals are to fully reap its benefits.

AI is already used in most steps of pharmaceutical development and, in the future, it is likely that nearly all pharmaceutical products that come to market will have been “touched” by AI at some point in their development, approval, or marketing, the WHO noted. Although these uses of AI may have commercial benefits, it is imperative that the use of AI also delivers public health benefits and is subject to appropriate governance, the WHO maintains.

Where to start?

Where should clinicians and health organisations, captivated by the potential of AI yet daunted by its complexity and potential pitfalls, begin?

AI-based IT systems can support frontline clinicians in very practical ways, automating administrative tasks and generating discharge summaries, thereby helping to reduce the now almost overwhelming burden of paperwork, Dr Giunti commented.

“I think that in the short term it’s very promising to think about using AI to automate processes like medical charting and decision support. Healthcare professionals spend up to four hours per week on average writing clinical documentation – it is actually a common complaint from patients that doctors sometimes don’t look up from their screens to talk to them. This is time that could be better spent focusing on the patients. Coupling AI to this process can improve decision-making by providing suggestions that the professional may not have considered and can ultimately decide whether to follow.”

Mr Arthur Cummings, Cataract and Refractive Surgeon at the Wellington Eye Clinic in Dublin, and Associate Clinical Professor of Ophthalmology at University College Dublin, discussed the use of AI at the Irish College of Ophthalmologists 2024 Annual Conference. In particular, Mr Cummings focused on how AI can help make medical practice more sustainable during a time of global healthcare worker shortages and ever-increasing care demand.

He told MI that, from his own experience, “AI can be extremely useful” for clinical note taking and summarising patient consultations.

AI can also help reduce the administrative burden of making patient appointments, as well as organising and analysing patient data and tests to help guide clinical decision-making. He said the AI-based system he uses in his own practice is “fantastic” and can distil a long consultation down to its essentials, removing superfluous information.

Mr Cummings said doctors need to educate themselves on the use of AI in their particular field. For those who are overwhelmed, he suggests starting with AI-based medical administration tools such as those that provide consultation summaries.

While AI-based technologies can help doctors better manage their time and resources at an individual level, they can also have a positive impact at system level. Dr Giunti said healthcare systems have started using AI to analyse huge amounts of data to try to better deploy and manage resources. “We are already using rudimentary forms of AI to look for factors and relations between relapses, hospital admissions, and disease progression. As we move onwards with this approach, the challenge will be to redesign the care pathways so that they actually account for these innovations. Knowing that a patient has a 70 per cent chance of having a complication means very little if we don’t have a process to act when that gets flagged.”
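
Dr Giunti’s last point lends itself to a simple sketch. The Python example below is entirely hypothetical (the threshold, identifiers, and care action are invented for illustration) and shows the minimal shape of what he describes: a predicted risk score only becomes useful once crossing a threshold is wired to a defined step in the care pathway.

```python
# Minimal sketch (hypothetical names and threshold throughout):
# a risk score means little unless crossing a threshold triggers
# a concrete, pre-agreed step in the care pathway.
from dataclasses import dataclass
from typing import Optional

COMPLICATION_THRESHOLD = 0.7  # assumed policy threshold, not a clinical standard

@dataclass
class RiskAlert:
    patient_id: str
    risk: float   # model output, e.g. 0.72
    action: str   # the pathway step this alert maps to

def route_risk_score(patient_id: str, risk: float) -> Optional[RiskAlert]:
    """Map a flagged score to a defined action, or to nothing below threshold."""
    if risk < COMPLICATION_THRESHOLD:
        return None  # no pathway step defined below the threshold
    return RiskAlert(patient_id, risk, action="schedule early clinical review")

alert = route_risk_score("patient-123", 0.72)
if alert:
    print(f"{alert.patient_id}: risk {alert.risk:.0%} -> {alert.action}")
```

The code is trivial by design; the hard part in practice is agreeing what the `action` should be and resourcing it, which is exactly the pathway-redesign challenge Dr Giunti describes.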

At the patient level, he believes AI is going to be particularly helpful in increasing self-management of health. “I think that in the long run, the most promising role [for AI] is in providing personalised recommendations to manage your own health. Medicine approaches people from a population perspective: Of all the people with condition X, 80 per cent experience symptom Y. This is very useful for us in terms of organising healthcare, but from the patient’s perspective it is lacking because it’s often not so clear-cut. Having a specialised AI that knows you as your health companion can be a real game-changer for most conditions.”

‘Dr AI will see you now’

There have been many pronouncements about artificial intelligence (AI) eventually replacing doctors, particularly in fields like radiology, pathology, and dermatology, where AI’s diagnostic and pattern-recognition abilities can match or exceed those of experienced specialist clinicians. In assessing X-rays, retinal scans, blood tests, and other investigations, as well as making the correct diagnosis based on detailed patient histories, large language models (LLMs) have demonstrated enormous capacity and impressive accuracy.

Generative AI LLM systems are now able to pass the US Medical Licensing Exam without any human input, while displaying valid clinical reasoning and insights, and scores are improving as the systems develop: early systems initially achieved the passing mark of 60 per cent, whereas the expert-level medical LLM Med-PaLM 2 more recently scored 85 per cent on the same exam and GPT-4 scored 90 per cent, outperforming the other models.

Nevertheless, it is unlikely that AI will completely replace doctors anytime soon. The human aspects of care, including empathy, compassion, critical thinking, and complex decision-making, are invaluable in providing holistic patient care beyond diagnosis and treatment decisions.

That said, a number of recent studies have found that AI chatbots score higher on perceived empathy in their responses to patients, with patients preferring to open up to virtual chatbots rather than to real clinicians or counsellors. Meanwhile, machine learning systems have proved adept at identifying psychosis and trauma, and show promise as a tool for enhancing suicide risk prediction through more accurate algorithms.

A recent study published in JAMA Oncology compared responses from a number of different AI chatbots with those of verified oncologists to 200 patient questions about cancer posted on a public online forum; the best-performing chatbot’s responses were consistently rated higher for empathy, quality, and readability. The use of AI chatbots to communicate with and counsel patients given serious medical diagnoses is currently being explored and trialled in many healthcare systems.

However, Dr Guido Giunti, who specialises in digital health design and development, urges caution about overestimating the potential of AI LLMs and associated technologies. “A common mistake that we make with technology in general is to misunderstand its potential and its limitations. I’ve seen many people treating solutions like ChatGPT as their own personal oracles, asking questions with the expectation of receiving revealed truth,” he said.

“Others get upset because the responses they got were wrong, and then they swear off using it. The reality is that these large language models are just word calculators; they estimate the chances of one word appearing next to another according to their database.”
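
A toy example can illustrate what Dr Giunti means by a ‘word calculator’. The bigram model below is far simpler than a real LLM, which uses neural networks trained on vast token corpora rather than raw word-pair counts, but the underlying idea of estimating which word is likely to follow another is the same (the mini corpus is invented for illustration):

```python
# Toy illustration of the "word calculator" idea: estimate the
# probability of the next word from counts of word pairs in a corpus.
from collections import Counter, defaultdict

corpus = ("the patient reports chest pain . "
          "the patient reports mild headache . "
          "the doctor reviews the chart .").split()

# Count how often each word follows each other word
pairs = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    pairs[w1][w2] += 1

def next_word_probs(word):
    """Probability distribution over the words seen after `word`."""
    counts = pairs[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("patient"))  # {'reports': 1.0}
print(next_word_probs("reports"))  # {'chest': 0.5, 'mild': 0.5}
```

Like an LLM, this model has no concept of truth: it can only reproduce the statistics of whatever text it was fed, which is why treating such systems as oracles is a mistake.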

This project was funded by the International Center for Journalists through the Health Innovation call.
