Portraits of virtual nurses
www.hippocraticai.com (Screenshot)
2025-04-01 VDE dialog

Medical technology: Clever therapy assistants

Artificial intelligence has long been used in medical devices. However, the European AI Act and the increasing prevalence of self-learning algorithms are driving change in this sector. Supervisory authorities, too, will need to rethink their approach. And Europe must take care not to fall behind.

By Philipp Grätzel von Grätz

At the beginning of 2025, the US regulatory authority FDA announced that it had already approved over 1,000 medical devices that use artificial intelligence. In Europe there are significantly fewer, but there are also no reliable figures: “The EUDAMED database, which is supposed to create transparency for approved medical devices, is still not operational, which is very disappointing,” says Dr. Thorsten Prinz, Senior Manager Health at VDE, who knows the approval landscape for AI-based medical devices very well. For AI-based radiological devices, at least, there is an estimate: the Health AI Register website lists around 80 such tools with EU approval.

A male and female doctor discuss a mammography image

In addition to medical expertise: AI-based mammography software compares the smallest lumps and groups of calcifications, which can be precursors of cancer, with existing data from more than five million images and sometimes enables an earlier diagnosis.

| Siemens Healthineers

What is an AI-based medical device anyway? In imaging, for example, this refers to software that analyzes image data independently. In the past, these tended to be static applications using classic AI. These days, more and more complex AI models are being used. Olympus, Siemens Healthineers and Philips are active in this area, and new companies are entering the market. Modern echocardiography programs, for example, are impressive because they not only interpret large parts of the echo recording on their own, but also help to position the transducer correctly. Artificial pancreases for diabetes patients (automated insulin delivery, “AID”) are also AI-based medical devices. They are insulin pumps linked to continuous glucose monitoring (CGM). An algorithm first learns about the person's lifestyle and eating habits and then adjusts the insulin pump based on the CGM measurements – or at least makes a recommendation.
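The control loop behind such AID systems can be sketched as a toy: a CGM reading and its trend feed a rule that adjusts, or suspends, the insulin pump's basal rate. This is a purely illustrative sketch; all numbers (target range, projection horizon, correction factor) are invented for illustration, have no clinical validity, and do not correspond to any manufacturer's algorithm.

```python
# Illustrative toy model of one AID-style control step: a CGM reading and
# its trend are mapped to a basal-rate recommendation. All thresholds and
# factors are invented for illustration only.

def recommend_basal(cgm_mg_dl: float, trend_mg_dl_per_min: float,
                    basal_u_per_h: float = 1.0) -> float:
    """Return a recommended basal rate in units/hour (toy logic)."""
    # Project the glucose value 30 minutes ahead using the CGM trend.
    projected = cgm_mg_dl + 30 * trend_mg_dl_per_min
    if projected < 80:       # heading toward hypoglycemia: suspend delivery
        return 0.0
    if projected > 180:      # heading above target range: increase delivery
        return round(basal_u_per_h * 1.5, 2)
    return basal_u_per_h     # in range: keep the programmed rate

print(recommend_basal(150, -3))  # falling fast -> 0.0 (suspend)
print(recommend_basal(160, 2))   # rising -> 1.5
print(recommend_basal(120, 0))   # stable in range -> 1.0
```

A real system learns its parameters from the individual's data and history, as the text describes; the point here is only the shape of the loop: measure, project, adjust.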

Traditional AI is also behind AID systems. Medical device manufacturers are still holding back when it comes to the new AI models that can be trained for very different scenarios – such as the large language models (LLMs) made familiar by ChatGPT: “I’m not aware of any LLM-based medical device that has been approved to date,” says Prof. Stephen Gilbert, Professor of Medical Device Regulatory Science at the Else Kröner Fresenius Center at TU Dresden. However, this is likely to change, as LLMs are opening up new opportunities. Gilbert cites the example of virtual nurses who advise patients. The company Hippocratic AI is building such agents, and the WHO has also developed S.A.R.A.H., an avatar nurse who provides advice on mental health.

LLMs can also evaluate image and text data together and make treatment recommendations. LLMs are already in use in some places in the field of medical documentation. There are tools that generate doctor’s letters from the entries in an electronic patient file. And there are LLM-based AI applications that listen in on doctor-patient conversations and automatically generate documentation. As soon as such applications also provide recommendations for diagnosis or therapy, they would be deemed to be a medical device in Europe in accordance with the Medical Device Regulation (MDR) and would require approval.

If manufacturers want to develop an AI-based medical device, they face two challenges. They need data to train the applications. And they need a license to be permitted to market them. In addition to the MDR and the In Vitro Diagnostics Regulation (IVDR), the AI Act has been relevant for AI-based medical devices since summer 2024. “The MDR didn’t have AI on its radar until now,” says Prinz. The challenge now is to combine the requirements of the AI Act with the approval processes for digital medical devices in accordance with the MDR/IVDR so that no unnecessary bureaucracy arises. This is made easier by the fact that the AI Act explicitly refers to the MDR. AI-based software that the MDR classifies as a Class IIa or higher medical device is a high-risk application under the AI Act.
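The cross-reference described above can be shown schematically: software that the MDR places in Class IIa or higher counts as a high-risk AI system under the AI Act. The sketch below illustrates that rule with simplified class labels; it is not a legal classification tool, and real classification involves many further criteria.

```python
# Schematic sketch of the AI Act / MDR cross-reference described in the
# text: MDR Class IIa or higher -> high-risk under the AI Act. Class
# labels are simplified; this illustrates the rule, nothing more.

MDR_CLASSES = ["I", "IIa", "IIb", "III"]  # ordered by increasing risk

def ai_act_risk(mdr_class: str) -> str:
    """Map a (simplified) MDR device class to an AI Act risk category."""
    if mdr_class not in MDR_CLASSES:
        raise ValueError(f"unknown MDR class: {mdr_class}")
    # Class IIa and above trigger the high-risk category.
    return "high-risk" if MDR_CLASSES.index(mdr_class) >= 1 else "not high-risk"

print(ai_act_risk("IIa"))  # high-risk
print(ai_act_risk("I"))    # not high-risk
```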

Portrait of the Avatar nurse S.A.R.A.H.

Avatar nurse S.A.R.A.H. The system uses language models and modern technologies that enable interaction with users around the clock, in eight different languages and via any device.

| WHO Website (Screenshot)

As far as the requirements for manufacturers are concerned, there is considerable overlap between the AI Act and the MDR/IVDR, for example in risk management systems, quality management and labeling obligations. However, requirements in the areas of data governance and automated function monitoring are new and specific to the AI Act. Human oversight of the AI must also be supported technically. This is where the next distinctive feature of the AI Act becomes apparent – it applies not only to manufacturers, but also to operators of AI applications. A medical facility must implement human oversight of the AI-based medical device on site: “The bottom line is that the AI Act makes the safety of AI-based medical devices a joint task of manufacturers and operators,” says Prinz. Overall, he considers the additional effort to be relevant, but manageable: “Manufacturers who already have experience with software as a medical device are in a relatively good position, because they can use much of what they have already established.”

What about the approval of self-learning AI-based medical devices? In principle, according to Prinz, the AI Act stipulates that manufacturers will be allowed to modify applications that are already on the market within narrow limits. An algorithm that analyzes image or ECG data could in future be retrained on local data, for example, without the need for recertification. However, the respective manufacturer must have this approved in advance by the notified body as part of the CE conformity assessment. “The important thing here is that the system continues to learn in a controlled manner, and the manufacturer needs to know exactly what data goes into the system,” Prinz points out. “This has nothing to do with systems learning uncontrolled in the field.” An AI chatbot that uses an LLM to make therapy suggestions on such an uncontrolled basis would certainly not be certifiable under this model. “I believe that uncontrolled learning systems won't be around for the foreseeable future,” says Prinz.

Stephen Gilbert sees things differently. He fears that Europe could be left behind when it comes to AI-based medical devices. All the more so as the UK regulatory authority has been trying to establish an approval pathway for LLM-based medical devices since the end of 2024 (see box). The USA is hot on Britain's heels, and AI-based medical devices have an easier time there anyway. This is because decision-support software that only works with text data and is only aimed at doctors – not patients – does not require medical device approval under certain conditions. According to Gilbert, several LLM-based software solutions that would be classed as medical devices in Europe are already in active use in the USA. The best-known example is Glass Health, which makes specific suggestions for clinical management based on text data from patient records. Sooner or later, Europe will also have to think about how to deal with such applications. Waiting until the AI Act is fully implemented in 2027 is unlikely to be a suitable strategy if Europe wants to remain competitive.

Contact
VDE dialog - the technology magazine