Yes to AI in medicine – but only with the Geneva Pledge

Panel: “Ethical Responsibility in Algorithm-Based Medicine”
Yes, answer the three doctors on the panel “Ethical Responsibility in Algorithm-Based Medicine”: the medical profession will increasingly depend on artificial colleagues. For the benefit of patients, however, doctors must be able to rely completely on the trustworthiness of AI systems.
To kick off the panel, moderator Prof. Dr. Ivica Grgic showed an experiment. The nephrologist from the University Hospital of Gießen and Marburg played a one-and-a-half-minute clip in which an avatar he had created delivered a devastating cancer diagnosis to a fictitious patient. “You are not alone with this diagnosis,” the avatar tells the terminally ill patient at the end of the consultation, adding that “we” – that is, the medical team – would ensure the best possible treatment.
Where doctors cannot be replaced
The audience in Box 2 of DMEA hall 6.2 found the scenario quite unsettling, as a live poll showed: “eerie”, “untrustworthy” and “cold” were among the keywords in the reaction chat. “I would still feel alone with the diagnosis,” read one response. The three medical experts on the panel were also somewhat shaken by the scenario.
“AI can communicate facts, but it can't respond to the shock a patient experiences in such a moment,” noted Dr. Irmgard Landgraf, who runs a family practice in Berlin and, as a board member of the Berlin Medical Association, deals with medical ethics in digital medicine. AI can, for example, give patients a good overview of treatment options. But when it comes to people and their feelings, “there is no substitute for us doctors”.
There is no “we” between humans and AI
“AI cannot be empathetic; it can only pretend to be empathetic, and that is a purely superficial staging,” clarified Prof. Dr. Martin Christian Hirsch. He heads the Institute for AI in Medicine at the University of Marburg and co-founded the diagnostic app Ada Health. AI also has no understanding of the things it talks about. That is why it should never say “we” and should not create a sense of “we” in its human counterpart. “There is no we,” emphasized Hirsch. Humans share feelings and needs. “When an AI avatar tells me, ‘We will find a solution,’ that is wrong, because it comes from a completely different kind of entity than me. I'm sitting here in the physical world; it exists in RAM. Where is the we?”
With seals and controls: Europe's opportunity in the medical sector
In response to such scenarios, the doctors on the panel unanimously demanded that IT developers in the medical field be bound by the Geneva Declaration – in other words, required, like doctors, to make a de facto commitment to always putting the patient's well-being first. “In my opinion, ethics in medicine will become a business model,” said Dr. Christian Becker, cardiologist at the University Medical Center Göttingen and spokesperson for the Young DGIM working group (German Society of Internal Medicine). “In any case, I would have qualms about using an ethically unframed AI that is allowed, for example, to prescribe medication.” With the AI Act, Europe has the chance to be at the forefront of developing trustworthy AI.
Irmgard Landgraf added that she could well imagine working with an AI colleague in her GP practice, one that would have the latest expertise at the ready for her and her patients. But such an AI would have to be ethically trustworthy – ideally certified with a seal and subject to regular controls.