Should Medicine Be Afraid of AI? How to Avoid Being Replaced by Artificial Intelligence

- Does the use of artificial intelligence in medicine carry more risks or benefits for humans?
- How does its ubiquitous use affect communication and the practice of medical professions?
- This is, among other things, the topic of the interview with Dr. Zbigniew Kowalski, an experienced lecturer, consultant, and medical communication trainer.
Luiza Jakubiak, Rynek Zdrowia: You are an experienced lecturer and trainer of medical communication. Do you see any interest in artificial intelligence during your workshops for medical professionals? How might it change communication and influence the medical profession?
Dr. Zbigniew Kowalski: Are we talking about fear associated with the development of artificial intelligence, or fascination with it? Because both phenomena occur.
Is this fascination mixed with concerns? Artificial intelligence is already present in journalism and is sometimes helpful, but it could potentially replace humans in certain tasks and eliminate them from the workforce.
Let me start by providing a broader perspective. I'm a middle-aged man in my fifties, and I decided not to miss out on the "hype" (I have teenage children, so I know the language; laughter). I signed up for a course, which I successfully completed. I think this gives me a slightly better understanding of AI than the average person. Since then, I divide people into those who use AI and those who don't. There's a radical difference.
For many people, "using artificial intelligence" means installing the ChatGPT app and typing queries into it just as they would into Google. Let's be honest: this has little to do with realizing the capabilities and potential of artificial intelligence. In his view, Google's search engine is actually better suited to exploiting language-model mechanisms than ChatGPT used this way. So it may be better to type such questions into Google than into ChatGPT; there's a chance of fewer hallucinations, though no guarantee.
People use artificial intelligence every day because it's ubiquitous. For example, when watching Netflix, although it would be more accurate to say that it's the artificial intelligence that benefits us, as AI algorithms suggest films we might like. When shopping online, AI manages our shopping journey, learning our tastes and habits, and over time, relieving us of the need to think. Even some home air conditioners are powered by artificial intelligence, which analyzes the best temperature settings to save energy.
However, artificial intelligence can also be used consciously and for specific purposes. Few people do this, and only they could pose a threat to various professions. To return to the fear that language models might replace journalists: it's a bit like today's news media, which have lost their purpose not because they are poorly run, but because they present information that was already available on X. If I've already read everything on X, I see no reason to use the news media.
Whether in medicine, journalism, or any other field, people who can use artificial intelligence and do so consciously can stay a step ahead, provided they don't let it replace their own work. People who use artificial intelligence in the simple, imitative way described above necessarily take a derivative approach; they are imitators. Their work will be easier to replace, because life is about creating value in whatever field we engage in. Creating value is a necessary condition for development.
For example, physicians either create value in their interactions with patients, or they fail to when they behave (read: communicate) inappropriately. If a physician can't explain something but ChatGPT can, the average patient will ask themselves: why do I need a physician?
We are outraged by something new that we do not understand
Is it about always being one step ahead of what artificial intelligence can offer today?
No, you don't have to be. Medicine is the science I love more than life itself because it has always been at the forefront and one of the fastest-developing fields of science.
Even when I recall my first contacts with the medical world 30 years ago, and what medicine was like then and what it is like now, there's a huge gulf. However, this doesn't mean that we have to be exceptionally innovative when practicing medicine. Moreover, from Everett Rogers' research, we know that in medicine, as in every other area of life, 2.5 percent of mad innovators drive it forward.
Medicine is developing rapidly not because everyone wants to be one step ahead, but because there are 2.5 percent of medics who are crazy and dictate the pace for everyone else. It's the same in every other area of life.
Let me tell you something interesting. I often use this example because it's so graphic for me. I remember vividly going to a press conference at the Polish Press Agency (PAP), where then-Deputy Minister of Health Janusz Cieszyński presented the idea for e-prescriptions.
For the first time in my life, I was thrilled by any idea from the Ministry of Health. Everyone was thrilled, by the way. Everyone except the doctors. Medical representatives present at the conference and participating in the debate said it was simply impossible, that patients didn't know anything about computers, phones, or the internet, and would never accept it.
I wondered why the medical community was so inhibited, causing this whole revolution to happen somewhat in the background. Today, it's hard to imagine life without e-prescriptions. What's more, we all know how simple it is.
We become outraged by something new we don't understand. If we were more open-minded, willing to understand, we might ultimately find that we can do something effortlessly.
In my opinion, the same is true with artificial intelligence. In response to your question about whether that means medical professionals must be progressive and ahead of others, I answer no. They just need to be open-minded and not reject everything new by definition.
What if an AI doctor makes more accurate diagnoses than a human? Could there be a competition between humans and technology in terms of competence?
It's a matter of usurping territory. Various studies have recently appeared in the media, showing, for example, that artificial intelligence is more effective than a doctor or the human eye in making diagnoses based on images.
Does AI make mistakes? Yes, it does, but it errs in a way that's safer for the patient, at least physically, though perhaps not psychologically. Artificial intelligence, for example, is more likely to suspect cancerous lesions where there are none, while humans are more likely to miss cancerous lesions that are actually there.
Secondly, we know from very interesting Polish observations that doctors who rely on artificial intelligence models during procedures lose their professional skills over time. After three months of working with AI support, the Polish doctors studied had become somewhat less skilled.
Why? Because a simple mechanism was at work: I no longer have to be as good as I was without AI. Paradoxically, this observation also shows how much AI helps: it lets the brain grow lazy, even against our intentions.
And you, as a potential patient, are not concerned about this?
I'll start with a digression. When I started driving in Poland thirty or so years ago, I used a road atlas and city maps. Today, I drive with navigation because it would be more difficult with an atlas. Today, my hippocampus functions a bit differently. The use of technology creates permanent changes in brain function.
This is a story about how our use of technology changes us; I can't say whether it's a good thing or a bad thing that it may be "making us stupid." But I do want to say that I'm not so concerned that a surgeon operating with AI becomes a slightly worse surgeon, as long as the AI is there. Ever since I got GPS and started driving with Google Maps, I've also become a less capable navigator. We all have navigation, we use it, we live our lives. It never occurred to me to worry about it.
If we have a doctor who does everything reliably, no language model will beat him
At some point, will we stop noticing that there is medicine and digital medicine?
Absolutely. A large part of our lives is already digital, and we'll easily grow accustomed to many technologies without even noticing it. Therefore, I see no reason to scare people with technology. As a user, it's better to focus on its usability rather than on how and what it does.
The second thing is that technology will ideally support us in situations where it can cope better than humans. There are situations where humans can cope better, unless they voluntarily give up, and that's where I see a threat. If humans believe their humanity isn't that important, a doctor, for example, will be easily replaced. However, if a doctor knows that medicine begins where humans meet humans—to quote Professor Szczylik—then technology doesn't pose a threat.
How else can we use AI in medicine?
There's a British study that examined chat conversations between patients and their general practitioners. In one case, the patient corresponded with a doctor; in another, with a language model posing as a doctor. Patients rated the language model significantly higher than the "live" doctor, both on competence and on "concern for my health."
Interestingly, and this is confirmed by other studies, patients have no problem interacting with technology, as long as they are aware of it. What they will not accept is being deceived.
As a result, the patient is convinced that the model performed better than a real doctor. Not only did it answer questions faster and more understandably, but it also gave the impression of caring more about the patient. I don't think this is about AI posing a threat to doctors. I think it's about doctors posing a threat to doctors. Because if we have a doctor who does everything well and reliably, no language model can beat them.
But humans are not perfect and mistakes will happen.
Mistakes will happen, but in interpersonal relationships, it's not even about mistakes, but about intentions and the ability to express those intentions. If a person truly cares about another person, mistakes made will be forgiven.
Is the word sorry enough?
We have an expert at the Polish Society of Medical Communication who specializes in mediation. His experience and research indicate that patients primarily want to hear the words "I'm sorry," while physicians believe patients primarily care about money.
In the United States, where law is based on precedent and earlier rulings bind subsequent ones, ever since a court ruled that the word "sorry" uttered by a doctor cannot be treated as an admission of guilt and used against them, doctors have begun apologizing freely, and the number of lawsuits has dropped dramatically. For many patients, an apology is all they wanted. Yes, that's the key.
Is there no fear that artificial intelligence will develop to such an extent that it will acquire the ability to empathize with people's emotions?
It would have to have consciousness, which it doesn't have yet. We will have what we create.
I'm a huge "Star Wars" fan and have always thought it was the best kind of science fiction imaginable. But I've noticed that, as long as cell phones didn't exist in our world, no one in "Star Wars" communicated that way either. It wasn't until cell phones became widespread in the late 1990s that instant messaging appeared in the prequel trilogy. It's similar in "Blade Runner": a movie about a future where cars fly, yet everyone smokes cigarettes and reads newspapers everywhere.
What are these two stories about? About how our imaginations are also quite limited, and how we overestimate ourselves as humans. This world is—let me remind you—driven forward by the crazy two and a half percent, while everyone else adapts.
Nothing can replace human-to-human contact. I believe that artificial intelligence models that support human-to-human relationships offer a huge opportunity for technological development. For example, an artificial intelligence model that listens to a doctor's conversation with a patient and takes notes, allowing the doctor to talk to the patient and look them in the eye, rather than typing data into a computer. This is the future.
Let us remember that we are not dealing with a democracy
What if humanoid robots stood at the patient's bedside, making accurate, rapid diagnoses, while also being patient and caring? I'm referring to the news about a Chinese hospital run entirely by AI. Although it's currently a virtual hospital, it could become a reality.
I'd like to point out one thing: China is a technological empire, and every day it surprises us with something that makes us feel a bit backward. However, while we are fascinated by the possibilities of Chinese technology, let's remember that we're not dealing with a democracy, but with a country that is also developing technology to increase control over its citizens.
Secondly, their level of ethical constraints and the resulting rights are completely different from our world. Therefore, Chinese technological innovations should be viewed with some caution.
Looking at any technology, including in medicine, one might also ask why the Chinese invented it and not us. It's because we primarily see countless legal and ethical restrictions, which means many of these solutions couldn't be implemented in Europe. The Chinese have no such restrictions and can do almost anything. Why are cardiac surgery procedures being tested so extensively in China, yielding so many new advances? Because there are no restrictions on access to organs there; there are many prisoners from whom organs can be harvested with impunity.
We shouldn't compare ourselves to China, because we have a completely different level of regulation. I remember being hired as a consultant to train an avatar to talk to patients in hospital emergency departments. Helenka, as I called her, was tasked with answering patients' questions about wait times in a way that would keep them waiting. I can say that teaching her to respond in a way that satisfied the patient was much easier than getting through the legal discussions, which are a safeguard we can afford here. In China, they don't have that problem.
Is there no risk that due to cost and optimization pressures, we will make certain technologies available over time, lowering legal or ethical standards?
Again, let me start with an example. There's a hospital that performs surgery with a robot costing several million dollars, where a single procedure costs tens of thousands of złoty. And the postman just brought me a registered letter from that very hospital. Who and what are we replacing with technology to cut costs? I see no reason why a hospital operating a multi-million-dollar robot should still be sending registered letters. But that's what happens. In a world where registered letters are still sent, I'm not afraid of doctors being replaced by technology to cut costs. I'd sooner fire the hospital's entire secretarial staff, because they're easier to replace without risk, than the doctors.
In my opinion, it is much easier, less risky and faster to look for savings in very simple functionalities rather than in the most complex ones.
Using the capabilities of artificial intelligence, I created an agent that can handle my emails. You sent me an email with a request for an interview and the topics you're interested in. I know that artificial intelligence could have responded to that email for me, because for AI algorithms that's no problem, and for me it saves time.
And who replied? You or the AI agent?
I assure you that the response you received in my email was written by me, not by artificial intelligence. It could probably do it more beautifully, but I'm not ready to take that risk yet. I'm a little worried about artificial intelligence algorithms reading and responding to my emails.
Dr. Zbigniew Kowalski, lecturer, consultant and trainer in medical communication, author and co-author of 14 books, member of the Polish Society of Medical Communication and the International Association for Communication in Healthcare.