![AI technology helps with genetic code prediction research.](https://www.insideprecisionmedicine.com/wp-content/uploads/2025/02/Feb1_2025_GettyImages_1728138175_GeneticCodeAITech-696x464.jpg)
There can be little doubt that artificial intelligence (AI) is having a moment. From big wins at this year’s Nobel prizes to AI summaries topping search engine results, its reach stretches from cutting-edge science to the most mundane of Google answers.
Physicians have been particularly enthusiastic about the technology, likening AI’s transformative potential to that of gene therapy. It has quickly been absorbed into clinical practice, where it promises to improve diagnoses, support treatment, and enhance treatment delivery.
The great hope is that these machines, which mimic human intelligence and our mental agility in learning and problem-solving, will revolutionize patient care.
Indeed, the future looks promising. As diagnostic tools, these algorithms can read magnetic resonance imaging (MRI) scans and pathology slides at lightning speed. In genomics, they can trawl through vast genetic datasets and spot potential new medications, shaving years off of drug discovery pipelines.
![Janik Jaskolski](https://www.insideprecisionmedicine.com/wp-content/uploads/2025/02/janik-Jaskolski-e1739301371171-300x300.jpg)
Co-Founder and Chief Product Officer, Semalytix
“If someone gave me the option: Do you want your MRI to be diagnosed by a human or AI, I would always go for AI because it’s just so much better,” said Janik Jaskolski, co-founder and chief product officer of Germany-based Semalytix, which makes an AI tool offering patient-centric insights.
“If it’s a problem where you’re looking at huge data vectors with an appropriate algorithm, AI is always going to outperform, that’s just how it is.”
But there has been unease over the pace of uptake, with worries over AI being applied too widely without due diligence. Its increasing use means major changes for the healthcare workforce. And with signs that it can diagnose at least as well as clinicians in some specialties, there have been concerns over de-skilling and unemployment.
Added to this are issues around the protection of patient data, lack of personalization, ethical concerns relating to text generation, and inherent bias.
While these algorithms have been lauded for their consistency and lack of bias compared with humans, increasing evidence suggests that their performance can vary widely and may need tailoring to the idiosyncrasies of each healthcare system.
![Dana Edelson](https://www.insideprecisionmedicine.com/wp-content/uploads/2025/02/Dana-Edelson-e1739301493561-273x300.jpg)
Co-Founder and Chief Medical Officer, AgileMD
The old adage of garbage in, garbage out holds true for AI, according to Dana Edelson, MD, co-founder and chief medical officer of AgileMD, which provides an AI early warning score that predicts clinical deterioration in hospitalized patients.
She stressed the importance of the outcomes used in a model, the variables put into it, and the fidelity of those variables in clinical practice.
“It may be that you have a pristine dataset on which you train the model, but in reality, the clinical team doesn’t actually put the vital signs in, in real time,” she said.
“One of my favorite workarounds that I see in the hospital—and I say favorite with sarcasm—is the practice of writing vitals down on a paper towel or on your scrubs and then doing the whole floor and then going back in and entering them in at the end of the shift.” This changes everything, she explained. “If then you’re testing the model or you’re using the model in real time, well, the model didn’t have the actual vital signs available to it because they were never entered.”
Jaskolski agreed that the human element is key. “If you annotate data, you’re not just annotating data, you have to find experts that are capable of annotating the data in a certain way, and then you have to make sure that this is actually up to par. That you compare, for example, and you track what’s actually the human precision and recall for this problem. … You can’t expect AI to be 99% precise in a problem where humans are 30% precise.”
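Jaskolski’s point about benchmarking comes down to measuring a human baseline on the same task the model is asked to perform. The sketch below is a minimal, hypothetical illustration of scoring one annotator against expert-adjudicated labels to estimate that baseline precision and recall; it is not Semalytix’s pipeline, and the label names and data are invented.

```python
# Minimal sketch: estimate the human precision/recall baseline for an
# annotation task by scoring one annotator against an expert reference.
# Labels and data are hypothetical.

def precision_recall(predicted, reference, positive="symptom"):
    """Precision and recall for one label, treating `reference` as ground truth."""
    tp = sum(p == positive and r == positive for p, r in zip(predicted, reference))
    fp = sum(p == positive and r != positive for p, r in zip(predicted, reference))
    fn = sum(p != positive and r == positive for p, r in zip(predicted, reference))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# One crowd worker's labels vs. an expert adjudicator's labels for five text spans
crowd  = ["symptom", "comorbidity", "symptom", "other", "symptom"]
expert = ["symptom", "symptom",     "other",   "other", "symptom"]

p, r = precision_recall(crowd, expert)
print(f"human baseline: precision={p:.2f}, recall={r:.2f}")
# A model evaluated on the same reference is then read against this baseline,
# not against an absolute target such as 99% precision.
```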
It is a problem touched upon by Nolen Gertz, PhD, an associate professor of applied philosophy at the University of Twente in the Netherlands, who spoke at London’s HowTheLightGetsIn festival this summer. He maintained that when a machine simply reads data, what matters most is who labels that data.
“Right now, you have data centers in these third world countries—Kenya, Pakistan, Venezuela—where people are getting paid almost no money at all to go through massive amounts of data and do all the labeling and try to figure out, ‘Am I looking at a picture of a red shirt with white stripes or a white shirt with red stripes?’”
Gertz pointed out that every time a person goes through CAPTCHA security on the internet, where they identify all the bicycles in a picture, for example, they are labeling data in a way that AI is not able to do.
“When you think about how stupid a machine has to be to require you to go through and process that—am I looking at a nipple or am I looking at a raisin?—you have to go through and sort that out. That’s what we really mean by AI. … And then we say, okay, now look at MRIs and X-rays and tell me am I looking at cancer or not? But if you have people being paid almost nothing to label cancer, not cancer, then that’s really who’s making the diagnostics, not the AI.”
Jaskolski called this a huge problem that impacted Semalytix during the development of its AI tool PatientGPT, which uses online comments to help pharma understand the patient journey through disease. “When we started, it was actually quite a boom of crowdsourced annotation companies,” he recalled. “We tried using them and it was exactly the problem. They weren’t uneducated, but they were untrained in our domain, in the medical domain, and the data was just not usable at all. … We make a point of hiring people who have backgrounds in computational linguistics, in medicine, in pharmaceuticals, in health economics. So, they have already had three, five years of education in simply understanding what’s the symptom, what’s the comorbidity?”
This level of understanding is particularly important for a company that strives to see through the lens of patients and discover their day-to-day experiences, quality-of-life issues, and unmet needs.
“If you have cancer, then your number one priority is not to have cancer, but there’s a thousand other indications with reasonable prevalence where that’s not that clear,” said Jaskolski.
He offered his own childhood experiences by way of example. “A speaker box exploded next to me when I was still in high school, so I had hearing loss and that’s gone, but I still have tinnitus,” he continued. “And my doctors back then told me, ‘Well, we’re doing everything we can to restore your hearing in those two frequencies.’ And I couldn’t have cared less. I was worried that I can’t sleep and that I can’t go to college because I had so much trouble concentrating, and no one ever even listened to that.”
![Amol Verma](https://www.insideprecisionmedicine.com/wp-content/uploads/2025/02/Amol-Verma-MD-e1739301585997-300x300.jpg)
Asst. Professor, Dept. of Medicine
University of Toronto
Semalytix uses AI to see the world through the eyes of the patient. But, in a circular fashion, patient experiences are also increasingly shaped by AI. Chatbots powered by AI arrived in major search engines last year, widening the ability to engage in human-like healthcare conversations.
While Google’s Gemini—formerly known as Bard—regularly refuses to answer medical questions, Microsoft’s Bing chatbot has fewer barriers. But while a chatbot may seem more authoritative than a simple internet search, it can be just as unreliable and harder to assess for credibility, warned Canadian physician Amol Verma, MD, who studies healthcare data and AI use at the University of Toronto.
“It sounds like the voice of something that knows a little more definitively than a search engine that directs you towards a website,” he said. “You can then use other kinds of intuition or knowledge to assess the credibility of that website. It’s a bit harder to do that with a chatbot.”
![Wahram Andrikyan](https://www.insideprecisionmedicine.com/wp-content/uploads/2025/02/Wahram-Andrikyan-e1739301732819-253x300.jpg)
PhD student
University of Erlangen-Nuremberg
A recent study showed that the Bing copilot, or virtual assistant, can provide accurate and complete information to patients about drugs, but that its answers can also be hard to read and, in some cases, unreliable or potentially harmful. Wahram Andrikyan, from the University of Erlangen-Nuremberg in Germany, who was part of the study team, acknowledged that patients often need answers to questions that are not addressed in the health information they receive. AI-powered chatbots in search engines can help make this information more accessible to patients, he continued, particularly for those with limited access to healthcare providers.
Nonetheless, he advised caution. “Standard non-specialized chatbot models were never designed to provide or replace medical advice,” he stressed. “The quality and safety of future models intended for this use … should be assured by regulatory authorities and laws like the Medical Devices Regulation, to ensure compliance with data privacy and ethics standards.”
As these new technologies evolve, questions arise as to their management, autonomy, and liability if an AI inadvertently harms a patient. Last year, then U.S. president Joseph Biden announced a landmark executive order outlining goals to advance a coordinated and regulated approach to the safe and responsible development of AI systems and technologies. The European Union followed in July this year with its Artificial Intelligence Act, representing the first comprehensive legal framework on AI worldwide.
The issue is something that Semalytix takes seriously. “We’re a small team, yet we still have legal experts and data safety and security experts, which also wasn’t easy to set up, but that’s a big portion of what we do,” said Jaskolski.
He maintained that there are ways of making sure AI operates in an ethical and safe way. “There are technologies, decentralized learning and such, where you don’t actually exchange the data, so it stays with the person that owns the data. I think that’s the right path.”
But he added that there is “no free lunch” and it will not be possible to get a data-driven healthcare system without making data accessible.
“It’s a bit of a give and take. If you make it impossible to work on data, or at least so expensive that you grind innovation down to a halt, except for a couple of companies that can fund themselves for however long it takes, then you’re going to slow progress down … But you also don’t want to obviously create a wild west situation where no one knows where the hell is my data floating about. So, you do need a middle ground.”
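The decentralized learning Jaskolski refers to is typically implemented as federated learning, in which each data holder trains a model locally and only parameter updates leave the site. The sketch below is a minimal, hypothetical illustration of federated averaging across three sites; it is not a description of Semalytix’s systems or of any production setup.

```python
# Minimal federated-averaging sketch: each site trains on its own data and
# shares only model weights with a coordinator; raw records never leave the site.
# Data, model, and hyperparameters here are toy values for illustration.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's local training: gradient steps on a simple linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
# Three hospitals, each holding its own private dataset
sites = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]

global_w = np.zeros(3)
for _ in range(10):  # communication rounds
    local_ws = [local_update(global_w, X, y) for X, y in sites]  # training stays local
    global_w = np.mean(local_ws, axis=0)  # coordinator averages weights only

print("aggregated model weights:", global_w)
```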
Ultimately, Jaskolski believes that guidance and policies need to come from the top. “In a nutshell, either you have consensus that’s a red line from top to bottom, or it’s always going to be a shattered picture.”
It seems that physicians are eager to take the technology on board before these frameworks are in place, with recent survey findings suggesting that a fifth of U.K. family doctors have incorporated chatbots into their clinical practice without formal guidance or work policies. But Gertz predicted that this widespread adoption of AI may not bode well for their future employment. “I think it has to be appreciated that the number one way we talk about healthcare is as an industry with profit margins … So, if what we’re concerned primarily with is how do we keep costs down, AI will always be seen as a solution.”
Jaskolski is more hopeful. “I don’t think medicine is an area where people are going to be out of jobs anytime soon,” he said. “Anyone you actually talk to, from lab staff to private practice physicians to hospital doctors, everyone is drowning … so I would say we’re probably not in any danger of this taking jobs.”
And what of the impact in de-skilling the health workforce? Edelson said an argument can be made in both directions and compared it to the arrival of satnav. “Did my GPS de-skill me? It certainly de-skilled me from the old days of opening up those books of maps that we used to have, where you would plan a trip and have to open it and find exactly where you were. … For sure, my children don’t have that skill. They cannot open up a big book of maps. But are they less skilled at knowing their way around the neighborhood or the city than I am? I don’t think so. … I bet you that they know places that they wouldn’t have known before, when we always had to go the exact same way because we weren’t using maps [every time], because it was such a pain in the butt to open up the map and go looking.”
She believes instead that the biggest pitfall with these algorithms could be their lack of transparency if they run amok. “If you’re just completely following your GPS blindly, in theory, you could drive your car right into a river if there’s a mistake. We’ve certainly heard those examples … we tend to trust these tools and can do so blindly. And they’re fallible too. They’re just fallible in a different way than humans are.”
Anita Chakraverty is a U.K.-based journalist who has been writing about medicine and health across several international publications for more than 20 years. In her spare time, she enjoys reading, films, and walks in the countryside.