20 August 2025

Made-up body parts? Tech bots must be dreaming

The Back Page

AI is not the only thing that is failing to learn from past mistakes.


When it comes to the Brave New World of artificial intelligence, never has the aphorism “You couldn’t make this shit up” seemed more apposite.

Because when it comes to genuinely making shit up, the AI bots are effortlessly raising the barking moonbat bonkers bar to unprecedented heights, and never more so than in the world of healthcare.

The propensity of the tech bots to invent “facts” and tell convincing lies is euphemistically referred to in the AI development world as “hallucinations”. Which makes them sound a bit harmless, but in reality these illusory facts can be downright terrifying, especially if no-one actually notices the errors.  

A salutary example of this phenomenon has recently come to light, thanks to the good old-fashioned efforts of a real-life expert, a neurologist and researcher called Bryan Moore.

We have Dr Moore to thank for pointing out that one of tech giant Google’s AI healthcare models, called Med-Gemini, isn’t quite as smart as its developers claim it is.  

Until Dr Moore came along, Google researchers had been happily spruiking Med-Gemini’s ability to analyse brain scans from radiology laboratories for various conditions.

One of the AI bot’s analyses identified an “old left basilar ganglia infarct” in a part of the brain it called the “basilar ganglia”.

Which all sounds terribly clever, until Dr Moore pointed out that no such body part called the “basilar ganglia” actually exists.

According to Google, the AI likely conflated the basal ganglia, an area of the brain that’s associated with motor movements and habit formation, and the basilar artery, a major blood vessel at the base of the brainstem. Google blamed the incident on a simple misspelling of “basal ganglia”.

Now comes the scary part. Nobody noticed the mistake for more than a year. 

What’s more, while Google was happy to correct the error in a blog post referring to the AI research paper, the mistake has yet to be rectified in the research paper itself.

It’s quite possible the engineering types at Google think that this is no big deal and nobody was harmed by this seemingly simple cock-up.

Other medical professionals are not so sanguine, however.

They argue that, in a medical context, AI hallucinations such as these could easily lead to confusion and potentially put lives at risk.

After all, it would be naïve to think that, in a busy and high-pressured medical environment, flesh-and-blood humans would have the capacity to monitor and double-check these AI outputs in each instance.

In fact, having to do that would be inefficient and counter-productive, negating the very thing the AI hucksters claim is the technology’s selling point.

Your grizzled Back Page scribe now feels compelled to point out that this may not be the very first time that technologies designed to make our lives easier and more efficient have had the opposite impact.

Send your favourite made-up body parts and story tips to Holly@medicalrepublic.com.au.
