The introduction of AI to the operating theater doesn’t always work out as expected. Reuters has published a detailed report drawing on lawsuits involving three ‘AI-enhanced’ medical machines. In the most excruciating example of an AI-assisted medical device going wrong, the ‘enhanced’ system is blamed for major medical emergencies, including blood spraying around the operating room and victims suffering strokes.
Before we go into detail, it must be noted that the FDA reports seen by Reuters are incomplete, so they don’t definitively indicate that the introduction of AI was the root cause of the increase in mishaps. However, lawsuits are pushing the argument that AI contributed to the injuries that have occurred since the medical machinery ‘enhancements’ arrived.
TruDi Navigation System from Acclarent
TruDi from Acclarent is used by clinicians to treat chronic sinusitis. It is designed to simplify surgical planning and provide real-time feedback during delicate procedures such as sinus operations.
After three years on the market, and a reported eight malfunctions, the device was ‘enhanced’ by AI algorithms and has since been involved in “at least 100 malfunctions and adverse events,” notes the Reuters report.
Problems attributed to TruDi AI have included cerebrospinal fluid leaks, puncture of the base of the skull, major arterial damage, and strokes.
Two specific (horrific) cases are detailed in the source story. The first involves Erin Ralph, who is currently taking legal action. The lawsuit alleges that the TruDi system misdirected the doctor, guiding surgical instruments near the carotid artery, which is claimed to have caused a blood clot and a stroke.
Ralph subsequently spent five days in intensive care, experienced brain swelling, and had a portion of skull removed as part of the remedial treatment. One year later, Ralph is still in physical therapy due to ongoing physical issues.
Donna Fernihough also experienced carotid artery damage. According to the source report, the artery “blew” during her operation, with “blood spraying all over” the theater. She, too, suffered a stroke. An Acclarent (TruDi) representative was reportedly observing the procedure.
Fernihough’s lawyers say the AI-enhanced system is “inconsistent, inaccurate, and unreliable.” The plaintiffs in this case also insist that Acclarent “lowered its safety standards to rush the new technology to market,” and set “as a goal only 80% accuracy for some of this new technology before integrating it into the TruDi Navigation System.”
Issues with the Sonio Detect system and the Medtronic LINQ implantable cardiac monitor
The maker of the Sonio Detect fetal image analyzer is accused of using a faulty algorithm. Due to this alleged built-in AI error, the system purportedly misidentifies fetal structures and body parts, says the report. There have been no reports of patient harm from the analyzer’s use.
Medtronic LINQ implantable cardiac monitors are AI-assisted devices that are alleged to have failed to recognize abnormal rhythms or pauses in patients. Again, no incidents of patient harm are known.
In addition to FDA resources being under strain, as noted previously, the body’s AI device approval screening process may need reworking. Reuters indicates that the FDA currently leans heavily on a device’s prior reputation, with manufacturers “positioning new devices as updates on existing ones,” suggests the source. This might help device makers push their AI-enhanced machines and apparatus through approval more quickly, but it doesn’t seem thorough enough when human health is in the balance.
