AI is Wrecking Medicine, Killing Patients and Crashing the Economy
Don’t trust anything anyone says about AI; it is basically useless.
Introduction
We here at Sentia don’t usually promote anyone’s writing but our own, but in this case Sergei Polevikov did the research and said it better than we could. In his two-part series “These 15 Health AI Companies Have Been Lying About What Their AI Can Do (Part 1 of 2)” and “15 Health AI Liars Exposed—Including One That Just Raised $70M at a $0.5B Valuation (Part 2 of 2)” he exposes what health Artificial Intelligence (AI) companies are doing criminally wrong. We think he is seeing only half the picture.
The Problem with “AI”
Bluntly, “AI” is useless. Just like the Big Data and Blockchain debacles, marketing people got hold of a technology they don’t understand and wrapped the idea in pretty, slick brochures in order to sell you a pig in a poke, a bill of goods. First, “AI” isn’t AI. What they really mean is Machine Learning (ML), and machines don’t learn. They persist data to a disk, they compare one value with another, and they do something if the comparison evaluates true and something else if it evaluates false. This is called a decision structure. Add in a looping structure, where they do something over and over, and you have a fully fledged programming language. That is all computers do; it is all they have ever done, and it is probably all they will ever do. With a nod to the Drake equation: if machines could think, we would see them all over the universe, because they don’t die.

With the decision and looping structures in mind, the smart people who decoded the human genome at the University of California, Santa Cruz came up with a way to mimic human intuition. They built a database of facts and then weighted each of those facts heavier or lighter until the machine produced the same, correct answer a human would give to the same question. It isn’t intelligent, it doesn’t think; it mimics, and it only mimics after you give it the correct weighting. Sure, that is a neat trick, and it saved some time, but they had to check every answer to ensure it was reaching the correct conclusion, which made it nearly useless: you still had to have a person checking the work. These incorrect answers are generally called hallucinations.
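The “weighted database of facts” idea described above can be sketched in a few lines. This is purely illustrative: the facts, weights, and threshold are all invented, and a real system would have millions of weights instead of four, but the mechanism is the same loop-and-compare.

```python
# A toy sketch of the "weighted facts" idea: the program does not think,
# it just multiplies hand-assigned weights by observed facts and compares
# the total against a threshold (a decision structure inside a looping
# structure). All facts and weights here are invented for illustration.

# Hypothetical weights a human assigned after checking the answers.
WEIGHTS = {
    "fever": 0.4,
    "cough": 0.3,
    "fatigue": 0.2,
    "headache": 0.1,
}

def weighted_score(observed_facts):
    """Loop over facts and accumulate their weights -- nothing more."""
    score = 0.0
    for fact in observed_facts:      # looping structure
        if fact in WEIGHTS:          # decision structure
            score += WEIGHTS[fact]
    return score

def mimic_human_answer(observed_facts, threshold=0.5):
    """Return the pre-weighted 'correct' answer, or punt to a human."""
    if weighted_score(observed_facts) >= threshold:
        return "flu-like"
    return "needs human review"

print(mimic_human_answer(["fever", "cough"]))  # score 0.7 >= 0.5
```

Note that every output still has to be checked by a person who already knows the right answer, which is exactly the point made above.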
A hallucination is when a generative “AI” model generates inaccurate information but presents it as if it were true. Hallucinations are caused by limitations and/or biases in the training data and algorithms, and they can result in content that is not just wrong but harmful.
“AI” hallucinations are the result of large language models (LLMs), which are what allow generative AI tools and chatbots (like ChatGPT) to process language in a human-like way. Although LLMs are designed to produce fluent and coherent text, they have no understanding of the underlying reality that they are describing. All they do is predict what the next word will be based on probability, not accuracy.
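The next-word prediction described above can be sketched as a toy bigram model: count which word follows which in a corpus, then always emit the most frequent follower. The two-sentence corpus here is invented, and real LLMs are vastly larger, but the principle is the same: frequency, not understanding.

```python
# A minimal sketch of "predict the next word by probability": count
# followers in a tiny invented corpus, then emit the most frequent one.
# The model has no idea what any of the words mean.
from collections import Counter, defaultdict

corpus = ("the patient is stable the patient is improving "
          "the patient is stable").split()

# Count which word follows which (one pass over adjacent word pairs).
followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None."""
    counts = followers.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("is"))  # "stable" -- it follows "is" 2 times out of 3
```

Whether “stable” is actually true of any particular patient is a question the model cannot even represent, let alone answer.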
This is why we can’t trust “AI” to do any meaningful work for us. Flash forward 20 years, add the salespeople (speaking of useless) and a slick brochure, and here we are with a ton of promises that can’t be fulfilled.
How “AI” is Being Used in Medicine
We are picking on Pieces because we came to their site first. The others are from the same boilerplate. Let’s break down what they claim they can do.
Pieces claims it “summarizes, charts, and drafts clinical notes for your doctors and nurses in the EHR.”
Let’s take a look at what their slick brochure says the software does and then break it down to facts.
Specifically,
- Pieces Working Summary
The Pieces Working Summary provides a concise overview of the patient and is updated based on the latest EHR documentation.
This is worthless for the very reasons stated above. The “AI” reads the chart and comes up with some inane, incorrect conclusions.
- Pieces Discharge Identification
Pieces identifies and keeps track of the discharge barriers that must be cleared before the patient can go home.
We don’t need “AI” for this. We need a checklist. The way Sentia has solved this is to have a user configurable questionnaire with all the things that need to happen before a patient can be discharged (or handed off to recovery or anything else that requires a checklist). This is simple, user configurable, foolproof, trackable and not some magic black box that you get charged millions for.
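The checklist approach described above can be sketched in a few lines. The item names below are invented examples, not Sentia’s actual configuration; the point is that the whole mechanism is inspectable.

```python
# A sketch of the configurable-checklist approach: a plain list of yes/no
# items blocks discharge until every one is checked off. Item names are
# hypothetical examples of a user-configured questionnaire.

discharge_checklist = {
    "discharge orders signed": False,
    "medications reconciled": False,
    "transport arranged": False,
    "follow-up appointment scheduled": False,
}

def remaining_barriers(checklist):
    """List every unchecked item -- simple, trackable, no black box."""
    return [item for item, done in checklist.items() if not done]

def ready_for_discharge(checklist):
    return not remaining_barriers(checklist)

discharge_checklist["discharge orders signed"] = True
print(remaining_barriers(discharge_checklist))
```

Every behavior here can be read, audited, and reconfigured by the user, which is the opposite of a black box.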
- Pieces Working Progress Note
Pieces drafts daily progress notes for doctors within the EHR – saving time, reducing stress and error, and improving billing.
With hallucinations, you can’t trust AI to do anything, particularly when lives are in the balance. We can’t speak to the claim of improving billing because a progress note doesn’t involve billing at all.
- Pieces Predictive Models
Pieces provides a wide variety of customizable predictive models across clinical and operational areas to improve patient safety, reduce waste and generate revenue.
This is utterly useless. “AI” can’t think and can only parrot back to you the things that you have weighted. Notice the use of ‘weighted’ instead of ‘trained.’ You can’t train a computer, outside of the decision and looping structures that programmers use as stated earlier. Consequently, you get no new value. Why not just use the checklist from earlier to figure out what you might have missed?
- Diagnosis Capture
Pieces suggests potential diagnoses and diagnosis clarification – with a goal of improving billing and reducing CDI inquiries.
The old, evil, legacy insurance companies have forced us to use code sets that aren’t adequate to the task of documenting the patient encounter. We here at Sentia use the UMLS, which contains over 14 million concepts and is fully adequate for documenting a patient encounter. That means you can search fully structured data for symptoms and their associated diagnoses and get back real information and outcomes. With our system you get a payment amount right on the screen next to the procedure, so you as a practitioner know exactly what we pay. There are no networks, no negotiated rates, and no medical coding. Type in your procedure and see the rate.
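The “type in your procedure and see the rate” idea above reduces to a direct lookup. The procedure names and dollar amounts below are invented for illustration, and a real system would search structured UMLS concepts rather than a small dictionary, but the shape is the same: no networks and no negotiated rates in between.

```python
# A sketch of a posted-rate lookup: procedure name in, payment out.
# Names and amounts are hypothetical, not actual Sentia rates.

POSTED_RATES = {
    "office visit, established patient": 85.00,
    "chest x-ray, two views": 140.00,
    "basic metabolic panel": 32.00,
}

def rate_for(procedure):
    """Return the posted payment for a procedure, or None if unlisted."""
    return POSTED_RATES.get(procedure.lower().strip())

print(rate_for("Chest X-ray, two views"))  # 140.0
```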
- Pieces SafeRead
One challenge of generative “AI” systems is that they can “hallucinate”: invent non-factual information. Pieces has created the Pieces SafeRead platform to make sure its AI outputs are “safe to read”, i.e. that hallucination risk is minimized as far as possible. SafeRead employs highly tuned adversarial “AI” alongside board-certified clinician oversight.
Here is the whole problem in a nutshell. “AI” can’t think, “AI” can’t DO anything of value without being monitored by a real human being that knows what the answer should be in the first place, so it is literally a waste of time, effort and resources and they state that right in their slick little sales brochure.
So, either the notes in the EMR/EHR are kind of worthless, or you can’t have them generated by “AI.” If they are worthless, don’t bother with them. If they aren’t worthless then you can’t use “AI.”
The Larger Problem
Doesn’t anyone remember the dot com bubble? How much did your company spend on Big Data? How about NoSQL databases? Let’s talk about Blockchain. What about Salesforce or PeopleSoft or any of the other packaged solutions that require a solutions architect and dozens of developers? None of those changed your business one iota, and your company spent millions, and companies collectively spent hundreds of billions, on deals made at the golf course by salespeople selling to C-Suite ‘executives’ who don’t understand what they are buying. Why should “AI” be any different? Spoiler alert: it isn’t.
We mentioned the dot com bubble specifically because in it, we learned that some of this is just sales hype and that we can’t trust most technology companies to actually produce anything of value, at least in any reasonable amount of time. Nope, the dot com bubble was for the high rolling fat cats to get in on an IPO and execute the exit strategy and make a lot of money before people realized the product was all smoke and mirrors. We here at Sentia say that because we do produce real, working software that creates value and does it at a reasonable price and for the common good. Yes, we make a profit, and no that isn’t a dirty word, but we give value for your money. We can show off our software, working, and explain how it works and what it does and why you should use it in real, concrete terms.
“AI” on the other hand, is just more smoke and mirrors: a tool to separate fools from their money.
The “AI” bubble is already more than an order of magnitude larger than the dot com bubble. When everyone figures out that it can’t really DO anything, there is going to be an economic collapse. When the dot com bubble burst, the NASDAQ crashed, falling from an intraday high of 5,132.53 on March 10, 2000 to 1,139 in October 2002, erasing all of the dot com gains and quite a bit more. CNN posited that the economy lost about 1.7 trillion dollars. In 2002 the US GDP was 10.2 trillion dollars, so roughly 17% of the total US economy disappeared virtually overnight. Forbes thinks the “AI” bubble is set to burst and will cost the global economy 15 trillion dollars. The US GDP is expected to be a little over 28 trillion dollars, meaning that when the “AI” bubble bursts, if a loss on that scale lands here, we won’t be looking at a tiny little 17% bump in the road; we will be looking at a devastating 1930s-style depression in which we lose 53% of the total output of the country, again basically overnight.
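The percentages quoted above can be checked directly from the figures in the text (all in trillions of dollars; note the $15T figure is a global estimate, so treating it as a purely US loss is the worst case).

```python
# Checking the bubble arithmetic quoted in the text (figures in
# trillions of dollars, taken directly from the paragraph above).
dot_com_loss, gdp_2002 = 1.7, 10.2
ai_loss_estimate, gdp_projected = 15.0, 28.0

print(f"dot com bust: {dot_com_loss / gdp_2002:.1%} of 2002 US GDP")       # 16.7%
print(f"AI bust:      {ai_loss_estimate / gdp_projected:.1%} of US GDP")   # 53.6%
```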
The Consequences for Medicine
If this prediction comes to pass, people are going to die. Hospitals and practices will shut down. Some will just flat starve. This could seriously turn into the zombie apocalypse the doomsday preppers have been warning us about.
What We Can Do
First, this is a dumb thing put forth by dumb people. They are salesmen who will do and say anything for a paycheck. In fact, the smart people at UC Santa Cruz came up with this ML thing years ago, and we didn’t hear anything about it until recently, when the marketing weenies got hold of it. Ignore it. Tell your friends and colleagues to ignore it. Certainly, do not give the marketing weenies any money.

The real question is “what can we do to keep this from happening in the future?” That one is tougher. We suppose the Venture Capital (VC) companies are at the root of the problem. They get hold of anything novel and pump a bunch of money into it so they can do an Initial Public Offering (IPO), multiply their money by orders of magnitude, and exit the business altogether. They don’t care if the product is viable or even real; they are long gone by the time that song gets sung, and by that point they already have their money. This is a classic pump and dump scheme. It is also illegal. We are unsure how to catch the VCs doing this, but something has to be done before we all end up in a 1930s-style dust bowl.
Conclusions
VCs and marketing people are a dangerous combination. Maybe we will get lucky and the “AI” bubble will deflate slowly, like Big Data or Blockchain, and not cause too much harm. Maybe we will all suddenly realize that “AI” as it stands is actually worthless for doing much of anything and can’t be left to mind the store in any reasonable fashion. When the dot com bubble burst, people finally realized that those companies weren’t producing anything of value, even information, and the whole house of cards collapsed. The same realization would cause this bubble to burst as well, and half the American economy could go up in smoke overnight. If the bubble simply bursts, we are looking at a 1930s-style depression that could last for decades. Many practices and hospitals will close, and tens or hundreds of thousands of lives could be lost. We are unclear on which of these will come to pass, but the storm is coming.

The only resolution we can think of is a bad one. We rarely, if ever, call on government to do anything, because its track record indicates that it CAN’T do anything effectively. That said, regulating the VCs and the financial system seems to be the only way to stop this recurring hype from crashing the economy every ten years, on average. Maybe we could tax these fat cats at 100% over $100,000 and their companies at 100% over one million dollars. Those numbers are artificially low, but you see our point. There would then be no incentive to make more than a modest profit, which would end the “brain drain” of the best and the brightest into the finance industry. This is quite literally the tail wagging the dog: finance is a means of accomplishing goals, not the goal itself.
We hear that business is cyclical. Business isn’t cyclical. The cycle is caused by a decade of regulation slowly being rolled back and some dumb finance guys making idiotic decisions, with someone else’s money, about something they don’t understand, crashing the economy. Sure, COVID wasn’t a dumb business decision, but thanks to Jerome Powell and the Fed, it didn’t cause a crash.
We call on you, dear reader, to come up with a strategy to stop these finance idiots and keep them from crashing our economy and killing people.