17 Reasons Why We Can’t Have AI
This is going to change humanity, but not in the way you think and not for the positive

Introduction
We have all heard of, and have opinions on, the AI revolution. Today we are going to discuss how and why AI isn’t what you think it is, and how it is extremely limited in what it can and should be allowed to do. We are going to present 17 reasons AI is (mostly) useless, and dangerous not in and of itself, but because of what this society thinks it is and uses it for.
1. Hallucinations
A couple of months ago, I did a little experiment and asked Microsoft’s Copilot, “Who is Commander Data’s sister?” Hint: Commander Data has no sister. Copilot, however, gave me a name and a backstory for this fictional character. When I say fictional, I don't mean someone the writers of Star Trek: TNG came up with; I mean the AI invented the character out of whole cloth. This particular answer has since been fixed, but go do an internet search for “questions AI gets wrong.”
Today, Copilot gives Commander Data’s sister as Lal. She would be more accurately described as Data’s offspring, as evidenced by the title of the episode “The Offspring.”
2. Tokens
Understand that AI is just a prediction model. It takes your input and creates a response based on word parts, or tokens. It assembles these tokens based on statistics and spits them back at you. It doesn't think, it doesn’t create, it simply uses its ‘training’ to predict the next token in a response.
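To make that concrete, here is a deliberately tiny sketch, in Python, of next-token prediction by frequency counting. Real models use neural networks over subword tokens rather than whole words, and the training text here is invented, but the basic move is the same: statistics in, most-likely-next-token out.

```python
from collections import Counter, defaultdict

# Toy "training" corpus; real models ingest terabytes of text.
training_text = "the cat sat on the mat . the cat ate the fish ."
tokens = training_text.split()

# Count which token follows which. This table IS the "model."
followers = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    followers[current][nxt] += 1

def predict_next(token):
    """Return the statistically most likely next token, or None if never seen."""
    if token not in followers:
        return None  # nothing in the training data, so nothing to predict
    return followers[token].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' - it followed 'the' more often than 'mat' or 'fish'
print(predict_next("dog"))  # None - no training, no answer
```

Nothing in that table knows what a cat is; it only knows what word tends to come next.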
3. Faster and cheaper with code
We’ll discuss generating code with AI in a little while, but the point of this section is that it is usually faster and cheaper to write traditional code to do the necessary thing than to train and maintain an AI. My friend Leo Wisiewski is trying to lower the cost of health insurance by telling patients to ask for the cash price and then, when they get the bill, scan it and send it to him. He will use AI to pull the charges out of the bill and pay it. Simple, neat and effective. Except that we were doing this exact thing in the mid-90s at Imaging Acceptance Corp., in far less time, with far fewer resources and with no hallucinations.
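To show what “pull the charges out of the bill” looks like as plain old code, here is a minimal sketch in Python. The line format and the sample charges are invented for illustration; a real system would key off the actual billing layout, exactly as we did in the 90s.

```python
import re

# Hypothetical bill text; a real bill's layout would drive the real pattern.
bill_text = """
Office visit, established patient .... $125.00
X-ray, two views ..................... $ 88.50
Lab panel ............................ $212.25
"""

# One line item per line: description, dot leaders, then a dollar amount.
CHARGE = re.compile(r"^(?P<item>.+?)\s*\.{2,}\s*\$\s*(?P<amount>[\d,]+\.\d{2})\s*$")

charges = []
for line in bill_text.splitlines():
    match = CHARGE.match(line.strip())
    if match:
        charges.append((match.group("item"), float(match.group("amount").replace(",", ""))))

for item, amount in charges:
    print(f"{item}: ${amount:.2f}")
print(f"Total: ${sum(amount for _, amount in charges):.2f}")
```

No training run, no hallucinated line items, and the whole thing fits on a page.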
4. Definitions
If you can’t define a thing, you certainly can’t have AI do it for you. This is kind of what we were talking about above. Leo isn’t a programmer and therefore doesn’t realize that there is already code to do what he wants to do. AI will never say, “Hey, there is a better way to do what you asked me to do.” Thus, while Leo could already be done and have a viable company, he is still futzing with AI; and even then, he has to have a way to store and aggregate the data he collects from his scanned bills.
5. AI can’t do anything it isn’t trained to do.
If you want to have a nice little conversation, fine. If you want to give it a script to read, it will only mispronounce maybe 2% of the words. If you want to ask AI a question, fine, but ask it a legal question. Ask it a financial question. It needs a dedicated training set for these things. Otherwise, your answer is no different from simply doing an internet search, and that is why I never read Reddit. Training is time-consuming and expensive, and, as above, you might as well just write real code to accomplish whatever task you need done.
6. If you don’t know how it does what it does, you can’t troubleshoot it.
You probably don’t know how to repair cars. What happens when your car breaks? You take it to the mechanic. What if nobody knew how cars work? How would you fix them? Once your AI is trained to do whatever it is that you think it needs to do, nobody really knows what is going on ‘under the hood.’ So if it goes sideways, there is no fix for it, other than to stop and start over. I frequently think of the Star Wars universe when I think about AI, and how ‘droids are zero-class citizens: when they get a little wonky, you just wipe them and start over. I know that Star Wars is fictional, but that is exactly what we are dealing with here, not Asimov or Skynet. Those stories deal with real, general AI.
7. Precision of Language
Frequently, usually, nearly always, people have to ask each other for clarification. Language is an imprecise instrument at best. How much is a little bit, or a skosh? What is the hex code for green with a hint of blue? Word parts, tokens, are all that AI has to work with. It cannot talk about math or physics or engineering because it doesn’t have the necessary tools. More on that toward the end of the article, when we talk about materials science and how AI does math.
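As a small illustration of the gap, here is what it takes to pin “green with a hint of blue” down in code. The particular numbers are an arbitrary choice, which is the point: someone has to choose them, because the vague phrase never did.

```python
# "Green with a hint of blue" is ambiguous; a hex code is not. These RGB
# values are an arbitrary choice made for illustration, not an official
# definition of the phrase.
red, green, blue = 0x20, 0xC0, 0x60   # mostly green, a little blue

hex_code = f"#{red:02X}{green:02X}{blue:02X}"
print(hex_code)  # '#20C060' - exactly one color, no interpretation required

# Change the "hint" and you get a different, equally exact answer.
print(f"#{red:02X}{green:02X}{0x90:02X}")  # '#20C090'
```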
8. Continuous learning
One newer concept in AI is to train and deploy the thing and let it continue to ‘learn.’ This is called “world modelling.” The problem is that you don’t know what it is ingesting, or what it is getting out of what it ingests. Sure, we can talk to AI, ask it questions and get an approximation of what we would find in an internet search, because it was trained on the internet. Letting it find its own sources of information is just asking for hallucinations and simply wrong answers. Remember, AI doesn’t think. Without guidance to let it know how to statistically weight each piece of new information, it can’t possibly know how to respond based on that new information.
9. Unhobbling
Unhobbling is the use of AI to check the answers of AI. Sounds like a great idea, like a peer-reviewed paper in a scientific journal. Sabine Hossenfelder has some choice words to say about publishing and scientific journals, and more on that later, but in reality, what we are asking is to have one toddler edit the comments of another. There is no context, there is no knowledge, there is only word salad. I’ll take mine with a skosh of bleu cheese on the side.
10. Decoupling of scales
Decoupling of scales is the practice of separating different aspects of a model’s performance or architecture to improve stability and efficiency. If this really did anything, we would already see Sam Altman’s “solving of physics.” More efficiency in this context is irrelevant. More on Sam Altman later.
11. Can’t do math
You might ask AI what you get when you multiply six by nine. It might respond “42,” much like Deep Thought, and probably for the same reasons. AI doesn't math. Math isn’t language-based. The first thing your Large Language Model (LLM) will do is look for tokens associated with six, nine and multiply, and it will very likely come up with the incorrect answer of 42, because that is the result Douglas Adams associated with multiplying six and nine. If you then ask it how it got its response, it will do more searches on how to multiply and come up with some cock-and-bull story about “carrying the one” or something. It didn't math and it didn’t “carry the one.”
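For contrast, here is what actual math looks like in one line of real code, plus the only sense in which Adams’s 42 lines up with six times nine: you have to read it in base thirteen.

```python
# Arithmetic is deterministic; token prediction is not.
print(6 * 9)          # 54, every single time

# The Hitchhiker's joke only "works" if you read 42 in base thirteen:
print(int("42", 13))  # 54 - so "42" equals six times nine only in base 13
```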
12. Doesn't learn
Your LLM doesn’t learn. Yes, there is a ‘training’ regimen, but that is just a convenient metaphor for what is really going on. ‘Training’ consists of feeding in large blocks of text, probably from the internet, then asking questions and tweaking the statistical weights of various tokens until it spits out the answer that you think is correct. That is FAR different from giving a third grader a book to read and understand. The third grader actually learns and can do things like write down his thoughts, create poetry, or add and subtract two numbers, with no thought of what the next token should be statistically. AI has access to all the textbooks ever written and still can’t solve problems. Instead, you get token salad. In fact, much like flying cars being 20 years away for the last 80 years, self-driving cars being two years away for the last decade, and a better battery coming since time immemorial, until you actually understand what the third grader is doing in their head to learn and subsequently solve problems, you will never have AI that is worth the power it consumes.
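For a feel of what “tweaking the statistical weights until it spits out the right answer” means, here is a caricature of a training loop in Python. It is a deliberately trivial one-weight model, nothing like a real LLM in scale, but the shape of the process is the same.

```python
# A caricature of "training": nudge a weight until the output matches the
# answer the trainer has already decided is correct. Nothing is understood;
# a number gets adjusted until the error is small, and that is the whole trick.
x, target = 3.0, 12.0   # one input and the answer we want the model to produce
weight = 0.5            # the model's entire "knowledge"
learning_rate = 0.05

for _ in range(200):
    prediction = weight * x               # the model's guess
    error = prediction - target
    weight -= learning_rate * error * x   # tweak the weight toward the target

print(round(weight, 3))      # ~4.0 - the weight that reproduces the target
print(round(weight * x, 3))  # ~12.0 - right answer, zero understanding
```

The third grader could tell you why three times four is twelve; the loop above cannot.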
13. Sycophancy
AI is too nice. It tells you what it thinks you want to hear, because that is the way it is trained, instead of actually answering the question. I would submit that we are all getting too nice, that some people need a swift kick in the pants for things they profess to believe and say. Flat earthers. Anti-vaxxers. That kind of thing.
14. Marketing gimmick
“AI” has been around since the late 90s. Researchers used it to decode the human genome. It was used to mimic human intuition and give an answer that a human might give if asked what allele or nucleotide would come next in the sequence. Then a researcher would check the answer for validity. This is the correct use of the technology. Flash forward 20 years, and some marketing major got hold of this thing the smart kids were using, compared it to what Asimov et al. came up with, and proclaimed it AI. What we have now is one more slick way to part fools from their money. They need investors to believe, just like the dot-com stampede, blockchain, or big data. Those things never went anywhere and neither will this. AI doesn’t do what they think it does, though if they ever do figure that out, it will probably crash the economy.
15. Can’t write code
Sure, if you ask AI to write a bubble sort in whatever language, you might get an answer that works. That is not the code I am talking about. There are enough examples of bubble sorts extant that AI can mix and match and come up with something that might work. Give it any more complex task and it chokes. Here is an example of an expert trying to get AI to write code to flash lights sequentially. This is an easy task with few lines of code (see the sketch below), and AI fails miserably at it. It is worse than that, though. Software is the art of eating an elephant. You have to break down any solution into solvable problems and then solve them all. In the meantime, you have to write code in such a way that you can reuse it for similar tasks, and you don’t duplicate data either. I have been producing applications for about 30 years and I can’t hire programmers who can do this. I have to find junior guys and make programmers out of them by unlearning most of the things they learned in college and reteaching them how to do things better, faster and cheaper. If there aren’t any programmers in the world until I make them, then AI can’t learn how to mimic them.
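For scale, here is roughly what the flashing-lights task looks like when a person writes it. The light names are placeholders and a real board would swap the print calls for GPIO writes, but the whole job fits in a dozen lines.

```python
import time

# The light names are placeholders; on real hardware the print calls would
# become GPIO writes, but the logic is identical.
LIGHTS = ["red", "yellow", "green"]

def flash_sequentially(lights, on_seconds=0.5, cycles=3):
    """Turn each light on and then off, in order, repeating for the given cycles."""
    for _ in range(cycles):
        for light in lights:
            print(f"{light} ON")
            time.sleep(on_seconds)
            print(f"{light} OFF")

if __name__ == "__main__":
    flash_sequentially(LIGHTS)
```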
16. Can’t “Solve Physics”
Sam Altman, CEO of OpenAI, the company that produces ChatGPT, stated, as we read above, that “AI will solve physics.” That statement presupposes several things, such as that we don’t need any experiments, just “deep thought.” If that were the case, Aristotle would have solved physics and explained quantum mechanics simply by staring at his navel. This is particularly disturbing, since it means that Sam is either shockingly, stupidly uninformed about how AI works, or is a snake-oil salesman only interested in his IPO, selling you a bill of goods that doesn’t work, never has and never will. Maybe it isn’t a coincidence that he looks like, and has the same first name as, Sam Bankman-Fried.
17. Dangers
According to Sabine Hossenfelder (I told you we would get back to her), scientists are using AI to generate research papers in toto. This has nothing to do with AI itself; it is about people not understanding what AI is or how it works, asking it a question and accepting the output as sovereign fact. This, of course, is ludicrous: AI only makes word salad, not research. But scientists operate under the edict “publish or perish,” and this is literally the way they get paid. It is almost understandable that they generate these “research papers,” publish them and get paid. Going forward, though, we are infecting science with the same virus that infects the internet and creates the YouTube PhDs, the common-sense-over-science crowd and the "my opinion is better than your science" advocates. Scientists won’t know what is real and what is word salad without doing the research for themselves, and, instead of living in the automated AI utopia that was predicted, we end up in a dystopian world where nothing works, society and science stall, and humanity eventually withers on the vine and devolves back into a dark age.
Conclusions
We have shown 17 reasons why you can’t have AI. There ARE uses for it. These are amazingly limited, but they do exist. We are not anti-AI. We are anti dumb people doing dumb things that they don’t understand. To that end, scientists are using AI to, again, mimic human intuition and create new materials. Instead of a bunch of smart people standing around in a lab thinking deep thoughts and coming up with a new breakthrough material once or twice a year, they can use AI to come up with hundreds of thousands of new materials in the same amount of time.
Instead we get a bunch of business people wasting valuable time and resources on things they don’t and can't understand. This is just like all the big tech bubbles in the past 25 years that ultimately came to nothing. Except this time AI has captured the imagination of the great unwashed and is commanding their time, attention and money. When this bubble bursts, particularly if it is during the tariff wars, we could spin into a depression that we can’t get out of.
I have made a career out of ignoring the latest dubious tech thing, whether it is a new JavaScript framework, some kind of queueing application, blockchain, big data, Hadoop or whatever, and staying focused on the fundamentals of writing good, clean code. AI seems to me exactly like the snake oil that all the rest were. I sincerely hope I am wrong.