One of the reasons that imitative AI like ChatGPT is almost certainly a bubble is that running these systems simply costs too much money compared to the amount of value they generate. Even putting aside the potential copyright apocalypse they may be facing, these systems are simply not good enough to justify the money they require. The usual hypers claim that imitative AI will inevitably improve to the point where it can replace entire industries. Unfortunately for them, that appears to be untrue.
Before getting to the meat of the discussion, I should point out that I am not discussing the morality of imitative AI. I think its use requires theft on a massive scale, and environmental damage on almost as large a scale, and is thus questionably moral at best, but that is not the argument here. Nor is this an argument that imitative AI is completely useless. While its more strident proponents overstate what it can do, there are things it can do, with supervision. This is a practical argument: imitative AI cannot improve enough to take enough jobs to justify the enormous resources required to run it.
The problems are both human and inherent to the systems themselves. The math of imitative AI seems to all but forbid improvement, at least to the level needed to replace massive numbers of human beings. A recent paper demonstrates that because imitative AI is probabilistic, it cannot rise beyond the level of its training data. It will always drift toward the most likely result, which means it will drift toward average at best. Novelty is simply not something these systems produce, so they cannot rise much past amateur-level creativity. And that means they cannot be counted on to replace significant portions of knowledge work.
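To make that drift concrete, here is a toy sketch of my own (not the paper's model, and deliberately oversimplified): a "generator" that merely samples from the empirical distribution of its training phrases. It can only ever echo what it was trained on, and the harder it is pushed toward the most likely output, the more it collapses onto the single most common phrase.

```python
# Toy illustration: a sampler that draws from the empirical distribution
# of its training data can only reproduce what it has already seen, and
# mostly reproduces the most common items.
from collections import Counter
import random

training_data = ["the cat sat"] * 70 + ["the dog ran"] * 25 + ["a heron waited"] * 5

counts = Counter(training_data)
total = sum(counts.values())

def sample(temperature=1.0):
    # Lower temperature pushes the sampler even harder toward the most common phrase.
    weights = [(c / total) ** (1 / temperature) for c in counts.values()]
    return random.choices(list(counts.keys()), weights=weights, k=1)[0]

samples = Counter(sample(temperature=0.5) for _ in range(1000))
print(samples)
# Every output is a verbatim training phrase; nothing new ever appears,
# and sharper sampling makes the rare phrases rarer still.
```

Real models are vastly more sophisticated than this, but the basic dynamic the paper describes is the same: probability mass piles up on the familiar, not the novel.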
Well, then, maybe the training can be improved. Part of training AI is classification, and there are arguments that classification can be leveraged to improve imitative AI. The problem is that much of that training is done by random people, not experts. This article details normal people doing classification on material that they do not understand. These people are often horrified by what they discover when they spend some time researching the material they are tasked with classifying. Not all training avoids experts, of course, but much of it does. Why? Cost. Paying real experts would, it appears, make classification too expensive to be economically viable.
The last refuge of the imitative-AI-will-make-money cohort is the notion that people will pay for imitative AI as companions and therapists. The problem is that as the companies ramped up the engagement factor, essentially making the chatbots sycophants, they ramped up the incidence of the chatbots harming mental health. And when I say harming mental health, I mean suicide and psychosis. In the face of lawsuits over that behavior, OpenAI has dialed back its tool's eagerness to suck up to users, but that in turn has driven down engagement and thus revenue. They can get people hooked, it seems, but at the cost of behavior so egregious it ruins people's lives and invites lawsuits.
There simply does not seem to be a way for imitative AI to escape the basic fact that it cannot improve enough to justify the money being spent on it. The math, the training, and the human experience all argue against it.
Want more oddities like this? You can subscribe to my free newsletter.