ChatGPT Isn't Intelligent Now and Never Can Be

When I read the discussions of ChatGPT going around, I find myself thinking of Kurt Vonnegut's novel Hocus Pocus. The protagonist, a professor at a school for unpromising kids from rich families, finds a bunch of perpetual motion machines in an attic, and puts them on display in the library with a plaque that reads: "The Complicated Futility of Ignorance." The parents complain, because maybe some of these could be made to work! They seem pretty close to perpetual!

Those parents had never learned that perpetual motion machines are entirely impossible, and without the knowledge of physics necessary to understand the problem, they saw things that seemed to keep moving for a long time, which feels like a step towards moving forever, right? Sadly, they lived in a time before we had access to Wikipedia and Google, and had to muddle through with whatever they half-remembered from school, unable to ask the phone in their pocket for the answer.

The current crop of chat algorithms feels a lot like a new batch of perpetual motion machines. That isn't to say that Artificial Intelligence itself is a doomed enterprise the way that perpetual motion machines are — I certainly can't say — or even that there is no use case for this kind of natural language generation. But language models that can produce readable text only get facts right incidentally, because they have been trained on texts that contain facts. They don't inherently know that racism is wrong, just that "is" and "wrong" often follow "racism" in their training materials. They don't actually understand the topics they are discussing; they just parrot back to us a statistically equivalent version of the word order in the training set they were fed.
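To make that concrete, here is a minimal sketch of the core trick: a bigram model that picks each next word based only on how often it followed the previous word in its training text. The corpus and function names are mine, invented for illustration; real language models are vastly larger and use neural networks rather than counting tables, but the underlying move is the same: predict the next word from statistics, with no model of meaning anywhere.

```python
import random
from collections import defaultdict

# Toy training corpus, made up for this example.
corpus = "racism is wrong . racism is harmful . the model is wrong .".split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Sample a next word in proportion to how often it followed `prev`."""
    candidates = counts[prev]
    words = list(candidates)
    weights = list(candidates.values())
    return random.choices(words, weights=weights)[0]

# Generate text: locally fluent, but there is no understanding anywhere,
# just relative frequencies copied out of the training data.
word = "racism"
output = [word]
for _ in range(5):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "racism is wrong . racism is"
```

This toy never decides that racism is wrong; it reproduces the word order it was fed, biases included, which is the whole point.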

Yes, they can answer questions, but only if the answers appeared in their training material. They can perform basic calculations, but only if they have read the problem they are being asked to solve a statistically significant number of times. The way they fail at math is instructive: they usually forget to carry the 1 when doing sums. They know that 9 + 5 = 14 from their training materials, but they have never encountered 9827 + 351, so they just do the parts they know and get an answer like 9178: each digit is the result of a single-digit addition, but the total is still very wrong (the actual sum is 10178). They obey the rules they know but misunderstand what to do with them, because they don't understand anything. They've read lots of texts about how math works, but they didn't learn how to perform mathematical calculations, just how to imitate people talking fluently about them.
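That failure mode is easy to reproduce by hand. Here is a short sketch, entirely mine rather than anything a real model executes, of adding two numbers column by column while discarding every carry, which is exactly the shortcut described above:

```python
def carryless_add(a: int, b: int) -> int:
    """Add two numbers digit by digit, dropping every carry.

    This mimics a model that has memorized single-digit sums but
    doesn't understand the carrying rule that links the columns.
    """
    result = []
    while a or b:
        digit_sum = (a % 10) + (b % 10)     # one memorized single-digit fact
        result.append(str(digit_sum % 10))  # keep the ones digit, drop the carry
        a //= 10
        b //= 10
    return int("".join(reversed(result)) or "0")

print(carryless_add(9827, 351))  # 9178, the plausible-looking wrong answer
print(9827 + 351)                # 10178, the actual sum
```

Every individual digit in 9178 comes from a correct single-digit addition; only the rule that connects the columns is missing, so the answer looks like arithmetic without being arithmetic.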

When you push back on all of the hype, you'll hear a lot of people saying that while Bing Chat is flawed (and sure, maybe it constantly lies to people about nearly everything), it's actually very close to being useful and will of course get smarter over time. I fundamentally disagree. A system that has no actual understanding of the words it reads and writes, apart from their statistical likelihood, is not intelligent in any meaningful way. Some part of it may end up being used in an actually intelligent system, but by itself it's a distraction that does not get us any closer to one.

But how can we expect people to understand what separates these glorified autocomplete tools from general artificial intelligence? While you can easily look up "perpetual motion" and get a definitive answer, the current news cycle features a hundred articles hyping up the potential of ChatGPT, and another hundred telling you to be afraid of losing your job to it, for every one article that accurately describes its limitations. As in the last boom-and-bust cycle around blockchains and cryptocurrencies, hucksters and con men are jumping onto the artificial intelligence bandwagon to try to make a buck off all of the rubes who believe that these language models are truly intelligent.

ChatGPT can be a lot of fun to play with, but that doesn't make it intelligent, and the world does not need new and better versions of ELIZA. What we need are systems that can divine truth on their own. Beyond summarizing a piece of content, we need them to analyze and synthesize information from multiple disparate sources. We need a system that reads billions of texts and understands them — and can cite them when asked for proof. What we have are systems that read billions of texts and repeat them back at us with the meaning removed, but the biases and limitations intact.

At least you can always ask ChatGPT why you should trust it, and it will give you an honest answer.

Screenshot from ChatGPT:

Question: What are the dangers of chat-based interfaces to large language models?

Answer: Chat-based interfaces to large language models can present several potential dangers, including:

- Misinformation: Large language models may generate responses that are not accurate or factually correct, which can lead to the spread of misinformation or fake news.
- Bias: Language models can be trained on biased data, which can lead to biased or discriminatory responses. This can reinforce existing stereotypes or prejudices and contribute to the perpetuation of social inequalities.
- Manipulation: Bad actors could potentially manipulate chat-based interfaces to spread disinformation, spam, or malicious content, or to impersonate others and engage in phishing or other fraudulent activities.
- Privacy concerns: Chat-based interfaces may collect and store personal data about users, such as their location, browsing history, and conversation logs. This can raise privacy concerns and lead to potential abuses of this data.
- Ethical concerns: The development and use of large language models raises broader ethical concerns around the potential misuse of artificial intelligence, the impact on employment and labor markets, and the broader social and economic implications of widespread automation.

It is important to be aware of these potential dangers and to take steps to mitigate them, such as ensuring that language models are trained on diverse and representative data, implementing robust privacy and security measures, and developing ethical frameworks for the responsible use of artificial intelligence.
