Yes, Philip, androids do dream of electric sheep (if you ask them to).
By Gabriel Makhoul
ChatGPT (short for "Generative Pre-trained Transformer") is a chatbot developed by OpenAI. It generates text in response to any user request, attempting to simulate human interaction.
This extends far beyond just having conversations with users - it can create text of virtually any kind, from the geography essay you’re handing in tomorrow to shopping lists to faux-Shakespearean plays.
Image: ChatGPT writing a text, chat.openai.com
Released in November 2022, ChatGPT has since taken the world by storm, gathering over one million users in its first week. It has garnered a vast array of reactions: some are excited about the future of AI, some are wary and critical of its potential misuse, while others are sceptical of the hype surrounding it, considering it nothing extraordinary at all.
Labelled both a herald of an upcoming era in technology (some have gone as far as to compare its significance to Gutenberg’s invention of the printing press) and a banal, trivial “bullshit generator,” ChatGPT is clearly something nearly everyone has an opinion on. The public’s wide range of responses mirrors the recent reception of AI-generated art, which has also been in the spotlight over the past few years thanks to systems like DALL-E and Midjourney.
Image: Midjourney-generated art, Jason M. Allen, New York Times.
ChatGPT is based on the GPT-3.5 language model (or GPT-4 in the case of the paid version). Language models generate text based on how statistically likely a word is to appear in a given series of words. GPT-3.5 was trained on about 45 terabytes of text data, from which it infers which words are likely to appear in which contexts. For example, citing Britannica, "'the cat sat on the mat' is more likely to occur in English than 'sat the the mat cat on' and thus would be more likely to appear in a ChatGPT response.”
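That word-by-word likelihood can be illustrated with a toy bigram model - a deliberate simplification, since GPT models use vastly larger transformer networks rather than word-pair counts. The tiny corpus and all the numbers below are made up purely for the example:

```python
from collections import defaultdict

# Toy illustration (not OpenAI's actual method): a bigram model scores a
# sentence by how often each word follows the previous one in its
# training text. The corpus is invented for this example.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat saw the dog on the mat ."
).split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def score(sentence):
    """Product of P(next word | previous word); 0 if a pair never occurred."""
    words = sentence.split()
    p = 1.0
    for prev, nxt in zip(words, words[1:]):
        total = sum(counts[prev].values())
        p *= counts[prev][nxt] / total if total else 0.0
    return p

print(score("the cat sat on the mat"))   # nonzero: familiar word order
print(score("sat the the mat cat on"))   # 0.0: pairs like "mat cat" never occur
```

Even this crude model assigns Britannica's scrambled sentence zero probability, because word pairs like "the the" never appear in its training text.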
The process by which ChatGPT was trained is known as “unsupervised pre-training” - the language model wasn’t taught to respond to particular data in a specific way; rather, it was simply allowed to parse through the data with no definite goal in mind, inferring the various patterns and structures within human language as it went. Essentially, instead of having predetermined answers to certain questions and requests, it responds based on what is statistically probable.
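The "no predetermined answers" idea can be sketched in a few lines: in this kind of pre-training, the raw text itself supplies the training targets, since every word is the "correct answer" for the words that precede it. The sample sentence and the context size of 3 are illustrative assumptions, not OpenAI's actual pipeline:

```python
# Sketch of how raw text becomes training data with no human labels:
# every position in the text yields a (context, next word) pair.
text = "the cat sat on the mat".split()

context_size = 3  # an arbitrary choice for the example
pairs = [
    (text[max(0, i - context_size):i], text[i])
    for i in range(1, len(text))
]
for context, target in pairs:
    print(context, "->", target)
```

A model trained on billions of such pairs never needs anyone to write down "correct" responses - predicting the next word well is the entire objective.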
Generative AI achieves this through neural networks, algorithms that try to imitate the workings of a human brain - information isn’t processed in a straight line, but rather through a net of connections. That’s where ChatGPT gets all that pattern recognition.
As said by David Gewirtz of ZDNet, “A neural network simulates the way a human brain works by processing information through layers of interconnected nodes. Think of a neural network like a hockey team: each player has a role, but they pass the puck back and forth among players with specific roles, all working together to score the goal.”
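Gewirtz’s “layers of interconnected nodes” can be made concrete with a minimal forward pass in plain Python. The weights and biases here are arbitrary made-up numbers; a real network has billions of them, learned from data rather than typed in:

```python
import math

# Minimal sketch of a neural network layer: each node sums its weighted
# inputs, adds a bias, and passes the result through a nonlinearity.
def layer(inputs, weights, biases):
    """One fully connected layer: every input feeds every node."""
    return [
        math.tanh(sum(w * x for w, x in zip(node_w, inputs)) + b)
        for node_w, b in zip(weights, biases)
    ]

x = [0.5, -1.0]  # two input values
hidden = layer(x, [[0.8, -0.2], [0.3, 0.9]], [0.1, -0.1])  # 2-node layer
output = layer(hidden, [[1.0, -1.0]], [0.0])               # 1-node layer
print(output)  # a single number; information flowed through the whole net
```

The "net of connections" is visible in the nested loop: every node in a layer sees every output of the layer before it, rather than information moving down a single straight line.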
There is one strange downside to this, however. Since the AI doesn’t really “think” the way we do, analysing only probability and not whether what it produces is actually true, it has a tendency to occasionally hallucinate. AI hallucinations are what happens when generative AI simply "makes stuff up". This is especially problematic in the case of ChatGPT because of its increasing use in academia - it has even been cited in several research papers.
Since ChatGPT provides no sources and can invent false information out of the blue, this particular tendency has been a cause for concern. Britannica provides an example: “For example, ChatGPT was asked to tell the Greek myth of Hercules and the ants. There is no such Greek myth; nevertheless, ChatGPT told a story of Hercules learning to share his resources with a colony of talking ants when marooned on a desert island.”
ChatGPT is similarly unreliable in mathematics, where it often hallucinates entirely wrong answers to problems - for instance, when asked to multiply numbers several digits long, it succeeds in only about 30% of cases on average.
Despite all the hype around it, the AI isn’t really that innovative - it is the latest step in a long line of development rather than the product of some one-off genius breakthrough. GPT-3, the model from which ChatGPT’s original November release was derived, dates all the way back to 2020. Even ChatGPT’s creators were surprised by its sudden popularity - as put by Jan Leike, a research scientist at OpenAI: “I would love to understand better what’s driving all of this—what’s driving the virality. Like, honestly, we don’t understand. We don’t know.”
The idea of artificial intelligence itself can more or less be traced back to the famed mathematician Alan Turing, who argued for the possibility of “thinking machines” in his 1950 paper Computing Machinery and Intelligence. However, his ideas couldn’t be put directly into practice, as the technology needed to realise them wasn’t yet available at the time.
Image: “Alan Turing, c. 1930”, Britannica.com
Although it’s not as revolutionary as a layman may think, in the course of roughly seventy years we have still gone from “thinking machines” existing only as a well-reasoned idea to the average person with internet access being able to talk to one at will. One can only wonder where artificial intelligence will lead us in the future.
GEWIRTZ, David. “How does ChatGPT work?” ZDNet.com, https://www.zdnet.com/article/how-does-chatgpt-work/
OPENAI. “Introducing ChatGPT.” OpenAI.com, https://openai.com/blog/chatgpt
GREGERSEN, Erik. “ChatGPT.” Britannica.com, https://www.britannica.com/technology/ChatGPT
TURING, Alan. “Computing Machinery and Intelligence.” Mind, https://phil415.pbworks.com/f/TuringComputing.pdf
ROOSE, Kevin. “An A.I.-Generated Picture Won an Art Prize. Artists Aren’t Happy.” The New York Times, https://www.nytimes.com/2022/09/02/technology/ai-artificial-intelligence-artists.html
COPELAND, B. J. “Alan Turing.” Britannica.com, https://www.britannica.com/biography/Alan-Turing