The last few years have seen astounding leaps forward in Natural Language Processing. And in just the past couple of months, people have become fascinated by impressive large language models like ChatGPT. But along with the excitement have been heated debates on the true nature of these programs.
In a TED Talk hosted last month at the TED Theater in New York, Amii Fellow and Canada CIFAR AI Chair Alona Fyshe asked the question: can AI truly understand language, or have we tricked ourselves into thinking so?
"I work in AI, and let me tell you, things are wild. There have been multiple examples of people being completely convinced that AI understands them," Fyshe told the crowd.
Fyshe cites examples like ChatGPT and the 2022 case of a Google engineer who became convinced that the company's language AI was sentient. Others, she says, remain skeptical. The debate comes down to what we mean when we say something "understands" language. When an AI model generates text, is it putting words together the way humans do? Or is it following a very detailed set of instructions that creates an illusion of understanding?
The key to answering that question might come from the human brain, she says. Fyshe is an Assistant Professor of both Computing Science and Psychology at the University of Alberta. In the talk, she described research that uses brain imaging to map the mental "scratchpad" humans create when reading words. Those images were then compared with a similar representation of what happens inside an AI model as it processes text. Such comparisons can help determine whether AI actually understands us or whether we are just seeing reflections of ourselves in the models we've created.
"We need to know what the AI is doing, and we need to be able to compare that to what people are doing when they understand language," Fyshe says.
Check out the full talk to learn more about the research and how we interact with AI language models.