News
In the latest episode of Approximately Correct, reinforcement learning legend Rich Sutton (Amii Fellow, Canada CIFAR AI Chair & Chief Scientific Advisor) talks about what he thinks is holding AI research back.
Sutton argues that the prevalent approach in AI, particularly the focus on large, static learning models, has made incredible strides but is now limiting further progress because it lacks the capacity for long-term, adaptive learning.
“I feel the aesthetics of the field have changed. The field wants to focus on what they can do instead of noticing what they can't do … it's just that simple, you know, we can do certain things, and so we work on those,” he says.
In the episode, Sutton also shares his belief that researchers might soon have a fuller understanding of intelligence and the wide-ranging benefits that understanding could have for society. Finally, he reveals a bit about how he develops his research ideas and how he chooses what to work on.
Approximately Correct: An AI Podcast from Amii is hosted by Alona Fyshe and Scott Lilwall. It is produced by Lynda Vang, with video production by Chris Onciul.
You can hear episode three of Approximately Correct on Spotify, Apple Podcasts, Google Podcasts and other podcasting services.
Oct 23rd 2024
News
On Sept. 13, Cory Efird, an MSc student in the Department of Computing Science at the University of Alberta, presented "Contrastive Decoding for Concepts in the Brain" at the AI Seminar.
Oct 23rd 2024
News
Patrick Pilarski (Amii Fellow, Canada CIFAR AI Chair and co-lead of the BLINC Lab) is leading a team competing in the 2024 Cybathlon, an international competition advancing leading-edge assistive technologies for individuals with limb differences.
Oct 23rd 2024
News
Read our monthly update on Alberta’s growing machine intelligence ecosystem and exciting opportunities to get involved.
Looking to build AI capacity? Need a speaker at your event?