News
The AI Seminar is a weekly meeting at the University of Alberta where researchers interested in artificial intelligence (AI) can share their research. Presenters include both local speakers from the University of Alberta and visitors from other institutions. Topics can be related in any way to artificial intelligence, from foundational theoretical work to innovative applications of AI techniques to new fields and problems.
On January 20, Shibhansh Dohare, a PhD student at the University of Alberta, presented "Maintaining Plasticity in Deep Continual Learning" at the AI Seminar.
Abstract: Modern deep-learning systems are specialized to problem settings in which training occurs once and then never again, as opposed to continual-learning settings in which training occurs continually. If deep-learning systems are applied in a continual-learning setting, it is well known that they may fail catastrophically to remember earlier examples. More fundamental, but less well known, is that they may also lose their ability to adapt to new data, a phenomenon called "loss of plasticity."
In his presentation, Dohare demonstrated loss of plasticity using the MNIST and ImageNet datasets, repurposed for continual learning as sequences of tasks. On ImageNet, binary classification accuracy dropped from 89% correct on an early task to 77%, roughly the level of a linear network, by the 2,000th task. This loss of plasticity occurred across a wide range of deep network architectures, optimizers, and activation functions, and was not eased by batch normalization or dropout.
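The talk did not spell out exactly how the datasets were turned into task sequences, but a minimal sketch of the kind of setup described, assuming each task is a binary classification problem built from a freshly sampled pair of classes, might look like this (the function name and arguments are illustrative, not from the talk):

```python
import random

def make_task_sequence(examples_by_class, num_tasks, seed=0):
    """Yield one binary classification task at a time.

    examples_by_class: dict mapping class id -> list of examples.
    Each task relabels one randomly chosen class as 0 and another as 1.
    """
    rng = random.Random(seed)
    classes = list(examples_by_class)
    for _ in range(num_tasks):
        a, b = rng.sample(classes, 2)  # two distinct classes define the task
        xs = list(examples_by_class[a]) + list(examples_by_class[b])
        ys = [0] * len(examples_by_class[a]) + [1] * len(examples_by_class[b])
        paired = list(zip(xs, ys))
        rng.shuffle(paired)  # shuffle examples within the task
        yield paired
```

A learner trained on each yielded task in turn would reveal loss of plasticity as declining accuracy on later tasks.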
In the experiments, loss of plasticity was correlated with the proliferation of dead units, with units having very large weights, and more generally with a loss of unit diversity. Loss of plasticity was substantially eased by L2 regularization, particularly when combined with weight perturbation (Shrink and Perturb). Dohare showed that plasticity can be fully maintained by a new algorithm, called continual backpropagation, which is just like conventional backpropagation except that a small fraction of less-used units are re-initialized after each example. This continual injection of diversity appears to maintain plasticity indefinitely in deep networks.
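The presentation describes continual backpropagation only at a high level. As a rough illustration of the re-initialization step, here is a minimal sketch assuming a single fully connected hidden layer stored as NumPy arrays; the utility estimate, maturity threshold, and initialization scale are all illustrative assumptions, not Dohare's exact formulation:

```python
import numpy as np

def reinit_least_used(W_in, W_out, utility, ages, replacement_rate=1e-4,
                      maturity_threshold=100, rng=None):
    """Reinitialize a small fraction of the least-used mature hidden units.

    W_in:    (n_inputs, n_hidden) incoming weights of the hidden layer.
    W_out:   (n_hidden, n_outputs) outgoing weights of the hidden layer.
    utility: (n_hidden,) running estimate of each unit's usefulness.
    ages:    (n_hidden,) steps since each unit was last reinitialized.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n_hidden = W_in.shape[1]
    # The replacement rate is tiny, so round the expected count stochastically:
    # on average, replacement_rate * n_hidden units are replaced per call.
    expected = replacement_rate * n_hidden
    n_replace = int(expected) + int(rng.random() < expected - int(expected))
    mature = np.where(ages > maturity_threshold)[0]  # only replace settled units
    if n_replace == 0 or mature.size == 0:
        return
    worst = mature[np.argsort(utility[mature])[:n_replace]]  # lowest-utility units
    for j in worst:
        W_in[:, j] = rng.normal(0.0, 0.01, size=W_in.shape[0])  # fresh incoming weights
        W_out[j, :] = 0.0  # zero outgoing weights so the swap does not perturb outputs
        utility[j] = 0.0
        ages[j] = 0
```

In this sketch, a replaced unit's outgoing weights are zeroed so that re-initializing it does not immediately disturb the rest of the network's predictions; the unit then re-learns a useful role through ordinary backpropagation.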
Watch the full presentation below:
Want to kick-start your AI career? Find out more about Amii's Career Accelerator.
Nov 7th 2024
News
Amii partners with pipikwan pêhtâkwan and its startup company wâsikan kisewâtisiwin to harness AI in efforts to challenge misinformation about Indigenous People and to include Indigenous People in the development of AI. The project is supported by the PrairiesCan commitment to accelerating AI adoption among SMEs in the Prairie region.
Nov 7th 2024
News
Amii Fellow and Canada CIFAR AI Chair Russ Greiner and University of Alberta researcher and collaborator David Wishart were awarded the Brockhouse Canada Prize for Interdisciplinary Research in Science and Engineering from the Natural Sciences and Engineering Research Council of Canada (NSERC).
Nov 6th 2024
News
Amii founding member Jonathan Schaeffer has spent 40 years making huge impacts in game theory and AI. Now he’s retiring from academia and sharing some of the insights he’s gained over his impressive career.