News
Learn more about the research and work of Amii Fellow and Canada CIFAR AI Chair Nidhi Hegde. With experience in both industry and academia, Nidhi’s research spans a wide range of topics, from social network analysis to resource allocation in networks. Much of her recent work focuses on robust machine learning methods, including practical algorithms that protect privacy and enhance fairness.
In a recent Q&A with Dave Staszak, Amii's Lead Machine Learning Scientist, Nidhi suggested the need for a drastic shift in how machine learning approaches privacy.
Nidhi says she first became interested in differential privacy when working with a company on a recommender system for movie suggestions. Differential privacy is an approach that uses mathematical methods to preserve the privacy of an individual in a dataset. In machine learning, it allows a model to make inferences and predictions based on a collection of data while making it difficult to gather information on a particular individual.
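To make the idea concrete, here is a minimal sketch of one classic differential privacy technique, the Laplace mechanism, applied to a counting query. The movie-rating data, predicate, and epsilon value are illustrative assumptions for this article, not details of Nidhi’s work; a smaller epsilon means more noise and stronger privacy.

```python
import numpy as np

def private_count(data, predicate, epsilon):
    """Release a count under differential privacy via the Laplace mechanism.

    A counting query has sensitivity 1: adding or removing one person
    changes the true count by at most 1, so Laplace noise with scale
    1/epsilon masks any single individual's presence in the dataset.
    """
    true_count = sum(1 for row in data if predicate(row))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Toy recommender-style query: how many users rated a movie above 4 stars?
ratings = [4.5, 3.0, 5.0, 2.5, 4.8, 4.9]
print(private_count(ratings, lambda r: r > 4.0, epsilon=0.5))
```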
“Maybe you should think of privacy as an objective itself. Then it won't be like something you're giving up to get privacy, right? It will be that you're trying to satisfy both.”
- Nidhi Hegde
Nidhi says differential privacy is often approached as a trade-off: increased privacy at the cost of accuracy. The question typically asked is how much accuracy can be sacrificed while still achieving the model’s objective. Nidhi, however, argues that privacy should be treated as a more foundational part of the process.
“Maybe you should think of privacy as an objective itself,” she says. “Then it won't be like something you're giving up to get privacy, right? It will be that you're trying to satisfy both.”
This perspective can lead to models that balance both privacy and accuracy effectively.
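One common way this balance shows up in practice is DP-SGD-style training, where per-example gradients are clipped and noised so that the noise scale becomes an explicit privacy knob tuned alongside accuracy. The sketch below is a toy illustration under assumed data, model, and parameters, not a description of Nidhi’s methods.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, noise_mult=1.0, rng=None):
    """One DP-SGD-style step for logistic regression: clip each
    per-example gradient, sum, add Gaussian noise, then average.
    Raising noise_mult buys more privacy at some cost in accuracy."""
    rng = rng if rng is not None else np.random.default_rng()
    preds = 1.0 / (1.0 + np.exp(-X @ w))    # sigmoid predictions
    per_example = (preds - y)[:, None] * X  # gradient for each example
    norms = np.linalg.norm(per_example, axis=1, keepdims=True)
    clipped = per_example / np.maximum(1.0, norms / clip)
    noise = rng.normal(0.0, noise_mult * clip, size=w.shape)
    grad = (clipped.sum(axis=0) + noise) / len(X)
    return w - lr * grad

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 3))
y = (X[:, 0] > 0).astype(float)
w = np.zeros(3)
for _ in range(300):
    w = dp_sgd_step(w, X, y, rng=rng)
print(w)  # weights learned under noisy, clipped updates
```

Treating the noise scale as part of the training objective, rather than an afterthought, is one reading of the “satisfy both” framing above.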
She also discusses recent work on how current approaches to differential privacy may need to adapt to newer AI technologies, such as large language models. Because these models can memorize and recall training data, they may be vulnerable to attacks that extract private information used during training.
“The way that differential privacy was originally conceived doesn't really apply anymore to these models. And we need to think about other ways of approaching this problem because the type of attack or the type of breach or breaches are different,” she says.
In addition to robust machine learning and differential privacy, Nidhi is examining the concept of “machine unlearning.” She notes that once a model has been trained on data, it can be difficult to remove that data’s influence from the model. That has serious implications for privacy, especially as more countries consider “right to be forgotten” laws. Differential privacy methods might offer some benefit in that area, she says, although that kind of work is in its early stages. However, Nidhi notes that the way things are progressing, “early stages don’t last long.”
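The difficulty she describes is easy to see against the naive baseline for exact unlearning: retraining from scratch without the deleted record, which is correct but prohibitively expensive at scale. The scikit-learn model and synthetic data below are assumptions chosen purely to illustrate that baseline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0]) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# A "right to be forgotten" request: remove record 42's influence.
# Exact unlearning here means retraining on everything except that row --
# the cost of doing this for large models is what motivates research
# into cheaper, approximate unlearning methods.
keep = np.ones(len(X), dtype=bool)
keep[42] = False
retrained = LogisticRegression().fit(X[keep], y[keep])

# The deleted record's lingering influence shows up as a weight shift.
print(model.coef_ - retrained.coef_)
```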
“It’s a fun challenge,” she says. “I’m looking forward to it.”
During the Q&A, Nidhi and Dave also talk about new approaches to minimizing unfairness in machine learning models and other aspects of privacy in AI projects. Check out the full video to see the conversation.
Learn more about how AI works and how to make use of its potential. Head to our AI Literacy page to start your journey.
Nov 7th 2024
News
Amii partners with pipikwan pêhtâkwan and its startup company wâsikan kisewâtisiwin to harness AI in efforts to challenge misinformation about Indigenous People and include Indigenous People in the development of AI. The project is supported by the PrairiesCan commitment to accelerate AI adoption among SMEs in the Prairie region.
Nov 7th 2024
News
Amii Fellow and Canada CIFAR AI Chair Russ Greiner and University of Alberta researcher and collaborator David Wishart were awarded the Brockhouse Canada Prize for Interdisciplinary Research in Science and Engineering from the Natural Sciences and Engineering Research Council of Canada (NSERC).
Nov 6th 2024
News
Amii founding member Jonathan Schaeffer has spent 40 years making huge impacts in game theory and AI. Now he’s retiring from academia and sharing some of the insights he’s gained over his impressive career.