
Making fairness and privacy a priority: Meet Nidhi Hegde


Learn more about the research and work of Amii Fellow and Canada CIFAR AI Chair Nidhi Hegde. With experience in both industry and academia, Nidhi’s research spans a wide range of topics, including social network analysis and resource allocation in networks. Much of her recent work focuses on robust machine learning methods, including practical algorithms that protect privacy and enhance fairness.

In a recent Q&A with Dave Staszak, Amii's Lead Machine Learning Scientist, Nidhi suggested the need for a drastic shift in how machine learning approaches privacy.

Differential Privacy: Balancing accuracy and privacy in AI

Nidhi says she first became interested in differential privacy when working with a company on a recommender system for movie suggestions. Differential privacy is an approach that uses mathematical methods to preserve the privacy of an individual in a dataset. In machine learning, it allows a model to make inferences and predictions based on a collection of data while making it difficult to gather information on a particular individual.
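To make the idea concrete, here is a minimal sketch of the Laplace mechanism, one of the standard ways differential privacy is put into practice. The function, parameter values, and the movie-count scenario are illustrative assumptions, not details from Nidhi's work:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release a noisy version of a numeric query result.

    The noise scale grows with the query's sensitivity (how much one
    person's data can change the answer) and shrinks as the privacy
    budget epsilon grows, so a smaller epsilon means stronger privacy.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Hypothetical example: release how many users watched a given movie.
# Adding or removing one viewer changes the count by at most 1, so the
# sensitivity is 1.
true_count = 4213
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(round(private_count))  # close to 4213, but no single viewer is exposed
```

The noisy answer stays useful in aggregate, while the distribution of released answers is nearly the same whether or not any one person's record is in the dataset, which is the formal guarantee differential privacy provides.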


Nidhi says differential privacy is often approached as a trade-off: increased privacy at the cost of accuracy. The question usually asked is how much accuracy can be sacrificed while still achieving the objective of the AI model. However, Nidhi argues that privacy should be seen as a more foundational part of the process.

“Maybe you should think of privacy as an objective itself,” she says. “Then it won't be like something you're giving up to get privacy, right? It will be that you're trying to satisfy both.”

This perspective can lead to models that balance both privacy and accuracy effectively.

She also discusses recent work on how current approaches to differential privacy may need to change for newer AI technologies, such as large language models. Because these models can memorize and recall training data, they may be vulnerable to attacks that extract private information from the data they were trained on.

“The way that differential privacy was originally conceived doesn't really apply anymore to these models. And we need to think about other ways of approaching this problem because the type of attack or the type of breach or breaches are different,” she says.

Machine Unlearning and the Right to be Forgotten

In addition to robust privacy and differential privacy, Nidhi is examining the concept of “machine unlearning.” She notes that once a model has been trained on data, it can be difficult to remove that data’s influence from the model. That can have serious implications when it comes to privacy, especially as more countries consider the idea of “right to be forgotten” laws. Differential privacy methods might have some benefit in that area, she says, although that kind of work is in the early stages. However, Nidhi notes that the way things are progressing, “early stages don’t last long.”
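A rough sketch of why unlearning is hard: deleting a record from storage does not update a trained model, and for most models today the only exact remedy is retraining without that record. The model choice and synthetic data below are illustrative assumptions, not Nidhi's method:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))           # synthetic features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic labels

model = LogisticRegression().fit(X, y)

# "Right to be forgotten": the user at row 42 asks to be removed.
# Dropping the row from the dataset leaves the fitted weights unchanged,
# so the model still reflects that user's data.
mask = np.ones(len(X), dtype=bool)
mask[42] = False

# Exact but expensive fix: retrain from scratch on the remaining data.
# Machine unlearning research looks for ways to get the effect of this
# retraining without paying its full cost on large models.
unlearned_model = LogisticRegression().fit(X[mask], y[mask])
```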

“It’s a fun challenge,” she says. “I'm looking forward to it.”

During the Q&A, Nidhi and Dave also talk about new approaches to minimizing unfairness in machine learning models and other aspects of privacy in AI projects. Check out the full video to see the conversation.


Learn more about how AI works and how to make use of its potential. Head to our AI Literacy page to start your journey.
