AI Seminar – Cynthia Rudin
Online
Title: Do Simpler Models Exist and How Can We Find Them?
Abstract: While the trend in machine learning has tended towards more complex hypothesis spaces, it is not clear that this extra complexity is always necessary or helpful for many domains. In particular, models and their predictions are often made easier to understand by adding interpretability constraints. These constraints shrink the hypothesis space; that is, they make the model simpler. Statistical learning theory suggests that generalization may be improved as a result as well. However, adding extra constraints can make optimization (exponentially) harder. For instance, it is much easier in practice to create an accurate neural network than an accurate and sparse decision tree. We address the following question: Can we show that a simple-but-accurate machine learning model might exist for our problem, before actually finding it? If the answer is promising, it would then be worthwhile to solve the harder constrained optimization problem to find such a model. In this talk, I present an easy calculation to check for the possibility of a simpler model. This calculation indicates that simpler-but-accurate models do exist in practice more often than you might think. Time permitting, I will then briefly overview our progress towards the challenging problem of finding optimal sparse decision trees.
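To make the "interpretability constraints shrink the hypothesis space" idea concrete, here is a minimal sketch using scikit-learn's greedy CART learner. Note this is not the optimal sparse decision tree method (GOSDT) presented in the talk; the dataset and parameter choices are illustrative assumptions, chosen only to show the accuracy/simplicity trade-off the abstract describes.

```python
# Illustrative sketch: an unconstrained decision tree vs. a sparsity-
# constrained one. Uses scikit-learn's greedy learner, NOT the optimal
# sparse decision trees from the talk.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Unconstrained tree: large hypothesis space, grows until leaves are pure.
big = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

# Constrained (sparse) tree: depth and leaf limits shrink the
# hypothesis space, i.e. the model is forced to be simpler.
small = DecisionTreeClassifier(max_depth=3, max_leaf_nodes=8,
                               random_state=0).fit(X_tr, y_tr)

print("unconstrained leaves:", big.get_n_leaves(),
      "test accuracy:", round(big.score(X_te, y_te), 3))
print("sparse tree leaves:  ", small.get_n_leaves(),
      "test accuracy:", round(small.score(X_te, y_te), 3))
```

On datasets like this one, the constrained tree often loses little accuracy despite being far smaller, which is the empirical phenomenon the talk's "easy calculation" aims to predict in advance.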
Link: Lesia Semenova, Cynthia Rudin, and Ron Parr. A Study in Rashomon Curves and Volumes: A New Perspective on Generalization and Model Simplicity in Machine Learning. In progress, 2020.
https://arxiv.org/abs/1908.01755
Additional discussion will also cover this paper: Generalized and Scalable Optimal Sparse Decision Trees. ICML, 2020.
Jimmy Lin, Chudi Zhong, Diane Hu, Cynthia Rudin, Margo Seltzer
https://arxiv.org/abs/2006.08690
Bio: Cynthia Rudin is a professor of computer science, electrical and computer engineering, and statistical science at Duke University. Previously, Prof. Rudin held positions at MIT, Columbia, and NYU. Her degrees are from the University at Buffalo and Princeton University. She is a three-time winner of the INFORMS Innovative Applications in Analytics Award. She has served on committees for INFORMS, the National Academies, the American Statistical Association, DARPA, the NIJ, and AAAI. She is a fellow of both the American Statistical Association and Institute of Mathematical Statistics. She was a Thomas Langford Lecturer at Duke University for 2019-2020.
The University of Alberta Artificial Intelligence (AI) Seminar is a weekly meeting where researchers (including students, developers, and professors) interested in AI can share their current research. Presenters include local speakers from the University of Alberta and industry, as well as speakers from other institutions. The seminars cover a wide range of topics related in any way to Artificial Intelligence, from foundational theoretical work to innovative applications of AI techniques to new fields and problems. Learn more at the AI Seminar website and by subscribing to the mailing list!