CIFAR, the AI PULSE program at UCLA School of Law, and Amii are thrilled to host the inaugural Summer Institute on AI and Society in Edmonton, July 21–24, 2019.
Summer Institute brings together experts, grad students, and researchers of all backgrounds to explore the societal, governmental, and ethical implications of AI. A combination of lectures, panels, and participatory problem-solving, this comprehensive and interdisciplinary event aims to build understanding and action around these high-stakes topics.
Summer Institute takes place right before Deep Learning and Reinforcement Learning Summer School and will include a combined event on July 24th for both Summer Institute and Summer School participants.
We spoke with one of the co-organizers of Summer Institute, UCLA School of Law professor Edward Parson, about the origins of the event, the themes and topics that might be covered, and why you should apply now. Check out what he had to say below:
Please note: this interview has been edited and condensed for space.
Tell us about how you became interested in AI and its societal impact.
My main professional background has been in environment, energy, and related policy areas. But because of my partial scientific and technical training, I’ve always had a central interest in the technology in those areas – what it does, what forces determine how it changes, and how, if at all, societies can get the benefits and limit the harms. That was the bridge to thinking about AI.
How did you get involved in the Summer Institute?
Last year, while on sabbatical at the University of Victoria, I became aware of CIFAR’s program supporting AI and related initiatives – and, in particular, CIFAR’s interest in broadening its support from the technical issues of AI out to societal, regulatory, and governance issues. After speaking with them and then consulting with a couple of Canadian colleagues who are more on the technical side of AI – Alona Fyshe and Dan Lizotte – we submitted a proposal; it was approved, and we’re going forward with the three of us co-directing the institute, with joint support from CIFAR and from my project here at UCLA.
Can you tell us more about this project at UCLA?
It’s an outgrowth of a longer-standing activity at UCLA Law School on science and technology in law, called the AI PULSE program. We’re looking at ways to think through potential impacts that are intermediate in scale and time horizon – ways to get reasonably disciplined hooks on what the impacts might be five, 10, or 20 years out, and how to anticipate, assess, and forestall the most disruptive and harmful aspects of those.
This also characterizes my main interest for the Summer Institute, but I’m one of three co-organizers, and my two co-organizers come mainly from the technical side of AI. They’re more concerned with developing useful ethical guidelines that students and practitioners of AI and machine learning might observe in their current practice. So we expect to cover a range of issues.
What do you believe the benefit is of the Summer Institute for attendees?
To be involved in conversations on these fascinating topics, which don’t get much room for consideration in the normal curriculum. Networking among a bunch of people with similar interests in issues that are likely to be really important and recurrent over time. And I expect it’ll be really interesting and fun.
What important ethics and societal implications should AI practitioners pay attention to?
AI is the weirdest technology in the world. I’ve spent decades studying the social impacts of technology in all kinds of domains, and AI is unlike any other technology I’ve thought about, because nobody knows what it is. It is so diffuse, so fuzzy in its boundaries, so diverse in the different strains of capability that contribute to what’s going on presently, and so limitless in the things it might be used for.
What might AI do? It might enable things that are not presently possible. It might enable an extraordinary advance in environmental protection and management. It might displace, or augment, human ingenuity in dozens of fields of scientific and technological research. Some weeks ago, AlphaFold, a new machine learning program out of DeepMind in London, won the biennial world competition for protein structure prediction. It’s sort of like what happened to the Go masters just happened to the protein scientists.
On the other hand, things that become possible through technological advance often get done even if we disapprove. One of my colleagues who thinks about this stuff, Allan Dafoe at Oxford, has thrown out the slogan that “one of the social risks of AI is robust totalitarianism”: comprehensive surveillance, with perfect facial and individual recognition, and omnipresent information about everything you think, do, and say – in the hands of a tyrannical regime.
AI is big stuff. It is big, historical stuff: the possibility of capabilities that fundamentally disrupt employment, livelihoods, and labour markets; that fundamentally disrupt the functioning of the state; that fundamentally disrupt the functioning of the economy, and every sub-sector thereof, for good and ill.
The potential benefits are enormous, but even they will come with enormous disruption. So if we all get to move to a Jetsons world where we’re at leisure all day and the machines do the work, that might be really nice. But it will explode a bunch of foundations of social order. These are all the things we need to talk about at Summer Institute.
What aspects of AI’s implications do you think aren’t receiving enough attention?
It’s the medium term – what happens five steps down the line, and how we can get any handle on thinking about that beforehand, so as to create an environment that makes it likely people get the benefits and don’t get the worst harms from those rapid changes.
What are you most looking forward to about Summer Institute?
Talking about all this fabulous stuff with a bunch of really interesting and engaged people from all over the place spatially, and from all over the place in terms of intellectual background and how they think.