
Meet Amii Fellow: Geoffrey Rockwell

Published

Dec 4, 2023

Learn more about the research and work of Geoffrey Rockwell, one of the latest Fellows to join Amii’s team of world-class researchers. Geoffrey is a professor in both Media and Technology Studies and the Department of Philosophy at the University of Alberta.

Rockwell's research includes AI ethics, textual visualization and analysis, and computing in the humanities.

Check out his conversation with Alona Fyshe, Amii Fellow and Canada CIFAR AI Chair, where they discuss Rockwell's thoughts on dialogues with AI, computer ethics and the need for more AI literacy when it comes to large language models.

[This transcript has been edited for length and clarity. To see the expanded conversation, watch the video above.]


Alona Fyshe:

Geoffrey, thank you so much for coming in today. We're so excited to have you as a fellow and to have this interview with you today.

Geoffrey Rockwell:

Well, thank you for having me. I'm really pleased to be a fellow now. And to be here and to be learning with you.

Alona Fyshe:

So, what got you interested in AI?


Geoffrey Rockwell:

One of the things that got me interested is the whole relationship to dialogue. I wrote my PhD thesis on philosophical dialogue. And, of course, there's a long history of dialogue in AI going back to the Turing Test. That, in some ways, is a dialogue; ELIZA is a chatbot. And now, lo and behold, as of November 2022, one of the most extraordinarily successful AI systems, ChatGPT, is essentially built around dialogue.


Alona Fyshe:

So, what is the philosophy of dialogue? What does that mean from a simple perspective?

Geoffrey Rockwell:

Dialogues allow you to handle complex problems where there may not be a simple answer, where you actually want to bring together different positions on the same subject without resolving them. Without saying at the end, "he wins" and "this is the right answer."


Alona Fyshe:

So this may be related: what does philosophy have to do with AI?


Geoffrey Rockwell:

I mean, I think there was probably a time in which philosophers were some of the people who were seen as doing AI.

So, on the one hand, you have people like [Hubert] Dreyfus, who was bringing a phenomenological perspective, criticizing a certain view of how we're going to achieve AI through symbolic processing.

So that would be one way. I think another area where philosophy crosses with AI is theories of mind. You know, insofar as some people who are studying AI are studying it to get insight into human intelligence.

And, of course, there is the philosophy of mind. So there's a lot of overlap, just as there is between AI and cognitive science, and between philosophy and cognitive science.

Thirdly, I think ethics is becoming more and more important. The tradition of computer ethics, I think, is one of the traditions that is maturing and now thinking about AI.


Alona Fyshe:

What are your thoughts on the large language models like ChatGPT?


Geoffrey Rockwell:

What are my thoughts? My thoughts vary from day to day as I play with them.

I'm not particularly worried about the singularity. So I would disagree with the people who see the large language models as a sign that we are very close to achieving [Artificial General Intelligence] and, consequently, super-intelligence or ultra-intelligence, whatever you want to call it. So I'm not in that particular camp.

I tend not to think that the problem we're dealing with right now, the ethical problem, is one of alignment of super-intelligence. I think there is a camp that says we have a series of things that we know are problematic with these large language models. You know, they tend to reflect the biases built into the training datasets. Those are immediate problems, and they are particularly going to be problems as these things are rolled out and deployed in ways that are not transparent.

So you get down to, I think, the brass tacks of how do we take advantage of the extraordinary advances in the field and the applications of these tools while still making sure that we have some level of transparency, accountability and so on. I'm particularly worried, I think, about the uses of this technology that are out of sight when they're used by a government office or a bank to decide your credit score. Those are the types of issues that I'm concerned about.


Alona Fyshe:

One thing that scares me about these large language models, and it kind of relates to what you've already touched on, is the way that humans have used dialogue in the past to convince.

I'm worried that ChatGPT could have very convincing conversations with people and convince people of things that are not true.


Geoffrey Rockwell:

I agree. And I think, as [Douglas] Hofstadter talks about with the ELIZA effect, apparently [Joseph] Weizenbaum's secretary at one point even asked him to leave the room because she was having a very personal conversation with ELIZA.

We have evolved to attribute intelligence where it isn't necessarily there.
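[Editor's note: part of what makes the ELIZA effect striking is how simple the underlying program was. The Python sketch below is not Weizenbaum's original DOCTOR script, just a minimal illustration, with made-up rules, of the pattern-matching and pronoun-reflection idea behind it.]

```python
import re

# Toy ELIZA-style responder: no model of meaning at all. It matches
# surface patterns and reflects the user's own words back as a question.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words, as ELIZA's script did."""
    swaps = {"my": "your", "i": "you", "me": "you", "am": "are"}
    return " ".join(swaps.get(word, word) for word in fragment.split())

def respond(utterance: str) -> str:
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # default when nothing matches

print(respond("I feel anxious about my work"))
# -> "Why do you feel anxious about your work?"
```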


Alona Fyshe:

Yeah, and I think it comes from our theories of mind: there are certain entities, or certain interactions, of which dialogue is the key one, where if we get a dialogue that has certain features, we project intelligence.


Geoffrey Rockwell:

And I agree entirely.

So then, just building on that, I mean, this is the concern around misinformation: that these tools can be slaves to a meta-process that does what Cambridge Analytica was trying to do, you know, micro-target ads. But now you're micro-targeting dialogue, which would, as you said, be more convincing and perhaps more likely to sway people and manipulate them. People use the words "think" and "understand" when they're talking about large language models, both laypeople and people with a technical background.


Alona Fyshe:

Do you think those are the right words to use? Should we be using different words?


Geoffrey Rockwell:

Once they coined the phrase artificial intelligence, the cat was out of the bag.

Having said that, I think there is awareness in the computer science field, at least; maybe not in the public, maybe not even in philosophy. We need a level of AI literacy such that people understand that a large language model does not understand what you write the way you understand it.

It's not making a series of associations. Instead, it's sort of predicting what words should come after. But it's not understanding the way we understand understanding.
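[Editor's note: the "predicting what words should come after" that Rockwell describes can be made concrete with a toy sketch. The Python below is a minimal bigram model over an invented training text, vastly simpler than any real large language model; it picks each next word purely from co-occurrence counts, with no notion of meaning.]

```python
from collections import Counter, defaultdict

# Invented training text for illustration only.
training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the continuation seen most often in training."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# Generate a short continuation by repeatedly predicting the next word.
word = "the"
sequence = [word]
for _ in range(4):
    word = predict_next(word)
    if word is None:
        break
    sequence.append(word)

print(" ".join(sequence))  # -> "the cat sat on the"
```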


Alona Fyshe:

Thank you so much for coming in.


Geoffrey Rockwell:

It was a real pleasure to talk to you, and thanks for having me as a fellow.
