Learn more about the research and work of Dr. Patrick Pilarski, Amii Fellow and Canada CIFAR AI Chair.
Patrick co-leads the BLINC Lab at the University of Alberta alongside Dr. Jacqueline Hebert, MD. The lab is using artificial intelligence to help create the next generation of prosthetic limbs.
He sat down with Amii Fellow and Canada CIFAR AI Chair Dr. Alona Fyshe to discuss his work, how AI can help build better devices for people with limb difference, and the close bond that humans have always had with technology.
[This transcript has been edited for length and clarity. To see the expanded conversation, watch the video above.]
Alona Fyshe:
Patrick, thank you so much for coming down today.
Patrick Pilarski:
Hey, thanks for having me.
Alona Fyshe:
So we're here today to talk about your research. Can you tell me first what got you interested in AI?
Patrick Pilarski:
Yeah, I think intelligence is really one of the most amazing phenomena in the universe. It's complex and it's beautiful. And I've always been really fascinated with the natural world. I grew up in rural Alberta, so I guess it was no surprise that I was drawn to study the complex patterns in nature.
One particular area that's really drawn me in over the years is the idea of intelligence amplification: the idea that one intelligence can amplify or boost another, that groups of minds can achieve things they can't in isolation.
So I guess it's really no surprise: you take this interest in natural phenomena and in minds, sprinkle in a bit of science fiction literature, and the leap to AI isn't actually that large after that point.
Alona Fyshe:
That's cool, I didn't know you were from rural Alberta. How did you find your way into AI? Like, what was that connection?
Patrick Pilarski:
I actually started with an undergraduate degree at the University of British Columbia. I moved from rural Alberta over to the beautiful Canadian coast, then moved back to Edmonton to work on handheld medical devices, and from there bridged into robotics, autonomous systems, reinforcement learning and artificial intelligence in the Department of Computing Science back at the U of A.
Alona Fyshe:
Right, Alberta is a good place to do that.
Patrick Pilarski:
It's a fantastic place.
Alona Fyshe:
So give me the elevator pitch for your program.
Patrick Pilarski:
Sure, so I co-lead the Bionic Limbs for Improved Natural Control (BLINC) laboratory alongside my amazing clinician collaborator, Dr. Jacqueline Hebert. And our whole deal is that we are trying to transform the science and the art of prosthetic restoration: improving the way that people with limb difference, for instance people with upper limb amputations, use robots in their daily lives as, effectively, the upper limbs of their body.
The key part of this is that artificial intelligence is being deployed to help those people work with their devices, to help the person and their robotic body part align with each other, improve over time and really become more than the sum of their parts.
Alona Fyshe:
Right, because each arm shouldn't be exactly the same for every person?
Patrick Pilarski:
Exactly. This is one of the things I really love about this line of research: it is centred on the individual. It's all about a person and a device that supports them in their life. And we work really hard on building new AI and machine learning technologies that allow the device to sculpt itself to that person and what they need in their daily life.
Alona Fyshe:
So was there a moment where you thought, okay, this is it, I'm gonna work on limbs? When did that happen for you? Or on prosthetics, maybe, as a broader category?
Patrick Pilarski:
Yeah, I think it was an interesting transition. I was working a lot on mobile robotics and on human-facing devices, both medical and non-medical. And there was this moment where it really clicked: oh my goodness, the best place to study the future of AI technologies, and also the best place to really add value to human life, is in devices that are tightly coupled to the human body.
There's no clearer or more direct setting to study human-machine interaction, I think, than a person actually wearing a robot on their body as part of their daily life. It's just such a natural setting to study human-machine alignment, and how people and machines co-adapt and get better as they interact. And also to really stress test some of our existing AI algorithms, to make sure that they can really work in this wild, wacky and woolly domain of a person interacting with the world as part of their life.
So that was this sort of transformative moment where I realized, oh my goodness, this is in fact a great place to study AI, to study the foundations of intelligence and to go forward with the pursuit of ambitious AI methods.
Alona Fyshe:
And so your work actually helps prosthetics work better with a person. Can you give an example of how that happens?
Patrick Pilarski:
Yeah, I think one way the methods we build help limbs work better for the people using them is by truly understanding what the person does in their life. Not an average of someone else's life, but what they actually do. We've built methods that, while a person is using a device, can predict what they're likely to do next and when they're going to do it. So instead of forcing them to fight through the complexity of controlling a rehabilitation robot, which can be quite complex if there are a lot of modes or functions to use, the device can help streamline that control for them, letting them think more about what they're trying to do and less about all of the knobs and levers they're pulling inside the device to live their daily life.
Alona Fyshe:
Okay, cool. That makes me think of when I do the pull-down on my iPhone and it shows me a list of apps, right? It will often pop up the things that I use at that particular point of the day; for example, evening time is YouTube time for me.
So it's a similar sort of thing: you can kind of predict what people might want to do with their limb at a particular point in time, based on environmental or time factors. Is that right?
Patrick Pilarski:
That is one of my favourite examples. Absolutely: which apps have I used frequently? That kind of idea is largely missing from much of rehabilitation technology, especially from how people interact with, say, a robotic limb that's affixed to their body. So this is a great example, yes. It's essentially, which apps am I using?
Which grip? If I'm reaching down to pick up a coffee cup, or trying to grab a key, or zipping up my cardigan, how should my hand be configured to best do that action? How should my arm be reaching out with me to be able to engage with the coffee maker as I'm making that coffee?
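[Editor's note: for readers who want a concrete feel for the prediction idea Patrick describes, here is a minimal illustrative sketch in the spirit of temporal-difference (TD) learning, one family of methods used in this space. The feature set, grip names and numbers are hypothetical, not the BLINC Lab's actual code.]

```python
# Minimal sketch: learning to anticipate which grip a user will want
# next, given features of the current context. Hypothetical throughout.
import numpy as np

N_FEATURES = 8           # e.g., time of day, arm pose, recently used grips
GRIPS = ["power", "pinch", "key", "open"]

weights = np.zeros((len(GRIPS), N_FEATURES))
alpha, gamma = 0.1, 0.9  # step size and discount for the TD update

def predict(context):
    """Score how strongly each grip is anticipated in this context."""
    return weights @ np.asarray(context, dtype=float)

def td_update(context, next_context, used_grip):
    """Nudge each grip's prediction toward what the person actually did."""
    context = np.asarray(context, dtype=float)
    signal = np.zeros(len(GRIPS))
    signal[GRIPS.index(used_grip)] = 1.0  # the grip that was just used
    td_error = signal + gamma * predict(next_context) - predict(context)
    for g in range(len(GRIPS)):
        weights[g] += alpha * td_error[g] * context
```

A controller built on predictions like these could surface the highest-scoring grip first, much as a phone surfaces frequently used apps.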
Alona Fyshe:
Right, yeah. And so much of that is just natural for people; they don't even think about the fact that they're doing it. But it would become so unnatural if your limb wasn't doing that.
Patrick Pilarski:
Exactly, and this is part of our lab's name, the BLINC Lab: Bionic Limbs for Improved Natural Control. It's all about helping people intuitively control the device that they use in their daily life, as opposed to thinking hard about the device and its control rather than what they're actually trying to achieve and how they're trying to express themselves.
Alona Fyshe:
Right, good. So to kind of get out of the way and let them do what they actually want to do. That's awesome.
So we kind of touched on this, but maybe you can make it super clear for people. Where is the AI in this application?
Patrick Pilarski:
The way I like to frame the interaction between a person and a prosthetic device, like a robotic artificial limb, is that the relationship is such that the person tries to express themselves to the device. They try to make their intent clear by way of changes to their body; often this is how the muscles in their body are contracting, or other signals read from the body. And the robotic device is trying to interpret those signals and then make appropriate motions that line up with what the person wants, what they're trying to achieve.
This is a very challenging communication problem. And so AI can really help streamline that communication, can align the two parties better, and can make sure that the person better understands what the device is doing, and that the device understands and best executes on the actual intent of the person.
So it acts almost like a translator in some sense, learning and adapting to how the person expresses themselves through their body, and then mapping that to how the robotic device needs to move in the world to make it feel, for the person, like it's actually part of their body and doing what they're hoping it's going to do.
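[Editor's note: the "translator" framing can be made concrete with a toy decoder. This is an illustrative nearest-pattern sketch, not the lab's method; real myoelectric control is far more involved, and every name below is hypothetical.]

```python
# Minimal sketch: mapping features of muscle (EMG) signals to an
# intended motion by comparing against per-user calibration patterns.
import numpy as np

MOTIONS = ["hand_open", "hand_close", "wrist_rotate", "rest"]

class IntentDecoder:
    def __init__(self, n_features):
        # One prototype signal pattern per motion, learned from short
        # calibration recordings made with the individual user.
        self.prototypes = {m: np.zeros(n_features) for m in MOTIONS}

    def calibrate(self, motion, recordings):
        """Average a user's example recordings for one motion."""
        self.prototypes[motion] = np.mean(recordings, axis=0)

    def decode(self, emg_features):
        """Pick the motion whose learned pattern best matches what the
        muscles are doing right now."""
        return min(MOTIONS, key=lambda m: np.linalg.norm(
            np.asarray(emg_features) - self.prototypes[m]))
```

Because the prototypes come from the individual's own signals, the same decoder sculpts itself to each person, as Patrick describes above.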
Alona Fyshe:
So, how does the device know when it has chosen an action that was incorrect?
Patrick Pilarski:
So, there are two ways we've done this in the past, and we can go into more detail if you like. One, which was a very ambitious way, is that you can train a robotic limb like you might train a puppy: you can give it signals of good or bad. We actually had examples where people with limb difference were training a robotic limb with their biological limb, and good-or-bad reward signals were used to help the robotic limb learn how to mirror the actions of the biological limb.
Or another case where a person was actually giving yes/no feedback to say whether or not the arm was behaving as they wanted in response to the signals from their body. That's one way.
The other way that's been really successful in our work, I think, is to have the machine learn the patterns of activity, the patterns of the person's life, and then be able, on its own, to begin to reorganize the person's control of the limb, or the way that the limb acts on their commands. This doesn't require the person to actively give feedback to the device, but it does allow the device to adapt and change in real time.
In the field of upper limb prosthetics, devices have become able to adapt, but they typically can't adapt while people are using them without human intervention. And so I think that's one of the key ways we've deployed artificial intelligence: to allow the device, over time and in a continual learning sense, to adapt and improve to what the person needs. This is a big step change that we've introduced here at the University of Alberta and Amii that I think is unique in the world.
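[Editor's note: the first approach Patrick describes, training the limb with good or bad signals, is reinforcement learning driven by human-delivered reward. Here is a deliberately simple illustrative sketch; the situations, actions and numbers are hypothetical.]

```python
# Minimal sketch: a person's good/bad feedback shifts which action the
# limb prefers in a given situation, puppy-training style.
import numpy as np

n_situations, n_actions = 16, 3      # e.g., coarse contexts x limb motions
preferences = np.zeros((n_situations, n_actions))
alpha = 0.2                          # how strongly feedback shifts behaviour

def choose_action(situation):
    """Mostly pick what has earned praise, but keep some exploration."""
    p = np.exp(preferences[situation] - preferences[situation].max())
    return np.random.choice(n_actions, p=p / p.sum())

def human_feedback(situation, action, reward):
    """reward is +1 ('good') or -1 ('bad'), given by the person."""
    preferences[situation, action] += alpha * reward
```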
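[Editor's note: the second approach, where the device reorganizes itself from observed patterns of use, can be illustrated with adaptive mode switching: reordering the device's functions so the likely next one takes the fewest switches to reach. This toy version uses simple usage counts in place of the richer learned predictions described above.]

```python
# Minimal sketch: an adaptive switching list that reorders a device's
# modes from observed usage, with no explicit feedback from the person.
from collections import Counter

class AdaptiveModeList:
    def __init__(self, modes):
        self.modes = list(modes)
        self.counts = Counter()

    def mode_used(self, mode):
        self.counts[mode] += 1
        # Frequently used modes float to the front of the switching order.
        self.modes.sort(key=lambda m: -self.counts[m])

    def switching_order(self):
        return list(self.modes)

arm = AdaptiveModeList(["power_grip", "pinch", "key_grip", "wrist_rotate"])
for m in ["pinch", "pinch", "power_grip"]:
    arm.mode_used(m)
print(arm.switching_order())  # ['pinch', 'power_grip', 'key_grip', 'wrist_rotate']
```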
Alona Fyshe:
Really cool. What would you say are the big challenges over the next 10 years for your area?
Patrick Pilarski:
I think some of the main challenges are getting to the point where people and their limbs can continually improve together in daily life, all the time.
In part, this is a regulatory challenge; in part, a technical challenge; and in part, more of a sociological challenge, and these really are coupled together, I think. Think of the device like an intelligent assistant that you just happen to have connected directly to your body. To get to the point where a person and their device can continue to improve together, we have to think about the infrastructure that needs to be in place.
Like, you wouldn't want to be driving under a bridge with your bionic limb, lose cell phone service, and suddenly it doesn't do what you want because its brain is actually living in a cloud. That would be disastrous.
So how do we get more intelligence wearable and deployed on the devices themselves, in a way that is owned and trusted by the users, and also certifiable by regulators, the people who help to ensure that our health and medical devices are in fact safe and reliable?
This is, I think, the opposite of some of the large trends we're seeing in the world right now, which say: let's build very large-scale systems deployed on global infrastructure. It's saying: how do we take all the benefits of some of that and distil it down to the point that it can actually be run and used and built upon on a device itself, while that device is interacting with a person?
It's a big challenge. I think we have a good toehold on actually solving that challenge, but it will take a lot of work from a lot of people globally.
Alona Fyshe:
Yeah, and it makes me think of the self-driving car, which is something people have been watching for a while. It requires a lot of technology and a lot of infrastructure, but it also requires that people trust it, right? And want it integrated into their lives. It requires regulation.
So I see a lot of that mirrored in what you're talking about.
I've heard rumours that your lab is working on something kind of amazing: bone-anchored prostheses.
Patrick Pilarski:
Yeah, this is the next big step for us as a lab, I think. It's the focus of our next five years of work; we have a large-scale grant to do bone-anchored prostheses. This is, as you mentioned, the attachment of a prosthetic device to the body not by straps or harnesses or sockets, but by connecting a robotic device directly into the skeleton of the human body, and then building a next generation of AI methods to allow a person to fluidly and naturally control those devices that are now rigidly fixed to their skeleton.
We're also building into this picture the idea that we can rewire the nerves of the human body to give more communication channels between the person and the machine.
Earlier, we talked about how there's a communication relationship between a person and their prosthesis. So as part of this picture, we're looking at anchoring robots directly to the bones, rewiring the nerves of the body so a person can send and receive more, and clearer, signals to and from the machine, and then developing, again, a new class of artificial intelligence and machine learning algorithms so that the machine can interpret what people want and provide very flexible, natural control, but also send signals back to the person that are contextually appropriate and help them feel like this limb is more a part of their body and more trusted by them.
Alona Fyshe:
Are there other people in Canada working on stuff like that?
Patrick Pilarski:
We are the first site in Canada to bring both lower-limb and upper-limb bone-anchoring surgeries to users, to patients and rehabilitation hospitals. And it's gonna be really exciting, because it gives us the opportunity to leverage some of our local expertise in artificial intelligence to really change the way the whole world thinks about people using these bone-anchored prostheses.
Alona Fyshe:
It's just crazy that that's happening in Edmonton. You'll have to come back and tell us more about that as it progresses.
Patrick Pilarski:
Absolutely.
Alona Fyshe:
Thank you so much, Patrick, for coming and joining us today.
Patrick Pilarski:
Thank you for having me.