Case Study
For Shani Gwin, even something as simple as browsing news online can be a violent, hostile experience.
Gwin shares that she is a sixth-generation Métis woman, connected to the Cunningham family on her mother's side and the Fergusons on her father's side, and a descendant of Michelle First Nation through her paternal grandmother.
She says any time spent online — whether it be social media, news sites or nearly anywhere else — likely means dealing with racist comments and threats against Indigenous People.
"For a long time since Facebook and Twitter came into the world, I’ve had to read a lot of comments about who I am. And maybe it doesn't say 'Shani is this person,' but it says 'Indigenous People deserve this.'" she says.
"It's violating. It's abusive, it's violent."
And it isn’t just readers who are harmed by online hatred — it also takes a toll on those who moderate the comments. Gwin is the founder of pipikwan pêhtâkwan, an Edmonton-based communications firm that focuses on bringing attention to Indigenous projects and organizations. She says many of the clients she works with struggle to curb racist messages on their social media pages. Those messages have to be flagged, reviewed and dealt with, which can be a harmful task for Indigenous employees.
Gwin thought there might be a way to make the process safer for everyone involved. That’s what prompted her to approach Amii (the Alberta Machine Intelligence Institute) to explore whether artificial intelligence and machine learning could provide a solution.
"I felt like they did care. The team at Amii understood what I wanted to do and they were really interested in understanding the problem that I had. And I felt that I was working with the right experts to support me and what I wanted to do."
-Shani Gwin, founder of pipikwan pêhtâkwan
Her hope was to harness the potential of AI to help identify potentially racist comments on platforms like Facebook. Machine learning has been used to identify and flag harmful comments online before, but Gwin says she hasn't found anything built specifically to handle anti-Indigenous hatred. A machine learning model would need to be trained to not only recognize explicitly racist comments but also pick out the coded language and hidden bias that characterizes much of online hate.
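The article doesn't describe the model the team is building, but as a rough illustration of the kind of classifier involved, here is a minimal baseline in Python using scikit-learn. Everything in it (the library choice, the tiny invented comments, the labels) is an assumption for illustration, not the project's actual code.

```python
# A minimal, hypothetical sketch of the kind of comment classifier such a
# project might start from: TF-IDF features plus logistic regression.
# The inline dataset is invented placeholder text, not real training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training examples: (comment, label) where 1 = flag for review.
comments = [
    "Great article, thanks for sharing!",            # benign
    "This community deserves our support.",          # benign
    "Those people are always looking for handouts.", # coded hostility
    "Why do they get special treatment?",            # coded hostility
]
labels = [0, 0, 1, 1]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word and bigram features
    LogisticRegression(),
)
model.fit(comments, labels)

# Probability that a new comment should be held for human review.
print(model.predict_proba(["They always want something for nothing"])[:, 1])
```

A bag-of-words baseline like this can catch explicit slurs and repeated phrasings, but it would miss exactly the coded language the paragraph above describes, which is why the training data and model choice matter so much here.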
The project is in its first steps, which involve collecting data to train a machine learning model on what kinds of comments to flag. That could mean drawing on existing datasets that identify words associated with racist speech, or gathering examples directly from social media pages using a combination of human effort and automated software.
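As a hypothetical sketch of that gathering step, the snippet below seeds a small keyword list (standing in for an existing hate-speech lexicon) and scans collected comments for matches, so human reviewers label a focused candidate set rather than reading everything. The seed terms and comments are invented placeholders.

```python
# Hypothetical candidate-gathering step: seed terms from an existing
# lexicon, then surface matching comments for human labelling so
# reviewers are not exposed to every comment on a page.
import re

def candidate_comments(comments, seed_terms):
    """Yield comments containing any seed term, for human labelling."""
    pattern = re.compile("|".join(re.escape(t.lower()) for t in seed_terms))
    for comment in comments:
        if pattern.search(comment.lower()):
            yield comment

# Placeholder seed terms; a real list would come from an existing dataset.
seed_terms = ["handout", "special treatment"]
comments = [
    "Lovely photos from the event!",
    "Why do they get special treatment?",
]
for c in candidate_comments(comments, seed_terms):
    print("needs labelling:", c)
```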
The behaviour of the person posting the comments might also prove useful to the model, especially for subtle or coded racist language. Adding data on how a commenter behaves, or who they are interacting with, could help the model pick up implicit forms of racism.
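One common way to combine text with behavioural signals is to concatenate the two feature sets before classification. The sketch below does this with scikit-learn's ColumnTransformer; the behavioural columns (account age, prior flags) are invented examples, not features the project is known to use.

```python
# Hedged sketch: text features plus invented behavioural features,
# combined into one classifier input. Purely illustrative.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

data = pd.DataFrame({
    "text": [
        "Nice post, congratulations!",
        "Why do they get special treatment?",
        "Thanks for the update.",
        "They always want something for nothing.",
    ],
    "account_age_days": [900, 12, 1500, 30],  # placeholder behaviour signal
    "prior_flags": [0, 5, 0, 3],              # past moderation history
})
labels = [0, 1, 0, 1]

features = ColumnTransformer([
    ("text", TfidfVectorizer(), "text"),  # language signal
    ("behaviour", StandardScaler(), ["account_age_days", "prior_flags"]),
])
clf = Pipeline([("features", features), ("model", LogisticRegression())])
clf.fit(data, labels)

new = pd.DataFrame({
    "text": ["They always want handouts"],
    "account_age_days": [8],
    "prior_flags": [4],
})
print(clf.predict_proba(new)[:, 1])  # probability of flagging
```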
Once a comment has been flagged, it can be hidden from view and then put into a holding space until a human being is in the right state of mind or has the capacity to review it.
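The "holding space" can be as simple as a moderation queue that keeps flagged comments out of sight until a reviewer opts in. The sketch below is a minimal, hypothetical version; a real system would also call the platform's moderation API to hide the comment, which is only noted as a comment here.

```python
# Minimal sketch of a holding queue: flagged comments wait, hidden,
# until a reviewer chooses to take one on. Not the project's real design.
from collections import deque
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class FlaggedComment:
    comment_id: str
    text: str
    score: float  # classifier confidence that the comment is harmful
    flagged_at: datetime = field(default_factory=datetime.now)

class HoldingQueue:
    """Flagged comments wait here until a reviewer is ready for them."""

    def __init__(self):
        self._held = deque()

    def hold(self, comment):
        # A real system would also call the platform's moderation API
        # here to hide the comment from public view; omitted in this sketch.
        self._held.append(comment)

    def next_for_review(self):
        """Reviewers pull a comment only when they feel ready to."""
        return self._held.popleft() if self._held else None

queue = HoldingQueue()
queue.hold(FlaggedComment("c42", "(hidden pending review)", score=0.93))
print(queue.next_for_review())
```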
“If we can put it in some sort of holding pattern … it allows another level of safety, reduces harm, and reduces the potential for being retraumatized.”
Gwin says they are currently working through the data collection phase. The next step will be to train the model.
Pursuing the idea wasn’t just a technical challenge; it was also a matter of trust. Gwin notes Indigenous People have had to learn and adapt to a system that was built by — and designed for — a white majority. The need to navigate that system puts Indigenous People in a vulnerable position and requires a lot of trust.
"Trust is something that Indigenous People have had to be open with. It's not hard for us to get distrusting with a history of oppression,” she says.
Gwin has had her own experiences having to work around that system. While working in government communications, she felt keenly aware that she was essentially the only Indigenous employee there. She often had to challenge some of the decisions that she saw as being made through a primarily white lens, which could come with a professional cost.
It led her to form pipikwan pêhtâkwan in 2016. The firm is Indigenous-owned and led, with a majority-Indigenous staff. Building trust with other Indigenous-led organizations allows the firm to help elevate projects and issues that might otherwise be ignored, she says. While Indigenous communities are not homogeneous, she says there are shared values and approaches that help pipikwan pêhtâkwan create authentic relationships with their clients.
Gwin says trust is just as important to her as an Indigenous woman in business; her ideas have often been dismissed or changed with little regard for her actual needs. But working on the machine learning project with Amii was a much different experience.
"I felt like they did care. The team at Amii understood what I wanted to do and they were really interested in understanding the problem that I had. And I felt that I was working with the right experts to support me and what I wanted to do." she says.
The project was the first time Gwin had worked with artificial intelligence. In fact, she says the experience with Amii has her looking back at other ways artificial intelligence could support her work.
"I feel validated enough that I think I should go back to that project that someone said no to me about, and just ... try again."