
“Everyone shares the responsibility of reducing unconscious bias in artificial intelligence”: An interview with Kenza Ait Si Abbou

As part of our macht.sprache. / case.sensitive. project, we’re speaking to various experts who deal with language, translation or artificial intelligence. Kenza Ait Si Abbou works as a manager for robotics and artificial intelligence. In her book Keine Panik, ist nur Technik (Don’t Panic, It’s Just Technology), she aims to give people a basic understanding of artificial intelligence with a touch of humour and examples from everyday life. In our conversation, she explains that there are of course challenges in the field – for example, in the form of unconscious prejudices – but that solutions can be found for everything, provided the will is there.

Kenza Ait Si Abbou has a clear goal in mind with her book Keine Panik, ist nur Technik: “I want to educate people about artificial intelligence (AI). I wrote an accessible nonfiction book so that people who aren’t experts in technology, who don’t really have anything to do with the subject, can still engage with AI.” This is so important to her because AI is going to change the whole world, including all professions – without exception. She tells me, “The sooner a person gets to grips with the subject, the better that person can prepare for their own future. Far too many people still think AI is very far away from their own lives.”

At poco.lit., we are currently developing a translation tool that recognizes politically sensitive terms, e.g. those related to race and gender. More and more people work with multiple languages and use machine translation on a daily basis. As publishers of poco.lit., we grew frustrated with the fact that freely available translation programs almost always default to masculine forms in German, so that scientist usually becomes Wissenschaftler and not Wissenschaftlerin or, even less likely, Wissenschaftler:in. Kenza Ait Si Abbou explains that programs have learned these modes of translation and don’t consciously make such choices. She points out that machine translation has parallels with human behaviour: humans don’t choose to be insensitive to language on purpose; they’ve learned to be that way. Upbringing and social environment play a central role. If most of society does not use gender-inclusive or gender-neutral language, then machines that learn from the available data simply reflect social norms.
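To make this concrete, here is a minimal sketch in Python of how a tool might flag masculine-default job titles in German machine-translation output. This is not the actual macht.sprache. implementation; the word list, the suggested alternatives and the function name are illustrative assumptions only.

```python
# A minimal sketch of flagging masculine-default job titles in machine-translated
# German text. NOT the macht.sprache. implementation; the lookup table and
# suggestions below are illustrative assumptions.

# Hypothetical lookup: masculine form -> gender-inclusive alternatives
GENDERED_TERMS = {
    "Wissenschaftler": ["Wissenschaftlerin", "Wissenschaftler:in"],
    "Lehrer": ["Lehrerin", "Lehrer:in"],
}

def flag_gendered_terms(translation: str) -> list[dict]:
    """Return flagged terms with suggested gender-inclusive alternatives."""
    flags = []
    for token in translation.split():
        word = token.strip(".,;:!?")  # drop surrounding punctuation
        if word in GENDERED_TERMS:
            flags.append({"term": word, "suggestions": GENDERED_TERMS[word]})
    return flags

# Example: a typical machine translation of "The scientist presents her results."
print(flag_gendered_terms("Der Wissenschaftler stellt seine Ergebnisse vor."))
# -> [{'term': 'Wissenschaftler', 'suggestions': ['Wissenschaftlerin', 'Wissenschaftler:in']}]
```

Rather than silently rewriting the translation, a tool like this would surface the flagged term and let the user decide which form fits – exactly the kind of sensitivity that the freely available programs currently lack.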

In her book Keine Panik, ist nur Technik, Kenza Ait Si Abbou writes about cognitive bias as a particularly dangerous phenomenon. Cognitive bias occurs when people think only from their own perspective and assume that the whole world works the way they do. In our conversation, Kenza Ait Si Abbou explained that such bias matters most when people are in positions of power: “If I’m a judge and I have prejudices, then it’s very dangerous for the accused person if they don’t meet my standard of right and good. If I’m talking to other kids’ parents at my son’s daycare and they’re prejudiced, it’s not nice, but it’s not as dangerous.” Power and context play a key role. Racism, too, is ultimately about the power behind the discrimination. Kenza Ait Si Abbou makes clear that “the difficult thing about AI is that it gains power by being used so much. It’s a quantitative power.”

Kenza Ait Si Abbou first became aware of the structural dimensions of bias in AI a few years ago at a “Women in AI” gathering. When she started researching afterwards, she found evidence of bias across several sectors, gave a TED Talk on the topic, and realized that a real shift was taking place: bias in AI, many were suddenly realizing, was not something individual, but something structural that affected the whole world. Many technologies are not neutral; they learn biases. Kenza Ait Si Abbou is accordingly now a sought-after interviewee, but she stresses that the media sometimes treats the problem too one-sidedly: “The homogeneity of development teams is of course part of the problem – it’s an industry that is largely made up of white men. Technologies are made by people, so people have to take responsibility for them. But the responsibility doesn’t just lie with the development team. After self-learning systems leave the factory floor, they continue to learn, and they learn from users. In other words, it is society that continues to learn, and so society is just as responsible as the development team.”

Ultimately, Kenza Ait Si Abbou advocates for more critical reflection in development teams so that AI can contribute to more equality in the world. At the same time, society needs to deconstruct and unlearn its unconscious biases so that AI can become less discriminatory. “If I’m unaware that my actions direct machines to behave one way or the other, then I can’t consciously influence them. If I do know, I have to think about the consequences of every like I give. I’m sure some people find that exhausting.” In her explanations, Kenza Ait Si Abbou highlights the positive as well as the negative. Only those who understand the problem and don’t shy away from the work involved can contribute to tackling it.
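Her point that every like directs machine behaviour can be illustrated with a small sketch. The following toy recommender is a hedged illustration, not a description of any real product: its scoring scheme is an assumption invented for this example, showing how one-sided user feedback accumulates into the kind of “quantitative power” she describes.

```python
# A minimal sketch of how user feedback can amplify bias in a self-learning
# system. The scoring scheme is an illustrative assumption, not any specific
# product's algorithm: each "like" nudges the system toward more of the same.

from collections import defaultdict
import random

scores = defaultdict(lambda: 1.0)  # every category starts out equally likely

def recommend(categories):
    """Pick a category with probability proportional to its learned score."""
    weights = [scores[c] for c in categories]
    return random.choices(categories, weights=weights)[0]

def register_like(category):
    """Each like makes similar content slightly more likely in the future."""
    scores[category] *= 1.1

categories = ["tech", "sports", "politics"]
for _ in range(1000):
    shown = recommend(categories)
    if shown == "tech":  # a user who only ever likes one kind of content
        register_like(shown)

print({c: round(scores[c], 1) for c in categories})
# "tech" now dominates by many orders of magnitude: the system has learned
# the user's one-sided behaviour and will keep reinforcing it.
```

The feedback loop is the point: the system left the factory floor unbiased, and it was everyday usage that skewed it – which is why, as Kenza Ait Si Abbou argues, responsibility is shared between developers and users.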
