September 18, 2020

#5: Algorithmic Colonisation with Abeba Birhane

Is there such a thing as "ethical AI"?

In this podcast episode, ETC Group speaks to Abeba Birhane, a PhD candidate in cognitive science at University College Dublin in the School of Computer Science. Birhane talks about her work on the algorithmic colonisation of Africa, why we need to normalise critical thinking on new technologies, and whether there is such a thing as "ethical" AI.

Listen below or subscribe via Apple, Spotify, Google, or use our RSS feed with any other service.

Transcript:

Zahra Moloo: Hello, everyone, and welcome to another episode of the ETC group podcast. I'm Zahra Moloo, based in Montreal. I work with Etcetera Group on technology assessment in Africa. Etcetera Group works on monitoring the impact of emerging technologies and corporate strategies on biodiversity, agriculture, and human rights. Today, I'm very happy to be speaking with Abeba Birhane.

Abeba is a PhD candidate in cognitive science at University College Dublin, in the School of Computer Science. She studies the dynamic and reciprocal relationships between emerging technologies, personhood, and society. She's also a contributor to Aeon Magazine and blogs regularly about cognition, AI ethics, and data science. First of all, thank you so much for being with us today.

Abeba Birhane: Thank you for having me.

Zahra: To start us off, can you tell me a little bit more about your research? Your research looks at the relationship between emerging technologies, personhood, and society. Can you tell me a bit more about that? What does that mean?

Abeba: My background is in cognitive science, and my PhD is currently in cognitive science. What my research tries to do is map out what cognition is, what personhood is, what people are, from a very narrow tradition in cognitive science called embodied cognitive science. It's a newly emerging area of science. It takes its background from general systems thinking, cybernetics, that kind of tradition. Coming from that background, you get an image of a person where you understand the person as interactional, inherently embedded in social contexts, and inherently relational with others.

The person cannot exist independent of others. This is a philosophical, metaphysical point. What I try to get at the heart of is that what you get is people who are ambiguous, people who are constantly changing, people who are inherently non-determinable. This is where machine learning systems come in. If people are so non-determinable and always changing and ambiguous, how can we then design machine learning systems that can predict human behavior or human action?

When you push this question further, the answer is no, you cannot predict, not with precision, what a person will do or how a person will behave or act. This is where the ethics aspect comes in. My work sits at the intersection of cognitive science, machine learning, and ethics. These days you find machine learning being used as a hammer that we apply to everything. If we are just developing and deploying machine learning systems, ignoring the fundamental nature of humans, then what are the ethical costs, and who pays that cost the most?

When you see machine learning systems being found discriminatory or biased, you will see that a lot of the discrimination, the bias, the harm, disproportionately falls on minoritized people, racial and gender minorities. Anybody who doesn't fit the status quo is often seen as an outlier, and those are the people who suffer the most. That's where the ethical aspect of my work comes in.

Zahra: It's very interesting what you said. I'd like to come back later to this issue of minorities and the ways in which machine learning and AI end up marginalizing certain groups or having different impacts on different groups. You also mentioned that in some of the writing you've done. I want to talk to you about an article you wrote called "The Algorithmic Colonization of Africa." You talk about how there's now an African equivalent of Silicon Valley.

In Addis Ababa, in Lagos, in Nairobi, you have these centers, these hubs where there's so much technological "innovation" going on, and you talk about how data is being applied in agriculture and health. Can you explain what new projects or strategies are being pursued in these places on the continent that make use of data-driven technologies and AI?

Abeba: First, I'll say that the article started as a blog post, and then it became an essay for Real Life Magazine. Then it became a journal paper. It emerged out of my frustration when I went to one of the biggest conferences in Morocco a year or so ago, which was hosted by the Moroccan government and the royal family there. You had all sorts of scholars who work within AI and computer science, you had policymakers, you had governments, you had NGOs, you had people from the UN, ministers. All that.

The idea is to bring everybody who is invested in technology into one room to have a discussion. I was very, very excited to be there, but as the conference went on, I became skeptical and even agitated because it was not what I expected. I was expecting great dialogues that were very critical and that discussed the limitations and problems of AI as well as its positive side, but that didn't happen. It was all about how state-of-the-art technology is going to do this for Africa, how this machine learning system is going to leapfrog African people out of poverty, that kind of stuff.

There was literally no pushback and not much criticism. That article came out of frustration with the lack of critical thinking when we think about technology in Africa. The core message is: when it comes to technology, we seem to lack the ability to critically appraise it before we import it. We go with, "This is technology, this is groundbreaking. This is innovation. This is development," without first asking, "Is it appropriate? Does it answer specific African questions? Is it going to be helpful in this context, and what are the limitations? Who is going to be left out? Who is going to be even harmed? Who is going to benefit?" These are the kinds of questions we don't tend to look at.

Zahra: It's very interesting because at Etcetera Group, one of the things we're really working on with other groups, in collaboration with civil society groups, lawyers, journalists, different people in Latin America, Africa, and Asia-Pacific, is platforms which will allow people to assess new technologies before they arrive and to try and have a bit of a critical perspective. I'm wondering, on the African continent specifically, why is it necessary to be skeptical and critical of these so-called innovations, these game-changing technologies that are coming now?

Abeba: There is not much that is distinct about the African continent. There seems to be a general over-hype and lack of critical appraisal when it comes to technology globally, wherever you are. But what makes it a matter that is specific to Africa is, if you take Nigeria, for example, which I look at in the paper, I think about 90% of its infrastructure, its software, is imported, while the engineers and technologists within Nigeria themselves contribute the rest.

Here, what's important to ask is not only whether the importing of such technology impoverishes African scholars, African technology systems themselves, and their capability to create, but also whether software that's imported from the global North can actually be relevant to the Nigerian, or to the African, context in general. Like any knowledge, like any product, like any app, software, or technology, there are always people behind it. When you create something, you have burning questions, you have backgrounds, you have histories, you have motivations, and you have interests.

So your creation is not separate or separable from your own personal and subjective interests. Those questions, those interests, may not necessarily be the questions of, say, a community in Nigeria. So it might just end up being not that relevant.

Zahra: You also mention in your piece the discourse around technology, a lot of the language that we apply when it comes to Africa, for instance "data-rich continent" - Africa is now a data-rich continent - and there's a lot around "data mining." These terms are often applied to Africa in the technology sphere. What is wrong with this discourse?

Abeba: I don't know about you, but for me, when I hear "data-rich continent" or "data mining," my mind goes immediately to the colonial era and colonial powers going into their colonies to mine some kind of mineral, some kind of thing, some kind of object. Not only is it wrong for people from the global North to go over and mine something elsewhere that doesn't belong to them, but most importantly, it's also really wrong to think of people, or data subjects--

I don't even like the term "data subjects" - to think of people, or where data comes from, as objects that you can mine. You treat people as things, as objects, rather than entities that have hopes, that have fears, that have emotions, that are living systems. You reduce them, I think, to something that you extract data from. I find it very problematic for those two reasons.

Also, the other discourse that's very prevalent is that you hear things like, "Data is going to transform the continent," or, as I said earlier, "Technology is going to leapfrog." I don't like that term, "leapfrog the continent out of poverty," into development or something. This narrative, this discourse, reduces very complex social, historical, and cultural problems to something that can be solved with just a simple technology or just a gathering of data. Yes, data can be part of the solution, but not all those problems will disappear all of a sudden if you just have the technology or if you just have the data.

Announcer: You are listening to the ETC group podcast.

Zahra: In terms of the issue of ethics, which is also part of your research, a lot of the arguments are about using data for all sorts of things on the continent, and in other cities across the world, whether it's facial recognition technology, drones being used in agriculture, AI being used in agriculture... I think the discourse about data mining, the data-rich continent, and this idea of leapfrogging into the future builds on the idea that there are gaps or resources that we don't have on the continent. What would be your answer to people who say, "Well, the continent needs these kinds of technologies," whether it's in agriculture or health or even security?

Abeba: I guess my answer will depend on the specific question. When it comes to health, or security, or facial recognition systems, I'm extremely skeptical because I'm a cognitive scientist. As I said earlier, the more you veer towards people and social systems, the more you are entering a system that is inherently political. We have values, we have subjective contexts. So with any application of technology in those circumstances, for example in security or any kind of face recognition system, I see the alarm, I see the potential harm, and I see the potential for things to go wrong.

The face recognition system that's being imported might be trained on faces that are distinctly different from African faces. Not that there is an African face; it's a continent that has 54 distinct countries. I'm not saying there is an African face, but what I'm saying is that when you apply it elsewhere, it's not going to be effective. It's not going to be accurate. But I would also question the accuracy itself, because whatever the security application is, a suspicious face or a trustworthy face, these are the things that face recognition systems for security are supposed to detect.

These are not things; these are psychological and social characteristics that do not manifest on faces, so you cannot detect them on faces. Even if, say, Ethiopia develops a distinctly Ethiopian face recognition system that's trained and validated on that kind of data, that doesn't mean you will get a face recognition system that's accurate and that works. You won't. These are human characteristics that cannot be read off of faces.

Zahra: Building on that, is there such a thing as ethical AI? Can you have a different way of applying artificial intelligence, whether it's in security or elsewhere, or is it not really possible, given that there are certain interests behind the use of artificial intelligence and these kinds of technologies?

Abeba: That's a very big question. Even though I tend to give off a negative vibe, I'm positive in this regard. Whether AI becomes a tool that can be used for social good, or whether it becomes a surveillance tool, or whether it becomes a tool that harms minorities, really depends on the objective. This was the core message of Cathy O'Neil's book Weapons of Math Destruction.

If your objective is, say, to trace and track people who commit white-collar crime, big bankers and CEOs doing all kinds of tax evasion and that kind of stuff - rich people's crime - there is no political willingness, because those are the people in power, and you will not get the funding. There will not be the political will to do it. AI systems exist in a very capitalist system, and there are rewards and incentives if you create something that makes money. But I don't think it's entirely impossible to develop AI that can be used to benefit people who are disproportionately harmed and discriminated against.

Zahra: Do there exist on the continent spaces where people can be critical of new technologies, of data-driven technologies, of artificial intelligence? What seems to be the case is that there is, like you described at that conference, a lot of push to involve young people and to really expand these technology hubs. Do you think such spaces exist on the continent, where you can be critical, where you can start asking these kinds of questions? And, building on that, what needs to be done to expand the space for critical voices on this issue?

Abeba: I don't know if there is such a space across the continent or even country-wide, but I know for sure so many individuals who have very sharp, critical voices, who are constantly speaking out, who are constantly writing in a very direct and clear manner about the harms and problems of technology, about its colonial tendency, about how these kinds of technologies can inherently carry white supremacist ideas in a very nuanced, internalized way. Say, for example, how we define 'beautiful' or 'professional' or 'successful': those kinds of definitions, if they come from the global North, tend to take the norm from that context. So the importing of those contexts, in a very nuanced and indirect way, has a component of white supremacy in it.

Zahra: What would we need to create, let's say, a voice on the continent that's really looking at the nuances of these technologies?

Abeba: I guess it's early stages. Everything is, "Move, move, move," or, "Go, go, go." If you take the "move fast and break things" metaphor, we are at the moving-fast stage, so nothing is settled. So depending on how people come together, even me and you getting together and discussing these things creates some kind of space. Maybe we need more of these dialogues, more of these engagements, and some invested people to create a forum, to create a space, if that space is not already there, and it's possible to make it a norm. It's possible to normalize critical thinking on technology. It's quite possible to carve out a space for this kind of community and this kind of thinking.

Zahra: Thank you so much. It was great speaking to you.

Abeba: Thank you so much for having me.

Zahra: That was Abeba Birhane. To follow more of her work, you can find her on Twitter at @A-B-E-B-A-B, and her blog is at abebabirhane.wordpress.com. That brings us to the end of our podcast for today. Thank you very much and good evening.