A New Framework for Advancing Racial Literacy in Tech

By Brett Beasley

Mutale Nkonde, CEO of AI for the People, shares her approach to advancing racial literacy in the tech industry.


Chris Adkins:
Welcome, everyone, and welcome to September 2020. We are very excited at the University of Notre Dame, the Mendoza College of Business, and the Notre Dame Deloitte Center for Ethical Leadership to bring you this year's Cahill Lecture. Every year we highlight a thought leader in the area of ethical leadership, especially someone who is really thinking ahead about the challenges of today and tomorrow, and we couldn't think of a better speaker than Mutale Nkonde.

Now when we originally planned this session, this was in the fall of 2019, and it was scheduled for March of 2020, when, if you had been on campus, we had not only a fantastic event scheduled in person but a reception. So remember what receptions were like, when people could come together and eat and drink and really get to mix and build those relationships. We're so sorry that we can't do that this September, but we are so thankful and grateful that Mutale could join us, at least virtually, in September.

There's also some other great news for Notre Dame. First, we have opened a tech ethics center. Second, Mutale has been selected as a fellow of the Notre Dame Institute for Advanced Study this year. So we're so excited to have Mutale as part of the community here at Notre Dame, as a collaborator and a thought leader, and, as we say in the Mendoza College, to grow the good in business.

So without further ado, we want to talk just briefly about the Cahill Lecture series. So much of the work at Notre Dame is due to very dedicated organizations like Deloitte, but also individuals like the Cahill family, who enable us to do great work, to pay it forward, and to develop the next generation of leaders in business and beyond. So we are very, very thankful that the Cahill Lecture continues to celebrate and elevate thought leaders in the space of ethics and leadership, and, for this year, really focuses on the emerging issues in technology. We also didn't want to focus on just the tech issues. We wanted to focus on some of the issues related to systemic racism, and on how technology can be a hindrance but also a huge help. It can be a step function forward for so many of us. So we are very excited to have Mutale here to talk about some of the research she's been doing.

You may have seen her background a bit in the past, but she has done work in a variety of ways: as CEO of AI for the People, in the AI governance space, and in collaboration with Harvard's Berkman Klein Center for Internet & Society. In particular this year, she's joining us at Notre Dame as part of the Institute for Advanced Study, and she just shared with me, right as our call was starting, that TikTok reached out to her to be part of their content advisory board, which I think she's going to share a bit more about.

So without further ado, I'm going to turn it over to Mutale. Thank you so much for joining us. We are so happy to have you. And by the way, the format for today will be about 30 to 40 minutes of content before we turn it over to Q&A and as she shared with me, she loves questions so please get them ready and then my colleague Brett and I will facilitate the questions. So thank you.

Mutale Nkonde:
Thank you so much, Chris, and thank you so much to the Notre Dame community and everybody that showed up. I am absolutely bursting with pride here from Brooklyn, New York, to say that I am on book leave this year, so I will be working with the Institute for Advanced Study on some of the themes that I'm going to discuss. One of the interesting things is that last year, when I decided that I would take this on and come and visit you all, I never, ever, ever thought that I would be coming in the guise of actively trying to make business better, and I'm so happy that that's your theme.

So to start off, my name is Mutale Nkonde. I'm a researcher, I lead a nonprofit, and one of the things that I'm really, really looking at is how can we build technical systems that will be not just for a small sector of society but for what I would consider the very least of us, the people who are negatively racialized, the people who are negatively gendered, women, trans folks, disabled folks, folks that are often in the margins but use these technologies in the ways that we all do.

So I'm coming to you really to discuss a framework that I developed with colleagues while I was a fellow at Data & Society, called Advancing Racial Literacy in Tech. One of the things that we were really concerned about at the time was that much of the media analysis, and much of the academic literature, was looking at ways that algorithmic systems take on meanings from real life. So they were taking on meanings of sexism, they were taking on meanings of racism, and they were taking on meanings of many other "-isms" that make life miserable for those who are considered marginalized, and we couldn't understand why. These are technical systems, they're machines, they're neutral, what is the issue? So we teamed up with educational researchers at the University of Pennsylvania who were already looking at this idea of racial literacy, and we used their work in psychology around what makes people racist or not to ground this theory and then extend it to technology.

So the racial literacy framework is actually really simple, in our mind. It has three basic elements. The first one is a cognitive element. It's understanding the interplay between race, technology, and society, and what it means to develop technical systems for a society marked by people with individual identities. We also, from our friends in psychology, started to explore this emotional aspect around race. How can we talk about it? Why don't we talk about it? What are the barriers? We found that in America, when people start to talk about race or, more specifically where my work sits, anti-black racism, their throats get tight, their shoulders hunch up, their stomachs come in, and they don't want to talk about it; they want to avoid it. That's what psychologists would consider an emotional five, six, or seven. So you're uncomfortable, your alert is on, and you're about to kind of shrink back.

What we wanted to do was to get people to a situation where they can easily, as Chris said in the introduction, face systemic racism and not want to throw up, run outside the room, and be somewhere else. So how do we do that? How do we talk about that? What is the emotional driver? Once we have an understanding of the interplay between race, society, and technology, the acknowledgment that it's difficult to talk about, how can we use anti-racism, and for the purposes of the case study I'm going to share, as a tool for democracy?

As I go through my case study, I want us just to think about that framework, this idea that race has an element of cognition, we have to understand its origins, why it works and how it works in society. We have to also acknowledge that it's difficult to talk about and do the work of deconstructing and examining that. Then we obviously want an action plan that will move us towards anti-racism and it's really ... as I go forward, I'm going to be describing what is part of the research for a book I'm working on this year, which is called Automated Anti-Racism: Why We Have to Actually Name Race to Go Towards What I Call Just Technological Futures. So that is what we would consider racial literacy.

One of the chapters in my book is going to really look at our responses to political disinformation, and the question that I'm posing in this talk, and even in the book, is what would a racially literate response look like to this hard and urgent question? So in 2017, Robert Mueller, who is depicted right here on the screen, led an investigation into interference in the 2016 election. One of the things they found was that African Americans were the group most targeted by Russian disinformation, and as somebody who is a race scholar and who's interested in racial literacy, this was very interesting to me, because social media platforms are driven by algorithms, which are technological systems. So this racism was being delivered through technology, but why African Americans, and why them so much?

One of the things that we found is that there has been a long history of this, which I would love to discuss in a different venue, but just for the purpose of this talk, we found a history of about 100 years of Russian interference in black freedom struggles. We saw it in 2016 and then we started to see it in 2020. This is the text of a tweet. I have taken off who sent the tweet, for ethical purposes. I want to maintain their privacy. You'll notice I've also blocked out the name of the group, and the reason I've done that is that the work we're doing is an ongoing investigation, so we don't want to alert them. But just to let you know, the type of thing that is happening on black Twitter right now is messages like this: "Vote Down Ballot this 2020 Presidential Election. This means when you get your ballot leave the line for President blank or write in this group." They've also talked about writing in Kanye West here. "No Biden. No Trump. Then go ahead and vote locally. Vote Down Ballot."

So what does this actually mean? Because of the way social algorithms work, this tweet will turn up on the timelines of people who have interacted with it, through retweets, and we've seen evidence of this tweet being shared within Facebook groups. This is an idea that is circulating in the populace. This was shared on Twitter on August 6, 2020, and it actually references something that happened in 2016.

So in 2016, 70 thousand people in Detroit voted down ballot. They used this strategy. This helped swing the state of Michigan, which went to Obama in 2012, to Trump in 2016 by only 10 thousand votes. So this was a real problem for us. It was an ethical problem, because this is obviously not true, particularly because this campaign that shall remain unnamed, as well as Kanye West, is not going to return a government that would be functional for us. It would be what many would consider a distraction, and people should be allowed to vote Biden or Trump, depending on their preference. That is why we are in a liberal democracy.

As we were looking at this phenomenon, we were like, "Well, who actually believes this? Is this something we should be worried about?" And we saw this tweet yesterday. So Cori Bush is the congresswoman-elect from Ferguson, Missouri. She is very active in helping to get people to vote, and one of the things that you'll see in this tweet is that she says, she's a Democrat, "But that doesn't mean as much if Trump is in the White House. Vote him out, vote down ballot, and let's get to work pushing the party to the left."

Now this is the vote-down-ballot narrative that I just explained, and I've just explained how that narrative, if taken up in black communities, is going to favor the incumbent candidate over the challenger. So this is something that we are obviously seeing as a problem, but because this information is being algorithmically mediated, this is a technology problem as much as it's a race problem, as much as it's a democracy problem.

So what have we been doing to try and figure this out? This is a computational analysis that we have been doing. It's a 106-thousand-tweet Twitter data set, and we've identified at least 600 accounts within this data set that are behaving like bots. Among the top mentions in that analysis, number seven is wishbumpacoulda, which was in the top ten during the DNC convention and actually relates to this particular tweet, and we believe that this account is inauthentic. You'll see that the name of the account is Reporting-live-from-the-Kremlin. We did a reverse Google image search on this picture, and it looks to be somebody in Puerto Rico. We are hypothesizing that this account is a Russian bot, but it's also number seven in the top ten mentions among the accounts that we're following.
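
To make the kind of analysis Nkonde describes concrete, here is a minimal sketch of the heuristic bot-flagging and top-mention counting a researcher might run over a tweet data set. The file name, field names, and thresholds are hypothetical illustrations for this sketch, not details of her actual study.

```python
import json
from collections import Counter, defaultdict

# Hypothetical input: one JSON object per line, with author metadata.
# All field names and thresholds below are illustrative assumptions.
with open("tweets.json") as f:
    tweets = [json.loads(line) for line in f]

per_account = defaultdict(list)
for t in tweets:
    per_account[t["author"]].append(t)

def looks_bot_like(account_tweets):
    """Crude heuristics: near-duplicate text, a very high posting rate,
    and a brand-new account are common bot signals."""
    texts = [t["text"] for t in account_tweets]
    duplicate_ratio = 1 - len(set(texts)) / len(texts)
    meta = account_tweets[0]
    return (duplicate_ratio > 0.5
            or meta.get("tweets_per_day", 0) > 100
            or meta.get("account_age_days", 9999) < 30)

flagged = [a for a, ts in per_account.items() if looks_bot_like(ts)]
print(f"{len(flagged)} of {len(per_account)} accounts behave like bots")

# Top mentions across the data set, as in the DNC-week top ten she cites.
mentions = Counter(m for t in tweets for m in t.get("mentions", []))
print(mentions.most_common(10))
```

A real investigation would combine many more signals (network structure, posting-time regularity, coordinated hashtags), but the shape of the pipeline, per-account features feeding simple flags, is the same.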

So where are these messages being tweeted? The question we're still asking is, "Well, is this really a problem?" And it's very strategic. We have them in Georgia, which is a swing state. We have them in Florida, which is a swing state. We're looking in Texas, which is a swing state. We're also looking in Michigan, that's a swing state, and we're also looking in Pennsylvania. So these are all swing states where these messages are being advanced, and obviously we know that swing states, states like Michigan that I just mentioned, could potentially change the course of an election.

So what's business's response to this? Well, it's been varied, but the one that we were really interested in is what we would consider the giant in social media: Facebook, which has a following of 2.3 billion people, larger than Christianity. This isn't a spiritual leader and this isn't a leader for the people. This is a business. What they decided they were going to do two days ago, which, now that I'm speaking to you September 4th, was September 2nd, was that Facebook was going to limit political ads for the week before the election. It sounds good, but I've just shown you that a tweet that says "vote down ballot" on August 6th can get into mainstream political strategy by September 3rd and have a mainstream candidate, somebody that will be in Congress, advising their constituents to do the same. So I would suggest that while this intervention is needed, and we need business to stand up in this way, it's not enough. It's too little, it's too late, and the stakes are too high.

So the question then becomes, what would happen if Facebook's response to this were actually racially literate? This is where I'm going to go back to the theory that I posed at the beginning and really try and tease out what could be, or what should be, different in this response. As I said at the beginning, the racial literacy framework has three elements. We have the cognitive, we have the emotional, we have an action plan.

So cognitively, there is the hundred-year history of black people being targeted by Russian disinformation, and what that basically ... well, it wasn't called disinformation. It was originally called propaganda and then, after the Cold War, moved to disinformation. In order to understand the cognition, the history of that, we really have to go back to the 1930s, when the U.S. was in a similar economic situation as it is now. The country was suffering under economic collapse. The reason now is the COVID pandemic and how quarantine has kept people in their homes and unable to go to work, but then it was what would be considered a depression time. As people who are interested in business, we also know that we just had the worst quarter in U.S. history, this past second quarter of 2020, when GDP contracted by 32.9%.

What happened then was the American Communist Party was gaining steam, in the same way that we're seeing socialist and social Democrat candidates really coming in, providing people aid, and being voted in. Cori Bush, whose tweet supporting vote down ballot I showed you, considers herself a social Democrat. What the American Communist Party did was join anti-lynching campaigns. There is a famous case in 1932 of the Scottsboro boys, nine black boys who were accused of raping a white woman and put on trial. The NAACP at the time refused to take their case, but the American Communist Party did. They found them lawyers, they fought for their freedom, and they became anti-lynching campaigners. The communists and the black people in the South led these campaigns until pre-war. So until the U.S. entered the Second World War, there was this alliance.

What happened to that crusade? It was great for black people, but what the Soviets actually did was go to Latin America and other burgeoning freedom movements in Africa and show how racial discrimination in America meant that America was not fit to be world leader, and that led to really widespread dissent. That is the root of the Russian disinformation we're seeing now. So I would think that if social media companies understood that black people have always been targets for this ... we saw this later with the idea that the CIA was responsible for the AIDS crisis in the '80s. That was a disinformation campaign that took huge, huge flight in the black community. If you were a listener of Kanye West, as I was until very recently, then you'll know that Kanye actually talks about the CIA bringing AIDS to black Americans as the reason that black people cannot trust the government. That was political Kanye, not the Kanye that we're seeing now, which is political in a different way. But that lets you know how this seeped into the infrastructure, and it also takes you back to the Mueller report, where we find, look, almost a hundred years later, it's 2020. I started off this story in 1932. Black people are still targets. So there shouldn't be this shock.

So then the emotional. People still, in 2020, despite the fact that anti-black racism is one of the major ways in which Russians are sowing and amplifying division in U.S. society, avoid this in current news. Prior to this talk I did a media analysis of QAnon. QAnon is a conspiracy theory that operates on YouTube and other platforms that we think of as neutral, and they're not; they're algorithmically driven information systems. Facebook was another company that just recently removed a thousand accounts that were spreading conspiracy theories meant to stoke resentment of the current Black Lives Matter protests and to embolden people who would be considered to sit on the extreme right to take a very anti-immigrant, racist stance toward their fellow Americans.

In comparison, there were fifty articles about race and disinformation. Fifty articles. So this is a hundred-year campaign. We know it's a major weakness. We know that it's effective and successful, but there are fifty articles. The reason, I would put forward: prior to becoming a researcher, I was a journalist for fifteen years, and the reason I left the newsroom was that stories about race were not being taken up by editors. If anything, editors were so uncomfortable talking about race that they would avoid those stories altogether, and that's something we actually saw in the research, which I'll discuss just before I close.

So this unwillingness to talk about race really creates, in my opinion, a national security crisis because it really gets to this idea that politically we do not trust what will happen if we bring up these issues and therefore we do not have the tools to save our own democracy.

So what would the action plan be? Well, as I said in my introduction, I sit on the TikTok content advisory board. If this question were put to me by TikTok, I would say that we need more urgency and a complete ban on accounts that we see spreading conspiracy theory, but there is a cost. These accounts are generating advertising revenue, because the way social media platforms work is that when an account has high engagement, so when we like, when we retweet, when we comment, that provides an advertising opportunity for the platform. That advertising opportunity is going to generate revenue. So we saw in 2016 $100 thousand being spent in one month by the IRA, the Internet Research Agency, which was a troll farm based in St. Petersburg that was sending messages that benefited one candidate, the Republican candidate, way over the Democratic candidate. I have no problem with people winning elections, as long as those elections are fair.

So is business going to give up this revenue? As Facebook has said, it would give it up for one week. Or is business going to take the side of promoting democracy and say, "It is more ethical for us to not take this type of business because of the cost to society."

This takes me back to my final slide, and where I would really like to start our discussion. One of the things that we did in Advancing Racial Literacy in Tech was ethnographic interviews with thirty people in tech; half of the set were VPs of engineering in major tech firms, and the other half were diversity and inclusion staff. One of the things that we found was that the engineers really didn't see race. They really didn't think it was an issue. They really didn't think it was something that needed to be talked about in terms of technical design. Because they didn't see race, they also didn't think about how to deconstruct it in their processes. They didn't think about the ways that zip code data and other forms of metadata actually come with meanings of race. They didn't think that that was their problem.

The D&I folks, on the other hand, were very conversant in race. They understood the history of race and racism and how it is still with us today. They understood the potential dangers to democratic processes, as it stood for their companies, whether they were working at YouTube or Google or Facebook or some of the other big companies that we've discussed during my talk. One of the things that really came through was that there wasn't a literacy, there wasn't a language, there wasn't a process by which these two groups could talk to each other. What that resulted in, in 2016, was Facebook saying that they weren't responsible for this, that people could post what they want, that they had protections under the 1996 Communications Decency Act that didn't make them liable for content. Then fast forward four years, and they are only banning political advertising for a week, and what is deemed political? The tweet that I showed you that recommended voting down ballot only has a hundred followers. It appears to be an ordinary person just airing their opinion. If those are the types of Facebook posts that are going to be targeted through this, then the question is, is "political" only coming from a campaign, or is it also messages that harm democracy but seemingly come from authentic accounts, when I've already shown you one account that we strongly believe to be Russian and that even labels itself as reporting live from the Kremlin?

So these are the questions that I wrestle with in my work every day and these are the questions that some of which are going to be raised in my book. I'm at the one thirty mark and I'm really sorry that folks didn't get to see all the slides, but in question and answer, we can certainly go back and we can go through them just to make sure that you have that full experience. But I would love to fall into conversation with you all about what business could do or should do and how being racially literate is not just good for black people, it's good for us all because we have a linked and shared history.

So thank you for inviting me. It's my absolute honor and I'm excited about the conversation.

Chris Adkins:
Thank you so much, Mutale. Hope everyone can hear me. Can you hear me okay? Great. So, wow. I did not expect, frankly, such a tour de force in terms of the history of some of the relationships, where they started, where they've grown. So I think that's incredibly powerful, as well as to understand how much of this may be facilitated through algorithms.

We're getting a number of questions in the chat that Brett is going to facilitate, but I'm wondering, just as a question, because one of the benefits of being the facilitator is that I get to go first, and we actually had some good conversation earlier. When you think about racial literacy, it was informative for me, but for so many ... we're at a university here at Notre Dame, my wife is a preschool teacher, and I know so many folks on the call today are in some form of education. Where does this begin when it comes to, in some sense, telling this narrative, explaining it to people? Typically I would say it this way. Most of us use things like Facebook or Instagram and we don't know the backstory; we don't know what goes on behind the curtain. I think about that even with three adolescents. How do I start teaching them about all the ways that their reality is being shaped by the way they tune in to those feeds?

I know it's a super loaded question, and probably a life project, but I'll throw that out there.

Mutale Nkonde:
This is actually a major part of my parenting project. I was really happy when TikTok approached me. They approached me after the George Floyd protests, and what had been happening was that hashtags related to Black Lives Matter were being downgraded, and they didn't understand why and wanted to understand that social context. They had all the engineers in the world, but they wanted to know, and black creators were really angry.

So I tell my sons, "Mommy's going to be doing some work with TikTok," and they looked at me and they were just like, "Oh, my God. Are you spying on us?" And I said, "Yeah. I really am." They're both avid TikTok users. One of the things that really shocked me was my 12-year-old was like, "Mommy, that would never happen to me because of my algorithm." I said to him, "Because of your algorithm?" He said, "Yeah, because I notice that when we go onto Netflix all your boring house shows and reality shows come up, but when I go to my channel, action movies show up. So we're not seeing the same thing, even though we log in the same way." It turned out that there had been a discussion in one of my kids' text groups about a show, and some of them could see it and some of them couldn't.

So I would suggest that young people are probably already having these conversations, because they're not able to interact the way that I was able to interact with my 12-year-old friends, where I could be like, "Did you see Dynasty last night?" And they would go, "Yeah, it was on channel 3." We were all being fed the same thing via television. That's not the reality that many young people live in, especially now through COVID, where they're physically distanced. So they're not seeing each other every day and shaping culture in the same way. The way that they're shaping culture is through screens.

If it were left to me, I would start talking about trust, and how we know what we know, very early. I would start in the K-through-12 universe, because one of the scariest things that I see is classroom educators telling my children to Google things, and I have to explain to my kids that Google is an advertising algorithm. It is not the supplier of information. We should be going to libraries and we should be asking librarians, because they have a professional obligation to tell us the truth, to point us towards the truth. If we don't have truth, we don't have concepts like justice, because justice for whom? We don't have concepts like democracy, because liberal democracies are predicated on our ability to choose.

If we're having our choices shaped for us, and algorithms, because they want to sell us advertising, are only feeding back our own choices, where is the ability to change your mind? Where is the ability to grow, and where is the ability to learn? I don't know that I answered your question, but I do think that this critical idea, that what you see is being fed to you by a company, is very, very compelling even for the youngest of learners. Really push them back to libraries and the public sector. One of the things I didn't talk about in my presentation is my Congressional work, being on teams that look at algorithmic accountability, the Algorithmic Accountability Act for example; one of the things that we were arguing was that we need laws to make sure that these business algorithms are not making public sector decisions, which may be a different question.

Chris Adkins:
No, it's an excellent question, and you extended it from how we help our children, who are being presented with information that in many ways reflects "reality" to them when it may not be. I love the fact that your son was aware that there were different versions of reality being presented. I was wondering, and I'm going to move beyond this, but were they okay with that? Were they resigned to that? In some of the literature, they sometimes call this fatigue, technology fatigue. You just get used to it; this is the normal.

But then you talked briefly about integrating this in K-12 and beyond, and what does it mean for policy? We have a number of questions to that effect. I'll pause to see if you want to comment on any of that, and then I can go to some of the Q&A.

Mutale Nkonde:
I'll just continue very quickly, but in the case of my own children, which is an incomplete data set, they live with a parent who's always ... especially now that we all work and live together, they have access to my work more than maybe they would if I were on campus or in an office. So I think for them, they think it's interesting because people find me interesting, and I've been pretty boring up until doing this work. So they're like, "Oh, my God. That's really cool." One of their teachers followed me on Twitter, and they were like, "Mommy, I think that this is cool. What are you doing?" So there was the excitement of, in their eyes, celebrity. Totally not.

Then there was also this idea of power. They're really attracted to the idea that they can shape their world and their world isn't being shaped for them. It was more in those terms that I placed this information. I love technology; that's why I live in technology. I'm not somebody that would always be like, "Don't use it," except in cases where it's more dangerous for humans and the risk-benefit analysis doesn't come out on the side of technology. For them, it was incredibly exciting.

I think for ... certainly when I look at the uptake of interest in things like QAnon, two thousand articles, I think society generally wants to live and be comfortable and use tools that serve them. I think that we're at a point in society where we're ready to put these questions to our legislators, but we're also looking for our businesses to be better. We want to grow the good in business. If we're going to return a Democrat or a Republican, and I really don't care which, let's do it because that's what we chose. Let's not do it because that's what we were manipulated into. That, personally, makes me feel good. And unfortunately the killing of George Floyd, which I hope we would all agree was a travesty, not just for the family but for us as a nation, has brought so many good-minded white people to this conversation and brought so many of them to say, "You know? We can't be silent."

So when I come along and say, "Racism within algorithms doesn't just hurt me, it hurts us all," there is now an audience for that. That's where my work with TikTok really started out. They don't want to hurt us all. They don't want to hurt anybody.

Chris Adkins:
What a fantastic way to take us into the conversation. I'll just mention briefly that it's exciting when a child sees their mom or dad as an influencer, to use a YouTube word, and this idea that we're not just on the receiving end of our reality, but we can shape our reality. That's what an influencer is, and it's great that your children see you as that, and we're excited that you're a part of that. But then of course, how do you become an intelligent and intentional influencer, intentional in a way that it's not just for me but for all of us to elevate together, and also understanding, and this is why I work in a business school, that business can be a powerful force for good. That's why I chose to come to Notre Dame.

So we do have a question here, which is, "Okay, if you think about business as a way to sometimes shape intention, influence behavior, influence what people see and read and hear and then of course process and think about, are there some tech companies that you think are really quite good at influencing this, particularly racial literacy, through their channels or the way that they communicate, or just in their overall approach to connecting outside of the business?"

Mutale Nkonde:
So I've been really impressed with IBM this summer. IBM actually has a horrible history of race and racism, if we look at their history in the Holocaust and how they produced the punch cards for the Nazi regime, but what they did this summer is they were the first company to come out and say, "We're not going to sell facial recognition technology to police forces." So facial recognition technology is part of what we would call biometric technologies, and the way that it works, very simply, is that it's a machine learning system, which means that it learns from the data that you feed it. Typically, the data has been pictures of white men, lots of them, millions of them, lots and lots of pictures of white men. Every stripe, every type, right?

Then when it has all these pictures, it will measure the median distance between people's eyes, the circumference of their noses, the circumference of their eyes, cheekbone to ... it takes all of these various measurements of your face, and then it does what's called labeling. It will look at faces, and there will be ranges of eye circumferences, etc., that it will label men, and ranges it will label women. It will look at other things like hair length, etc.

What happened with the development of facial recognition technology, if you look into the literature, is that IBM's system had a 90% accuracy rate for white men. 90%. If you are a white man and it's looking at you, it's going to recognize you. It had a 40% recognition rate for people with darker skin. So that's not just black people. That's anybody who isn't white. That's Mediterranean whites; that's going all the way.
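
As a back-of-the-envelope illustration of what those accuracy numbers mean, here is a hedged sketch of how a per-group accuracy audit is computed from a model's predictions. The toy records below are invented to mirror the roughly 90% versus 40% disparity Nkonde cites; a real audit would run against an actual benchmark data set.

```python
from collections import defaultdict

def per_group_accuracy(records):
    """records: iterable of (group, predicted_id, true_id) tuples."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, predicted, true in records:
        total[group] += 1
        correct[group] += int(predicted == true)
    return {g: correct[g] / total[g] for g in total}

# Toy data: 9 of 10 matches correct for one group, 4 of 10 for the other,
# mirroring the disparity cited in the talk (not real benchmark output).
records = (
    [("white_men", i, i) for i in range(9)] + [("white_men", 99, 0)]
    + [("darker_skin", i, i) for i in range(4)]
    + [("darker_skin", 99, j) for j in range(6)]
)
print(per_group_accuracy(records))  # {'white_men': 0.9, 'darker_skin': 0.4}
```

Aggregate accuracy over both groups here would be 65%, which is the point of auditing per group: a single headline number hides exactly the disparity that matters.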

These systems were being used for crime prevention, so when it came down to recognizing which black person did it ... like, if it was between white people, great. But which black person did the crime, they didn't know. There was a case in Detroit, Michigan in June where a facial recognition system placed a black man at the scene of a crime in a place that he hadn't been in four years. He literally could prove he hadn't been there in four years, and at the time the crime took place, he was actually doing a Facebook Live because he was picking up his daughter from school. So he could place himself somewhere else using electronic records.

So when IBM came out and said, "We're not going to sell this to police forces," this was after nearly ten years of research and development with the NYPD, during which they had been involved in all kinds of egregious practices around facial recognition, but 2020 was the time that they said, "Stop." And that was really important for a number of reasons. IBM was not a big player in the market, so they were criticized about that, but what they did was send a letter to the Congressional Black Caucus, at the time the Congressional Black Caucus was thinking about developing the Justice in Policing Act, and they were able, through their influence and their credibility in the market, to explain not just how facial recognition was developed and why it had an anti-black bias, but also why police use of facial recognition really didn't achieve the goal of correct identification of suspects.

That led to Microsoft making a similar pledge, and Amazon making a similar pledge. Now we do have the more prolific companies like Clearview AI and others that are still in that marketplace and are problematic, but that was a really good example where the influence of business was used, in my opinion, in a way that opened the door for a much larger ethical discussion.

Chris Adkins:
That's a great example, and I know that IBM has recently connected with Notre Dame in support of our Tech Ethics Center. I think you highlighted something important there, which is that even if IBM's not necessarily a big player in the facial recognition space, people are watching what they do and taking cues from them, and the others responded. I also think, in particular, about this idea that we look at tech almost agnostically. It can be helpful. It can be harmful. When you're trying to use it for something good but it's actually not getting you there, you say, "Okay, we're going to put the brakes on this until we figure it out." I think that's so much a part of what we're trying to think about at Notre Dame in terms of the ethical use of technology, because we want those ethical ends. Technology can be a powerful force to get there, but it can also be pretty powerful in leading us away from that destination.

So just to build off of that, to get really practical ... this comes from one of our Q&As here. Are there particular tools for combating the type of interference you might see from, say, those who are trying to manipulate, whether it's Russia or others, in terms of dealing with disinformation and propaganda? I know this is an age-old challenge, historically: propaganda, now it's called disinformation. But we'd love to hear what you think in terms of the technology solutions that might be there.

Mutale Nkonde:
So that's really ... prior to coming on, I was in a meeting about this particular project, and there are machine learning technologies that we can use. The problem is, machine learning technologies are only as good as the data sets that they have, and we personally did not think that we had a large enough data set for what we're trying to do. Please forgive me for being so vague. It's just that part of disinformation is that once you name the people doing it, you actually amplify it further, which I've done slightly in this talk by discussing the campaign, but I'm hoping that the people listening won't take it up ... I hope I've done a good enough job of saying that we should vote for who we want to vote for and not who we see on the internet.

So the question, and this is what I wrote about in my Harvard Business Review article, is that the technology itself is agnostic, totally. Machine learning is completely agnostic. What humans feed into it isn't. And so for something like disinformation, where we have a hundred-year history of initially propaganda ... what differentiates propaganda from disinformation is that propaganda is the intentional use of information to influence. Disinformation is the intentional use of false information to influence. It's that false information that creates this distinction. We decided in our team meeting that the technology we had would not be sufficient now, but that if we could build a larger data set, maybe we could think about this. And because this is a long-term strategy, black disinformation alone spans a hundred years, and I haven't talked about other forms of disinformation, we feel that as we carry on we're going to be able to build the data set.

So yes, it exists. Is the data clean enough to think that we will be effective? Not for my particular project and not for anything that we've seen thus far.
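
For readers curious what "machine learning technologies that we can use" might look like in code, here is a hedged sketch of a simple bag-of-words text classifier for disinformation-style messages, using scikit-learn. The four training examples are invented placeholders; as Nkonde stresses, a model like this is only as trustworthy as the size and cleanliness of the labeled data set behind it.

```python
# A minimal sketch, not the team's actual system. The tiny training set
# below is an invented placeholder; a credible model needs a far larger,
# carefully labeled corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "leave the line for president blank and vote down ballot",   # disinfo-style
    "polls close at 8pm, find your local polling place",         # benign
    "write in a protest candidate, both parties are the same",   # disinfo-style
    "volunteers needed to drive voters to the polls saturday",   # benign
]
train_labels = [1, 0, 1, 0]  # 1 = disinformation-style, 0 = benign

# TF-IDF over unigrams and bigrams feeding a logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Score a new message; with so few examples this is illustrative only.
print(model.predict_proba(["no biden no trump, vote down ballot"])[0][1])
```

The design choice she describes, declining to deploy until the labeled data set is large and clean enough, maps directly onto this sketch: with four training examples the probability it prints means nothing, which is exactly her point.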

Chris Adkins:
Thank you for that important distinction between propaganda and disinformation, both of which are concerning, because we all know there are people with levers to shape what we attend to, and of course there's all sorts of research on the ways you can influence and shape with what you show people. But I'm also wondering, and this comes again from the Q&A and I'm translating a bit: obviously Russia has played a role recently and in the past, and that was really fascinating to hear. What other countries are involved, or where else do you see interference? On the flip side, who else might be trying to influence the United States population, and vice versa, are they targeting other countries? And if I add a third piece: what might we do to catch those sorts of influences, if at all, because we may not be able to see them, they're so subtle.

Mutale Nkonde:
So, first part of the question: which are the countries? Iran, China, and, oh my God, there's another one, and it will come to me probably while I'm talking. We've actually been looking into this very closely and hopefully ... yeah, I'm not going to share that. There are many secrets, but there is more to this. We have an article under peer review which will talk about some of the other countries, but Iran and China have been very active in wanting to change the balance of power, because I think what's embedded within "the other countries" is the countries that would really benefit from the United States not being a world leader, and we are definitely in an AI race and a technology race with China. I would suggest that ... I recently became an American. I keep telling people I took the oath of office, and I'm sure I didn't, but I did take an oath. I'm very proud to be an American, and one of the reasons is that I do believe in American values and American democracy. I don't know that I want to live in a world that is led by any country that has well-publicized humanitarian issues, specifically around technology, and that certainly would be China, if we look at facial recognition with the [inaudible 00:46:39].

The second part of the question was whether it is still happening, I believe, and yes, it is absolutely still happening, and it will always be a part of our elections. It has actually always been a part of our elections, but we now have domestic influences. The campaign that I'm following seemingly looks domestic, but the further we take the investigation and the larger the data set grows, we are now seeing evidence that they are being influenced by Russia. What Russia has always done, and we saw this in the news the other day, is set up these seemingly international-looking organizations. If we look back to the Cold War era, when the Russians were really attacking the Catholic church, it was the World Churches Foundation, I believe, which was in Paris. That was a Russian organization based in Paris, with Parisians working there, but all the money was coming from the Soviets, and it was an attack on the Catholic church.

The IRA was not called the Russian Internet Troll Farm. It was called the Internet Research Agency. We just saw that Facebook identified a disinformation ring that was using Peace News: U.S. journalists were being hired by what they thought was an American news agency, but it was actually funded through Russian money, and they were spreading disinformation. The saddest thing about that last example is that they only paid those American journalists $200. Because people are out of work, because people are furloughed, because people can't make rent, we are so vulnerable economically that maybe we would take $200 and not think about the cost and just do our jobs.

Then I think the third part of that question was around what we do about it. Well, we stay vigilant. We figure out what the pieces are. My project is looking at one particular message, in one particular election, focused amongst black people, but this is happening across the board. I don't know if you noticed that there is now, suddenly, out of nowhere, a black man's strategy to the White House that's never existed in history. This is very strange for those of us interested in campaigns and political communication. We have to start questioning why that is suddenly a thing. We're seeing a lot around Islamophobia. We're seeing a lot around women, particularly white women in power, being attacked.

So just think about where the fault lines are in our society culturally, and then go and provide support there. That's something that I always say to my philanthropic partners: you really need to be funding the organizations that are doing the healing work in American society, because it's organizations like mine, which have a mission to heal and a mission to create an America that everybody can live and thrive in, that are going to be most willing to take on these types of campaigns.

Chris Adkins:
Fantastic. Thank you. We have time for one more question, I believe, before we'll have to wrap up, and I'm going to try to synthesize several of the questions. In the questions that I'm seeing, there's a tension between everyone having a voice, being able to use the tools at their disposal to influence, to target, to select, and what that means for free speech: allowing people access to influence whomever they want to influence, versus what the restrictions should be. And then of course you have business in the middle here, making decisions, like at Facebook, about what to restrict and what to allow. So it's a two-part question, and I know it's a loaded question and we're almost out of time, but is there anything you think businesses should be doing more of when it comes to managing disinformation and propaganda or achieving racial literacy? And is there anything we should be doing individually? We started our conversation with how we might educate our children, but what would help us screen a little bit better on the receiving end of whatever the businesses are sending us? So what can business be doing better? What can we be doing better?

Mutale Nkonde:
So, I think this problem is too big for the individual, and I don't think that we should become social media investigators and police. It's too big a task; we have other things to do with our time. I really think that this lands in the role of business. When I was doing Congressional work, one of the bills that I worked on was the DEEP FAKES Accountability Act, and that was an information integrity act really looking at social media companies. One of the things that we did, along with our legal team, was think about social media users as consumers as opposed to users, and once you become a consumer, legally you have the right to truthful advertising. Since social media platforms are basically online advertising agencies, this created a role where business now has the obligation, through the invocation of consumer rights law, to let us know when what we're seeing is truthful and when it isn't.

What happened there? Well, Twitter now labels the President's tweets. They fact-check. Twitter now has a prompt before you share that says, "Have you read this article? Do you want to share this?" That was Twitter's response. Facebook's response is the one that I offered you in the presentation, the ban in the week before the election, which is inadequate in my sense. The reason that you're seeing this differential response from business is the 1996 Communications Decency Act. Section 230 says that these platform companies cannot be held responsible for the content that they carry, because we have the freedom of speech.

I would offer that when we were given that First Amendment right, the assumption being made by our founding fathers was that we were going to act for the good of society. We weren't going to just wildly start speaking freely in a way that's harmful. Plus, they were really talking about government; they weren't talking about private citizens. So private citizens really should have that right.

So in my view, it's a two-pronged approach. It's a governmental approach that takes us out of that user category, out of that "it's your individual ability to see if this is right" framing, and allows us to invoke rights that we already have. Do some type of labeling, even though that in itself is problematic, knowing that we're always going to be playing whack-a-mole, because all that's going to happen is the disinformation becoming more sophisticated. And then also build this higher level of literacy, which includes racial literacy.

So within the example I gave, it's not that I want people to become race scholars and anti-racist protesters; it's just that I want them to think, when they're seeing all of this information and it's only been targeted at Muslims or black people, "That's really weird. Why would they give me information that would make me mad at black people? I just want to eat my dinner. I'm not concerned about that. That isn't helping." That level of literacy, knowing that race is a vulnerability, and a historic vulnerability, also helps the same people that marched for George Floyd, the same white people that were saying, "Not in our name. We don't want to see this." It helps them; it empowers them to say, "Well, I probably won't share that. It's none of my business," because these campaigns are only successful when they're shared. When we ignore them, no impact.

Chris Adkins:
Fantastic. Thank you so very much. We're very happy to have all of your insights and your research, and we're excited for you to be a part of this next year with Notre Dame through the Notre Dame Institute for Advanced Study, and that you're on book leave. So it was fantastic to get the insights. I'm so sorry that we couldn't get to all of the questions; we did our best to synthesize. So thank you so very much. I hope everyone will follow Mutale's work going forward, and we will be praying with you and working together to grow the good in technology and in business.
