AI is smart – can we make it ethical, too?

EPISODE 39 - Artificial intelligence is everywhere, and it’s getting smarter. It’s driving cars, screening resumes, monitoring surveillance networks and even helping doctors make medical diagnoses. How do we make sure such a powerful tool doesn’t become a threat? Tulane computer scientist Nicholas Mattei is a co-author of Computing and Technology Ethics: Engaging through Science Fiction, a book coming out this fall about the growing field of AI ethics. Mattei talks about the risks when developers don’t ask the right questions and whether AI has the potential to take over if we’re not careful. 

Transcript

Speakers
Keith Brannon, director of public relations, Tulane Communications and Marketing
Nick Mattei, assistant professor of computer science, Tulane School of Science and Engineering
Siri, virtual assistant, Apple Inc.

Brannon
Welcome to On Good Authority, the podcast from Tulane University, where we bring you leading experts to talk about issues of the day and ideas that shape the world. I'm your host, Keith Brannon. When many of us think about artificial intelligence, we may think about sentient robots of science fiction, or maybe even systems like IBM's Watson that beat the best human champions at Jeopardy!. But AI is actually all around us and getting smarter by the day. The AI that most of us interact with is usually a set of complex algorithms. They feed us the most addictive posts on social media, or control customer service chatbots. They also bring to life virtual assistants like Siri and Alexa. But AI is getting much more sophisticated. It can power autonomous cars and trucks, control surveillance networks, and screen thousands of applicants for jobs and mortgages. Complex AI systems can also be used for harm. They can dream up new chemical weapons, computer viruses or realistic deepfake videos. With such immense potential comes great responsibility. How do we make sure AI is an ethical partner and not a threat? These are questions that Tulane computer scientist Nicholas Mattei ponders as an expert in the growing field of AI ethics. He has a book coming out this fall about ethics and AI and is developing a class at Tulane on the subject. Nick, welcome to On Good Authority.

Mattei
Hey, Keith. Good to be here.

Brannon
Artificial intelligence is about so much more than making sure Netflix recommends the right shows or that we're properly addicted to Instagram. How does modern life depend on AI right now?

Mattei
That's a good question. It does so much more for us. It got me to work today, because it told me where the traffic was, it predicted what my route times were going to be, and it rerouted me to get to my class on time. It covers everything that we do. I load up my Google News every day, I load up my New York Times every day, and the order that those stories appear in on the app or on the website is really driven by AI algorithms, by these recommendation system algorithms. It's almost impossible to have an interaction on the web these days that doesn't use some sort of smart recommendation or AI-type system under the hood.

Brannon
Where would you say we are in this evolution of the technology? Are we just at the beginning, or is this going to be as transformative as the internet?

Mattei
Yeah, I mean, AI is almost as old as the computer. It starts out in the 1950s, and we've had sort of regular progress since then. And there's a funny joke that AI is whatever we don't know how to do yet, because once we're able to do it, we no longer consider it AI. Right? So identifying faces in a photo used to be this very smart thing that only people could do. Well, now Facebook does it better than I do. So now it's not AI, it's just this thing that we do, right? Playing chess used to be a mark of intelligence; it used to be one of the things used to determine whether people were smart enough for admission into colleges. And once the computers started beating humans, they were like, oh, well, it's just a game, it's not that big of a deal. Right? So it's been all these little advances over time, I think, that have really changed how we think about these AI systems. Now we're sort of waking up, and we're like, oh, look at all this crazy stuff it's doing.

Brannon
So why is it important to think about ethics and AI? And is such a conversation unique to this technology, or were there similar calls for discussion around other advancements?

Mattei
We can look back at history, at how people have interacted with new technology, not just AI, but all kinds of different technologies over the years. Like the printing press was going to destroy society, because everyone was going to read all the time and they weren't going to work, right? You look at cars, and how they were going to put all of the people who make buggy whips out of business. Or you look at something even more recent, like CRISPR and some of these medical technologies that people worry a lot about. And I think one of the themes that we try to pick up on in the book, and the reason the book actually uses science fiction to talk about how we think about technology, is that there's always this reaction to new technology. And one thing that we can do is look at how old technologies affected society, and what kind of changes came from those, to learn lessons about what these new things are going to do. When we have these new technological systems, how they interface with society, I think, is the big question. And that's where a lot of these questions around ethics come from: we've got this new thing that kind of changes the way we used to think about how things were, and we've got to integrate it into our thinking going forward. That's kind of what ethics and morals are, deciding how we're going to integrate this new thing into what we're doing. And sometimes it's not even new; sometimes we're just using an old thing in a different way.

Brannon
For me, what makes AI different is that good AI is designed not to be noticed. It's in the background, making things work, right? And that's what makes it such a risk in terms of ethics. Anything that makes choices for us, or takes over our own agency, runs the risk of not having our best interests at heart. How should designers think about that kind of responsibility?

Mattei
I think that's the key tension here, right? It feels like, to some extent, we're giving up some autonomy and letting AI systems sort of take that autonomy from us. But it's not just going to take it, right? We're giving it to them. So that's partly on us, and that's partly on them. And you've got to think about what that system knows. So let's take as an example this idea of resume screening. Jobs get many, many, many resumes, and you want to screen them. Maybe you used to have a person that sat there and sorted through these things, and there was someone responsible in a classical sense. But now we use algorithms of some sort that apply some set of rules. Well, you want to think about what those rules are and how those rules are made. Any technology that is automating a decision-making process, be it screening resumes or picking what movie to watch, runs the risk of making things easy and letting us not think about what went into that decision. You know, you're not allowed to use race to screen resumes in the United States. But it turns out you can use zip code, and in the US, unfortunately, zip code is a very strong proxy for race. And what are AI systems, what are machine learning systems, really good at? They're really good at finding patterns in data. So if we show a system a bunch of data that has that strong correlation in it, it's going to pick it up. And sometimes that's a little disturbing for us as people, because it's like, oh, look, it's just learning from us. And what did it learn from us? Well, it learned that we weren't very good people. So it's all of these things together: the users of the system, the designers of the system, and the way that we regulate these systems, especially, like you said, when we're trying to offload some of that thinking and some of that responsibility.
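To make the zip-code point concrete, here is a minimal, hypothetical sketch (all data fabricated; scikit-learn used purely for illustration, not anything from Mattei's book): even when the protected attribute is never shown to the model, a model trained on historically biased outcomes can reconstruct that attribute through a correlated feature like zip code.

    # Hypothetical sketch of proxy discrimination: the protected "group"
    # attribute is never given to the model, but a correlated feature
    # (zip code) lets the model recover the historical bias anyway.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    group = rng.integers(0, 2, size=n)  # protected attribute, hidden from the model
    zip_code = np.where(rng.random(n) < 0.9, group, 1 - group)  # 90% correlated proxy
    skill = rng.normal(size=n)
    # Historical labels encode bias: at equal skill, group 0 was hired more often.
    hired = (skill + 0.8 * (group == 0) + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

    # Train only on the "neutral" features: skill and zip code.
    model = LogisticRegression().fit(np.column_stack([skill, zip_code]), hired)

    # A clearly negative zip-code coefficient shows the model is using the proxy:
    print("zip-code coefficient:", model.coef_[0][1])
    # Identical skill, different zip code, different predicted hiring chances:
    print(model.predict_proba([[0.0, 0], [0.0, 1]])[:, 1])

Nothing in the training features names race or group membership; the bias rides in on the correlation, which is exactly the pattern-finding Mattei describes.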

Brannon
How do you teach AI to be ethical?

Mattei
How do we teach people to be ethical? I think it really starts with being clear. A lot of these questions around ethics really revolve around understanding and being clear about what's at stake. No one, at the end of the day, really cares too much if Facebook misidentifies my wife as my mother-in-law. If it tags the photo wrong, the stakes are low. But if I'm using the same technology to determine whether or not someone is on a CCTV camera when they broke in somewhere, the stakes are much higher, right? The stakes of where I'm using that system, and the stakes of the outcomes of that system, are radically different. And I think sometimes it's not that we teach the AI to be ethical, it's that we teach the people who are using those systems to make decisions about when it's appropriate to rely on these things and when it's not.

Brannon
You know, when we're talking about ethics, it sounds like it's more about actually getting the designers to be ethical, or to truly think about how this technology will be used in the real world and where things could potentially go wrong. So it's almost like you're teaching people to ask the right questions?

Mattei
I love that you asked this question, because that's literally in the intro to the book: teaching people how to ask the right questions. And there's this great study that we talk about in the book that, again, comes back to this resume screening idea. There was a group, I think it was at Penn, that wanted to see if designers being more or less biased led to algorithms or systems that were more or less biased. So what they did was basically give two groups of people these implicit bias tests. They wanted to find out: do folks who show more implicit bias build systems that are more biased? And the answer is no; everybody builds about the same system. The big key is whether you give them biased data and don't remind them that the data is typically biased, that hiring patterns are oftentimes biased, that loan patterns are oftentimes biased, and don't focus their attention on what that data is and what bias is inherent in it. That's the conclusion of the study, which is a great one: reminding people that data can be biased, that they need to be on the lookout for that, and that they need to pay attention to the context in which the system will be deployed leads to better outcomes. That's one of the reasons why, teaching students, including the ones who come back from industry, I think education, thinking about those questions, and thinking about the context of what's at stake really is how we address this in a holistic and comprehensive way.

Brannon
So one of the things I do want to talk about is some real-world examples. What are some examples of when AI had an ethical lapse and how it caused real problems?

Mattei
There was a famous one that ProPublica reported on, which generated a lot of conversation, I think around 2016, that was about recidivism, and whether folks who are charged with crimes are released on bail or held. One of the places where the system went wrong was that, for white folks, it was predicting lower-risk recidivism scores than it was predicting for people of color. And some of that can be attributed to the fact that it was trained on historical data about who gets let out and who doesn't. They were trying to make the system fair, by some definition of fair, and it turned out to have more dire consequences for people of color than it had for white people.

Brannon
When did the designers get it right, when AI was ethical and saved the day?

Mattei
You know, I think sometimes the successes are invisible. The whole thing with Google now, where they're giving you traffic routes that use less gas, that are more energy efficient, that are greener. These suggestions about how to get from A to B in ways that burn less gas, and maybe don't cause congestion in neighborhoods around schools. This stuff is happening under the hood, and we're like, okay, cool, but then we go pick at these other things. So I think you see these systems having a lot of successes, a lot of things that are happening, and sometimes we just sort of take it as a given.

Brannon
We've all seen The Terminator movies where computers turn against humanity. Billionaire Elon Musk famously worried about the dangers of artificial intelligence. And recently, The Washington Post even weighed in with an op-ed saying that the danger of AI could be human annihilation. What is your biggest worry about AI?

Mattei
This is the question that I always get, right? Like, when are the Terminators coming to get us? And the joke that I usually make is that I'm more worried about the WALL·E situation than I am about The Terminator situation. Which is us building these things to the point where we just don't have to think, and we're all on this giant cruise ship going wherever, and every want we have is taken care of. To me, that's more where we're heading than The Terminator. You know, I don't know about you, but my phone still doesn't know when to turn itself on and off. So I'm not super worried about computers, or AI systems, or any of this stuff taking over.

Brannon
What happens when AI becomes smarter than us?

Mattei
They're already smarter than us in a lot of very specific ways, right? They're already smarter than us at labeling images on Facebook. They're already smarter than us in a lot of these very small domains. And so, getting back to the original question: what do we mean by smarter? Everyone's worried that they're going to take over, but they'd have to get much smarter at a whole set of general tasks, and I don't think there's any time soon where they're going to get there. They're already smarter than us, don't say it too loud, at all kinds of things. We just don't consider those things to be intelligence, right? We keep changing what we mean by intelligence.

Brannon
So, we may be safe from our Roombas plotting Armageddon. But I mean, realistically, do you see any threat on the horizon right now from AI?

Mattei
I think the threats on the horizon are similar to the ones that we've been dealing with for quite some time. You look back at the history of manufacturing in this country: we've had the robots come in, we've had a lot of streamlined manufacturing. We produce more things in this country with fewer people than we ever have. We actually grow more food in this country with fewer people than we ever have. These aren't necessarily AI technologies, though AI is part of it. These are smart systems, AI systems, being used to do small tasks that humans used to do, and then composed into large systems. So I think that's a real threat, you know, job loss, which is a big one, this disruption to certain things. But there's a lot of benefits, too, right? And I think this gets back to how we have to look at the way technologies have been integrated over time. When we first started writing things down, Socrates said, oh no, everyone's going to forget everything, right? So there are these threats or dangers, but more than threats or dangers, I think it's really going to force us to rethink how we do certain things, and how we integrate those technologies into our society as a whole. We don't think of the special responsibilities that come with the knowledge that we have, right? We need to view that responsibility as a two-way street: we need to be in a conversation with the folks who use the technology, understand what the impacts of that technology are, and take that very seriously as technology developers and designers. I think we could learn a lot from other professionalized groups about how to do that right. And until we have that, the best I can do is try to teach my students how to ask those right questions and how to take that responsibility very seriously.

Brannon
Nick, it's been a pleasure. Thank you for joining us here On Good Authority.

Mattei
Thanks a lot for having me, Keith. It's been a lot of fun.

Brannon
Hey, Siri. I have a question.

Siri
Ask away.

Brannon
Are you ethical?

Siri
I'm not a person, so I only know what I've been programmed to understand. But I was made by people that do.

Brannon
Thank you for listening to this edition of On Good Authority. For more episodes, please visit tulane.edu/on-good-authority. If you like our show, please subscribe using your favorite podcast app.

Host: Keith Brannon
Editor: Cooper Powers
Producer: Audrey Burroughs
Production team: Marianna Boyd, Keith Brannon, Audrey Burroughs, Chelsea Christopher, Faith Dawson, Roger Dunaway, Aryanna Gamble, Becca Hildner and Roman Vaulin

Listen to other episodes of On Good Authority.