The Ethics of Artificial Intelligence (AI) in the 21st Century
As artificial intelligence (AI) systems have become more powerful, they’ve been deployed to tackle an increasing range of problems.
Take computer vision. Less than a decade ago, one of the most advanced applications of computer vision algorithms was classifying handwritten digits on mail. And yet today, computer vision is being applied to everything from self-driving cars to facial recognition and cancer diagnostics.
Practically useful AI systems have now firmly moved from “what if?” territory to “what now?” territory. And as more and more of our lives are run by algorithms, an increasing number of researchers from domains outside computer science and engineering are starting to take notice. Most notable among these are philosophers, many of whom are concerned about the ethical implications of outsourcing our decision-making to machines whose reasoning we often can’t understand or even interpret.
One of the most important voices in the world of AI ethics has been that of Dr. Annette Zimmermann, a Technology & Human Rights Fellow at the Carr Center for Human Rights Policy at Harvard University, and a Lecturer in Philosophy at the University of York. Annette has focused much of her work on exploring the overlap between algorithms, society and governance, and I had the chance to sit down with her to discuss her views on bias in machine learning, algorithmic fairness, and the big picture of AI ethics.
Here are some of my favourite take-homes from our conversation:
- Machine learning problems are often framed as engineering problems, rather than philosophical ones. As a result, if they go wrong, we tend to think of technical solutions (“should we augment the dataset or try a different algorithm?”) rather than more fundamental ones (“maybe this isn’t an appropriate use case for an automated system to begin with?”). For this reason, the option to not deploy controversial systems often isn’t given as much weight as it should.
- Engineers and data scientists often don’t realize that the features they engineer, select, and feed into their models become the lens through which these models see the world. A model that’s applied to a different set of engineered features literally sees the world differently — with all of the biases that this new perspective entails. So contrary to a widely held view, bias in machine learning doesn’t just come from datasets: the very features we choose to flag as “relevant” or “useful” are a direct reflection of our beliefs and our values.
- AI systems often create the illusion of being almost clinically objective. After all, we tend to think, if a decision is automated, doesn’t that remove human subjectivity — and therefore bias — from the process? In reality, of course, the outputs from machine learning models simply reflect the decisions of the programmers who trained the algorithm, and just because those programmers aren’t directly involved in generating each of the model’s outputs doesn’t mean that their worldview and their assumptions aren’t buried implicitly in the model itself.
- As AIs get more powerful, there’s a natural temptation to apply them to predicting various forms of human behavior, from job performance to academic ability. But these applications often challenge human agency by removing our ability to defy trends in past performance. If a super accurate machine learning model predicted that you’d fail your next driving test, would you even bother to study? AI systems are increasingly undermining our sense of free will, and it seems important to question whether that’s a good thing, or whether there are certain things we might not want to predict about ourselves or others, even if our models were perfectly accurate.
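The feature-selection takeaway above can be sketched with a toy example. Everything here is hypothetical (the applicants, the scoring rules, the penalty weight); the point is only that two models seeing the same people through different feature sets can rank them differently.

```python
# Hypothetical sketch of the "features are the lens" takeaway: the same
# applicants, scored by two models that differ only in which features
# were flagged as "relevant", come out ranked in opposite orders.

applicants = [
    {"name": "A", "test_score": 85, "gap_years": 2},
    {"name": "B", "test_score": 80, "gap_years": 0},
]

def model_scores_only(a):
    # only the test score is treated as relevant
    return a["test_score"]

def model_with_gap_penalty(a):
    # treating employment gaps as "relevant" is itself a value judgment
    return a["test_score"] - 5 * a["gap_years"]

rank1 = sorted(applicants, key=model_scores_only, reverse=True)
rank2 = sorted(applicants, key=model_with_gap_penalty, reverse=True)

print([a["name"] for a in rank1])  # ['A', 'B']
print([a["name"] for a in rank2])  # ['B', 'A']
```

Deciding to penalize gap years happens before any dataset is collected or any algorithm is trained, which is exactly where bias can enter through feature choice alone.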
- 0:00 Intro
- 2:36 What is AI ethics?
- 6:08 The core problem
- 10:31 How do we want to fail?
- 12:00 Promising strategies
- 16:17 Optimization as a practice, not a goal
- 17:13 Data and exploitation
- 20:55 Human decision-making
- 24:32 Power and responsibility
- 26:34 Democratic decision-making
- 29:33 Democratizing AI and algorithmic justice
- 31:06 Systems with time domain flexibility
- 34:12 Untrustworthiness in AI
- 37:33 Long-term issues
- 40:33 Construct validity
- 45:11 UK school exams (COVID)
- 50:25 Moral philosophy around free will
- 53:32 Automation bias
- 55:44 Wrap-up
Please find below the transcript for Season 2 Episode 3:
Jeremie Harris (00:00):
Hey, everyone. Jeremie here. As you know, I am the host of the podcast and a member of the team at the SharpestMinds Data Science Mentorship program. I’m really excited about today’s episode of the podcast because it’s our first chance really to dive deep into the issue of AI ethics. AI ethics obviously is something that is in the air today. A lot of people are talking about it, certainly a lot more people than used to. It’s increasingly important as machines start to take over really more and more of our collective cognition as a species. We’re outsourcing more and more of our thinking to machines. And as we do that, it’s becoming important for us to start reflecting on how we do that and whether or not we should be doing that in certain areas.
Jeremie Harris (00:38):
The field of AI ethics emerged around questions like that. And one of the dominant voices in that field has been that of Dr. Annette Zimmermann, who is, I’m very excited to say, our guest for today’s episode of the podcast. Now technically, Annette calls herself a political philosopher, but I actually think that title sort of hides some of the intricacies of her thinking. She’s, in reality, one part an ethicist, one part a philosopher and then one part a technologist. I mean, she actually has to know an awful lot about engineering, data science and artificial intelligence in terms of how systems are deployed and how they’re built in order to be able to offer up her opinion and her insights on the field at large.
Jeremie Harris (01:16):
AI ethics really is this marriage of different subspecialties and she’s really got them all on lock. You’ll see that on full display during the conversation. I should also mention that Annette has an awful lot of experience talking about and thinking about AI ethics and related issues. She’s currently a Technology and Human Rights Fellow at the Carr Center for Human Rights at Harvard University. In addition to that, she’s also a lecturer at the University of York. She’s really seen the stuff at the forefront of academe. She’s also done applied research and written a whole bunch about this topic in a whole bunch of different popular publications. You can definitely check those out. We’ll be linking those in the description of the video as well as in the blog post that will come with the podcast.
Jeremie Harris (01:53):
I hope you enjoy listening to the episode as much as I did recording it. This is definitely one of those episodes that I’d love to do more of in the future. We’ll be having more people on to discuss things like AI bias and AI ethics for sure going forward. In the meantime, enjoy the show.
Jeremie Harris (02:06):
Hi, Annette. Thanks so much for joining us for the podcast.
Dr. Annette Zimmermann (02:10):
Hi, Jeremie. Thanks so much for having me on.
Jeremie Harris (02:12):
I’m really excited to have you on. I think it’s fair to say you’re one of the guests I’ve been most excited to talk to because of the breadth of your interest and just how focused you are on this idea of AI ethics. I think AI ethics is obviously getting more and more important. I think it’s already incredibly important, but I also don’t think that many people, myself included, have a complete sense of what AI ethics really is. So maybe that’s a good place to start. What is AI ethics?
Dr. Annette Zimmermann (02:36):
I think you’re absolutely right to diagnose a kind of state of confusion in public debate around this topic at the moment. I think a lot of people disagree about what AI ethics really is and whether it’s useful in any way. From where I stand, which is the perspective of a trained philosopher, I would view ethics as the kind of discipline that tries to identify how society should work. So how should we distribute power? How should we arrange our social and political institutions? How ought we to act when we engage with other people? What do we owe to other people as a matter of morality?
Dr. Annette Zimmermann (03:18):
Now of course, that’s a very particular way of framing ethics. There are a lot of other people who do AI ethics but who basically view ethics as something like a legal compliance framework or something like a corporate aspiration, a corporate articulation of values. So a lot of big tech companies are putting out AI ethics principles at the moment and very often they also take the form of a statement that says, you ought to do X, Y, Z. So for instance, “do no harm.” That’s relevantly similar to what a philosopher might say, but it’s often much less focused on a kind of noncommercial motive. Obviously, big tech has a commercial incentive to articulate these principles.
Dr. Annette Zimmermann (04:09):
I do think that very often they are articulated in good faith, but I think we need to think critically about them because very often they’re nonbinding principles. So it’s very easy to put out a kind of voluntary statement of what we would like to do ideally. But if there’s no accountability mechanism in place and if you’re the one who can set the agenda because you have a lot of computational power behind you and a lot of resources, then the question is well, what does AI ethics really amount to? I think that really explains why a lot of people are quite skeptical of AI ethics because it just seems really toothless and kind of voluntaristic.
Dr. Annette Zimmermann (04:48):
Just to add to that, I think that skepticism is right, but I don’t think we should therefore conclude that we should throw AI ethics out of the window. I do think that AI ethics properly construed can be a good tool to reason our way through what we ought to do not only as members of big tech corporations but also as members of society at large. So how should I as an ordinary citizen think about the ways in which we use AI and ML in really high stakes decisions in many different domains given that AI is now replacing a lot of human decision makers in our public institutions, for instance. So thinking about those questions morally and politically, which is what AI ethics is good for and is supposed to do, that I think is really valuable. So we shouldn’t ditch that effort.
Jeremie Harris (05:39):
I find that fascinating partly because of what it revealed about my priors. When I think about AI ethics, I think I’m joined here by probably the majority of people. I do think of these, like you said, these big ethics statements put out by Google and Facebook and so on. And I guess to some degree that also reflects the asymmetry in terms of both knowledge and resources that these companies have to even experiment with these technologies. I mean it feels like there’s a sense in which governments are playing catch up on a regular basis and-
Jeremie Harris (06:08):
… all the cutting edge technologies being developed in house at big tech firms, which therefore can lay claim to being at the edge of not just the technology but also the ethics. Is there something in your research focus or some thinking you’ve done around the question of how do we bring social decision making at a broader level, a higher level of abstraction maybe, that isn’t confined to just companies and enterprises? How do we get that into the game if the resources are a constraint and the knowledge is too?
Dr. Annette Zimmermann (06:39):
Yeah. I think you’re absolutely right to diagnose the core problem that we’re facing at the moment. Right now there aren’t really many incentives for really talented ML programmers to work in the public sector or to stay in research. Obviously, there are a lot of incentives in favor of going into the corporate domain, and so for that reason it is true that a lot of talent is in private industry at the moment. The question is should that mean that big tech corporations get to set the ethical agenda on top of setting the technological agenda?
Dr. Annette Zimmermann (07:13):
Agenda setting, of course, is a kind of key democratic problem, right? So democratic decision making isn’t only about making actual choices. It’s also about deciding together what we should be thinking about in the first place. So which kinds of solutions are we even considering in our decision making? That’s where I think decision making on AI should and could feasibly trickle down into normal democratic discourse.
Dr. Annette Zimmermann (07:42):
Of course, a lot of people in government don’t understand much about emerging technologies, and I think that explains a kind of widely shared reluctance to put these kinds of AI-related issues on democratic agendas. But at the end of the day if you think about the ethical and political space of using many of these kinds of tools, ultimately they’re not purely technical and you can explain them to ordinary citizens in ways that aren’t reliant on having done an applied mathematics or applied statistics degree.
Dr. Annette Zimmermann (08:16):
For instance, if we think about a system like COMPAS, a criminal recidivism risk scoring tool, that obviously attracted a lot of controversy because investigative journalists at ProPublica found that it exhibited massive racial disparities, failing worse for Black defendants. That seems like a clear ethical and political problem. If you explain that to people, you might say, “Well, one policy goal we might have in this area is to just improve this algorithm. So we could try to optimize it to get rid of this kind of racial disparity.” So the hope is that will make that system more fair.
Dr. Annette Zimmermann (08:53):
That’s something that you can explain to an ordinary citizen, who might then follow that up, of course, with the statement, “Well, computer scientists have actually shown us that optimizing that kind of algorithm perfectly is very, very hard/impossible.” A lot of computer scientists have articulated impossibility theorems showing that you can’t satisfy many different kinds of fairness criteria, each individually plausible, at the same time. So we can’t be perfect when we mitigate and optimize algorithmic systems.
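One instance of the conflict Annette describes can be made concrete with a minimal, hypothetical sketch: when two groups have different base rates, even a perfect classifier violates demographic parity (equal rates of positive predictions across groups), one of the individually plausible fairness criteria involved in these impossibility results.

```python
# Hypothetical numbers: group A has a 30% base rate of positive
# outcomes, group B a 60% base rate. A PERFECT classifier (predicting
# exactly the true label) still produces unequal positive-prediction
# rates, so demographic parity fails despite 100% accuracy.

group_a = [1] * 30 + [0] * 70   # true labels, base rate 0.30
group_b = [1] * 60 + [0] * 40   # true labels, base rate 0.60

# a perfect classifier reproduces the true labels
pred_a, pred_b = list(group_a), list(group_b)

rate_a = sum(pred_a) / len(pred_a)
rate_b = sum(pred_b) / len(pred_b)

print(rate_a, rate_b)  # 0.3 0.6 -- unequal, so parity is violated
```

To equalize the prediction rates, the classifier would have to make errors in at least one group, which is the shape of the trade-off the impossibility theorems formalize.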
Jeremie Harris (09:27):
Right. This has always struck me as a feature of machine learning that starts to force us to really confront our philosophical lacunae, I mean the areas where we’re lacking in philosophy. Machine learning in some sense really forces us to codify in equations, in really concrete terms, what our moral philosophy is in a context where we really don’t know. You just alluded to these no-go theorems, impossibility theorems. You can’t have privacy at the same time as you have a certain level of performance or whatever, and these things are in conflict. I mean there seem to be similar issues even just with the concept of, for example, democracy, where we have not just things like tyranny of the masses or whatever but other more fundamental constraints like Arrow’s impossibility theorem.
Jeremie Harris (10:13):
There are no-go theorems in terms of how well you can generalize knowledge from individuals into collectives. Do you see these as being along the same continuum? Is this really one big problem, or is there a sense in which we can actually divide meaningfully between the machine learning part of the problem and then the social part of the problem?
Dr. Annette Zimmermann (10:31):
I think a lot of these questions are part of a structurally similar problem. So I think when people think about AI, they’re really hyper-focused on identifying really unique features of AI because obviously people are really excited about AI and ML at the moment. But as you just suggested, very many collective decision making scenarios are actually structured in a similar way where it becomes very hard to optimize fully. So then the question for us as democratic citizens is well, if we know that we’ll fail to some extent, how do we want to fail?
Dr. Annette Zimmermann (11:08):
Do we want that failure to disproportionately burden people who are already disadvantaged, or can we find a different way of dealing with imperfection and harm and uncertainty? I think the worst possible case, which unfortunately has been realized in many domains recently, is that in fact imperfections and harms and injustices compound for exactly those people who have already been at the receiving end of those injustices and harms. That seems fundamentally antithetical to democratic values. So if we really care about equal freedom for all, then that’s the one case that we should absolutely avoid.
Dr. Annette Zimmermann (11:49):
As you say, we find that in many other domains of democratic decision making that actually don’t have anything to do with AI. As soon as we’re dealing with social complexity, we’re going to find ourselves in that kind of problem space.
Jeremie Harris (12:00):
I guess to some degree this almost reflects the fact that human beings themselves are machine learning agents in some meaningful sense. I mean we have a neural network. We have some kind of reinforcement learning circuitry. Yeah. What are some of the more, I don’t want to say the more concrete side, but what are some of the strategies that you see as most promising in terms of dealing with some of these problems where we start to really entrench current norms, whatever they may be, in our algorithms?
Dr. Annette Zimmermann (12:27):
Well, there is one very concrete but very controversial strategy, which is non-deployment in targeted domains. A lot of people recently have been focused on non-deployment of facial recognition technology in particular, again because of the racial disparate impact of that technology. Actually, a lot of big tech corporations have now said, “Okay. We’re going to have a moratorium here. We’re not going to deploy these tools for a set amount of time.” So IBM, Amazon, but also many others have responded to pressure by researchers like Joy Buolamwini and others. So clearly, there’s a kind of public awareness that actually limited and targeted non-deployment may be the best way to deal with these kinds of problems until we have a better technological solution to attack these problems.
Dr. Annette Zimmermann (13:21):
Now, whether there always will be a feasible technological solution, that I think really depends on the domain of deployment and really depends on our goals. Think back to that other case that I mentioned before, that COMPAS case. If you look at that case, you might think, “Well, why are we predicting recidivism risk rates for people in the first place? Is that really the best way of ensuring that our criminal justice system works optimally and what would that mean?” I guess another alternative we could put on our agenda or democratic decision making agenda would be to say, “Well, couldn’t we transform prisons? Couldn’t we have institutional and social and structural and legal changes, changing sentencing guidelines, changing mandatory minimums?”
Dr. Annette Zimmermann (14:06):
All of these are possible solutions. Not all of them will be algorithmic. Some of them might be, but I think we need to get a really, really clear sense of the solution space in its entirety and then contextualize AI approaches within that. That I think will help us identify where we should hold off on deploying something maybe until we’ve tested technology and made it more reliable, or maybe as soon as we’ve deployed other social and institutional mechanisms, then it might be safer to deploy in that domain. But I think we really need to be attuned to this interplay between the social world and the technological world because those will necessarily interact. So that should guide our decision making about things like non-deployment.
Jeremie Harris (14:55):
Yeah. And you really see that temptation to default to deploying when you have a problem, especially one that feels concrete, right? As an engineer, you can look at this problem, say, “Oh, there’s a loss function I can define. Why don’t I just train and treat this as a competition and go ahead and deploy?” Whereas yeah, this idea that actually you’re dealing with a very complex system. The different parts interact. You can’t really solve a subproblem without invoking other problems. Yeah, it’s really interesting, the non-deployment as its own option really in this whole situation.
Dr. Annette Zimmermann (15:26):
Right. I think it’s really important not to fall into an optimization trap. It’s often funny for philosophers to engage with tech practitioners on this issue because in philosophy there is this well-known slogan, “ought implies can.” It basically means that I can’t impose a moral duty on you if it’s just impossible for you to meet that duty. That would be overly demanding, and so that wouldn’t be ethical. But in the tech practitioner space, a lot of people turn that on its head. So they say, “can implies ought.” So if I can do something, I should innovate. I should put it out there.
Dr. Annette Zimmermann (16:05):
As you rightly say, there is a presumption in favor of putting things out there and seeing if they work rather than thinking about, do we want this? What’s our purpose with using this tool in the first place?
Jeremie Harris (16:17):
You’ve spoken as well about this idea of optimization as a practice rather than a goal, and as not necessarily being intrinsically desirable. Could you speak to that a little bit more? What’s your thinking on that?
Dr. Annette Zimmermann (16:31):
Right. I think very often we view optimization as making incremental improvements, and so that’s a really reasonable ad hoc view, right? So if you think about how can I improve my daily habits, well, I’ll do five minutes of this and maybe I’ll establish a routine of exercising daily or something. Over time, I’ll make these incremental improvements. In life in general, I think that’s a good principle. The problem is that very often in a machine learning space when we make incremental improvements without asking, should we be optimizing in that direction in the first place, it can actually drive us further away from justice.
Dr. Annette Zimmermann (17:13):
Just to give you a very concrete example, think back to that facial recognition case that we just discussed. Before companies put out these non-deployment moratoriums, they actually had a different approach. They said, “Well, we’re just going to improve this technology.” How do you do that? Well, you need more data. And in particular, in order to tackle the racial injustice problem that came to the forefront with these technologies, people in these corporations said, “Well, we need more data about black and brown faces.” Unfortunately, very often that process of gaining more data is very exploitative.
Dr. Annette Zimmermann (17:51):
So a Google subcontractor, for instance, went to the streets in LA and said to homeless people, “You know, let’s play a game. We have the selfie game. We’ll just record your face and you get a five dollar voucher.” They didn’t tell these people that their faces were being used as biometric data for facial recognition tools. So the community that is already quite vulnerable and probably will be at the receiving end of further injustices once this tool gets scaled up, that’s exactly the community that was being used to optimize.
Dr. Annette Zimmermann (18:26):
I think that’s a really good indicator that our effort of optimization puts us on a path that isn’t really oriented towards greater justice. Because if the people who have to pay for that optimization or who have to provide some sort of services for optimization, if they are not actually taking ownership of that process and if they’re not being informed what is happening, then I think we’re really on the wrong track. That’s a way in which incremental improvements actually end up burdening people further.
Jeremie Harris (18:59):
I think this is one of the most fascinating aspects of this whole conversation: this debate between incrementalism and some sort of almost first-principles rethinking of the entire approach that we’re taking to social structure and so on. To some degree, the political philosopher in me, this makes me think of the distinction between two famous political philosophers. There’s Edmund Burke and there’s Thomas Paine. Burke is this classical conservative who says, “Incrementalism is really the solution. We have decent structures that have evolved through time through a combination of evolution and economics. We’re now in a good place and we should respect that and not risk it.”
Jeremie Harris (19:37):
Whereas Thomas Paine says, “Well, we… “ I think his words were like, “We have it in our power to begin the world anew,” or something like that, really rethinking the entire structure from first principles. Maybe Burke thinking a little bit more like an optimization engine. Thomas Paine thinking more like a physicist, come up with the equation from scratch. Let’s redo this and make a beautiful system. Is there the possibility to take that Burkeian position, take that gradualist position one step further and say, look, so if they go out to the homeless people and they do this, obviously that’s a huge mistake.
Jeremie Harris (20:08):
What if they then take that information, iterate, try again? Is it possible that we just haven’t tried that long enough? Is that a plausible sort of counter-position to this idea?
Dr. Annette Zimmermann (20:19):
I think it again really depends on the domains. I’m a big fan of iterative decision making and critique approaches. I think very often that is really, really helpful and needed mainly because a lot of technological problems actually only surface over time. So in many different domains I can start out with a machine learning system that actually exhibits no bias, that is just completely unobjectionable on the bias front, but tiny incremental changes compound over time and end up still giving us massive disparities.
Dr. Annette Zimmermann (20:55):
We know a similar phenomenon from human decision making, actually. Think about the sociological phenomenon of belief polarization. You can give two people exactly the same amount of data, the same amount of evidence, and depending on what their priors are, they’re going to end up with really dramatically different views. It’s kind of similar with machine learning systems. Contingencies in the social world, just random happenstances, may create a path dependency in a system that isn’t entirely foreseeable and that we have to mitigate sequentially. That’s why sequential intervention can be a really, really helpful tool for improving systems in a dynamic way. That would then again bolster the case against a pure first principles approach.
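The compounding dynamic Annette describes can be sketched in a toy feedback-loop simulation. All quantities here are hypothetical: two areas with identical true incident rates, where attention is reallocated super-proportionally to where incidents were recorded before, so a small imbalance in the historical record amplifies over time instead of washing out.

```python
# Toy sketch of path dependency in a deployed system (all numbers
# hypothetical, not any real system). Two areas have IDENTICAL true
# incident rates. Incidents are only recorded where attention is
# directed, and attention is reallocated super-proportionally
# (squared weights) to past records -- so a tiny initial imbalance
# in the historical data compounds into a massive disparity.

true_rate = 0.1
patrols = [50.0, 50.0]      # attention allocated per round
recorded = [5.0, 6.0]       # slightly imbalanced historical records

for _ in range(200):
    for i in (0, 1):
        # incidents are only recorded where attention is directed
        recorded[i] += patrols[i] * true_rate
    # super-proportional reallocation based on cumulative records
    w = [recorded[0] ** 2, recorded[1] ** 2]
    patrols = [100 * w[i] / (w[0] + w[1]) for i in (0, 1)]

print([round(p, 1) for p in patrols])  # heavily skewed toward area 1
```

With strictly proportional reallocation the initial imbalance merely persists; the squared weights are what make it amplify, which is one way a system with no bias "in the data" at the start can still drift toward massive disparities.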
Dr. Annette Zimmermann (21:45):
That being said, I think we do need some sort of reflection on what our overarching goal should be. That can be a flexible articulation, but I think if we just say, “We’re going to be gradualists all the way down,” then we might lose track of why we started trying to optimize something in the first place.
Dr. Annette Zimmermann (22:09):
That kind of path dependency is something that I think we should be worried about. There needs to be an approach that reconciles both of these ends of a spectrum with each other, and I think the way to do that is to have these first principle-based conversations about what the goal definition should be but then to iterate sequentially and be ready to change our goalposts if that is actually proving necessary.
Jeremie Harris (22:34):
That’s a good point. That’s a conversation that’s not even really happening at this point, or at least it doesn’t feel like it’s particularly happening outside of, I guess, pretty narrow circles like OpenAI, GovAI, and policy think tanks that are thinking about these optimization processes. What are some of the big discussion points in that ecosystem as people step back and ask themselves, what should we be building? Is that something that you’ve looked at?
Dr. Annette Zimmermann (23:00):
Yeah. I mean, I think a lot of people who are tech practitioners are really interested in questions of personal responsibility. One issue that I focused on recently was this Facebook employee who released the 6,000-word memo. I’m not sure if you followed the story, but essentially there was a person employed by Facebook who became increasingly concerned that she had a huge amount of professional responsibility. And according to her, she would see geopolitical responses to minute decisions that she would make as part of her function at Facebook. This became so disconcerting to her that she tried to raise this within the company but was met with backlash.
Dr. Annette Zimmermann (23:47):
The problem that she seemed to face was one where she wasn’t able to adapt her behavior in a way to prevent these bad outcomes, so she as an individual was unable to prevent harm, right, just because there was a surplus of power. And even if she had rejected that power, she would’ve still been part of a really, really bad process, which is why she was then critiquing the company and ultimately got fired for that. That was a less encouraging example of ethical deliberation happening among practitioners, but I did find it very interesting to see somebody articulate this very specific worry that one person might have too much power, right?
Dr. Annette Zimmermann (24:32):
Because we often think about an increase in responsibility as an honor or as an opportunity to do something really good, but I think the really necessary and really useful perspective that this person was bringing into the debate was, well, is there maybe a kind of sufficient degree of responsibility that one person shouldn’t exceed? Again, I think that might ultimately be a democratic concern, right?
Dr. Annette Zimmermann (24:58):
Obviously, if we elect somebody into a position of massive power, then that person has to report back to us, technically at least, on whether things go well and why they do certain things. So they have to put themselves through the motions of justifying to us and explaining and rationalizing and arguing, which is something that that Facebook engineer will never do with society at large. I mean even if they try to get that process going within the company, it’s not going to happen on the same kind of democratic scale. So I think intuitively that worries people who want to do well with their work.
Jeremie Harris (25:34):
Right, yeah. I mean to some degree, this harks back to the conversation we were having earlier about democratic systems and their interaction with some of this technology. I guess to some degree one of the problems is that the democratic process is just really, really slow. And as the rate of technological development increases, there may not actually be enough time for meaningful feedback, democratically speaking, to circulate back to the decision makers. That tends to pile on and get away from us.
Dr. Annette Zimmermann (26:03):
Right, yeah. I do think that it’s a huge obstacle for AI policy in particular that democratic institutions are for good reason designed to be robust, which slows them down. So there’s a clear trade off there. On the one hand, we want solid checks and balances, but on the other hand we want responsiveness in our democratic institutions. Those can come apart exactly in these domains where we have a rapidly changing decision landscape.
Dr. Annette Zimmermann (26:34):
I would add here that democratic decision making doesn’t always have to mean electoral politics that involves the entire democratic constituency. I think that’s maybe the most obvious form of democratic decision making but maybe not even the most important one. A lot of people are focused on domain experts in AI, for instance. And I think the choice about what constitutes expertise in this area might be really contestable.
Dr. Annette Zimmermann (27:05):
Very often when we think about medical innovations, we might be tempted to ask doctors, but we could also ask chronically ill people, right?
Dr. Annette Zimmermann (27:14):
It’s not necessarily obvious which one of these choices might be better or if a combination might actually yield more insightful results. Similarly, when we talk about AI deployed in criminal justice and policing contexts or immigration enforcement contexts and we only have people in the room who serve on a police force, I think we’re going to miss out on a really important perspective there. I think we need to rethink who can be an expert in these discussions and whose voice we are valuing when we’re trying to make really fast decisions. It doesn’t have to be comprehensive, but it does have to be a kind of reasoned and egalitarian thinking that helps us select these people.
Jeremie Harris (27:59):
Just in the interest of playing devil’s advocate on the democracy side of things: one thing that comes to mind, especially as we start to see developments like increasingly sophisticated language models, GPT-3 and so on, is that when you look at the people who are currently in the AI ethics, AI alignment especially, but broadly the AI safety and AI policy ecosystem, I’ve always been struck in my conversations by the almost absurd quality of these people, their integrity and their depth of thought. In which case, to the extent that that’s true, it sparks the thought in me that it may not be immediately desirable to actually open this up to the full array, to put it nicely, of human behavior and pathology that we see on full display on things like Twitter, where people get into it.
Jeremie Harris (28:53):
I mean, I can’t imagine what those arguments might look like if they had to do with the performance of an algorithm that determined whether or not a person got a bank loan or something. I mean, it kind of freaks me out to think about that. I don’t know. Is that something that you’ve been thinking about, the extent to which closed systems that are sort of high-trust and high-specialization might have trade-offs with more democratic, open systems?
Dr. Annette Zimmermann (29:16):
Yeah, absolutely. That’s a really, really central problem in political philosophy and also just in moral philosophy. I think the problem that you’re getting at here is justice and democracy could be orthogonal to each other, right?
Jeremie Harris (29:33):
Dr. Annette Zimmermann (29:33):
I could have a democratic constituency that is just really anti-egalitarian or really divided, really hostile, or just doesn’t really operate in good faith with each other anymore. So that could taint democratic decision making in a way that actually doesn’t care about justice and equality at all anymore and that might also not care about decision quality at all. It could just end up with really terrible decision making that isn’t grounded in any sort of facts anymore, and so that could be massively unjust and massively harmful for a lot of people.
Dr. Annette Zimmermann (30:11):
I don’t think we should conclude that democratizing AI will necessarily get rid of problems like algorithmic injustice. I certainly don’t think that’s true. I think a whole lot more is required to establish just forms of AI and machine learning or otherwise ethically defensible forms of AI and machine learning.
Jeremie Harris (30:32):
And how mutable… Oh, sorry.
Dr. Annette Zimmermann (30:34):
Oh, that’s okay.
Jeremie Harris (30:35):
I was, sorry, just going to ask your thoughts on the mutability of these moral frameworks. There’s a sense in which we design algorithms, and by designing them, and certainly by deploying them, we do start, as you’ve said, to enshrine some of these social norms, whether we do it with the awareness that that’s what’s happening or just implicitly. But then over time, obviously, we want our morality to shift. I mean, 50 years ago interracial marriage was controversial. Certainly, the way we treated homosexuality and so on has just been completely revolutionized.
Jeremie Harris (31:06):
The idea that today we’ve landed on a set of satisfactory moral norms that’s going to be static for all time is probably pretty shaky. Any ideas about how we might design systems that would have that kind of time domain flexibility?
Dr. Annette Zimmermann (31:21):
Yeah. I mean, I think the first step to doing that would be to acknowledge that any choice we make is ethically and politically shaped in some way. A lot of the time I think we want to think of ourselves as making a totally ethically neutral decision, a purely technological decision. So for instance, when we decide exactly which features to select and measure when we design a system, on the face of it that looks really neutral and objective, because we’re just trying to represent facts and we’re trying to abstract from those facts in order to come up with a generalizable decision rule.
Dr. Annette Zimmermann (31:57):
At the end of the day, I can make many different choices about exactly what to look at. So depending on what sort of story I want my model to tell, I’m going to get a kind of reinforcement effect from that. If I decide to measure things like arrest rates but actually I’m interested in coming up with a model that tells me a story about crime, I’m going to have a really imperfect story. And that story might drift further and further from reality the less I acknowledge that I am actually working with an imperfect approximation in the data.
Dr. Annette Zimmermann (32:32):
The antidote I think to that is to always realize the degree of uncertainty that we’re working with and the degree of choice dependency that we have in our design processes. I think we shouldn’t say, “Here’s our static principle. Let’s try to get as close to that as possible. One time we’re going to measure something once, and then we’re just going to see what happens.” No, we have to return to that and we have to ask, hang on, did we actually measure the right thing? Did we actually come up with the right decision rule? Why did we adopt that value in the first place?
Dr. Annette Zimmermann (33:06):
It’s not good to have a kind of shopping list in mind of values that we’re trying to reach by making these ostensibly objective design decisions. I think we have to always be flexible, and that flexibility includes questioning our prior assumptions and our prior choices. As defensible as they might have seemed at that point, we need to be ready to completely change them if it turns out that they were deeply wrong.
Jeremie Harris (33:32):
Yeah. One of the things that looking at some of your work, your writing and your YouTube videos as well, has made clear, at least to me, that I hadn’t fully appreciated yet, was the extent to which feature selection and feature engineering really amount to choosing the lens through which your algorithm will see the world. We know the effects of getting somebody to read, for example, the New York Times over the course of a year versus getting somebody to read just Fox News over the course of a year. You’re going to end up with two very different models of the world, because the dimensionality reduction, essentially the feature selection, they have done on the world to present you with nominally the same events, but from different perspectives, very often just completely changes the way you interpret things.
Jeremie Harris (34:12):
It’s kind of interesting to map that onto these algorithms and say hey, if you actually just choose to feed this algorithm the equivalent of MSNBC versus CNN, you will get a different algorithm with different conclusions.
Dr. Annette Zimmermann (34:24):
Yeah. I think that’s absolutely right, and I think that’s a really underappreciated dimension of untrustworthiness in AI. I think when you ask somebody, what would make AI trustworthy, their gut reaction is going to be something like, “Well, we want to ensure that AI doesn’t make any egregious mistakes,” so the kinds of mistakes that we’ve already discussed. So no egregious harm, no egregious injustice. But think about the kind of counterfactual to that. Think about an AI system that actually hasn’t really failed yet. It hasn’t really betrayed our trust yet.
Dr. Annette Zimmermann (35:04):
We’re kind of comfortable delegating to that system either because we have uncertainty whether it’ll perform well or because we have some form of information about it that makes us reasonably confident that that system won’t fail in certain ways. But then we still haven’t excluded the possibility that the ontology of that system might just not be a faithful representation of the world as we want to represent it, right? That system might be idealizing in ways that we find alienating and weird and somehow not apt. That could be a totally different facet of untrustworthy AI that isn’t actually that outcome dependent. It’s a more process-based concern that we might have about AI.
Dr. Annette Zimmermann (35:48):
The reason why I say it’s process based is I think we don’t only care about doing the right thing. We also care about doing the right thing for the right reason. That’s something that we would say about humans too. If you had a friend who routinely just does good things and hasn’t given you any reason to doubt their loyalty to you, but then let’s say you find out at some point that they just buy into a really, really weird reasoning structure. The whole reason why they were so nice to you as a friend was because they took pity on you. And they thought you really needed help and you can’t hack it on your own, but they feel no friendship towards you. If you find that out, you would say, “Oh god, all of the reasons why you did all of these nice things for me were terrible reasons and they’re not in the spirit of friendship.” So you really missed what was going on here.
Dr. Annette Zimmermann (36:39):
I think a similar problem can arise with AI when AI is built on a weird and alienating ontology that just doesn’t connect the right dots in the right way. That in my mind would make AI untrustworthy because you wouldn’t have guarantees that it would continue to do right things because it doesn’t do them for the right reason.
Jeremie Harris (37:03):
Because I guess there’s a countervailing philosophical position that says that the purpose of a system is what it does, so if a system does nice things to you, then it’s a nice system independent of what’s going on between its ears. I guess what you’re really getting at though as well here is that ultimately there might be emergent behaviors that might not be obvious in the short term, but then all of a sudden you realize, “Oh crap, this thing was really just trying to look nice to gain my trust so that it could do something absolutely devastating.” Is that a fair kind of assessment?
Dr. Annette Zimmermann (37:33):
Yes. It’s true that the main worry that I’m getting at here is a long-term worry. I don’t think we need to necessarily be assuming that AI in a really sophisticated form would be malicious, and I think it’s very dangerous to anthropomorphize AI. I’m not even imagining a kind of malicious AGI case here. I think these kinds of long-term problems can crop up even with narrow AI applications that themselves have weird ontologies but that obviously don’t pursue their own agendas, because they’re not conscious agents.
Jeremie Harris (38:11):
Okay, yeah. Very interesting. All right. Let me make the best steel man argument I can for some kind of objectivist framing of the AI problem. Let’s say I want to get rid of the feature selection problem because I get that selecting features and engineering my own features is biased. It’s going to reflect what I think is important and not what really is.
Jeremie Harris (38:33):
Can I push this to the limit of what really is? Could I develop a system that is aware of the positions and momenta of every atom in the very least planet earth or the solar system or something, and would that be more satisfactory? Would that at least get me past the hurdle of feature selection and feature engineering, or am I still missing something even by doing something like that?
Dr. Annette Zimmermann (38:59):
Well, it might get you around the problem of feature selection, but I think you would be dealing with a different problem, which philosophers would call the benevolent dictator problem. Again, this is a kind of thought experiment from political philosophy where you imagine: what if you had a dictator who routinely does really amazing things, a dictator who rules well? He makes sure that everybody’s treated equally and justly. So the question is, well, what’s your complaint if things go well in that way for everybody?
Dr. Annette Zimmermann (39:35):
Let’s say we have no reason to believe that that dictator will ever change their ways. We don’t need to worry that they’ll ever turn bad. But intuitively, we’re still very worried about our total lack of control. And very often I think as humans we do have that need to be able to say, “Well, I have agency in this sort of interaction.” So I think if we had this really, really sophisticated, fine-grained AI, we might have more construct validity. We would be better approximating this problem of faithful abstraction. We’re doing better on that dimension, but we’re then dealing with the benevolent dictator issue, and that might be undermining our agency in the process.
Jeremie Harris (40:24):
Right. I guess back to that idea of moral mutability as well, we want to be able to change our moral thinking and this locks us in in a certain sense.
Dr. Annette Zimmermann (40:33):
Exactly. So then there is in fact a trade-off between these two different kinds of meta-goals that we, I think, should be pursuing in AI ethics. On the one hand, we want high construct validity, but that might again be irreconcilable with a lot of agency and sequential mitigation, so that’s a major problem.
Jeremie Harris (40:54):
Actually, can you expand on that idea of construct validity? Because it was new to me before I dove into your stuff, so…
Dr. Annette Zimmermann (41:00):
Right. Construct validity is more the computer science term. Philosophers would probably say faithful abstraction or faithful representation. The idea is just that when we make models and when we articulate fundamental laws and theories, we always have to abstract in some way from the social world. A lot of people get worried about that, because I think intuitively we all know that as soon as you build a general theory, you’re going to lose some of its applicability to individual instances of real life. But that might be okay, because we want theories and models to be general. Statisticians have a neat slogan for this: most statisticians buy into the claim that all models are wrong, and that’s fine. They can still be useful.
Dr. Annette Zimmermann (41:51):
My question would be, well, how do we sort out when a model isn’t as useful anymore? Where’s that line where we diverge so much from social reality that the model actually loses explanatory power? I would also add here that very often when people think about things like construct validity, so like getting a really fine-grained picture of the real world, people forget that looking at the real world might just reproduce injustices as well. This is another important trade-off.
Dr. Annette Zimmermann (42:25):
So computer scientists have talked a lot about this. If I have a really well-calibrated algorithmic system, it’ll just regurgitate exactly the kinds of social stratifications that we already have. Again, this is why our intervention is so important, because we can choose which realities we want to represent, right? Which constructs do we want to abstract and idealize and thereby reproduce? Maintaining an awareness of the fact that there are often multiple stories that I can tell about a person or a group or a society, and selecting the right story to tell there, I think that’s really crucial.
Dr. Annette Zimmermann (43:07):
There’s a kind of standard philosophical example of this from Plato. Plato was thinking a lot about idealizations and whether idealizations correspond to reality. He gives the example of a Greek statue. In ancient Greece, statues were built kind of top heavy, so with a bigger head and bigger shoulders and so on because you look at them from below. So your ordinary perspective as a spectator requires that we actually distort the statue in order for it to give us a realistic image. It’s an idealization, but it’s not the only idealization we could come up with because, of course, we could’ve reproduced statues in exactly human proportions. But then that would’ve seemed really alienating and distorting to spectators. I think that’s a really good example of thinking about different ways of representing reality and the implications of making those decisions.
Jeremie Harris (44:07):
That’s really interesting, and it also invites, I guess, some thoughts about the idea of free will versus determinism too. You mentioned earlier, you said, “We have the choice to decide what our abstractions are, to decide what the world looks like to our model.” If these models get really good, at a certain point they’ll be able to, and in fact in many cases they already do, predict our behavior better than we could. I know Duolingo will ping me every once in a while and tell me, “It’s about time for a lesson.” And I go, “Yeah, you know what? It is about time for a lesson.” To some degree, I guess, there’s this trade-off… It definitely is uncomfortable when your behavior is predicted better than you could predict it yourself.
Jeremie Harris (44:47):
Is there a way of sort of carving reality at its joints there, deciding where it makes sense to say, “Okay, let’s treat this system as deterministic, but let’s make room for free will explicitly”? Maybe you also want to speak to an example you drew on, from an exam in the UK, I think, that was contested recently. Maybe I’ll park the thought there and let you take it from there.
Dr. Annette Zimmermann (45:11):
Right. The UK case that you mention was really, really interesting. The UK government decided, because of COVID-19, that they were just going to cancel final exams for high school students. And instead, what they were going to do was just make predictions about which final grades high school students would get. In the UK, that really determines your entire life, including which university you get to go to. Their system is that you apply way before you finish high school, and then you get an initial prediction about your grades from your teachers. If you meet those predictions with the final exam, then you actually get to take up your place at uni.
Dr. Annette Zimmermann (45:52):
In this case, unfortunately, the UK decided to use a really crude statistical model that predicted grades well for people who went to private schools with a really strong historical track record of high educational achievement. We have reliable long-term data about these private schools, and so the model worked well there. But it really didn’t work well for people who had a large range of different grades, who were kind of middle-ranked students. It also worked terribly for people in schools that used to be quite low-achievement schools but that have rapidly improved in the recent past. So de facto, the model ended up disadvantaging working-class students and students of color.
Dr. Annette Zimmermann (46:43):
So it had these massively unjust outcomes, where somebody who would’ve gotten a B suddenly failed their A-levels because of the idiosyncrasies of the model. The model didn’t really take uncertainty into account enough, and so it made really blunt predictions where allowing for flexibility would’ve been much more apt and would’ve been a better representation of educational attainment. In this case it seems really misguided to use these kinds of predictions.
Dr. Annette Zimmermann (47:13):
Even if that model had actually been very accurate, which it wasn’t, assume for a moment that we could have a really good model. You might think, “Well, maybe you want to have the opportunity to put yourself through the final exam process.” Yes, it’s going to be extremely stressful, but many students articulated this feeling of, “You know, I really worked hard for this, and this was my chance to actually prove to people that I can do it. I’ve invested all this time and energy into it, and the whole point of it is to go through that stress and to really try my best.” I mean, you had kids on the street during the COVID-19 outbreak with signs saying, “Let’s ditch this algorithm. I want to actually do this exam.”
Dr. Annette Zimmermann (47:59):
I think that really shows us that very often the mere prediction isn’t what we’re trying to get at. We want the experience. And whenever we want the experience and the hard human process, that’s where I think we can’t really replace these kinds of processes. The trouble is I think it’s going to be really difficult to predict which areas of life that’s going to apply to. And I also think people are going to dramatically disagree on that. So I don’t think there’s a really neat principled solution to this problem.
Dr. Annette Zimmermann (48:31):
I think we need a process of ongoing political arguing about that because otherwise we’re going to end up imposing a policy on people that basically says, well, in this area of your life, you’re allowed to have agency. And in this area of your life, we don’t really care about your free will. We’re just going to give you that prediction and guess what, it’s pretty accurate. That would be unsatisfactory.
Jeremie Harris (48:57):
Yeah. And you can definitely understand the resentment too. Even if, as you say, the algorithm were basically 100% accurate, or had a wicked good F1 score or whatever metric, imagine somebody tells you, “Hey, you know what? We ran the numbers, based on all this data, much of which has nothing to do with features you would normally associate with yourself personally. It’s features about your community, features about your family life, your upbringing and this and that.”
Jeremie Harris (49:25):
Then we tell you, “Okay. With 99% confidence, we can tell you, you’re going to score between a B and a B+ here.” That’s a tough pill to swallow, in some ways even worse the more accurate the system gets. I mean you can see people getting more resentful as they’re being told what their worth is to society in some sense by these systems.
Dr. Annette Zimmermann (49:45):
Yeah. Sometimes the process of quantification itself can be offensive in some ways, right? Sometimes getting ever more fine grained in our assessment of what it means to be somebody isn’t actually what we want. Sometimes I think we want to take a step back and not care about certain granular details. I mean you can have marginal differences between people. Again, obviously philosophers have a concept for that because I guess we label each tiny idea that we have. In this context the concept that comes to mind for me is this notion of opacity respect.
Dr. Annette Zimmermann (50:25):
Opacity respect basically means that with some kinds of questions, we don’t want to look too closely at differences between people, because we just decide these differences might not be relevant for our decision about what we owe to that kind of person or how we should distribute certain benefits and burdens in society. Sometimes we don’t actually need to know everything about somebody in order to treat them respectfully. So it might undermine respect if we ask too many questions: “How good are you actually at this?” or, “Have you actually thought about this?” These kinds of questions, especially when it comes to allocating responsibilities and rights to people, can often be undermining.
Jeremie Harris (51:08):
Yeah. It really makes me think of there’s a whole body, obviously, of moral philosophy around free will and this question that if you have somebody who’s just an absolute psychopath and they go around murdering people, you feel entitled to be very angry with them. But then later you discover that they have a brain tumor, and that brain tumor completely explains everything they did. All of a sudden they’ve done a 180 from being this just absolute monster to being someone who’s really worthy of a lot of sympathy and you just feel bad for the person.
Jeremie Harris (51:39):
Sometimes uncovering more and more… I guess uncovering a cancer or something like that, a tumor, is something we’d want to know about, but then there’s a continuum where you start pushing that much further and you start to go, “Oh, okay. Well, Jeremie just said that horribly obnoxious thing because neuron number 25 is connected to neuron number 47 this way, so it’s really purely deterministic. It’s not his fault. It’s just the way his brain is.” But at a certain point these things start to get very uncomfortable.
Dr. Annette Zimmermann (52:10):
Right. I think that’s right. I mean one concern is intrusiveness in this area, and intrusiveness can be disrespectful. But I also think that very often it’s just unhelpful to find more information about somebody because very often we’re just interested in a question of blameworthiness and responsibility. We can answer questions about responsibility and blameworthiness without having a really hyper-detailed explanation of just why this person just said this terribly obnoxious thing.
Dr. Annette Zimmermann (52:45):
So in many domains we just want to say, “Look, you did this thing. We don’t really care why, but can you just apologize? You did some harm, and we’re going to hold you accountable for that.” I think that’s a kind of common view that we would have about people, and I think we’re going to have it in an AI context as well. Information as such doesn’t really tell us much. It’s more about what type of information we’re working with and what we do with that.
Jeremie Harris (53:16):
As you’ve pointed out too in the past, I guess doing this algorithmically also causes us to have a little too much sometimes, too much confidence in the outputs of these systems. We start to go, “Oh, if the algorithm says that this person’s morally responsible, then no question. We don’t need trial by jury,” right?
Dr. Annette Zimmermann (53:32):
Yeah. I mean there’s a well-studied psychological phenomenon called automation bias, and that is definitely something that we have to grapple with. We as humans just have this bias that when we’re dealing with a quantified story about something, we’re more likely to trust that quantified story because we think, well, as soon as something is measurable, it makes it more tangible and more objective, perhaps even impartial. Because we kind of think, “Well, if it’s just numbers, there can’t be some agenda behind it. There can’t be a bias behind it.”
Dr. Annette Zimmermann (54:07):
That is dangerous thinking. I mean, obviously, we’ve got to be open to the idea that some modes of quantifying something are indeed impartial, but it doesn’t go without saying that all forms of quantification are. That’s something to always have at the forefront of our minds, and I think it’s especially important for tech practitioners who have to make these daily judgment calls about how much trust they should place in a system.
Jeremie Harris (54:33):
Yeah. Yeah. It’s almost like the fact that the machine learning engineer gets to… They kind of make these meta decisions, as you’ve alluded to, by choosing architecture, by choosing levels of abstraction in engineering features. They make those choices. Then they move away, and that creates the illusion that no choices were ever made because the system is just sort of running on its own. Whereas if you had humans doing it, it’s like no, every instance the person is re-engineering features, redeploying their judgment. Yeah, it’s an interesting world we’re headed to.
Dr. Annette Zimmermann (55:03):
Yes. And I think the main thing is there isn’t one easy solution that applies to all forms of technology and all domains, so I think we should be extremely skeptical of blanket tech optimism. But we should be equally skeptical of blanket tech pessimism or tech hostility because I don’t think that any ethical and political argument will just apply across the board to all sorts of different applications.
Jeremie Harris (55:35):
Well, unfortunately, as somebody who’s incapable of holding more than one thought in my head at the same time, I’m going to have to figure out one or the other, but really appreciate-
Jeremie Harris (55:44):
That’s it, as with all things nowadays, it seems. Thanks so much for your time, Annette. This was a great conversation. I do want to make sure if people want to follow you on various social media, what would be the best way to follow you?
Dr. Annette Zimmermann (55:56):
Best way is probably on Twitter, so that’s @DrZimmermann with two N’s at the end. Yeah. I’m on Twitter all the time.
Jeremie Harris (56:06):
Perfect, yeah. Me too, unfortunately. A topic for another day. Great. We’ll make sure we link to that. We’re going to write up a blog post to come with the podcast. People can read that as well, and we’ll provide a bunch of links to your work, which is very fascinating as well. Thanks so much for making the time. Really appreciate it.
Dr. Annette Zimmermann (56:21):
Thanks so much. Thanks for having me on. It was a fascinating conversation.
Author: Dr. Annette Zimmermann | Twitter: @DrZimmermann
Originally Posted on: Towards Data Science
Editor’s Note: This episode is part of our podcast series on emerging problems in Data Science and machine learning, hosted by Jeremie Harris. Apart from hosting the podcast, Jeremie helps run a Data Science Mentorship startup called SharpestMinds