Earlier this year, Annette Zimmermann gave a keynote lecture to a crowd of hundreds at the Artificial Intelligence, Ethics, and Society conference in Montreal, Canada.
In her lecture, she called for more caution and well-reasoned debate around AI deployment as industry actors rush powerful new AI tools to market.
The topic Zimmermann was addressing is notoriously controversial in the AI ethics and tech industry communities. The biggest case in point: Last November, developer OpenAI released the generative AI chatbot ChatGPT to the world, throwing everything from academia to graphic design and journalism into confusion and chaos.
Zimmermann, an assistant professor of philosophy whose research sits at the intersection of political and moral philosophy and the philosophy of AI and machine learning, took a more bird’s-eye view.
“There are some really interesting, longer-standing democratic problems that arise when a very small group of actors in a society has the ability to unilaterally make deployment decisions, to make widely accessible these really powerful tools,” says Zimmermann. “And ordinary citizens and also governments have to catch up with these initial deployment decisions.”
Democratizing the deployment of AI technology is just one of the ways professors in the College of Letters & Science are wrestling with big-ticket issues tied to modern computing technology. As part of a 2020 cluster hire, the College of Letters & Science paired researchers from the Department of Philosophy, the Information School and the Department of Computer Sciences. They’re collaborating to parse emerging issues with AI, smartphone apps and Big Data.
A large part of Zimmermann’s work is concerned with automation bias, which refers to our propensity to place an unduly high level of faith in automated decision-making and scoring systems, even when other sources of information contradict them. Automation bias is closely tied to algorithmic fairness — the notion of ensuring that automated systems don’t treat people differently when they merit similar treatment.
Zimmermann has spent a lot of time researching both phenomena in the criminal justice space, especially as they relate to algorithmic recidivism risk scoring tools. Often, such tools, including one at the center of a case appealed to the Wisconsin Supreme Court in 2016, scored Black defendants as high-risk repeat offenders while consistently scoring white defendants lower.
“It’s important that decision makers who are interacting with AI in any capacity are mindful of the risks of jumping to conclusions about the people who are actually being scored by one of these systems,” says Zimmermann. “We see this phenomenon that automating a process can replicate existing social inequalities. That’s particularly concerning in a domain like criminal justice, one of the core pillars of a functioning democratic society.”
In class, Zimmermann debates these issues with her students, along with related questions about the moral status of artificial agents, privacy and whether TikTok should be banned.
“I think people see AI and machine learning as their everyday reality,” she says. “And they want to understand their own ethical judgments about that domain better and in a more structured way. That’s where philosophy can really help.”
Property Is Power
While Zimmermann is examining democratic theory and the criminal justice system, Assistant Professor of Philosophy James Goodrich, her departmental colleague, has his eye on the massive mountains of data tech giants like Meta and Google have collected from their users — and the property rights that potentially surround it. Personal data, he notes, isn’t the same as a physical object like an apple or an automobile. As sole owners of that data, Goodrich argues, these companies end up accumulating a shocking amount of power.
“All that data is a source of profit that’s also a source of economic power, and economic power translates into political power very quickly,” notes Goodrich, who views these dynamics through the lenses of classical philosophers like David Hume and Immanuel Kant. “It gives them gigantic bargaining power. Some people are going to want access to that data to do things with it. But the only way you can get it is by partnering with these companies. They can set outrageous terms to partner with them, and they can also pick winners and losers in the market in the future.”
During the pandemic, for instance, the National Institutes of Health partnered with Meta to study the impacts of vaccine use. Meta gave the government agency access to its data, but it was entirely on Meta’s terms. Earlier this year, Professor of Journalism and Mass Communication Mike Wagner was allowed to view Facebook data related to a study of whether changes to the platform’s algorithms impacted users’ levels of polarization — but Meta determined which internal records were released.
Goodrich finds that kind of control worrisome. One of the things he’s studying is whether the traditional institutions that manage and regulate concentrations of political power, such as the Federal Trade Commission and the Federal Communications Commission, are equipped to handle this new type of power concentration. He’s also exploring the implications of the argument that users of popular social media platforms are having their data stolen or exploited by the platforms’ owners.
“I think exploitation and theft are best understood in terms of claims about property rights,” says Goodrich. “So, we need to know the nature of the property rights involved with data in order to understand those claims.”
Tap Now. No, Seriously. Right Now.
Are you unwittingly trapped by the attention economy? Clinton Castro argues that it’s very likely.
Consider: Is the first thing you do each day lunging for your smartphone to respond to the red-circled notifications that have come in across multiple apps? You may think you’re just being polite and interacting with the people in your world, but to Castro, an assistant professor in the Information School, what you’re really doing is trading attention for services. You’re ceding authorship of your own life.
To Castro, who studies data and information ethics and co-authored the upcoming book Kantian Ethics and the Attention Economy, there’s a devious strategy at work. The cycle that keeps us engaged with these apps has been carefully designed and executed by developers to exploit vulnerabilities in our psychology. It includes steps like triggering (you got a notification!), unpredictable rewards (I wonder what’s in that new text?) and investment (hey, why don’t you add a picture to your profile?). It’s a cycle meant to create and reinforce a habit, and it raises several interesting ethical questions.
“There’s the question of what happens to my basic capacities to direct myself and be the author of my life, when I can be so disrupted or influenced by this thing in my pocket,” Castro says. “It’s about what that might do to me, how it might condition my obligations to other people and what it does to our ability to work together to respond to important large-scale problems that require sustained attention and widespread cooperation.”
Castro likes to illustrate his point by citing a joke he heard comedian Esther Povitsky make about the ongoing impact of the attention economy.
“She’s talking about how she wants to read, she wishes she could read, but she can’t read,” says Castro. “She buys books, she picks up books, she looks at books. And then she blacks out and she’s on Instagram. I think that’s a very familiar occurrence. It’s like someone else is directing you. You’re not directing yourself.”