Sharon Li stresses the importance of creating a more sound and reasonable approach to how researchers train deep learning and neural network models. Photo: Sara Stathas

A self-driving car. A generative AI that can compose your cover letter. An AI-driven medical imaging system.

Each of these technologies carries the potential to make our lives easier and more efficient. But if they’re not well-designed, each of them also carries the potential to wreak devastating havoc.

It’s the latter part of that equation that concerns Sharon Li, an assistant professor of computer sciences and faculty affiliate with the Data Science Institute. Li focuses on ensuring that machine learning systems — from automated machinery to the large language models that fuel generative AI — are as safe and accurate as possible when they’re unleashed onto the world.

Li, who joined UW–Madison in 2020, points to a persistent assumption in machine learning research: that the data an AI model was trained on will always match what the model encounters in the real world.

“What we want is for the model to abstain from making a prediction when it encounters something new it doesn’t know,” Li explains. “That’s really the virtue of teaching AI to know what it doesn’t know — so it can be more conservative and honest in answering a question and operating in the zone about which it is knowledgeable. That has been the central theme of my research.”
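
In practice, that kind of abstention is often built on a confidence score. The sketch below is a minimal illustration of the general idea (not Li’s specific algorithms), using the maximum softmax probability as the score and a hypothetical, hand-picked threshold: when the classifier’s confidence on an input falls below the threshold, it declines to predict and flags the input as possibly out-of-distribution.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over class logits.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predict_or_abstain(logits, threshold=0.9):
    """Return (predicted class, confidence), or (None, confidence) to abstain.

    The score here is the maximum softmax probability, a common baseline for
    out-of-distribution detection; the 0.9 threshold is purely illustrative
    and would normally be calibrated on held-out in-distribution data.
    """
    probs = softmax(np.asarray(logits, dtype=float))
    confidence = float(probs.max())
    if confidence < threshold:
        return None, confidence   # "I don't know": flag as possibly out-of-distribution
    return int(probs.argmax()), confidence

# Illustrative logits from a hypothetical cat/dog/truck classifier:
print(predict_or_abstain([4.0, 0.5, 0.2]))  # confident, predicts class 0 (cat)
print(predict_or_abstain([1.1, 1.0, 0.9]))  # nearly flat scores, abstains (None)
```

Published OOD detectors generally use more refined scores than this, but the underlying design choice is the same: pair each prediction with a measure of how much the model should trust it.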

The problems can crop up in many ways — and not just when ChatGPT coughs up an incorrect set of facts. Li cites a serious example: In December of last year, a driver was killed when his car’s autopilot function failed to correctly recognize a tractor-trailer crossing the road.

“Most people tend to focus on all the amazing things a model can do, but the blind spots are something the industry is not investing enough in exploring,” says Li.

For Li, the answer has been to create a more sound and reasonable approach to how researchers train deep learning and neural network models — a strategy that is sometimes at odds with the rush to release technology into the market quickly. But significant progress is being made. When she started her research efforts six years ago, the detection error rate for a simple image classification task (is the image a cat, a dog or a truck?) was an alarming 50%. Improvements have since reduced that number to less than 5%.

Initially — maybe even astoundingly — Li’s early efforts to improve AI machine learning were dismissed by some members of the computing community.

At the time, many researchers were focused on eking a few more tenths of a percentage point of accuracy out of models trained on ImageNet, a massive dataset of visual images. Li was focused instead on the dataset’s potential blind spots: images that didn’t fall into its prescribed training categories.

“How would the model react?” she asked. “Would it actually know that it’s failing in some sound manner?”

When Li and her collaborators presented the first set of out-of-distribution (OOD) detection algorithms they had created to begin addressing the issue, some in the AI community openly wondered if it was even a problem worth solving.

Li’s stint in industry at Meta from 2017 to 2019 gave her real-world experience with machine learning models and convinced her she was on the right track. At UW–Madison, she’s been able to focus much more directly on the problem. Just last year, MIT Technology Review named her “Innovator of the Year” for her work on OOD detection, and she received a prestigious CAREER Award from the National Science Foundation.

Large language models — the kind that fuel generative AI like OpenAI’s ChatGPT — are a much more problematic space, one whose challenges can’t be solved by simply porting over improvements from image classification models. In many cases, the mountains of data that fuel large language models are proprietary.

“That makes research harder, determining what the model doesn’t know,” she notes.

The technical term for the phenomenon is “hallucinations”: instances in which a generative AI produces factually incorrect information.

“There are multiple stages when things could go wrong,” says Li. “The training data might already contain some misinformation because the data is crawled from the entire internet. Solving that requires the industry to really put more effort into having transparency about their training data and how they go after ensuring the factuality of that data.”

Li remains driven both by a passion for advancing her field and by the opportunity to work with her students, mentoring them to shape the next wave of AI researchers.

“We’ve made a lot more progress than we could have done in the industry,” she says, “because here, we’re able to dedicate our time to understanding how these systems work. What are the capabilities? But more importantly, how do we fix those blind spots in a more reliable manner?” 
