
Picture a terrarium. It’s a controlled environment that contains and sustains life. But what if that life were artificial intelligence? The AI Terrarium is an interdisciplinary research study led by Timothy Rogers, a professor of psychology. He’s working with researchers across the College of Letters & Science and at other universities on research that uses AI to create realistic, simulated personas that imitate the beliefs, language and behaviors of real humans. Here’s what they’ve learned so far.
1. AI can roleplay.
And not just as a member of your Dungeons & Dragons campaign. When instructed, AI can roleplay as different types of people. Take, for example, a Midwestern farmer. Researchers could prompt AI with questions to get an idea of what that person might say. “If the AI is accurate, this could provide an opportunity to simulate, for instance, results of surveys or responses to different public messaging strategies,” Rogers says.
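To make the idea concrete, here is a minimal, hypothetical sketch of persona-style prompting in Python. The ask_model helper, the persona wording and the survey question are illustrative placeholders, not the team’s actual setup.

    # Hypothetical sketch of persona prompting for simulated survey responses.
    # ask_model stands in for any chat-style language model call; it is not
    # part of the AI Terrarium's codebase.

    def ask_model(system_prompt: str, question: str) -> str:
        """Placeholder for a call to a chat-style language model."""
        return "(model response would appear here)"

    persona = (
        "You are roleplaying as a Midwestern farmer in your 50s. "
        "Answer survey questions the way this person plausibly would, "
        "in their own voice."
    )

    survey_question = "How do you feel about proposed changes to crop insurance?"

    print(ask_model(system_prompt=persona, question=survey_question))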
2. That said, AI isn’t as human-like as it seems.
Sure, chatbots may seem like they are having a reasonable, human-like conversation with you, but one telltale way to clock an AI is how quickly it changes its mind. That’s not common with humans, who often stubbornly cling to their opinions regardless of counterarguments. Rogers says this can limit the effectiveness of using AI to simulate human responses, which is why a crucial goal for this research is to understand how to make the AI respond in more human-like ways.
3. It’s more than demographics.
Thanks to large survey studies from Dhavan Shah (’89), the Jack M. McLeod Professor of Communication Research and Louis A. & Mary E. Maier-Bascom Chair in the School of Journalism and Mass Communication (SJMC), and Sijia Yang, an associate professor in SJMC, researchers know that people’s attitudes on important topics tend to be related to sociodemographics like age, race, gender, class and income. And while all of this information can be used to shape how AI roleplays, it surprisingly does not lead the AI to produce beliefs or opinions characteristic of the corresponding groups. “For example, if we want AI to roleplay as a strongly religious person, it is more effective to tell the AI to act like someone who believes that miracles are real than to ask it to roleplay based on the demographics of highly religious people,” Rogers explains.
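A rough sketch of the difference, with made-up prompt wording for both conditions (neither is the study’s actual phrasing):

    # Two hypothetical ways of steering a roleplaying model.
    demographic_prompt = (
        "Roleplay as a 62-year-old churchgoing woman from a rural county "
        "with a middle-class income."
    )
    belief_prompt = (
        "Roleplay as someone who believes that miracles are real and that "
        "faith should guide everyday decisions."
    )
    # Per the finding above, prompts like belief_prompt tend to elicit
    # responses more characteristic of highly religious people than
    # prompts like demographic_prompt.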
4. How humans change their beliefs is not well understood.
To figure out how to make AI more human-like, you first have to understand, well, humans. That’s where Rogers’ expertise in psychology comes in. He’s collaborating with Jerry Zhu, a professor of computer sciences, and Robert Hawkins, an assistant professor of linguistics at Stanford University, to create computationally grounded models of human learning and behavior that help explain when and why people dig in on their beliefs despite facing contradictory evidence. One of these new models suggests that people weigh both how much they trust a source and how much consensus they perceive around a claim when updating their opinions.
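As a toy illustration only (not the researchers’ actual model), an opinion update that weighs both trust and consensus might look like this:

    # Toy example: an agent moves its opinion toward a claim by an amount
    # scaled by how much it trusts the source and how much consensus it
    # perceives. Parameter names and weights are invented for illustration.

    def update_opinion(opinion: float, claim: float,
                       trust: float, consensus: float,
                       w_trust: float = 0.5, w_consensus: float = 0.5) -> float:
        weight = w_trust * trust + w_consensus * consensus
        return opinion + weight * (claim - opinion)

    # Low trust and low perceived consensus: the agent barely budges,
    # mimicking the "digging in" described above.
    print(update_opinion(opinion=0.2, claim=0.9, trust=0.1, consensus=0.1))  # ~0.27

    # High trust and high perceived consensus: a much larger shift.
    print(update_opinion(opinion=0.2, claim=0.9, trust=0.9, consensus=0.8))  # ~0.80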
5. It’s essential to study influence.
Historically, research has focused on how and why people change their beliefs, but the AI Terrarium is taking a new approach — studying the people doing the persuading. That’s why Rogers’ team is developing new mathematical theories coupled with behavioral experiments to better understand mutual, competitive influence under more realistic models of how people behave.
6. AI is learning from humans and humans are learning from AI.
It’s a symbiotic relationship. Perhaps the best illustration of this is looking at social media algorithms. The AI is learning what videos and ads will be most effective with the scroller, while the scroller is taking in information from the content being served to them. Researchers have learned that these complex systems can be steered by specific inputs, but the question is: How can they be steered to support the greater good? “We need to understand the central dynamics of this system to ensure that such steering pushes society toward beliefs and behaviors that aid everyone,” Rogers says.