An interdisciplinary team at the University of Chicago has launched a new initiative to examine how the humanities can contribute to and benefit from advancements in artificial intelligence. The Humanistic AI project, led by professors Hoyt Long and Chris Kennedy, is based at the Neubauer Collegium for Culture and Society. It brings together scholars from diverse fields including literature, linguistics, philosophy, sociology, and computer science.
The project's main goal is to identify both opportunities and challenges presented by generative AI models across academic disciplines. The researchers also aim to develop a strategic vision for collaboration between the humanities, humanistic social sciences, and computer sciences in advancing AI research. Beyond academia, the team is exploring how generative models are affecting creative processes that use AI tools. Case studies will be developed to support new approaches in humanistic research that address these technological changes.
Deborah Nelson, Dean of the Arts & Humanities at UChicago, commented on the significance of this effort: “I could not be more excited about the ways in which our faculty in the arts and humanities are thinking about innovative ways to work at the nexus of AI and culture. UChicago is uniquely positioned to be the leading voice in national discussions of how emerging AI technologies can positively advance humanistic research, and, at the same time, how humanistic expertise in analyzing and understanding information in historical and cultural contexts can help catalyze the next generation of breakthroughs in AI.”
The project's first workshop was held on October 17–18 at the Neubauer Collegium. Nearly two dozen scholars from 12 institutions participated alongside UChicago faculty members representing five departments, as well as graduate students and postdoctoral researchers.
“Everyone was able to get onto the same page and get to work,” said Hoyt Long, Andrew W. Mellon Professor of Japanese Literature and Digital Studies. “This was a rare opportunity for us to brainstorm research ideas that transcend any one discipline, and for those of us in the humanities and computer sciences to connect on an intellectual level. That made the whole event really exciting.”
During a “lightning round,” participants shared their current research projects related to AI's impact on their respective fields. In breakout sessions that followed, groups representing different disciplinary perspectives proposed collaborative pilot projects.
From these discussions emerged three main areas, or “research nodes,” that will guide ongoing work until June 2027: using large language models for simulated scenarios addressing humanistic questions; examining how AI tools may aid knowledge discovery and creative production; and studying the historical roots, philosophical implications, and creative potential of low-quality AI-generated content known as “AI slop.”
Chris Kennedy expressed optimism about these directions: “There’s clearly the seed of something that could be really new and different here,” said Kennedy, William H. Colvin Professor of Linguistics. “But there is also a challenge: how will we know whether we are learning something interesting about humans, rather than learning something about language models?”
A public roundtable discussion took place on October 18 during Arts & Humanities Day, for which UChicago's Division of Arts & Humanities partnered with Chicago Humanities this year as part of a broader effort to showcase scholarship related to arts programming.
Ted Underwood from the University of Illinois highlighted limitations as well as opportunities arising from current generative AI technology: “There are things that AI doesn't do very well that are opportunities for people in the humanities,” he said. “AI is really good at writing lots of kinds of texts. It summarizes things well. But it has not done a great job of putting novelists out of work.”
Researchers involved noted that studying differences between human-generated language production and outputs from language models can lead to deeper insights into both systems. As Kennedy explained: “In studying the differences between two kinds of language production systems—humans and language models—we can learn something about both of them, and the differences then become the basis for new insights. Humanists are particularly good at this kind of comparative work, and our hope is that extending it to a comparison of humans and AI will lead to unexpected and exciting discoveries.”
The Humanistic AI group plans further collaboration through June 2026 before presenting results at a final session tentatively scheduled for spring 2027.
