Artificial intelligence is increasingly affecting the legal profession, prompting the University of Chicago Law School to revise its curriculum to better prepare students for technological changes. Faculty and administrators are incorporating AI into courses while maintaining a focus on core skills such as research, analysis, and judgment.
“We have spent a lot of time thinking about AI in the Law School,” said William H. J. Hubbard, deputy dean and Harry N. Wyatt Professor of Law. “Our aim is to find the right balance between encouraging students to explore these new tools that could be very useful, and not short-cutting in ways that would be pedagogically unhelpful.”
Hubbard noted that law firms, especially larger ones, are using AI more frequently. As a result, it is important for graduates to have some proficiency with this technology. “We want to equip them with all tools of legal research and analysis—and that now includes AI,” he said.
In response, last year the Law School introduced an orientation session called “AI and the Legal Profession” for incoming students. The session outlines policies regarding AI use at the school and provides an overview of how AI can both help and hinder learning as students begin their studies.
Hubbard explained that key messages include recognizing generative AI as a valuable tool within legal education; acknowledging its risks and limitations; and understanding that it does not fundamentally alter the practice of law.
Additionally, several upper-level electives focused on AI were added last year to encourage critical thinking about these technologies’ uses in legal work. Mark Templeton, clinical professor and director of the Abrams Environmental Law Clinic, emphasized potential drawbacks: “We can’t outsource expertise and knowledge to these AI models,” said Templeton. “These tools can generate what look to be beautiful pieces of writing, but when you look closely, there are so many errors because the tools don’t understand technical terms sufficiently. When you use AI, there is a duty to supervise it like you would a junior attorney or paralegal. And to fulfill that duty, you have to be the expert yourself.”
The Law School is developing mandatory self-directed modules on generative AI for first-year students beginning in early 2026. These modules will provide foundational literacy in using generative AI for legal tasks while allowing more experienced users to advance quickly.
“There are so many generative AI tools out there now,” said Hubbard. “With these modules we also want to steer students toward the right tools for legal work, ones that are both more tailored to the needs of lawyers and less likely to cause problems with confidentiality and privilege. These modules are meant to be building blocks for what we will continue to introduce to them during their time here.”
AI has also been integrated into existing coursework through changes made in programs like Bigelow—which teaches first-year students research and writing skills over their initial academic year. Students are not allowed any use of AI during their first quarter while they learn foundational methods but may begin using such tools under supervision later in the year.
Faculty approaches differ regarding classroom use of AI technology. In his clinic course, Templeton allows full exploration with disclosure requirements: “There are so many tools out there designed to make a lawyer’s work easier,” he said. “We use Lexis and Westlaw... So, the question... is how can these new tools augment our ability...while ensuring thoughtful, high-quality representation of our clients? I’m excited to be exploring this with students.”
Conversely, Joan Neal, a professor from practice who teaches transactional skills, prohibits any use of artificial intelligence in her contract drafting course, largely because students lack sufficient experience. “This is the first time they’re drafting transactional documents,” explained Neal. “They don’t have base-level knowledge yet...” She nonetheless addresses ethical considerations around technology throughout her classes.
Neal permits limited use in her ethics class, where students may use artificial intelligence for brainstorming or supplementary research but not for content creation; they must disclose any use and assess how helpful the tools were. “I think students are finding that while AI can be helpful in some ways... it can fall short... Generative AI tends to give circular... generic answers... It’s not (yet) good at understanding nuances...”
A recent initiative called the “AI Lab” offers hands-on experience building legal tech products; this term’s project, led by Kimball Dean Parker (JD’13), founder and CEO of SixFifty, focuses on renters’ rights, with plans for a public release upon completion.
“It’s unlike anything we have ever done,” said Hubbard. “There are only a few intensive workshops like [this] anywhere in [the] country.”
Parker described engagement as essential: “AI is like putty... The Lab... is an opportunity... to get hands-on experience... Developing an understanding for [the] nature [of the] technology will make it easier... to learn any similar tool... even ones [that] may not exist yet.”
Adam Chilton, dean of the Law School, summed up the broader goals: “Our graduates are hired for their judgment... for their mastery... [which] is not something you can outsource... A UChicago Law graduate’s good judgment... intellectual robustness... and reputation... [are] priceless.” He added, “By incorporating AI thoughtfully across [the] curriculum... the Law School [ensures] its graduates not only adapt... but lead [the profession].”
