By Erica Noonan
Suffolk University Law School Dean Andrew Perlman caused a minor media stir recently when he dashed off a dense academic paper in about an hour. He didn’t do it alone. His co-author, ChatGPT, actually did most of the heavy lifting, so Perlman listed it as lead author on the paper.
Their 16-page article, “The Implications of ChatGPT for Legal Services and Society,” is one of the earliest high-profile academic demonstrations of the new chatbot tool’s ability to impact the legal profession.
The tool, created by California-based OpenAI, is built on a sophisticated language model that interacts with users conversationally and responds to questions. In the consumer space, AI is increasingly used to perform a wide range of tasks, from drafting original content for emails and documents to offering advice on how to negotiate an airline ticket refund.
Just a few days after ChatGPT was released in late 2022, Perlman—who admits to being an enthusiastic futurist—decided to experiment with feeding the program a series of legal questions and prompts. “I’ve always enjoyed technology and been interested in the role it can play in the delivery of legal services,” he told Reuters.
AI has already crept into the periphery of the legal industry in recent years, and it’s expected that AI tech will soon become commonplace in the generation of legal documents and information. But some prominent academics have called for restrictions—or even outright bans—warning that increased use of AI tools in university classrooms will lead to endemic levels of cheating and academic dishonesty, a decline in the development of critical thinking skills, and overreliance on technology that can be vulnerable to racial bias.
Perlman said he takes such concerns very seriously. “There are real risks that this technology can be used for malicious purposes,” he says. “But the technology is going to advance, it’s going to be used, so we need to find a balance between ensuring that students develop the knowledge and skills that they need and being able to use these tools to their advantage.”
Shortly after Perlman’s initial experiment, a group of University of Minnesota researchers tested ChatGPT on law classwork and bar exam questions. The bot performed significantly worse than most human students.
Perlman’s perspective on this? Give the technology time. When he tried the new version of ChatGPT that Microsoft has incorporated into its Bing search engine, Perlman found it was able to answer 12 out of 15 challenging multiple-choice questions about legal ethics that he had drafted himself. “The analysis for each answer was surprisingly sophisticated,” he says, even for the questions that it answered incorrectly.
In his paper’s conclusion (which along with the abstract, outline, and prompts are the only human-created content), Perlman somewhat cheekily references the Borg, the evil, AI-like beings of Star Trek fame: “We need to find ways to adapt to these developments,” Perlman writes, “because resistance is futile.”