A BRAVE NEW WORLD

By Robert Schlesinger

Maybe you’ve heard about the law partner who wants a virtual bot to handle mundane tasks. Or perhaps you’ve heard about the legal services nonprofit that used a generative artificial intelligence (AI) tool to cut its case backlog in half. You’ve almost certainly heard about the unfortunate New York attorneys who inadvertently cited bogus precedents invented by ChatGPT in a court filing.

Tech companies are engaged in a generative AI arms race that is sweeping society along and leaving everyone else to work through the benefits and pitfalls. And Suffolk Law, long a leader in legal innovation and technology, is uniquely prepared for the ongoing changes—and the ones coming—for the legal profession.

“It is mind-boggling to think how far we will go in the next five to ten years in light of how far we’ve gotten in less than a year,” Suffolk Law Dean Andrew Perlman says, reflecting on how this revolution is already affecting legal education and the practice of law.

It has been a head-spinning year since generative AI exploded into the public consciousness, making real what had long been the province of science fiction. Artificial intelligence has touched seemingly every part of society, launching a thousand discussions about the future that run the gamut from utopian to dystopian. One thing is certain, for better and worse—change is coming. A spring Goldman Sachs report, for example, predicted that AI will produce “significant disruption” and that generative AI could take over one quarter of current work tasks in the U.S.—a figure that climbs to 44% for the legal sector.
ChatGPT and other apps like it are built on large language models, which are trained on immense datasets (e.g., much of the Internet) and can find relationships between words to produce new, plain-language responses in ways previous AI iterations (think Siri) could not. They are, in essence, text-completion engines, able to use the massive amounts of data upon which they are trained to produce new combinations of words in plain, sometimes conversational, language. Less well understood is their ability to absorb and analyze large new data sets, whether the text of a new law, a legal brief, or a lifetime’s worth of email.
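To make the “text-completion engine” idea concrete, here is a toy sketch of the principle, assuming nothing about any vendor’s actual model: tally which words follow which in a training text, then extend a prompt with statistically likely next words. Real large language models use neural networks trained on vastly more data, but the autocomplete intuition carries over.

```python
import random
from collections import Counter, defaultdict

# Toy bigram "text-completion engine": learn which word tends to follow
# which in the training text, then extend a prompt one word at a time.
# Real LLMs use neural networks and trillions of words, not bigram counts.
training_text = (
    "the court granted the motion to dismiss . "
    "the court denied the motion for summary judgment . "
    "the parties filed the motion to dismiss ."
)

follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1  # count every observed word pair

def complete(prompt: str, length: int = 8) -> str:
    """Extend the prompt with likely next words, sampled from the counts."""
    out = prompt.split()
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # nothing learned here; a real LLM would guess anyway
        choices, weights = zip(*candidates.items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

print(complete("the court"))  # e.g. "the court granted the motion to dismiss ."
```

The same mechanism hints at why the “hallucinations” discussed later in this story happen: asked to continue text it has no good statistics for, a model still produces a fluent-sounding guess rather than admitting it doesn’t know.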
Casetext, a firm specializing in leveraging AI technology for lawyers, had been around for nearly a decade when its founders got an advance look at GPT-4 last year. “We slept like three hours a night for the first two weeks,” co-founder Pablo Arredondo recalls. “It was one of the most intellectually exhilarating and professionally exciting periods of my life.” They immediately grasped the technology’s implications and “pivoted the entire company around it.” The hard work and the rapidity of change paid dividends: Thomson Reuters (the owner of Westlaw and many other law-related services) purchased Casetext for more than $650 million earlier this year because of the company’s cutting-edge tech.
Casetext has been connecting with Suffolk Law for years. Arredondo and co-founder Jake Heller had student ambassadors at the school, and they guest-lectured in legal innovation classes. Within months of ChatGPT’s public release, the company launched CoCounsel, an AI legal assistant that uses generative AI to help with a host of time-consuming tasks: summarizing documents, synthesizing information and putting it into memo form, extracting data from contracts, conducting document review, creating timelines, and so on. It’s a taste of what lawyers can expect in an AI-empowered future. “Tasks that lawyers have to do that are repetitive, that require doing the same thing over and over again, they can just use AI to make things faster and more efficient,” says Quinten Steenhuis, a practitioner in residence and adjunct professor in Suffolk’s Legal Innovation & Technology (LIT) Lab. “That’s a little grease in the wheels to speed things up.”
Casetext gave the California Innocence Project, which advocates for people who have been wrongfully convicted and incarcerated, access to a beta test of CoCounsel, and the group used it to analyze its backlog of cases and extract the key data points that indicate how promising each case is. In short order, the group was able to cut its case backlog from four years to two.
The promise of making legal processes more efficient is one of generative AI’s bright upsides. Vedika Mehera, JD ’15, an early graduate of Suffolk Law’s legal innovation program, now runs Orrick Labs, her firm’s in-house development team, which builds custom solutions for the firm’s attorneys and clients. She’s working on an AI tool that can ingest a complex document like the 2022 Inflation Reduction Act and quickly answer client-specific questions about its contents—doing a job in minutes that would have taken a human hours. The goal is to help lawyers make better use of their time.
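Mehera’s tool is proprietary, but tools of this kind commonly follow a “retrieve, then generate” pattern: split the statute into chunks, surface the passages most relevant to the client’s question, and hand only those to the model. Below is a minimal sketch of that general pattern, with `ask_llm` as a hypothetical stand-in for whatever model API a firm actually uses; it is an assumption about the technique, not Orrick Labs’ implementation.

```python
# Sketch of the "retrieve, then generate" pattern behind document Q&A tools.
# `ask_llm` is a hypothetical placeholder, not a real API or Orrick's code.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in a hosted LLM call here")

def split_into_chunks(document: str, size: int = 500) -> list[str]:
    """Break a long document into roughly size-character chunks."""
    chunks, current, length = [], [], 0
    for word in document.split():
        current.append(word)
        length += len(word) + 1
        if length >= size:
            chunks.append(" ".join(current))
            current, length = [], 0
    if current:
        chunks.append(" ".join(current))
    return chunks

def relevance(chunk: str, question: str) -> int:
    """Crude keyword-overlap score; production tools use vector embeddings."""
    question_words = set(question.lower().split())
    return sum(1 for w in chunk.lower().split() if w in question_words)

def answer(document: str, question: str) -> str:
    """Send only the most relevant excerpts to the model, not the whole act."""
    chunks = split_into_chunks(document)
    top = sorted(chunks, key=lambda c: relevance(c, question), reverse=True)[:3]
    prompt = (
        "Answer using only these excerpts:\n\n"
        + "\n---\n".join(top)
        + f"\n\nQuestion: {question}"
    )
    return ask_llm(prompt)
```

Grounding answers in retrieved excerpts, rather than in the model’s general training data, is also one practical guard against the fabrication problem described later in this story.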
Our editors used an AI text-to-image tool to create an attorney robot, shown above. Except for the headshots, the images in this article were generated by the AI tool Adobe Firefly.
As TikTok’s top lawyer in the United States, Matthew Penarczyk, JD ’95, runs a multidisciplinary legal practice that fields the gamut of legal questions—“some of them are complex and strategic, and some of them are more mundane and more tactical,” he says. “It’s that latter category that tends to come in at a high-volume rate and tends to be the same questions over and over again. ‘Where can I get an NDA?’ for example.” Trained on the proper data set, generative AI can efficiently tackle such “tactical, higher-volume, lower-risk scenarios,” he says, adding, “That will enable legal professionals around the world to free themselves from that sort of thrash and churn and … focus on higher-order, more complex issues.”
That’s precisely what a partner at Orrick wants, Mehera says. Her marching orders: “Take all of my emails and deals and content and everything I’ve created, and turn it into a bot of me,” she recalls. Ideally, the partner should be able to delegate mundane tasks such as replying to routine emails or drawing up simple documents to the AI, and because it’s trained on the partner’s previous output, it could respond appropriately.
Even in the context of generative AI, it may seem like science fiction, but Mehera is excited to tackle it. “An attorney’s bot is a great, thoughtful exercise that we would love to spend our time building,” Mehera says. “That’s my next big priority.”
Generative AI can also act as a sounding board and partner. The LIT Lab’s Steenhuis, for example, uses it to help him refine outlines for presentations and other documents. “I use it quite a lot as an editor: Here’s what I wrote, what am I missing, what do I say?” he says. Lawyers can feed drafts of articles or interrogatories into an AI and ask it to identify holes in their arguments or questions that they’ve missed. “It’s great to have a second set of eyes that doesn’t tire of hearing the same thing over and over and over again with minor changes,” he says.
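That “second set of eyes” workflow amounts to a simple prompt pattern. Here is a sketch of the kind of instruction Steenhuis describes; the wording is illustrative, not his actual prompt or any particular tool’s API.

```python
# Illustrative "AI as editor" prompt pattern; the wording is an example,
# not Steenhuis's actual prompt or any particular tool's API.
draft = """Outline: AI and court forms
1. The access-to-justice gap
2. How guided interviews help
3. Next steps"""

editor_prompt = f"""You are reviewing a draft for a busy attorney.
Here is what I wrote:

{draft}

1. What arguments or counterarguments am I missing?
2. Which points are unclear or repetitive?
3. What questions should I anticipate from a skeptical reader?"""

# Paste editor_prompt into the chat tool of your choice (or send it via an
# API) and iterate: revise the draft, then ask again.
print(editor_prompt)
```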
In the long run, such a massive increase in efficiency could profoundly change the entire legal services business model. “It will change law firm and law practice economics,” says Gabriel Teninbaum, JD ’05, Suffolk’s assistant dean of innovation, strategic initiatives, and distance education. The billable hour, that basic unit of the professional legal economy, may be on the way out.
DIGITAL HALLUCINATIONS

AI “hallucinations” are the tip of the digital iceberg.

Dean Perlman learned about generative AI’s shortcomings firsthand. Testing ChatGPT, he typed out a series of multiple-choice questions on legal ethics, his own area of expertise. When it returned an answer he knew to be incorrect, he asked how it had arrived at its conclusion. It gave him a link as a source—which didn’t work. When he asked ChatGPT about it, “It said my internet must not be working, which was surprising because that’s how I was talking to ChatGPT,” he recalls.
But for just a moment, and despite his own expertise, doubt had wormed into Perlman’s mind. Could he be the one in the wrong? “I’m an expert in the field, and it got me wondering,” he says. “If it gets me—an expert—to second-guess in an area I know about, what could it do in an area where we’re not experts?”
Perlman was just conducting an experiment—others have been less lucky. New York attorneys Steven Schwartz and Peter LoDuca achieved viral infamy after filing a ChatGPT-powered brief that cited fictitious cases, drawing a sanction from the presiding judge. Inaccurate content from these tools is so common that a generative AI term of art has been coined to describe it: hallucinations.
In cases like Schwartz and LoDuca’s, the problem was that the AI had to extrapolate beyond its base data. So it followed its programming and made a best guess at how to autocomplete. “It’s not that it’s trying to be mean. It has no concept of true or false,” Casetext’s Arredondo says. “Whenever you ask it something outside of its general knowledge base, instead of saying ‘I don’t know,’ it will say things that are very, very believable—but completely fabricated.”
He adds: “We very much advocate a ‘trust but verify’ system. We know it’s something to have the same relationship with our product that Ronald Reagan had with the Soviet Union, but that’s where we are.”
These “hallucinations” are the tip of the digital iceberg in terms of the issues legal professionals have to think through. “While generative AI shows much promise, the potential downsides are enormous,” Perlman says. In August, he was named to the Advisory Council of the American Bar Association’s newly formed Task Force on Law and AI, which will address a host of issues that have cropped up in the wake of the technological rush into the unknown. These include how to handle client privacy (ensuring sensitive information and data remain protected if fed into third-party, web-based AIs), how to account for and deal with AI bias, and, of course, the spread of misinformation—not simply AI hallucinations but also disinformation deliberately created and spread by humans.
In a sense, the hallucinations are low-hanging fruit—a problem that can be mitigated with common sense and a baseline understanding of how to use the tools. In important ways, the legal profession is actually well-suited to avoid such pitfalls because it has a wealth of reliable data upon which generative AI can be trained: case files, court dockets, and the kind of internal document management systems many law firms have. That doesn’t obviate the “verify” side of Arredondo’s formulation, but it creates the conditions necessary to meet the “trust” side.
“In some ways, the term generative AI is a misnomer because it puts the focus on the fact that it generates stuff,” says Casetext’s Heller. David Colarusso, who runs Suffolk Law’s LIT Lab, puts it another way: Focusing on the generative side of these tools is “like looking through the wrong end of a telescope.” Instead of entering a brief prompt and expecting a lengthy reply, he says, smart users will leverage these tools’ ability to digest, analyze, and summarize vast quantities of data.
The bottom line is that as with any powerful tool, those wielding it need to have a basic understanding of how to use it. “This is like the great power, great responsibility thing,” Colarusso says. “Giving [students] enough knowledge to know both the power and the danger that comes with these tools—I feel like that’s something that Suffolk uniquely does.”
THE SUFFOLK APPROACH

Long before nearly any other school, Suffolk Law anticipated this moment. “For a decade, we have been at the forefront of preparing law students for a profession that is undergoing rapid change,” Perlman says. Ten years ago, the school created the nation’s first concentration in Legal Innovation & Technology (LIT), and in 2017, it opened the LIT Lab to give students hands-on, real-world opportunities to reimagine the delivery of legal services. National Jurist has twice named Suffolk Law the top school in the country for legal technology, and Bloomberg named the school a Top 10 Law School Innovator in 2023.
These and related efforts have produced substantive results: The Lab has created mobile-friendly apps that guide users through the filing of court forms—and tens of thousands of people have already used them. Given that nearly 90% of the essential civil legal needs of people with modest means go unmet each year, such tools are becoming essential.
While the initial effort was Massachusetts-focused, students and faculty met this year with court administrators in eight other states to assist those jurisdictions in building their own smartforms. The school also became the first to launch a multistate smart system for electronic filing of court documents, with profound and rapidly evolving implications for equity and access to justice.
Nevertheless, generative AI has also brought challenges with which the school is wrestling, including how to teach and assess student performance in this new age. “These are issues that all organizations that do knowledge work need to address, not just Suffolk,” Teninbaum says. “How will we train students to work in a legal future that looks quite different than the present? How will we deal with academic integrity issues around generative AI? How will generative AI change legal work and the practice of law? And what does it mean to learn the law when software tools give you shortcuts, even if they’re effective?”
In keeping with the innovative spirit it has fostered, Suffolk Law has largely let its professors set their own rules for how to incorporate or limit generative AI in their courses: its academic integrity policy generally restricts the use of generative AI except when professors give permission for its use.
Some professors are avowed generative-AI critics. “I have long feared a devolution of society, whereby instead of technology existing to serve humans, we exist to serve technology and its financial benefactors,” says Professor David Yamada, director of the school’s New Workplace Institute. “With the advent of ChatGPT, we now have reached that sad tipping point. It’s human-designed technology that can supplant human reasoning, judgment, and creativity.”

Yamada raises a host of concerns, including plagiarism, discriminatory impacts, and estimates that generative AI will eliminate hundreds of millions of jobs. He adds: “Stay tuned, for this awful misadventure is only beginning.”
Other professors are embracing this brave new world, searching for the right balance between accepting generative AI as a fait accompli and actively addressing the broad ethical questions surrounding it.
“We’re training the students to be competent users of legal technology,” says Dyane O’Leary, JD ’05, professor of legal writing and director of the LIT Concentration. “We’re not teaching them legal technology—and by that I mean I don’t think my goal is to have students become master prompters of ChatGPT or understand the ins and outs of [Google’s generative AI] Bard or [Microsoft’s] Bing or other tools. Instead, it’s to instill the idea of being aware of what they don’t know and how to become competent.”
O’Leary is teaching an intersession course with Colin Black, a legal writing professor, on the emerging technology and its challenges. “Our approach is to touch on the high-level theory and concerns and principles about the technology, such as copyright, ethics, plagiarism, competence, the AI Bill of Rights, what’s going on in Congress,” she says. “But then every day the class will have a ‘boot camp’ session, where students will review real-life legal situations and use different generative AI tools and workshop, edit, and discuss.”
O’Leary and other professors have also been struck by how much the students, despite growing up in the digital age, are themselves working through how and when to use the technology. That realization has helped allay fears that generative AI would spur cheating. “In my view, many legal educators over the last six months have assumed that students would be all ablaze about this,” she says. “In my experience so far, most students get the issues, and most are responsible and open to using tech in an appropriate way.”
One reason that Suffolk Law has emphasized legal innovation and technology for so many years is that knowledge in these areas can make students more competitive in a rapidly changing world, says Dean Perlman. “The future will not involve a competitive battle between lawyers and generative AI. The battle will be between lawyers who are comfortable learning how to use these new tools and lawyers who are not. Suffolk Law is preparing its graduates for that world.”
Jake Heller
Co-founder of Casetext, a company that has built an AI tool to summarize documents, extract data, conduct document review, and more
Vedika Mehera, JD ’15
Mehera is working on an AI tool that can ingest a complex document like the 2022 Inflation Reduction Act and help answer clients' questions.
Matthew Penarczyk, JD ’95
The leader of TikTok’s Business Solutions & Corporate Compliance legal team says AI will pick up simple, high-volume tasks.
Gabriel Teninbaum, JD ’05
Suffolk Law’s assistant dean of innovation, strategic initiatives, and distance education
David Colarusso
Director of Suffolk Law’s Legal Innovation and Technology (LIT) Lab
Pablo Arredondo
Co-founder of Casetext, a company that has built an AI tool to summarize documents, extract data, conduct document review, and more
Suffolk Law Professor Dyane O’Leary, JD ’05
Leads the school’s Legal Innovation & Technology Concentration