AI will change not just legal work, but legal education and lawyer training – here are five examples of how. AI’s application to the law also raises key questions about evaluating legal work. What is ‘a good contract’? How do we measure this? In this article, Dr. Megan Ma, Associate Director at the CodeX group within Stanford Law School, sets out multiple examples of pioneering projects in this rapidly developing field.
Dr Ma will be a speaker at the Legal Innovators California conference in San Francisco, on June 4 and 5 (see below for more info and tickets to attend.)
—
‘What’s in a Lawyer: The Future of Legal Education and Practice,’ by Dr. Megan Ma, Associate Director, CodeX and Law, Science, and Technology Program, Stanford Law School
With the advent of frontier technology, the question often asked is whether lawyers will be replaced or augmented. Yet, with generative AI, the question extends beyond this replacement/augmentation binary. It is much more fundamental: lawyers are being asked about the value they offer.
That is, foundation models are encouraging legal professionals and the broader community to better understand the mechanics of our own practice. What are our necessary skillsets? What metrics are we able to weigh ourselves against? Today, when we are asked about the delivery of quality legal services (e.g. what distinguishes a good contract from a great one?), there is no single explicit answer. We are forced to be introspective and, unfortunately, we may be woefully unprepared to respond.
The lack of empirically verifiable information on what qualifies as a legal task, and on what is required to ensure that task is done at a certain level of quality, has made the use and integration of generative AI difficult to navigate. Accordingly, we have seen a year of exploration of the potential of the technology, but not its potential for the legal profession. Current use cases have cabined applications of large language models (LLMs) to the completion of high-volume, low-sophistication tasks. Without sufficient benchmarks against human performance, how would we know whether LLMs make good assistants, or even co-pilots?
I have previously discussed that one of the core complexities of the legal industry is that much of the work is deeply diverse and the value-add is implicit. It is driven by experience and the specific know-how of the client and industry base. Importantly, the knowledge of the profession is found in the processes, not the product. So what of generative AI's application in legal education and training?
At the Stanford Center for Legal Informatics (CodeX), we consider that translating tacit legal knowledge could enable us to build valuable training tools for our next generation of lawyers. Effectively, we see a need to better equip law students and young legal professionals with tools that could continuously train them in the practice of law.
Accordingly, we have begun to experiment in this direction via five key projects co-created and developed with our key law firm and industry partners: (1) simulating negotiations; (2) deposition strategy and feedback reviewer; (3) implicit redlining; (4) multi-agent law firm simulation; and (5) supervisory AI.
The first project is inspired by seminal work on communicative agents and has significant applications in the legal education space. Drawing on the expert insight of M&A attorneys with over 40 years of experience, we aim to build negotiation games that vary in assertiveness, corporate structure, and industry nuance, enabling more personalized training and granular strategy management. Applying a traditional agentic model known as BDI (belief-desire-intention), we have released our alpha prototype and anticipate extending the platform with more modules and a more robust UI/UX in the coming months.
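The article does not describe the prototype's internals, but the BDI loop itself is well established: an agent updates its beliefs from what it perceives, deliberates over its desires, and commits to intentions it then acts on. A minimal, purely illustrative sketch of how that loop might drive a negotiation agent with a tunable assertiveness level (all names and the concession heuristic are assumptions, not the CodeX implementation):

```python
from dataclasses import dataclass, field

@dataclass
class BDINegotiator:
    """Toy belief-desire-intention loop for a hypothetical negotiation agent."""
    assertiveness: float                            # 0.0 (conciliatory) to 1.0 (aggressive)
    beliefs: dict = field(default_factory=dict)     # current model of the counterparty
    desires: list = field(default_factory=list)     # ranked deal objectives (target prices)
    intentions: list = field(default_factory=list)  # committed next moves

    def perceive(self, counter_offer: float) -> None:
        # Update beliefs from the counterparty's latest move.
        self.beliefs["last_offer"] = counter_offer

    def deliberate(self) -> None:
        # Commit to a next offer given current beliefs and the top-ranked desire.
        target = self.desires[0] if self.desires else 0.0
        last = self.beliefs.get("last_offer", target)
        # More assertive agents concede a smaller share of the gap.
        concession = (1.0 - self.assertiveness) * (last - target)
        self.intentions = [target + concession]

    def act(self) -> float:
        # Execute the committed intention: emit the next offer.
        return self.intentions[0]

agent = BDINegotiator(assertiveness=0.8, desires=[100.0])
agent.perceive(counter_offer=60.0)
agent.deliberate()
offer = agent.act()  # concedes only 20% of the gap: 100 + 0.2 * (60 - 100) = 92.0
```

In a full system, the deliberation step is where an LLM could be conditioned on the agent's persona (assertiveness, corporate structure, industry) rather than a fixed formula.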
The second project delves into litigation preparation. Depositions are often positioned as an important strategic moment for counsel to assess the direction of the argument they are developing for their client. The deposition strategy and feedback tool leverages multimodal LLMs to allow an attorney to receive feedback on their performance based on the goals and anticipated outcomes defined for that deposition.
We envision that the litigation partner would upload a deposition outline that identifies in the abstract what they hope to achieve from the deposition (e.g. themes, trial theory, etc.). The tool would then provide referenceable feedback that could be revisited at a later date to determine how the deposition contributes to their broader argument. We have already deployed the prototype in a secure environment with zero data retention and SSL encryption, and are actively conducting user testing with experienced litigators.
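The workflow just described — goals from the outline in, per-goal referenceable feedback out — can be sketched as a simple pipeline. This is an assumption-laden illustration, not the deployed tool: the model call is stubbed as a plain callable so the structure is visible without tying the sketch to any particular LLM provider.

```python
from typing import Callable

def feedback_prompts(goals: list[str], transcript: str) -> list[str]:
    # One prompt per deposition goal, so feedback stays referenceable by goal.
    return [
        f"Deposition goal: {goal}\n\nTranscript:\n{transcript}\n\n"
        "Assess whether the questioning advanced this goal, citing the "
        "specific exchanges relied upon."
        for goal in goals
    ]

def run_review(goals: list[str], transcript: str,
               review: Callable[[str], str]) -> dict[str, str]:
    # Keyed by goal so feedback can be revisited later against trial theory.
    prompts = feedback_prompts(goals, transcript)
    return {goal: review(prompt) for goal, prompt in zip(goals, prompts)}

goals = ["Lock in the timeline of the March 3 meeting"]
transcript = "Q: When did you first meet? A: Early March, I believe."
report = run_review(goals, transcript, review=lambda p: "stubbed model output")
```

A production version would replace the lambda with a multimodal model call that also ingests the video or audio of the deposition, and would persist `report` for later revisiting.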
With our third project, we unpack the assumptions of standardization and best practices in contracting via implicit redlining. While certain types of contracts lend themselves well to standardization, much of complex legal work has historically been highly bespoke. In fact, the interplay between expertise and experience demonstrates that insights are fairly heterogeneous and can vary dramatically from senior associates to partners.
To better understand the specific value-add of highly trained experts and determine ways to capture the knowledge senior partners possess, we are developing an MS Word integration that leverages LLMs to draw out implicit insights from contract redlines and comments made by partners. We then build "custom contract personas" that can articulate specific rules and the reasoning why redlines and other linguistic cues correlate with certain understandings of market practice.
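To make the idea concrete: each tracked change carries three signals — what was deleted, what was inserted, and any margin comment — and the LLM's job is to articulate the implicit rule that explains the edit. A hedged sketch of that extraction step (the `Redline` structure and prompt wording are hypothetical; the model call itself is omitted, since any LLM client could fill that role):

```python
from dataclasses import dataclass

@dataclass
class Redline:
    """One tracked change pulled from a contract draft (fields illustrative)."""
    clause: str    # clause heading, e.g. "Indemnification"
    deleted: str   # text the partner struck out
    inserted: str  # text the partner added
    comment: str   # any margin comment left alongside the edit

def insight_prompt(r: Redline) -> str:
    # Build the prompt an LLM would receive to surface the implicit rule.
    return (
        f"In the {r.clause} clause, a senior partner replaced:\n"
        f'  "{r.deleted}"\nwith:\n  "{r.inserted}"\n'
        f'and commented: "{r.comment}"\n\n'
        "State, as a reusable drafting rule, the market-practice reasoning "
        "this edit implies."
    )

r = Redline(
    clause="Indemnification",
    deleted="shall indemnify for all losses",
    inserted="shall indemnify for direct losses only",
    comment="Buyer-side market norm.",
)
prompt = insight_prompt(r)
```

Aggregating many such extracted rules per partner is what would give a "custom contract persona" its voice.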
Inspired by seminal work on generative agents and its recent application in the medical space, our fourth project explores the Agentic Law Firm: Legal Workflow Automation and Multi-Agent Simulation. We consider that this area of research could enable a better understanding of the dynamics, social interactions, and operational efficiencies in legal practice. Within this track, there are two sub-projects: (1) LLM-based law firm simulation; and (2) autonomous legal colleagues.
In the former, we aim to empirically test the operational efficiency of existing workflows and internal collaborations in a law firm setting by creating custom LLM agents that are prompted to interact with one another in a social network simulation system. The aim of these simulations is to assess the ways in which actions and interactions across different types of agents affect value generation. In the latter, we seek to develop the first legal Devin, identifying workflows and processes that encourage human-machine collaboration. The broader aim is to help in-house counsel, boutique law firms, and other small legal teams (e.g. in legal aid) distribute work more productively.
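The structure of such a simulation can be shown in miniature: role-prompted agents exchange messages over rounds, and metrics are computed over the resulting interaction log. Everything below is a stub — the roles, the random pairing, and the canned messages stand in for what would be LLM-generated behavior conditioned on role prompts and history:

```python
import random

# Hypothetical role prompts for a law-firm simulation; in the real system each
# role would seed an LLM agent rather than a static description.
ROLES = {
    "partner":   "delegates matters and reviews work product",
    "associate": "drafts and escalates open questions",
    "paralegal": "gathers documents and checks citations",
}

def simulate(rounds: int, seed: int = 0) -> list[tuple[str, str, str]]:
    """Run a toy social-network simulation: who messages whom, about what."""
    rng = random.Random(seed)
    log = []
    for _ in range(rounds):
        # Pick a distinct sender/receiver pair for this interaction.
        sender, receiver = rng.sample(list(ROLES), 2)
        # Stub: a real agent would generate this message from its role prompt
        # and the accumulated interaction history.
        message = f"[{sender} -> {receiver}] re: {ROLES[sender]}"
        log.append((sender, receiver, message))
    return log

interactions = simulate(rounds=5)
# Value-generation metrics (turnaround time, escalation depth, rework rate)
# would then be computed over this interaction log.
```

The point of the sketch is the shape of the experiment: vary the agents or the workflow, re-run, and compare the metrics over the logs.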
The last project reflects on the future of legal practice and how the responsibilities of the lawyer may be upheld in the era of human-machine collaboration. That is, what mechanisms must be in place to ensure and continuously verify that appropriate expert human oversight and accountability exist? We consider that an ideal approach dynamically integrates LLM guardrails and legal analysis in a way that continuously captures the nuance of the legal profession. Supervisory AI agents, in effect, can serve as a technical risk-mitigating measure for the safe deployment of AI systems in industries that have professional codes of conduct and important certification regimes, such as legal services. We are currently engaging in several case studies on how supervisory AI agents would play a role in the verification and assurance of state bar rules of professional conduct.
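Architecturally, a supervisory agent sits between a drafting AI and the outside world: every output must pass a battery of conduct checks before release to a human reviewer. The sketch below illustrates only that gating pattern — the two rule checks are invented toy heuristics, and a real system would encode actual state bar rules and likely invoke an LLM for the nuanced cases:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    approved: bool
    reason: str

# Toy conduct checks (illustrative only, not real professional-conduct rules).
def check_confidentiality(draft: str) -> Verdict:
    if "[CLIENT-CONFIDENTIAL]" in draft:
        return Verdict(False, "draft still contains confidential markers")
    return Verdict(True, "no confidentiality flags found")

def check_candor(draft: str) -> Verdict:
    if "guaranteed outcome" in draft.lower():
        return Verdict(False, "promises an outcome; candor rules forbid this")
    return Verdict(True, "no prohibited assurances found")

def supervise(draft: str) -> Verdict:
    # Gate: the draft must pass every check before reaching a human reviewer.
    for check in (check_confidentiality, check_candor):
        verdict = check(draft)
        if not verdict.approved:
            return verdict
    return Verdict(True, "released for human review")

v = supervise("We project a favorable settlement range; guaranteed outcome.")
# The second check blocks release and records a referenceable reason.
```

Because each verdict carries a reason, the gate doubles as an audit trail — the "continuous verification" the article calls for.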
We consider these projects both a preview of, and our response to, the transformation the legal profession is facing. More importantly, they rely on substantive cross-disciplinary collaboration between academia and industry. In the coming months, Stanford CodeX will be bringing these projects to their next stage of experimentation and technological development via rapid prototyping and iterative feedback. In effect, generative AI is not simply an integration into our legal processes; we anticipate it becoming integral to the very fabric of how we educate and train our future legal professionals, cultivating the next generation of legal experts.
–
If this subject is of interest – and if you work in the legal field it’s arguably going to be vital to the future development of the profession – then come along to the Legal Innovators California conference on June 4 and 5 in San Francisco to learn more. Tickets are available now.
Legal Innovators California conference, June 4 + 5.
The event will take place in San Francisco with Day One focused on law firms, and Day Two on in-house and legal ops. We have many great speakers, including Dr Ma, along with a group of pioneering legal tech companies and service providers – you can see some more about our speakers here. It will be two great days of education and inspiration! Join us!
For ticket information, please see here. Come along to what will be a great event in San Francisco focused on how the legal world is changing.