The Impact of LLMs on the Legal Industry

10.11.2023

Artificial intelligence (“AI”) has long been a hot topic across various industries, including the legal sector, especially with the recent breakthrough of generative AI systems such as Large Language Model (“LLM”) applications like ChatGPT, which can process and generate humanlike text in real time. These technologies could revolutionize the way legal cases are managed, making it more critical than ever for professionals in the legal industry to learn how to harness the power of generative AI to their advantage, and to understand the limits of its capabilities. In this article, we’ll explore how generative AI systems, with an emphasis on LLMs and their related applications, could be incorporated into legal work, as well as potential challenges faced by economic expert witnesses.

AI is a field encompassing a wide range of technologies, including machine learning and natural language processing. Generative AI, a subset of AI built on machine learning, refers to systems that not only learn from data but can also generate new data. While the underlying technology is complicated, the idea of how generative AI systems work is quite simple: the system uses algorithms to find patterns in large data sets and adapts those patterns to generate new content, including text, images, and code.[1]
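
To make the “learn patterns, then generate” idea concrete, below is a toy sketch in Python. It is a simple bigram word model, vastly simpler than a real LLM (which uses neural networks trained on enormous data sets), but it illustrates the same core loop: learn which word tends to follow which, then sample new text from those learned patterns.

```python
# A toy illustration of the "learn patterns, then generate" idea behind
# generative AI. This bigram model is orders of magnitude simpler than a
# real LLM, but the principle is the same: estimate which word tends to
# follow which, then sample new text from those learned patterns.
import random
from collections import defaultdict

def train_bigrams(text):
    """Count which word follows which in the training text."""
    words = text.split()
    followers = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)
    return followers

def generate(followers, start, length=10):
    """Generate new text by repeatedly sampling a plausible next word."""
    word, output = start, [start]
    for _ in range(length):
        if word not in followers:
            break
        word = random.choice(followers[word])
        output.append(word)
    return " ".join(output)

corpus = "the court held that the defendant did not breach the contract"
model = train_bigrams(corpus)
print(generate(model, "the"))  # e.g., "the defendant did not breach the contract"
```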

LLMs are generative AI systems that work with language and generate text. ChatGPT, an LLM application, is “trained” on large amounts of text data obtained from the internet, and is able to analyze text inputs and generate humanlike responses.[2] Since its launch in late 2022, ChatGPT has sparked discussions about its applicability in various industries, given its ability to “understand” the context of text inputs and generate “thoughtful” responses in real time. However, the technology is not without challenges. For example, because it relies on patterns from public information available on the internet, without any confirmation that those patterns reflect reality, it can sometimes generate fabricated information. In fact, many professionals working in the legal industry are aware of the involvement of ChatGPT in the courtroom earlier this year in the Mata v. Avianca, Inc. case in the Southern District of New York.[3]

  1. A Lesson from Mata v. Avianca, Inc.

In a personal injury suit filed by Roberto Mata against Avianca Airlines for injuries allegedly sustained from a serving cart due to an airline employee’s negligence, Plaintiff’s counsel was found to have submitted a brief containing references to a number of nonexistent cases. In an affidavit, Plaintiff’s counsel admitted that he “consulted the artificial intelligence website ChatGPT in order to supplement the legal research performed” and was “unaware of the possibility that its content could be false.”[4] The Judge in this matter commented that these cited cases were “bogus judicial decisions with bogus quotes and bogus internal citations,” and represented an “unprecedented circumstance.”[5] The case is an important reminder for legal professionals considering LLM applications (such as ChatGPT) in preparing legal documents: AI-generated work must be properly verified before it is filed.
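
One practical safeguard is to programmatically check that every case an LLM suggests actually exists in an authoritative source before it goes into a filing. The sketch below queries CourtListener’s public case-law search API; the endpoint, parameters, and response fields shown are assumptions to be confirmed against the current API documentation, and the case names are hypothetical placeholders. A name match is only a first pass: quotes, holdings, and citations still require human verification.

```python
# A minimal sketch of the verification step Mata v. Avianca shows is
# essential: confirm AI-suggested cases exist in an authoritative database
# before filing. The CourtListener endpoint and response fields used here
# are assumptions -- check the current API documentation before relying on them.
import requests

def case_appears_to_exist(case_name: str) -> bool:
    """Return True if a case-law search finds at least one matching opinion."""
    resp = requests.get(
        "https://www.courtlistener.com/api/rest/v3/search/",
        params={"q": case_name, "type": "o"},  # "o" = judicial opinions
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0

# Hypothetical citations returned by an LLM, for illustration only.
for citation in ["Smith v. Example Airlines", "Doe v. Acme Corp."]:
    status = "found" if case_appears_to_exist(citation) else "NOT FOUND - verify manually"
    print(f"{citation}: {status}")
```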

  2. How Can Legal Professionals Integrate LLMs into Everyday Work?

Despite the challenges illustrated in the example above, LLM applications like ChatGPT hold tremendous potential to transform the way legal professionals work, including significantly accelerating the identification of legal precedents and statutes during research. While traditional AI applications have long been used in eDiscovery data collection and investigations, generative AI’s ability to comprehend massive volumes of legal documents and create humanlike text makes it more likely to change how legal work itself is defined.

In addition to accelerating processes like legal research and drafting summaries of documents, LLM applications such as ChatGPT can also assist in framing legal arguments or outlining sophisticated legal documents. By inputting relevant legal opinions and case backgrounds, professionals can use LLMs as a writing aid to help create structured arguments and enhance their writing efficiency. For example, when ChatGPT was asked a general question about how it would structure a brief if it were retained by a Defendant facing allegations of price-fixing, it generated the following response:

1. Introduction

Provide a concise summary of the case, including the allegations against the Defendant and the specific time period in question. State that the purpose of the brief is to demonstrate that the Defendant did not engage in price-fixing during that period.

2. Legal Standards

Outline the legal standards applicable to price-fixing allegations, such as the relevant antitrust laws and the elements that the Plaintiff must prove to establish a price-fixing violation. Emphasize the burden of proof on the Plaintiff to demonstrate the existence of an illegal agreement to fix prices.

3. Lack of Evidence

Present a detailed analysis of the evidence available, focusing on demonstrating the absence of any agreement or concerted action to fix prices during the specific time period. Highlight any inconsistencies or gaps in the Plaintiff's evidence, and challenge the credibility or reliability of their witnesses or documents.

4. Market Conditions and Competitive Factors

Discuss the market conditions and competitive factors during the alleged period of price-fixing. Provide evidence to show that the pricing behavior observed was the result of legitimate market forces, such as changes in supply and demand, input costs, industry dynamics, or independent business decisions.

5. Independent Decision-Making

Highlight the Defendant's independent decision-making processes and emphasize that any parallel pricing or similarities in pricing strategies were the result of individual business judgment rather than collusion. Provide evidence, such as internal documents, emails, or testimonies, that support the Defendant's independent pricing decisions.

6. Expert Testimony

If available, introduce expert testimony from economic or industry experts who can analyze the market conditions, pricing data, and provide an opinion that supports the absence of price-fixing during the relevant time period. Expert analysis can strengthen the Defendant's argument and provide additional credibility to the defense.

7. Precedent and Case Law

Refer to relevant legal precedents or case law that support the Defendant's position. Identify cases with similar factual circumstances where courts have rejected price-fixing claims, emphasizing the importance of considering the specific evidence and market dynamics.

8. Conclusion

Summarize the key arguments and evidence presented in the brief, reiterating that the Defendant did not engage in price-fixing during the specific time period. Request the court to dismiss the allegations or rule in favor of the Defendant based on the lack of evidence and failure to meet the burden of proof.

Source: ChatGPT.
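
For readers curious how such a prompt might be issued programmatically rather than through the chat interface, the sketch below uses the openai Python package (the pre-1.0 interface current as of this writing; newer SDK versions use a different client class). The model choice, prompt wording, and temperature setting are illustrative assumptions, not recommendations, and any authority the model cites must still be independently verified.

```python
# A sketch of issuing a drafting prompt like the one above through the
# OpenAI API, using the openai Python package's pre-1.0 interface (newer
# SDK versions use a different client class). Model, prompt wording, and
# temperature here are illustrative assumptions, not recommendations.
import openai

openai.api_key = "YOUR_API_KEY"  # in practice, load from an environment variable

case_background = (
    "Hypothetical summary: Defendant faces allegations of price-fixing "
    "in the widget industry between 2018 and 2020."
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "You assist a defense attorney. Provide an outline only, "
                       "and flag every point that requires verified legal authority.",
        },
        {
            "role": "user",
            "content": f"Case background: {case_background}\n"
                       "How would you structure a brief responding to these allegations?",
        },
    ],
    temperature=0.2,  # lower temperature yields more consistent, less creative output
)

print(response.choices[0].message.content)  # output still requires human verification
```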

Some question whether the rise of generative AI tools like LLMs will result in job losses in the legal industry, since these technologies appear to possess skills similar to those of legal professionals. As the Mata v. Avianca, Inc. case illustrates, an “AI takeover” does not appear imminent. But given how quickly technology in this area has been advancing, a time is coming when legal professionals will need to rethink their everyday work patterns and assess where a human reviewer still holds an advantage versus which parts of their work can be handled more efficiently by LLMs.

Law firms can take the initiative in reevaluating job responsibilities in the context of AI. For example, as LLMs become better at handling certain labor-intensive research tasks, paralegals may be able to take on more analytical work that requires human judgment and critical thinking. Instead of spending large amounts of time searching for relevant statutes, compiling research summaries, and drafting legal documents from scratch, these professionals could use their time more efficiently to streamline information, fact-check, and brainstorm recommendations for attorneys. This transformation may also call for new training and skill development programs that teach legal support professionals how to monitor, validate, and tailor LLM-generated work product for specific tasks.

Additionally, the importance of client confidentiality and data security in legal work raises legitimate questions about the use of public LLM applications like ChatGPT in the legal industry. In response, law firms are exploring the development of their own LLM applications, trained on their own databases and customized for each case.

For example, Harvey is an AI startup founded in late 2022 that specializes in building custom language models for law firms; it now has 15,000 firms on its waiting list.[6] Unlike ChatGPT, which is trained on publicly available data from the internet, Harvey is trained on legal data. Once engaged by a law firm, it can be trained on the firm’s own work product to automate document drafting and research tasks.[7]

In addition to Harvey, a number of other firms are introducing LLM applications that apply generative AI to legal work, including Casetext’s AI legal assistant, CoCounsel. Introduced in March 2023 and powered by OpenAI’s GPT-4, reportedly the first LLM to pass the Uniform Bar Exam, CoCounsel describes itself as maintaining the data security that law firms require, giving it the potential to train on client data and automate a variety of legal tasks to a high standard.[8] Some law firms, such as Baker McKenzie, have also been investing in in-house machine learning teams to develop their own generative AI systems. They plan to combine LLM technology with other business applications such as web data mining and expert validation to augment lawyers’ judgments with “ground truth” and “real predictions.”[9]
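
While vendors like Harvey and Casetext do not publish their architectures, one common building block of firm-specific systems is retrieval over internal documents, so that confidential material is ranked and selected in-house before anything is shared with a model. The toy sketch below illustrates the idea with simple TF-IDF keyword retrieval using scikit-learn; it is an illustrative stand-in only, and production systems typically use neural embeddings instead.

```python
# A toy sketch of one ingredient of firm-specific legal AI: retrieving the
# most relevant internal documents for a query without sending the document
# set to an outside service. TF-IDF keyword retrieval is an illustrative
# stand-in; real products' architectures are not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

internal_documents = [  # hypothetical firm work product, kept in-house
    "Memo on antitrust damages methodology in price-fixing matters",
    "Brief outline for a breach of contract defense",
    "Research summary on admissibility standards for expert testimony",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(internal_documents)

def top_matches(query: str, k: int = 2):
    """Rank internal documents by cosine similarity to the query."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix)[0]
    return sorted(zip(scores, internal_documents), reverse=True)[:k]

for score, doc in top_matches("expert testimony admissibility"):
    print(f"{score:.2f}  {doc}")
```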

  3. Navigating the Risks of LLMs in Expert Witness Work

The use of LLMs in legal cases may also pose challenges to the work of expert witnesses. For example, economists often provide expert testimony, presenting analyses that assess a range of economic issues or developing economic theories to estimate potential damages. LLMs could theoretically assist in preparing these analyses, including summarizing economic theories, developing pricing models, or describing lost-earnings methodologies. However, experts who consider relying on generative AI work product in their testimony should be mindful of these tools’ potential limitations.

Expert witnesses share their scientific, technical, or other specialized knowledge as opinions based on facts and data, validated by human judgment. An economist relies on economic theory and on their research and understanding of a particular industry to evaluate whether a proposed methodology is sound and its results are reasonable. While an LLM like ChatGPT can generate answers when fed relevant background information, those answers are derived merely from patterns in its training data, without any human evaluation of whether the economic theory applied is appropriate or whether important facts are missing from the data the LLM learned from. Further, the same confidentiality and data security challenges faced by attorneys would typically also apply to expert witnesses.

Additionally, an expert who utilized generative AI tools like an LLM application would need to demonstrate that their methodology was based on sufficient facts and data and on reliable principles, that those principles were reliably applied to the facts, and that their work product could withstand testing and replication. However, because LLMs generate output through opaque, probabilistic processes, how their results are produced is not necessarily fully transparent or replicable, which may raise questions about whether work product generated with these tools can be sufficiently tested and replicated to be admissible in court.
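
One way to mitigate the transparency problem is disciplined record-keeping: logging every prompt, model version, parameter set, and output so that LLM-assisted analyses can be produced in discovery and re-run for testing. A minimal sketch follows; the record format is an illustrative assumption, not a legal or professional standard.

```python
# A minimal sketch of one way to make LLM-assisted expert work more
# defensible: log every prompt, model identifier, parameter set, and output
# so the work can be produced in discovery and re-run for testing.
# The record format here is an illustrative assumption.
import hashlib
import json
from datetime import datetime, timezone

def log_llm_interaction(prompt: str, model: str, params: dict, output: str,
                        path: str = "llm_audit_log.jsonl"):
    """Append an audit record for a single model call to a JSON Lines file."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,                      # exact model/version identifier
        "parameters": params,                # e.g., temperature, max_tokens
        "prompt": prompt,                    # full input, verbatim
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "output": output,                    # full output, verbatim
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_llm_interaction(
    prompt="Summarize the but-for pricing methodology used in Section III.",
    model="gpt-4-0613",
    params={"temperature": 0.0},  # temperature 0 reduces, but does not eliminate, run-to-run variation
    output="(model output would be recorded here)",
)
```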

  4. Conclusion

Demand for higher productivity in the legal industry will continue to drive the adoption of LLM applications in this field. With more law firms and expert witnesses adopting or planning to adopt generative AI tools, there is no doubt that we will see more examples of LLM-assisted work product used in court. However, the risks associated with adopting generative AI in legal and expert work underline how critical it is that the technology be fully assessed and properly used, with thorough verification, before any work product is presented in court.

[1] https://www.nvidia.com/en-us/glossary/data-science/generative-ai/.

[2] https://help.openai.com/en/articles/6783457-what-is-chatgpt

[3] Roberto Mata v. Avianca Inc., 1:22-cv-01461, in the U.S. District Court for the Southern District of New York, February 22, 2022.

[4] Affidavit dated May 24, 2023.

[5] Order to Show Cause dated May 4, 2023.

[6] https://www.reuters.com/legal/legal-ai-race-draws-more-investors-law-firms-line-up-2023-04-26/.

[7] https://www.harvey.ai/blog.

[8] https://casetext.com/blog/cocounsel-powered-by-openai-gpt-4/.

[9] https://www.law.com/legaltechnews/2023/02/02/baker-mckenzies-machine-learning-leader-discusses-the-firms-plans-for-generative-ai/.
