LexisNexis Adopts Generative AI to Simplify Legal Writing and Enhance Research Efficiency

In June, just months after OpenAI launched ChatGPT, two New York City lawyers made headlines by submitting a legal brief drafted with the chatbot. The brief cited fictitious cases, and the episode ended in public outrage, an irate judge, and two mortified attorneys. The incident highlighted a crucial point: while AI like ChatGPT can be useful, rigorous verification of its outputs is essential, particularly in legal contexts.

LexisNexis, a prominent legal software company, is keenly aware of this case. Its tools help lawyers locate the case law they need to support their legal arguments. The company acknowledges the promise of AI to alleviate much of the repetitive work lawyers face, but it is also wary of the pitfalls as it embarks on its own generative AI initiatives.

Jeff Reihl, the Chief Technology Officer at LexisNexis, recognizes the transformative power of AI. His organization has been integrating AI technologies into its platform for years. The recent advancements, particularly in generating text and enhancing conversational interactions, present a significant opportunity to improve lawyers’ productivity by streamlining brief writing and expediting citation searches. “Since the release of ChatGPT in November, we've seen a remarkable shift in the capability to produce text and engage in dialogue,” Reihl explained.

However, Reihl also acknowledges the inherent risks. “With generative AI, we’re becoming increasingly aware of the technology’s strengths and limitations,” he noted. “By merging our resources and data with the capabilities of large language models, we can create a solution that could fundamentally transform the legal sector.”

Survey Insights: The Lawyer's Perspective

LexisNexis recently conducted a survey of 1,000 lawyers regarding the potential impacts of generative AI on their profession. The findings indicated generally optimistic sentiments, yet revealed an awareness of the technology's limitations that tempers some enthusiasm.

For instance, when asked about the anticipated influence of generative AI, 46% of participants responded with the moderate assessment of "some impact," while 38% anticipated a "significant impact." Though these opinions reflect subjective evaluations of an evolving technology surrounded by considerable hype, a majority of respondents clearly recognize the potential of generative AI to enhance their work.

While vendor-driven surveys can introduce bias regarding question types and data interpretations, the insights gleaned here offer a glimpse into how legal professionals perceive this emerging technology.

One pressing concern remains trust in AI-generated information. LexisNexis is proactively addressing this challenge, focusing on ways to enhance user confidence in AI-generated results.

Addressing AI Trust Issues

The trust issue is a common hurdle for all generative AI users at this time, but Reihl affirms that LexisNexis fully understands the stakes for its clients. “For legal professionals, accuracy is non-negotiable,” he stated. “They must ensure that any case cited is still valid and appropriately represents their clients.”

Inaccurate representation can have dire consequences for lawyers, including disbarment. LexisNexis boasts a proven track record of delivering reliable information to its customers.

Seth Berns, a doctoral researcher at Queen Mary University of London, has noted that all deployed large language models (LLMs) are prone to hallucinations—a challenge that cannot be ignored. “An LLM is trained to generate output regardless of input relevance,” he explained. “It lacks the ability to assess its reliability in answering a query or making predictions.”

Enhancing Reliability: LexisNexis' Approach

LexisNexis is tackling the hallucination issue through multiple strategies. First, it is training models on its own extensive legal dataset to improve reliability—a step that addresses some concerns associated with general-purpose foundation models. Second, the company draws on the latest case law in its databases, avoiding the staleness seen in tools like ChatGPT, whose training data extends only to 2021.

By leveraging relevant and current training data, LexisNexis anticipates improved outcomes. “We ensure that any case references included in the results come from our extensive database, meaning we avoid issuing citations for non-existent cases,” Reihl assured.

While the complete elimination of hallucinations may be unattainable, LexisNexis allows legal professionals using its software to trace the AI's rationale back to the original source. “By integrating the strengths of large language models with our technology, we provide users with references to corresponding cases, enabling them to validate the information independently,” Reihl added.
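The pattern Reihl describes—restricting citations to cases that exist in a curated database and attaching a source reference to each result so a lawyer can verify it—can be sketched as a post-generation check. The sketch below is illustrative only: `CASE_DATABASE`, the citation strings, and the URL are hypothetical placeholders, not LexisNexis data or its actual pipeline.

```python
# Minimal sketch: validate model-cited cases against a known case-law
# database and attach a source record to each verified citation.
# CASE_DATABASE and its contents are hypothetical examples.

CASE_DATABASE = {
    "brown v. board of education, 347 u.s. 483 (1954)": {
        "source_url": "https://example.com/cases/brown-v-board",  # hypothetical
    },
}

def normalize(citation: str) -> str:
    """Canonicalize a citation for lookup: lowercase and collapse whitespace."""
    return " ".join(citation.lower().split())

def verify_citations(cited: list[str]) -> tuple[list[dict], list[str]]:
    """Split model-cited cases into verified records and unverifiable strings."""
    verified, unknown = [], []
    for citation in cited:
        record = CASE_DATABASE.get(normalize(citation))
        if record is None:
            # Candidate hallucination: withhold it rather than show the user.
            unknown.append(citation)
        else:
            # Attach the source record so the user can trace it back.
            verified.append({"citation": citation, **record})
    return verified, unknown
```

In this scheme, a citation the model invents simply fails the lookup and is withheld or re-queried, while every citation that does reach the user carries a link back to the underlying case for independent validation.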

It’s vital to recognize that this initiative is an ongoing process. LexisNexis is currently collaborating with six customers to refine its approach based on their valuable feedback, with plans to launch AI-powered tools in the upcoming months.
