US law firms prioritise jobs and safety in AI rollout
US law firms’ use of generative artificial intelligence tools for training lawyers, automating workflows and tackling complex tasks highlights the technology’s growing significance two years on from the launch of OpenAI’s groundbreaking ChatGPT.
The release of the chatbot gave the public its first real demonstration of the power of generative AI tools and their ability to produce code, text or image responses to natural language prompts.
But, since then, the legal sector has faced a similar quandary to other industries: how to capitalise on the new technology without cannibalising existing jobs or compromising quality?
“We see huge potential for generative AI to make us better lawyers, and we want our people to feel confident in using the technology,” says Isabel Parker, chief innovation officer at White & Case. “We also have a duty to our clients to ensure that we are using generative AI safely and responsibly.”
Firms are now closer to understanding how the technology could make legal work better, faster and cheaper.
In mid-2023, Crowell & Moring began using generative AI for “legal adjacent” matters that did not involve confidential information. The firm encouraged its use for legal work “on a case-by-case basis where generative AI adds value, risks are mitigated, and the client has consented”, says Alma Asay, chief innovation and value officer at Crowell.
The firm has gradually used AI to help with more core tasks such as drafting letters and summarising testimonies with a client’s consent. That has cut the time taken to summarise a client’s intake notes to under 30 minutes, compared with two to four hours previously, says Asay.
Now, many firms — having tested the technology in relatively low-stakes environments, and allowed their clients to grow more comfortable with generative AI — are looking at how to make a bigger difference to workflows and find a competitive advantage.
“Summarising a document is helpful, but it’s not a game-changer . . . It’s a cost avoidance play, allowing us to ask better questions of vendors or bypass them,” says Thor Alden, associate director of innovation at Dechert, which is building its own AI tools on top of models from leading developers.
More important, he says, are the custom tools Dechert has built “to take data sets and infuse them into our workflows”. These tools are able to trawl huge data sets for specific information and respond to queries in the style of an expert lawyer.
The next target is to develop AI agents that are capable of performing a string of legal tasks — in effect, acting as an additional team member.
“AI allows you to look at any document in any context on any day,” says Alden. The tool allows you to search “in a way you couldn’t otherwise, and it may come up with a response you wouldn’t have thought of”.
Two of the biggest barriers to adoption, to date, are technological literacy and client caution — particularly when it comes to giving generative AI tools access to sensitive data.
A number of firms are emphasising the importance of staff mastering the technology, seeing it as a competitive edge in the sector. Crowell has rolled out mandatory AI training for its staff, and 45 per cent of the firm’s lawyers have used the technology in a professional capacity. Similarly, Davis Wright Tremaine has developed an AI tool to train young lawyers how to write more effectively.
But using generative AI for meatier legal issues brings additional complexity. Even the best chatbots today are prone to errors and invention, known as hallucinations. Those are serious concerns for a sector in which data privacy and accuracy are paramount.
“There are a lot of reasons why a client may say no to AI,” says Alden. “Sometimes it’s just caution about the risks; sometimes they just want you to ‘ask permission’ [before using AI].”
At Crowell, legal professionals must undergo training that addresses issues including hallucinations, the use of client data, and their own ethical responsibilities. The firm emphasises the limitations of AI tools as well as their potential, says Asay.
White & Case, meanwhile, has sought to protect client data by developing its own large language model in house. It is trained on an array of legal sources but privately licensed and deployed securely on the firm’s private network, says Janet Sullivan, global director of practice technology.
This approach gives lawyers “flexibility to explore the full potential of this technology” and gives the firm access to powerful frontier open-source models, while still protecting its data, she says.
The full potential of AI in a legal setting remains some way from being realised, as firms and their clients warm up to a technology that is still too error-prone to be used in highly sensitive settings.
But it is already cutting the time spent on onerous work such as trawling through data and summarising documents. And more efficiency gains are anticipated in the short term.
“I’ve always been a believer that technology helps lawyers get back to lawyering,” says Asay. “We didn’t need these tools decades ago when the amount of information was manageable. As the volume of information grows, technology helps us keep apace and ensures that humans are able to focus on their highest and best uses.”