Global law firm Herbert Smith Freehills (HSF) has set out its approach to the use of legal AI in a major report published this week.
It joins a growing number of law firms that now publicly embrace the use of AI and actively engage with clients to find out which services they want supported by machine learning and natural language processing (NLP) technology.
The report is partly a ‘beginner’s guide’ explaining what legal AI is all about, but, more interestingly, it also sets out HSF’s own views on what legal AI will do for the legal sector and how the firm is approaching the subject.
The firm has also published feedback from its clients on how they see the adoption of AI among their legal advisers. And this is perhaps the best bit of the report.
For example, HSF says: ‘General counsel say that when it comes to investment in AI, they expect law firms to be able to lead the discussion: to go to their clients with holistic answers to risk issues, such as predictive coding and regulatory investigation. Law firms have a responsibility to lead the debate, and help their clients establish fact from fiction.’
Well…that’s a view Artificial Lawyer has to heartily agree with.
The firm goes on to add that firms such as HSF have to take on some risk here, as exploring what AI can do will naturally involve learning and experimentation.
‘If law firms are not prepared to wear the risk of these endeavours they may meet scepticism from their clients. One general counsel warns that unless a legal provider is prepared to put its professional indemnity cover on the line to back its claims to technological ability, there is a risk of not being taken seriously,’ the firm states.
That is to say, clients want law firms to carry the can in terms of risk, liability and testing new technology. This certainly tallies with what in-house lawyers have told Artificial Lawyer, with some noting that they are simply not confident in handling any type of risk that AI may bring (even though in reality it would be minimal) and hence prefer external advisers to take it on.
This also echoes the view that GCs are not always best placed to make decisions about the use of technology, and that smaller in-house teams often lack sufficient operational and tech support to make use of AI initially. Then again, those corporate in-house teams already using AI (of which there are a growing number) don’t seem to have found it that complex or risky. But perception is half the battle here, so perhaps more GCs will become less risk averse as they grow used to AI.
It was also heartening to see this quote from the COO of a large bank, who said: ‘To gain credibility you need to lead from the top. If your leaders aren’t talking about AI, then you’re not in the game.’
This is another theme Artificial Lawyer has been trying to promote: legal AI adoption is so significant a change that it has clear strategic implications for a law firm’s medium- to long-term business planning. Such strategic issues need top-level management input, instead of being sent off to internal tech units where key business questions get buried.
The report then moves through very familiar territory, such as the time = money vs efficiency challenge; whether lawyers need to code; the idea that AI is there to augment, not replace, lawyers; and the demand for more emotional intelligence as process tasks become automated. But it’s good to see HSF is up to speed on these too.
The other highlight of the report is its summary of how HSF sees itself embracing AI, which it sets out as follows:
‘AI technologies have the potential to improve the ways in which law firms conduct their business.
Reducing total costs
• Automation/self-service options for lower-risk legal work
• Refocusing in-house resources on higher-value activities, and reducing external spend on mid-level work
• Disaggregation and cost reduction for high-volume or repetitive legal work (document review and discovery, NDAs, standard contracts)
• Reducing the need for large teams on some matters, leading to greater competition for higher-value work between global elite and boutique law firms.
Improving legal risk management
• Early warning of emerging discrimination risks through analysis of complaints and EAP requests
• Analysis of standard contract terms to identify negotiation “over-investment” and terms which are never disputed
• Combining current business activity with risk profiles, to predict the legal function’s future resource needs
Improving legal team performance
• Raised performance expectations for in-house teams as legal AI becomes more prevalent
• Change in benchmarking of team performance from time/cost measures to speed/value-delivered measures
• Shift in the roles and types of people required in-house, as teams hire legal technologists, knowledge managers and process engineers
Alignment and influence
• Better delivery on two critical business priorities, innovation and efficiency, while improving service standards.’
All in all, a very positive report that seeks to embrace legal AI, rather than look for excuses not to use it or find barriers to its adoption.