The emergence of Artificial Intelligence (AI) technology has been welcomed by many, not least by tech and innovation bosses at legal businesses tasked with providing quicker and smarter ways to deliver services to clients and colleagues.
But with its benefits come challenges, so rolling out AI-inspired projects is not always an easy task. And, as a fairly young industry, there are still some factors that need ironing out.
It raises the question: what do firms wish AI systems could do, or do better? Top tech executives at a range of businesses spoke to Artificial Lawyer about what they wish this machine learning technology could really do, and shared some of their frustrations with the current state of the art.
We started by asking: Generally, are you satisfied with AI tech as it is?
Andrew McManus, Global Co-Head of Innovation at Eversheds Sutherland – We are really excited by what it’s doing, but there’s potential for more. AI is being widely deployed in operations involving documents, contracts, due diligence and sets of information that need to be renewed. A challenge, however, is that you can’t replace what the human brain does. That said, there are bespoke situations where it can be trained by a lawyer.
Dal Virdi, IT director at Shakespeare Martineau – We should never be prepared to accept the status quo. IT is a natural disruptor. AI tech as it is today is a maturing capability and we are still within the early stages of that evolutionary journey. The tech that does exist is varied and enables different levels of capability.
“We should never be prepared to accept the status quo”
However, this has to be balanced against the processes it is applied to, to ensure that increased automation and value can be derived in relatively predictable scenarios and without introducing risk to outcomes. The rate of change means that these capabilities will rapidly evolve, delivering further operational enhancements, as well as advancements in the development of synthetic and continuously-learning transactional opportunities.
Laura Bygrave, Innovation Lead (Deloitte Legal) at Deloitte LLP – Yes, the legal tech vendors I’ve partnered with to date have worked hard to ensure their products can be deployed efficiently. They’ve done so by minimising the training time required for lawyers and providing a templated deployment approach for a variety of use cases. One example is document review automation products, where vendors provide guidance on how many examples an algorithm requires in order to train to an acceptable level of precision on a non-standard provision (one not trained ‘out of the box’).
Karim Derrick, Head of Research and Development at Kennedys – In general, my job is to get the best out of what I am presented with. Data is the biggest challenge to getting the most out of the current array of technologies. Without good data it is difficult to get good results.
Robert Shooter, partner and Head of Technology, Outsourcing & Privacy at Fieldfisher – At the moment, the vendors will demonstrate their wares to law firms, typically by getting their AI engines to crunch simple documents such as non-disclosure agreements. The output of the review is shown in various dashboards which allows for the triaging of issues.
The trouble is that most of the legal AI I’ve seen can only cope with simple documents ‘out of the box’ – and certainly not a bespoke set of precedents. For example, we would need something specifically suitable for Fieldfisher, and investing further to improve this can be a very expensive experience. You are taking quite a bet if you are going to spend 12 to 18 months getting an AI engine to ‘learn’ from Fieldfisher’s standard document set, not least if the AI-engine you back is no longer flavour of the month.
“The trouble is that most of the legal AI I’ve seen can only cope with simple documents ‘out of the box’”
Additionally, much of the current ‘AI’ out there is not genuine AI – it’s really document automation: there’s no machine learning, just data crunching. There’s nothing wrong with that, but it’s perhaps not as clever as we are being led to believe.
In terms of improving efficiency, what do you wish AI could do for you?
Shakespeare Martineau’s Dal Virdi – In a nutshell, it would be beneficial if AI could consolidate and learn from multiple sources of information and inputted variables.
By doing so, it would be able to provide more certain and predictable outcomes, facilitating faster delivery times while also being more efficient and accurate.
Additionally, being able to offer almost ‘human-like’ engagement, like a truly virtual assistant (supplemented by a user-friendly holographic and interactive image) would be the gold standard.
This engagement would facilitate communication, understand requests and process like nothing we’ve seen before. Imagine Siri or a chatbot on overdrive. Fundamentally, AI’s success hinges on being able to monitor and learn from everyday business processes. This insight can then be used to supplement current processes, identify ways of doing things differently and innovate within organisations – improving efficiency and outcomes.
Jane Stewart, Head of Knowledge and Innovation at Slaughter and May – We want to get to a place where AI can perform automated analysis and tagging of precedent documents for our knowledge system – which is theoretically possible, but quite a big ask.
Eversheds’ Andrew McManus – An important point is the increasing level of data that law firms are starting to collect and decisions that need to be made using that data.
We need data-based insights that will enable [lawyers] to make decisions. Hence, ideally, we’d like to see AI solutions that help add more data and improve the quality of data. We would like to use technology to help our legal teams gather data and use it more, to gain more insight and make data use more accurate.
The rate of adoption of AI in the legal industry is probably slower than it could be. This is probably because our work has to be highly accurate; after all, we are dealing with law!
Our lawyers provide advice based on quality, and we need to be sure any use of deep AI is completely accurate. Once we find areas where we are comfortable with the quality from the AI, then we can expand its usage. For us it’s important that any AI-inspired advice is correct.
Vendors should be putting out systems that are good and accurate. The interpretation of AI-inspired information needs to be supervised and checked and not just assumed to be correct. Systems must ensure algorithms are working properly, for example. We want anything that comes out of an AI platform to be accurate.
Because AI is a new set of capabilities, the usability of systems should be as simple as possible, making it easier for lawyers to use systems that support their advice. They should be able to use AI products without a huge amount of technical support. There is an ongoing need for improvement of user interfaces, and vendors should make it easier for users.
“the usability of systems should therefore be as simple as possible so that it’s easier for lawyers to be able to use systems”
There is also a need for demystification – making sure that lawyers know how the system can benefit clients and advising them. It’s also important to ensure people know what solutions we have and what they can do. Ideally, I’d like to see all the legal teams operate on a good tech level, that way they can then be independent and do more on their own. If you are relying on a small tech team, you can only go so far. So if the legal teams could do things themselves, it would be even better.
Fieldfisher’s Robert Shooter – There needs to be more investment on the vendor side: vendors need to come up with tech that has more AI and can crunch complex documents out of the box.
It’s frustrating to see a brilliant demo, only to find that the product won’t be ready to use for at least 6-12 months.
Deloitte’s Laura Bygrave – When procuring AI technology, a key factor in deciding to proceed with a particular vendor is the ability (or appetite) to integrate with other products already within Deloitte so as to enable us to scale usage quickly.
Bringing together vendors who solve discrete problems with other vendors who have complementary products to one another, and therefore could be purchased as a platform, would create significant efficiencies.
What challenges do departments such as yours face when it comes to AI tech? Are there any issues or frustrations?
Kennedys’ Karim Derrick – That too much time is spent improving existing practice and too little time is spent exploring how AI tech can be used to transform practice.
Also that, in practice, as much time is spent preparing data for processing as making inferences from that data.
“too much time is spent improving existing practice and too little time spent exploring how AI tech can be used to transform practice.”
Eversheds’ Andrew McManus – A challenge is changing the way a less established service, such as AI, is provided. For example, any change means people have to alter the way things are done, considering that the legal profession has been established in a certain traditional way for a long time. So it’s a cultural change.
There’s not much resistance, though; we have the big advantage of having agile, intelligent people in our legal teams. In addition, our lawyers need data that is accurate, which is why we stress that AI products must be accurate.
“any change means people have to alter the way things are done”
Shakespeare Martineau’s Dal Virdi – There needs to be a wider understanding that leveraging AI’s capabilities takes a significant amount of training, time and acclimatisation. This must happen in order to truly test and have confidence in the final capabilities and outcomes that may be delivered.
Also, people need to be more accommodating to business change as these types of technologies are introduced. This includes not seeing them as threats but instead as opportunities to change the way law firms operate and do things differently.
Finally, AI must stop being merely the ‘buzz’ ambition for everyone, adopted without a full understanding of its capabilities and what this innovation actually means.
Deloitte’s Laura Bygrave – AI technology creates a volume of data points that are proprietary to your business; at present it can be difficult to utilise, visualise and export this data so it can be used to make decisions or mitigate business risks. Vendors understand the challenge and are trying to address it adequately, but there is more to be done given the commercial value of data.
“it can be difficult to utilise, visualise and export this data so it can be used to make decisions or mitigate business risks”
Fieldfisher’s Robert Shooter – Regarding challenges, I’d say that the market is still not ready to give up on people. Presently, the AI accuracy rate is considered to be somewhere between 70% and 95% (depending on the package).
If we take the high end of 95%, which may well be a better rate than the human equivalent, there is a 5% chance of inaccuracy. And when that happens – i.e. when the machine gets it wrong – you can’t speak to the machine to fix it because it’s a machine!
However, when humans – in this case lawyers – get something wrong, clients can call you up, discuss it with you, and ultimately agree a way forward. In addition, if you look at the lower end of the accuracy scale – 70% – the vendors describe the offering as ‘a first pass’: humans need to carry out the second pass. In that case, what is the purpose of implementing systems whose work will need to be checked by humans anyway? You might as well just give the work to humans in the first place. Things will change, for sure. I’m not in denial. I just think we’re not as advanced as the market would have us believe.
So there you go, some bright ideas to mull over. Overall there seems to be a lot of enthusiasm and creativity coming from the corridors of law firms as far as AI tech goes. That said, there are clearly some frustrations with some of the systems in the market and there remains a need for improvement.
Here are some key take-aways from the AI wish list:
1. Leverage That Data
A number of the firms highlighted the need for change regarding making better use of data in relation to AI tools. This had various aspects, from helping to generate more useful data that lawyers can then act upon, to just making sure you have clean and useable data in the first place, to more easily moving output data through the business.
2. Accuracy: No lie
Issues around accuracy are another important factor, and although current systems can provide a ‘fairly high’ degree of accuracy (70-95% depending on the scenario), they are still not as advanced as some firms want.
3. Demystification and Great User Experience
The tech bosses want sound and sensible systems that can facilitate better working procedures. The experts above identified an ongoing need for improvement of user interfaces so lawyers can easily grasp them and use the tech to benefit clients.
4. Better Out Of The Box
Some of the experts above were still reluctant to take on training themselves, and would prefer systems that could work out of the box on more than the most basic document types. That said, some also seemed content with the current approaches to training for bespoke matters.
5. Connected Solutions
Systems need to talk to each other, so that all this tech can deliver real value across the business, with multiple applications working together in a connected offering.
(By Irene Madongo)