The IEEE, the global standards-setting body, is to put in place benchmarks for legal AI applications, seeking to establish order in what some see as a ‘Wild West’ market for machine learning tools. This will include a certification system. The first of these standards is set to be rolled out as early as next year.
Nicolas Economou, chair of the Law Committee of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, told Artificial Lawyer that the organisation is currently developing the standards, including the scientific evidence to underpin them, in order to determine best practice for the industry.
The moves could create significant demands on legal AI businesses, and on other types of AI companies, to adhere to the standards and prove they are in compliance in order to retain client trust. They could also mean that law firms wielding such tools feel compelled to follow the new benchmarks.
Under the plans, law firms would be able to check that tech vendors approaching them with products can prove that their services adhere to the IEEE’s AI product standards.
A concern for the likes of the globally influential IEEE is that it is ‘very hard for law firms to distinguish sound innovation from snake oil’ because there are no standards, said Economou.
‘They are buying something that is truly a scientific, technological product. But how do you do due diligence [on such a product]?
‘The reason we trust medicine is that before a drug is put on the market there are clinical trials that demonstrate that the benefits of the drug far exceed the risk, so we have a formal, normative approach on that front,’ he explained.
‘In AI, this approach does not exist – anyone who puts a product out there and markets it to lawyers, law firms, or courts can say, for example, ‘my AI works perfectly to assess the risk that this person will or will not commit a crime if they are released from jail.’’
‘All we have is marketing claims. There is no standard to determine the extent to which those claims are true. For drugs we have that, but we don’t have that for AI in the law, and that is intolerable.’
Economou cited electronic disclosure as an example.
‘There is a lot of machine learning used in electronic disclosure, but its accuracy is generally unproven. Technical standards would allow a law firm, for example, to distinguish advertisements from truth. You’d be able to say ‘well, I’m only going to buy tools, technologies, and services that have passed that standard, because then I know that my electronic disclosure is actually going to be accurate’,’ he added.
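To make the accuracy question concrete, below is a minimal, purely illustrative sketch (in Python) of the kind of test such a standard might mandate for machine learning in electronic disclosure: scoring a tool’s relevance predictions against a human-reviewed control set and requiring minimum recall and precision. The function, data, and thresholds here are hypothetical assumptions for illustration only; they do not reflect any actual IEEE standard.

```python
# Hypothetical sketch: check an e-disclosure tool's relevance predictions
# against a human-reviewed control set, as a standard might require.
# Thresholds and data are illustrative assumptions, not an IEEE standard.

def evaluate_disclosure_tool(predictions, control_labels,
                             min_recall=0.75, min_precision=0.75):
    """Compare tool predictions (doc_id -> bool) with human labels."""
    tp = sum(1 for d, rel in control_labels.items() if rel and predictions.get(d))
    fp = sum(1 for d, rel in control_labels.items() if not rel and predictions.get(d))
    fn = sum(1 for d, rel in control_labels.items() if rel and not predictions.get(d))

    recall = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    passed = recall >= min_recall and precision >= min_precision
    return recall, precision, passed


# Illustrative control set: six documents reviewed by humans (True = relevant).
control = {"doc1": True, "doc2": True, "doc3": False,
           "doc4": True, "doc5": False, "doc6": False}
# The vendor tool's predictions for the same six documents.
tool_output = {"doc1": True, "doc2": True, "doc3": True,
               "doc4": False, "doc5": False, "doc6": False}

recall, precision, passed = evaluate_disclosure_tool(tool_output, control)
print(f"recall={recall:.2f} precision={precision:.2f} passed={passed}")
```

On this illustrative data the tool scores roughly 0.67 recall and 0.67 precision, falling short of the hypothetical 0.75 bar – exactly the kind of pass/fail evidence that, on Economou’s account, buyers currently lack.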
Legal Loopholes
He added that it is not only about the technology itself, but also about those who wield it – including lawyers.
‘Coming back to the analogy with medicine, even once you’ve put drugs out in the market, we still want to make sure that doctors are certified, because if they are not they could be prescribing the wrong drug. In the world of AI in the law, anyone has the competence, or is deemed to have the competence, to determine their own competence in using AI.
‘Today, effectively in every part of the legal system where AI is used, any lawyer, any techie, anybody can say ‘I am competent to use AI’. And that is no more tolerable in the law, in my view, than it would be in medicine not to have certifications to show that people are actually competent to use AI,’ he noted.
New Standards Ahead
The first of these standards is set to be rolled out as early as next year, Economou said.
‘We are developing sound standards and certifications that law firms, lawyers, and judges will be able to rely on and say ‘if I buy something with that standard, that stamp of approval, that certification, or I retain a lawyer who is certified as capable to use AI, I can trust it.’ That is what we are doing.’
The organisation hopes that the guidance could help simplify the procurement process for tech departments in law firms or corporates, by defining a standard they can expect industry to adhere to.
‘When they procure, or when they retain people who claim that they are capable of using AI, as part of the procurement [they can] say ‘what standards do you meet, what certification do you have?’ That would simplify [procurement] for corporate legal departments, for law firms, and for courts.
‘It’s just like we do in data security. You don’t use a data centre unless it’s ISO certified, for example. This makes procurement very easy and trustworthy. That’s the idea,’ he added.
In terms of roll-out, he explained: ‘This is not going to come out with 100 standards and certifications all in one year – it’s going to be progressive. There’s still a lot of work to do, but we are very actively involved in that, because we think it’s essential work for the justice system.
‘Again, I want to be a little cautious, because it may start with one set of certifications, for example on accountability and transparency – to show that something is transparent, or that accountability can be traced. Things like that will be our first efforts,’ he said.
To conclude, he added: ‘My hope is that law firms could then hire attorneys or others who are experts in the use of AI because they hold those certifications. So you don’t have to train them, you don’t have to make up your own classes and courses [because they are already certified against the IEEE standards, for example].’
Economou is also CEO of e-discovery services firm H5 and leads the Future Society’s Law and Society Initiative.
—
And, in other legal tech certification news…
Norton Rose Fulbright Launches MicroCert in Disruptive Technologies
Global law firm Norton Rose Fulbright has launched a MicroCert in Disruptive Technologies at its annual Tech Business & Law Conference in Silicon Valley.
The NRF MicroCert program aims to introduce participants to key technologies shaping business, as well as the legal, regulatory, and risk issues related to their deployment, and can be accessed through the NRF Institute, the firm’s premium knowledge site. The program, which is available at no cost to clients and key contacts of the firm, is designed to equip participants with practical legal skills relating to disruptive technologies.
To obtain a MicroCert in Disruptive Technologies, participants must complete five core modules and five elective modules. The five core modules cover: artificial intelligence (AI); blockchain/distributed ledger technology (DLT); autonomous vehicles; data; and the Internet of Things (IoT).
The 13 available elective modules delve into industry-specific topics including cryptocurrencies, InfraTech, payments, and digital health. Each module consists of a 30-minute on-demand webinar led by the firm’s lawyers from across all regions, followed by a multiple-choice assessment.
Nick Abrahams, Norton Rose Fulbright’s Global Head of Technology and Innovation and producer of the MicroCert program, commented: ‘Our new MicroCert is a powerful addition to the firm’s range of learning solutions and provides clients and key contacts with a series of easy-to-follow resources for all levels. Disruptive technologies are changing the world and the way it works.’
‘Given the pace of change in this space, it can be challenging to stay on top of the latest developments. We have built a wealth of global knowhow in the disruptive tech space and want to share this with broader industries, through knowledge transfer, to improve productivity and continue to drive innovation,’ he added.
—
By Irene Madongo
It will be very interesting to see what the standards will cover and how they will be used to ensure that best practice is in fact implemented in real-world scenarios.
For the most part, legal AI applications are essentially ‘black boxes’: the algorithms inside tend to be proprietary technology, and very little of the detail required to make a scientific assessment is actually available. This being the case, the benchmarks that emerge are likely to be quite high level. While they may provide some indication of the quality of a product’s AI engine, they will provide no indication of the value of the product when deployed within a firm’s technology and legal operations environments, where user experience and the ability to integrate with other products matter far more.
In recent weeks a number of leading AI vendors have announced Client Advisory Boards comprising senior legal professionals who are well-versed in the practical uses of legal AI. The key and essential role of these boards is to drive product design from a user perspective, which will certainly include a focus on the quality of AI-driven outputs created by these products. Neota Logic and Luminance are two such vendors; expect more to follow.
Also in recent weeks, the Law Society of Scotland has announced plans for an Accredited Legal Technologist certification. Here again, the focus is on practical knowledge and application of the technology.
A final example of how the industry is beginning to ‘tame the wild west’ of legal technology is the initiative being carried out at Reynen Court, which aims to provide potential buyers with pre-packaged due diligence on a number of legal technology solutions, including AI products. The due diligence covers product quality, usability, vendor stability, and a whole raft of information security assessments.
In essence, the right standards are always a good thing, but taking a holistic view of legal technology, the IEEE proposals are a small slice of what legal operations actually require.