New York Bulletin – AI Workshop + Do We Need an FDA for Algorithms?

Morning from New York, here’s the first of several New York Bulletins from the Legal Tech show.

Artificial Lawyer got into the AI Workshop on Monday afternoon and went straight into a series of excellent product demos. Here are some of the key points.

Laura van Wyngaarden, co-founder of legal AI doc review company Diligen, was first up. Diligen has been around for a while and perhaps doesn’t get the credit it deserves. Van Wyngaarden explained how developed their system now is, especially in terms of the number of pre-trained provisions it covers across a range of different legal documents, and the clauses within them.

She also explained that the system is very scalable, handling everything from reviews of hundreds of thousands of documents down to law firms that may only want to review 50 per month. This is possible because of a simple sliding pricing scale. Interestingly, she added that they’re happy for a law firm or company to use the system for a couple of months and then stop when they don’t need it, which would be a big help to smaller law firms that may only have large deals every few months.

Van Wyngaarden also made the point that smaller law firms gain additional capacity from using AI systems, enabling them to handle larger matters than they could take on before.

Next was Rick Merrill of California-based Gavelytics, which is, simply put, a legal research tool that helps lawyers figure out what judges will do, among other things. They’re expanding into other parts of the US, such as Florida. He also pointed out that the platform provides lawyers with speed metrics on how long cases will usually take. Fundamentally this is a predictive tool, and one that litigators may see as a useful business-winning device, i.e. they can show a potential client just how much they already know about how a certain case may go.

Merrill noted that there is a lot of competition for this type of application now, but stressed that his company has better state court data than most of the big legal research companies, which tend to focus on federal cases.

Then we had James Lee of Legal Mation, which, simply put, reads a complaint and generates high-quality first drafts of answers and initial written discovery in about two minutes, tasks that normally take attorneys or paralegals hours to complete in the early stages of litigation.

It’s fair to say that the audience was really struck by this. They have likely seen review tools before, but an automated service that actually produces draft answers… well… that was something else. To Lee’s credit, even the other founders seemed impressed.

He added that this type of tool could also be used by government agencies and police departments: ‘Which police like to create reports all day?’ Indeed, there is huge potential for this one.

And finally there was Emily Foges of the UK’s Luminance. Artificial Lawyer has seen this presentation many times (or versions of it, and it’s a good one), but in a nutshell Foges’ key point is that ‘there is no training involved’. How so…? Well, from Luminance’s point of view the education of the system happens in real time as the lawyer interacts with it. Luminance provides the lawyer with a tonne of information about their documents up front, as soon as it’s switched on, and then the lawyer helps the system learn as they work through the documents.

Isn’t this just training by another name…? Hmmm… well, I think that depends on how you see things.

And then there was Jay Lieb of NexLP, which helps with investigations of company communications. It all felt a little bit Big Brother. In fact it was Big Brother, but a corporate version of it. Artificial Lawyer asked the in-house lawyer sitting nearby what he thought. He replied: ‘It’s on company devices, it’s fine.’

Hmmm… it does seem worrying to AL that we’re so happy to allow a company to record and continually analyse employee activity. After all, we spend most of our lives at work, and when we get home we have Alexa et al as well… not sure about all of this at all.

OK, now for the last bit: there was then a panel featuring Jake Heller of Casetext, the grande dame of e-discovery Maura Grossman, Lee Tiedrich of Covington & Burling, and yours truly, with Thomas Hamilton of Ross as chair.

We covered a lot of ground, and Artificial Lawyer couldn’t really take notes while speaking. But there were two key debate subjects that seemed to get people excited.

On the classic subject of ethics and autonomous vehicles, Grossman went beyond the usual themes and noted that there are clear cultural differences in how companies may apply ethics to AI and algorithms.

I.e. a US car company’s autonomous vehicle may take a different approach to altruism and selflessness in the event of an accident than, for example, a Swedish car company’s. One system may prioritise protecting the car’s passengers above all else, while a system from another country may be more prone to protect pedestrians and the public in general.

This created the fascinating prospect, Artificial Lawyer suggested, of autonomous vehicles having ‘morality scores’ on a scale that told the buyer how they would behave.

And this in turn led to a great discussion on algorithmic transparency and regulation. Artificial Lawyer suggested that perhaps we need an FDA for algorithms: a government agency certifying that a program is not biased or dangerous, and that it conforms with local rules and moral expectations. Such a body could even dish out approved algorithms to those who wanted them.

This latter point on dishing out algorithms was not popular with the panel, who preferred a free enterprise, the-market-will-decide approach, though they didn’t rule out the idea of a body checking algorithms. Heller stressed that algorithms that work well will be used, and those that don’t will be dropped by their clients.

I.e. if company A is selling an AI system that promises to provide insights into case law and it doesn’t work well, or misses things, then the lawyers will stop using it. The opposite would also be true.

The problem there is that commercial use is one thing; public safety is another.

Any road, that’s all for now. It’s now Tuesday, 8AM, and the event is about to start again. More reports tomorrow.
