Artificial Lawyer recently caught up with Kripa Rajshekhar, founder of Chicago-based Metonymy Labs, and Selvyn Seidel, founder and CEO of New York-based litigation investment advisor Fulbrook Capital Management, who are now working together on applying AI technology to litigation funding, in what appears to be a world first.
We discussed how this alliance came about and what they will do together. For those of you who really want to understand how this works, we also got into some of the detail of what Kripa’s AI does in relation to patent diligence.
First of all Selvyn, can you tell the readers briefly about what Fulbrook does and why you are interested in AI?
Fulbrook is an expert advisor in dispute finance, often referred to as third party litigation funding or finance. The industry monetises meritorious commercial claims by advancing capital to a claimant to pursue the claim, or otherwise to obtain capital for business or personal use, while the investor secures a return if, and only if, the claim succeeds in obtaining a recovery.
Our work is important because the industry is young, complex and fast-changing, so both claimants and investors turn to us for help.
A pivotal part of our work, and that of the industry, turns on the quality of the evaluation of a commercial claim. While we have comprehensive processes for this evaluation, we decided that analysis using the sophisticated technology found in AI was indispensable. We are lucky indeed to have formed an alliance with Kripa and his firm.
And, Kripa, can you also tell the readers what your focus is in terms of AI and also how you got together with Fulbrook?
When I was an M&A partner at Ernst & Young, working on due diligence for private equity and corporate clients, I learned first-hand that the tools of the trade were quite incapable of supporting the new complexity of information, and the consequent cognitive demands, being placed on practitioners.
[In 2014 Rajshekhar founded Metonymy Labs in Chicago.] Our proprietary natural language processing (NLP) tool-kit, Metolabs, provides AI-augmented diligence. Our capabilities are particularly useful in areas like patent finance and litigation that are dominated by qualitative, text-heavy sources of information, areas in which Fulbrook is active.
Our relationship with Fulbrook started when Selvyn and I met at Fin(Legal)Tech in Chicago, where we had both been invited to speak. It was a conference sponsored by Dentons and Kent Law on the integration of finance, law and technology. It was clear in our very first meeting that we needed to find a way to work together; the synergies were obvious. The rest, as they say, is history.
Also, Kripa, you mentioned that you are/were working with people from the IBM Watson team that developed the system that famously won ‘Jeopardy’ on American TV. Who are they and are they involved in this project?
We’ve been collaborating on R&D with Wlodek Zadrozny’s text analytics lab at the University of North Carolina. Wlodek was previously the leader of the text resources group on the Jeopardy-winning IBM Watson team.
Together, over the past year or so, we’ve developed domain-specific tools in the general area of NLP and applied computational linguistics. The work has resulted in novel models and methods for semantic representation and understanding, effective at scale, particularly in the area of patent diligence.
For example, we filed a patent with Wlodek and his team for a novel data-mining-based semantic analysis methodology (published in January 2017, App. Number: 62/188,578). We’ve also published a couple of papers together to make our foundational work available to the broader research community.
I should add that Fulbrook and my firm have also brought in Lex Machina to further support the screening and diligence workflow. Their deep database of patent litigation helps us better estimate statistical patterns in attorney representation, timing, case outcomes and venue.
Now, let’s get into the heart of the matter: using AI and NLP to gather deeper insight into legal data that will allow Fulbrook and its clients to make better decisions on which cases to fund and to predict the outcomes of those cases. You mentioned that the focus for now will be on US patents. Why this area and not, for example, more general commercial/civil claims?
The short answer is, it is where we have demonstrated results. Late last year, a Metolabs-led team was the first to benchmark the accuracy of predicting invalidating prior art at the PTAB (the Patent Trial and Appeal Board), a specialised tribunal instituted by the United States Congress as part of the America Invents Act.
This effort was prompted by recent progress in incorporating word order and semantics, which has yielded promising results in computational text classification and analysis. This development, and the availability of a large number of legal rulings from the PTAB, motivated us to revisit possibilities for practical, computational models of legal relevance, starting with this narrow and approachable niche of jurisprudence.
How small a sample of cases (if not patent cases, where there is a lot of public data to train the system on) would you feel confident using AI on to make a case prediction? I.e. if a case is relatively novel, how can you train the AI to see any patterns on which to base a prediction?
I think we need to distinguish between understanding and prediction.
When considering an investment, one seeks to understand all of the risks and their drivers, not just predict a probability. Given the nuance and complexity implicit in legal judgement, we are sceptical that a one-size-fits-all ‘magic bullet’ AI solution will adequately model outcomes in the field.
However, we believe there is a notable absence of practical models for legal relevance, such as ways to describe how two documents are legally relevant to each other. Toward this end, we see our work as part of a broader undertaking: the development of practical models and theories of legal relevance, for instance in the area of patent law. This understanding allows us to deliver improved results in examining areas of risk based on prior case examples, which, in the area of US patent law, corresponds to thousands of rulings each year.
Staying with US patents, what sorts of data/patterns will you be specifically looking for and why are you looking for those types of patterns in the patent data? What do you hope it will reveal?
Ultimately, we are attempting to understand modes of human judgment in a narrow area of jurisprudence. What makes a court ascertain the legal relevance of one patent to another, say in ruling it to be prior art and hence invalidating a claim under judgment?
We’ve been able to improve patent text representation through pre-processing steps, word embeddings of patent-specific key words, and distinct sub-sector-specific training. In a sense, we model relatedness in more granular ways, down from sub-sector-specific to perhaps patent-type-specific (device, method, apparatus) modelling of relevance. These proprietary forms of representing meaning allow us to match relevant documents better than others can.
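[To give a flavour of the general technique, here is a minimal sketch of training sub-sector-specific word embeddings on patent text, in the spirit of what Rajshekhar describes; it is not Metolabs’ proprietary pipeline, and the corpus file, parameters and vocabulary below are hypothetical.]

```python
# Minimal sketch: sub-sector-specific word embeddings for patent text.
# Not Metolabs' pipeline; the corpus path and parameters are hypothetical.
from gensim.models import Word2Vec
from gensim.utils import simple_preprocess

# Hypothetical corpus: one patent abstract or claim per line, already
# filtered to a single sub-sector (e.g. metal forming).
with open("metal_forming_patents.txt") as f:
    sentences = [simple_preprocess(line) for line in f]

# Training on the sub-sector corpus gives patent-specific vocabulary
# ('slurrying', 'foundry', ...) domain-tuned vectors.
model = Word2Vec(sentences, vector_size=200, window=5, min_count=5, workers=4)

# Terms that behave similarly within this sub-sector end up close together.
print(model.wv.most_similar("foundry", topn=5))
```

Training a separate model per sub-sector, or per patent type, is one plausible way to make ‘relatedness’ more granular in the sense described above.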
What challenges does the use of NLP and machine learning meet when working with patent filings? I imagine that one challenge could be that different people use quite different language to describe similar products in their filings?
It is a real challenge. We are missing solid language models for legal relevance. For example, fewer than 15% of the patents judged to be relevant according to a court ruling correspond to the best semantic match when using simple shared-word or topic associations.
For instance, two patents responsive to the topic ‘foundry’ that both deal with metal forming may not be highly related to each other in the sense of contributing to patent invalidation, even though there is a strong semantic match.
However, in another test case, a patent containing the terms ‘slurrying’ and ‘foundry’, which are related but do not form a strong semantic conceptual match, does indeed invalidate one of the patents. This is the drawback that our innovations seek to address.
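[To make the distinction concrete: a plain shared-word score, of the kind criticised here, ranks the topically similar patent far above the one that shares only related terms. The sketch below uses TF-IDF cosine similarity as a stand-in for such matching; all snippet texts are invented.]

```python
# Toy illustration of simple shared-word (TF-IDF) matching, the baseline
# whose limits are discussed above. All texts are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

query = "A foundry process for metal forming using sand moulds."
candidates = [
    "A foundry method for metal forming with improved sand moulds.",  # strong topical match
    "A slurrying apparatus for preparing refractory suspensions.",    # legally relevant, few shared words
]

tfidf = TfidfVectorizer().fit(candidates + [query])
scores = cosine_similarity(tfidf.transform([query]), tfidf.transform(candidates))[0]

# The topical match scores much higher than the 'slurrying' patent,
# even though the latter may be the one that matters legally.
for text, score in zip(candidates, scores):
    print(f"{score:.2f}  {text}")
```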
You mentioned that your system is ‘four times more accurate than the next best one’ for finding out whether a patent can be successfully challenged. I understand you measured this by using a test set of public data to benchmark the accuracy level. Please can you explain a bit more about how the benchmarking process works, and perhaps which was the other system you compared yours to?
Speaking generally, we’ve used an industry-standard information retrieval (i.e. document search) task to benchmark the performance of our technology.
Our tests consist of using different techniques to retrieve the patents cited in 8,000 PTAB decisions, based on queries built from the patent whose validity is being questioned. Such queries typically consist of combinations of the patent’s abstract, its title, or its first claim.
We are able to identify the relevant patents in 20% of the trials, compared with the 5% delivered by state-of-the-art systems.
As baselines for our evaluations we used both ‘bag-of-words’ query representations (a standard industry benchmark) and a best-in-class semantic search implemented using conceptual expansion of query words.
As a frame of reference, the query expansion system outperformed algorithms developed by Google on word-relatedness tests. An outline of the methodology on a smaller dataset was described in a paper presented at the American Society for Engineering Management conference in October 2016. We’ve also submitted a paper to the upcoming summer conference on AI and Law (ICAIL ‘17 at King’s College London), where we describe our methodology and results in greater detail.
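[For readers curious what such a benchmark looks like in practice, here is a minimal, hypothetical sketch of the evaluation loop: the `search` function and decision records are stand-ins, not a real API, and the metric is simply the fraction of trials in which a cited patent appears in the top results.]

```python
# Hypothetical sketch of the retrieval benchmark described above: for each
# PTAB decision, build a query from the challenged patent and check whether
# the cited (ground-truth) prior art appears in the top-k results.

def hit_rate(decisions, search, k=10):
    """Fraction of trials in which a cited patent is retrieved in the top k."""
    hits = 0
    for d in decisions:
        # Queries combine the challenged patent's title, abstract or first claim.
        query = " ".join([d["title"], d["abstract"], d["first_claim"]])
        retrieved = search(query, top_k=k)  # ranked patent IDs (stand-in API)
        if set(retrieved) & set(d["cited_prior_art"]):
            hits += 1
    return hits / len(decisions)

# Comparing two systems on the same decisions yields numbers of the kind
# quoted above (e.g. roughly 0.20 for one system vs 0.05 for a baseline):
# print(hit_rate(ptab_decisions, metolabs_search))
# print(hit_rate(ptab_decisions, bag_of_words_search))
```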
Once you feel that you have mastered the patent area in terms of prediction and analysis, will you be applying this technology to other areas, and if so, which would be the best-suited types of cases?
We are at the very start of this journey; there is a great deal of opportunity ahead. We believe future work in this area would include two major streams: (i) the hypothesising and testing of semantic representations that allow for automated classification of historically observed modes of legal relevance in other domain-specific language and large volumes of discoverable documentary evidence (e.g. arbitration, insurance); and (ii) input from legal theorists and practitioners on the accurate classification of modes of relevance judgements (e.g. enumeration of the types and forms of arguments typically used in support of legal relevance judgements).
And, to conclude, this one is particularly for Selvyn: it’s great that Fulbrook has embraced legal AI systems. How far do you think AI will penetrate the legal market, both in the litigation funding sector and in the profession as a whole?
The early market response to Fulbrook’s alliance with Metonymy Labs has been positive and indicates that the industry is interested to learn more about this area.
Over time, I am confident AI will penetrate Dispute Finance through and through. Claimants and investors will demand it. So will lawyers. In fact, the litigation and arbitration industry itself needs AI systems, and its use of AI will also push use by the dispute finance world. Many other uses of it will be made in many different industries.
I have little doubt that analytical tools and technologies such as AI, dealing with language and conceptual patterns, coupled with experience in litigation and commercial dispute finance, will enhance our ability to evaluate and project results from disputes. Hopefully it will be a valuable added tool for resolving disputes.