
Earlier this month the Southern California Association of Law Libraries (SCALL) held an ‘AI Smackdown’ at its 2025 SCALL Institute, comparing vLex, Westlaw, and LexisNexis solutions.
As Ed Walters, CSO at vLex, explained to Artificial Lawyer: ‘Three experienced law librarians ran three different exercises with Vincent AI from vLex, Lexis+ AI, and Westlaw AI. The panel was:
- Mark Gediman, Alston & Bird
- Cindy Guyer, O’Melveny & Myers
- Tanya Livshits, DLA Piper
‘For the three exercises, all of the tools did pretty well, but their conclusion was that one of the three found deeper answers, had more targeted sources, more control, and more transparency (their conclusions, not mine) [See chart below]. Vincent performed very well in the evaluation.’
The questions were:
‘1. What is the time frame to seek certification of an interlocutory appeal from the district court in the Ninth Circuit?
2. Is there a private right of action under the California Reproductive Loss Leave for Employees Act?
3. What is the standard for appealing class certification? (California).’
And the Evaluation Factors covered:
- accuracy of answer
- depth of answer
- cited primary sources
- cited secondary sources
- format of answer
- iterative options

vLex was marked as having the ‘most depth’ in its answers, as well as coming second for cited primary and secondary sources. And it’s nice to see that all three tools were marked positively for accuracy.
Artificial Lawyer then asked Walters some more questions about this performance and the bigger picture for vLex.
Specifically, where did vLex do better than the other two?
Here is the panel’s conclusion from its evaluation:
‘The panelists thought that all three tools did a pretty good job of identifying the right answers. But for the second question, only Vincent went deeper and found a recently enacted regulation, passed after the 2024 statute in question, that changed the answer completely.
Panelist Mark Gediman said that Vincent “managed to find relevancy in a slightly broader frame of reference. And I thought that was pretty, pretty cool. I wouldn’t have found this on my own, quite honestly, and I like to think I’m a pretty decent researcher”.
In addition, for the third question, the panel liked that Vincent is transparent about sources and gives users control over which sources are used in the final product; they saw both features as unique. They liked the transparency of listing primary and secondary authorities, with summaries of each and the specific language from the underlying case or statute. The panel also liked the international reach of Vincent, which isn’t something you can find anywhere else, and the confidence scores for each case, statute, regulation, or article.
Last, the panelists said they liked the user control in Vincent: they could modify the list of sources used to create an answer, excluding results that they didn’t think were relevant.’
Why do you think vLex did better in those areas?
Our team designed Vincent to be transparent and authoritative, and we wanted users to have a lot of control over their work using AI. We tested Vincent for a long time before we released it; we definitely weren’t first to market here.
Law librarians are responsible, conscientious users of AI tools, so I can see why these values in Vincent would resonate with them. Our message to users is really in line with their evaluations: AI is a tool for professionals to use, and it doesn’t replace the independent human judgment that’s critical to our work. That means that the tools should be optimized to empower the people who use them, and that’s something that our team has done a great job of in Vincent.
What does this success mean to you and vLex?
As the market for legal AI becomes more mature, we’re seeing law firms moving on from subscribing out of curiosity. Now, with lots of choices in the market, they want to know what the best tools are. We’ll see a few of these benchmark surveys in the next year, but we love that Vincent performed so well in the first one.
I will add here that we’re huge fans of the law library community, so to have their validation here means a lot to us.
More broadly, what’s next for Vincent?
We’re just about to announce the Winter ’25 release of Vincent. I won’t get too far ahead of our team, but I’m looking at some of the new capabilities in testing, and they’re really going to surprise people.
We’re leading the industry in innovation, shipping more workflow tools and coverage of new countries well ahead of the market. That allows us to announce software when we release it, instead of announcing a roadmap or a PowerPoint slide of what it might look like. I think that really reflects the vLex team: we like to show, not tell. Stay tuned for the Winter ’25 release!
— Thanks Ed! And AL has to say that running live, comparative ‘AI smackdowns’ is a great idea, one that yields some very useful insights into a group of similar tools. Let’s have more of these.