The Legalia Project: How a Canadian University is Pioneering Legal AI’s Future

Canada is already on the map for its expertise in legal AI: it is home to Kira Systems, Diligen and Beagle, and the birthplace of ROSS Intelligence, to name a few, with much of this talent centred on Toronto.

But, it’s not all in Toronto. Artificial Lawyer caught up with Hugo Cyr, Dean of the Faculté de Science Politique et de Droit at the University of Québec à Montréal (UQAM) to hear what the Francophone part of Canada is doing. And it is doing some very interesting work on legal AI.

First, as you’ve probably noted, Cyr is an academic, rather than a traditional legal tech founder. Nevertheless, Cyr and his team are very much focused on developing an extremely comprehensive legal tech application that will encompass NLP, machine learning, data analytics and prediction.

The project is called Legalia – legal plus ‘I.A.’, the French for AI. Cyr explains what it’s all about and its multiple capabilities, some developed already, some in progress.

Hugo Cyr, Dean of the Faculté de Science Politique et de Droit at the University of Québec à Montréal (UQAM)

Cyr starts off by saying this is a project that combines labour law, linguistics and computer science – and a good dose of legal tech.

‘It’s for citizens who need legal information in the area of workplace harassment. It’s being designed so that citizens can ask questions and receive answers. This part will use NLP,’ Cyr says.
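For illustration only – Legalia's actual NLP pipeline has not been published – a retrieval-style Q&A system can be reduced to matching a user's question against a bank of vetted question–answer pairs. The questions, answers and matching method below are all invented for this sketch; a real system would use far more sophisticated language processing.

```python
import re

# Invented FAQ bank; the real system's content and NLP pipeline are not public.
FAQ = {
    "what counts as workplace harassment": "General information on the legal definition ...",
    "how do i file a harassment complaint": "General information on the complaint process ...",
}

def tokens(text):
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z']+", text.lower()))

def best_answer(question):
    """Return the answer whose stored question shares the most words
    with the user's question – a crude stand-in for real NLP matching."""
    key = max(FAQ, key=lambda stored: len(tokens(question) & tokens(stored)))
    return FAQ[key]

print(best_answer("How do I file a complaint about harassment?"))
```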

But why workplace harassment? That seems a bit of a narrow field.

‘Exactly,’ says Cyr. ‘The topic is narrow, so it’s smaller.’

In short, trying to build an ‘employment law information service for the whole of Quebec’ would have been a mammoth piece of work. Keeping it narrow gives the team a chance to make something effective.

Cyr then gives the obligatory ‘this is not legal advice’ caveat that all designers of legal bots and expert systems add.

Although, to Artificial Lawyer this has always seemed to be a grey area. When is information that changes what a person does in terms of their legal issues not advice? The problem is not with the legal tech companies, it’s with medieval rules that restrict legal services.

On that point Cyr is clear though: ‘Advice is telling a client what they should do. Information is telling someone what the law is.’

The city of Montreal in Canada, home to the Faculté de Science Politique et de Droit at the University of Québec à Montréal (UQAM).

Cyr explains that the idea of the system is that it will work hand in hand with lawyers – none of that ‘end of lawyers’ stuff here. And it’s a nice idea. Once a person has been through the Q&A expert system, been given information about their issues and identified the points that relate to them, the ‘case file’ as it stands can be sent on to a lawyer, with much of the initial work of defining the matter and gathering basic data already done.
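The hand-off described above amounts to a structured case file built up during the Q&A stage. As a rough illustration – the field names below are assumptions, not Legalia's actual schema – the record a lawyer receives might look something like this:

```python
from dataclasses import dataclass, field

# Illustrative shape of the hand-off record; all field names are invented.
@dataclass
class CaseFile:
    issue_type: str                                 # e.g. the category of matter identified
    facts: list[str] = field(default_factory=list)  # answers gathered during the Q&A
    relevant_rules: list[str] = field(default_factory=list)  # provisions flagged by the system
    ready_for_lawyer: bool = False

# A citizen completes the Q&A; the system assembles the file.
f = CaseFile(issue_type="psychological harassment")
f.facts.append("Repeated hostile remarks from a supervisor since March")
f.ready_for_lawyer = True
```

The point of the design is that the lawyer starts from a populated record rather than a blank intake form.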

So, two immediate benefits: the citizen gets useful info on what they can do and what legal remedies may apply without the need to pay for a lawyer; then the lawyer becomes more efficient as they get the file with some of the work already done, which also reduces the cost to the client.

The next step is to mine the accumulated data to make predictions that may help both the citizen and the lawyer – for example, the time taken to resolve a claim, or the average award for certain types of matter.

How that will work exactly is a work in progress, but this is an approach other legal AI and contract management systems with automated processes have also followed, i.e. do legal work on a digital platform, then study that work for patterns that help predict future recurring matters.
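At its simplest, the kind of descriptive statistics mentioned above – average resolution time and award by matter type – could be computed like this. The records and field names are hypothetical, purely to show the shape of the idea:

```python
from statistics import mean

# Hypothetical case records; the fields are illustrative, not Legalia's schema.
cases = [
    {"matter": "harassment", "days_to_resolve": 210, "award": 5000},
    {"matter": "harassment", "days_to_resolve": 180, "award": 7000},
    {"matter": "dismissal",  "days_to_resolve": 365, "award": 12000},
    {"matter": "harassment", "days_to_resolve": 240, "award": 6000},
]

def summarise(cases, matter):
    """Average resolution time and award for one matter type."""
    subset = [c for c in cases if c["matter"] == matter]
    return {
        "n": len(subset),
        "avg_days": mean(c["days_to_resolve"] for c in subset),
        "avg_award": mean(c["award"] for c in subset),
    }

print(summarise(cases, "harassment"))
# → {'n': 3, 'avg_days': 210, 'avg_award': 6000}
```

Any real predictive model would of course need far more data and far more care, but the platform approach makes this kind of aggregation possible at all.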

Cyr adds that judges can also make use of the tool so that they can consider what are the normal outcomes for certain types of matter. Sounds good, but now we stray into a controversial area: ethics and algorithms.

‘This will help to de-bias the data, as a judge may be biased versus certain claims. We want to avoid the reproduction of old biases, so we could flag this,’ he says.

Could showing judges some cold, hard data help to reduce extreme results and drive them closer to the statistical norm? Perhaps. At least they will, at some point in the future, have the data on hand. If they are deviating from the norm for a judgment then they’ll know they need a good reason for this.

It’s not a small challenge to show this, Cyr agrees. That said, he wants to make a serious attempt to see if it can be done. He’s going to look at clusters of data and what would normally be a ‘proper distribution’ of cases.
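One very simple way to flag deviation from a distribution of comparable cases – offered here as a sketch, not as Legalia's method – is to score a proposed outcome against the historical spread for similar matters:

```python
from statistics import mean, stdev

def flag_outlier(history, proposed, threshold=2.0):
    """Flag a proposed award whose z-score against historical awards
    for comparable cases exceeds the threshold (illustrative only)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return False  # no spread in the data, nothing to compare against
    z = (proposed - mu) / sigma
    return abs(z) > threshold

past_awards = [5000, 5500, 6000, 6500, 7000]  # hypothetical comparable cases
print(flag_outlier(past_awards, 6200))   # within the norm → False
print(flag_outlier(past_awards, 15000))  # far above the norm → True
```

A real system would need to define ‘comparable cases’ carefully – which is exactly the clustering problem Cyr describes.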

But, even the latter point will need sufficient data to provide that big picture. Nevertheless, Cyr sees this as vital work. And he is right. If we are going to have judges making use of algorithms to support their decisions, then we have to be able to show clearly how those algorithms work – and where bias may have crept in.

‘It’s important because algorithmic decisions could be huge in number,’ he points out.

For example, if an algorithm has gone wrong its automated penalties could reach thousands of people. And we have already seen the debacle over COMPAS, the US recidivism risk assessment system – something that Cyr has looked into in great detail.

Cyr also notes that as salaried academics they have no financial pressure to rush out an application for sale. They can take their time and try and get this right, something that commercial operations may not be able to do.

Finally, we move into more day to day issues. How far can this go? Cyr notes that one challenge is that both the US and Canada have a lot of legal jurisdictions. The law in Quebec is not the same as in Ontario, the province where Toronto is located.

And there are the language issues. Much Canadian legal documentation is in both English and French. But, translations are not always perfect, and where documents are written from scratch in each language, small differences in phrasing can let new interpretations slip in – which really doesn’t help with automated systems.

But, Cyr and his team are up for the challenge. And these challenges will have to be faced one way or another if the use of AI and automated systems is going to have the major impact the industry is hoping for.

In which case, thanks to Dean Cyr for working on this project. It’s a great example of what legal academia can do to be part of the legal AI world. We look forward to seeing how it all goes.