In a startling intervention that seeks to limit the emerging litigation analytics and prediction sector, the French Government has banned the publication of statistical information about judges’ decisions – with a five-year prison sentence set as the maximum punishment for anyone who breaks the new law.
Owners of legal tech companies focused on litigation analytics are the most likely to suffer from this new measure.
The new law, encoded in Article 33 of the Justice Reform Act, is aimed at preventing anyone – but especially legal tech companies focused on litigation prediction and analytics – from publicly revealing the pattern of judges’ behaviour in relation to court decisions.
A key passage of the new law states:
‘The identity data of magistrates and members of the judiciary cannot be reused with the purpose or effect of evaluating, analysing, comparing or predicting their actual or alleged professional practices.’ *
As far as Artificial Lawyer understands, this is the very first example of such a ban anywhere in the world.
Insiders in France told Artificial Lawyer that the new law is a direct result of an earlier effort to make all case law easily accessible to the general public, which was seen at the time as improving access to justice and a big step forward for transparency in the justice sector.
However, judges in France had not reckoned on NLP and machine learning companies taking the public data and using it to model how certain judges behave in relation to particular types of legal matter or argument, or how they compare to other judges.
In short, they didn’t like how the pattern of their decisions – now relatively easy to model – was potentially open for all to see.
Unlike in the US and the UK, where judges appear to have accepted the fait accompli of legal AI companies analysing their decisions in extreme detail and then creating models as to how they may behave in the future, French judges have decided to stamp it out.
Various reasons for this move have been shared on the Paris legal tech grapevine, ranging from the general need for anonymity, to the fear among judges that their decisions may reveal too great a variance from expected Civil Law norms.
One legal tech expert in France, who wished to remain anonymous, told Artificial Lawyer: ‘In the past few years there has been a growing debate in France about whether the names of judges should be removed from the decisions when those decisions are published online. The proponents of this view obtained this [new law] as a compromise from the Government, i.e. that judges’ names shouldn’t be redacted (with some exceptions to be determined) but that they cannot be used for statistical purposes.’
Whatever the reason, the law is now in effect and legal tech experts in Paris have told Artificial Lawyer that, as far as they interpret the regulations, anyone breaking the new rule can face up to five years in prison – which has to be the harshest example of legal tech regulation on the planet right now.
That said, French case law publishers, and AI litigation prediction companies, such as Prédictice, appear to be ‘doing OK’ without this specific information being made available. This is perhaps because even if you take the judges out of the equation, there is still enough information remaining in the rest of the case law material to be of use.
Moreover, it’s unclear whether a law firm, if asked to by a client, could – manually, or using an NLP system – collect data on a judge’s behaviour over many previous cases and create a statistical model for use by that client, as long as they didn’t then publish this to any third party. That said, it’s not clear this would be OK either. And with five years in prison hanging over your head, would anyone want to take the risk?
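To make concrete what sort of ‘statistical model’ is at stake here, a minimal sketch is below. It simply computes, per judge, the fraction of claims granted from a list of past rulings – the most basic descriptive analytics of the kind the new law targets. All judge names and data are entirely invented for illustration; this is not based on any real dataset or product.

```python
from collections import defaultdict

def judge_outcome_rates(rulings):
    """Compute the fraction of claims each judge granted.

    `rulings` is a list of (judge_name, outcome) pairs, where
    outcome is either "granted" or "denied".
    """
    totals = defaultdict(int)
    granted = defaultdict(int)
    for judge, outcome in rulings:
        totals[judge] += 1
        if outcome == "granted":
            granted[judge] += 1
    # Per-judge grant rate: granted count divided by total rulings.
    return {judge: granted[judge] / totals[judge] for judge in totals}

# Entirely hypothetical sample data.
sample = [
    ("Judge A", "granted"), ("Judge A", "denied"),
    ("Judge A", "granted"), ("Judge B", "denied"),
    ("Judge B", "denied"),
]
rates = judge_outcome_rates(sample)
```

Even something this crude – a frequency count over public decisions – would, on the face of Article 33, appear to fall under ‘evaluating, analysing, comparing or predicting’ a judge’s professional practice.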
But, the point remains: a government and its justice system have decided to make it a crime for information about how its judges think about certain legal issues to be revealed in terms of statistical and comparative analysis.
Some of the French legal experts Artificial Lawyer talked to this week asked what this site’s perspective was. Well, if you really want to know, it’s this:
- If a case is already in the public domain, then anyone who wants to should have a right to conduct statistical analysis upon the data stemming from that in order to show or reveal anything they wish to. After all, how can a society dictate how its citizens are allowed to use data and interpret it – if that data is already placed in public view, by a public body, such as a court?
- This is like giving someone access to a public library, but banning them from reading certain books that are sitting right there on the shelf for all to see. This is a sort of coercive censorship, but of the most bizarre kind – as it’s the censorship of justice’s own output.
Clearly there are limits to the information tech companies should be allowed to gather on private individuals. But the decisions of judges in open court do seem to be ‘public data’ and hence de facto beyond any censorship or control.
However, as one contact in Paris added, the old law against ‘Scandalising the Judiciary’ was only recently abolished in England & Wales, which shows that judges over here have not always liked to be scrutinised too closely either.
Clearly this is a hot potato – what do you think?
Is it right to wall off the decisions of named judges from statistical analysis?
Part of the French text covering the new law is below:
‘Les données d’identité des magistrats et des membres du greffe ne peuvent faire l’objet d’une réutilisation ayant pour objet ou pour effet d’évaluer, d’analyser, de comparer ou de prédire leurs pratiques professionnelles réelles ou supposées.
La violation de cette interdiction est punie des peines prévues aux articles 226-18, 226-24 et 226-31 du code pénal, sans préjudice des mesures et sanctions prévues par la loi n° 78-17 du 6 janvier 1978 relative à l’informatique, aux fichiers et aux libertés.’

[Translation: ‘Violation of this prohibition is punishable by the penalties provided for in Articles 226-18, 226-24 and 226-31 of the Penal Code, without prejudice to the measures and sanctions provided for by Law No. 78-17 of 6 January 1978 on information technology, files and liberties.’]
(* Translated version above, via Google.)
I suspect legal tech companies will break the law with impunity. The beauty of most AI is that it’s a huge black box in that you can’t often tie results to specific inputs. That means no one will know if a judge’s biases are used to inform an AI output. Taking your analogy, this law isn’t banning people from reading public library books. Rather, it’s banning them from comprehending what they’ve read. But how would anyone know what was comprehended?
If you hire a top-tier law firm to defend you in France, I guarantee you they have lawyers and support staff whose entire job it is to analyze judge patterns and behavior, what technicalities to employ, what province to try a case in, etc. This type of “meta-lawyering” existed long before AI was on the horizon and will always be available to anyone with the money to pay for it. Because these firms can charge sky-high rates for this white glove service, they have zero incentive to share the information with anyone else.
All this new law does is ban NORMAL PEOPLE – people not in the top 5% richest – from having access to data about judges. It doesn’t actually protect judges, or maintain the law of the land, or any other nonsense – it keeps the middle class firmly where they belong, and the judges secure in the knowledge that only the people with a vested interest in the status quo — wealthy lawyers and their clients — will know what’s really going on.
This is exactly correct. Think about the ink spilled over analysis of SCOTUS decisions as an extreme example — even non-lawyers with an interest in law and policy probably have some notions about the jurisprudential trends among the various active justices. This law would presumably make it illegal to publish a news article in France saying something like, “Justice Ginsburg tends to strongly support liberal policy outcomes, particularly those having to do with equal rights for women, although she’s usually fairly sympathetic to business interests such as strong IP protection.”
I’m kind of shocked that they were considering redacting judges’ names from decisions but instead decided on this “compromise”. I guess the concern was that the litigants would of course know the name of their trial judge, which would potentially create a weird side-market for information linking decisions to judges, and perhaps this “no analytics allowed” rule was seen as less bizarre and convoluted than that result. Still… “you’re not allowed to look at me” is a very weird thing for a judiciary to announce. Perhaps judges in France will return to wearing hoods like the executioners of the revolution? Honestly that would be a better system than this weird gag rule.
I thought that predictability of legal process was a key principle?
And if judges’ decisions are at variance with “expected Civil Law norms” (!) surely they should welcome the feedback, not jail people who point it out?
There is an obvious risk that some large law firms will do it anyway, while chilling a public discourse that is potentially beneficial to everybody.
This is fascinating, in part because it is unenforceable. If someone asks some other person what tack to take in front of a particular judge, who’s to say the answer was arrived at via statistical modeling, or plain old fashioned informed opinion? Or by bribing the judge? (No, forget I mentioned that. That NEVER happens!)
I think that 100 years from now, this law and others like it will be viewed as merely primitive and childish attempts to hold back the future. Hopefully, there won’t be too many prosecutions for this, before that realization is reached.
The annexed report offers further surprises. For example, it says that “The ministry’s decision-making information system will evolve to provide effective tools for analysis and management of the activity at the national and local levels. Users will have to be able to access on-line practical information, enriching what is already on the Justice.fr site (accessibility of jurisdictions, pedagogy of procedures, simulators …), but also, for example, to indicators of procedural delay before the court they intend to refer to or to indicative scales or indicative benchmark”. My view is featured here (of course quoting Artificial Lawyer, where I originally learned of this outstanding news): https://www.linkedin.com/pulse/laws-choke-innovation-chickens-judge-analytics-jos%C3%A9-mar%C3%ADa-de-la-jara/
You have to understand that the French legal system is rather different from the one in the US and UK.
Moreover, statistics are rather crude instruments with well-known biases, and, in this case, at least in France, they would give inaccurate views which would be harmful in the long run.
Do their decisions affect people’s lives? Are those decisions (generally) public?
Then they should be subject to analysis. Lawyers already do so; they just don’t (generally) use machines to do so. This law is akin to saying you can take notes on a case with a pencil, but not using a computer.
This is perhaps akin to the principle that you can’t take photographs in court, but pencil sketches are permissible.
Of course they should be subject to analysis. And they are, all the time.
However, although some lawyers seem to believe that computing statistics is indeed an analysis, scientists know this is not the case.
What is an example of credible experiment in which a hypothesis was accepted without analyzing experimental data using statistics? Or are you perhaps implying that NLP research only involves statistical analysis and not experimentation? Computational linguists know this is not the case.
Could you point me to a place where this is explained in a bit more detail? Thanks.
This is a clear violation of freedom of expression, academic freedom, and plain common sense. Analysing individual judicial behaviour has been done with doctrinal research for centuries. NLP is just improving on it.
Might judges read their own (possibly unrecognized) tendencies and intentionally throw a curveball? Or might that also be predicted? I recall YEARS ago seeing that sub captains had right or left tendencies when leaving their Seattle area harbor for the Pacific. To overcome this I THINK the navy imposed some random requirements. Do you recall Hunt for Red October, where Sean Connery predicted the Russian sub’s next move based on his knowledge of the captain? Later Connery admitted that he knew he had a 50-50 chance of being right.