Legal AI Project Could Help Judges to Cut Crime

US data scientists have developed a new legal AI analysis that they believe could help judges to reduce crime by up to 25% in certain situations.

The work, led by Cornell University’s Jon Kleinberg and his team and published this month in a paper titled ‘Human Decisions and Machine Predictions’, looks at the decisions judges must make on whether a defendant awaiting their court date for an alleged crime should be released to go home before the trial starts. In each case the judge has to decide whether that person is likely to commit a crime during the period before their trial begins.

As the paper states: ‘[It] focuses on a decision that has important policy consequences [which is taken] millions of times each year.’

The scientists say: ‘A policy simulation [using their algorithms] shows crime can be reduced by up to 24.8% with no change in jailing rates, or jail populations can be reduced by 42.0% with no increase in crime rates. Moreover, we see reductions in all categories of crime, including violent ones.’ The data-driven system also appears to remove some of the inherent bias against ethnic minorities, who tend to experience a disproportionate level of imprisonment in the US.
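To make the idea of a policy simulation concrete, here is a minimal, hypothetical sketch in Python of the general approach: train a predictor on historical outcomes, rank a new cohort of defendants by predicted risk, and detain the same fraction that judges currently do, but chosen by predicted risk. This is only an illustration of the concept, not the paper’s actual methodology; the features, data and model below are placeholders.

```python
# Toy sketch of the general idea behind a release-policy simulation.
# NOT the paper's methodology; all features, labels and rates are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical historical data: features of past defendants who were released,
# and whether they were re-arrested before trial (1 = yes, 0 = no).
X = rng.normal(size=(n, 5))            # e.g. age, prior arrests, charge severity...
y = (rng.random(n) < 0.3).astype(int)  # placeholder outcome labels

model = GradientBoostingClassifier().fit(X, y)

# Score a new cohort and detain the same fraction the judges currently do,
# but pick that fraction by predicted risk rather than judge intuition.
X_new = rng.normal(size=(1_000, 5))
risk = model.predict_proba(X_new)[:, 1]

judge_detention_rate = 0.25                 # hypothetical current jailing rate
k = int(judge_detention_rate * len(risk))
detained = np.argsort(risk)[-k:]            # detain the k highest-risk defendants

# Under this policy, the expected pre-trial crime rate among the released is
# the mean predicted risk of everyone *not* detained -- the figure a simulation
# would set against the outcomes observed under the judges' own decisions.
released = np.setdiff1d(np.arange(len(risk)), detained)
print("expected crime rate among released:", risk[released].mean())
```

One complication this toy version glosses over is that outcomes are only observed for defendants who were actually released, which is part of what makes evaluating such a policy against judges’ real decisions difficult.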

This is what one might call a big data problem with major social and legal benefits. There are thousands of past cases to examine, which is necessary for creating accurate models. There is also a major real-world pay-off if the decisions of judges can be improved by machine learning. Such findings could also help reduce the jail population, as there would be far fewer people on remand.

In turn, this type of study could help to create better predictive algorithms for assigning risk scores to defendants and criminals, of the kind already used by the US justice system (see the discussion of COMPAS below).

Fundamentally, assigning risk scores to people is a question of prediction, and that means making assumptions about future human behaviour based on the data you choose to put into the system, or even have available, to help make that decision.

And it remains highly controversial, as it creates new challenges: what happens if court staff input the wrong data into the system, or the algorithm’s developers give certain factors a greater weighting than others based on unconscious biases?

One might say that the algorithms used to predict future human behaviour are only as good as the subtlety and perception of those who create them. Data scientists who approach complex problems, such as crime, using simplistic group identifiers could add bias to the system rather than reduce it. The same goes for the judges who then use such algorithms in their judgments and court orders. Yet, in a more positive scenario such as Kleinberg’s, data scientists can also use algorithms to reduce the bias in the system by focusing on the factual data. It’s a double-edged sword.

As Kleinberg says of the current system: ‘In addition, by focusing the algorithm on predicting judges’ decisions, rather than defendant behaviour, we gain some insight into decision-making: a key problem appears to be that judges respond to ‘noise’ as if it were signal.’

By ‘noise’ the authors may mean that judges can be tempted to look at non-specific data, e.g. a person’s home location, which could be in a high-crime area, rather than looking at the individual. Or they may use ethnicity as a factor, which, because of existing biases in arrest rates, creates a further bias against that person when considering whether they would commit a crime in the future. These biases could then end up being built into any formula that judges use to make their decisions.
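As a purely hypothetical illustration of how this can happen, the short sketch below trains a model on synthetic data in which a coarse ‘home area’ flag is correlated with recorded re-offending (reflecting biased arrest rates), then scores two individuals with identical records. Nothing here comes from the paper or from COMPAS; it simply shows a proxy feature shifting a risk score.

```python
# Toy illustration (not from the paper): a coarse 'home area' feature acting as
# a proxy and shifting an individual's risk score regardless of their own record.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000

prior_arrests = rng.poisson(1.0, n)          # individual-level signal
high_crime_area = rng.integers(0, 2, n)      # 1 = lives in a high-crime area

# Hypothetical labels reflecting biased arrest data: people from the high-crime
# area are recorded as re-offending more often at the same prior record.
p = 0.15 + 0.10 * prior_arrests + 0.20 * high_crime_area
y = (rng.random(n) < np.clip(p, 0, 1)).astype(int)

X = np.column_stack([prior_arrests, high_crime_area])
model = LogisticRegression().fit(X, y)

# Two individuals with identical records, differing only in home area.
same_record = [[1, 0], [1, 1]]
# The second score is typically higher, purely because of the area flag.
print(model.predict_proba(same_record)[:, 1])
```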

As the US investigative news site ProPublica has explored, ‘algorithmic justice’ may have its challenges. In particular, the group looked at COMPAS, a US algorithmic system that assigns risk scores to criminals, and found what it believes to be bias based on background.

ProPublica stated that: ‘The data [in relation to COMPAS’s risk scores] showed that black defendants were twice as likely to be incorrectly labeled as higher risk than white defendants. Conversely, white defendants labeled low risk were far more likely to end up being charged with new offences than black people with comparably low COMPAS risk scores.’
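The kind of check ProPublica describes boils down to comparing error rates across groups. The sketch below, using made-up data and column names (not ProPublica’s actual dataset), shows how false positive and false negative rates might be computed per group.

```python
# Minimal sketch of a group-wise error-rate comparison of a risk tool.
# The data and column names are hypothetical, not ProPublica's files.
import pandas as pd

df = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "high_risk":  [1, 0, 1, 1, 0, 0],   # label assigned by the risk tool
    "reoffended": [0, 0, 1, 1, 0, 1],   # observed outcome
})

def rates(g):
    # False positive rate: labelled high risk but did not re-offend.
    fpr = ((g.high_risk == 1) & (g.reoffended == 0)).sum() / max((g.reoffended == 0).sum(), 1)
    # False negative rate: labelled low risk but did re-offend.
    fnr = ((g.high_risk == 0) & (g.reoffended == 1)).sum() / max((g.reoffended == 1).sum(), 1)
    return pd.Series({"fpr": fpr, "fnr": fnr})

print(df.groupby("group").apply(rates))
```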

ProPublica added that Northpointe, the company that sells the COMPAS system, said in response that the test was ethnically neutral. Interestingly, Kleinberg also looked into COMPAS last year in terms of its ability to provide ‘fairness’, ahead of the new paper this month, according to ProPublica.

So, are AI and algorithmic systems that predict future human behaviour a good or bad thing?

The answer has to be that AI systems used to make judgments, or any other form of algorithmic prediction, are only as fact-based as the data scientists make them.

But, once we get into the social sciences, such as defining causes of crime, clear empirical systems can break down. Inevitably we end up relying on human judgment to define the key factors in an algorithm. The algorithm won’t make a ‘mistake’ as such, but if you insert a bias, then that bias will be replicated throughout the results. Although, as Kleinberg has shown, data analysis can also be used to remove these biases to produce more accurate and socially beneficial outcomes.

In which case the conclusion, for now, seems to be that AI and algorithms are like most tools: it’s all about how you use them. They can do a lot of good, and they can also be made to cause harm if people don’t understand what they are doing. One potential answer is to ensure what is now being called ‘algorithmic transparency’, so people can see how decisions are made and then challenge them if they wish to.

In January 2017, Artificial Lawyer covered the story that legal AI company Legal Robot had signed up to the Association for Computing Machinery (ACM) statement on Algorithmic Transparency and Accountability. ACM’s statement outlines seven principles: Awareness; Access and Redress; Accountability; Explanation; Data Provenance; Auditability; and Validation & Testing.

At the time Dan Rubins, founder of Legal Robot, said: ‘As a young AI company, this is something we feel very strongly about and believe will become increasingly important as algorithms grow in their influence.’

This is clearly a growing area and Artificial Lawyer will keep you posted as it evolves.


The full details for Kleinberg’s paper are:

Human Decisions and Machine Predictions, by Jon Kleinberg, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig and Sendhil Mullainathan.

NBER Working Paper No. 23180, issued February 2017.