The justice system is increasingly turning to complicated computer algorithms to help make decisions about bail, sentencing and parole. But many question whether paying private software companies to use secret algorithms in criminal justice is in the public’s best interest.

Last month, New York City passed the country’s first legislation to subject such algorithms to greater public scrutiny. Known as the Algorithmic Accountability Bill, it established a task force to examine how algorithms are used by city agencies. Lauded by some as a watershed moment for ending the algorithmic bias of so-called “black box” systems in the justice system and elsewhere, it was called too ambitious by others.

But there’s another way to make some of the algorithms in courts more accountable: by using transparent models derived from public data and public source code. These models are free, their code is already in public repositories, and they could save taxpayers money.

Risk assessment tools, which have been in use since the 1920s, analyze how people with similar profiles have behaved in the past to predict a defendant’s likelihood of committing another crime. As many as 60 such tools are in use across the country.

The American Law Institute’s Model Penal Code, currently being revised for the first time since 1962, has adopted language endorsing the role of risk assessments. Advocates say they help judges determine the risk that an individual poses to society more consistently than predictions based on human intuition and experience alone.

So the question is not whether the justice system should embrace risk assessment algorithms, but which ones it should use.

However, the opaque and proprietary nature of many of the new prediction tools presents unique challenges.

One commonly used tool, COMPAS, is proprietary. We do not know its secret formula. It scores a person’s risk of recidivism and assesses their “needs” based on 130-plus items including criminal history, age, gender and other information, such as whether their mother was ever arrested or whether they have trouble paying bills.

And its use has led to mistakes.

In 2016, Glenn Rodríguez, an inmate at the Eastern Correctional Facility in upstate New York, was mistakenly denied parole, despite a record of good behavior behind bars, because a corrections employee checked a wrong answer on his COMPAS survey.

And in 2017, a 19-year-old San Francisco man was released from jail based on a miscalculation of a different risk score that deemed him only medium risk, just days before he allegedly killed someone.

Such errors are possible in any risk assessment. The underlying data could be flawed because of typos, missing entries, inaccurate information or other problems. But it is hard to know when or why a flaw occurs if the calculation is proprietary. When these mistakes go unnoticed, courts can easily base high-stakes decisions on information that isn’t true.

New methods for interpretable machine learning have been developed over the last few years. These methods can predict future criminal behavior just as accurately as “black box” models, but their predictions are completely transparent.

They enable people to see exactly why they received the risk score they did. They can make the justice system more reliable and could save millions of dollars.

Since they are developed using public data and public source code, outside researchers can test them for accuracy and racial bias, or evaluate them against other models.
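Because the models, the data and the source code are all public, that kind of audit is straightforward to run. The short sketch below, a minimal illustration rather than part of any released tool, shows one simple check an outside researcher could perform: comparing a model’s accuracy overall and within each racial group. The function name and the toy labels, predictions and groups are invented for the example.

    import numpy as np

    def accuracy_by_group(y_true, y_pred, group):
        """Overall accuracy plus accuracy within each group (e.g., racial group),
        a basic way to see whether a model's error rates differ across groups."""
        y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
        results = {"overall": float(np.mean(y_true == y_pred))}
        for g in np.unique(group):
            mask = group == g
            results[str(g)] = float(np.mean(y_true[mask] == y_pred[mask]))
        return results

    # Toy example with made-up outcomes (1 = re-arrested) and two groups:
    print(accuracy_by_group(
        y_true=[1, 0, 1, 1, 0, 0],
        y_pred=[1, 0, 0, 1, 0, 1],
        group=["A", "A", "A", "B", "B", "B"],
    ))

The same few lines could be pointed at any competing model’s predictions, which is exactly the kind of side-by-side evaluation a proprietary scoring formula makes impossible.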

In a recent academic paper, my colleagues Elaine Angelino, Nicholas Larus-Stone, Daniel Alabi, Margo Seltzer and I used a new machine-learning algorithm we designed, called CORELS, to produce simple yet accurate models that predict a person’s likelihood of re-arrest.

One predictive model from the CORELS algorithm says that if someone (i) has at least 3 prior offenses, or (ii) has 2 or more priors and is between 21 and 23 years old, or (iii) is younger than 21 and male, then we should predict that they will be arrested within two years of release. If none of these conditions are met, the model predicts they will not be arrested. Even though the models from CORELS are simple, our study using data from thousands of individuals in Broward County, Florida, shows they are as accurate as COMPAS and many other state-of-the-art machine-learning methods, for both blacks and whites.
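That rule list is simple enough to write out directly in a few lines of code. The sketch below is only an illustration of the model as described here, not our released implementation; the function and variable names are invented, and the age boundaries mirror the prose above.

    def rule_list_predicts_rearrest(priors, age, male):
        """Predict re-arrest within two years of release using the rule list
        described in the text: prior offense count, age and sex only."""
        if priors >= 3:                       # (i) at least 3 prior offenses
            return True
        if priors >= 2 and 21 <= age <= 23:   # (ii) 2 or more priors, age 21 to 23
            return True
        if age < 21 and male:                 # (iii) younger than 21 and male
            return True
        return False                          # otherwise, predict no arrest

    # Example: a 22-year-old with two prior offenses triggers rule (ii), regardless of sex.
    print(rule_list_predicts_rearrest(priors=2, age=22, male=False))  # True

The entire model fits on a single screen, which is the point: anyone affected by its prediction can trace exactly which condition applied to them.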

The other machine-learning methods often produce formulas that are too complicated to fit on a page, rather than a set of rules like the CORELS model above. All of CORELS’ code and the data are publicly available.

Given the existence of these simpler models, why do we still use proprietary models instead?

Last June, the U.S. Supreme Court declined to hear an appeal by a Wisconsin man named Eric Loomis, who said he was denied due process because his prison sentence was based on a prediction made by a secret computer algorithm that its private developer, the maker of COMPAS, refused to explain.

New York City’s Algorithmic Accountability Bill represents an opportunity for decision-makers to consider these issues again.

The task force established by the bill will have 18 months to figure out how to test algorithms that could be used by courts, police and city agencies for bias, and how to make them more understandable to the public.

But when it comes to criminal justice, simply providing an explanation of a black box prediction, and a means to seek redress—as the bill proposes—is not enough. Explanations do not reveal the full truth.

If New York City takes this bill seriously, it will not allow proprietary models for risk assessments at all. Proprietary models are error prone, creating dangerous situations for the public; they are potentially unfair, raise due process questions, waste taxpayer dollars, and have not been shown to be any more accurate than extremely simple transparent models.

Transparent models are strictly better for the justice system in every possible way.

This article originally appeared on The Crime Report.

Cynthia Rudin

Duke University

Cynthia Rudin is an associate professor of computer science, electrical engineering and statistical science at Duke University and directs the Prediction Analysis Lab. Previously, Prof. Rudin held positions at MIT, Columbia, and NYU.
