“You Don’t Want A Cop Arresting Anyone When They Haven’t Done Anything Wrong”


Prejudice in the justice system is widely acknowledged to be a bad thing, but is the slippery slope of predictive analytics much different?

Schools and social services have always strived to identify children who might be headed for trouble, and that's a good thing, but the algorithms now being used in this area are treated as if they speak with flawless authority when flagging some minors as criminals-in-waiting. Couldn't such a system lead police to pre-judge? Should we have to defend ourselves against guilt before having done anything wrong?

From Matt Stroud's Pacific Standard piece, "Should Los Angeles County Predict Which Children Will Become Criminals?":

When people talk about predictive analytics—whether it’s in reference to policing, banking, gas drilling, or whatever else—they’re often talking about identifying trends: using predictive tools to intuit how groups of people and/or objects might behave in the future. But that’s changing.

In a growing number of places, prediction is getting more personal. In Chicago, for example, there’s the “heat list”—a Chicago Police Department project designed to identify the Chicagoans most likely to be involved in a shooting. In some state prison systems, analysts are working on projects designed to identify which particular prisoners will re-offend. In 2014, Rochester, New York, rolled out its version of L.A. County’s DPP program—with the distinction that it’s run by cops, and spearheaded by IBM—which offered the public just enough information to cause concern.

“It’s worrisome,” says Andrew G. Ferguson, a law professor at the University of the District of Columbia who studies and writes about predictive policing. “You don’t want a cop arresting anyone when they haven’t done anything wrong. The idea that some of these programs are branching into child welfare systems—and that kids might get arrested when they haven’t done anything wrong—only raises more questions.”

Ferguson says the threat of arrest poses a problem in all the most widely reported predictive programs in the country. But he acknowledges that there are valid arguments underpinning all of them.