AI experts say research into algorithms that purport to predict crime must end

A coalition of AI researchers, data scientists, and sociologists has called on the academic world to stop publishing studies that claim to predict an individual’s criminality using algorithms trained on data like facial scans and criminal statistics.

Such work is not only scientifically illiterate, says the Coalition for Critical Technology, but it perpetuates a cycle of prejudice against Black people and people of color. Numerous studies show that the justice system treats these groups more harshly than white people, so any software trained on this data simply amplifies and entrenches societal bias and racism.

“Let’s be clear: There is no way to develop a system that can predict or identify ‘criminality’ that is not racially biased, because the category of ‘criminality’ itself is racially biased,” the group writes. “Research of this nature, and its accompanying claims to accuracy, rest on the assumption that data regarding criminal arrest and conviction can serve as reliable, neutral indicators of underlying criminal activity. Yet these records are far from neutral.”

The Coalition drafted an open letter in response to news that Springer, the world’s largest publisher of academic books, was planning to publish just such a study. The letter, which has now been signed by 1,700 experts, calls on Springer to withdraw the study and asks other academic publishers to refrain from publishing similar work in the future.

“At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is great demand in law enforcement for research of this nature,” the group writes. “The circulation of this work by a major publisher like Springer would represent a significant step toward the legitimation and application of repeatedly debunked, socially harmful research in the real world.”

In the study in question, titled “A Deep Neural Network Model to Predict Criminality Using Image Processing,” researchers claimed to have created a facial recognition system that was “capable of predicting whether someone is likely to be a criminal … with 80 percent accuracy and no racial bias,” according to a since-deleted press release. The paper’s authors included PhD student and former NYPD police officer Jonathan W. Korn.

In response to the open letter, Springer said it would not publish the paper, according to MIT Technology Review. “The paper you are referring to was submitted to a forthcoming conference for which Springer had planned to publish the proceedings,” said the company. “After a thorough peer review process, the paper was rejected.”

However, as the Coalition for Critical Technology makes clear, this incident is just one example of a broader trend within data science and machine learning, where researchers use socially contingent data to try to predict or classify complex human behavior.

In a notable example from 2016, researchers from Shanghai Jiao Tong University claimed to have created an algorithm that could likewise predict criminality from facial features. The study was criticized and refuted, with researchers from Google and Princeton publishing a lengthy rebuttal warning that AI researchers were reviving the pseudoscience of physiognomy, a discipline founded in the 19th century by Cesare Lombroso, who claimed he could identify “born criminals” by measuring the dimensions of their faces.

“When put into practice, the pseudoscience of physiognomy becomes the pseudoscience of scientific racism,” the researchers wrote. “Rapid developments in artificial intelligence and machine learning have enabled scientific racism to enter a new era, in which machine-learned models embed biases present in the human behavior used for model development.”

Images used in the 2016 paper that purported to predict criminality from facial features. The top row shows “criminal” faces while the bottom row shows “non-criminal” faces.
Image: Xiaolin Wu and Xi Zhang, “Automated Inference on Criminality Using Face Images”

The 2016 paper also demonstrated how easy it is for AI practitioners to fool themselves into thinking they have found an objective system for measuring criminality. The Google and Princeton researchers noted that, based on the data shared in the paper, all of the “non-criminals” appeared to be smiling and wearing collared shirts and suits, while none of the (scowling) criminals did. It is possible that this simple and misleading visual cue, rather than any sophisticated analysis, was guiding the algorithm’s predictions.

The Coalition for Critical Technology letter comes at a time when movements around the world are highlighting issues of racial justice, sparked by the police killing of George Floyd. These protests have also seen major technology companies pull back on their use of facial recognition systems, which research by Black academics has shown to be racially biased.

The authors and signatories of the letter call on the AI community to reconsider how it evaluates the “goodness” of its work, thinking not only about metrics like accuracy and precision, but also about the social effects such technology can have on the world. “If machine learning is to bring about the ‘social good’ touted in grant proposals and press releases, researchers in this space must actively reflect on the power structures (and the attendant oppressions) that make their work possible,” the authors write.