Facial recognition to ‘predict criminals’ sparks dispute over AI bias


Image caption: A woman holds her phone in front of her as she scans her face with points of light, in this photo illustration.

An American university’s claim that it can use facial recognition to “predict crime” has renewed the debate on racial bias in technology.

Harrisburg University researchers said their software “can predict whether someone is a criminal, based solely on an image of their face.”

The software "is intended to help the police prevent crime," the researchers said.

But 1,700 academics have signed an open letter demanding that the research remain unpublished.

A member of the Harrisburg research team, a former police officer, wrote: "Identifying the criminality of [a] person from their facial image will allow a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime."

The researchers claimed that their software works “without racial bias.”

But the organizers of the open letter, the Coalition for Critical Technology, said: "Such claims are based on unsound scientific premises, research, and methods, which numerous studies spanning our respective disciplines have debunked over the years.

“These discredited claims continue to resurface.”

The group cites "countless studies" suggesting that people from some ethnic minorities are treated more harshly in the criminal justice system, distorting the data on what a criminal supposedly "looks like."

Commenting on the controversy, Cambridge University computer science researcher Krittika D'Silva said: "It is irresponsible for anyone to think they can predict criminality based solely on an image of a person's face."

"The implications of this are that crime 'prediction' software can cause serious harm, and it is important that researchers and policymakers take these issues seriously.

"Numerous studies have shown that machine learning algorithms, in particular facial recognition software, have racial, gender, and age biases," she said, pointing to a 2019 study indicating that facial recognition works poorly on women, older people, and black or Asian people.

Last week, an example of such a failure went viral online, when an AI upscaler that "depixels" faces turned former US President Barack Obama white in the process.

The upscaler itself simply invents new faces based on an initial pixelated photo, without really aiming for a true recreation of the real person.

But the team behind the project, Pulse, has since amended its paper to say that it can "shed some light" on one of the tools they use to generate faces.

The New York Times also reported this week on the case of a black man who became the first known person to be wrongfully arrested based on a false facial recognition algorithm match.

In the Harrisburg case, the university had said the research would appear in a book published by Springer Nature, whose titles include the prestigious academic journal Nature.

Springer, however, said the paper was "at no time" accepted for publication. Instead, it had been submitted to a conference whose proceedings Springer will publish, and had been rejected by the time the open letter was issued.

"[It] went through a thorough peer review process. The series editor's decision to reject the final paper was made on Tuesday, June 16, and the authors were officially notified on Monday, June 22," the company said in a statement.

Harrisburg University, meanwhile, withdrew its own press release “at the request of the faculty involved.”

The paper was being updated "to address concerns," it said.

And while it supported academic freedom, its staff's research "does not necessarily reflect the views and goals of this university."

Meanwhile, organizers of the Coalition for Critical Technology have demanded that "all publishers should refrain from publishing similar studies in the future."