Police conducted a trial of controversial facial recognition software without consulting their own bosses or the Privacy Commissioner.
Police conducted a trial of controversial facial recognition software.
Source: RNZ / Richard Tindiller
By Mackenzie Smith of rnz.co.nz
The American firm Clearview AI’s system, which is used by hundreds of police departments in the United States and several other countries, is effectively a search engine for faces – billing itself as a crime-fighting tool to identify perpetrators and victims.
New Zealand Police first contacted the firm in January, and later set up a trial of the software, according to documents RNZ obtained under the Official Information Act.
However, the high tech crime unit handling the technology appears not to have sought the necessary clearance before using it.
Privacy Commissioner John Edwards, who was not aware police had trialled Clearview AI when RNZ contacted him, said he would expect to be briefed before such a trial was underway. He said Police Commissioner Andrew Coster told him he was also unaware of the trial.
“He’s concerned it was able to happen without a high-level sign-off, and [the] involvement of my office,” Edwards said, following a phone conversation with Coster on Tuesday. “They will be looking at protocols, how they evaluate new technologies.”
Police declined to be interviewed, and would not address the account of Coster’s remarks.
“Police undertook a short trial of Clearview AI earlier this year to assess whether it offered any value to police investigations,” said Detective Superintendent Tom Fitzgerald, who is the national manager of criminal investigations, in a statement.
“This was a very limited trial to assess investigative value. The trial has now ceased and the value to investigations has been assessed as very limited and the technology at this stage will not be used by New Zealand Police.”
Prior to the statement from Fitzgerald, police spokespeople told RNZ on two separate occasions in the past week – on 7 and 11 May – that the trial was still underway. A spokesperson later said these statements were incorrect.
Clearview AI, whose early financial backers include New Zealand citizen Peter Thiel, has built a database of about 2.8 billion faces by lifting users’ images from social media sites like Facebook, a practice that violates most of these companies’ terms of service.
The software has not been independently tested, but one “accuracy test result” the company sent New Zealand Police concluded it had identified all US members of Congress with 100 percent accuracy.
“Clearview can be used for counter-terrorism to quickly and accurately identify suspects and build up investigations using public information,” employee Marko Jukic told police in a 31 January email. The company reportedly later fired Jukic after it emerged he had published controversial views online.
Police would not say what they used Clearview AI for in the trial, or who had access to it. Clearview, which has been used in the US to solve everything from mailbox thefts to cases of child sexual abuse, did not respond to questions about its relationship with New Zealand police.
A law lecturer at Victoria University of Wellington, Marcin Betkier, said New Zealand’s privacy laws offered no legal protections to individuals whose data was used by Clearview.
“The data can be used against us, whether by police, or maybe by some other third party,” he said.
Edwards said facial recognition technology was inevitable in New Zealand, but there were ways to reduce its risks, including by reviewing the technology before it was trialled.
“I was a little surprised by this one,” he said.