Things are different on the other side of the mirror.
The text is backwards. Clocks run counterclockwise. Cars are driving on the wrong side of the road. Right hands become left hands.
Intrigued by how reflection changes images in subtle and not-so-subtle ways, a team of researchers at Cornell University used artificial intelligence to investigate what sets originals apart from their reflections. Their algorithms learned to pick up on unexpected clues such as hair parts, gaze direction and, surprisingly, beards. The findings have implications for how machine learning models are trained and for detecting faked images.
“The universe is not symmetric. If you flip an image, there are differences,” said Noah Snavely, associate professor of computer science at Cornell Tech and senior author of the study, “Visual Chirality,” presented at the 2020 Conference on Computer Vision and Pattern Recognition (CVPR), held virtually June 14-19. “I am intrigued by the discoveries you can make with new ways of obtaining information.”
Zhiqiu Lin is the paper’s first author; co-authors are Abe Davis, assistant professor of computer science, and Cornell Tech postdoctoral researcher Jin Sun.
Snavely said that differentiating between original images and their reflections is a surprisingly easy task for a computer: a basic deep learning algorithm can quickly learn to classify whether an image has been flipped with 60% to 90% accuracy, depending on the kinds of images used to train it. Many of the clues it picks up are difficult for humans to perceive.
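To make that setup concrete, here is a minimal sketch of such a flip-detection classifier, assuming PyTorch and an off-the-shelf ResNet-18 backbone; it illustrates the general idea only and is not the authors’ model or training code:

```python
# Minimal sketch (illustrative, not the authors' code): train a binary
# classifier to predict whether an image has been horizontally flipped.
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF
from torchvision.models import resnet18

model = resnet18(num_classes=2)   # class 0 = original, class 1 = mirrored
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def make_batch(images):
    """Randomly mirror half of a batch of image tensors and label accordingly."""
    labels = torch.randint(0, 2, (images.size(0),))
    inputs = torch.stack([TF.hflip(img) if y == 1 else img
                          for img, y in zip(images, labels)])
    return inputs, labels

def train_step(images):
    """One optimization step on a batch of (3, H, W) float image tensors."""
    inputs, labels = make_batch(images)
    loss = criterion(model(inputs), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```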
To gain insight into how the algorithm makes these decisions, the team developed technology for this study that creates a heat map indicating which parts of the image the algorithm pays attention to.
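As an illustration of how such a heat map can be computed, the following Grad-CAM-style sketch builds on the hypothetical classifier above; Grad-CAM is a widely used attribution technique and is not necessarily the visualization method the Cornell team used:

```python
# Illustrative Grad-CAM-style heat map: which image regions drive the
# flip/no-flip decision of the (hypothetical) classifier defined above.
import torch.nn.functional as F

activations, gradients = {}, {}

def save_activation(module, inp, out):
    # Keep the last conv feature map and capture its gradient on backward.
    activations["feat"] = out
    out.register_hook(lambda grad: gradients.update(feat=grad))

model.layer4.register_forward_hook(save_activation)

def chirality_heatmap(image):
    """Return a normalized (H, W) heat map for the 'mirrored' class."""
    model.zero_grad()
    logits = model(image.unsqueeze(0))        # shape (1, 2)
    logits[0, 1].backward()                   # gradient of the 'mirrored' score
    feat = activations["feat"]                # (1, C, h, w)
    weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * feat).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:],
                        mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze().detach()
```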
They found, not surprisingly, that the most commonly used clue was text, which looks different forward and backward in every written language. To learn more, they removed images containing text from their dataset and found that the next set of features the model focused on included wristwatches, shirt collars (buttons tend to be on the left-hand side), faces and phones, which most people tend to hold in their right hands, as well as other clues that reveal right-handedness.
The researchers were intrigued by the algorithm’s tendency to focus on faces, which don’t seem obviously asymmetric. “Somehow, it left more questions than answers,” Snavely said.
They then conducted another study focused on faces and found that the heat map lit up in areas including the hair part, gaze direction (most people, for reasons the researchers don’t know, look to the left in portrait photos) and beards.
Snavely said he and his team members have no idea what information the algorithm is finding in beards, but they hypothesized that the way people comb or shave their faces could reveal handedness.
“It is a form of visual discovery,” said Snavely. “If you can run machine learning at scale on millions and millions of images, maybe you can start discovering new facts about the world.”
Each of these clues on its own may be unreliable, but the algorithm can build greater confidence by combining multiple clues, the findings showed. The researchers also found that the algorithm uses low-level signals, stemming from the way cameras process images, to make its decisions.
Although more study is needed, the findings could affect how machine learning models are trained. These models need vast numbers of images in order to learn how to classify and identify images, so computer scientists often use mirrored copies of existing images to effectively double the size of their datasets.
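In practice, the mirroring the article refers to is usually a one-line augmentation transform; a minimal example using torchvision (an assumption about tooling, not something drawn from the paper):

```python
# Common flip-based augmentation: mirror roughly half the training images.
# Whether this is appropriate depends on whether the task is sensitive to
# visual chirality, which is exactly the question the study raises.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),  # mirror each image with probability 0.5
    transforms.ToTensor(),
])
```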
Examining how these reflected images differ from the originals could reveal information about possible biases in machine learning that could lead to inaccurate results, Snavely said.
“This leads to an open question for the computer vision community, which is, when is it okay to make this flip to augment your dataset, and when is it not?” he said. “I hope this makes people think more about these questions and start developing tools to understand how these algorithms are biased.”
Understanding how reflection changes an image could also help researchers use artificial intelligence to identify images that have been faked or doctored, an issue of growing concern on the internet.
“This is perhaps a new tool or insight that can be used in the universe of image forensics, if you want to tell whether something is real or not,” Snavely said.
More information: “Visual Chirality,” linzhiqiu.github.io/papers/chirality/main.pdf
Provided by Cornell University
Citation: Research reflects how AI sees through the mirror (2020, July 2) retrieved on July 3, 2020 from https://techxplore.com/news/2020-07-ai-glass.html
This document is subject to copyright. Other than fair dealing for private study or research purposes, no part may be reproduced without written permission. The content is provided for informational purposes only.