Artificial intelligence researchers use heartbeat detection to identify deepfake videos




Facebook and Twitter earlier this week removed social media accounts associated with the Internet Research Agency, the Russian troll farm that interfered in the US presidential election four years ago and spread misinformation to as many as 126 million Facebook users. Today, Facebook implemented measures aimed at curbing misinformation ahead of Election Day in November. Deepfakes can make for epic memes or put Nicolas Cage in every movie, but they can also undermine elections. As election interference threats mount, two teams of AI researchers have recently introduced novel approaches to identifying deepfakes by looking for evidence of a heartbeat.

Existing deepfake detection models focus on traditional media forensics methods, such as tracking unnatural eyelid movements or distortions at the edges of the face. The first study on detecting unique GAN fingerprints was introduced in 2018. Photoplethysmography (PPG), by contrast, translates visual cues, such as the subtle changes in skin color caused by blood flow, into a human heartbeat signal. Remote PPG applications are being explored in areas like health care, but PPG is also being used to identify deepfakes, because generative models are not currently known to reproduce the visual signs of human blood flow.
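To give a concrete sense of what a remote PPG signal is, here is a minimal sketch of extracting one from a face video. The fixed forehead region of interest, the frame rate, and the filter settings are illustrative assumptions, not part of either paper's pipeline.

```python
# Minimal remote-PPG sketch: average the green channel over a (hypothetical)
# fixed forehead region in each frame, then band-pass around plausible
# heart-rate frequencies.
import cv2
import numpy as np
from scipy.signal import butter, filtfilt

def extract_ppg(video_path, roi=(100, 50, 80, 40), fps=30.0):
    """Return a band-pass filtered PPG trace from the given ROI (x, y, w, h)."""
    cap = cv2.VideoCapture(video_path)
    trace = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        x, y, w, h = roi
        patch = frame[y:y + h, x:x + w]
        # The green channel is most sensitive to blood-volume changes.
        trace.append(patch[:, :, 1].mean())
    cap.release()
    trace = np.asarray(trace, dtype=np.float64)
    trace -= trace.mean()
    # Keep only plausible heart-rate frequencies (~0.7-4 Hz, i.e. 42-240 bpm).
    b, a = butter(3, [0.7 / (fps / 2), 4.0 / (fps / 2)], btype="band")
    return filtfilt(b, a, trace)
```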

In work published last week, researchers from Binghamton University and Intel introduced an AI that goes beyond deepfake detection to recognize which deepfake model made a tampered video. The researchers found that videos produced by deepfake models leave unique generative noise and biological signal residues, which they call "deepfake heartbeats." The detection approach looks for residual biological signals at 32 different spots on a person's face, which the researchers call PPG cells.
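As a rough illustration of sampling many facial regions, the sketch below assembles per-region PPG traces into one matrix. The 4x8 grid and the simple green-channel averaging are assumptions chosen for illustration, not the authors' exact PPG cell construction.

```python
# Hypothetical sketch: split an aligned face crop into a 4x8 grid (32 regions)
# and compute a raw PPG trace per region.
import numpy as np

def ppg_cell(face_frames, grid=(4, 8)):
    """face_frames: array of shape (T, H, W, 3) of aligned face crops (RGB).
    Returns an array of shape (rows * cols, T) of mean-centered PPG traces."""
    t, h, w, _ = face_frames.shape
    rows, cols = grid
    traces = []
    for r in range(rows):
        for c in range(cols):
            # Green-channel patch for this grid region, across all frames.
            patch = face_frames[:, r * h // rows:(r + 1) * h // rows,
                                   c * w // cols:(c + 1) * w // cols, 1]
            traces.append(patch.reshape(t, -1).mean(axis=1))
    traces = np.stack(traces)  # (32, T)
    return traces - traces.mean(axis=1, keepdims=True)
```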

“We propose a deepfake source detector that predicts the source generative model for any given video. To our knowledge, our approach is the first to perform a deeper analysis for source detection that interprets generative model residuals for deepfake videos,” the paper reads. “Our key finding stems from the fact that we can interpret these biological signals as fake heartbeats that contain a signature transformation of the residuals per model. This gives rise to a new exploration of these biological signals not only to determine the authenticity of a video, but also to classify the source model that generated the video.”

In experiments with deepfake video datasets, the PPG cell approach detected deepfakes with 97.3% accuracy and identified generative deepfake models from the popular FaceForensics++ deepfake dataset with 93.4% accuracy.

The researchers’ paper, “How Do the Hearts of Deep Fakes Beat? Deep Fake Source Detection via Interpreting Residuals with Biological Signals,” was published last week and accepted for publication at the International Joint Conference on Biometrics, which takes place later this month.

In other recent work, AI researchers from the Alibaba Group, Kyushu University, Nanyang Technological University, and Tianjin University presented DeepRhythm, a deepfake detection model that recognizes human heartbeats using visual PPG. The authors say DeepRhythm differs from previously existing models for identifying live people in a video because it tries to recognize rhythm patterns, since fake videos may still carry heart rates, but their patterns are diminished by deepfake methods and differ from those of real videos.

DeepRhythm incorporates a heart rhythm motion amplification module and a learnable spatio-temporal attention mechanism at various stages of the network. The researchers say DeepRhythm outperforms numerous state-of-the-art deepfake detection methods when using FaceForensics++ as a benchmark.
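For readers curious what a learnable spatio-temporal attention layer might look like in code, here is a toy sketch. The layer shapes, the two softmax stages, and the pooling choices are assumptions in the spirit of the description above, not DeepRhythm's actual architecture.

```python
# Toy spatio-temporal attention over video feature maps: reweight spatial
# locations within each frame, then reweight frames over time.
import torch
import torch.nn as nn

class SpatioTemporalAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.spatial = nn.Conv2d(channels, 1, kernel_size=1)  # per-pixel weight
        self.temporal = nn.Linear(channels, 1)                 # per-frame weight

    def forward(self, x):
        # x: (batch, time, channels, height, width) feature maps
        b, t, c, h, w = x.shape
        flat = x.reshape(b * t, c, h, w)
        # Spatial attention: softmax over pixel locations within each frame.
        s = torch.softmax(self.spatial(flat).reshape(b, t, 1, h * w), dim=-1)
        x = x * s.reshape(b, t, 1, h, w)
        # Temporal attention: softmax over frames, then pool over time.
        frame_feat = x.mean(dim=(3, 4))                         # (b, t, c)
        w_t = torch.softmax(self.temporal(frame_feat), dim=1)   # (b, t, 1)
        return (x * w_t.reshape(b, t, 1, 1, 1)).sum(dim=1)      # (b, c, h, w)
```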

“Experimental results on FaceForensics++ and the DeepFake Detection Challenge preview dataset demonstrate that our method not only outperforms state-of-the-art methods, but is also resistant to various degradations,” reads the paper, titled “DeepRhythm: Exposing DeepFakes with Attentional Visual Heartbeat Rhythms.” The paper was published in June, revised last week, and accepted to the ACM Multimedia conference, which takes place in October.

Both groups of researchers say that in future work they want to explore combining PPG-based systems with existing video authentication methods, which would allow more accurate or robust identification of deepfake videos.

Earlier this week, Microsoft introduced its deepfake detection service Video Authenticator for Azure. As part of the launch, Video Authenticator will be made available to media organizations and political campaigns through the AI Foundation’s Reality Defender program.

Even as concerns about election interference mount, today manipulated videos and falsehoods spread by President Trump and his team appear to pose bigger threats than deepfakes.

On Monday, White House director of social media Dan Scavino shared a video that Twitter labeled “manipulated media.” The original video showed Harry Belafonte asleep during a news interview, while the manipulated version made it appear that Democratic presidential candidate Joe Biden was the one asleep. A CBS Sacramento anchor called the video fake on Monday, and Twitter removed it following a report filed by the copyright owner. But the manipulated video had already been viewed more than a million times.
