Microsoft has released a new tool to identify ‘deepfake’ photos and videos that have been created to spread false information online.
Deepfakes, also known as synthetic media, are photos, videos or audio files that have been manipulated with AI to show or say something that is not real.
There were at least 96 ‘foreign-influenced’ deepfake social media campaigns targeting people in 30 countries between 2013 and 2019, according to Microsoft.
To combat campaigns that use this manipulated form of media, the tech giant has launched a new ‘Video Authenticator’ tool that can analyze a still photo or video and provide a percentage probability that the media source has been tampered with.
It works by detecting the blending boundary of the deepfake and subtle fading or grayscale elements that may not be detectable by the human eye.
Microsoft says its Video Authenticator tool looks for signs of blending around the eyes and can give a ‘score’ that shows how likely the video is to have been tampered with.
Deepfake campaigns, carried out on social media, often seek to smear notable people, persuade the public, or polarize debates during elections or a major crisis.
About 93 percent of deepfake campaigns include new original content, 86 percent amplify pre-existing content, and 74 percent distort verifiable facts, Microsoft’s research revealed.
A fake video could make people appear to say things they never said, or to be in places they never were, potentially damaging reputations or even influencing election results.
To combat this, Microsoft has also launched an online questionnaire designed to ‘improve digital literacy’ by teaching people how to detect a deepfake.
You can take the quiz online at the SpotTheDeepFake.org website.
Some researchers have created their own deepfake videos to demonstrate the scale of the technological threat; for example, MIT produced a video that appears to show Richard Nixon telling the world that Apollo 11 ended in disaster.
During the last UK general election, deepfake videos were also made of Boris Johnson and Jeremy Corbyn supporting each other.
Because deepfakes are generated by artificial intelligence that can continue to learn, it is inevitable that they will sometimes beat conventional detection technology, Microsoft said.
However, in the short term, such as during the upcoming US election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes.
Microsoft says its new Video Authenticator can analyze a still photo or video to provide a percentage probability, or confidence score, that the media is artificially tampered with.
For video, it can provide this percentage in real time on each frame as the video plays.
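Microsoft has not published Video Authenticator's internals, so the following is only an illustrative sketch of the idea described above: scoring each frame for tampering by looking at how sharply the pixels change along a (hypothetical) face-boundary region, on the assumption that blended, subtly faded boundaries look suspiciously smooth. The function names, the boundary indices, and the scoring heuristic are all invented for illustration.

```python
# Toy per-frame 'tampering likelihood' scorer -- NOT Microsoft's algorithm.
# Frames are flat lists of greyscale values (0-255); 'boundary' holds the
# indices of pixels along a supposed face boundary.

def frame_score(pixels, boundary):
    """Return a 0-1 suspicion score for a single frame."""
    # Average absolute difference between each boundary pixel and its
    # neighbour: genuine boundaries tend to have sharper transitions.
    diffs = [abs(pixels[i] - pixels[i + 1]) for i in boundary]
    sharpness = sum(diffs) / len(diffs)
    # Map low sharpness (subtle fading/blending) to a high suspicion score,
    # clamped to the 0-1 range. The 64 divisor is an arbitrary scale factor.
    return max(0.0, min(1.0, 1.0 - sharpness / 64.0))

def score_video(frames, boundary):
    """Score every frame, as a real-time detector would while video plays."""
    return [frame_score(f, boundary) for f in frames]

# A crisp boundary (big pixel jumps) scores low; a blended one scores high.
crisp   = [0, 255, 0, 255, 0, 255]
blended = [120, 124, 121, 125, 122, 126]
scores = score_video([crisp, blended], boundary=[0, 2, 4])
```

A real detector would of course run a trained neural network over face crops rather than this hand-written heuristic; the sketch only shows the shape of a per-frame confidence score.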
Video Authenticator was created using the public FaceForensics++ dataset and tested on the Deepfake Detection Challenge dataset, both leading datasets for training and testing deepfake detection technologies.
“We expect that the methods to generate synthetic media continue to grow in sophistication,” explained the technology giant.
As all AI detection methods have failure rates, we must understand and be prepared to respond to the deepfakes that slip through detection methods.
Therefore, in the long term, we must seek stronger methods to maintain and certify the authenticity of news articles and other media.
“Currently, there are few tools that help reassure readers that the media they view online is from a trusted source and has not been tampered with.”
Microsoft has also released another new technology that can detect manipulated content and assure people that what they are viewing is genuine.
Content producers will be able to add a certificate confirming that the content is genuine, which travels with it wherever it goes online. A browser extension will be able to read the certificate and tell people whether they are watching the authentic version of a video.
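Microsoft has not published the certificate format, but the general pattern it describes can be sketched as follows: the producer hashes the content and signs the hash, and the reader (such as a browser extension) recomputes the hash and checks the signature. In this minimal sketch an HMAC with a shared demo key stands in for the producer's cryptographic signature; a real system would use asymmetric keys and a certificate chain, and all names here are hypothetical.

```python
import hashlib
import hmac

# Hypothetical signing key for illustration only; in a real system the
# producer would hold a private key and publish the matching certificate.
PRODUCER_KEY = b"demo-signing-key"

def certify(content: bytes) -> str:
    """Producer side: issue a 'certificate' that travels with the content."""
    digest = hashlib.sha256(content).hexdigest()
    return hmac.new(PRODUCER_KEY, digest.encode(), hashlib.sha256).hexdigest()

def verify(content: bytes, certificate: str) -> bool:
    """Reader side (e.g. a browser extension): check content is untampered."""
    return hmac.compare_digest(certify(content), certificate)

video = b"original video bytes"
cert = certify(video)
ok = verify(video, cert)                  # the authentic copy verifies
tampered = verify(video + b"!", cert)     # a modified copy fails
```

Because even a one-byte change to the content changes its SHA-256 hash, any edit made after certification invalidates the certificate, which is what lets the extension flag tampered copies.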
To improve digital literacy, Microsoft released a tool that aims to train people to be better at detecting fake images and videos.
Even with new technology to ‘catch and warn’ people about deepfakes, it is impossible to prevent everyone from being misled or to catch every deepfake.
Microsoft has also worked with the University of Washington to improve media literacy to help people separate misinformation from genuine facts.
“Practical media knowledge can allow us all to think critically about the media context and become more engaged citizens while still appreciating satire and parody,” the firm wrote in a blog post.
“While not all synthetic media are bad, even a brief intervention with media literacy resources has been shown to help people identify synthetic media and treat it more cautiously.”
Microsoft released an interactive questionnaire for voters in the upcoming US elections to help them learn about synthetic media, develop critical media literacy skills, and gain a deeper understanding of the impact deepfakes can have on democracy.