To help combat the spread of deepfakes, Microsoft has released a new video authentication tool that can analyze a still photo or video and provide a confidence score indicating the probability that the media has been artificially manipulated.
For video, Microsoft said the tool can provide this score in real time for each frame as the video plays. It works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that may not be detectable by the human eye.
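The idea of a running, per-frame confidence score can be illustrated with a small sketch. Everything here is hypothetical: `score_frame` is a stand-in for a trained detector (Microsoft has not published its model), and the "video" is just a list of pixel-intensity frames.

```python
# Hedged sketch of per-frame deepfake scoring. score_frame() is a
# hypothetical stand-in for a real trained classifier.
from statistics import mean

def score_frame(frame) -> float:
    """Return a toy probability (0.0-1.0) that this frame was manipulated.
    A real detector would run a neural network; here we just map average
    pixel intensity into [0, 1] so the example is self-contained."""
    return min(1.0, sum(frame) / (255.0 * len(frame)))

def score_video(frames) -> float:
    """Score each frame as the video 'plays', then aggregate.
    A real tool would surface the per-frame scores live; we average them
    into a single overall manipulation probability."""
    per_frame = [score_frame(f) for f in frames]
    return mean(per_frame)

# Toy 'video': three frames of four pixel intensities each.
video = [[10, 20, 30, 40], [200, 210, 220, 230], [50, 60, 70, 80]]
print(round(score_video(video), 3))  # prints 0.399
```

The design point is the aggregation step: a viewer sees a live score per frame, while a single summary number can flag the clip as a whole.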
Deepfakes, or synthetic media, can be photos, videos, or audio files manipulated by artificial intelligence (AI). Microsoft said deepfake detection is crucial in the run-up to the US elections.
See also: The threat of Deepfakes for the 2020 US elections is not what you would think (CNET)
The technology was created using the public FaceForensics++ dataset, and Microsoft said it was tested on the Deepfake Detection Challenge dataset, which it considers a leading benchmark for training and testing deepfake detection technologies.
“We expect that methods for generating synthetic media will continue to grow in sophistication. As all AI detection methods have failure rates, we have to understand and be ready to respond to deepfakes that slip through detection methods,” the company said in a blog post.
“Therefore, in the long term, we must seek stronger methods to maintain and certify the authenticity of news articles and other media.”
With few tools available to do this, Microsoft has also introduced new technology that it says can detect tampered content and assure people that the media they are viewing is authentic.
The technology has two components, the first being a tool built into Microsoft Azure that allows a content producer to add digital hashes and certificates to a piece of content.
“Hashes and certificates live with content as metadata wherever it travels online,” Microsoft explained.
The second is a reader, which can be included in a browser extension, that verifies the certificates and compares the hashes to determine authenticity.
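The producer/reader split described above can be sketched in a few lines. This is an illustrative assumption, not Microsoft's implementation: an HMAC with a shared demo key stands in for the certificate-based signatures the company describes, and the metadata format is invented for the example.

```python
# Hedged sketch of the hash-and-certificate idea: the producer attaches a
# content hash plus a signature as metadata; the reader recomputes the hash
# and checks the signature. HMAC is a stand-in for real certificate-based
# signing; the actual formats are not public.
import hashlib
import hmac

PRODUCER_KEY = b"demo-signing-key"  # stand-in for a certificate's private key

def publish(content: bytes) -> dict:
    """Producer side: compute a hash of the content and sign it."""
    digest = hashlib.sha256(content).hexdigest()
    signature = hmac.new(PRODUCER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    # The hash and signature travel with the content as metadata.
    return {"content": content, "hash": digest, "signature": signature}

def verify(package: dict) -> bool:
    """Reader side (e.g. a browser extension): recompute and compare."""
    digest = hashlib.sha256(package["content"]).hexdigest()
    expected = hmac.new(PRODUCER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == package["hash"] and hmac.compare_digest(
        expected, package["signature"]
    )

article = publish(b"original newsroom footage")
print(verify(article))  # prints True: untouched content checks out
article["content"] = b"tampered footage"
print(verify(article))  # prints False: any edit breaks the hash
```

The key property this illustrates is that the check is end-to-end: the reader never needs to fetch the original, only to recompute the hash locally and validate the signature that travelled with the content.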
In its fight against deepfakes, Microsoft has also partnered with the AI Foundation. Through the foundation’s Reality Defender 2020 initiative, the partnership will make the video authenticator available to organizations involved in the democratic process, including news outlets and political campaigns.
The video authenticator will initially be available only through the initiative.
Another partnership, with a consortium of media companies known as Project Origin, will see Microsoft’s authenticity technology tested. The Trusted News Initiative, a group of publishers and social media companies, has also agreed to engage with Microsoft to test its technology.
The University of Washington, deepfake detection firm Sensity, and USA Today have also teamed up with Microsoft to boost media literacy.
“Improving media literacy will help people separate misinformation from genuine facts and manage the risks posed by deepfakes and cheap fakes,” Microsoft said. “Practical media knowledge can enable us all to think critically about the context of media and become more engaged citizens, while still appreciating satire and parody.”
Through the partnership, a public service announcement campaign will encourage people to pause and reflect, and to verify that information comes from a reputable news organization, before sharing or promoting it on social media ahead of the elections.
The partners have also launched a questionnaire to teach American voters about synthetic media.