Microsoft has developed a technology for detecting deepfakes that it calls Video Authenticator, which can analyze a still photo or video and provide a confidence score, or percentage chance, that the media has been artificially manipulated. It works by analyzing subtle fading or grayscale elements in the image that may not be detectable by the human eye.
This technology was originally developed by Microsoft Research in coordination with Microsoft’s Responsible AI team and Microsoft’s AI, Ethics and Effects in Engineering and Research (AETHER) Committee, an advisory board that helps ensure new technology is developed and applied responsibly.
Video Authenticator was created using the public FaceForensics++ dataset and tested on the DeepFake Detection Challenge Dataset, both leading models for training and testing deepfake detection technologies.
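Microsoft has not published Video Authenticator’s internals, so any code can only gesture at the idea. The sketch below is a toy illustration, in Python with NumPy, of scoring an image by inconsistencies in its local grayscale contrast and returning a manipulation-confidence percentage; the function name, block size, and scoring heuristic are all assumptions for illustration, not the actual product’s method.

```python
# Toy sketch only: Microsoft has not disclosed how Video Authenticator scores
# media. This illustrates the general idea of flagging uneven grayscale detail
# (e.g., blended or faded regions) and mapping it to a 0-100 confidence value.
import numpy as np

def manipulation_confidence(image_rgb: np.ndarray, block: int = 16) -> float:
    """Return a rough 0-100 'confidence of manipulation' score for an RGB image."""
    gray = image_rgb.astype(np.float64).mean(axis=2)   # naive grayscale conversion
    h, w = gray.shape
    contrasts = []
    for y in range(0, h - block + 1, block):           # per-block local contrast
        for x in range(0, w - block + 1, block):
            contrasts.append(gray[y:y + block, x:x + block].std())
    if not contrasts:                                  # image smaller than one block
        return 0.0
    contrasts = np.array(contrasts)
    # Blended/faded regions tend to show unusually low local contrast; score the
    # spread between the most and least detailed blocks as a crude anomaly signal.
    spread = (contrasts.max() - contrasts.min()) / (contrasts.mean() + 1e-6)
    return float(np.clip(spread * 10.0, 0.0, 100.0))
```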
Microsoft’s other technology can both detect tampered content and assure people that the media they are watching is authentic; it has two components. The first is a tool built into Microsoft Azure that enables a content producer to add digital hashes and certificates to a piece of content.
Digital certificates and hashes live with content as metadata wherever it travels online.
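Microsoft has not documented the exact metadata format the Azure tool produces, but the general pattern of hashing content and signing that hash is standard. A minimal producer-side sketch in Python, using the cryptography package, might look like the following; attach_provenance_metadata, the key handling, and the metadata fields are illustrative assumptions, not Microsoft’s API.

```python
# Hypothetical producer-side sketch: hash the media bytes and sign the hash so
# the record can travel with the content as metadata. Names are illustrative.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def attach_provenance_metadata(media_bytes: bytes, producer: str,
                               signing_key: Ed25519PrivateKey) -> dict:
    """Return a metadata record binding a content hash to a producer signature."""
    content_hash = hashlib.sha256(media_bytes).hexdigest()
    signature = signing_key.sign(content_hash.encode("utf-8"))
    return {
        "producer": producer,
        "hash_algorithm": "sha256",
        "content_hash": content_hash,
        "signature": signature.hex(),
    }

if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()      # stand-in for the producer's certificate key
    video = b"...raw video bytes..."        # placeholder content
    print(json.dumps(attach_provenance_metadata(video, "Example Newsroom", key), indent=2))
```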
The second component is a reader that can exist as a browser extension or in other forms. It checks the certificates and matches the digital hashes, allowing people to know with a high degree of precision that the content is authentic and has not been modified, as well as providing details about who produced it.
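The reader side would then recompute the hash of whatever media the viewer received and check the producer’s signature over it. Again, verify_provenance and the key handling below are assumptions sketched for illustration, not the actual browser extension’s behavior.

```python
# Hypothetical reader-side sketch: recompute the content hash and verify the
# producer's signature from the attached metadata. Names are illustrative.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_provenance(media_bytes: bytes, metadata: dict,
                      producer_public_key: Ed25519PublicKey) -> bool:
    """Return True if the content matches its metadata and the signature checks out."""
    recomputed = hashlib.sha256(media_bytes).hexdigest()
    if recomputed != metadata["content_hash"]:
        return False                        # content was modified after signing
    try:
        producer_public_key.verify(bytes.fromhex(metadata["signature"]),
                                   metadata["content_hash"].encode("utf-8"))
        return True                         # authentic and unmodified
    except InvalidSignature:
        return False                        # signature does not match the claimed producer
```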
Disinformation isn’t new, but marketers may not know that Microsoft has supported research cataloging 96 foreign influence campaigns that targeted 30 countries between 2013 and 2019, spreading disinformation on social media platforms in attempts to defame notable people, persuade the public, or polarize debates.
While 26% of these campaigns targeted the US, other targeted countries include Armenia, Australia, Brazil, Canada, France, Germany, the Netherlands, Poland, Saudi Arabia, South Africa, Taiwan, Ukraine, the United Kingdom, and Yemen.
Approximately 93% included the creation of original content, while 86% amplified pre-existing content and 74% distorted objectively verifiable facts.
The data comes from research by Princeton University’s Jacob Shapiro, supported by Microsoft and updated this month.
Microsoft has been working on these two technologies to address different parts of the disinformation problem as part of the company’s Defending Democracy Program, which, in addition to fighting disinformation, helps protect voting through ElectionGuard and helps secure campaigns and other organizations involved in the democratic process through AccountGuard, Microsoft 365 for Campaigns, and Election Security Advisors.