San Francisco, United States:
Microsoft has introduced software that can help detect “deepfake” photos or videos, adding to the list of tools designed to combat hard-to-detect manipulated media ahead of the US presidential election.
Video Authenticator software analyzes an image or each frame of a video, looking for evidence of tampering that might even be invisible to the naked eye.
Deepfakes are photos, videos or audio clips altered using artificial intelligence to appear authentic and are already the target of initiatives on Facebook and Twitter.
“They can appear to make people say things they didn’t say or be in places they weren’t,” a company blog post said Tuesday.
Microsoft said it has partnered with the AI Foundation in San Francisco to make Video Authenticator available to political campaigns, the media and others involved in the democratic process.
Deepfakes are part of the world of online misinformation, which experts have warned can carry misleading or completely false messages.
Fake posts that appear genuine are of particular concern ahead of the US presidential election in November, especially after false social media posts surged during the 2016 vote that brought Donald Trump to power.
Microsoft also announced that it has incorporated technology into its Azure cloud computing platform that enables photo or video creators to add hidden data to their content, which can later be used to check whether images have been altered.
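Microsoft has not published the internals of this Azure feature, but the general idea behind such tamper checks can be sketched with a simple content fingerprint: the creator records a cryptographic hash at publication, and anyone receiving the content recomputes the hash and compares. The function names and data below are illustrative assumptions, not Microsoft’s actual API.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest serving as a content fingerprint."""
    return hashlib.sha256(data).hexdigest()

def is_unaltered(data: bytes, published_fingerprint: str) -> bool:
    """Check received content against the fingerprint recorded at publication."""
    return fingerprint(data) == published_fingerprint

# Hypothetical flow: a publisher records a fingerprint, a reader verifies it.
original = b"frame-data-from-publisher"
recorded = fingerprint(original)
print(is_unaltered(original, recorded))           # True
print(is_unaltered(b"tampered-frame", recorded))  # False
```

Even a one-bit change to the content produces a completely different digest, which is why hash comparison detects alterations that are invisible to the naked eye.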
The tech titan said it plans to test the program with media organizations such as the BBC and the New York Times.
Microsoft is also working with the University of Washington and others to help people get better at distinguishing misinformation from reliable facts.
“Practical media knowledge can enable us all to think critically about the context of media and become more engaged citizens, while still appreciating satire and parody,” the Microsoft post said.
(Except for the headline, this story has not been edited by NDTV staff and is posted from a syndicated feed.)