Adobe Wants to Make Photoshop a Tool for Spotting Fake Photos


Screenshot: Gizmodo

For 30 years, savvy pixel pushers have used Photoshop to manipulate and edit images, but now that computers can churn out doctored photos on their own with advanced AI, Adobe wants to use its image-editing tool to help verify the authenticity of photos.

As Wired reports, Adobe collaborated with a handful of companies last year to develop its Content Authenticity Initiative: an open standard that stamps photos with cryptographically protected metadata, such as the photographer’s name, when the photo was taken, the exact location where the image was created, and the names of any editors who manipulated the photo in some way, even if it’s just a basic color correction.

Later this year, Adobe plans to include CAI tagging capabilities in a preview release of Photoshop, so that when images are opened, edited, and saved through the app, a record of who has processed or manipulated the photo is continuously documented in an ever-growing log embedded in the image itself. The CAI system will also record when a photo is published to a news website, such as The New York Times (one of Adobe’s original partners on the initiative last year), or to a social network, like Adobe’s Behance, where users and artists can easily share their creations online.
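Conceptually, that embedded log works like an append-only chain of signed records. The sketch below illustrates the general idea in Python, assuming Ed25519 signatures from the third-party cryptography package; the field names, keys, and record layout are hypothetical illustrations, not the actual CAI format.

    # A minimal sketch of the idea behind CAI-style provenance -- NOT the
    # actual CAI format. Field names and structure here are hypothetical.
    import json
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def sign_entry(entry: dict, prev_hash: str, key: Ed25519PrivateKey) -> dict:
        """Append-only log: each entry commits to the hash of the one before it."""
        entry = {**entry, "prev_hash": prev_hash}
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["signature"] = key.sign(payload).hex()
        return entry

    photographer_key = Ed25519PrivateKey.generate()  # in practice, an issued identity key
    editor_key = Ed25519PrivateKey.generate()

    log = []
    # First entry: capture metadata recorded at the moment the photo is taken.
    capture = sign_entry(
        {"author": "Jane Doe", "action": "capture",
         "time": "2020-08-04T16:20:00Z", "location": "Portland, OR"},
        prev_hash="", key=photographer_key)
    log.append(capture)

    # Second entry: an edit, chained to the capture record above.
    prev = hashlib.sha256(json.dumps(capture, sort_keys=True).encode()).hexdigest()
    edit = sign_entry(
        {"author": "Copy Desk", "action": "color-correction",
         "time": "2020-08-04T18:02:00Z"},
        prev_hash=prev, key=editor_key)
    log.append(edit)
    # The full log would then travel inside the image file alongside the pixels.

Because each entry signs the hash of the one before it, rewriting any earlier step of the photo’s history would invalidate every signature that follows.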

The CAI system has the potential to help slow the spread of disinformation and manipulated photos, but it will require users to have access to the secure metadata and to take the initiative to verify that an image they’re looking at is authentic. For example, photos of an alleged violent protest that someone shared on Facebook the other day could be easily debunked by metadata revealing that the images were actually shot years ago in another part of the world.

To be effective, the CAI approach must be widely accepted and implemented on a large scale by those who create photo content, those who publish it, and those who consume it. Photographers working for established news organizations could easily be required to use it, but that accounts for only a small sliver of the imagery uploaded to the internet every day. So far, social media giants like Twitter and Facebook (which owns Instagram) don’t appear to be planning to jump on the CAI bandwagon, which is problematic because that’s where a lot of so-called “fake news” gets posted and widely shared.

Using cryptography makes it difficult, though not impossible, to tamper with the CAI metadata embedded in an image. There is also the potential for the metadata to be stripped from an image entirely and replaced with false information. And the CAI system, at least in its current form, does not include any protections to stop people from simply taking screenshots of signed and authenticated images and then modifying those. One day such protections could be built into an operating system, limiting the ability to screenshot an image based on its CAI credentials, but that’s a long way off.
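To make that distinction concrete, here is another minimal sketch, again assuming hypothetical field names and Ed25519 signatures via the cryptography package: altering a signed record is detectable, while a stripped or screenshotted copy carries no record at all.

    # Sketch of why signed metadata resists tampering but not removal;
    # the record layout is hypothetical, not the actual CAI format.
    import json
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    key = Ed25519PrivateKey.generate()
    record = {"author": "Jane Doe", "location": "Portland, OR"}
    signature = key.sign(json.dumps(record, sort_keys=True).encode())

    def verify(record: dict, signature: bytes) -> bool:
        try:
            key.public_key().verify(signature, json.dumps(record, sort_keys=True).encode())
            return True
        except InvalidSignature:
            return False

    print(verify(record, signature))   # True: the record is untouched
    record["location"] = "somewhere else entirely"
    print(verify(record, signature))   # False: tampering breaks the signature
    # But a screenshot (or a stripped file) simply has no record to verify --
    # the absence of credentials raises no cryptographic alarm at all.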

In the world of digital photography, Adobe carries a lot of weight and influence, and it will have to use as much of it as it can for its Content Authenticity Initiative to have any hope of being an effective tool against fakes. Getting a handful of big-name newspapers on board is simply not enough. Support for the CAI system must be built into digital cameras, computers, mobile devices, and any platform that can be used to share images. And it should be paired with a strong push to educate users, not only on how to use this information to spot fake news, but also on actually taking a few extra seconds to check for themselves whether an image is real – and that is perhaps the biggest obstacle. If the pandemic has taught us anything, it’s that people are willing to believe anything that supports their own ideals, no matter what the experts say.
