How close are deepfakes to being used in big-budget movies and TV shows? Pretty close, if a new Disney demo is anything to go by. In a video and accompanying paper presented at a computer graphics conference this week, researchers at the House of Mouse show off what they say is the first photorealistic deepfake at megapixel resolution.
And the results are … pretty good! They’re not mind-blowing, mind you, and they’re not good enough to be used in the next Marvel movie, but they’re a solid step up from the deepfakes we’ve seen in the past.
As the researchers note, what’s new here is that megapixel resolution. Megapixels are no longer synonymous with high-quality imagery the way they once were. (The camera on your phone probably boasts a double-digit megapixel count, for a start.) But until now, deepfake technology has focused on seamless face swaps rather than on increasing pixel count.
The deepfakes you’ve probably seen to date can look impressive on your phone, but their flaws would be much more apparent on a larger screen. As an example, the Disney researchers point out that the highest-resolution videos they could create with the popular open-source model DeepFaceLab were only 256 x 256 pixels in size. By comparison, their model can produce video at a resolution of 1024 x 1024 — sixteen times the pixel count.
Beyond the resolution, the functionality of Disney’s deepfake model is fairly conventional: it swaps the appearance of one person onto another while preserving the target’s facial expressions. If you watch the video, though, note how technically constrained the results appear. The model only produces deepfakes of well-lit subjects looking more or less directly at the camera; challenging angles and lighting are not yet on the agenda for this technology.
Still, as the researchers point out, we are getting closer to deepfakes good enough for commercial projects. Right now, when a company like Disney wants to do a face swap, it uses traditional VFX, as the studio did when it created digital models of the late actors Peter Cushing and Carrie Fisher for the Star Wars movie Rogue One.
“While those results are impressive, they are expensive to produce and generally require many months of work to achieve a few seconds of footage,” the researchers write. By comparison, deepfakes require far less oversight once the underlying model has been trained, and can produce video in a matter of hours (given a sufficient budget for computing power).
Sooner or later, deepfakes will cease to be a research project and become a viable option for big studios. In fact, some would say they’re already there.