Facebook has developed an artificial intelligence that it claims can detect deepfake images and even reverse-engineer them to figure out how they were made and perhaps trace their creators.
Deepfakes are wholly artificial images created by an AI. Facebook’s new AI examines a collection of deepfakes for shared traits that suggest a common origin, such as small speckles of noise or slight oddities in the colour spectrum of an image.
By identifying these subtle fingerprints in an image, Facebook’s AI can infer details of how the neural network that created the image was designed, such as how large the model is or how it was trained.
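The article doesn’t describe Facebook’s actual method, but the general idea of noise fingerprinting can be sketched simply: strip the image content with a high-pass filter, average the residuals over many images from one source so the content washes out, and correlate the resulting patterns. Everything below (the box-blur residual, the correlation threshold) is an illustrative assumption, not Facebook’s model.

```python
import numpy as np

def noise_residual(img, k=3):
    # High-pass residual: image minus a local box-blur mean.
    # Generative models tend to leave faint, repeated artefacts here.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    blurred = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= k * k
    return img - blurred

def fingerprint(images):
    # Average residuals over many images from one source: the random
    # content cancels out while the shared artefact pattern remains.
    return np.mean([noise_residual(im) for im in images], axis=0)

def same_origin(fp_a, fp_b, threshold=0.5):
    # Normalised correlation of two fingerprints; a high value hints
    # at a shared generator (threshold is an illustrative choice).
    a = (fp_a - fp_a.mean()).ravel()
    b = (fp_b - fp_b.mean()).ravel()
    corr = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return corr > threshold
```

In this toy setup, two batches of images carrying the same hidden artefact pattern produce highly correlated fingerprints, while batches from a different "generator" do not.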
“I thought there’s no way this is going to work,” says Tal Hassner at Facebook. “How would we, just by looking at a photo, be able to tell how many layers a deep neural network had, or what loss function it was trained with?”
Hassner and his colleagues tested the AI on a database of 100,000 deepfake images produced by 100 different generative models, each contributing 1000 images. Some of those images were used to train the model, while others were held back and presented to the model as images of unknown origin.
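The key detail of this setup is that entire generators, not just individual images, are held back, so the detector is evaluated on models it has never seen. A minimal sketch of that kind of split (the function name and fraction are illustrative assumptions, not the paper's protocol):

```python
import random

def split_by_generator(images_by_model, held_out_fraction=0.2, seed=0):
    # Hold out whole generative models rather than a random subset of
    # images, so test images come from generators unseen in training.
    models = sorted(images_by_model)
    rng = random.Random(seed)
    rng.shuffle(models)
    n_held = max(1, int(len(models) * held_out_fraction))
    held = set(models[:n_held])
    train = {m: v for m, v in images_by_model.items() if m not in held}
    test = {m: v for m, v in images_by_model.items() if m in held}
    return train, test
```

An image-level random split would leak each generator's fingerprint into training; splitting by generator is what makes "even if we've never seen that model before" a meaningful claim.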
That helped test the AI in its ultimate goal. “What we’re doing is looking at a photo and trying to estimate what is the design of the generative model that created it, even if we’ve never seen that model before,” says Hassner. He declined to share how accurate the AI’s estimates were, but says “we’re way better than random”.
“It’s a big step forward for fingerprinting,” says Nina Schick, author of Deep Fakes and the Infocalypse. But she points out – as do Hassner and his colleagues – that the AI only works on images that have been fully artificially generated, while many deepfakes are videos created by pasting one face on to someone else’s body.
Schick also wonders how effective the AI would be outside lab environments, encountering deepfakes in the “wild”. “The kind of face detection models we see are broadly based on academic data sets and are deployed in controlled environments,” she says.
Hassner declined to talk about how Facebook would be using its new AI, but says that this kind of work is a cat-and-mouse game against people creating deepfakes. “We’re developing better identifying models while others are developing better and better generative models,” he says. “I don’t doubt that at some point there’ll be a method that will fool us completely.”
Read more at New Scientist