With the rise of digital media, the use of fake videos and pictures has risen as well. Adobe's Photoshop is the image manipulation software most commonly used in such fraud cases.
This time, however, Adobe is introducing a new tool that uses machine learning to automatically detect when images of faces have been manipulated.
The tool can spot an altered face and also estimate what the original image likely looked like.
The tool is the result of joint research between Adobe and scientists from UC Berkeley. The aim of the work is to restore faith in digital media at a time when countless fakes and touch-ups circulate.
“While we are proud of the impact that Photoshop and Adobe’s other creative tools have made on the world, we also recognize the ethical implications of our technology,” Adobe said.
“Fake content is a serious and increasingly pressing issue.”
The researchers specifically designed this tool to spot edits made with Photoshop’s Liquify tool, which is commonly used to adjust the shape of faces and alter facial expressions.
How did the team train the AI?
The team of researchers first studied the Photoshop feature called Face-Aware Liquify, which can be used to change people’s faces, eyes, and mouths.
The team then collected and pre-processed a dataset of paired faces, containing images from both before and after they had been edited with Liquify.
The researchers then trained a convolutional neural network (CNN) on this dataset to analyze the imagery and pick up the changes made to the faces.
The software looks for several different clues, from warping artifacts to the layout of the face.
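The training setup described above — a CNN learning to separate original face crops from their Liquify-edited counterparts — can be sketched as a binary classifier. This is a minimal PyTorch illustration under stated assumptions: the tiny architecture, input sizes, and random stand-in tensors are hypothetical, not Adobe and Berkeley's actual network or data.

```python
import torch
import torch.nn as nn

class FakeFaceDetector(nn.Module):
    """Toy CNN binary classifier: real vs. edited face crops.
    Illustrative only; the researchers' real network is far larger."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pool to a 64-d vector
        )
        self.classifier = nn.Linear(64, 2)  # logits: [original, edited]

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Each original/edited pair yields one "original" (label 0) and one
# "edited" (label 1) training example.
model = FakeFaceDetector()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

originals = torch.randn(4, 3, 64, 64)  # stand-ins for unedited face crops
edited = torch.randn(4, 3, 64, 64)     # stand-ins for Liquify-edited crops
batch = torch.cat([originals, edited])
labels = torch.cat([torch.zeros(4, dtype=torch.long),
                    torch.ones(4, dtype=torch.long)])

logits = model(batch)            # one step of the usual training loop
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```

In practice such a classifier would be trained for many epochs over the full paired dataset, with augmentation to keep it from keying on compression artifacts rather than the warp itself.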
In terms of results, the AI tool achieved 99 percent accuracy in picking out the faked photos, whereas human eyes could identify the altered face only 53 percent of the time.
More surprisingly, the software goes one step further and makes a rough estimate of what the original image likely looked like, reverse-engineering it from the artifacts and signals left by the manipulation.
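One way to picture this "undo" step: since Liquify works by warping pixels, a model that predicts the per-pixel displacement (flow) field of the edit can approximately invert it by resampling with that field reversed. The NumPy sketch below is a hypothetical illustration of that idea, not Adobe's method; the `warp` and `undo_warp` helpers are names invented here.

```python
import numpy as np

def warp(image, flow):
    """Apply a per-pixel displacement field with nearest-neighbour sampling.
    flow[y, x] = (dy, dx): output pixel (y, x) is sampled from (y+dy, x+dx)."""
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            sy = min(max(int(round(y + flow[y, x, 0])), 0), h - 1)
            sx = min(max(int(round(x + flow[y, x, 1])), 0), w - 1)
            out[y, x] = image[sy, sx]
    return out

def undo_warp(edited, predicted_flow):
    """Approximately invert an edit by resampling with the negated flow.
    Exact for constant flow; approximate for smooth, spatially varying flow."""
    return warp(edited, -predicted_flow)

# Demo: warp a gradient image by a constant shift, then undo it.
image = np.arange(16 * 16).reshape(16, 16)
flow = np.zeros((16, 16, 2))
flow[..., 0] = 2.0  # shift rows by 2
flow[..., 1] = 3.0  # shift columns by 3
edited_img = warp(image, flow)
recovered = undo_warp(edited_img, flow)
```

A real system would predict `flow` with a network and use smooth (bilinear) resampling, but the inversion principle is the same.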
The researchers said the work was the first of its kind designed to spot this sort of facial edit and constitutes an “important step” toward creating tools that can identify complex changes, including “body manipulations and photometric edits such as skin smoothing.”
“The feature’s effects can be delicate, which made it an intriguing test case for detecting both drastic and subtle alterations to faces,” said Adobe.
Adobe’s head of research, Gavin Miller, said in a statement, “This is an important step in being able to detect certain types of image editing, and the undo capability works surprisingly well.”
“Beyond technologies like this, the best defense will be a sophisticated public who know that content can be manipulated — often to delight them, but sometimes to mislead them.”
“The idea of a magic universal ‘undo’ button to revert image edits is still far from reality,” Adobe researcher Richard Zhang, who helped conduct the work, said in a company blog post.
“But we live in a world where it’s becoming harder to trust the digital information we consume, and I look forward to further exploring this area of research.”
Adobe says that continued research into software that detects image manipulation could help democratize image forensics; in other words, it could make it easier for the average person scrolling through social media or a webpage to spot a manipulated photograph.
The company’s research team will continue to explore the topic of authenticity, including discussing how to balance safeguards with creativity and storytelling tools.