Deepfakes are a very dangerous technology when misused. In a deepfake, images or videos are combined and superimposed with different audio and visual sources to create an entirely new (fake) video or image that can fool even digital-forensics and image-analysis experts.
The doctored media created with deepfakes can easily fool anyone. Deepfakes are heavily used to create fake celebrity pornographic videos and revenge porn, and they have also been used to misrepresent well-known politicians on video portals and in chatrooms.
Facebook has 2.23 billion monthly active users, and roughly 350 million new photos are uploaded to it each day, so it is poised to become the biggest dumping ground for deepfake content.
Recently, Sens. Angus King, a Maine independent, and James Lankford, an Oklahoma Republican, asked Facebook whether it has any technology that can determine that a video uploaded to Facebook has been manipulated by a deepfake and display a warning such as "this video or image has been manipulated."
Following last week's hearing, Facebook has now announced that it is expanding its fact-checking efforts to scan photos and videos uploaded to the social network for evidence that they have been manipulated, as lawmakers sound new alarms that foreign adversaries might try to spread misinformation through fake visual content.
In 17 countries, including the United States, Facebook said it has deployed its powerful algorithms to “identify potentially false” images and videos, then send those flagged posts to outside fact-checkers for further review.
Facebook said it's trying to stamp out content that has been doctored, taken out of context, or paired with misleading text. Its US fact-checking partners include the Associated Press, factcheck.org, Politifact, Snopes, and the conservative paper The Weekly Standard.
“We are also leveraging other technologies to better recognize false or misleading content. For example, we use optical character recognition (OCR) to extract text from photos and compare that text to headlines from fact-checkers’ articles. We are also working on new ways to detect if a photo or video has been manipulated,” Antonia Woodford, a product manager at Facebook, wrote Thursday.
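The compare-to-headlines step can be sketched roughly as follows. Facebook has not published its matching method, so this is only an illustrative approximation: the OCR output is assumed to be already extracted (e.g. by a tool such as Tesseract), and the matching here uses simple string similarity from Python's standard library.

```python
from difflib import SequenceMatcher

def best_headline_match(ocr_text, debunked_headlines, threshold=0.8):
    """Compare OCR-extracted photo text against headlines from
    fact-checkers' articles; return the closest debunked headline
    if it exceeds the similarity threshold, else None."""
    ocr_text = ocr_text.lower().strip()
    best, best_score = None, 0.0
    for headline in debunked_headlines:
        score = SequenceMatcher(None, ocr_text, headline.lower()).ratio()
        if score > best_score:
            best, best_score = headline, score
    return best if best_score >= threshold else None

# Hypothetical debunked headlines and meme text, for illustration only
debunked = ["Politician X holds a US green card",
            "Miracle cure discovered for the flu"]
match = best_headline_match("politician x holds a us green card!", debunked)
miss = best_headline_match("totally unrelated text", debunked)
```

Here `match` comes back as the first debunked headline while `miss` is `None`; a production system would of course use a far more robust matcher than raw edit-distance similarity.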
Facebook has built a machine-learning model to flag photo and video content that may be false; flagged content is then sent to its fact-checkers for evaluation.
In one of the examples Facebook shared, fact-checkers in Mexico previously identified a "false photo" of a local politician whose face had been Photoshopped onto a US green card, wrongly suggesting he is a US citizen.
The AI is able to automatically spot manipulated images, out-of-context images that don't show what they claim to, and text or audio claims that are provably false. The third-party fact-checking partners can also use visual verification techniques, including reverse image searching and image metadata analysis, to review the content.
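At its simplest, reverse image searching means looking an image up in an index of previously fact-checked content. The sketch below is a toy exact-match index keyed on a cryptographic hash; the class name and sample data are invented for illustration, and real reverse-image-search systems use perceptual fingerprints and large crawled indexes rather than byte-level hashes.

```python
import hashlib

class ImageIndex:
    """Toy reverse-image-search index keyed on a byte-level hash.
    Only catches exact copies; even a one-pixel edit changes the key."""
    def __init__(self):
        self._seen = {}

    def add(self, image_bytes, verdict):
        key = hashlib.sha256(image_bytes).hexdigest()
        self._seen[key] = verdict

    def lookup(self, image_bytes):
        key = hashlib.sha256(image_bytes).hexdigest()
        return self._seen.get(key)

index = ImageIndex()
index.add(b"fake-green-card-image", "false: face photoshopped onto a green card")
verdict = index.lookup(b"fake-green-card-image")     # exact copy -> found
missed = index.lookup(b"fake-green-card-image-v2")   # slightly changed -> None
```

The miss on the edited copy is exactly the weakness Facebook describes below: exact duplicates are easy, near duplicates are not.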
The system is built to keep a close eye on inappropriate or harmful content on Facebook and Instagram. Its AI can sift through an immense amount of data in a very short period of time, extracting text from more than a billion public Facebook and Instagram images and video frames daily, in real time.
Fact-checkers are able to assess the truth or falsity of a photo or video by combining these skills with other journalistic practices, like using research from experts, academics or government agencies.
“As we get more ratings from fact-checkers on photos and videos, we will be able to improve the accuracy of our machine learning model,” Woodford wrote.
The AI can detect three types of misinformation in photos and video: manipulated or fabricated content; content that's presented out of context; and false claims in text or audio.
The Facebook team said it will use the AI across Facebook, Messenger, Instagram, and WhatsApp to automatically identify content that violates its hate-speech policy in various languages.
“We use optical character recognition (OCR) to extract text from photos and compare that text to headlines from fact-checkers’ articles. We are also working on new ways to detect if a photo or video has been manipulated,” Woodford notes, referring to deepfakes that use AI video-editing software to make someone appear to say or do something they haven't.
Facebook offers a high-level overview of the difficulties of identifying false information in image and video content compared to text, and of some of the techniques it's using to overcome them. But the overall impression is that Facebook isn't close to having an automated system for detecting misinformation in photos and videos at scale.
Tessa Lyons, a Facebook product manager, said Facebook is "pretty good" at finding exact duplicates of photos, but when images are slightly manipulated they are much harder to detect automatically.
“We need to continue to invest in technology that will help us identify very near duplicates that have been changed in small ways,” said Lyons.
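One common family of techniques for catching "near duplicates that have been changed in small ways" is perceptual hashing: reduce an image to a tiny fingerprint that barely changes under small edits, then compare fingerprints by Hamming distance. Below is a stdlib-only sketch of an average hash over an already-downscaled 8x8 grayscale thumbnail; a real pipeline would resize real images with an imaging library, and nothing here is claimed to be Facebook's actual method.

```python
def average_hash(pixels):
    """pixels: flat list of 64 grayscale values (an 8x8 thumbnail).
    Each bit records whether a pixel is above the mean brightness."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(h1, h2):
    """Number of differing bits; a small distance means a near-duplicate."""
    return sum(a != b for a, b in zip(h1, h2))

original  = [10] * 32 + [200] * 32          # half dark, half bright
tweaked   = [12] * 32 + [198] * 31 + [90]   # slightly edited copy
unrelated = [10, 200] * 32                  # different structure

d_near = hamming(average_hash(original), average_hash(tweaked))
d_far  = hamming(average_hash(original), average_hash(unrelated))
is_near_duplicate = d_near <= 5 < d_far     # e.g. a 5-bit threshold
```

The tweaked copy differs from the original by only one fingerprint bit while the unrelated image differs by half the bits, which is why a byte-level hash misses edits that a perceptual hash still catches.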
And detecting when something has been presented out of context is also a major challenge, according to Lyons.
“Understanding if something has been taken out of context is an area we’re investing in but have a lot of work left to do, because you need to understand the original context of the media, the context in which it’s being presented, and whether there’s a discrepancy between the two,” she noted.
The impact of deepfakes in photo and video content also differs by country. Facebook has found that in the US most people report seeing misinformation in articles, whereas in Indonesia people more often report seeing misleading information in photos.
“In countries where the media ecosystem is less developed or literacy rates are lower, people might be more likely to see a false headline on a photo, or see a manipulated photo, and interpret it as news, whereas in countries with robust news ecosystems, the concept of ‘news’ is more tied to articles,” said Lyons.
Facebook CEO Mark Zuckerberg told members of Congress in April that similar systems would be some of the most powerful ways the site could combat hate speech, fake news, discrimination and propaganda across the world. “Building AI tools is going to be the scalable way to identify and root out most of this harmful content,” he said.