In 2019, the biggest concern about technology is privacy, but what if technology is being turned against itself? With the help of artificial intelligence, researchers have created fake fingerprints that could be a hacker’s dream tool.
In a research paper, New York University and Michigan State University researchers detailed how deep learning technologies could be used to weaken biometric security systems.
The group has developed machine learning methods for generating fake fingerprints, called DeepMasterPrints, that not only dupe smartphone sensors but can successfully masquerade as prints from numerous different people.
The researchers claim that the system can fool fingerprint scanners on smartphones, raising the risk of hackers exploiting the vulnerability to steal from victims’ online bank accounts.
The team includes five researchers, led by Philip Bontrager of the New York University engineering school. The research, supported by a United States National Science Foundation grant, won the best paper award at a conference on biometrics and cybersecurity in October.
According to the paper, the researchers used neural networks to create convincing-looking digital fingerprints that performed even better than the images used in an earlier study.
Researchers use GANs to produce the convincing-looking but fabricated photos and videos known as “deepfakes.”
With GANs, researchers combine two neural networks that work together to create realistic images with subtle properties that can fool image-recognition software.
The researchers trained neural networks on thousands of publicly available fingerprint images, so the system could begin to output a variety of realistic fingerprint snippets.
The researchers trained one neural network to recognize real fingerprint images and trained the other to create its own fake fingerprints.
After that, the researchers fed the second neural network’s fake fingerprint images into the first neural network to test how effective they were.
Over time, the second neural network learned to generate realistic-looking fingerprint images that could trick the other neural network.
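The adversarial loop described above can be sketched in miniature. This is a toy stand-in, not the authors’ model: the “fingerprints” are one-dimensional numbers, the discriminator is a logistic regression, and the generator is an affine map of noise. The structure, though, is the same: the discriminator learns to separate real samples from fakes, while the generator learns to produce samples the discriminator accepts.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Stand-in for real fingerprint data: samples from N(4, 1).
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator: noise z -> a*z + b.  Discriminator: D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0
w, c = 0.0, 0.0
lr, n = 0.05, 64

for step in range(2000):
    # --- discriminator update: push D(real) up, D(fake) down ---
    x_real = real_batch(n)
    x_fake = a * rng.normal(0.0, 1.0, n) + b
    s_real = sigmoid(w * x_real + c)
    s_fake = sigmoid(w * x_fake + c)
    w -= lr * np.mean(-(1 - s_real) * x_real + s_fake * x_fake)
    c -= lr * np.mean(-(1 - s_real) + s_fake)

    # --- generator update: push D(fake) up ---
    z = rng.normal(0.0, 1.0, n)
    x_fake = a * z + b
    s_fake = sigmoid(w * x_fake + c)
    dx = -(1 - s_fake) * w          # gradient of -log D(x_fake) w.r.t. x_fake
    a -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

# The generator's output distribution drifts toward the real one.
fakes = a * rng.normal(0.0, 1.0, 500) + b
print(float(np.mean(fakes)))
```

In a real GAN both players are deep networks trained by backpropagation, but the alternating push-and-pull shown here is exactly the dynamic the article describes.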
Then they used a technique called “evolutionary optimization” to guide the output of the neural networks, searching for fingerprints whose every characteristic was as familiar and convincing as possible, so they could succeed as master prints.
Julian Togelius, one of the paper’s authors and an NYU associate computer science professor, said the team created the fake fingerprints, dubbed DeepMasterPrints, using a variant of neural network technology called “generative adversarial networks (GANs),” which he said “have taken the AI world by storm for the last two years.”
In terms of results, against a moderately stringent security setting, the research team’s DeepMasterPrints matched anywhere from two or three percent of the records on the different commercial platforms up to about 20 percent, depending on which prints they tested.
Overall, the master prints got 30 times more matches than the average real fingerprint—even at the highest security settings, where the master prints didn’t perform particularly well.
Think of a master print attack, then, like a password dictionary attack, in which hackers don’t need to get it right in one shot, but instead systematically try common combinations to break into an account.
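The arithmetic behind the dictionary-attack analogy is simple: if each master print matches a given user with probability p, then trying k different prints succeeds with probability 1 − (1 − p)^k. The rate below is illustrative (the low end of the range quoted above), not the paper’s exact measurement.

```python
# Success probability of trying k master prints against one account,
# assuming each print independently matches with probability p.
p = 0.03  # illustrative per-print match rate at a moderate security setting
for k in (1, 3, 5):
    success = 1 - (1 - p) ** k
    print(k, round(success, 3))
```

Even a 3 percent per-print rate compounds quickly: five attempts already push the success probability past 14 percent, which is why a small dictionary of master prints is dangerous.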
“Even as these synthetic measures get better and better, if you’re paying attention to it you should be able to design systems that are at higher and higher resolution and aren’t easily attacked,” Bontrager says. “But it will affect cost and design.”