Impossible is a meaningless word for AI, and a new research paper published by Nvidia has once again proved it. Nvidia has developed an AI system that could make protecting photos online much harder: it can automatically remove noise, grain, and even watermarks from photos.
The system was created by Nvidia together with researchers from Aalto University and the Massachusetts Institute of Technology. The researchers on the project are Jaakko Lehtinen, Jacob Munkberg, Jon Hasselgren, Samuli Laine, Tero Karras, Miika Aittala, and Timo Aila.
“Recent deep learning work in the field has focused on training a neural network to restore images by showing example pairs of noisy and clean images,” Nvidia writes.
The system is powered by a deep learning neural network. The researchers trained it on Nvidia Tesla P100 GPUs with the cuDNN-accelerated TensorFlow deep learning framework, using 50,000 images from the ImageNet validation set.
The most impressive thing about this AI is that it can teach itself to fix corrupted photos just by looking at them, without ever being shown before-and-after pairs of corrupted and clean images. It requires only two corrupted versions of an image to proceed with removing the noise.
“It is possible to learn to restore signals without ever observing clean ones, at performance sometimes exceeding training using clean exemplars,” reads the paper.
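The statistical intuition behind that claim can be illustrated with a toy numpy sketch (hypothetical values, not Nvidia's actual code): under a squared-error loss, the best prediction for a set of targets corrupted by zero-mean noise converges to their mean, which is the clean signal — so clean targets are never needed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical clean "pixel" value and many independently corrupted
# observations of it (zero-mean Gaussian noise).
clean_value = 0.5
noisy_targets = clean_value + rng.normal(0.0, 0.2, size=10_000)

# Under a mean-squared-error loss, the optimal constant prediction for
# these targets is their mean -- and because the noise averages to zero,
# that prediction lands on the clean value even though no clean target
# was ever observed.
prediction = noisy_targets.mean()  # close to 0.5
```

The same cancellation happens inside a denoising network trained on pairs of differently corrupted copies of the same image, which is why performance can match training with clean exemplars.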
Previous deep learning systems for image retouching, by contrast, centered on training neural networks to restore images by comparing noisy and clean examples.
The system uses clever algorithms that make it possible for a computer to zero in on the exact watermark and remove it from a photo as if rubbing away a smudge.
The removal works by identifying repeating patterns (such as watermarks) in a large collection of photos that carry the exact same watermark — which may be the case for your photos if you use an action to apply your watermark. The computer can then form a rough estimate of what the watermark looks like by treating the underlying image as noise and the watermark as the target.
The original photo is then recovered by solving what Google calls a “multi-image optimization problem,” which separates the watermark (the foreground) from the photo itself (the background).
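Why repetition gives the watermark away can be sketched with toy numpy data (a simplification, not the actual optimization): when many photos carry the same additive watermark, the varying backgrounds average out per pixel, leaving the shared pattern behind.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: 500 "photos" with random backgrounds, all stamped with the
# same additive watermark pattern in the center of an 8x8 canvas.
watermark = np.zeros((8, 8))
watermark[2:6, 2:6] = 0.3
photos = rng.uniform(0.0, 1.0, size=(500, 8, 8)) + watermark

# Per-pixel averaging: the random backgrounds converge to their flat
# mean (0.5), so subtracting it leaves a rough estimate of the shared
# watermark -- strong where the mark sits, near zero elsewhere.
estimate = photos.mean(axis=0) - 0.5
```

A real pipeline refines this rough estimate jointly with per-photo decompositions, but the core leverage is the same: the watermark is the only thing that repeats.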
The optimization can produce “very accurate estimations” of the watermark’s own components and is able to deal with most watermarks seen on all kinds of photos.
“[The neural network] is on par with state-of-the-art methods that make use of clean examples — using precisely the same training methodology, and often without appreciable drawbacks in training time or performance,” the researchers add in the paper.
The defense is relatively simple, though. The removal works only because there is “consistency in watermarks across image collections.” So, to counteract the ease of removal, photographers need to introduce inconsistencies into their watermarks. Even a subtle warp of the watermark in each photo is enough.
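The countermeasure can be sketched with the same kind of toy data (again hypothetical, not a real watermarking tool): if the watermark's position is jittered slightly in each photo, the per-pixel statistics no longer line up, and any pooled estimate smears out.

```python
import numpy as np

rng = np.random.default_rng(2)

# A crisp 2x2 watermark block on an 8x8 canvas.
base = np.zeros((8, 8))
base[3:5, 3:5] = 1.0

def jittered(mark, rng):
    """Shift the mark by a random offset in {-1, 0, 1} on each axis --
    a toy stand-in for the subtle per-photo warp the article suggests."""
    dy, dx = rng.integers(-1, 2, size=2)
    return np.roll(np.roll(mark, dy, axis=0), dx, axis=1)

# Averaging 1,000 jittered copies: because the mark never lines up
# pixel to pixel across photos, the pooled estimate is blurred and
# weak instead of crisp and full-strength.
smeared = np.mean([jittered(base, rng) for _ in range(1000)], axis=0)
```

Even this one-pixel jitter cuts the pooled pattern's peak well below the original watermark's intensity, which is the inconsistency the paper's attack cannot exploit.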
“There are several real-world situations where obtaining clean training data is difficult: low-light photography (e.g., astronomical imaging), physically-based rendering, and magnetic resonance imaging,” reads the paper’s discussion section. “Our proof-of-concept demonstrations point the way to significant potential benefits in these applications by removing the need for potentially strenuous collection of clean data.”
Perhaps the system’s best asset is its speed: it can work faster than professional photo restorers, sometimes rendering frames in just 7 minutes, while producing results that are as good or better.
The system does have limitations. The researchers point out that it cannot yet recover features that are absent from the input photos. However, the same drawback applies to software trained on clean inputs. “Of course, there is no free lunch – we cannot learn to pick up features that are not there in the input data – but this applies equally to training with clean targets,” reads the paper.
The researchers presented their work at the International Conference on Machine Learning in Stockholm, Sweden.