Built-In Neural Hardware Allows Image Recognition in Nanoseconds

Computer vision is one of the most important applications of AI. From detecting cancer in an MRI or CT scan to unlocking an iPhone with your face, image recognition technology is now used in nearly every industry.

But image recognition still has one major disadvantage: real-time performance degrades as images become more complex, because processing the data and displaying the result takes time. Especially when many images are recorded per second, the resulting data volume can hardly be handled.



To overcome this disadvantage, researchers at the Vienna University of Technology (TU Wien) have developed an image sensor that can be trained to recognize certain objects in a matter of nanoseconds.

The image sensor chip represents an artificial neural network capable of learning. The data does not have to be read out and processed by a computer, but the chip itself provides information about what it is currently seeing within nanoseconds. The work has now been presented in the scientific journal “Nature.”




The chip was developed at TU Wien. It is based on photodetectors made of tungsten diselenide, an ultra-thin material consisting of only three atomic layers. The individual photodetectors, the “pixels” of the camera system, are all connected to a small number of output elements that provide the result of object recognition.

The researchers demonstrate both supervised and unsupervised learning and train the sensor to classify and encode images that are optically projected onto the chip, with a throughput of 20 million bits per second.
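Conceptually, the chip acts as a single-layer neural network: every pixel feeds each output element through an adjustable responsivity. A minimal sketch of that layout, with made-up sizes and values purely for illustration:

```python
import numpy as np

# Hypothetical sketch of the chip's layout as a single-layer network:
# every photodetector "pixel" is connected to each of a few output
# elements through an adjustable responsivity (a weight). The sizes
# and values here are illustrative, not those of the real chip.
N_PIXELS = 9       # a tiny 3x3 sensor for illustration
N_OUTPUTS = 3      # a small number of output elements

rng = np.random.default_rng(0)
responsivity = rng.normal(size=(N_OUTPUTS, N_PIXELS))

def chip_output(photocurrents):
    """Each output element delivers a weighted sum of photocurrents."""
    return responsivity @ photocurrents

image = rng.random(N_PIXELS)    # light intensities projected onto the chip
result = chip_output(image)     # one signal per output element
```

Because the weighted sums are formed by the detector hardware itself, there is no pixel-by-pixel readout step, which is where the nanosecond-scale speed comes from.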




The system is designed to work like neurons within the brain: when one cell is active, it can influence the activity of neighboring nerve cells. Neural networks are artificial systems built on the same idea.

Machine learning on a computer works according to the same principle: a network of neurons is simulated digitally, and the strength with which one node of this network influences another is changed until the network shows the desired behavior.
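The principle described above can be sketched in a few lines: node activity propagates through weighted connections, and learning means changing those weights. All the numbers here are arbitrary, purely to show the mechanism:

```python
import numpy as np

# Minimal sketch: each node's next activity is a weighted sum of
# its neighbors' current activity. Learning adjusts the weights.
activity = np.array([1.0, 0.0, 0.5])       # activity of three nodes

influence = np.array([[0.0, 0.8, 0.1],     # influence[i, j]: how strongly
                      [0.2, 0.0, 0.9],     # node j drives node i
                      [0.5, 0.3, 0.0]])

# Node 0 receives 0.1 * 0.5 = 0.05, node 1 receives 0.2 + 0.45 = 0.65,
# node 2 receives 0.5 -- active cells influence their neighbors.
next_activity = influence @ activity
```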

In the research paper, the team of researchers demonstrates that an image sensor can itself constitute an ANN that can simultaneously sense and process optical images without latency.

“Typically the image data is read out pixel by pixel and then processed on the computer,” professor Thomas Mueller said. “We, on the other hand, integrate the neural network with its artificial intelligence directly into the hardware of the image sensor. This makes object recognition many orders of magnitude faster.”

“In our chip, we can specifically adjust the sensitivity of each individual detector element; in other words, we can control the way the signal picked up by a particular detector affects the output signal,” says Lukas Mennel, first author of the study. “All we have to do is simply adjust a local electric field directly at the photodetector.”

This adaptation is done externally, with the help of a computer program. One can, for example, use the sensor to record different letters and adjust the sensitivities of the individual pixels step by step until a certain letter always leads to exactly the corresponding output signal.

In this way, the neural network in the chip is configured, making some connections in the network stronger and others weaker.

Once the learning process is complete, the computer is no longer needed. The neural network is then able to work alone. If a certain letter is presented to the sensor, it generates the trained output signal within 50 nanoseconds — for example, a numerical code representing the letter that the chip has just recognized.
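The letter example above can be sketched as a small training loop: tiny binary “letters” are presented, the per-pixel sensitivities are nudged step by step until each letter produces its code, and afterwards the weights alone do the recognition. The patterns, codes, and learning rule (a simple delta rule) are illustrative assumptions, not the team’s actual procedure:

```python
import numpy as np

# Illustrative sketch of training pixel sensitivities on "letters":
# 3x3 binary patterns and their numerical codes are made up here.
letters = {
    "T": np.array([1, 1, 1, 0, 1, 0, 0, 1, 0], float),
    "L": np.array([1, 0, 0, 1, 0, 0, 1, 1, 1], float),
    "X": np.array([1, 0, 1, 0, 1, 0, 1, 0, 1], float),
}
codes = {"T": 0, "L": 1, "X": 2}   # numerical code per letter

W = np.zeros((3, 9))               # one weight per (output, pixel) pair
for _ in range(200):               # step-by-step sensitivity adjustment
    for name, img in letters.items():
        target = np.eye(3)[codes[name]]          # desired output signal
        error = target - W @ img
        W += 0.1 * np.outer(error, img)          # nudge weights to reduce error

# Once trained, presenting a letter yields its code directly:
recognized = int(np.argmax(W @ letters["L"]))    # -> 1, the code for "L"
```

After training, the loop (the “computer”) is no longer involved: the weighted sums alone map each input pattern to its output code, which mirrors why the chip can respond within nanoseconds.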

“Our test chip is still small at the moment, but you can easily scale up the technology depending on the task you want to solve,” says Thomas Mueller. “In principle, the chip could also be trained to distinguish apples from bananas, but we see its use more in scientific experiments or other specialized applications.”

The technology can be usefully applied wherever extremely high speed is required: “From fracture mechanics to particle detection, short events are investigated in many research areas,” says Thomas Mueller.

“Often it is not necessary to keep all the data about this event, but rather to answer a very specific question: Does a crack propagate from left to right? Which of several possible particles has just passed by? This is exactly what our technology is good for.”


