We may be closer to the future than we think. Remember all those movies in which the hero has the power to see through solid objects? Researchers at MIT have developed a new machine learning system that performs a similar feat: it can see people moving on the other side of a wall.
The project is called “RF-Pose.” It was created by a team of researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) led by Dina Katabi, a professor of electrical engineering and computer science at MIT.
The AI system works with the wireless signals that are all around us, which, much like X-rays, can travel through walls and are invisible to the human eye. When these radio signals hit an object, they scatter and bounce back to the device.
As the scientists explain: “RF-Pose transmits a low power wireless signal (1,000 times lower power than WiFi).”
Detecting the radio signal was an easy task for the AI, but the challenge for the researchers was how to interpret these signals. “Nobody can take a wireless signal and label it where the head is, and where the joints are, and stuff like that,” Katabi says. In other words: labeling an image is easy; labeling radio wave data that’s bounced off a person, not so much.
As a solution, the researchers trained the AI not only to analyze radio signals that bounce off people’s bodies but also to label them with the correct body parts. The team taught the AI to recognize human motion in RF by showing it examples of both on-camera movement and the signals reflected from people’s bodies, helping it learn how the reflections correlate to a given posture.
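The training idea described above is a form of cross-modal supervision: a camera-based pose estimator acts as the teacher, labeling synchronized frames, while the RF model learns to predict those same labels from radio data alone. Here is a minimal toy sketch of that scheme; the function names, dictionary fields, and the lookup-table "model" are illustrative assumptions, not RF-Pose's actual architecture (which uses a deep neural network).

```python
# Toy sketch of cross-modal supervision, assuming synchronized
# (camera_frame, rf_frame) pairs recorded at the same instant.
# All names here are hypothetical stand-ins.

def teacher_label(camera_frame):
    """Stand-in for a vision-based pose estimator: in a real system,
    a pretrained network would extract keypoints from the image."""
    return camera_frame["keypoints"]

def train_student(pairs):
    """'Train' the RF model by associating RF signatures with the
    teacher's keypoint labels. A real system fits a deep network;
    a lookup table keeps this sketch self-contained."""
    model = {}
    for camera_frame, rf_frame in pairs:
        model[rf_frame["signature"]] = teacher_label(camera_frame)
    return model

def predict(model, rf_frame):
    """At inference time only the RF signal is needed, so prediction
    still works when a wall blocks the camera's view."""
    return model[rf_frame["signature"]]

# Synchronized training data: the camera sees the person while
# the radio signal reflects off their body.
pairs = [
    ({"keypoints": [(0.4, 0.1), (0.4, 0.5)]}, {"signature": "refl-A"}),
    ({"keypoints": [(0.6, 0.2), (0.6, 0.6)]}, {"signature": "refl-B"}),
]
model = train_student(pairs)
print(predict(model, {"signature": "refl-A"}))  # → [(0.4, 0.1), (0.4, 0.5)]
```

The key design point this sketch captures is that the camera is only needed during training; once the RF model has learned the correlation, it runs on radio reflections alone.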
The AI system uses a neural network to focus on key points of the body, such as the elbows, hips, and feet. After extensive training, the neural network starts showing its magic.
The AI generates 2D stick-figure skeletons by analyzing the scattered radio signal data, with each skeleton representing the pose and movement of a person being tracked.
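Turning a set of predicted keypoints into a stick figure amounts to connecting pairs of detected joints with line segments. The sketch below illustrates this step; the keypoint names and the limb list are illustrative assumptions rather than RF-Pose's actual skeleton definition.

```python
# Minimal sketch of rendering predicted 2D keypoints as a stick figure.
# Keypoint names and limb connections are hypothetical examples.

# Pairs of keypoints joined by a "bone" in the stick figure.
LIMBS = [("head", "neck"), ("neck", "l_elbow"), ("neck", "r_elbow"),
         ("neck", "l_hip"), ("neck", "r_hip"),
         ("l_hip", "l_foot"), ("r_hip", "r_foot")]

def stick_figure(keypoints):
    """Map {name: (x, y)} keypoints to drawable line segments,
    skipping any limb whose endpoints were not detected."""
    segments = []
    for a, b in LIMBS:
        if a in keypoints and b in keypoints:
            segments.append((keypoints[a], keypoints[b]))
    return segments

# A partial detection: only one leg and no arms were resolved.
pose = {"head": (0.5, 0.1), "neck": (0.5, 0.2),
        "l_hip": (0.45, 0.5), "l_foot": (0.45, 0.9)}
print(stick_figure(pose))  # three segments: head-neck, neck-l_hip, l_hip-l_foot
```

Tolerating missing joints matters here, since reflections through a wall will not always resolve every body part in every frame.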
When a person takes a step, whether occluded by a wall or not, “you see that skeleton, or stick figure, that you created, takes a step with it,” Katabi says. “If the person sits down, you see that stick figure sitting down.” The AI can identify individuals with an accuracy of more than 83 percent, even through walls.
“We leverage the fact that wireless signals in the WiFi frequencies traverse walls and reflect off the human body,” an open-access paper on the research explains, adding that the project introduces a “deep neural network approach that parses such radio signals to estimate 2D poses”.
“If you think of the computer vision system as the teacher, this is a truly fascinating example of the student outperforming the teacher,” says MIT Professor Antonio Torralba. “The Michael J. Fox Foundation is funding further research,” Katabi adds.
All data the team collected was gathered with the subjects’ consent, and it is anonymized and encrypted to protect user privacy. Speaking at a recent conference, Katabi said her group is taking steps to ensure these safeguards continue, and she suggested that technical countermeasures could be taken to block unwanted monitoring.
According to MIT, the technology could be used to help study diseases like Parkinson’s, multiple sclerosis (MS), and muscular dystrophy, with RF-Pose offering a detailed system for monitoring patient movement and therefore the progression of the disease. The team also claims it could help the elderly live more independently, with any falls picked up by the system even if they happen out of view.
“We’ve seen that monitoring patients’ walking speed and ability to do basic activities on their own gives health care providers a window into their lives that they didn’t have before, which could be meaningful for a whole range of diseases,” says Katabi, who co-wrote a new paper about the project. “A key advantage of our approach is that patients do not have to wear sensors or remember to charge their devices.”
The authors also showed how this AI could be particularly useful for search-and-rescue operations, where it’s important to know who you’re looking for. Refinements could lead to 3D images that reveal even slight movements, such as a shaking hand.
CSAIL claims that future iterations of the technology could use a “consent mechanism” to ensure those being watched are in control of the system, with users needing to perform a certain set of movements to activate the mechanism.
Katabi co-wrote the new paper with PhD student and lead author Mingmin Zhao, MIT Professor Antonio Torralba, postdoc Mohammad Abu Alsheikh, graduate student Tianhong Li, and PhD students Yonglong Tian and Hang Zhao. They will present it later this month at the Conference on Computer Vision and Pattern Recognition (CVPR) in Salt Lake City, Utah.
You can read the study here.