MIT’s New AI Can Visually Understand Objects It’s Never Seen Before

Robots in factories are really good at picking up objects they’ve been pre-programmed to handle, but it’s a different story when new objects are thrown into the mix.

The robots don’t know how to pick up an object they haven’t been programmed for, and that’s where conventional computer vision shows its limits. And nobody has time to reprogram a robot every time a new object turns up.



Researchers have been trying for a long time to teach robots to identify objects they have never seen before, but for a variety of reasons those attempts have kept falling short.

To overcome this problem, a team at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has developed a computer vision system that can identify objects it has never seen before.




The new system empowers robots not only to visually recognize objects, but to understand them well enough to accomplish related tasks, all without any previous exposure to those objects. In other words, the machine learning system lets a robot inspect an arbitrary object and build enough visual understanding of it to carry out specific tasks.

The system was created by a team of MIT researchers led by Peter Florence, and the study was published on the arXiv preprint server.

The team trained the system with a self-supervised machine-learning approach, letting the network learn a consistent representation of objects in three-dimensional space without hand-labeled training data.
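To make that concrete, here is a minimal sketch of a pixelwise contrastive loss of the kind used to train dense visual descriptors. It is an illustration only: the function name, tensor layout, index arguments, and margin value are assumptions rather than the team’s actual code, and the pixel correspondences are assumed to come from depth images and known camera poses rather than human labels.

```python
import torch
import torch.nn.functional as F

def pixelwise_contrastive_loss(desc_a, desc_b, match_a, match_b, nonmatch_b, margin=0.5):
    # desc_a, desc_b: [H*W, D] descriptor maps for two registered views of one scene.
    # match_a, match_b: pixel indices known to correspond (derived from depth + camera
    #                   pose, so no human labeling is needed).
    # nonmatch_b: equally many pixel indices in view B that do NOT correspond to match_a.
    d_match = F.pairwise_distance(desc_a[match_a], desc_b[match_b])
    loss_match = (d_match ** 2).mean()  # pull true matches together in descriptor space
    d_nonmatch = F.pairwise_distance(desc_a[match_a], desc_b[nonmatch_b])
    loss_nonmatch = (torch.clamp(margin - d_nonmatch, min=0.0) ** 2).mean()  # push non-matches apart
    return loss_match + loss_nonmatch
```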




The MIT team calls its system “Dense Object Nets” (DON). DON works by analyzing objects as collections of points on a visual roadmap, which allows the system to understand all of an object’s components even if it has never seen that object before.

DON can “see” because it treats each object as a collection of points that the robot assembles into a three-dimensional “visual roadmap.” That means scientists don’t have to sit there and tediously label the massive datasets that most computer vision systems require.
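Roughly speaking, once the network outputs such a descriptor image, “understanding” a point on an object reduces to a nearest-neighbour search in descriptor space. The sketch below shows that lookup; the function name and tensor shapes are assumptions for illustration, not the published implementation.

```python
import torch

def find_correspondence(ref_descriptor, desc_map):
    # ref_descriptor: [D] descriptor of a chosen point in a reference image.
    # desc_map: [H, W, D] dense descriptor image produced for a new view of the scene.
    H, W, D = desc_map.shape
    flat = desc_map.reshape(-1, D)                     # [H*W, D]
    dists = torch.norm(flat - ref_descriptor, dim=1)   # distance from every pixel to the reference
    best = torch.argmin(dists).item()
    return divmod(best, W)                             # (row, col) of the best-matching pixel
```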


This point-based approach allows DON to autonomously carry out very specific tasks, such as grabbing an object by just one of its corners or parts, an ability previous systems lacked.


“Many approaches to manipulation can’t identify specific parts of an object across the many orientations that object may encounter,” researcher Lucas Manuelli said in a press release. “For example, existing algorithms would be unable to grasp a mug by its handle, especially if the mug could be in multiple orientations, like upright, or on its side.”
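In descriptor terms, grasping a mug by its handle could amount to saving the descriptor of a handle pixel once and then, in any new view, finding its closest match and lifting that pixel to a 3D grasp target using depth. The sketch below illustrates the idea; grasp_point_for_handle and the deproject camera helper are hypothetical names, not part of DON’s published code.

```python
import torch

def grasp_point_for_handle(handle_descriptor, desc_map, depth_image, deproject):
    # handle_descriptor: [D] descriptor saved once from a pixel on the mug's handle.
    # desc_map: [H, W, D] descriptor image for the current camera view.
    # depth_image: [H, W] depth readings aligned with the descriptor image.
    # deproject: hypothetical camera helper mapping (row, col, depth) -> 3D point.
    H, W, D = desc_map.shape
    dists = torch.norm(desc_map.reshape(-1, D) - handle_descriptor, dim=1)
    row, col = divmod(torch.argmin(dists).item(), W)
    return deproject(row, col, float(depth_image[row, col]))
```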

Researchers said this work could have a big impact on robotics and self-driving cars, helping to make machines that can learn how to act more intelligently in the real world.



This approach lets robots better understand and manipulate items, and even allows them to pick up a specific object from a cluster of similar objects, a valuable skill for the kinds of machines that companies like Amazon and Walmart use in their warehouses.

“An exciting and important trend is the move in learning-based vision systems from just doing things with images to doing things with three-dimensional objects,” says Josh Tenenbaum, a professor in MIT’s Department of Brain and Cognitive Sciences.

“That includes seeing objects in depth and modeling whole solid objects—not just recognizing that this pattern of pixels is a dog or a chair or table.”


In the future, a more sophisticated version of DON could be used in a variety of settings, such as collecting and sorting objects at warehouses, working in dangerous settings, and performing odd clean-up tasks in homes and offices.

Looking ahead, the researchers would like to refine the system so that it can work out where to grasp an object without any human intervention.

“DON solves that problem, which means that we can now start to build increasingly more complex systems of smart agents that can teach themselves how to recognize and interact with different objects. I believe that Tedrake lab’s results are going to start a new wave of computer vision applications, from robotic manipulation and process control to new intelligent automation solutions,” Luchici said.
