Facebook AI model beats Google, runs 5x faster on GPUs

A team of researchers at Facebook has developed a novel low-dimensional design space called “RegNet” that yields models which outperform Google’s widely used models and run up to five times faster on GPUs.

According to the researchers, RegNet produces simple, fast, and versatile networks, and in experiments it even outperformed Google’s state-of-the-art (SOTA) EfficientNet models, as reported in a paper titled “Designing Network Design Spaces.”

The team aimed for interpretability and for discovering general design principles, explaining that the goal was to describe networks that are simple, work well, and generalize across settings.

The Facebook AI team conducted controlled comparisons with EfficientNet, with no training-time enhancements and under the same training setup. Back in 2019, Google’s EfficientNet, which combined neural architecture search (NAS) with model scaling rules, represented the state of the art.

With comparable training settings and flops, the RegNet models outperformed the EfficientNet models while being up to five times faster on GPUs!
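To give a rough sense of how such a GPU speed comparison might be run, the sketch below times the forward pass of two classifiers with roughly comparable flops on identical inputs. It is a minimal benchmark under assumed conditions rather than the paper’s measurement protocol, and it assumes a recent torchvision release that ships RegNet and EfficientNet implementations (the specific model pairing is an illustrative choice).

```python
# Minimal GPU inference-speed comparison (illustrative only, not the paper's protocol).
# Assumes CUDA is available and a recent torchvision that ships these models.
import time

import torch
import torchvision.models as models

def time_forward(model, batch_size=32, image_size=224, iters=50, warmup=10):
    """Return the average forward-pass time per batch on the GPU, in milliseconds."""
    device = torch.device("cuda")
    model = model.to(device).eval()
    x = torch.randn(batch_size, 3, image_size, image_size, device=device)
    with torch.no_grad():
        for _ in range(warmup):              # warm up CUDA kernels and caches
            model(x)
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(iters):
            model(x)
        torch.cuda.synchronize()             # wait for all queued GPU work to finish
    return (time.time() - start) / iters * 1000.0

if __name__ == "__main__":
    # Pairing chosen because both models sit around 0.4 GF.
    regnet = models.regnet_y_400mf()
    effnet = models.efficientnet_b0()
    print(f"RegNetY-400MF:   {time_forward(regnet):.1f} ms/batch")
    print(f"EfficientNet-B0: {time_forward(effnet):.1f} ms/batch")
```

Measured latency depends heavily on batch size, input resolution, and hardware, so any single number from a script like this is only indicative.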

Instead of designing individual networks, the team focused on designing network design spaces, which comprise huge, possibly infinite, populations of model architectures. Analyzing the RegNet design space also gave the team unexpected insights into network design.
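Before turning to those insights, here is a rough sketch of what treating a design space as a population of models can look like in code: sample many architecture configurations from a handful of parameters and keep the ones that fit a compute budget. The config fields, ranges, and flop estimate below are illustrative assumptions, not the parameterization actually used in the paper.

```python
# Illustrative sketch of a "design space" as a population of sampled architectures.
# The config fields and ranges here are made-up examples, not the paper's
# actual parameterization.
import random

def sample_config(rng):
    """Draw one network configuration from a simple parameterized design space."""
    return {
        "depths": [rng.randint(1, 12) for _ in range(4)],                  # blocks per stage
        "widths": [rng.choice([16, 32, 64, 128, 256, 512]) for _ in range(4)],
        "bottleneck_ratio": rng.choice([0.5, 1.0, 2.0]),
        "group_width": rng.choice([1, 2, 4, 8, 16]),
    }

def rough_flops(cfg, resolution=224):
    """Very crude flop estimate, used only to filter samples to a compute budget."""
    flops, res = 0, resolution
    for depth, width in zip(cfg["depths"], cfg["widths"]):
        res //= 2                                          # each stage halves resolution
        flops += depth * width * width * 9 * res * res     # 3x3 convs, ignoring everything else
    return flops

rng = random.Random(0)
budget = 400e6  # roughly a 400 MF compute regime
population = [cfg for cfg in (sample_config(rng) for _ in range(10_000))
              if rough_flops(cfg) <= budget]
print(f"{len(population)} of 10,000 sampled models fit the flop budget")
```

Studying how whole populations like this behave, rather than tuning one network at a time, is the general spirit of the design-space approach.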

The researchers noted, for example, that the depth of the best models is stable across compute regimes, with an optimal depth of about 20 blocks, or roughly 60 layers (each block contains roughly three convolutional layers).

According to the paper, while it is common for modern mobile networks to employ inverted bottlenecks, the researchers observed that using inverted bottlenecks degrades performance, which points to the best models using neither bottlenecks nor inverted bottlenecks.
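For context, the sketch below shows a generic residual block in which a single bottleneck ratio controls whether the inner convolution is narrower than the block width (a classic bottleneck), wider (an inverted bottleneck), or exactly the same width, the setting the paper reports working best. This is a plain PyTorch illustration, not FAIR’s reference implementation.

```python
# Illustrative residual block (plain PyTorch), not FAIR's reference code.
# bottleneck_ratio > 1 -> classic bottleneck (narrower inner conv)
# bottleneck_ratio < 1 -> inverted bottleneck (wider inner conv)
# bottleneck_ratio = 1 -> no bottleneck, the setting the paper finds works best
import torch
from torch import nn

class ResidualBlock(nn.Module):
    def __init__(self, width, bottleneck_ratio=1.0):
        super().__init__()
        inner = max(1, int(round(width / bottleneck_ratio)))   # inner conv width
        self.body = nn.Sequential(
            nn.Conv2d(width, inner, kernel_size=1, bias=False),
            nn.BatchNorm2d(inner),
            nn.ReLU(inplace=True),
            nn.Conv2d(inner, inner, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(inner),
            nn.ReLU(inplace=True),
            nn.Conv2d(inner, width, kernel_size=1, bias=False),
            nn.BatchNorm2d(width),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.body(x))   # residual (skip) connection

x = torch.randn(2, 64, 56, 56)
print(ResidualBlock(64, bottleneck_ratio=1.0)(x).shape)   # no bottleneck
print(ResidualBlock(64, bottleneck_ratio=0.25)(x).shape)  # inverted bottleneck
```

With a ratio of 1.0, the block reduces to three same-width convolutions plus a skip connection, which is what “neither bottlenecks nor inverted bottlenecks” amounts to in practice.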

The Facebook AI Research team also recently developed a tool that tricks facial recognition systems into wrongly identifying a person in a video. The de-identification system, which works on live video, uses machine learning to alter key facial features of a subject.

FAIR is known for advancing the state of the art in artificial intelligence through fundamental and applied research, conducted in open collaboration with the community.

The social networking giant created the Facebook AI Research group back in 2014 to advance the state of AI technology through open research for the benefit of all.

Since then, FAIR has grown into an international research organization with labs in Menlo Park, Montreal, Paris, Seattle, Tel Aviv, New York, London, and Pittsburgh. Facebook keeps getting better at AI, and with FAIR’s help, it has now beaten Google!
