Today is a big day for all AI practitioners and enthusiasts: Yoshua Bengio, Geoffrey Hinton, and Yann LeCun, the three computer scientists often called the Godfathers of AI, have been honored with this year’s Turing Award, the Nobel Prize of computing.
The three men have laid the foundations for many of the recent advances in artificial intelligence. The techniques developed by the trio in the 1990s and 2000s enabled huge breakthroughs in tasks like computer vision and speech recognition.
They developed the conceptual and engineering foundations of deep learning, building AI systems around artificial neural networks. Their work underpins the current proliferation of AI technologies, from self-driving cars to automated medical diagnosis.
The three winners will split a $1 million prize that comes with the award, which is currently funded by Google. The Turing Award is named after the British mathematician Alan Turing, who laid the theoretical foundations for computer science and about whom I could talk all year.
Jeff Dean, Google’s head of AI, praised the trio’s achievements. “Deep neural networks are responsible for some of the greatest advances in modern computer science,” said Dean in a statement.
“At the heart of this progress are fundamental techniques developed by this year’s Turing Award winners, Yoshua Bengio, Geoff Hinton, and Yann LeCun.”
In 1983, Hinton co-invented Boltzmann machines, one of the first types of neural networks to use statistical probabilities, and he later showed how so-called deep networks could be trained.
He co-authored a seminal 1986 paper on a learning algorithm called back-propagation. That algorithm, known as backprop, is at the heart of deep learning today, but back then the hardware and data needed to make it practical had not yet come together.
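The idea behind backprop can be sketched on a toy problem. The sketch below is illustrative only: the network size, learning rate, and XOR task are my own choices, not details from the 1986 paper. It trains a tiny one-hidden-layer network by propagating error gradients backward through the layers:

```python
import numpy as np

# Toy task: learn XOR with one hidden layer of sigmoid units.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
W2 = rng.normal(size=(8, 1))   # hidden -> output weights

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1)       # hidden activations
    out = sigmoid(h @ W2)     # network output

    # Backward pass: push the error gradient back through each layer
    # using the chain rule (this is the "backprop" step).
    d_out = (out - y) * out * (1 - out)   # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient at the hidden layer

    # Gradient-descent weight update.
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

# After training, the outputs should be close to the XOR targets [0, 1, 1, 0].
print(np.round(out.ravel(), 2))
```

The same forward/backward pattern, repeated over many layers and far more data, is what modern deep-learning frameworks automate.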
Apart from this, Hinton contributed heavily to convnets, neural network designs that are especially well suited to images; LeCun proved the concept by creating check-reading software for ATMs at Bell Labs.
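The core operation of a convnet can be shown in a few lines. In this illustrative sketch (the `conv2d` helper and the toy image are my own, not code from any of the laureates), a small filter slides across an image and responds strongly wherever its pattern, here a vertical edge, appears:

```python
import numpy as np

# A tiny image with a vertical edge (left half dark, right half bright).
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# A 3x3 vertical-edge filter, similar to what a convnet learns in its first layer.
kernel = np.array([
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
], dtype=float)

def conv2d(img, k):
    """Valid cross-correlation: slide the kernel over every position."""
    kh, kw = k.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

# Every window here straddles the edge, so every output is strongly positive.
print(conv2d(image, kernel))
```

In a real convnet the filter weights are not hand-designed like this; they are learned with backprop, and many such filters are stacked in layers.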
Currently, Geoffrey Hinton is a vice president and a senior researcher at Alphabet Inc.’s Google Brain, chief scientific adviser of the Vector Institute and a professor at the University of Toronto.
“He is a genius and knows how to create one impact after another,” said Li Deng, a former speech researcher at Microsoft who brought Dr. Hinton’s ideas into the company.
Dr. LeCun took Hinton’s inventions to the next level. He moved to AT&T’s Bell Labs in New Jersey, where he designed a neural network that could read handwritten letters and numbers. An AT&T subsidiary sold the system to banks, and at one point it read about 10 percent of all checks written in the United States.
LeCun developed and extended the capabilities of neural networks, making them more powerful and useful. The techniques he pioneered, including backpropagation and convolutional neural networks, have become ubiquitous in AI and, by extension, in technology as a whole.
LeCun says he is optimistic about the prospects of artificial intelligence, but he’s also clear that much more work needs to be done before the field lives up to its promise.
Current AI systems need lots of data to understand the world, can be easily tricked, and are only good at specific tasks. “We just don’t have machines with common sense,” says LeCun.
Yann LeCun currently is a vice president and chief AI scientist at Facebook and a professor at New York University. Bengio and LeCun are also co-directors of CIFAR’s Learning in Machines and Brains program.
Yoshua Bengio is best known for his astounding work in the field of neural networks. Currently, Bengio is a professor at the University of Montreal and the science director of both Mila (Quebec’s AI Institute) and the Institute for Data Valorization.
Bengio pioneered methods for applying deep learning to sequences, such as speech and text. But the wider world only caught on to deep learning early in this decade, after researchers figured out how to harness the power of graphics processing units, or GPUs.
His research in Montreal helped to drive the progress of systems that aim to understand natural language and technology that can generate fake photos that are indistinguishable from the real thing.
Bengio also worked with LeCun on computer vision breakthroughs when they were at Bell Labs, and he went on to apply neural networks to natural language processing, leading to big advances in machine translation.
More recently, he has worked on generative adversarial networks (GANs), a method best known for enabling neural networks to create completely novel, but highly realistic, images.
The trio’s achievements are particularly notable as they kept the faith in artificial intelligence at a time when the technology’s prospects were dismal.
The foundation laid by Yoshua Bengio, Geoffrey Hinton, and Yann LeCun over the past several decades helps us build algorithms that extract patterns from data to recognize language, environments, and objects, and it has led to breakthroughs in speech recognition, robotics, and machine understanding of digital images and videos.