Elon Musk says AI development should be regulated, including at Tesla
SpaceX and Tesla CEO Elon Musk has once again warned the world about the development and use of AI. Musk has said the development of advanced AI, including AI created by his own companies, should be regulated.
In a tweet responding to an MIT Technology Review story about OpenAI, Musk said he wants to see all organizations developing advanced AI be regulated, including his own company Tesla.
The story examined the research lab, whose stated mission is “to ensure that artificial general intelligence benefits all of humanity”.
MIT Technology Review reporter Karen Hao argued that the company “has allowed fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration.”
OpenAI is an independent research organization founded in 2015 by Musk, along with Sam Altman, Ilya Sutskever, Greg Brockman, Wojciech Zaremba, and John Schulman.
OpenAI was formed as a non-profit backed by $1 billion in funding pledged by its initial investors, with the aim of pursuing open research into advanced AI that benefits society, rather than leaving the technology's development in the hands of profit-driven technology companies.
In 2018, OpenAI announced that Musk was stepping down from its board to avoid any conflict of interest with his AI work at Tesla. The following year, Musk said that he also “didn’t agree with some of what the OpenAI team wanted to do.”
The MIT Technology Review story argued that OpenAI has turned into a secretive, profit-seeking company, rather than the organization it originally set out to be, one working to develop and distribute AI safely and equitably.
When Musk was asked if he meant AI should be regulated by individual governments or on a global scale, for example by the UN, he replied: “Both.”
It’s not the first time that Elon Musk has criticized the unchecked development of AI applications and systems.
In 2014, he tweeted that AI could be more dangerous than nuclear weapons. That same year, addressing an audience at an MIT Aeronautics and Astronautics symposium, Musk described AI as an existential threat and said humanity needs to be extremely careful with it.
“With AI, we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out,” he said.
Musk has been floating the idea of a central authority to oversee AI for a while.
He told Recode’s Kara Swisher that “we ought to have a government committee that starts off with insight, gaining insight. Spends a year gaining insight about AI or other technologies that are maybe dangerous, but especially AI.” The committee would then come up with regulations to ensure the safest uses of AI, he said.
Not all of his Big Tech contemporaries agree with Musk’s idea. Facebook’s chief AI scientist Yann LeCun described his call for prompt AI regulation as “nuts,” while Facebook CEO Mark Zuckerberg said his comments on the risks of the technology were “pretty irresponsible”. Musk responded by saying the Facebook founder’s “understanding of the subject is limited.”
Recently, Microsoft CEO Satya Nadella announced that the company will invest $1 billion in OpenAI to extend the benefits of artificial general intelligence (AGI).
OpenAI describes AGI as highly autonomous systems that outperform humans at most economically valuable work.
With its roughly 100 employees, OpenAI aims to build software for training, benchmarking, and experimenting with AI, which it will distribute for free.
The startup has partnered with Microsoft to use its Azure cloud platform to scale its hardware and software for AGI. The two companies will also work together to build new Azure AI supercomputing technologies.
Among other projects, OpenAI has developed MuseNet, a deep neural network that can generate musical compositions using around 10 different instruments.