With artificial intelligence (AI) gaining momentum, Twitter CEO Elon Musk and other experts in the AI field called on Wednesday for a pause in the development of AI systems. The experts said the pause was needed to make sure such systems are safe.
In an open letter citing potential risks to society, the experts called for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4.
"This pause should be public and verifiable and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium," read the letter.
Microsoft-backed OpenAI unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program, which has wowed users by engaging them in human-like conversation, composing songs and even writing lengthy documents.
"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," said the letter issued by Future of Life Institute.
Musk had earlier said that "AI stresses me out." Tesla, however, uses AI for its Autopilot driver assistance system, and Musk has encountered regulatory issues with it. Tesla had to recall over 3.62 lakh US vehicles to update the software after US regulators said the driver assistance system could cause crashes.
"Should we let machines flood our information channels with propaganda and untruth?... Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us?" read the letter.
The letter was signed by over 1,000 people, including Musk. Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio, often referred to as one of the "godfathers of AI", and Stuart Russell, a pioneer of research in the field, were among the signatories, reported Reuters.
With Microsoft and Google rushing to deploy new AI products, the letter is unlikely to slow development. It nevertheless marks a prominent voice of opposition to the current pace of AI research.
"AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts," read the letter.