Industry Leaders Including Elon Musk Push for Pause in AI Development

A group of industry executives and AI experts, including Elon Musk, has signed an open letter calling for a six-month pause in the development of systems more powerful than OpenAI’s GPT-4, citing potential risks to society and humanity.

Earlier this month, OpenAI announced GPT-4, which has demonstrated impressive capabilities, including holding human-like conversations, composing songs, and creating recipes from a single photo.

The non-profit Future of Life Institute issued the letter, which was signed by more than 1,000 people, including Musk. It calls for a pause on advanced AI development until shared safety protocols are created, implemented, and audited by independent experts. The letter also lists a number of risks that such systems could pose to society and civilization, and it calls on developers to work with policymakers on governance and regulatory authorities.

The letter states that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

Other notable signatories include Emad Mostaque, CEO of Stability AI, researchers at Alphabet-owned DeepMind, and AI experts Stuart Russell and Yoshua Bengio – the latter widely considered one of the “godfathers of AI”.

The concerns behind the letter stem in part from the fact that AI systems like ChatGPT can be misused for phishing attempts, disinformation, and cybercrime.

Since its release, ChatGPT has led the race among AI chatbots, forcing competitors like Google’s Bard to try to catch up. OpenAI has also announced partnerships with several firms, including Instacart and Expedia, that will allow ChatGPT users to do things like order groceries and book flights.

Sam Altman, Chief Executive of OpenAI, has not signed the letter.

Gary Marcus, a professor at NYU and one of the many who signed the letter, said that “the letter isn’t perfect, but the spirit is right: we need to slow down until we better understand the ramifications. The big players are becoming increasingly secretive about what they are doing, which makes it hard for society to defend against whatever harms may materialize.”

Critics of the letter accuse the signatories of promoting “AI hype”, arguing that claims about AI’s current potential are greatly exaggerated.

“These kinds of statements are meant to raise hype. It’s meant to get people worried,” said Johanna Björklund, AI Researcher and Associate Professor at Umeå University. “I don’t think there’s a need to pull the handbrake.”

Rather than pausing the research, Björklund suggests subjecting it to stricter transparency requirements. “If you do AI research, you should be very transparent about how you do it.”

Story via Reuters
