The letter in which more than 1,000 experts call for a halt to artificial intelligence as a "threat to humanity"

01-04-23

AI experts have signed a petition calling on companies developing highly advanced models to pause for six months and agree on the security protocols that need to be put in place.

OpenAI recently launched GPT-4, and Elon Musk wants the team to slow down training for the next update. [@Levart_Photographer] from Unsplash

Elon Musk co-founded OpenAI, the developer of ChatGPT and DALL-E, almost eight years ago. However, he stepped down from its board in 2018 and has since admitted that AI is "quite dangerous" and that he fears he has "done some things to accelerate it".

To try to mitigate the consequences of what he helped build, the entrepreneur has signed, along with 1,100 other AI experts, a petition asking the major developers of these tools to pause their training for half a year. Their intention is that, in that time, regulations can be put in place so that this technology is developed more ethically.

"We ask all AI labs to immediately suspend training of AI systems more powerful than GPT-4 for at least 6 months," reads the public letter. This request covers GPT-5, which has reportedly had the full attention of the OpenAI team since the latest update was released.

Among the well-known faces who have signed the document are, apart from Musk, Yoshua Bengio, a professor at the University of Montreal who is considered a pioneer of modern AI; Steve Wozniak, creator of the first Apple computer; Yuval Noah Harari, a historian; and Jaan Tallinn, co-founder of Skype.

The letter, published by the Future of Life Institute, a non-profit organisation dedicated to combating technological risks that could endanger humanity, calls for the training pause to be "public and verifiable". However, it offers no examples of how such a pause could be verified.

The signatories explain that "if the pause cannot be enacted quickly", it should be up to governments to "step in and institute a moratorium".


In the document, the signatories point out that AIs "can pose a profound risk to society and humanity" and that proper planning and care are not being taken: there has been "an uncontrolled race to develop ever more powerful systems that no one, not even their creators, can understand, predict, or reliably control".

