An open letter signed by more than 1,000 artificial intelligence researchers and leaders, including Elon Musk, calls for an immediate pause on training ‘giant’ AI systems for at least six months, allowing time to study the dangers of systems like GPT-4.
Other signatories of the letter include Apple co-founder Steve Wozniak, Getty Images CEO Craig Peters, Pinterest co-founder Evan Sharp, and renowned author Yuval Noah Harari, as well as engineers from Amazon, Google, Meta and Microsoft.
The letter was also signed by numerous Australian experts from some of the nation's leading universities.
It comes only two weeks after OpenAI, the company behind ChatGPT, launched GPT-4, a large multimodal model that accepts prompts of text and images, including documents with text and photographs, diagrams, or screenshots.
“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” the open letter published by the Future of Life Institute, a non-profit backed by Musk, states.
“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
“Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
The letter also states that AI labs and independent experts should use the pause to jointly develop and implement a set of shared safety protocols, ensuring that systems adhering to them are safe beyond a reasonable doubt.
“This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities,” the letter adds.
One argument often raised in conversations about AI is the paperclip maximiser thought experiment: the idea that if you tell a machine to optimise a single goal, it will pursue that goal at all costs.
A machine instructed to maximise the number of paperclips it produces would eventually acquire whatever power and resources it needed to achieve that goal, at the expense of human life if necessary.
Alluding to this, the letter warns that “AI systems with human-competitive intelligence can pose profound risks to society and humanity.”
If a pause is not collectively agreed on, the letter says governments should step in and create a moratorium.
There are 29 nations, including Australia, Canada, and India, that are part of the Global Partnership on Artificial Intelligence – an international initiative that aims to advance the responsible and human-centric development of AI.
Governing agencies in China, Singapore and the EU have also introduced early versions of AI governance frameworks.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter says.
“This confidence must be well justified and increase with the magnitude of a system's potential effects.”