Musk, Wozniak, tech leaders sign open letter calling for pause on “out-of-control” AI development


More than 1,000 artificial intelligence researchers and leaders, including Elon Musk, have signed an open letter calling for an immediate pause of at least six months on training ‘giant’ AI systems, allowing time to study the dangers of systems like GPT-4.

Other signatories of the letter include Apple co-founder Steve Wozniak, Getty Images CEO Craig Peters, Pinterest co-founder Evan Sharp, and renowned author Yuval Noah Harari, as well as engineers from Amazon, Google, Meta and Microsoft.

The letter was also signed by numerous Australian experts from some of the nation's leading universities. 

It comes only two weeks after OpenAI, the company behind ChatGPT, launched GPT-4, a large multimodal model that accepts prompts of text and images, including documents with text and photographs, diagrams, or screenshots.

“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” the open letter published by the Future of Life Institute, a non-profit backed by Musk, states.

“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

“Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

The letter also states that AI labs and independent experts should use the pause to jointly develop and implement a set of shared safety protocols that are safe beyond a reasonable doubt.

“This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities,” the letter adds.

One argument often brought up in conversations surrounding AI is the paperclip maximiser thought experiment: the idea that if you tell a machine to optimise for a specific goal, it will pursue that goal at all costs.

A machine instructed to maximise the number of paperclips it produces would eventually acquire whatever power and resources it needed to achieve that goal, at the expense of human life if necessary.

Alluding to this, the letter warns that “AI systems with human-competitive intelligence can pose profound risks to society and humanity.”

If a pause is not collectively agreed on, the letter says governments should step in and create a moratorium.

There are 29 nations, including Australia, Canada, and India, that are part of the Global Partnership on Artificial Intelligence – an international initiative that aims to advance the responsible and human-centric development of AI.

Governing agencies in China, Singapore and the EU have also introduced early versions of AI governance frameworks.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter says.

“This confidence must be well justified and increase with the magnitude of a system's potential effects.”
