GPT-4 Alert: Over a Thousand CEOs and Academics Call for a Six-Month Pause on Advanced AI Training


A group of academics, artificial intelligence (AI) experts, and executives, including Elon Musk, is calling for a six-month pause in the development of systems more powerful than OpenAI's recently launched GPT-4, citing potential risks to society and humanity in an open letter.

Earlier this month, Microsoft-backed OpenAI unveiled the fourth version of its GPT (Generative Pre-trained Transformer) AI program, which has captivated users with its wide range of applications, from holding human-like conversations to composing songs and summarizing lengthy documents.

The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 individuals, including Musk, calls for a pause on advanced AI development until shared safety protocols for such systems are developed, implemented, and audited by independent experts. «Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks manageable,» the letter said.

The letter details potential risks to society and civilization from human-competitive AI systems in the form of economic and political disruptions, and calls on developers to work with lawmakers on governance and regulatory authorities. Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio, often referred to as one of the «godfathers of AI,» and Stuart Russell, a pioneer of research in the field.

EU Concerns

On Monday, the EU's law enforcement agency, Europol, joined a chorus of ethical and legal concerns about advanced AI like ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation, and cybercrime. Meanwhile, the UK government unveiled proposals for an «adaptive» regulatory framework around artificial intelligence.

The government’s approach, described in a policy paper published on Wednesday, would divide responsibility for governing AI among its existing regulators for human rights, health and safety, and competition, rather than creating a new body dedicated to the technology.

Elon Musk, whose carmaker Tesla uses AI for its Autopilot system, has himself been vocal about his concerns over the development of AI.

The GPT Case

Since its launch last year, OpenAI’s ChatGPT has led rivals to accelerate the development of similar large language models, and companies to integrate generative AI models into their products. Last week, OpenAI announced that it had partnered with around a dozen companies to incorporate their services into its chatbot, allowing ChatGPT users to order groceries through Instacart or book flights through Expedia.

Sam Altman, CEO of OpenAI, did not sign the letter, a Future of Life spokesperson told Reuters.

«The letter is not perfect, but the spirit is right: we must slow down until we better understand the ramifications,» said Gary Marcus, a professor at New York University who signed the letter. «Big players are becoming increasingly secretive about what they’re doing, which makes it difficult for society to defend against any harm that may materialize.»

Critics, however, accused the letter’s signatories of promoting «AI scaremongering,» arguing that claims about the technology’s current potential had been greatly exaggerated.

«These types of statements are meant to generate excitement. They’re meant to worry people,» said Johanna Björklund, an AI researcher and associate professor at Umeå University. «I don’t think there’s a need to pull the emergency brake.» Instead of stopping research, she said, AI researchers should be subject to greater transparency requirements. «If you’re doing AI research, you should be very transparent about how you’re doing it.»

The Full Letter:

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by leading AI labs. As stated in the widely endorsed Asilomar AI Principles, advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though in recent months AI labs have been locked in an out-of-control race to develop and deploy ever-more-powerful digital minds that no one, not even their creators, can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: should we allow machines to flood our information channels with propaganda and falsehood? Should we automate away all jobs, including the fulfilling ones? Should we develop non-human minds that could eventually outnumber us, outsmart us, and replace us? Should we risk losing control of our civilization? Such decisions must not be delegated to unelected technology leaders. Powerful AI systems should be developed only once we are confident their effects will be positive and their risks manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence states that «At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.» We agree. That point is now.

We therefore call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to collaboratively develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent external experts. These protocols should ensure that systems adhering to them are safe beyond any reasonable doubt. This does not mean a pause in AI development in general, simply a step back from the dangerous race towards ever-larger, unpredictable black-box models with emergent capabilities.

The research and development of AI should be refocused on making today’s powerful and cutting-edge systems more accurate, safe, interpretable, transparent, robust, aligned, reliable, and loyal.

At the same time, AI developers should work with lawmakers to dramatically accelerate the development of robust AI governance systems. These should include, at a minimum: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capacity; provenance and watermarking systems to help distinguish real content from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for harm caused by AI; robust public funding for technical AI safety research; and well-resourced institutions to deal with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an «AI summer» in which we reap the rewards, design these systems for the clear benefit of all, and give society a chance to adapt. Humanity has paused other technologies with potentially catastrophic effects on society. We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.
