Explained | Why Musk, Apple co-founder Wozniak, others seek six-month-pause to AI race

Some of the major technology companies are dedicating resources to breakthroughs in AI.

Artificial Intelligence (AI) is one of the hottest topics in technology right now, and some believe it could one day outsmart human beings. Major technology companies including Amazon, Google, and Microsoft are dedicating resources to breakthroughs in AI. The meteoric rise of ChatGPT, developed by OpenAI and incorporated by Microsoft into its Bing search engine, has opened the world's eyes to both the mind-boggling possibilities and the pitfalls of AI. Now the question arises: are these companies moving too fast in rolling out AI? In an open letter on Wednesday (March 29), a group of prominent scientists and other tech industry notables, including Tesla and Twitter owner Elon Musk and Apple co-founder Steve Wozniak, called for a six-month pause on training systems more powerful than OpenAI's newly launched model GPT-4, to consider the risks.

The letter, which has been signed by more than 1,000 people, is a response to OpenAI's recent release of GPT-4, a more advanced successor to its widely used AI chatbot ChatGPT.

What does the letter say and why the pause?

The letter warned that AI systems with "human-competitive intelligence can pose profound risks to society and humanity", from flooding the internet with disinformation and automating away jobs to more catastrophic future risks out of the realms of science fiction. It added that recent months have seen AI labs "locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control."

The letter also said that powerful AI systems should be developed only once people are confident that their effects will be positive and their risks will be manageable.

"Should we let machines flood our information channels with propaganda and untruth? ... Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?," the letter asked and said that such decisions must not be delegated to unelected tech leaders.

The letter further called on AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. “This pause should be public and verifiable and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” it added.

Who has signed the letter?

The letter was issued by the Future of Life Institute and signed by more than 1,000 people including Elon Musk. The co-signatories included Stability AI's Chief Executive Officer Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweight Yoshua Bengio. According to a report by the news agency Reuters on Thursday, the letter was not signed by OpenAI CEO Sam Altman, Google CEO Sundar Pichai or Microsoft CEO Satya Nadella.

Elon Musk, who also runs Tesla and SpaceX and was an OpenAI co-founder and early investor, has long expressed concerns about AI’s existential risks. Earlier this month, Musk said, "AI stresses me out."

What has been the response to the letter?

Gary Marcus, a professor at New York University who signed the letter, said the letter isn't perfect but the spirit is right: "We need to slow down until we better understand the ramifications." Marcus added that the big players are becoming increasingly secretive about what they are doing, which makes it hard for society to defend against whatever harms may materialise, Reuters reported.

In a blog post, Marcus said that he disagrees with others who are worried about the near-term prospect of intelligent machines so smart that they could self-improve beyond humanity’s control. Marcus said he was more worried about “mediocre AI” that is widely deployed, including by criminals or terrorists to trick people or spread dangerous misinformation.

"Current technology already poses enormous risks that we are ill-prepared for. With future technology, things could well get worse," the New York University professor wrote.

James Grimmelmann, a Cornell University professor of digital and information law, said a pause on the AI race is a good idea, but that the letter is vague and does not take the regulatory problems seriously.

“It is also deeply hypocritical for Elon Musk to sign on given how hard Tesla has fought against accountability for the defective AI in its self-driving cars,” Grimmelmann added.

What is GPT-4?

Earlier this month, OpenAI released GPT-4, which it says exhibits human-level performance on a range of benchmarks. "We’ve created GPT-4, the latest milestone in OpenAI’s effort in scaling up deep learning. GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks," OpenAI's website said.
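For readers curious what "text inputs, emitting text outputs" looks like in practice, below is a minimal sketch of a GPT-4 request using OpenAI's Python client in the form it took around the GPT-4 launch (the pre-1.0 `openai` package). The prompt and API key placeholder are illustrative assumptions, not taken from OpenAI's documentation; image input, while demonstrated by OpenAI, was not generally available through the API at the time.

```python
# Minimal sketch: sending a text prompt to GPT-4 via OpenAI's API
# (openai Python package, pre-1.0 interface current at the GPT-4 launch).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; use your own key

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        # Illustrative prompt, chosen for this example:
        {"role": "user", "content": "Summarise the risks of large AI models in two sentences."},
    ],
)

# The model replies with text, extracted from the first returned choice.
print(response["choices"][0]["message"]["content"])
```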

In an online demo on March 14, OpenAI President Greg Brockman ran through some scenarios that showed off GPT-4's capabilities. Brockman demonstrated how the system could quickly come up with the proper income tax deduction after being fed reams of tax code, something he couldn't figure out himself, according to a report by the news agency Associated Press.

“It’s not perfect, but neither are you. And together it’s this amplifying tool that lets you just reach new heights,” Brockman added.

(With inputs from agencies)

