AI expert warns Elon Musk-signed letter doesn't go far enough, says 'literally everyone on Earth will die'

Eliezer Yudkowsky, a decision theorist and artificial intelligence expert, is calling for a complete "shut down" of all AI development on systems more powerful than GPT-4, arguing it is obvious that such advanced intelligence will kill everyone on Earth.

An artificial intelligence expert with more than two decades of experience studying AI safety said an open letter calling for a six-month moratorium on developing powerful AI systems does not go far enough.

Eliezer Yudkowsky, a decision theorist at the Machine Intelligence Research Institute, wrote in a recent op-ed that the six-month "pause" on developing "AI systems more powerful than GPT-4" called for by Tesla CEO Elon Musk and hundreds of other innovators and experts understates the "seriousness of the situation." He would go further, implementing a moratorium on new large AI learning models that is "indefinite and worldwide." 

The letter, issued by the Future of Life Institute and signed by more than 1,000 people, including Musk and Apple co-founder Steve Wozniak, argued that safety protocols need to be developed by independent overseers to guide the future of AI systems.

"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter said. Yudkowsky believes this is insufficient.


"The key issue is not 'human-competitive' intelligence (as the open letter puts it); it's what happens after AI gets to smarter-than-human intelligence," Yudkowsky wrote in an op-ed for TIME.

"Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die," he asserted. "Not as in 'maybe possibly some remote chance,' but as in 'that is the obvious thing that would happen.'"


For Yudkowsky, the problem is that an AI more intelligent than human beings might disobey its creators and would have no regard for human life. Do not think "Terminator," he cautions — "Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow," he writes.

Yudkowsky warns that there is no proposed plan for dealing with a superintelligence that decides the most optimal solution to whatever problem it is tasked with solving is annihilating all life on Earth. He also raises concerns that AI researchers do not actually know if learning models have become "self-aware," and whether it is ethical to own them if they are. 


Six months is not enough time to come up with a plan, he argues. "It took more than 60 years between when the notion of Artificial Intelligence was first proposed and studied, and for us to reach today’s capabilities. Solving safety of superhuman intelligence—not perfect safety, safety in the sense of ‘not killing literally everyone’—could very reasonably take at least half that long."

Instead, Yudkowsky proposes international cooperation, even between rivals like the U.S. and China, to shut down development of powerful AI systems. He says this is more important than "preventing a full nuclear exchange," and that countries should even consider using nuclear weapons "if that's what it takes to reduce the risk of large AI training runs." 

"Shut it all down," Yudkowsky writes. "Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries." 

Yudkowsky's drastic warning comes as artificial intelligence software continues to grow in popularity. OpenAI's ChatGPT, a recently released artificial intelligence chatbot, has stunned users with its ability to compose songs, create content and even write code.

"We've got to be careful here," OpenAI CEO Sam Altman said about his company's creation earlier this month. "I think people should be happy that we are a little bit scared of this."

Fox News' Andrea Vacchiano contributed to this report.
