‘If We Go Ahead on This Everyone Will Die’ Warns AI Expert Calling for Absolute Shutdown

These considerations sound like a joke, but they are not!

March 31.– Human beings are not ready for a powerful AI under present conditions or even in the “foreseeable future,” stated a foremost expert in the field, adding that the recent open letter calling for a six-month moratorium on developing advanced artificial intelligence is “understating the seriousness of the situation.”

“The key issue is not ‘human-competitive’ intelligence [as the open letter puts it]; it’s what happens after AI gets to smarter-than-human intelligence,” said Eliezer Yudkowsky, a decision theorist and leading AI researcher, in a March 29 Time magazine op-ed. “Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.

“Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’ It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.”

After the recent popularity and explosive growth of ChatGPT, several business leaders and researchers, now totaling 1,843 including Elon Musk and Steve Wozniak, signed a letter calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” GPT-4, released in March, is the latest version of OpenAI’s chatbot, ChatGPT.

AI ‘Does Not Care’ and Will Demand Rights
