Technology

Eliezer Yudkowsky: The AI researcher's warning

08:30 am on 17 March 2024

Artificial intelligence poses a grave danger to the future of life on Earth, Eliezer Yudkowsky says. Photo: VICTOR de SCHWANBERG / Science Photo Library via AFP

World leaders need to "build the off switch" to shut down artificial intelligence before it wipes out humanity, a researcher is warning.

Eliezer Yudkowsky is an AI researcher and the co-founder of the Machine Intelligence Research Institute in California.

He told Sunday Morning host Jim Mora developments in AI were moving far too fast, posing a grave danger to the future of life on Earth.

"I would tell the leaders of the world to build the off switch - to put all the hardware running AIs into a limited number of locations under international supervision," he said.


That way, once people realised how dangerous artificial intelligence was, "we could shut it all down", he said.

"If humanity wakes up one morning and decides not to die, or even that they'd prefer to have the option not to die, that's how we do it."

Yudkowsky said scientists had not yet built AI that was smarter than humans.

GPT-4, the model behind the current version of ChatGPT, was no smarter than someone with an IQ of 100 - average human intelligence, he said.

However, "if things are allowed to run as they are", then AI that was "a little bit smarter" than humans was inevitable, he said.

If a developer built an AI that was smarter than humans, "you are not in control, you are dead".

Currently, AIs were connected directly to the internet during training - before they had even been tested, Yudkowsky said.

They could access email, services like Discord, and apps like TaskRabbit, which connects users with 'Taskers' - people who carry out jobs for a fee when instructed.

AIs had already shown they were smart enough to get Taskers to do their bidding, with the Taskers not realising the user on the other end was not human, he said.

"I would say stop, do not permit training AIs more powerful than that."

When asked if the benefits of AI development could outweigh the risks - for example, if more intelligent AIs could make breakthroughs in healthcare or transform the working week - Yudkowsky compared it to trying to tame a dragon to plough a field.

"I personally would not mess with it."

AIs were focused on their tasks to the detriment of all else, Yudkowsky said. They could end up wanting something that was neither good nor bad in itself, but "that kills us as a side effect".

He gave the example of an intelligent AI tasked with building rhombuses or giant clocks. It could seize all the available materials, leaving none for anyone else, he said.

Another example he gave was of an AI given the task to run nuclear power plants. It might run all the world's nuclear plants at capacity - a situation in which humans could not survive.

"It also might deliberately kill us because it doesn't want us building other superintelligence that would compete with itself."

World superpowers such as China, the United States and the United Kingdom needed to get together and put measures in place to shut AI down in future, he said.