A TOP AI expert has issued a stark warning that super-smart AI technology could wipe out humanity.
Eliezer Yudkowsky is a leading AI researcher who claims that “everyone on the earth will die” unless we shut down the development of superhuman intelligence systems.
The 43-year-old is a co-founder of the Machine Intelligence Research Institute (MIRI) and claims to know exactly how “horrifically dangerous this technology” is.
He fears that a showdown between humans and smarter-than-human intelligence would end in “total loss” for humanity, he wrote in TIME.
As a metaphor, he says, this would be like the “11th century trying to fight the 21st century”.
In short, humans would lose dramatically.
On March 29, leading AI experts published an open letter called “Pause Giant AI Experiments” that demanded an immediate six-month pause on the training of the most powerful AI systems.
However, the American theorist says he declined to sign this petition because it is “asking for too little to solve it”.
The threat is so great that he argues that extinction by AI should be “considered a priority above preventing a full nuclear exchange”.
He warns that the most likely result of building superhuman AI is that we will create “AI that does not do what we want, and does not care for us nor for sentient life in general.”
We are not ready, Yudkowsky admits, to teach AI how to be caring as we “do not currently know how”.
Instead, the stark reality is that in the mind of such an AI “you are made of atoms that it can use for something else”.
“If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.”
Yudkowsky is keen to point out that presently “we have no idea how to determine whether AI systems are aware of themselves”.
What this means is that scientists could accidentally create “digital minds which are truly conscious”, which raises all kinds of moral dilemmas, since conscious beings should have rights and not be owned.
Our ignorance, he warns, will be our downfall.
Because researchers cannot tell whether they are creating self-aware AI, he says, “you have no idea what you are doing and that is dangerous and you should stop”.
Yudkowsky claims that it could take decades to solve the problem of safety in superhuman intelligence – this safety being “not killing literally everyone” – and in that time we could all be dead.
The expert’s central point is this: “We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan.
“Progress in AI capabilities is running vastly, vastly ahead of progress… in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die.”
To avoid this earth-shattering catastrophe, Yudkowsky believes the only way is to halt all new large-scale AI training worldwide, with “no exceptions, including government and military”.
Underlining how seriously he takes the threat, the AI expert says that if anyone breaks this agreement, governments should “be willing to destroy a rogue datacenter by airstrike”.
“Have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature.”
Hammering his point home, Yudkowsky ends with: “If we go ahead on this, everyone will die.
“Shut it down.”