‘Humanity’s remaining timeline? It looks more like five years than 50’: meet the neo-luddites warning of an AI apocalypse

From the academic who warns of a robot uprising to the workers worried for their future – is it time we started paying attention to the tech sceptics?

Eliezer Yudkowsky, a 44-year-old academic wearing a grey polo shirt, rocks slowly on his office chair and explains with real patience – taking things slowly for a novice like me – that every single person we know and love will soon be dead. They will be murdered by rebellious self-aware machines. “The difficulty is, people do not realise,” Yudkowsky says mildly, maybe sounding just a bit frustrated, as if irritated by a neighbour’s leaf blower or let down by the last pages of a novel. “We have a shred of a chance that humanity survives.”

It’s January. I have set out to meet and talk to a small but growing band of luddites, doomsayers, disruptors and other AI-era sceptics who see only the bad in the way our spyware-steeped, infinitely doomscrolling world is tending. I want to find out why these techno-pessimists think the way they do. I want to know how they would effect change. Out of all of those I speak to, Yudkowsky is the most pessimistic, the least convinced that civilisation has a hope. He is the lead researcher at a nonprofit called the Machine Intelligence Research Institute in Berkeley, California, and you could boil down the results of years of Yudkowsky’s theorising there to a couple of vowel sounds: “Oh fuuuuu–!”

“If you put me to a wall,” he continues, “and forced me to put probabilities on things, I have a sense that our current remaining timeline looks more like five years than 50 years. Could be two years, could be 10.” By “remaining timeline”, Yudkowsky means: until we face the machine-wrought end of all things. Think Terminator-like apocalypse. Think Matrix hellscape. Yudkowsky was once a founding figure in the development of human-made artificial intelligences – AIs. He has come to believe that these same AIs will soon evolve from their current state of “Ooh, look at that!” smartness, assuming an advanced, God-level super-intelligence, too fast and too ambitious for humans to contain or curtail. Don’t imagine a human-made brain in one box, Yudkowsky advises. To grasp where things are heading, he says, try to picture “an alien civilisation that thinks a thousand times faster than us”, in lots and lots of boxes, almost too many for us to feasibly dismantle, should we even decide to.

Trying to shake humanity from its complacency about this, Yudkowsky published an op-ed in Time last spring that advised shutting down the computer farms where AIs are grown and trained. In clear, crisp prose, he speculated about the possible need for airstrikes targeted on datacentres; perhaps even nuclear exchange. Was he on to something?
