Artificial intelligence experts have been asking each other a question lately: "What's your p(doom)?"

It's both a dark in-joke and potentially one of the most important questions facing humanity.

The "doom" component is more subjective, but it generally refers to a sophisticated and hostile AI, acting beyond human control.

So your p(doom), if you have one, is your best guess at the likelihood - expressed as a percentage - that AI ultimately turns on humanity, either of its own volition, or because it's deployed against us.

The scenarios contemplated as part of that conversation are terrifying, if seemingly far-fetched: among them, biological warfare, the sabotage of natural resources, and nuclear attacks.

These concerns aren't coming from conspiracy theorists or sci-fi writers, though.

Instead, there's an emerging group of machine learning experts and industry leaders who are worried we're building "misaligned" and potentially deceptive AI, thanks to current training techniques.

But now, he believes we're travelling too quickly down a risky path.

They're imagining an AI with a penchant for sleight of hand, adept at concealing any gap between human instructions and AI behaviour.

"We don't know how much time we have before it gets really dangerous," Professor Bengio says.

"What I've been saying now for a few weeks is 'Please give me arguments, convince me that we shouldn't worry, because I'll be so much happier.'"

Speaking with Background Briefing, Professor Bengio shared his p(doom), saying: "I got around, like, 20 per cent probability that it turns out catastrophic."

Professor Bengio arrived at the figure based on several inputs, including a 50 per cent probability that AI would reach human-level capabilities within a decade, and a greater than 50 per cent likelihood that AI or humans themselves would turn the technology against humanity at scale.