36% of Scientists Are Afraid Robots May Inflict "Judgment Day" on Humans in Near Future
"Maybe we have a problem."
More than one-third of scientists fear that robots will inflict a Terminator 2-style "judgment day" on humanity. That's according to a survey by researchers at NYU, in which 36% of the scientists polled found it "plausible that decisions made by AI or machine learning systems could cause a catastrophe this century that is at least as bad as an all-out nuclear war." Read on to find out if experts say you need to start practicing your Arnold Schwarzenegger impression.
In the 1991 film Terminator 2: Judgment Day, an artificial intelligence (AI) system gains sentience and takes over the world, inciting nuclear holocaust and a human-vs.-robot war. The study involved 327 scientists who had recently published research on AI, and 36% of those surveyed agreed that the disasters postulated in the movie may not be too far-fetched. The survey was sent to scientists involved in natural language processing, which involves building computers that replicate humans' ability to discern meaning and gather knowledge from language, the UK Times reported. "The survey was designed to shed light on how members of this community think about the controversial aspects of their work," the news outlet said.
Additionally, 57% said that there had already been "significant steps toward the development of artificial general intelligence," meaning a machine with the intellectual ability of a human. Sixty percent of the scientists also said the potential carbon footprint of AI was a "major concern." And 59% said that in their field "ethical considerations can sometimes be at odds with the progress of science."
Lead researcher Julian Michael acknowledged that some scientists in the study were likely thinking of "sci-fi renegade AI scenarios" in which rogue computers run amok. "On the other hand, some were definitely thinking of more classic geopolitical disasters with AI splashed in," he told the Times. "For example, if machine learning [a form of AI] is deployed in some critical applications, like a nuclear warning system and it makes a mistake . . . then maybe we have a problem."
"AI is far from developing into anything like Skynet," the AI system in Terminator 2, said Tim Persons, the chief scientist and managing director of the Government Accountability Office, in August 2019. "That would require AI technologies with broad reasoning abilities, sometimes called third-wave AI, which are highly unlikely in the foreseeable future." "Currently, the most advanced AI is still in its second wave, driven by what's called machine learning—in which algorithms use massive datasets to infer rules about how something works with little to no human guidance," he added. "In comparison, the first wave implemented rules created entirely by humans."
The NYU study's results come just months after Google suspended, then fired, a senior software engineer for saying that one of the company's unreleased AI systems, a chatbot named LaMDA, was sentient—meaning it had reached a level of consciousness. Last July, Google said it had reviewed the claims and called them "wholly unfounded." The company said it takes the development of AI "very seriously" and is committed to "responsible innovation." Some experts dismissed the engineer's claims. "Nobody should think auto-complete, even on steroids, is conscious," Gary Marcus, founder and CEO of Geometric Intelligence, told CNN Business.
But this new survey indicates that some scientists aren't so confident that will always be the case.