Artificial intelligence is changing the nature of everything from jobs and the economy, to warfare, communications, privacy, and ethics. But its long-term impact remains to be seen. Will A.I. lead to a better, brighter future, or move us toward disaster?
“Like every powerful technology, A.I. is potentially dangerous,” said Facebook’s well-regarded A.I. Research Director Yann LeCun, speaking at a Data Driven NYC event on Tuesday.
The question of whether A.I.’s negative effects are likely to outweigh its positive ones is a hot topic for debate and speculation. Scientific analysis has not been comprehensive or sustained enough to offer any real insight. But a group of scientists is setting out to change that, conducting not just one study but a full century of them, to take place at least once every five years for the next 100 years.
“The One-Hundred Year Study of Artificial Intelligence,” hosted by Stanford University and led by Microsoft Research director Dr. Eric Horvitz, will monitor A.I.’s advances and publish findings every five years in 18 different areas of interest, including key opportunities, democracy and freedom, law, criminal uses, human-machine collaboration, autonomy, and loss of control.
The study’s first report is scheduled to come out by the end of 2015, with additional reports to be published thereafter. Each will be produced by a group of experts chosen by Horvitz and a committee of computer scientists from leading research universities in the U.S. and Canada.
A major worry about artificial intelligence is that it’s seizing our jobs, displacing more and more human workers as robots grow increasingly sophisticated. That may put non-managerial positions especially at risk. Sebastian Thrun, the robotics developer who worked on Google’s driverless car, seems to agree. “My take is that A.I. is taking over,” Thrun told The New York Times. “A few humans might still be ‘in charge,’ but less and less so.”
Some say, though, that robots aren’t taking our jobs, but rather redefining them. Because A.I. requires training to recognize patterns of data and put them into context, humans remain key in that process. Humans supply inputs, telling the A.I. system how to properly interpret the data so that it can produce the correct outputs. As computer scientist Jaron Lanier said at Techonomy 2014 in November, “[People are] still needed. It’s just that they’re needed in a new way. … The new way of doing work is adding data to the cloud. The new way of doing work is adding valuable data to a big data statistical system.”
But beyond job security, some say A.I. endangers our very survival. “Loss of control of A.I. systems has become a big concern,” Horvitz told the Times. “It scares people.” Indeed, an April 2014 survey led by a professor at Middlesex University found that more than a third of Brits worried machines threatened their future. And public anxiety could swell further as leading thinkers express their fears, too.
Earlier this month, the renowned physicist Stephen Hawking, in a surprisingly alarmed tone, told the BBC: “The development of full artificial intelligence could spell the end of the human race.” Tesla CEO Elon Musk echoed the sentiment at the October MIT AeroAstro Centennial Symposium, saying, “[W]e are summoning the demon” with artificial intelligence, and calling it perhaps “our biggest existential threat.”
When Musk’s warning was put to LeCun at the New York event, he said that we’re a long way from building machines that are smarter than humans. With time on their side, scientists can proceed with care and caution when deciding how much autonomy to give A.I., he continued. He also emphasized the difference between intelligence and autonomy, explaining: “You can have systems that are intelligent but not autonomous. They can solve problems but don’t decide by themselves what problems to solve.”
Whatever the outcomes of artificial intelligence, Horvitz believes the One-Hundred Year Study of A.I. could help bring clarity and ease apprehension. “Even if the anxieties are unwarranted, they need to be addressed,” he told the Times.