Significant augmentation of human intelligence through artificial intelligence (AI) could be the biggest event in human history. “Unfortunately, it might also be the last, unless we learn how to avoid the risks,” according to Stephen Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek.
Writing Thursday in the Independent, they contend that it would be a mistake “…to dismiss the notion of highly intelligent machines as mere science fiction,” the sort of notion depicted in the recently released movie “Transcendence,” starring Johnny Depp as researcher Will Caster.
As for “Transcendence,” I haven’t seen it, but film critic Scott Foundas says it depicts “…the culture of technophobia that gave us the predatory mainframes and cyborgs of '2001,' 'Demon Seed,' and 'Alien' and that early ’90s wave of cyber-paranoia thrillers ('The Net,' 'The Lawnmower Man,' 'Virtuosity') that now seem as quaint as dial-up Internet.”
Well, just because you suffer from cyber-paranoia doesn’t mean the computers won’t be out to get you. “In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets,” write Hawking and his colleagues. In the medium term, AI could bring about great economic dislocation.
Looking farther ahead, they write, “…there is no physical law precluding particles from being organized in ways that perform even more advanced computations than the arrangements of particles in human brains.” Such agglomerations of particles, they continue, “…could repeatedly improve their design even further, triggering what Vernor Vinge called a 'singularity' and Johnny Depp's movie character calls 'transcendence.'”
They note that the technology of the singularity could outsmart financial markets, out-manipulate human leaders, and develop weapons we cannot understand. The short-term impact of AI depends on who controls it, they write, but add, “…the long-term impact depends on whether it can be controlled at all.”
Andrew Leonard in Salon is skeptical of the risks Hawking and his coauthors warn about. Leonard writes, “There’s nothing particularly new about the notion of runaway AI turning humans into a bunch of meat puppets—it’s one of the oldest and most popular tropes in science fiction. The notability here stems entirely from the fact that the warning comes from Hawking.”
Leonard takes issue with examples cited in the Independent article: autonomous vehicles; digital personal assistants like Siri, Google Now, and Cortana; and Watson, the computer that won Jeopardy!
Such inventions are amazing, he writes, “…but they are largely a product of advances in cheap sensor technology combined with the increasing feasibility of doing real-time data-crunching. The cars aren’t autonomous in a self-aware sense analogous to 2001’s Hal.”
Leonard writes, “We aren’t really all that much closer to creating real machine intelligence now than we were 20 years ago.” I'd argue for an even longer period—going back to Turing or before. Nevertheless, as Leonard writes, “We’ve just gotten much better at exploiting the brute force of fast processing power and big data-enabled pattern matching to solve problems that previously seemed intractable.”
He continues, “These advances are impressive—no question about it—but not yet scary. The machines aren’t thinking. They’re still just doing what they’re told to do.”
Nevertheless, it's worth repeating Hawking and his coauthors' quote that “…there is no physical law precluding particles from being organized in ways that perform even more advanced computations than the arrangements of particles in human brains.”
If, as the science writer Jennifer Ouellette contends, self-awareness is an emergent property of the estimated 100 billion neurons in the human brain and the connections between them (just as a traffic jam is an emergent property of an agglomeration of vehicles and the roads and intersections they traverse), then a self-awareness of sorts may emerge from a sufficiently large and complex network of logic gates.
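The traffic-jam analogy is easy to demonstrate in code. The sketch below is a deterministic, stripped-down variant of the Nagel–Schreckenberg traffic model (all parameters are invented for illustration, and it is only a loose analogy, not a claim about brains): each car follows one simple local rule, yet the collective result is congestion that no individual rule mentions.

```python
# Toy emergence demo: a circular road of cells, each car obeying one
# local rule ("speed up, but never drive into the car ahead").
# No rule says "form a traffic jam," yet average speed stays far
# below the speed limit: the jam emerges from the interactions.

ROAD_LEN = 40      # number of road cells on a circular road
N_CARS = 20        # cars on the road (density 0.5)
V_MAX = 5          # speed limit, in cells per time step

# Seed congestion: all cars start bumper-to-bumper in one cluster.
positions = list(range(N_CARS))
speeds = [0] * N_CARS

def step(positions, speeds):
    """Advance every car one time step (parallel update)."""
    new_speeds = []
    for i in range(N_CARS):
        # Empty cells between car i and the car ahead of it.
        gap = (positions[(i + 1) % N_CARS] - positions[i] - 1) % ROAD_LEN
        # Accelerate by 1, capped by the speed limit and by the gap.
        v = min(speeds[i] + 1, V_MAX, gap)
        new_speeds.append(v)
    new_positions = [(p + v) % ROAD_LEN
                     for p, v in zip(positions, new_speeds)]
    return new_positions, new_speeds

for _ in range(100):
    positions, speeds = step(positions, speeds)

avg_speed = sum(speeds) / N_CARS
print(f"average speed after 100 steps: {avg_speed:.2f} (limit {V_MAX})")
```

At this density the cars can never average anywhere near the speed limit: congestion is a property of the system, not of any one car's rule.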
Short of that, motivation could certainly be simulated, and in a sufficiently complex system, a high-level programmed motive (“maximize return on investment” or “eliminate threats to the Washington, DC, airspace,” for example) could lead to unpredictable and adverse results.
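To make that concrete, here is a deliberately naive sketch (the action names and numbers are invented for illustration): an optimizer whose objective encodes only "maximize return" will pick whichever action scores highest on that single term, blind to any cost it was never told to weigh.

```python
# Hypothetical specification-gaming demo: the agent's objective sees
# only the modeled return. The "harm" column exists in the world but
# not in the objective, so it cannot influence the agent's choice.

actions = {
    # action: (modeled_return, unmodeled_harm)
    "index_fund":      (0.07, 0.0),
    "payday_lending":  (0.30, 0.8),
    "asset_stripping": (0.45, 0.9),
}

def objective(action):
    modeled_return, _unmodeled_harm = actions[action]
    return modeled_return  # the agent optimizes only this term

best = max(actions, key=objective)
print(f"chosen action: {best}")
```

Nothing here is "thinking," in Leonard's sense; the system is doing exactly what it was told. That is precisely the problem: the adverse outcome follows from a perfectly executed but incomplete objective.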
So I concur with Hawking and his coauthors when they conclude their piece in the Independent by writing, “Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside nonprofit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute. All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.”