Robots may be coming for our jobs, but there is no reason to fear they will achieve world domination, despite the warnings of some AI experts such as Oxford professor Nick Bostrom.

That's according to Timothy B. Lee, writing at Vox. “Movies like the Terminator franchise and the Matrix have long portrayed dystopian futures where computers develop superhuman intelligence and destroy the human race—and there are also thinkers who think this kind of scenario is a real danger,” he writes, in part in response to comments from Bostrom in an earlier post at Vox.

He cites five reasons why we won't face a machine-ruled dystopian future.

First, he writes, genuine intelligence requires more than raw computational power. As an example, he imagines locking a brilliant English speaker in a room with stacks of books about Chinese: that person will never become fluent in Chinese without interacting with Chinese speakers to learn subtle shades of meaning and social conventions.

“Most of the information you need to solve hard problems isn't written down anywhere, so no amount of theoretical reasoning, on its own, will get you to the right answers,” he writes. He might have written that much of what is required for genuine intelligence hasn't been digitized.

Second, he says, machines are very dependent on humans—for energy, raw materials, and repair—and are likely to remain that way. He concedes the possibility of robots that could tend to machines and to each other, but says that's unlikely "…due to a problem of infinite regress: robots capable of building, fixing, and supplying all the machines in the world would themselves be fantastically complex. Still more robots would be needed to service them. Evolution solved this problem by starting with the cell, a relatively simple, self-replicating building block for all life. Today's robots don't have anything like that and (despite the dreams of some futurists) are unlikely to any time soon."

Third, he addresses the argument of Bostrom and others that scientists will be able to emulate the human brain. Lee responds, “Neurons are complex analog systems whose behavior can't be modeled precisely the way digital circuits can. And even a slight imprecision in the way individual neurons are modeled can lead to a wildly inaccurate model for the brain as a whole.”

He likens human-brain emulation to weather simulation, in which small errors early on snowball into large errors later.
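The weather analogy rests on the familiar sensitivity to initial conditions of chaotic systems. A minimal Python sketch (using the textbook logistic map as a stand-in for any chaotic simulation—not an example from Lee's article) shows how a one-in-a-million difference in starting state swamps the result within a few dozen steps:

```python
# Sensitivity to initial conditions: two runs of the logistic map,
# x -> r * x * (1 - x), which is chaotic at r = 4.0.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a = 0.4            # first run's starting state
b = 0.4 + 1e-6     # second run, off by one part in a million
diffs = []
for step in range(50):
    a, b = logistic(a), logistic(b)
    diffs.append(abs(a - b))

print(f"difference after 1 step: {diffs[0]:.1e}")   # still tiny
print(f"largest difference seen: {max(diffs):.2f}")  # order of the whole range
```

The same snowballing—tiny modeling errors in individual neurons compounding across the whole system—is what Lee argues would make a whole-brain emulation wildly inaccurate.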

Fourth, he writes, “To get power, relationships are more important than intelligence.” He illustrates this point with a picture of the current and last four Presidents of the United States. Societies are not run by scientists, philosophers, or chess prodigies, he writes, adding, “Any plausible plan for taking over the world would require the cooperation of thousands of people. There's no reason to think a computer would be any more effective at enlisting their assistance for an evil plot than a human scientist would be.”

Finally, he says, as intelligence becomes more pervasive, it will become less valuable. He concludes, “In a world of abundant intelligence, the most valuable resources will be those that are naturally limited, like land, energy, and minerals.”

Published since 1962, Evaluation Engineering delivers in-depth technical information to the test engineering market.