Robots will not achieve world domination


  By Rick Nelson, August 28, 2014

Robots may be coming for our jobs, but there is no reason to fear that they will achieve world domination, despite warnings from AI experts such as Oxford professor Nick Bostrom.

That's according to Timothy B. Lee, writing at Vox. "Movies like the Terminator franchise and the Matrix have long portrayed dystopian futures where computers develop superhuman intelligence and destroy the human race—and there are also thinkers who think this kind of scenario is a real danger," he writes, in part in response to comments from Bostrom in an earlier post at Vox.

He cites five reasons why we won't face a machine-ruled dystopian future.

First, he writes, genuine intelligence requires more than raw computational power. As an example, he imagines locking a brilliant English speaker in a room with stacks of books about the Chinese language. That English speaker will never become fluent in Chinese without interacting with Chinese speakers to learn subtle shades of meaning and social conventions.

"Most of the information you need to solve hard problems isn't written down anywhere, so no amount of theoretical reasoning, on its own, will get you to the right answers," he writes. He might have written that much of what is required for genuine intelligence hasn't been digitized.

Second, he says, machines are very dependent on humans—for energy, raw materials, and repair—and are likely to remain that way. He concedes the possibility that robots can be developed that can tend to machines and each other. He says that's unlikely "…due to a problem of infinite regress: robots capable of building, fixing, and supplying all the machines in the world would themselves be fantastically complex. Still more robots would be needed to service them. Evolution solved this problem by starting with the cell, a relatively simple, self-replicating building block for all life. Today's robots don't have anything like that and (despite the dreams of some futurists) are unlikely to any time soon."

Third, he addresses the argument of Bostrom and others that scientists will be able to emulate the human brain. Lee responds, "Neurons are complex analog systems whose behavior can't be modeled precisely the way digital circuits can. And even a slight imprecision in the way individual neurons are modeled can lead to a wildly inaccurate model for the brain as a whole."

He likens human-brain emulation to weather simulation, in which small errors early on snowball into large errors later.
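The weather analogy is easy to demonstrate. As a minimal sketch (my illustration, not Lee's), the short Python program below iterates the logistic map, a textbook chaotic system, from two starting points that differ by one part in ten billion; within a few dozen steps the two runs disagree completely, the same kind of error growth that limits weather forecasts and, by Lee's argument, precise brain emulation:

    # Toy illustration (not from Lee's article): sensitive dependence on
    # initial conditions in the logistic map, x_next = r * x * (1 - x),
    # with r = 4.0 in the chaotic regime.

    r = 4.0
    x_a = 0.3          # the "true" initial state
    x_b = 0.3 + 1e-10  # the same state with a tiny modeling error

    for step in range(1, 61):
        x_a = r * x_a * (1 - x_a)
        x_b = r * x_b * (1 - x_b)
        if step % 10 == 0:
            print(f"step {step:2d}: error = {abs(x_a - x_b):.3e}")

    # The gap grows from about 1e-10 to order 1 by roughly step 40,
    # at which point the two simulations no longer predict each other.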

Fourth, he writes, "To get power, relationships are more important than intelligence." He illustrates this point with a picture of the current and last four Presidents of the United States. Societies are not run by scientists, philosophers, or chess prodigies, he writes, adding, "Any plausible plan for taking over the world would require the cooperation of thousands of people. There's no reason to think a computer would be any more effective at enlisting their assistance for an evil plot than a human scientist would be."

Finally, he says, as intelligence becomes more pervasive, it will become less valuable. He concludes, "In a world of abundant intelligence, the most valuable resources will be those that are naturally limited, like land, energy, and minerals."


Comments

By C.G. Masi on August 29, 2014

Sorry, Rick. On this one I have to disagree with you. There are only two differences between us (humans) and them (robots): technology used for fabrication, and complexity. Fabrication technology is arguably not important in the end. Complexity, however, is. Projecting well into the future, we can easily imagine robots reaching a complexity level where they ask, "What's in it for me?" If we're stupid enough to let them think they're better off without us, we'll have a problem. It's just like the doomsday scenario presented by "Mutually Assured Destruction" via nuclear weapons. So far, by the time any culture has reached the level where they can make it happen, they've reached a level where they're sophisticated enough to see how dumb it would be. Of course, there aren't any guarantees that it'll continue to work out that way, so we'd better continue to be aware of the danger.

By Wiliam Ketel on August 29, 2014

I am not really concerned about robots taking over as dictators, but rather that they can certainly make some aspects of our existence quite miserable. We see that already in things like the robotic stock traders and robotic phone systems refusing to do what we need. Robotic driverless cars that will never be in a hurry will bring our roads to a crawl. So the damage by robots will not be like the movies, but more like a growing misery.

By Radio Randy on August 29, 2014

Robots may not be able to "consciously" overrun the human race, but it is feasible that a malevolent group of humans could build a mass of killing machines and release them on the world. The robots would eventually "die" off, but not before decimating the human race. The one bright spot in this scenario is that humans are very resilient, and enough could survive to repopulate the world and start over, as in the biblical account of Noah's Ark. Just a thought...

By fred mcgalliard on August 29, 2014

Our greatest danger is that our computer-controlled systems will do exactly what we tell them to. And real artificial intelligence implies real, though artificial, feelings. Without that, all we get is a chunk of hardware that will act when we request it. Best we understand ourselves before we get too far along this path.
