Robot Apocalypse
Terminator. Source: http://graphicallya.blogspot.com/2012/04/esthetique-terminator.html


Artificial intelligence is coming. With IBM's Watson winning Jeopardy! and the most convincing Turing test performance to date, it's undeniably on its way. And with it comes a whole slew of prophetic warnings. Through books like Isaac Asimov's I, Robot and movies like The Terminator and 2001: A Space Odyssey, we've all been exposed to the potential for technology to revolt against the human race.

Already there are concerns about the deployment of robots in the military, on the road, and even in hospitals. In an article for The Economist, Tom Standage suggests that we need to give these machines some ability to navigate moral situations. He says, “As they become smarter and more widespread, autonomous machines are bound to end up making life or death decisions in unpredictable situations, thus assuming—or at least appearing to assume—moral agency.”

How do we imbue autonomous machines with the ethics that will allow them to integrate into our society instead of ultimately destroying it? Elon Musk, CEO of Tesla and a prominent technology innovator, believes that we can’t. Within the last couple of months, Musk has repeatedly warned against the use and development of such advanced technologies, citing the potential for a Terminator-esque outcome. And of course there’s always the option of banning the widespread use of autonomous technologies, or of never developing them at all.

However, that brings moral complications of its own. Do we have an obligation to use robots if doing so will prevent loss of life? Standage claims that “Robot soldiers would not commit rape, burn down a village in anger or become erratic decision-makers amid the stress of combat” and that “Driverless cars are very likely to be safer than ordinary vehicles, as autopilots have made planes safer.”

Our society has an inherent distrust of artificial intelligence, fueled by Hollywood’s depictions of a future torn apart by a creation we cannot control. We know that, however much we fear the ashy, smoke-streaked, rubble-strewn future of The Terminator, it isn’t real. Nevertheless, these images, like Musk’s comments, could help create what Adam Thierer, a specialist in technology law and policy, dubs a “technopanic”: a “moral panic over a vague, looming technological threat driven by crowd irrationality and threat inflation rather than sensible threat assessment.”

Artificial intelligence does, theoretically, have the potential to be extraordinarily destructive, but it isn’t yet anywhere close to the independence or capability depicted by Hollywood. That day may be coming, but it hasn’t arrived. There’s still time to shake off the technopanic and think this through deliberately.

Some claim that we should program machines to conform to Isaac Asimov’s Three Laws of Robotics, but the entire plot of I, Robot centers on the difficulties that arise when those laws are subverted. Others, like Zoltan Istvan, recommend that we design a system of ethics entirely unique to autonomous machines and artificial intelligence. They would be, after all, a new kind of being, and so a new moral code would make sense.
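To see why the Three Laws make a shaky foundation for actual software, consider a toy sketch in Python of the laws as a strict priority ordering. Everything here is hypothetical, not any real robotics system, and the boolean model is deliberately naive; the point is how quickly clean rules collide with messy situations.

    # Toy illustration only: Asimov's Three Laws encoded as a strict
    # priority ordering over candidate actions. All names are invented.
    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        harms_human: bool      # Law 1: may not injure a human being
        disobeys_order: bool   # Law 2: must obey orders given by humans
        harms_self: bool       # Law 3: must protect its own existence

    def permitted(action: Action) -> bool:
        """Apply the laws in priority order; a higher law always wins."""
        if action.harms_human:
            return False       # First Law is absolute
        if action.disobeys_order:
            return False       # Second Law yields only to the First
        if action.harms_self:
            return False       # Third Law yields to both above
        return True

    # The trouble starts immediately: real situations rarely reduce to
    # clean booleans. Is refusing a risky rescue "injuring a human through
    # inaction"? These flags cannot express that, which is exactly the
    # brittleness the novel's plot turns on.
    print(permitted(Action("hand over tool", False, False, False)))  # True
    print(permitted(Action("ignore command", False, True, False)))   # False

Even in this stripped-down form, the hard part isn’t the precedence logic; it’s deciding who sets those flags, and how, in an ambiguous world.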

Most people, however, immediately jump to the solution of programming these machines with human morality, along with “love” and “kindness” and “forgiveness.” But this presupposes both that such qualities are programmable and that we agree on what they are. The question is: how do we implement a moral code we don’t even consistently adhere to ourselves? If we are obligated to use robots to save lives and improve society, how do we teach them right from wrong?

Posted by Alex Jensen
