The Observer

Killer robot report

Killer robots should concern us all. Science-fiction author Isaac Asimov was also worried about them, and in the robot stories collected in “I, Robot” (and later woven into his epic Foundation series) he proposed three laws to keep robots from becoming a threat to people: 1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) a robot must obey any orders given to it by human beings, except where such orders would conflict with 1); and 3) a robot must protect its own existence as long as such protection does not conflict with 1) or 2). Asimov later added a “zeroth law,” an overarching principle that a robot may not harm humanity or, by inaction, allow humanity to come to harm. This zeroth principle arguably permits a robot to harm an individual if it is for the greater good of humanity.
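To see how the precedence works, consider a rough sketch, in Python, of the laws as an ordered checklist. It is purely illustrative and much simplified; the point is only that each law yields to the ones above it, and that the zeroth law can override the first.

# Purely illustrative: each rule is consulted only if the rules above it
# do not already settle the question.

def permitted(protects_humanity, harms_human, ordered_by_human, protects_self):
    if protects_humanity:      # zeroth law: humanity outranks everything else
        return True
    if harms_human:            # first law: no harm to an individual human
        return False
    if ordered_by_human:       # second law: obey, unless it conflicts above
        return True
    return protects_self       # third law: self-preservation comes last

# The troubling case: harming one person, justified as protecting the many.
print(permitted(protects_humanity=True, harms_human=True,
                ordered_by_human=False, protects_self=False))   # prints True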

The zeroth law came to mind earlier this year when a Dallas grand jury declined to bring charges against police officers who killed a suspect with a robot-delivered bomb. You may remember the incident that took place July 7, 2016, when five officers were killed by a sniper during a demonstration against police violence. Dallas police used a bomb mounted on a robot to kill the gunman who was holed up inside a parking garage.

Even at the time, questions were raised about the methods used to kill the sniper: he was surrounded by heavily armed police in a parking garage and could have been approached with armored vehicles and subdued with less deadly measures, such as tear gas. The sniper’s actions, while admittedly horrific, were never subjected to a trial; the use of the robot substituted for due process, including any assessment of the killer’s mental state. That lack of due process may have been especially critical, as the shooter was an African-American Army reservist who had served in Afghanistan; his anti-police animus may have been fueled by post-traumatic stress, but we’ll never really know.

The decision to use the robot was made after police had tried to negotiate an end to an hours-long standoff. An algorithm for the use of the killer police robot was taking shape: the killing of police plus mounting frustration permits the use of deadly force, delivered by a bomb-wielding robot (ultimately operated by a frustrated and angry person). What made the use of the killer robot in this instance acceptable to many people was the fact that the robot was not autonomous: a number of people were involved in handling and loading the bomb, directing the robot and making the decision to detonate it. In a disturbing development, an autonomous killer robot may have already come down the road.

First, let’s delve into what an algorithm does. As commonly used, the term describes a defined procedure that takes in data, assesses it and prescribes actions according to a given set of criteria. The general intention is to automate and speed up decision-making (“if x, then y; if y, then z”). When used properly in conjunction with human input, algorithms have expanded our abilities in medicine, aeronautics and communications. Without a doubt, algorithms have improved our lives in many ways. Algorithmic decision-making has also spread to the granting of loans, bail, benefits, college admissions and job interviews. The robotic approach to life is attractive because it promises machine-driven objectivity, but it is only as objective as the person behind the program.
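A toy example makes the point. The sketch below, written in Python with made-up rules and cutoffs, shows the “if x, then y” pattern applied to a loan application; every threshold in it was chosen by a person, which is exactly why the output is only as objective as its author.

# A deliberately simple, hypothetical loan screen. The cutoffs are not
# discovered by the machine; a person picked them, and the decision
# inherits whatever judgment (or bias) went into that choice.

def screen_loan_application(credit_score, annual_income, requested_amount):
    """Return 'approve', 'review' or 'deny' from fixed, human-chosen rules."""
    if credit_score >= 720 and requested_amount <= annual_income * 0.4:
        return "approve"    # if x, then y
    if credit_score >= 640:
        return "review"     # if y, then z: route to a human for a closer look
    return "deny"

print(screen_loan_application(credit_score=700, annual_income=52000,
                              requested_amount=25000))   # prints "review"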

Self-driving cars have long been a dream of artificial intelligence scientists, who have seen the complicated decisions and actions involved in driving a car as discrete, programmable steps that could be handled by dedicated computers. Unfortunately, the recent death of a woman hit by a self-driving car highlights how complicated algorithms can produce unforeseen and unintended results.

On March 18, a customized Uber Volvo XC90 drove down a four-lane road in Tempe, Arizona, at just over 40 mph. The SUV was driving autonomously, with no input from its human backup driver. The car’s radar and laser-based lidar sensors detected an object in the road ahead. Algorithms crunched through a database of recognizable mechanical and biological objects, searching for a match for the anomalous data they were receiving. The computer determined the object was another car, which its programming suggested would speed away as the robot Volvo approached.

Eventually it found a definite match: the object was a bicycle with shopping bags hung from the handlebars. The computer handed control over to the human operator only moments before the car reached the bike. The woman pushing the bicycle across the road was struck and killed. The complex algorithms piled upon algorithms had proven no match for a real-world situation that had not been programmed into the car’s computers. The flaw may have been as simple as a programmer who never imagined someone walking a bicycle home with groceries.
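To see how that kind of failure can unfold, here is a rough sketch of the general pattern in Python. It is an invented simulation, not Uber’s actual software; the labels, predictions and timings are assumptions made purely for illustration.

# A made-up perception loop: the same object keeps getting reclassified,
# each label carries its own assumption about what the object will do,
# and control is handed to the human only when a definite match finally
# puts the object in the car's path.

frames = [
    ("unknown object", "ignore"),
    ("vehicle",        "will speed away"),   # so keep driving at speed
    ("vehicle",        "will speed away"),
    ("bicycle",        "in our path"),       # definite match, far too late
]

def drive(frames, reaction_time_needed=2.0, seconds_per_frame=0.5):
    time_left = len(frames) * seconds_per_frame
    for label, prediction in frames:
        time_left -= seconds_per_frame
        if prediction == "in our path":
            print(f"Definite match ({label}); handing off with {time_left:.1f}s to react")
            return time_left >= reaction_time_needed
        # otherwise, keep driving and reclassify on the next sensor frame
    return True

print("Collision avoided:", drive(frames))   # prints: Collision avoided: False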

These incidents serve as reminders that computers are fast, not intelligent. The maxim that computers allow us to screw up twice as badly, in half the time, is not always true, but it is always a possibility unless we stay vigilant. Computer scientists sometimes speak of the “ghost in the machine” to describe the unintentional or unexpected results of complex software programs. Usually these outcomes are benign or merely frustrating, as when systems crash or simply stop working.

But sometimes, the ease and speed offered by automated systems is so alluring that we forget to ask the tough questions, especially when we focus on a desired outcome, be it swift “justice,” effortless travel or the “ideal” college student. The ultimate decision makers — the people writing the algorithms or sending in the bomb robots — must always be held accountable. The loss of even a single human life should never be seen as acceptable to further robotic autonomy. If we cede responsibility to the robots, humanity will not just come to harm, it will fade to a feeble ghostly presence in myriad machines.

Ray Ramirez is an attorney practicing, yet never perfecting, law in Texas while waiting patiently for a MacArthur Genius Grant. You may contact him at patrayram@sbcglobal.net

The views expressed in this column are those of the author and not necessarily those of The Observer.