Published in Can we stop the rise of killer robots? (BLDG20)

A bearded man in a turban drives a white van along a bumpy road in an unnamed desert country. He approaches a military checkpoint but does not slow down for inspection. Cue the ominous music and dramatic confrontation sequence. An autonomous computer vision system detects an anomaly and acquires the target. It slews its weapon to track the van’s movement, and requests permission to fire from its operators. Camouflaged men kneeling over a laptop view the scene, the van bracketed by red crosshairs. They agree with the threat identification algorithm’s assessment, and press the button to open fire. The gun sprays bullets towards the van, igniting it in a violent conflagration. The video stops, the lights come up, and the room erupts in applause.

I am sitting in a nondescript conference center in San Diego, attending the 2007 meeting of the Association for Unmanned Vehicle Systems International, also known as the “drone lobby.” I am a young engineer in my first job out of college, sent directly into the belly of the military-industrial beast. The video was presented by one of the top-tier defense contractors that sponsored the conference. It represents their vision of the future.

How did I get here? The previous spring, as I approached graduation, I had walked through the MIT Career Fair with my freshly printed resume, looking for a job. I passed desks for Lockheed, Boeing and the CIA, staffed by men and women in suits with dour faces. Then I arrived at a small desk staffed by an eager young man wearing a plain white t-shirt and jeans. “Do you like robots?” he asked. “Want to help people?” I sure did. I had long tried to merge my technical interests with positive causes, trying internships at policy think tanks in Washington DC and California, and software jobs near Boston. But none of them seemed to fit, and I was looking for a new direction.

“Our robot is designed to help people; to save them from danger.” It stood six foot four on two tracked legs, with hydraulic tubes snaking out of its arms like sinewed muscles. Atop this imposing body it had a small head, with a smooth plastic face and cameras for eyes. The recruiter said it was designed to put the person it was rescuing at ease. It looked like maybe it would be your friend afterwards.

I got an interview with the small robotics company, and eventually a job. I worked on computer simulations to develop motion controls, watching our robot lift blocks on an infinite green digital landscape. I wrote software to help it balance on the ends of its legs, and eventually got to test it on the real hardware. For about a month it was my job to kick the robot, validating that the controls were robust enough to recover from a disturbance. They worked, and our robot stood tall. It could carry the weight of a dummy soldier, and do arm curls with 300-pound bars.

The work was interesting, the pay was good, and my colleagues were congenial. But I began to notice something was off when I tried to bring up current events over lunch at the salad bar.

The Iraq war was still raging, and drone strikes were beginning to become commonplace. I was curious about how the other engineers felt about this, and the impact our work might have on the field. Our robot wasn’t designed to be weaponized, but it was funded by the military, and they would have ultimate control over its use in deployment. I had been in discussions where our funders had asked if it could conduct “point man operations”: putting a gun on it and sending it in first.

“But our robot won’t hurt people; it will fight other robots,” said one fresh-faced engineer. As if the purity of our intentions were enough to imbue the robot with compassion and empathy. “But that’s not how it works,” I replied. “We’re fighting an asymmetric war. The people on the other side don’t have their own robots.” This had not occurred to him. “That’s not our fault,” he said. “And besides, we’re just building a tool. It’s up to the military to decide how it gets used.”

This line of argument is a familiar one to technical people, who claim that what they build is apolitical, and that the messy real world of ethics isn’t worth worrying about until the technology is out of the lab. It was a familiar argument to my grandfather, who worked on what became the Manhattan Project when it was still at the University of Chicago. He was a graduate student in nuclear physics, but wasn’t asked to come along when the project moved to Los Alamos. Ostensibly it was because of his personal politics: they said he watched too many Russian movies.

Was I working on a similarly destructive project?

Any student of science, technology and society recognizes that technology has politics, and that the systems we design reflect our inherent worldview. Our robots do not follow some magical set of Asimov’s three laws; they are built by men and corporations to project our power across the globe. Their programming reflects our beliefs and biases.

The robot at the checkpoint in the video had a pre-programmed set of characteristics to look for. A large vehicle, driving quickly, not stopping at the checkpoint: that met its probabilistic criteria. It “asked” the soldiers for permission to fire, but they didn’t double-check the program’s math; they pressed the red button and fired, destroying the van and any notion of the rules of engagement.

How different was this (fictional) behavior from that of the Apache gunners in the WikiLeaks video “Collateral Murder,” or the Blackwater mercenaries in Nisour Square? Are robot operators exempt from ethics because the software chose the target? Is the programmer responsible for any innocent civilians that his software marked for death?

The keynote speaker at the conference played another video to close his presentation, this one from the Star Wars series. A robot army descended from spaceships, the machines unfolding from their racks and deploying to fight the natives, who were vastly outnumbered and outgunned by the invaders. “This is what I want; can you build it for me?” the general said, seemingly without irony. “We’re working on it, sir.”

– Josh Levinger is the Director of Technology at the Citizen Engagement Lab, where he builds online tools to empower political campaigns.