Recent Geneva convention achieves only lukewarm results toward a total ban
I read a story the other day on the National Public Radio (NPR) website titled “Weighing the Good and the Bad of Autonomous Killer Robots in Battle.” I was struck by how the “bad” seemed quite clear and straightforward, while the “good” consisted of murky theorizing and money-motivated promotion. The very existence of the article in a reputable publication is disturbing.
On the “cons” side, a Harvard Law School professor points out that this is, first and foremost, a human rights issue. She says, “It would undermine human dignity to be killed by a machine that can’t understand the value of human life.” She also points out that holding anyone responsible for war crimes would become difficult, if not impossible. Case closed, right?
Well, not quite. A representative from the Center for a New American Security wants you to hear his side of the story. Or theory. Or something… “What exactly is a decision, anyway?” (Oh, it’s a really easy question!) “Is my Roomba making a decision when it bounces off the couch and wanders around? Is a land mine making a decision? Does a torpedo make a decision?” asks the “ethical autonomist.” (More easy questions!) The answers? No, no, and… no. That’s because the answer to the original question is that a decision is “a conclusion or resolution reached after consideration.” (Merriam-Webster)
Only living things are capable of real consideration. Those who argue otherwise seem to fall into at least one of two categories: a) they spend too much time with computers, or b) they have a financial stake in the acceptance or rejection of new weapons technologies.
It’s clear that people are already getting paid to do public relations work for the robot manufacturers. If this particular, obvious disaster-in-the-making becomes a working reality, the usual suspect will have caused us to look the other way: greed. Avarice may be the great ethics eraser of the ages.
Setting aside the (many) people who stand to make, and are making, loads of money from developing, building, and, most importantly, justifying these technologies, most rational people would say, “Hmm… ‘autonomous killer robots’… No. Next!” Let’s not forget the citizen war enthusiasts whose interest is in the “cool” factor. This type of warfolk adds fuel to the fire – and they vote.
The great freedoms that come with our mix of capitalism and the First Amendment require a prudence and responsibility that, these days, seem like a faint morning mist: actively sought or completely missed. Only 14 countries have called for a total ban on killer robots. The U.S. is not one of them.
For the moment, it seems the richest, most powerful nation in the world is okay with this attitude: “War is not the problem. It’s how we kill that needs improvement. All we need is more technology and everything will be just fine.”
My upcoming novel, ULTIMATE ERROR, is about the potential for an existential-level catastrophe caused by a human mistake.