It’s been almost six months since the United Nations urged the world’s armies to take a moment and think about what they’re doing before creating killer robots. Not an all-out ban like Human Rights Watch or the International Committee for Robot Arms Control are pushing for, mind you—just the suggestion to cool it for a bit and make sure the development of lethal machines isn’t going to lead to, you know, the destruction of the human race. Yet a couple of reports suggest the Pentagon has no intention of slowing its forward charge toward robot warfare.
Most recently, Computerworld reported that US Army leaders were testing out remote-controlled robots with machine guns. A handful of private companies that are developing weaponized robots demonstrated their products for the military last week at Fort Benning, Georgia, so officials could judge how they might be used in combat. Lieutenant Colonel Willie Smith said at the event that he hopes to unleash the killing machines onto the battlefield within five years, not as weapons in their own right but as trusted members of the squad.
Notably, the weaponized robots demonstrated last week are only semi-autonomous—a human holding the controller still has to make the decision to shoot. The real, looming threat futurists are concerned with is lethal autonomous robots—the key distinction being that fully autonomous machines can make the decision to kill on their own, without any human intervention. At this point there’s no telling if LARs will ever be used in battle, but according to a report by former intelligence analyst Joshua Foust, published in Defense One this month, it’s something the US is seriously considering.
Engineers and policy makers are working to study and develop drones that are increasingly autonomous and could eventually launch a missile at a target of their own volition, according to Foust. While that technology doesn’t exist yet, advances in artificial intelligence suggest it’s a matter of when, not if. DARPA is currently working on developing smart machines that mimic the human brain. The idea is that these futuristic machines will not only learn and think like a human but also think on the fly, making real-time decisions based on what’s going on around them.
As the Guardian noted back in May, the Pentagon spends about $6 billion a year to research and develop autonomous machines, but it has claimed the autonomous weapons will only be used for “non-lethal” purposes. Drones were intended to be for surveillance and reconnaissance only—not offensive warfare. But the spy planes have proven so useful as a weapon of war, it’s as if the Pentagon can’t help itself.
The reasons are many: Autonomous planes are precise enough to tell the difference between an enemy combatant and an innocent civilian, which could limit civilian casualties. They are smarter and more efficient than a human operator, and they avoid putting human lives in danger. They’re also less vulnerable to being hacked by the enemy if there’s no human control link to intercept.
Of course, for every good reason to have smart killing machines in your back pocket there’s an equally disconcerting one. For starters, there’s the concern that humans will become detached from the consequences of killing other people—not awesome. And while it’s becoming increasingly clear that you can teach a computer to think, teaching a computer to feel is another thing altogether; in other words, machines don’t have morals. Second, it’s in direct violation of rule no. 1 of Asimov’s Three Laws of Robotics: a robot may not injure a human being. And third, the machines will obviously eventually rise up against the humans and kill us all.
Which brings us back to the UN’s sensible recommendation for a global moratorium on developing LARs. The question is, are modern militaries with increasingly sophisticated technology at their fingertips going to hold back?