Police Use Robot to Kill Sniper

The infamous murder of five police officers during a Black Lives Matter protest in Dallas, Texas, has brought to light a new police method for controlling the threat of snipers: the use of killer robots.

In an effort to avoid putting more police officers in danger during the attack, the Dallas police force used a robot to deliver an explosive device that ultimately ended the sniper’s life.

While robots and unmanned systems have long been in use by the US military (drones, for example, are commonly used to drop bombs in locations where the military believes terrorist leaders are hiding), this incident marks the first time that American police have used such technology to kill a suspect. The military has estimated that about 2,500 “combatants” have been killed in 473 drone strikes, along with perhaps 100 non-combatants accidentally hit in the attacks. Many critics have voiced suspicions that these numbers are too low.

“Other options would have exposed our officers to greater danger,” the Dallas police chief said in a statement after the event. Still, the incident has spawned a debate over police use of “killer robots.”

Supporters of the military’s use of automated technology for killing generally argue that drones are more effective than manned planes and reduce the risk to the pilots who would otherwise have to fly them. Those against the military’s use of drones argue that the reduced risk afforded by automated technology represents a dangerous fundamental change to the nature of military operations, because it lowers the stakes of using lethal force.

South Korea uses robotic guns to guard the demilitarized zone it shares with North Korea. The guns are equipped with heat and motion detectors as well as long-range weapons and speakers that warn anyone detected entering the zone. South Korean officials have argued that the gun bots offer a major advantage over armed soldiers: they neither tire nor fall asleep, a problem that plagues many human sentries. That said, the robots do not fire of their own accord; when a robot detects a potential threat, it sends a message to a command center, where a human ultimately decides whether to use force.

That brings us back to the Dallas killer robot. Central to the arguments both for and against its use is the fact that the robot was under human control for the entirety of its deployment. The technological and moral challenge surrounding killer robots is usually framed as a problem of automated killing; that was not the case in Dallas, though the police department’s use of the technology still attracted a fair amount of public attention.

Photo: Local and federal investigators work the scene in Dallas on Friday, July 8, 2016, after a peaceful protest over the recent videotaped shootings of black men by police turned violent Thursday night, when an unknown number of people shot at officers, killing five and injuring seven, as well as two civilians. (AP Photo/Gerald Herbert)

In the case of a truly autonomous killer robot, no human would be involved in the decision to fire. Many questions remain about how a robot would be programmed to handle complex situations and the ethical dilemmas faced by a soldier trying to avoid civilian casualties. While the idea of a robot soldier murdering civilians is certainly terrifying, the deployment of automated soldiers would, at the very least, likely cut down on cases of sexual assault in wartime.

 
