Matt Peckham has a Time article about banning killer robots:
As if deploying drones (unmanned aerial vehicles) on the battlefield wasn't controversial enough, here's an even more disturbing question: should we allow weapon-wielding robots that can "think" for themselves to attack people?
Imagine a drone that didn't require a human controller pulling its strings from some secure remote location, a drone that could make decisions about where to go, who to surveil, or who to liquidate.
No one's deployed a robot like that yet, but international human rights advocacy group Human Rights Watch sees it as an issue we need to deal with before the genie's out of the bottle. The group is calling for a preemptive ban on all such devices "because of the danger they pose to civilians in armed conflict." It has even drafted a fifty-page report titled Losing Humanity: The Case Against Killer Robots, which lays out the case against autonomous weaponized machines. "There's nothing in artificial intelligence or robotics that could discriminate between a combatant and a civilian," argues Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield, in an accompanying HRW video. "It would be impossible to tell the difference between a little girl pointing an ice cream at a robot, or someone pointing a rifle at it."
And that's the chief concern: that a robot given the autonomy to choose whom to attack, without human input, could misjudge and injure or kill unlawful targets such as civilians. Autonomous, non-sentient robots would also, obviously, lack human compassion, as well as the ability to assess proportionality: gauging whether the risk of harm to civilians in a given situation outweighs the need to use force.
Autonomous weaponized robots also raise the thorny philosophical question of who would be accountable should such a robot injure or kill a civilian (or anyone else) unlawfully. Remember, autonomous doesn't mean conscious, so punishing the robot is out. Who then? The operational personnel who programmed or deployed it? The researchers who designed it? The military or the government in general?
In a statement accompanying the report, HRW warns that we’re probably just two or three decades away, maybe even less, from weaponized, autonomous robots:
Fully autonomous weapons do not yet exist, and major powers, including the United States, have not made a decision to deploy them. But high-tech militaries are developing or have already deployed precursors that illustrate the push toward greater autonomy for machines on the battlefield. The United States is a leader in this technological development. Several other countries, including China, Germany, Israel, South Korea, Russia, and the United Kingdom, have also been involved. Many experts predict that full autonomy for weapons could be achieved in twenty to thirty years, and some think even sooner.
We’re already seeing non-weaponized autonomous robots pop up in contemporary research, such as a swarm of insect-like robots that can fly in lockstep “like escapees from Space Invaders”, or an eerily human-like robot that can climb and leap from obstacles, unaided. Boston Dynamics is even developing a robot for the Pentagon that can autonomously hunt human beings across rough terrain.
What does HRW recommend we do? Establish international and domestic laws prohibiting the development, production, or use of such weapons; initiate reviews of existing technologies that could lead to fully autonomous weapons; and create a professional code of conduct among scientists to weigh the many ethical and legal issues as the technology rolls forward.
Maybe it's time we revisited author Isaac Asimov's Three Laws of Robotics, codified in a 1942 short story and (pun half-intended) foundational in getting people talking about the ethics of artificial intelligence.
Rico says yeah, like that'll work...