Just because robots can do jobs that humans otherwise do, does that mean they should? This question becomes especially difficult when we task a robot with applying lethal force. At an alarming rate, militaries around the globe are employing AI to optimize operations and building weapons systems (i.e., robots) to take on more and more roles and tasks previously undertaken by human warfighters. But we should first ask: What does it mean to give a robot the power and authority to kill? What moral, political, and strategic issues arise from the use of autonomous weaponry and AI in conflict? Do we want or need regulation of these systems under the laws of war?
- 2017 Festival