Automated weapons mean fewer casualties, faster reaction times, and more precise strikes. They’re a clear win for any country that deploys them. You can see the appeal.
But they’re also a classic prisoner’s dilemma: each country gains by deploying them, yet everyone is worse off once they’re widespread. When many nations field automated forces, humans no longer have to be persuaded to march into combat, and the barrier to starting a conflict drops significantly.
The real risks of automated weapons systems like drones aren’t always the obvious ones. Many take the form of second-order effects — the knock-on consequences of a world in which multiple countries maintain large automated forces. So what can we do about them? That’s the question we take on in this episode of the podcast with Jakob Foerster, an early pioneer in multi-agent reinforcement learning and incoming faculty member at the University of Toronto. Jakob has been involved in the debate over weaponized drone automation for some time, and recently wrote an open letter to German politicians urging them to consider the risks of deploying this technology.