We’ve all seen the Terminator movies: the good ones, the bad ones and the one that had us questioning our spelling (I’m looking at you, Terminator: Genisys). We’ve also all seen Wall-E, Robocop and the classic Bicentennial Man. The key factor these movies share is not that they all feature robots, but rather Artificial Intelligence. The difference between them, however, is that while the latter three feature sweet and adoring robots with hopes and dreams much like the average human being’s, the Terminator movies, not so much. Arnold Schwarzenegger and co. are always desperately trying to save the world from an Artificial Intelligence system that has decided it can do away with humans. It’s basically The Matrix, except with fewer mind games and less supernatural flying.
The reason I’m talking about A.I. is that, yet again, we’re at a crossroads about what to do with the subject. Do we proceed with caution, hoping we won’t create something that may one day look at us the way we look at ants, or, as some advocates believe is the best choice, just stop meddling with A.I. altogether?
Now, since we are humans, it’s safe to say that the latter option will never be adopted as official policy, and so onwards we trudge, carefully picking our way through the bogs and swamps that are the world of A.I., knowing that we might lose our footing and create something that will be the end of us. And while many may scoff at the thought of us creating an Artificial Intelligence to rival Skynet, we’re actually already flashing our torch down the trail that leads to autonomous weapons.
It’s to this end that leaders, experts and researchers in the field of A.I., along with other key figures, recently signed an open letter warning of a “military artificial intelligence arms race” and calling for a ban on “offensive autonomous weapons”. It was signed by the likes of Tesla’s Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind chief executive Demis Hassabis and professor Stephen Hawking, along with around 1,000 other signatories. The letter was subsequently presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina.
The letter states:
“AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”
Should one military power (and the Earth is chock full of those) start developing systems capable of selecting targets and operating autonomously, without human interaction, it would create an arms race similar to, and possibly far more dangerous than, the nuclear arms race. Unlike nuclear arms, however, the materials required are not that hard to acquire or manufacture, yet the task of monitoring them will be far more difficult. We’ve already seen how inquiries into projects like the Predator drone program are deflected, and that’s a weapon controlled by humans.
Musk and Hawking have warned that AI is “our biggest existential threat” and that the development of full AI could “spell the end of the human race”. But others, including Wozniak, have recently changed their minds on AI, with the Apple co-founder saying that robots would be good for humans, making them like the “family pet and taken care of all the time”.
Personally, I’m on the side of the authors of the open letter, since, as they put it, “the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow”. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting.
What do you think about the possibility of autonomous weapons? Do you think they’re a distant future or just around the corner? Let us know your thoughts in the comments below!