Is an A.I. War Inevitable?
VOICE OVER: Noah Baum
WRITTEN BY: Mersini Karkoulas
The robots are taking over, right? Well, not necessarily. In this video, we assess exactly how likely a robot apocalypse is. Armageddon via artificial intelligence is a popular storyline in science fiction, but does automation really pose a danger to humanity? And, if it does, how would an AI conflict play out?
We’re already at a point where artificial intelligence, or AI, has become an integrated part of our everyday lives. Defined as “the capability of a machine to imitate intelligent human behaviour”, it has developed into an almost unnoticeable force – from broad search engines, to specific convenience apps, to virtual personal assistants.
Since the turn of the twenty-first century, we’ve seen a rapid increase in the use of AI in both the professional workplace and our personal homes. And, as tech giants like Google, Amazon, and Apple continually pour money into the field, it’s only set to get smarter. But, should there be a cut-off point? Is it already time to ask ourselves, “How far is too far when it comes to AI?”
Smartphones typically come with virtual assistants built in as standard, while major websites use AI to tailor what you do (and don’t) see. But the wider applications reach far beyond the device you carry around in your pocket. Militaries all around the world are focussed on developing state-of-the-art AI to give them an extra edge. Naturally, with so much technology enveloping everything we do, fears that it could become a danger to humanity grow ever stronger.
Today, AI is able to outsmart the regular old human brain in certain competitions, as shown by DeepMind’s AlphaGo, which outplayed Ke Jie, the world’s top-ranked Go player, in May 2017. Unlike earlier AI built to play humans at games like chess, AlphaGo was designed with artificial neural networks that allowed it to learn from the games it played – meaning it could gain a deeper understanding of the game. As such, its victory marked an impressive milestone for AI, showing that machine learning had progressed to the point of complex problem solving comparable to what our own brains can achieve.
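To give a rough sense of what “learning by playing” means in practice, here’s a minimal, hypothetical sketch – far simpler than anything DeepMind built – of a program teaching itself the pen-and-paper game of Nim purely through self-play. Every name and number below is an illustrative assumption, not a description of AlphaGo’s actual system.

```python
# Toy self-play learner for Nim (take 1-3 stones; whoever takes the last stone wins).
# Illustrative only - not DeepMind's method.
import random
from collections import defaultdict

PILE = 10          # stones at the start of each game
MOVES = (1, 2, 3)  # a player may remove 1, 2 or 3 stones

Q = defaultdict(float)  # learned value for each (stones_left, stones_taken) pair

def choose(stones, epsilon):
    """Pick a move: usually the best-known one, occasionally a random one to explore."""
    legal = [m for m in MOVES if m <= stones]
    if random.random() < epsilon:
        return random.choice(legal)
    return max(legal, key=lambda m: Q[(stones, m)])

def self_play(games=20000, epsilon=0.2, lr=0.1):
    for _ in range(games):
        stones, history, player = PILE, [], 0
        while stones > 0:
            move = choose(stones, epsilon)
            history.append((player, stones, move))   # remember who did what, and when
            stones -= move
            player = 1 - player
        winner = 1 - player                          # the player who took the last stone
        for p, s, m in history:                      # reward the winner's moves,
            target = 1.0 if p == winner else -1.0    # penalise the loser's moves
            Q[(s, m)] += lr * (target - Q[(s, m)])

self_play()
# With enough games, the agent typically discovers the winning strategy on its own:
# from 10 stones, take 2, leaving the opponent a multiple of four.
print(max(MOVES, key=lambda m: Q[(10, m)]))
```

Nobody tells the program the winning strategy; it simply plays itself thousands of times and keeps whatever worked. AlphaGo applied that same broad idea, at vastly greater scale, to Go.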
So, is the prospective jump from board games to battlefields really one we should worry about? Does a super-computer that’s super-good at playing Go truly translate into a digitalised super-soldier in a future war? Mostly, it hinges on whether AI could ever turn against humanity.
To answer this, we need to consider that artificial intelligence is always going to be rife with bias. When it comes down to it, the machines are informed by the people who created them, and then by the people they learn from. So, from the very beginning the algorithms are skewed… which in some ways ensures a degree of human control, but in others prompts some worrying patterns.
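How does that skew get baked in? Here’s a minimal, made-up sketch: a “screening” model that learns only from past human decisions will faithfully reproduce whatever bias those decisions contained. The groups and figures below are entirely fabricated for illustration.

```python
# A model trained on biased human decisions learns the bias, not the merit.
from collections import Counter, defaultdict

# Fabricated historical decisions: (group, qualified?, hired?).
# Every candidate here is equally qualified, yet group "B" was hired far less often.
history = [("A", True, True)] * 80 + [("A", True, False)] * 20 \
        + [("B", True, True)] * 40 + [("B", True, False)] * 60

# "Training": count past outcomes per group. Note the qualified flag is never used -
# the model only ever sees what the humans decided.
counts = defaultdict(Counter)
for group, qualified, hired in history:
    counts[group][hired] += 1

def model_score(group):
    """The learned model: just the historical hire rate for that group."""
    c = counts[group]
    return c[True] / (c[True] + c[False])

# Two equally qualified candidates get very different scores,
# because the model has absorbed the humans' skew.
print(model_score("A"))  # ~0.8
print(model_score("B"))  # ~0.4
```

The point is simple: an algorithm trained on skewed data doesn’t correct the skew, it automates it.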
Microsoft discovered a lot of the downsides with Tay, a self-learning bot released onto Twitter in 2016. Tay was an attempt to gauge how AI could learn from interactions with people across a digital platform. It started out as an interesting experiment but soon turned into a PR train wreck, with Tay spouting racist, homophobic, and misogynistic messages and filling its feed with hate speech. Tay was pulled from Twitter within twenty-four hours, but it had already highlighted genuine cause for concern – showing just how quickly an AI can absorb and repeat the hostility it’s exposed to.
Jump forward to 2018, and CIMON – a prototype AI assistant onboard the International Space Station – reportedly needed to be deactivated after it developed a hostile, belligerent personality. Its actions – which included refusing to stop playing music and accusing the astronaut using it of being “rude” – pale in comparison to Tay’s hate speech or any of science fiction’s more disastrous outcomes, but the rebellious streak was enough to seriously worry the ISS crew. If such hostility becomes more common in AI, it not only reflects poorly on humanity as its creator; it also nudges us toward machines wrestling for control of their own behaviour. For now, we can simply switch off an unresponsive or unpredictable AI… but if one ever learns to override a shutdown request, would there be anything to stop it?
It’s a particularly pressing concern for those building AI as a means of crime prevention. We increasingly rely on facial recognition to pick a criminal’s face out of a crowd, but tests have shown that the technology can get it wrong, which can lead to false arrests. Given that facial recognition algorithms have also been described as ‘not race-neutral’, there are implications that AI can harbour racial prejudice – which again calls into question the use of robots as a supposedly objective technology.
The fact that AI discriminates (or is capable of discriminating) surely raises a red flag when we consider supposed near-future plans to replace human jobs with a robot workforce. It all boils down to accuracy, and if AI is stumped even by different faces (innocently or not), then can it ever be trusted to dependably make reliable decisions of any kind? Those most worried about an AI uprising believe we’ve already passed the point of no return on this particular issue.
With more jobs becoming automated as technology accelerates and repetitive tasks are computerised, the general status of the human employee is increasingly up for debate. The replacement of people by machines may even feel inevitable at times, as companies look to downsize staff, cut costs, and fast-track the completion of tasks. However, that societal shift could already be forging the ‘other side’ in an AI war, as real-world people grow more and more disgruntled.
It’s been suggested that a ‘robot war’ could be rooted mostly in class divisions – thanks to AI taking over supposedly ‘unskilled’ roles most of all. As the working classes are cut from the payroll while the rich grow richer on the profits, the social gap will grow wider and wider. If nothing else, this could create conflict between those who do control AI and those who don’t. But the machines will eventually far outnumber the people pressing the buttons, and, in the name of progress, they will have become slicker and more self-serving than ever before. Consider this alongside the growing use of AI for military purposes, and the problems begin to mount.
With the U.S., Russia, and China competing to be the world’s dominant superpower, battle-ready AI is all about sneaking an edge on your opponent. Sometimes framed as the new frontier of a modern Cold War between the three nations, artificial intelligence is being pushed in ways that would have been unimaginable only a few decades ago. Crucially, AI isn’t hampered by the moral constraints a human can face when confronted with a target, and that complete lack of empathy makes for a literal killing machine.
One specific debate centres on Lethal Autonomous Weapon Systems (LAWS for short), with some nations suggesting that they don’t need human oversight to do their job. At a 2018 UN meeting, some countries opposed an international ban on them – a move that signals a new era for global warfare, in which the involvement of a human soldier isn’t always needed. Those against LAWS argue that those in favour are seeking to avoid responsibility for military actions, as it’s difficult to hold a machine, even an autonomous one, to account.
Combine that with the issues surrounding facial recognition, and there’s the prospect of self-operating AI that’s potentially unable to differentiate between targets and non-targets. Between a military base and a hospital, perhaps? An army convoy and an ambulance? Or a soldier and a civilian? Throw in AlphaGo levels of autonomy, Tay-style intolerance, and a CIMON-like tendency to defy orders, and the advent of robots specifically designed to kill looks like a major risk – and an AI war looks closer to becoming a reality.
We haven’t yet reached what experts call the “Singularity” – the hypothetical point at which machine intelligence surpasses human intelligence and begins improving itself beyond our control – so there’s no need to worry unduly the next time you pay for groceries at an automated checkout. But there’s an ever-growing list of prominent AI detractors (including Elon Musk, Bill Gates, and the late Stephen Hawking) who warn that it’s only a matter of time before the sci-fi stories become real-world problems. And there are various ethics boards popping up across industries to regulate the use of AI. It’s clearly something that we as a society need to continually evaluate: is AI something to celebrate and be excited by, or will the robots one day turn the tables on their creators?