Will Terrifying AI Destroy Humankind?

VOICE OVER: Peter DeGiglio
Do we REALLY need to worry about AI? Or is it worth the risk? Join us... and find out!

In this video, Unveiled takes a closer look at the (potentially) TERRIFYING future of artificial intelligence!

Will AI REALLY Turn Against Us?

In 1997, IBM’s Deep Blue defeated the reigning world champion, Garry Kasparov, at chess. In 2016, DeepMind’s AlphaGo beat Lee Sedol, one of the world’s top Go players. Soon, though, AI will be taking on much more than board games. So, what do you predict for tomorrow’s AI?

Humans have long considered ourselves the most intelligent lifeforms on Earth. But for how much longer? Artificial intelligence is becoming more and more commonplace in modern society. And it brings with it a great many benefits, along with just as many risks. The most harrowing of these is the chance that humans will lose control of AI, and that it will destroy humanity. So, are these fears exaggerated, or are they grounded firmly in reality?

This is Unveiled, and today we’re answering the extraordinary question: will AI turn against us?

It was in ancient times that we first created myths surrounding artificial intelligence. Greek legends were among the earliest, such as the story of Talos, a bronze automaton gifted with intelligence and tasked with defending Crete. For most of our history, though, machines that could think have been far from reality. It wasn’t until around the middle of the 20th century that serious research into AI began. Alan Turing was one of the field’s earliest pioneers, publishing ‘Computing Machinery and Intelligence’ in 1950. In it, he proposed what became known as the Turing test: a way to determine whether a machine can convincingly imitate human intelligence.

It’s been over 70 years since that paper, and a variety of AI systems are now commonly used in daily life. Language translation, facial recognition, online advertising, search engines - the list is ever-growing. Slowly, it’s starting to feel like there isn’t a field we haven’t applied it to. And, clearly, we have quickly found a lot of benefits to AI. Machines can easily handle huge quantities of data, they’re far less prone to human error, and, of course, they can work 24/7. Generally, it seems the technology is improving society and should continue to do so. AI is a route to ultimate efficiency. However, many believe we should be extremely cautious.

The renowned physicist Professor Stephen Hawking once said that “the development of full artificial intelligence could spell the end of the human race”. He followed this up by saying that AI would far exceed humans, since our advancements are limited by biological evolution. The late scientist thought we should be extremely careful. So, how justified were his fears? Even in its current state, people already worry that AI can be used maliciously. We’re currently in a worldwide race for lethal autonomous weapons - or LAWS. In 2024, the US Department of Defense pledged $1 billion for the Replicator program, which aims to field thousands of autonomous war drones. Soon, it’s expected that drones will be able to use widespread facial recognition to target and attack specific individuals. Warfare, it turns out, is one of the simplest fields to apply AI to. As a result, the United Nations has been debating a worldwide ban on autonomous weapons. Unfortunately, quite a few countries oppose such a ban.

So governments worldwide are weaponizing artificial intelligence - but can we be sure those AI weapons will stay loyal to their creators? While the US, for example, is against banning them, it still wants to ensure that humans remain in ultimate control. The concept of ‘emergent behavior’, however, implies we might not be able to control them forever. In the case of LAWS, that’s when (at some near-future time) they’re connected in such a way that they can easily communicate with each other, independent of human input. Some military minds have pitched the idea of teams of hundreds of LAWS, connected in a weaponized hivemind. No humans necessary. Communication between them would expand, but where that leads is almost impossible to predict. At the least, we might expect whole new tactics arriving via emergent behavior, turning weaponized AI into a force (of its own) to be reckoned with. The Pentagon is reportedly making a big push to develop what some have labeled ‘slaughterbot’ swarms, all of which means that, like it or not, they are likely to become a reality. Again, we have no idea what a heavily armed and highly connected swarm of AI will do. Hopefully, failsafes will be put in place to prevent serious issues; however, some worry that any attempted block or limiter will eventually be overridden.

Meanwhile, the US Air Force is also working on something called Project VENOM. This aims to develop F-16 fighters capable of flying themselves. Around $50 million has been invested so far and, on the bright side, AI jets would certainly reduce the need to risk human pilots. If we can rely on the AI’s loyalty, and if we have the proper failsafes in place, then they should only ever be a danger to enemy targets. However, once again, this is entirely new ground. Can VENOM really be realized exactly as its developers intend? Won’t there always be a risk of it turning against its maker? Or of it misinterpreting (or refusing) mission orders, and going on a rampage? These are the sorts of huge questions that dog any plans to push forward. More generally, since we haven’t yet completely solved AI even in a non-military context, many believe it’s far too soon to try weaponizing it. One positive note is that most major nations agree AI should never be given access to nuclear weapons. Although, alarmingly, not everyone is quite in agreement here, either.

But, of course, AI isn’t only about lethal autonomous weapons. Yes, they could prove our doom, but what about everything else under the AI umbrella? Currently, AI still isn’t truly sentient. We’ve likely all used some type of virtual assistant, such as Apple’s Siri, but these are not intelligent enough to overthrow humanity - no matter how spooky they can sometimes seem. Some do predict truly sentient AI in the near future, though. So, should we be worried about that, even in a non-military setting?

In 2017, news broke that Facebook had developed two chatbots tasked with conversing with each other over a fictional trade negotiation. Machine learning was used to create them, and the ‘chat’ was monitored. Scarily, the two bots quickly deviated from any predicted script. They developed their own language, and started conversing in it instead. It was at this point that Facebook shut the study down. Other AIs have done similar things, with Google’s translation AI also having created artificial languages. Broadly, it’s thought that what looks like nonsense to us acts as an intermediary language - an unreadable link - between the machines.

While it’s true that neither of these examples is particularly dangerous, given their lack of power… both cases do highlight how an AI world could head in wholly new and unknown directions. The Facebook and Google stories might easily be explained away as glitches for now, but what happens when they’re more than one-off peculiarities? Are these small moments a sign of more significant things to come?

Generative AI, the most readily available form at present, can produce images, videos, and text, and can even solve equations. And the tech can already do a lot of damage - for example, by replacing certain jobs. In China, reports claim that about 70% of video game illustrator jobs have been cut, partly due to the growing reliance on AI. Gamers have criticized the AI-made products, saying they lack human creativity, but there’s little sign of the trend stopping. Perhaps the most infamous contemporary issue of all, however, is the ongoing spread of deepfakes. These use someone’s likeness to create a convincing copy of that person, enabling false images and videos of them, made without their consent. Such AI was one trigger for the Hollywood strikes of 2023, but these tools could ultimately impact far more than just those in the media.

How far can these new realities take us? On the one hand, since current generative AIs are usually trained on human-made data, it’s at least thought (and hoped) that they can’t yet learn to do things that we can’t. Generative results draw on what’s been done before. In other words, human knowledge will likely be the limiting factor. And so, the general consensus is that AI (in this state) can’t directly turn against us. Replace us, maybe. Dislike us, possibly. But break away from us? Probably not.

There is a darker extension, though. True AI, also known as Artificial Superintelligence, is a different ball game. Right now, it remains in the realm of science fiction. But, were fiction ever to become fact, it’s proposed that this more advanced AI would learn so well that it would exceed human understanding. Humans would no longer be the most intelligent lifeforms on Earth, and we’d all be painfully aware of our demise. Today, many may consider this speculative territory. But others say it’s vitally important for us to speculate, in order to head the danger off. According to Stephen Hawking, for one, there are no physical laws preventing AI from one day (perhaps one day soon) operating better than the human brain. At which point, it could turn against us just as easily as it could do any number of other things.

Will supreme AI crave power? Will it recognize life on Earth as valuable? Or see it as a threat? Or merely as an annoyance? Will it one day scrub back through all of today’s media - the books and films and YouTube videos - with admiration, relish, or disdain? For all the concerns, generative AI should remain beneath us… but is generative only the first generation? And will whatever comes next care two hoots about those of us who came before?
