WatchMojo

VOICE OVER: Peter DeGiglio
Is AI REALLY going to replace us?? Join us... and find out!

In this video, Unveiled takes a closer look at artificial intelligence, and what it could mean for the future of humankind! Thanks to the emergence of mainstream applications like ChatGPT, the AI revolution is finally here! So, what happens next? And is it REALLY an existential threat to society?

What If AI Cloned and Replaced You?


 


For decades, artificial intelligence was an unknown quantity looming somewhere on the horizon. Now, it’s actually here, and we’re finally starting to see how it will really shape the future. For better or worse, AI has gotten everyone talking. But, underneath it all, there’s a growing sense of unease… and an emerging fear over how far this phenomenon could go.


 


This is Unveiled, and today we’re answering the extraordinary question: what if AI cloned and replaced you?


 


For a long time, when predicting the impact that AI might have on future generations, there were certain areas that commentators were more confident about than others - or perhaps simply more comfortable with. By loosely presenting AI as just a really, really advanced computer, it was expected that it would be good for, say, mathematics; that it would potentially solve equations that have had humans stumped for centuries; that it would have lots of positive applications in physics and cosmology, enabling researchers to map the universe more quickly and more accurately than ever before. Perhaps there would be a positive impact on medicine, too, with AI growing capable of one hundred percent correct diagnoses and prescribing always-beneficial care for patients.


 


However, alongside all of that, there have been some much darker predictions as well. And while, at first, such ideas may have simply formed the basis for a gripping dystopian storyline somewhere… now, almost a quarter of the way through the twenty-first century, there's growing anxiety that some of the worst-case scenarios could be about to happen in real life. In 2023, industry leaders and influential thinkers - including the likes of Elon Musk, Steve Wozniak and Ray Kurzweil - variously called for a slowing down of AI development, seemingly due to concerns over the direction it could be taking. For some, the concerns are more about how out of control AI might soon become.


 


The AI singularity is the tipping point for the tech, when the machines step away from their human creators. It’s when AI learns to truly think for itself, when it no longer relies on human input, to the point that it could theoretically turn against us. For now, that’s still speculation; the robot apocalypse isn’t actually here. But a quieter war might have already begun, centered on questions of identity, privacy, and trust.


 


With the mainstream introduction of ChatGPT in late 2022, the world came to realize just how convincing AI now is. It’s capable of producing reams of text - whole essays, even - and all without the clunky, telltale signs of past iterations. The gap has dramatically narrowed, and it’s now increasingly difficult to differentiate between an AI and a human voice… which is potentially a big problem for consumers, for teachers, for anyone counting on authenticity in what they read. AI image generators have, again, been around for some years, but have more recently emerged into the mainstream, increasingly sophisticated and at times producing visuals that are almost beyond photo-realistic. In the late 2010s, the term deepfake entered popular use, and the technique was further enhanced by digital audio tools that can perfectly mimic a person’s voice. Fast forward just a couple of years, and perhaps today even deepfake doesn’t do it justice; in terms of AI trickery, we’re in the Mariana Trench. Call them ultradeepfakes, where the line between what’s real and what’s not is, according to those warning against the tech, harder than ever to see.


 


So, where does all that leave the humble human being? Perhaps there’s some argument that, for as long as it plays out on machines - objects and products that are clearly and physically artificial - there’s still not a lot to be worried about. If you don’t like or trust what your screen presents to you, then you can always just switch it off. But why, then, have there been so many stark warnings issued in recent times? In April 2023, tech industry leaders - including representatives from Apple, Google, Meta and Microsoft - were cited as signatories of a letter calling for a six-month pause on AI development due to the risks it apparently poses.


 


In May 2023, news broke that the man dubbed the godfather of AI, Geoffrey Hinton, had quit Google, citing his fears about modern AI as the reason for his resignation. According to Hinton, we’re now closer than ever to it being more intelligent than us. He also warned of “bad actors” that might use AI for their own gain. Then, in late May 2023, a single, one-sentence statement was released to the global public, signed by a long, long list of scientists and industry leaders - including, again, Geoffrey Hinton, plus the likes of Sam Altman (the CEO of OpenAI, which produces ChatGPT) and Demis Hassabis (the CEO of Google DeepMind). The one-sentence warning read: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”. It became arguably the highest-profile official link forged between artificial intelligence and the end of the world.


 


So, is this actually the end of the human being? How could this particular apocalypse play out? One specific scenario that the current clamor around AI seems to be leading toward is the idea of it replacing us. Programs and apps like ChatGPT contain far more knowledge than any single person could ever claim to have; combine that with AI image generators, and how long before it knows every single face on Earth? Every single item of clothing or variation of fingerprint? Add in voice mimickers for good measure, and are you (yourself) even needed for video calls, work meetings, telephone exchanges?


 


The deep learning that AI is built upon means that it learns by experience… and, in just a few short years, it has racked up an incredible amount of that. By some measures, it might even be said to be more experienced than any human who ever lived. And, equally, as we advance in other fields, so too does AI, at a much faster rate… picking up the tidbits of new knowledge we gain individually or in groups, and potentially running with them as part of a wider network. For example, say an astronomer spots a new type of celestial object and is puzzled by what it is; the right AI might be able to instantly compare it to all other known objects in the universe, and identify it within seconds. Hopefully it would share that information, but what if it didn’t? And what if the object was actually a black hole? And what if the AI calculated that it would soon kill us all, but figured - on balance - that it was fine with that? It’s easy to see how existential paranoia could set in, and that’s just one fairly unlikely and timid example.


 


Consider recent advancements in gene editing; what happens if AI gains any kind of control over CRISPR-Cas9? Technology already plays a growing role in most militaries; so what happens when AI begins crunching the numbers for weapons and warfare? So impressive is it seemingly becoming that some have even branded AI a threat to democracy. And not only because, in some variations of the worst-case scenario, it assumes totalitarian control and takes the freedom of choice away from us… but also because, with AI pulling strings in the background, will future voters and campaigners be certain that the clips and soundbites and policies they see, and read about, and spread are ever actually real? Might some of them (or all of them) be AI fabrications? Could even world leaders find themselves replaced? And, if it could happen to them, why wouldn’t it happen to you? For a long time, the primary cause for concern had been that robots will take our jobs. Now, is it more the case that AI could take our personalities, our image, and our lives?


 


To finish, though, a reminder that it’s not all doom and gloom. Yes, warnings have been issued, and the debate surrounding AI has perhaps never been fiercer. But most of those who are highlighting the risks are also calling for legislation; for AI protocols to ensure that its good side can continue to flourish. Because, for all the fear and anxiety, there’s no question that AI has become an invaluable tool in contemporary civilization. It has helped humankind to better understand everything from the structure of the brain to the safety of an aircraft, from the best way to treat a cancer to the most efficient way to build a city. Working with AI, humanity can get better and better… but for how long will the choice to cooperate be ours alone to make?


 


We refer back to that one-sentence statement: “Mitigating the risk of extinction from AI should be a global priority”. Perhaps it is time to take stock, before it’s too late to do anything about it. Because that’s what would happen if AI cloned and replaced you.
