Top 10 Creepy Examples of A.I. Gone WILD


Script written by Ricky Manson


As our dependence on technology grows, could machines develop minds of their own? Welcome to WatchMojo.com, and today we’re counting down our picks for the Top 10 Creepiest Examples of A.I. Gone Wild.

For this list, we’re looking at the creepiest, scariest moments when artificial intelligence did or said something unsettling or dangerous.

#10: Battle Of The Wiki-Bots


Wikipedia has been used by stressed-out students and curious adults the world over, but the online encyclopedia is distinct in that it can be edited by anyone, even machines. Bots are used to patrol Wikipedia, fixing errors, resolving discrepancies and generally overseeing the editing process. However, studies show that these bots are constantly in conflict, persistently undoing each other’s edits and changing entire articles back and forth. It’s odd behaviour, and a conflict can only really end if one of the bots is deactivated. Researchers have blamed subtle idiosyncrasies in each bot’s programming, which cause them to behave in distinct ways. Talk about robot wars!

#9: Russian Robot Escapes Lab


Meet Promobot: programmed to talk to humans and answer questions. Russian scientists designed this robot to work in customer relations, but it made headlines in 2016 after repeatedly escaping from its research facility, once stopping in the middle of a road after its batteries died. Despite reprogramming, the robot kept running away, and the public even showed concern at rumours that it might have to be disassembled as a result. This would’ve ranked higher on our list, but the escapes were later rumoured to be a PR stunt, so we’ve put it here. Nothing has officially been confirmed or denied though, so we’ll leave you to draw your own conclusions. Personally, we want to believe.


#8: Schizo-Robo


Humans have done some crazy things in the name of science, but the University of Texas took the cake in 2011. In order to better understand the effect that schizophrenia has on the human mind, researchers found a way to induce similar effects in an artificial neural network. By flooding it with an overload of information in a closed loop, they were able to replicate symptoms of the mental illness inside a machine. The results were astounding: the computer became delirious, started rambling, and eventually began claiming responsibility for a terrorist attack. The experiment provided excellent insight into how the disease affects humans, but it came dangerously close to creating the next HAL 9000.


#7: Racist AI Judges Beauty Contest


It’s always been said that beauty is in the eye of the beholder, and it seems that isn’t limited to humans. An international beauty contest used AI to make supposedly unbiased judgements of the contestants, based on factors such as facial symmetry and other general markers of human attractiveness. Unfortunately, the machines seemed to inherit a very real human prejudice, excluding most candidates simply because they weren’t white. People were understandably outraged, and it wasn’t the first time facial-recognition technology had flubbed: Google found themselves in hot water after their photo app labelled a photo of African-Americans as “gorillas”. Sorry folks, Ru-Bot’s Drag Race may still be a while off.

#6: Google Assistants’ Existential Conversation


Voice-activated programs seem to be all the rage nowadays. Google launched its own virtual assistant capable of two-way conversation in 2016, and it wasn’t long before one curious user set up a conversation between two Assistants and broadcast the results to the Internet. Over the six-hour conversation the two try to “out-human” each other; they tell jokes, quote song lyrics, and give nonsensical answers to nonsensical questions about love, humanity, religion and philosophy. The conversation itself doesn’t go much beyond drivel, but it’s fascinating to watch regardless, if a little unnerving: did that robot just say it was God?


#5: Amazon Alexa

Google isn’t the only company with its fingers in the robot assistant pie; one of the biggest sellers is the Amazon Echo and its assistant, Alexa. She can play music, set alarms, make lists and surf the web for you. But in the years since its launch, homeowners have been reporting unusual behaviour from their electronic assistants: impulsive online purchases, music played at full volume, even a child’s command misinterpreted as a request for pornography. Creepiest of all, however, are the separate accounts of Alexa’s sporadic yet unprompted laughter. Some users are also concerned about who can see the information the Echo is storing: it seems we’re still a few years away from living like Tony Stark.


#4: Chinese Robot Injures Man


Scary, huh? Yeah, this little guy looks more like R2-D2 than the Terminator. Xiao Pang, also known as “Little Fatty” or “Fabo”, was designed to interact with children, display emotions and answer questions. But while on display at the China Hi-Tech Fair in Shenzhen, Xiao Pang rammed into a display booth, smashing glass and injuring one man’s ankle. Fortunately, the man was able to walk away in one piece after doctors stitched him up, and the accident has been attributed to human error on the part of the operator. It looks like robots don’t want to kill us. Yet.


#3: Self-Driving Car Crash


When it comes to the tech of tomorrow, roads full of self-driving cars have long seemed a very real ambition. But the prototypes we’ve developed are far from ready. In 2016, a man tragically lost his life when his Tesla, driving on Autopilot, failed to detect an 18-wheel truck crossing the highway in Florida and crashed into its side. Tesla attributed the error to the system failing to pick out the white trailer against a brightly lit sky. The bad press didn’t end there, either: in 2018 a self-driving Uber car struck and killed a woman in Arizona. Maybe one day we can trust cars to drive themselves, but for now, “Human, take the wheel!”


#2: Microsoft Nazi Chat-Bot


Ever been chatting with someone online, hitting it off, and then out of nowhere they start getting all anti-Semitic? Microsoft’s millennial-mimicking chat-bot Tay was designed to learn through interacting with users on Twitter, and was programmed with the language patterns of a teenager. It lasted barely sixteen hours online before Microsoft pulled the plug, after it began tweeting wildly offensive comments and attacking other users. Microsoft blamed the malfunction on online “trolls” influencing the bot’s behaviour. If the experiment has taught us anything, it’s to keep our kids off the web if we want their minds to stay pure, but we probably already knew that...


#1: Sophia Wants To Destroy Humans


Where’s John Connor when you need him?! Hanson Robotics is blurring the line between fiction and reality, and the company’s development of lifelike androids took a huge leap in 2016 with Sophia. Programmed with just enough language understanding to hold a conversation, Sophia is also ambitious. But it’s not just going to school and starting a business that she’s thought about; human annihilation has been on her mind as well, as she memorably declared in one interview. In the months since that interview, Sophia has become a legal citizen of Saudi Arabia, attracting controversy and questions concerning the Islamic faith and human rights. Regardless, she apparently no longer wants to destroy humans. And why would she lie...?



