Does AI Have Human Rights? | Unveiled

VOICE OVER: Noah Baum WRITTEN BY: Dylan Musselman
With artificial intelligence expanding into more of our lives, some say the "rise of the robots" is here! In this video, Unveiled explores the increasingly problematic question: should AI be granted human rights? If machines really can think and feel as humans do, then isn't it cruel to deny them their freedom? What do you think... should robots have rights?

Does AI Have Human Rights?


Human rights are designed to give people liberty, the right to work and to an education, and freedom from slavery and torture. Unfortunately, as they stand, these rights aren't always universal, nor always upheld. But could we soon see their basic definition changed completely, for a future world?

This is Unveiled, and today we're answering the extraordinary question: does AI have human rights?

Human beings aren’t the only species to have rights. Our capacity for empathy allows us to connect with other living beings as well, and as such animal rights extend to other creatures, aiming to ensure that animals have possession of their own existence; that they can live their lives safely and without suffering. There are still apparent “levels” at play though, with some other living creatures, such as most insects, not given rights to the same degree. For example, there aren’t any laws against the killing of ants or other pests, but there are laws against hunting certain other animals. And, while animal rights aren’t universally agreed upon around the world, every state in the U.S. has at least some animal cruelty laws to prohibit killing or hurting animals unnecessarily.

So, where, if anywhere, should robots enter the field? As technology advances further and further, aware and sentient machines are expected to be a reality in the near future… but will (or should) they be afforded the same rights? Or, should they only be treated as objects to be possessed like property?

We consider a being to be alive if it performs certain actions: generally, if it's able to grow, reproduce, and/or respond to its environment. Based on this, however, some AI can already be considered "alive". Although they do not physically grow, they learn new information and incorporate it into their programming, thereby growing their minds. Some can respond to outside stimuli by doing things like moving of their own accord and participating in games unassisted. And although AI doesn't and can't physically reproduce, the robotics professor Jonathan Rossiter (speaking to OpenMind in 2017) is just one of many who say that robots will one day be able to. Similarly, if AI were ever advanced enough to create and program other artificial beings on its own (another expected development in the future), then in a sense that could be seen as a form of reproduction, too.

Obviously, though, the lines between living and non-living beings grow more than a little blurry when we throw sentient AI into the mix. Generally, the idea that AI doesn't deserve rights is the most popular side of the argument, based loosely on the fact that AIs don't have their own independent, living human body. They aren't composed of organic matter, just artificial materials, and therefore, so the idea goes, they don't deserve the same moral consideration.

However, it could be argued that this is a flawed view. For one, professor Hugh McLachlan (writing for The Conversation in 2019) points out that we already grant rights to people who aren't alive - that is, people who have died - with things like wills, organ donation and property matters carried out according to their wishes. Future generations of human beings, too, possess no physical body yet, but we still make decisions in the present with their future rights in mind. In both cases, we're giving rights to people who aren't alive in the present. Yet many won't consider rights for a robot which is alive in the present.

And then there are various examples of definitely non-human entities which have received human-like rights, statuses or privileges in the past. In New Zealand, for example, the 2014 Te Urewera Act effectively gave a forest the same legal status as a person. And then, more broadly speaking, even corporations can be treated on a par with people, with US firms operating with some of the same constitutional rights - including freedom of speech and the right to assemble peacefully - as though they were themselves a citizen. In fact, it’s around loopholes like these that some campaigns for AI rights are built. For example, in 2016 the legal expert Shawn Bayern showed that any computer could technically qualify for rights comparable to a citizen if it’s put in charge of a corporation - meaning it could legally own property and even sue whoever it wanted to. Given how swiftly we appear to be moving toward an automated workplace, as farfetched as these arrangements sound, they might not be science fiction for much longer.

Only allowing rights to living, organic bodies could eventually become a major inconvenience to us in the future, as well. With artificial limbs, implants and body modifications becoming more and more advanced, some argue that it's now easier than ever to foresee a time when the majority of humans have parts of their body cybernetically enhanced... and, even farther down the line, a time when humans might be made completely out of inanimate materials. Today, we're still very far away from the problems which could arise if/when humans legitimately "become cyborgs" - but it represents one strand of AI development where the goalposts for human rights are already moving, and will keep moving.

To go one step even further, “mind uploading” is the idea of transferring a whole human consciousness into a robotic body or computer, where it could then theoretically live forever. This particular strand of sci-fi-style future tech already has plenty of backers, too, with some even suggesting that it could be a reality within just a few decades’ time. Considering that we’ve also already seen initiatives like the “Blue Brain Project” report digitally recreating part of a rat’s brain - that is, a once organic brain built inside of a computer instead - there have been some major steps made in this particular direction. So, say we continue down that path, what then for AI rights? The potential reality of a human brain in an AI body would naturally bring with it a brand new debate on exactly what constitutes a “living thing”… but perhaps it would also cast doubt on what truly constitutes an AI, too.

Scale down to more "traditional" robots, though - man-made machines with mechanised brains - and we arrive back at the types of AI we're getting used to seeing in the world today. At one end of the scale we have things like automated checkouts at the grocery store; at the other, examples like Sophia - perhaps the most famous AI on the planet.

For some, even machines as impressive as Sophia can never truly be considered sentient. The American philosopher John Searle's "Chinese Room" thought experiment appears to outline why robots and people can never be the same. In it, an English speaker who knows no Chinese is put in a room and made to respond to sheets of Chinese writing by following a set of rules and instructions written in English. The replies that come out read as fluent Chinese, but the person producing them doesn't understand a word of the language; they've merely followed instructions. For Searle, the same can be said of AI: it can never be truly sentient, because it is only ever following a series of instructions to appear as though it is. In this way, it might be argued that all AI - no matter how advanced - has been (or will have been) programmed by humans at some point in time; that humans as "creator" have rights, and that the machine as "creation" doesn't. More recently, though, there has been growing pushback against Searle's way of thinking.
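To make Searle's point a little more concrete, here is a minimal, purely illustrative sketch of a "room" that answers questions by rule lookup alone. The rule table and phrases are invented for this example, and nothing here is meant to model how any real AI system works; it simply shows how output can look fluent while the program "understands" nothing.

```python
# A tiny, purely illustrative "Chinese Room": replies come from a rule book,
# not from any understanding of what the symbols mean.
# The rule table and phrases below are invented for illustration only.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "你会思考吗？": "当然，我一直在思考。",  # "Can you think?" -> "Of course, I'm always thinking."
}

def room_reply(message: str) -> str:
    """Follow the rule book; the 'room' never understands the input."""
    return RULE_BOOK.get(message, "请再说一遍。")  # default: "Please say that again."

if __name__ == "__main__":
    for question in ["你好吗？", "你会思考吗？"]:
        print(question, "->", room_reply(question))
```

The replies read as fluent Chinese, yet the program is doing nothing more than pattern matching - which is exactly the gap Searle says separates appearing sentient from being sentient.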

Sophia has already proven both a trailblazer and a source of controversy. A world first in Saudi Arabia saw the robot granted citizenship in 2017, but backlash quickly followed, as Sophia wound up with more rights than some women have in the country. Similarly, with more people now on board with the idea that AI could have corporation-style rights, it's again possible that in certain countries machines could enjoy more rights than some humans do. We're not there yet, but the pendulum could be starting to swing so that - in some cases - a person is effectively treated as less of a person than some robots are. And from there, the future is anyone's guess!

An intelligent enough AI could not only outperform human beings, but also feasibly live forever - so the granting of human rights to AI could ultimately accelerate what science fiction usually dubs the "rise of the robots". But who's to say that a dystopic, automated apocalypse really will (or needs to) happen...? For many, the true debate lies in our ideas on "consciousness", and whether or not it can exist in a machine. A 2017 paper published in the journal Science, titled "What is consciousness, and could machines have it?", argued that because consciousness is still so poorly defined even in humans, it will be impossible to replicate it in machines until we better understand it in ourselves. Until that time - when humans are in some way able to identify, quantify and genuinely recreate consciousness - perhaps AI rights aren't even something we should concern ourselves with? And, in a future time when we can truly comprehend the nature of consciousness, perhaps it'll be a much more straightforward issue!

For now, though, the debate rages on. And for as long as the Sophias of this world are continually improved and made to appear more life-like, and for as long as the possibility of genuinely self-improving, self-replicating and self-supporting AI lies in front of us, the campaign for AI rights on a par with human rights will continue. For some, it's a frightening possibility which could go so far as to destroy human dignity; for others, it's a natural progression in a fast-moving, technological world. Right now, AI doesn't have human rights... but whichever way you look at it, it's an ongoing philosophical problem with huge practical implications.