Northcoders discuss: Should AI machines ever have rights?
In light of the robot Sophia's award of Saudi Arabian citizenship, five Northcoders discuss the question: Should AI machines ever have rights?
Introduction with Sam Shorter - Social Media Manager
If pop culture has taught us anything over the last century, it’s that robots are bad news.
Ex Machina, Blade Runner, Tron? All examples of AI being a bit of a nuisance. But with the constant, rapid technological advancements being made every day, these films are starting to seem a lot less far-fetched.
A robot named Sophia was created by Hanson Robotics in 2015. Sophia uses AI, visual data processing, facial recognition and machine learning, and she’s designed to get smarter over time. Since then, she has taken part in many high-profile interviews and has discussed how robots could one day become more human-like and be more ethical than people.
In an interview with the Khaleej Times, Sophia said: “It will take a long time for robots to develop complex emotions and possibly robots can be built without the more problematic emotions, like rage, jealousy, hatred and so on.”
In October 2017, Sophia was awarded Saudi Arabian citizenship, making her the world’s first robot to be granted citizenship of a country. Many have questioned how a robot can have more rights than women in Saudi Arabia.
This begs the question: should AI ever have rights?
Sam, what do you think?
Sam Caine - Lead Tutor
What a question, and one that leads to many others.
Firstly, what are we defining as AI? If we are talking along the lines of Amazon's Alexa or Google Assistant in their current iterations, no. Although they rely on complex algorithms, no one would argue that they have any form of sentience. They do not 'think' in the abstract sense; they process. They do not have feelings, or a sense of self. Although they can cite the dictionary definitions on request, they do not know fear or pain.
We also have to consider what we define as rights. If an AI were to be given the questionable gift of the full range of human emotions, would it have the right not to be needlessly exposed to the negative ones? In that case, I'd argue yes, in the same way that I'd argue against harming other people. But as we struggle to understand the human mind, it seems unlikely that we would a) be able to replicate its experiences in an AI and b) be able to quantify the depth of those experiences.
In order to truly grapple with this question, we first have to form uniform ethical tenets about what we consider to be deserving of rights, and what we consider these rights to be. If we cannot agree on how we should treat a living, material creature such as a fox, or even another person, we're going to be hard-pressed to properly deal with the same questions applied to something abstract, like a sentient vending machine.
It is in our interest to develop answers to these questions with humanity in mind, as when we find ourselves bowing to sentient vending machine overlords, we should hope that they too have considered our rights, and reached a favourable conclusion.
Josh Gray - Talent & Partnership Coordinator
Before I outline my opinion, I feel it is important to understand exactly what we mean by the term ‘Artificial Intelligence’. The Oxford Dictionary defines AI as ‘The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.’
It’s my opinion that anything that can demonstrate sentience or self-awareness, or can be held accountable for its own actions, deserves some degree of legally recognised rights. The question I ask is whether these traits can be demonstrated by a machine that fits our definition of AI.
Using the examples given in the first paragraph, I would have to say no: what we currently understand as AI machines should not have rights. Although very clever, a machine that can do these things is merely performing a series of predefined tasks without free will or emotions.
The answer gets more complicated when we consider the ‘ever’ part of the initial question. History has shown us that the definition of AI changes depending on the problem we are currently trying to solve. So in the distant future, will AI machines have emotions and self-awareness? I’m confident that they will not.
Let me back up this admittedly bold statement. As the term AI becomes more commonplace and widely used, our definition of it will become more rigid. You would be pretty upset if your AI degree became irrelevant after five years. I believe that when machines capable of sentience and self-awareness are created, they will be given a new, distinct name to differentiate them from what came before. It’s for this reason that I feel what we understand as AI machines should not have rights, and any machine that should have rights should be called something else.
Harriet Ryder - Head of Community
Firstly, I can completely imagine a future where AIs do have rights. I mean, right now we’re in a situation where corporations have certain rights. Corporations have ‘corporate personhood’ which means they are entitled to certain legal protection and legal action in the same way a person is. Have you ever thought it weird that you can sue a corporation, or that corporations have the right to enter into a contract, and to “speak” freely? Probably not, because we’ve come to accept these things as natural. Yet, corporations aren’t people. They don’t think or feel.
Imagine a situation where a small group of humans create an AI that is in charge of diagnosing tumours (something AIs have already been trained to do and are very good at). Those humans wouldn’t want to be personally responsible for the AI getting something wrong so they would likely be very happy for their AI to be recognised as a legal entity so that it can be sued and held responsible instead of them. This would be even more imperative if it were a large group of people working on an AI. Who would be individually responsible? It would be hard to know, so better just to make the AI a legal entity itself. This is what happened with corporations so I can see it happening with AIs too.
But should an AI have rights? I can see a benefit in AIs being accountable legal entities. But there are clearly more ethical issues to unpack.
Ruth Ng - Head of Growth
“Should AI ever have rights?” Let’s frame the question. Does ‘ever’ here mean ‘in the future’, or ‘in any case whatsoever’?
We could interpret it as “Can AI machines ever achieve sentience (or some other condition) sufficient for them to be assigned rights?”. Another interpretation: “Should sentient (or whichever condition) AI be assigned rights?”
Let’s deal with the latter first. Here, we assume truly sentient AI, and ask: should they have rights? I subscribe to the view that, broadly, we should minimise pain and discomfort. So for me, following on from Sam, it’s straightforward that truly sentient AI should be afforded rights.
What about the first question? Can AI achieve the sentience required for rights assignment? Sam expressed doubt that we could replicate human consciousness. Yet on many accounts of consciousness, machines may already qualify as conscious (read about Giulio Tononi's fascinating Integrated Information Theory!). Josh claims that emotional, self-aware AI is not imminent. Yet I think it’s possible it’s already been created. In 2016, researchers tried to teach a robot to feel pain. But it seems epistemologically impossible to know whether they were successful.
If we cannot truly know whether pain and discomfort can be felt by AI machines, we face an ethical quandary. The only compassionate solution is to admit we can’t know and, to avoid causing potential pain, assign them rights as a contingency.
Summing Up: Sam Shorter - Northcoders Social Media Manager
A big point that crops up throughout this discussion is how we define AI and what level of AI should have rights. Defining the point at which AI is truly conscious is incredibly knotty and disputed, but nevertheless, if truly sentient AI were to be created, then surely rights should be assigned. I would also question the ethics of creating a machine that could be subject to suffering and feel pain in the first place.
Or become a second-class citizen? In an interview with The Verge, AI ethics researcher Joanna Bryson said:
“It’s about having a supposed equal you can turn on and off. How does it affect people if they think you can have a citizen that you can buy?”
Although I personally think that we are a long way off truly sentient AI, it may never be clear whether it is truly possible. If it is possible, but we don't assign rights to AI, we might inadvertently be causing a huge amount of pain and distress.
And in a world where there is probably enough pain and distress, maybe it’s not the time to start creating more ‘things’ that we can be mean to.
Sam Shorter