
Today, intelligent machines are everywhere. They’re driving our cars, giving us personalised shopping recommendations and monitoring fraud. They have huge potential to improve our lives (or ruin civilisation, if dystopian sci-fi movies are anything to go by). But while we can make machines smart – can we make them moral?


Artificial intelligence could replace us altogether. At least, that’s what Stephen Hawking fears. In an interview with Wired, Hawking said: “If people design computer viruses, someone will design AI that replicates itself. This will be a new form of life that will outperform humans.” It’s a scary prospect – one echoed by the likes of Elon Musk, who has described AI as “the biggest risk we face as a civilisation”.

Fearmongering aside, as machines become more capable and more commonplace, serious questions about the role of robotics in human society will no doubt be raised – especially once these robots are charged with making ethical decisions.

Take, for example, the following scenarios:

Scenario 1

You’re in a car driving along a residential road. The year is 2040 and your car is driving itself. Two children are playing in a field nearby. They suddenly run on to the road. Your car could swerve to avoid the children, but it would have to collide with an adjacent car that has a family of four inside. Who should your car save?

Scenario 2

Similarly, imagine this: you’re developing automated military weaponry. It’s a given that civilians may be put at risk when taking out an enemy. But how do you quantify that risk? And what level of civilian harm is acceptable – would you kill pregnant women and children?

Scenario 3

The classic case: the trolley problem, a thought experiment devised by the philosopher Philippa Foot in 1967. A trolley (or tram, or train) is travelling along a track and is out of control. The driver can either let it continue on its course and kill the five people standing on the track, or flick a switch to divert it on to another track, where it will kill one person instead. The two future-facing scenarios above echo this famous dilemma.

There is no definitive answer. The idea is to get people to justify, through reason and argument, how they reach a decision. And while these types of dilemma are highly improbable in the real world, they do help us think about the ethics of AI. MIT has even set up an online experiment, the Moral Machine, which fleshes out the ‘who to save’ scenario and uses the results as a form of ‘moral crowdsourcing’.

 

21% of people are likely to buy a driverless car whose morals are regulated

vs

59% of people are likely to buy a car programmed to always save the driver’s life

(Source: Science via The Washington Post)

Creating AI in our image – not always a good idea

“Human nature is not black and white but black and grey.”

—Graham Greene

 

Perhaps we don’t need to worry so much about the morality of AI. After all, we’ll be creating these machines, so we can shape their nature. If we train them to think like humans and adopt our values, they’ll instinctively blend into our society. However, this approach is problematic in itself.

Firstly, a robot’s nature will always be different to ours. Even if we were to reach the sci-fi favourite of AI sentience, a robot doesn’t eat, age or breed, so it would never truly think and behave like a human. Secondly, humans are innately flawed beings, and those flaws can be reflected in the machines we create.

In 2009 researchers at the École Polytechnique Fédérale de Lausanne in Switzerland found that robots could evolve to lie. In their experiment, they fitted a number of small robots with artificial neural networks and sensors. The robots were programmed to find ‘food’ (a light-coloured ring) and earned points for staying close to it; they lost points for straying near the ‘poison’ (a dark-coloured ring). Each robot could also flash a blue light that the other robots could detect.

The first few generations of robots quickly evolved to find the food while flashing their blue lights at random, which allowed the other robots to follow the flashes and find the food more quickly. By the 50th generation, the robots had learned not to flash their lights near the food, so as not to give its location away to the others. Some would even flash their lights near the poison to trick the other robots.
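
The selection pressure at work here is simple to sketch in code. The toy simulation below is not the EPFL team’s model – every parameter and rule is invented – but it illustrates the same dynamic: if flashing a light attracts competitors and dilutes the reward, ‘honest’ signalling tends to get bred out over the generations.

```python
import random

# Toy sketch of the selection pressure described above (NOT the EPFL experiment's
# code): each robot carries a single "gene" -- the probability of flashing its
# light when it is at the food. Flashing attracts rivals, which dilutes the
# flasher's reward, so selection should push the flashing probability down.

POP_SIZE = 100
GENERATIONS = 50
FOOD_REWARD = 10
FOOD_SLOTS = 8          # only a few robots fit around the food ring
CROWDING_PENALTY = 1    # each rival drawn in by a flash costs the flasher points

def fitness(flash_prob):
    score = FOOD_REWARD
    if random.random() < flash_prob:
        # A flash is visible to everyone, so rivals converge on the food.
        score -= CROWDING_PENALTY * FOOD_SLOTS
    return score

def evolve():
    population = [random.random() for _ in range(POP_SIZE)]  # initial flash probabilities
    for _ in range(GENERATIONS):
        scored = sorted(((fitness(p), p) for p in population), reverse=True)
        survivors = [p for _, p in scored[: POP_SIZE // 2]]
        # The fittest half reproduce, with small mutations to the gene.
        population = [
            min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.05)))
            for _ in range(POP_SIZE)
        ]
    return sum(population) / POP_SIZE

print(f"Average flashing probability after {GENERATIONS} generations: {evolve():.2f}")
```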

These behaviours reflect the evolutionary behaviours found in the natural world. But while we can expect animals and humans to deceive each other to survive, do we want that for our machines?

Similarly, robots with human-like traits can manipulate people’s feelings. Researchers have found that when robots display personable traits, people are more likely to forgive their mistakes – and will even go as far as to lie to the robot to avoid hurting its feelings. Non-expressive robots received no such treatment. This highlights how carefully robot designers need to choose which human traits to replicate, and it draws attention to the ethical and emotional implications of creating robots in our image.

Machine-learning bias must also be considered. AI systems are powered by data, and that data can reinforce negative stereotypes and amplify discrimination. Researchers have found that, left unsupervised, AI tools can learn to associate female names with roles such as ‘secretary’ or ‘librarian’, and male names with jobs such as ‘doctor’ or ‘scientist’. Until society learns how to eradicate bias, machines will struggle to remain unprejudiced.
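
To make that concrete, here is a minimal sketch of how such associations are commonly measured in word embeddings: by comparing the cosine similarity of word vectors. The tiny three-dimensional vectors below are invented for illustration only; real embeddings (word2vec, GloVe and the like) are learned from large text corpora and have hundreds of dimensions.

```python
import math

# Invented toy embeddings -- real vectors are learned from text, not hand-written.
embeddings = {
    "emily":     [0.9, 0.1, 0.3],
    "john":      [0.1, 0.9, 0.3],
    "secretary": [0.8, 0.2, 0.4],
    "doctor":    [0.2, 0.8, 0.4],
}

def cosine(u, v):
    # Cosine similarity: values near 1.0 mean the words appear in similar contexts.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

for name in ("emily", "john"):
    for job in ("secretary", "doctor"):
        print(f"{name} ~ {job}: {cosine(embeddings[name], embeddings[job]):.2f}")

# If a corpus mentions female names alongside 'secretary' more often, the learned
# vectors drift closer together -- and any downstream model inherits that bias.
```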

Microsoft’s Twitter bot Tay, launched in 2016, was designed to talk like a young millennial through ‘conversational understanding’. The more Twitter users chatted to the bot, the more it learned to mimic their language. However, in less than 24 hours, the bot was blurting out inflammatory slurs and echoing some of the worst traits of humanity.

The machines making decisions today

AI and machine learning are already changing the way some industries make decisions. However, most tools are being used in support of human decision-making, not in lieu of it (for now).

Policing

UK police in Durham have implemented a predictive AI system that helps determine whether or not to keep suspects in custody. The Harm Assessment Risk Tool (Hart) uses police data recorded between 2008 and 2012 to classify individuals as being at low, medium or high risk of committing a future offence.

The tool was first tested in 2013, and the police monitored its decisions over the course of two years. Forecasts that a suspect was low risk proved accurate 98% of the time, while forecasts that a suspect was high risk were accurate in 88% of cases.
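
Hart is reported to be built on random-forest classification. The sketch below is emphatically not Durham’s model or data – the features and labels are invented – but it shows the general shape of such a tool: historical records go in, a low/medium/high risk band comes out, and a human custody officer makes the final call.

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features: [age, number of prior offences, years since last offence]
X_train = [
    [19, 6, 0], [45, 0, 10], [31, 2, 3], [24, 4, 1],
    [52, 1, 8], [28, 5, 0], [37, 0, 12], [22, 3, 2],
]
# Hypothetical risk bands assigned to those historical records
y_train = ["high", "low", "medium", "high", "low", "high", "low", "medium"]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# A new custody decision: the model suggests a risk band, the officer decides.
suspect = [[26, 3, 1]]
print(model.predict(suspect))        # suggested risk band, e.g. ['high']
print(model.predict_proba(suspect))  # probability for each risk band
```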

Finance

Financial firms are using credit-scoring tools with AI capabilities to speed up their lending decisions while potentially limiting risk.

These tools can process transaction and payment history data, as well as unstructured or semi-structured data sources such as social media activity, text messages and mobile phone usage. By drawing on this additional data, lenders can better gauge an applicant’s spending behaviour and likelihood of repaying, building a much more comprehensive view of their creditworthiness.
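
As a rough illustration (not any particular lender’s model), such a tool ultimately boils down to weighting traditional repayment signals alongside the newer, alternative ones. The feature names and weights below are invented; a real system would learn them from historical repayment outcomes.

```python
# Hypothetical blend of traditional and alternative credit-scoring features.
def credit_score(applicant: dict) -> float:
    weights = {
        "on_time_payment_rate": 0.5,    # traditional: payment history
        "debt_to_income":       -0.3,   # traditional: transaction data
        "income_stability":     0.15,   # alternative: mobile-phone activity signals
        "negative_text_flags":  -0.05,  # alternative: social media / text signals
    }
    return sum(weights[key] * applicant[key] for key in weights)

applicant = {
    "on_time_payment_rate": 0.92,
    "debt_to_income": 0.35,
    "income_stability": 0.7,
    "negative_text_flags": 0.1,
}
print(f"Score: {credit_score(applicant):.3f}")  # higher = more creditworthy
```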

[Illustration: AI and regulation. Credit: Ioannis Oikonomou]

Insurance

Insurers hold on to huge volumes of data. AI can help crunch this data to give insurers better insights about their customers, while giving customers more personalised premiums.

AI can also help make decisions in insurance claims. One existing example is Tractable, a photo-estimating AI solution. A collision repairer uploads images of the damage, along with an estimate of the repair cost, to the system. The technology then compares the pictures with existing claims and can accurately quote the cost of a repair.
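
One way to picture this kind of system – an illustrative sketch, not Tractable’s actual method – is as a nearest-neighbour search: extract a feature vector from the new damage photos, find the most similar past claims, and base the quote on what those repairs actually cost.

```python
import math

# Invented feature vectors and costs; a real system would use a learned image
# model to produce the features, not hand-made numbers.
past_claims = [
    # (features extracted from damage photos, final repair cost in £)
    ([0.2, 0.8, 0.1], 450.0),
    ([0.7, 0.3, 0.6], 1200.0),
    ([0.9, 0.1, 0.9], 2300.0),
]

def estimate(new_features, k=2):
    # Average the cost of the k most visually similar past claims.
    nearest = sorted(past_claims, key=lambda claim: math.dist(new_features, claim[0]))[:k]
    return sum(cost for _, cost in nearest) / k

print(f"Estimated repair cost: £{estimate([0.75, 0.25, 0.65]):.2f}")
```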

Defining ethical behaviour for robots

“In law a man is guilty when he violates the rights of others. In ethics he is guilty if he only thinks of doing so.”

—Immanuel Kant

 

According to Wendell Wallach, co-author of the book Moral Machines: Teaching Robots Right from Wrong, there are three main types of morality for robots:

The robot is built with pre-programmed responses for all possible situations.

When the robot comes across a situation it has not been prepared for, it applies some form of ethical reasoning to decide its next actions.

The robot can learn ethical lessons from past encounters and develop its own moral compass.
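
Very roughly, the three approaches can be contrasted as three decision strategies: a lookup table of designer-approved responses, an explicit ethical rule applied to unfamiliar situations, and an agent that learns its own values from feedback. Everything in the sketch below – situations, rules, class names – is invented purely to illustrate the distinction.

```python
# Type 1: every anticipated situation has a canned, designer-approved response.
PREPROGRAMMED = {"child_on_road": "brake", "obstacle_ahead": "swerve"}

def preprogrammed_response(situation):
    return PREPROGRAMMED.get(situation, "stop_and_wait")

# Type 2: no canned answer, so apply an explicit ethical rule -- here, a crude
# "choose the option with the least estimated harm" principle.
def reasoned_response(harm_estimates):
    return min(harm_estimates, key=harm_estimates.get)

# Type 3: the robot builds its own moral compass from feedback on past choices.
class LearningAgent:
    def __init__(self):
        self.values = {}  # action -> accumulated approval

    def decide(self, options):
        return max(options, key=lambda action: self.values.get(action, 0))

    def feedback(self, action, approval):
        self.values[action] = self.values.get(action, 0) + approval

print(preprogrammed_response("child_on_road"))       # -> brake
print(reasoned_response({"brake": 2, "swerve": 5}))  # -> brake
agent = LearningAgent()
agent.feedback("brake", +1)
print(agent.decide(["brake", "swerve"]))             # -> brake
```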

But even if it’s possible to create robots that can make moral decisions, should we? The question of whether robots are in a position to make ethical decisions is a tricky one. After all, are humans adequately ‘programmed’ to make ethical decisions themselves?

Perhaps ancient philosophy can help. Plato saw morality as something beyond human nature: objective and universal. Actions could only be moral if they could be justified from a detached point of view. Applied to AI, a Platonist might argue that machines should simply do whatever is objectively moral. Robots would – and should – be able to make the tough decisions that seem impossible for us flawed humans to compute.

On the other hand, Aristotle saw morality as the best way for a being to behave. Humans have their own morality, as do animals. Morality is about being able to personally explain your choices to another rational individual. As such, robots that follow this view would have to develop their own morality. Their choices would be driven by their own values.

Each approach has its pros and cons, and neither stands out as clearly better than the other. Yet considering that humans are swayed by emotion (e.g. road rage) and prone to bad judgement (e.g. drunk driving), we can at least expect robots to behave more consistently than we do.

Is regulation the answer?

“Are fundamental assumptions underlying regulatory frameworks, such as a very generic distinction between ‘things’ and ‘humans’, sustainable in the longer term, if (bio)robotic applications are increasingly built into human bodies?”

—Leenes R, Palmerini E, Koops B-J, Bertolini A (2017)

 

The World Economic Forum’s Global Risks Report 2017 identified AI, biotech and robotics as having some of the highest potential benefits to society. Together, these technologies can drive economic growth and help solve society’s most complex challenges, such as climate change and population growth. However, they are also among the technologies most in need of regulation.

Today, the most famous rules governing robots and ethics were devised by the science fiction author Isaac Asimov. His Three Laws of Robotics state:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These rules are still referenced as a guide for robot development today. But considering Asimov first introduced them in 1942, they perhaps do not reflect the AI capabilities and complexities of our times. Furthermore, the rules fail to address a number of questions: what constitutes harming a human being, for example? Should a robot surgeon amputate a patient’s gangrenous leg to save their life?

In 2016 the European Parliament published a paper that acknowledged AI could surpass human intellectual capacity in the next few decades. Unless proper preparations were made, this could “pose a challenge to humanity’s capacity to control its own creation and, consequently, perhaps also to its capacity to be in charge of its own destiny and to ensure the survival of the species”.

The report recommended that robotics research should be carried out in the interests of humans and their wellbeing. It called for a European agency for AI that could provide regulatory and technical advice to robot designers. And while mention of AI ethics was sparse, it did touch on liability, suggesting it should be proportionate to a robot’s level of autonomy and the instructions actually given to it. This implies machines and their creators would be held jointly responsible for a robot’s decisions.

The future

72% of Europeans believe robots are good for society because they help people.

(Source: Eurobarometer, 2015)

 

The issue with morality is that there is often no clear right and wrong. Everyone, from robot designers and engineers to policymakers and the public, will have their own idea of the best course of action. What’s more, teaching morality is difficult because humans can’t express it in objective terms, which makes it hard to give machines measurable metrics to process. And then there’s the argument that humans don’t have a true understanding of morality themselves: we usually go with our gut feeling. Machines, by their nature, lack such instincts.

Perhaps the best way forward is to develop machines in the way we raise children. Morality is a continuous learning process. We don’t directly force our choices on our children – rather, we guide them on what the best behaviour should be. Eventually the child (or robot) will act in a way it believes to be right.

For now, let’s remember that AI cannot truly think for itself. Yes, it can perform highly complex tasks often associated with intelligent people (intricate equations, medical deduction, legal reasoning), but robots still can’t quite manage basic human tasks, such as folding the laundry or telling jokes. Making independent moral judgements is still a long way off.