
Humanlike robots continue to transform society. But is there a limit to how ‘human’ we can make machines? And what are the ramifications of creating robots in our image?


It’s often said that certain qualities, such as morality, emotion and culture, are uniquely human. They’re what separate us from non-human animals and other living organisms. In fact, the philosopher Aristotle characterised the human race as ‘rational animals’ who live by art and reasoning. In essence, we pursue knowledge for its own sake.

But what if we were to transfer these qualities onto a robot? If a robot could be made to feel empathy or express creativity, would that make it ‘human’? It might seem like a far-off question, but the reality is closer than you think.

AI is evolving at a rapid pace thanks to better computer hardware and software. So it might not be too long before jobs traditionally considered ‘safe’ are automated as well.

According to a study by researchers at Oxford and Yale universities, AI is expected to outperform humans at all tasks within the next 45 years and to automate all human jobs within the next 120. Jobs that involve predictable and repetitive tasks will go first; jobs that require high levels of creativity and emotional intelligence will follow. Whatever the complexity, creativity or skill a job entails, robots are coming for it.

So what does this mean for society today, and what can we expect from humanlike robots in the coming years?

The ‘humanness’ of robots today

If you’re reading this article, you’re probably well adjusted to being human. You can navigate your way around this website, digest the information at your own pace, and make up your mind about the issues at hand. You’re probably sitting on a chair. Maybe you’re even eating a snack. For you, all these activities are nothing extraordinary.

But for robots and their researchers, replicating these simple behaviours has proved highly difficult. It turns out humans are complex beings. Everything from how we interpret the scene in front of us to how we navigate terrain is hard for a robot to achieve. Whereas we humans are skilled at adapting our abilities to new or dynamic environments, robots need to know what’s coming in order to function.

Nasa’s Robonaut (R2). Credit: Nasa / Robert Markowitz and Bill Stafford

Videos of door-opening robot dogs and back-flipping humanoids are fuelling people’s imaginations (and fears). But are people mistaking machine performance for intelligence?

“An AI system can play chess fantastically, but it doesn’t even know that it’s playing a game. We mistake the performance of machines for their competence. When you see how a program learned something that a human can learn, you make the mistake of thinking it has the richness of understanding that you would have.”

Rodney Brooks, Chairman and CTO of Rethink Robotics

In many ways, AI is still in its infancy. Unlike humans, who have a deep understanding of the world around them, AI systems still lack the ability to grasp context.

In short, today’s robots tend to find difficult tasks easy and easy tasks difficult. They may be skilled in playing world-class chess but still struggle with loading the dishwasher. And, despite much of today’s computing terminology taking inspiration from human traits (computers can catch a ‘virus’, ‘read’ discs and ‘write’ files), the robots currently in development are still far from human.

However, researchers are trying to bridge this gap. They’re not only trying to improve the technical abilities of robots so they’re better at processing environments – they’re improving the social potential of robots as well.

The field of social robotics is all about designing robots with human behavioural characteristics in mind. This can include anything from the way robots speak and how they sit, to how they react to different environments or people. The idea is to have robots echo human social conduct as closely as possible.

The benefits are twofold:

Firstly, it makes it easier for the public to interact with robots. Because these robots act like humans, little prior training or experience is needed. For example, people often make eye contact with drivers before they cross the road. If driverless cars are introduced, they will need a way to communicate with pedestrians in a similar manner: for instance, lights that blink as soon as the car registers a pedestrian’s eye contact.

Secondly, it helps robots integrate better into society. Consider the workplace, for instance. The best collaboration and teamwork often happen when people build a rapport. Social robots that display human traits, such as humour or charisma, can help with that relationship-building.

But even if we could create robots capable of mimicking our physical movements and interaction, could they truly think and behave like us? Creativity and emotional intelligence are two things often considered innately human… but are they?

Can we teach robots creativity?

Robots are helping people become more creative every day. New tools in image making and music composition are helping push the frontiers of what’s humanly possible. With robots doing the time-consuming and non-creative tasks, artists can focus their energy on their art instead of spending time on their tax returns.

But whether or not robots can create art themselves is a difficult question. Firstly, a definition of creativity is needed.

Margaret Boden, a cognitive scientist at the University of Sussex, defines creativity as “the ability to come up with ideas or artefacts that are new, surprising and valuable.” Ideas can include poems, songs, jokes or scientific theories, while artefacts can be anything from sculptures and paintings to vacuum cleaners.

Banksy robot art. Credit: Banksy / Photo by Scott Lynch [CC BY-SA 2.0]

‘New’ and ‘surprising’ are highly difficult to automate. Automation relies on explicit instructions and boundaries, going against the very notion of creativity. What’s more, creativity in art, writing and music requires drawing on personal experience. These works are creative because their creators understand the core principles of their craft and lean on different inspirations to create something ‘new’.

Some also argue that, even if a computer is capable of producing ‘creative’ works, it can never be genuinely creative, because it is the programmer’s creativity at work. Just as the paintbrush was the tool of Monet and the camera was the tool of Man Ray, the AI is the tool of whoever is using it.

Despite that, various examples of robots expressing themselves (or their creators) already exist.

Expressive robots today

Aiva

Aiva (short for ‘Artificial Intelligence Virtual Artist’) is an AI that composes classical music. Recently it became the first AI to be officially recognised as a composer, giving it authorship rights under France and Luxembourg’s copyright laws.

It uses deep learning to compose symphonies, drawing on the works of famous composers such as Beethoven and Mozart (whose music is out of copyright). From this analysis, it creates its own pieces, based on whatever series of notes it considers to sound ‘right’. The sheet music is then played by professional musicians on real instruments and recorded in a studio.
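Aiva’s actual deep-learning pipeline isn’t public, but the basic idea of learning note patterns from existing scores and then sampling new sequences can be illustrated with a much simpler stand-in. The sketch below is only a toy: a first-order Markov chain trained on two invented note sequences, not Aiva’s method.

```python
import random
from collections import defaultdict

# Toy stand-in for learning note-to-note patterns from existing scores.
# Real systems such as Aiva use deep learning; this first-order Markov
# chain only illustrates the learn-then-sample idea.

training_scores = [  # invented note sequences standing in for real scores
    ["C4", "E4", "G4", "E4", "C4", "D4", "E4", "C4"],
    ["G4", "F4", "E4", "D4", "C4", "E4", "G4", "C5"],
]

# Count which notes follow which across the training scores.
transitions = defaultdict(list)
for score in training_scores:
    for current, following in zip(score, score[1:]):
        transitions[current].append(following)

def compose(start="C4", length=16):
    """Sample a new melody from the learned transition table."""
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1]) or transitions[start]
        melody.append(random.choice(options))
    return melody

print(" ".join(compose()))
```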

RoboThespian

RoboThespian. Credit: Scailyna (Own work) [CC BY-SA 4.0]

RoboThespian is an acting humanoid. Created by UK-based Engineered Arts, it’s capable of talking and displaying facial expressions. Currently, it’s being used at corporate events and in museums to entertain and communicate with visitors. It has also spent time on stage, playing a key character in a play called Spillikin.

Botnik

Botnik is an AI-assisted writing tool that uses human-created content, anything from Seinfeld scripts to Harry Potter novels, to build predictive keyboards. These can then be used to write new versions of existing works. It has already been used to produce a new Harry Potter chapter, as well as new episodes of Scrubs, The X-Files and Seinfeld. “The idea of Botnik is that humans and machines working together can come up with things that neither would be able to on their own,” Botnik CEO and co-founder Jamie Brew has said.
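Botnik hasn’t published the inner workings of its keyboards, but the core idea of suggesting likely next words based on a source text can be sketched with a simple bigram model. This is a hypothetical illustration, not Botnik’s implementation, and the corpus string is a made-up placeholder.

```python
from collections import Counter, defaultdict

# Minimal sketch of a predictive keyboard: given a source text, suggest
# the most likely next words after whatever the writer just typed.
# Illustrative only; the corpus is a placeholder, not real source material.

corpus = (
    "the wizard raised his wand and the wand sparked and the room "
    "went quiet and the wizard smiled"
)

# Count which words follow which in the source text.
next_words = defaultdict(Counter)
tokens = corpus.split()
for current, following in zip(tokens, tokens[1:]):
    next_words[current][following] += 1

def suggest(previous_word, k=3):
    """Return up to k candidate next words, most frequent first."""
    return [word for word, _ in next_words[previous_word].most_common(k)]

print(suggest("the"))     # ['wizard', 'wand', 'room']
print(suggest("wizard"))  # ['raised', 'smiled']
```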

So can AI be creative? Today’s AI can write poetry, but it can’t write a novel. It can act, but it can’t make a movie. Technical ability can be mimicked, but creativity and intention are still a long way from being replicated. While robots can create what looks like art, it’s still debatable whether it actually is.

Can we teach robots emotional intelligence?

Human intelligence relies on an elaborate network of neurons within the brain. And while it’s still unclear exactly how these networks give rise to emotion and logic, scientists are attempting to create AI systems that replicate this intelligence.

With tomorrow’s generation growing up alongside AI, interacting with digital interfaces will soon become the norm. Making sure robots have a degree of emotional intelligence may help ease fears that humans will eventually lose the ability to empathise with one another.

However, robots need not be overly humanlike for us to relate to them. Humans are naturally inclined to anthropomorphise robots and attribute intelligent traits to them, even where none exist.

Given the trend of calling electronic gadgets by human names (‘Alexa’ or ‘Siri’, for example) and designing them with human features in mind, whether subtle blinking eyes or the full four-limbed body of Honda’s Asimo humanoid, understanding how we act around robots can help researchers design machines that work better for us.

Emotional intelligence in robots today

Emotionally intelligent AI can be used to improve road safety by monitoring fatigue and detecting distraction in drivers. Toyota’s Concept-i is exploring this idea in a number of ways. It uses AI to gauge human emotions by analysing facial expressions, conversation topics and driving habits. If it detects that alertness levels are decreasing, it can jiggle the driver’s seat or suggest a pit stop. If it senses the driver is moody, it can change the music to cheer the driver up. Or, if it senses the driver is frustrated and stuck in a traffic jam, it can rhythmically inflate and deflate the driver’s seat, with the idea of regulating breathing to calm the driver down.
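Toyota hasn’t detailed the Concept-i’s decision logic, so purely as an illustration of the pattern described above (an estimated driver state goes in, a cabin intervention comes out), here is a hypothetical rule-based sketch. The fields, thresholds and actions are invented.

```python
from dataclasses import dataclass

# Hypothetical illustration of the pattern described above: estimated
# driver state in, cabin intervention out. The fields, thresholds and
# actions are invented; Toyota's actual Concept-i logic is not public.

@dataclass
class DriverState:
    alertness: float      # 0.0 (drowsy) to 1.0 (fully alert)
    mood: float           # -1.0 (frustrated) to 1.0 (cheerful)
    in_traffic_jam: bool

def choose_intervention(state: DriverState) -> str:
    if state.alertness < 0.3:
        return "vibrate the seat and suggest a pit stop"
    if state.in_traffic_jam and state.mood < -0.3:
        return "pulse the seat inflation to encourage slow, calm breathing"
    if state.mood < 0.0:
        return "switch to more upbeat music"
    return "no action"

print(choose_intervention(DriverState(alertness=0.2, mood=0.1, in_traffic_jam=False)))
```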

Emotionally intelligent AI can also help with mental health support. Cogito’s Companion app is one example: it tracks a patient’s behaviour. Location data can indicate that a patient hasn’t left their home for days, while phone logs can show that they haven’t spoken to anyone for weeks. AI is also used to analyse audio ‘check-ins’, voice recordings left by the patient, picking up on emotional signals that may indicate depression. This data can help doctors track their patient’s progress.
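The app’s actual scoring methods aren’t public, so as a rough, hypothetical sketch of how location and call-log timestamps could be turned into the kinds of flags described above, consider the following. The thresholds and data are invented.

```python
from datetime import date

# Rough, hypothetical sketch of deriving behavioural flags from location
# and call-log timestamps. Thresholds and data are invented; this is not
# the Companion app's actual scoring method.

today = date(2024, 5, 20)
days_left_home = [date(2024, 5, 2), date(2024, 5, 3)]   # placeholder data
days_with_calls = [date(2024, 4, 28)]                   # placeholder data

def days_since(events, reference):
    """Days elapsed since the most recent event, or None if there are none."""
    return (reference - max(events)).days if events else None

flags = []
if (days_since(days_left_home, today) or 0) >= 7:
    flags.append("has not left home in over a week")
if (days_since(days_with_calls, today) or 0) >= 14:
    flags.append("no phone conversations in over two weeks")

print(flags)  # flags a clinician might review alongside other data
```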

SoftBank Robotics has developed a humanoid robot named Pepper for use as a customer service representative. It draws on touch, voice and emotion to communicate with humans. The idea is that if a person is sad, the robot can pick up on this and attempt to cheer the person up. This can be anything from telling a joke to playing cheerful music. Its manufacturers claim that the more people communicate with it, the more Pepper learns about its audience, therefore becoming more personalised in its communications.

Pepper, SoftBank Robotics

Yet these examples also highlight that truly emotionally intelligent AI is still some way off. And let’s face it: as humans, we often find it hard to understand how our own friends and family are feeling, so what chance do robots have? Human behaviour is nuanced, inconsistent and often unpredictable, and robots still have a long way to go before they can reliably pick up on such subtle emotional cues.

The road for robots ahead

Every day, robots are becoming more advanced, more complex and more human. Eventually, these emotionally intelligent, highly creative and deeply social machines may well replace much of the human workforce. But not any time soon.

For now, it makes sense to focus on how humans and machines can work together to build on human potential. Whether it brings safer roads or more connected communities, such technology will soon be ingrained in society. Rather than fear the robotic future, the better path is to consider how robots can empower humanity and help us achieve our goals, allowing us to communicate, create and remain ‘rational animals’.