Recipe for a Robot

Three researchers talk about what it takes to make a social robot

The Researchers

Maja Matarić
Ayse Saygin
Andrea Thomaz

After years of existing only in fiction, robots with personalities and expressive facial features are now beginning to find a place in real-world settings such as hospitals, schools, and nursing homes. Scientists and engineers, in turn, are working to improve how these robots guide and support patients through physical rehabilitation, act as teaching aides, keep elderly nursing-home residents company, and fill other dynamic roles.

These researchers are finding that for “social robots” to be effective, they have to be socially acceptable to people who interact with them. But what does it take to make a robot social and engaging? Should it walk and talk just like a person? Should it be emotionally expressive and responsive? Does the personality of the robot matter? What’s needed to get a robot and a human to work well together?

To answer these questions, researchers in artificial intelligence and robotics are seeking insights from neuroscience, psychology and social science. This interdisciplinary approach is leading to the creation and testing of some of the first social robots, revealing what works and what doesn't in the up-and-coming field of human-robot interaction (HRI). Recently, The Kavli Foundation brought together three pioneers in HRI to discuss how their findings are shaping the future of robotics, as well as our future social sphere.

  • Maja Matarić – professor of computer science, neuroscience and pediatrics, and director of the Center for Robotics and Embedded Systems, University of Southern California; session chair at the 2011 Kavli Frontiers of Science Symposium.
  • Ayse Saygin – assistant professor of cognitive science and neurosciences, and faculty member of the Kavli Institute for Brain and Mind, University of California, San Diego; presenter at the 2011 Kavli Frontiers of Science Symposium.
  • Andrea Thomaz – assistant professor of interactive computing and director of the Socially Intelligent Machines Laboratory, Georgia Institute of Technology.

Below is an edited transcript of the discussion.

THE KAVLI FOUNDATION (TKF): To begin, what are the key ingredients for a social robot?

Andrea Thomaz, assistant professor of interactive computing and director of the Socially Intelligent Machines Laboratory, Georgia Institute of Technology.

ANDREA THOMAZ: In my lab, we see human social intelligence as comprising four key components: the ability to learn from other people, the ability to collaborate with other people, the ability to apply emotional intelligence, and the ability to perceive and respond to another person’s intentions. We try to build this social intelligence into our robots. With the first component, our goal is for the robot to go beyond its machine programming and learn from a human who knows nothing about robotics and can teach it just as if he or she were teaching a person. With collaboration, we want the robot to be able to participate in teamwork with another human. Emotional intelligence is also important because emotions communicate a lot of the information needed for collaboration, like “I’m stressed out” or “I’m tired.” You need to be able to tell when someone is overworked so you can help them out a little bit more. The fourth component is being able to understand people’s intentions. A lot of research shows that people are really good at perceiving others’ intentions and goals from their actions. If I see you reach for a tool, I’m going to infer that you want that tool and think about why you want it. That’s a lot different from just seeing some pixels on a screen and inferring that your hand is moving; a social robot needs to go further and understand why your hand is moving. We try to program into a social robot all these things that we intuitively and unconsciously communicate to each other.

MAJA MATARIĆ: How we interact with embodied machines is different from how we interact with a computer, cell phone, or other intelligent device. We need to understand those differences so we can leverage what is important. Features of expression in the robot’s face and body are important, especially the body, because posture, gestures, and how close we get to someone are subtleties we use to manage social interactions.

AYSE SAYGIN: I work primarily on body language from the standpoint of how people understand each other’s movements and actions. We can’t yet build a model of how the human brain does it and then put that into a machine. I think it’s important, but we’re not there yet in terms of fully understanding it.

Maja Matarić, PhD, professor of computer science, neuroscience and pediatrics, and director of the Center for Robotics and Embedded Systems, University of Southern California.

MAJA MATARIĆ: But not everything is modeled on biology. Planes don’t fly the way birds do, but they’re still really good at flying. Our robots have moveable eyebrows, but people don’t tend to look at eyebrows when they interact with people because there is so much else to look at, such as the eyes and the muscles around them. When you don’t have all that other stuff, the eyebrows become important and can convey a great deal.

TKF: How important is it for the robot to look and act like a person?

AYSE SAYGIN: Making robots more humanlike might seem intuitively like the way to go, but we find it doesn’t work unless the human-like appearance is matched by human-like actions. Certain theories of human cognition predict that human likeness will increase empathy. But when we tested this by scanning people’s brains as they observed highly human-like robots and less human-like robots, we found that a robot whose appearance and behavior are inconsistent confuses the brain, which detects the mismatch and doesn’t respond the way it would to a person. So appearance is not trivial, and you might want to bypass the issue completely by not getting close to human appearance, because when you do, people expect behavior congruent with that appearance. When those expectations are not met, they sense something is off and perceive the robot as creepy.

Ayse Saygin, PhD, assistant professor of cognitive science and neurosciences with a background in computer science, and faculty member of the Kavli Institute for Brain and Mind at the University of California, San Diego.

ANDREA THOMAZ: I agree with both Ayse and Maja that it’s more important for the robot to act like a person than to look like a person. We tend to project human beliefs and goals onto inanimate objects, and it’s interesting to think about how we can take advantage of this aspect of human psychology. Making a robot behave in a way that lets people correctly predict its intentions can be instrumental in making human-robot interaction more effective. One of the best arguments for this is Pixar: their ability to make you believe inanimate objects are humanlike is amazing, and we take that as inspiration. We can add a human-like quality to a robot’s motion based on what we know makes motion expressive in a cartoon character. One of my students has shown that when a robot’s motions let people recognize and intuit its intentions, they interact better with the robot and remember more about the task they were performing with it. So it’s tapping into a more unconscious perception of motion.

TKF: What other features should a social robot have?

MAJA MATARIĆ: Everyone knows that personality is important in human-human relationships, so we explored whether it is also important in human-robot relationships, since robots can be programmed to have personalities. We found that when we matched the personality of the robot to that of the user, people performed their rehab exercises longer and reported enjoying them more. It turns out the truism that opposites attract is not true at all. We found that extroverted stroke patients who interacted with extroverted robots did better than extroverted patients who interacted with introverted robots, and vice versa. It was a very powerful effect, but we have only tested it in the context of motivating behaviors, like exercise in stroke patients, that are otherwise not pleasant or fun on their own. More work needs to be done to assess which robot personalities are best for other kinds of interactions. People will ascribe personalities to machines anyway, so we might as well manipulate this in a productive way.

AYSE SAYGIN: In one of my studies, in which people judged how human-like a computer’s conversation was, we found people really liked the computer that was rude, one that would use swear words. People thought this computer was a real person because it expressed emotions. So you see, people can be very easy to manipulate at this level, but to really get robots to work in multiple settings, you have to have adaptable machine learning so the robot can learn what each user prefers.

As director of the Center for Robotics and Embedded Systems at the University of Southern California, Maja Matarić led the team that designed Bandit, a robot created to encourage and teach social behavior to children with autism, help stroke patients with their physical rehabilitation exercises, and assist the elderly with physical and cognitive exercises.

MAJA MATARIĆ: And a personal robot that interacts with the same user over a long period of time raises other challenges. We need to figure out how to make a personal robot remain interesting and engaging over months and years. Historically, we have tried to program into machines the best way to do something, on the assumption that there is a well-defined best way out there. That’s not true for interactions with people; it’s not at all clear that there is a best way. And whatever the best way is today may be different next month, because people change and interactions change. After all, nearly half of all marriages break up, so people are pretty bad at these long-term relationships.

TKF: Andrea, what did you learn from your studies of how people interact with emotionally expressive and responsive robots?

ANDREA THOMAZ: We tried to make a robot that mimicked behavior you see in infants. When infants get into a novel situation, they try to understand whether it’s good or bad based on the caregiver’s emotional response. When I was at MIT in Cynthia Breazeal’s lab, we made a robot called Leonardo that extracted features of the voice and eyebrows of the human it was interacting with to decide whether a novel object was positive or negative. These studies showed the importance of emotional expression as a communication channel. We don’t exactly understand how emotional intelligence works in humans, so we’re implementing these simplified models in machines to figure out what does and doesn’t work.

TKF: What have been some of the more surprising findings to come out of your human-robot interaction research?

ANDREA THOMAZ: We found in our studies that people actually really enjoy interacting with robots that are learning from them. They’re surprised when the robot changes its behavior based on their input, and they find this to be a very rewarding experience. It’s great that people don’t mind teaching the robot. We’re also finding that a person doesn’t need to fully understand the software underlying the robot in order to teach it effectively. They don’t have to understand how reinforcement learning or active learning works, for example, to interact effectively with the robot.

MAJA MATARIĆ: People hold a lot of stereotypes about interacting with robots. They assume kids love to interact with robots and older people don’t. But as we do more research, we find more evidence against these stereotypes. The older people in our studies had never seen these robots before and did not even have much experience with computers, yet they engaged with our robots with great interest. With robotics, there’s great potential to overgeneralize, overhype, and also be excessively concerned and say, “Oh my gosh, robots are going to take over the world.” When I hear that, my response is, “Yeah, right, as soon as they can walk three steps and not trip on the carpet or the cat.” We need to listen to what the science is telling us and not just go with gut feelings.

TKF: Ayse, much of your neuroscience research has focused on perception. How do your studies on robots help this research?

As director of the Socially Intelligent Machines Laboratory at Georgia Tech, Thomaz led the team that designed Simon, a robot created to learn from humans the way a person would, through observation, demonstration and social interaction, and then respond through familiar social cues. Pictured: Simon flanked by two of his designers, Nick DePalma and Thomaz. (Photo credit: Gary Meek/Georgia Institute of Technology)

AYSE SAYGIN: In reality, looking and acting are rarely separated: when you see a dog, it looks like a dog and moves like a dog. But now we have these robots that defy that coupling, so we can use them to learn about human perception and how it works. I sometimes compare this to visual illusions, where we learn something about perception by looking at how it is disrupted in certain cases. There’s been a decades-long debate about whether the goal of artificial intelligence should be to understand human intelligence and simulate it, or whether the goal should simply be to get stuff working. I think we should do both. We need people trying to make robots that help people, but at the same time a “failed” robot can be informative in my area, where I can do an experiment with it to try to understand why it failed. Robotics researchers can then use my neuroscience findings to better design the next robots so that they help more people. There are uncertainties in understanding both the human side and the robotics side, so putting them together is a little bit stressful, but it’s actually a better way to get answers and solve problems. I really enjoy the interdisciplinary nature of this line of work. We’re trying to use the brain to understand human-robot interaction, and robot-human interaction to understand the brain.

TKF: Do you confront ethical issues tied to building social robots?

MAJA MATARIĆ: Ethics is a hot topic in our field. Roboticists are certainly concerned with whether technology is dehumanizing and whether we are creating machines that will take people away from other people. But we find that usually just the opposite is the case. For example, when we work on robots for kids with autism, the robots are designed to help the kids learn how to interact socially with other children, so the goal is to bring children together. What we are finding from our studies is that these machines often bring people together in really novel ways, much as online social networks brought people together in ways that were entirely unanticipated 10 years ago. I chaired a session on computational social science at the Kavli Frontiers of Science Symposium just a couple of weeks ago where we discussed this very topic. One ethical issue we do need to worry about is that people get emotionally attached to these machines. So what happens if the machine breaks or becomes obsolete and the manufacturer has moved on to the next level with a new machine?

Meet Simon and Bandit (left to right). To learn more, see the sidebar story, Social Robots in the Real World. (Photo credit: Gary Meek/Georgia Institute of Technology; M. Matarić, USC)

ANDREA THOMAZ: I agree. To make a robot work better in human society, it is going to have to push all the right buttons so that you interact with it as if it were another human. People already get emotionally attached to their cars, which don’t even push those social buttons, so we have to worry that people could get even more attached to a social robot. We need to consider what we can do to make it really transparent that the robot is a machine that can break and might go away or become obsolete. This is being discussed a lot at conferences, and there are university classes and books on robot ethics.

TKF: Robotics is a field that has been traditionally dominated by men, but the field of human-robot interaction seems to have a lot of women. Why do you think that is?

ANDREA THOMAZ: When you go to a human-robot interaction conference, it’s certainly more gender balanced than a typical engineering or robotics conference. Studies have shown that women are drawn to technology fields that have a clearer impact on people, so that could be why this subfield of robotics is more balanced.

MAJA MATARIĆ: I agree, and it’s certainly a wonderful way to recruit more women into engineering in general and robotics in particular. If you look at the statistics, women in engineering tend to be in biomedical engineering, probably because they can see how their work in that field can help people; HRI has the same potential.

AYSE SAYGIN: We know from human psychology that people like to see themselves represented. If you are a woman, it can be discouraging to look at the professors or the big leaders in your field and see that they are all men. I’m pleased to see this involvement of women.
