Robots are increasingly playing a role in customer service – be it taking orders in restaurants, providing information in banks, or delivering items in hotels. For the investment in customer service robots to be worthwhile, though, customers need to enjoy interacting with them, and they need to trust them. A robot’s appearance has a key role in users’ willingness to interact with it, and in their enjoyment of that interaction. Namely, a certain level of perceived “humanness” (or anthropomorphism) is necessary for meaningful interaction to occur. However, which factors are better at creating that perception is still unclear.
A team of researchers in The Netherlands has investigated the effect of two types of anthropomorphic features of a robot on users’ enjoyment of the interaction. Namely, they tested the impact of looking like a human vs behaving like a human on users’ perceptions. This is an interesting issue from a customer-interface perspective because it isn’t always possible to improve both aspects of a robot at once: making a robot behave more like a human is not always compatible with making it look more like one.
The researchers investigated the impact of this trade-off by manipulating the eyes of a Pepper humanoid service robot (see Figure 1, below). The robot was placed by a reception desk, and offered directions to specific locations on a campus. In one condition, the eyes of the robot had a static colour (like humans have), but the robot did not change its gaze as it interacted with the user (unlike humans). In the other condition, the eyes of the robot had a dynamic colour (unlike humans), but the robot changed its gaze (like humans do). The experiment was conducted by Michelle M.E. van Pinxteren, Ruud W.H. Wetzels, Jessica Rüger, Mark Pluymaekers and Martin Wetzels, and the results are published in the Journal of Services Marketing, in a paper entitled “Trust in humanoid robots: implications for services marketing”. An open access version is available here.
The findings, represented in Figure 2 below, show (on the right-hand side of the image), first of all, that users are more likely to enjoy the interaction with the robot, and to intend to use it again, if they trust the robot. Second, perceived humanness of the robot (middle section of the image) improves users’ trust in the robot. Third (left-hand side of the image), increasing the human-like behaviour of the robot (at the expense of human-like appearance) has a negative impact on the perceived humanness of the robot.
However, this final result regarding the trade-off between human-like appearance and human-like behaviour has an important caveat, shown in Figure 3. Users reporting high comfort in interacting with the robot (solid line in Figure 3) also report high levels of perceived humanness. For this group, the results are as reported: they strongly prefer human-like features to human-like behaviours. However, users reporting low levels of comfort in interacting with the robot (dotted line in Figure 3) also report low levels of perceived humanness. Moreover, they slightly prefer human-like behaviours to human-like features.
Naturally, we need to be very careful not to over-generalise from this study, which used a very specific experimental setting and a very particular (and limited) form of human-like behaviour. Yet, the difference between the preferences and assessments of these two groups is very interesting. It suggests that anxiety about the interaction could have a dramatic effect on users’ perception of the robot, and on which features make them feel good (or less bad) about interacting with a robot in a service setting.
This reminded me of a conversation that I had on Tuesday, after my presentation on customers’ perceptions of customer service chatbots. My colleague told me that she really dislikes it when a chatbot is being “fake-friendly”, such as when a bot asked her if she was having a good day. What features of a robot’s appearance or behaviour discourage you from interacting with a robot?