As a concept, Artificial Intelligence (AI) is usually defined in terms of how closely its workings (e.g., the ability to hold a conversation) resemble human reasoning. The closer it is to humanlike performance, the better the AI is deemed to be (read about the Turing test here).
But what about the way the AI looks? Or sounds? Or acts? Is it better for AI to be humanlike, or not? We call the process of making AI feel like a human agent anthropomorphising.
With AI making more and more inroads into customer service (for instance, in the form of customer support chatbots), companies are wrestling with the extent to which to make their AI agents “feel” humanlike. For instance, should we give our AI assistant a gendered name and voice? What about making our robot look like a person?
The empirical evidence is unclear. Some studies suggest that anthropomorphising AI supports customers’ interactions with the technology. However, other studies indicate that anthropomorphised AI creates a sense of unease among customers and is, therefore, counterproductive for marketing purposes. This range of findings shows how important it is to understand if and how anthropomorphism impacts customer intentions and marketing outcomes.
Very helpfully, researchers Markus Blut, Cheng Wang, Nancy V. Wünderlich and Christian Brock conducted a meta-analysis of research on this issue. The findings are reported in the paper “Understanding anthropomorphism in service provision: a meta-analysis of physical robots, chatbots, and other AI”, which was published in the Journal of the Academy of Marketing Science earlier this year.
1 Perceptions of Anthropomorphism
Perhaps unsurprisingly, the researchers found that humanlike physical features (e.g., having a face or a body) were more strongly associated with perceived anthropomorphism than non-physical ones. However, the display of emotions (which is a non-physical humanlike feature) had a significant positive impact (RC = 0.61) on whether or not customers perceived the AI solution to be humanlike.
2 Effects of Anthropomorphism
When the AI solution was perceived as humanlike, customers were more likely to rate it as animated, intelligent, likeable, safe, present, easy to use and useful than otherwise. As a consequence, customers tended to be positively predisposed towards the technology, to be satisfied with the interaction, and to rate it as more trustworthy.
3 Intention to use
Perceived anthropomorphism was strongly correlated with intention to use the technology. The effect was particularly strong:
- When the AI was presented as female
- For information processing services (e.g., banking and financial services)
This paper provides a really useful overview and analysis of the empirical evidence on the topic of AI anthropomorphism. I am sure that I will refer to it a lot in the future. Having said that, I disagree with some of the recommendations that the authors go on to make, such as that managers should embrace humanlike AI, and opt for feminised AI customer assistants, particularly in information processing settings.
I disagree because the studies reviewed in this meta-analysis looked at a very narrow range of effects of anthropomorphism. Namely, the studies focused only on the impact of anthropomorphic features on customers’ perceptions of the AI. However, the actual impact on brand perception, or on satisfaction with the overall experience (as opposed to the specific experience of interacting with the AI), may be different. So, if you are a marketing manager considering either anthropomorphising AI in general, or opting for female-gendered AI specifically, first make sure that you research customer expectations of the type of service and of the service provider very carefully.
2 thoughts on “To be, or not to be humanlike, that is the question for marketing AI”
I read your blog post with interest. I noticed that under sociodemographic in the image, there was no mention of race/ethnicity. I got the impression that the participants in the research that was examined were predominantly White. This factor is more likely to be true in a racist country such as America. Also, AI has a poor reputation when it comes to race. For example, most AI facial recognition learning was with white faces because the researchers were white. (I have forgotten the reference). I appreciate race adds another level of nuance to the research, which may be difficult to compensate for. Indeed, the real test of an AI is to pass the Turing test. (Thinking about it, some humans probably could not pass the Turing test, but I digress). So my question is this. How close to passing the Turing test were the results examined in the meta-research?
I went back to the paper and, indeed, there is no explicit mention of ethnic characteristics of the AI as one of the demographic dimensions of anthropomorphic AI. Your comment about the Turing test reminded me of a passage in the book “You are not a gadget – A manifesto” by Jaron Lanier. He wrote that “the Turing test cuts both ways. You can’t tell if a machine has gotten smarter or if you’ve just lowered your own standards of intelligence to such a degree that the machine seems smart. (…) Did that search engine really know what you want, or are you playing along, lowering your standards to make it seem clever?” 😉