The customer perspective on AI

There is considerable interest in the potential of AI for various customer-facing tasks – from market prospecting to sales and after-sales. But what will customers think of these technologies? Will they be happy that their delivery query is being handled by a chatbot, or that their meal is being delivered by a robot?

Jochen Wirtz, Paul G. Patterson, Werner H. Kunz, Thorsten Gruber, Vinh Nhat Lu, Stefanie Paluch and Antje Martins explored this issue in the conceptual paper “Brave new world: service robots in the frontline”, published in the Journal of Service Management (free version available here). Drawing on the literature on technology acceptance and customer satisfaction in service settings, the authors propose that customers’ acceptance of AI-enabled solutions in service settings will depend on the following factors:

[Image: SR 01]

First of all, customers will be willing to interact with AI if the technology provides an easy, useful, accepted way of solving their problems. Given that AI solutions are expected to be able to draw on a range of information, process data quickly and be adaptive, the authors conclude that, as far as the functional aspects of AI are concerned, customers are likely to welcome the deployment of AI technology in service settings.


However, satisfaction in service settings depends on more than just getting the job done. For instance, think about how your experience of eating at a restaurant might be negatively impacted by the waiter’s behaviour, even if your food is delicious. So to the functional attributes, Wirtz and colleagues add two relational dimensions that are crucial in a service setting: trust and rapport.


Trust refers to the extent to which customers feel secure and comfortable about depending on the AI. Early evidence suggests that there is ‘an undercurrent of apprehension, unease, and distrust toward (AI)’ (p. 918), which would hinder acceptance of AI solutions. In turn, rapport refers to the feeling of closeness with the other party. The emerging evidence is that interaction with AI improves rapport, particularly where the interactions are personalised. So, the growing ubiquity of AI solutions – for instance, via smart speakers – will help improve their acceptance.


To further account for the ‘soft’ dimensions of service delivery, Wirtz and his colleagues also consider the role of the following socio-emotional elements: perceived humanness, social interactivity and social presence. Specifically, the AI solution needs to act (sound, look, interact…) in ways that resemble a human, for any meaningful interaction to occur. However, it can’t look or behave too much like one. This is both to manage customer expectations and avoid disappointment when the AI’s performance falls below that of a human (for example, when it misunderstands the customer’s context or emotion), and to take account of the uncanny valley effect [1].


In addition, the AI solution needs to act in ways that conform to social expectations. The authors say that:

(F)or humans and robots to be able to interact effectively requires robots to observe accepted social norms, including displaying the appropriate actions and (surface) emotions. It is important that customers’ needs, their perceptions of a robot’s social skills and robot performance are aligned for a wide adoption of service robots. (p. 917)


Moreover, Wirtz and colleagues argue that customers need to feel like they are interacting with another social being, and that someone is “taking care” (p. 917). [2]


Last but not least, the paper’s authors note that the key to customers’ acceptance of AI is not so much whether there are high or low levels of each of the functional, relational and socio-emotional elements above, but rather whether that level meets the needs of the customers and the characteristics of the task to be performed by the AI.


Based on the characteristics of AI, and the customers’ likely willingness to accept interacting with AI in service delivery, Wirtz and his co-authors go on to propose the following typology for when services might best be delivered by an AI solution, a combination of AI and humans, or just humans:

[Image: SR 02]

This graph looks at the potential of AI – the best-case scenario, so to speak. It ignores factors such as:

  • The cost advantage of AI vs labour – a ticketing AI may be too big an investment for some venues; conversely, a lack of qualified staff may lead companies to use AI in socially and emotionally complex settings;
  • The positioning of the brand – a brand presenting itself as a tech leader may insist on using AI solutions (or pretend to be using such solutions) even when it would make more sense, from a cost-effectiveness perspective, to use staff. Conversely, a luxury provider may insist on having the human touch where machines would do an equally good job – think of The Carlyle Hotel in New York, which prides itself on still having elevator assistants to this day;
  • Negative side effects – the deployment of AI in one area is likely to have negative effects in other areas, at the level of the organisation, the customer, the industry or society. For instance, automated trading algorithms have caused flash crashes in the US stock market, Uber’s self-driving vehicle hit and killed a pedestrian, the use of female voices in smart speakers may reinforce the perception of women as subservient, big economies of scale may lead to winner-takes-all markets, as in the case of Amazon or Facebook, and the use of AI-powered robots to care for the elderly and vulnerable adults may lead to social deprivation.


What other factors may impact on either the customers’ willingness to accept AI in service delivery, or the use case for AI in service settings?


[1] “Uncanny valley” refers to the finding that increasing the human-like features of a robot (for instance, the bow tie and the blinking eyes of the Aloft robot depicted in the video at the top of this blog post) improves its acceptability to humans, but only up to a point. If the machine starts to look too human, it is deemed creepy or freaky, and is rejected.

[2] I am really struggling to understand what “social presence” means in the case of AI. More specifically, what it means to “feel that the AI solution is another social being, which is taking care” of the customer. Any ideas?
