Perceived blame matters

In customer service, things are bound to go wrong at some point. When that happens, it is important to understand not only what went wrong, but also what customers perceive to be the cause of the problem, because that perception shapes the recovery strategy.

Take interacting with a chatbot, for instance. As discussed in a previous post, many factors can leave a customer unhappy after an interaction with a customer service chatbot. These are the five categories of problems that can occur:

| Category | Description |
|---|---|
| Functionality | Chatbot is deemed to be of limited assistance |
| Affective | Chatbot lacks empathy |
| Integration | Loss of information during the interaction, or during handover to a human assistant |
| Cognition | Chatbot can't understand the query |
| Authenticity | Unclear whether the service is being provided by a chatbot or a human |

To properly address customers' frustration caused by these problems, we need to solve not only the real cause of the problem, but also the perceived cause. Why? Because the perceived cause will shape customers' response to the problem, as well as how they interpret other information from or about the firm. This effect is known as confirmation bias. For instance, if customers believe that the problem occurred because they are unable to use chatbots, they will use an alternative channel to contact the firm or, perhaps, try to become more "versed" in using this technology. However, if they feel that it was the firm's fault, they will seek redress (e.g., an apology, or compensation) or even retribution.

So, who do customers blame when things go wrong in interactions with chatbots?


According to the research reported in the paper I co-authored with Daniela Castillo and Emanuel Said (mentioned in a previous post), these are the perceived sources of problems when interacting with customer service chatbots:

| Problem | Who's to blame? | Firm, because: | Customer, because: |
|---|---|---|---|
| Functionality | Firm, only | Failure to invest in features valued by customers [care] | |
| Affective | Firm, mostly | Failure to invest; unsuitable use of the chatbot [care] | Diminished presence of mind during emotional situations [emotional] |
| Integration | Firm, mostly | Poor organizational procedures [incompetence] | Failure to proactively save information [behaviour] |
| Cognition | Both | Lack of attention during development of the chatbot [incompetence] | Failure to use simple language and check spelling [cognitive] |
| Authenticity | Both | Deliberate misrepresentation of the chatbot via naming, script, and other features [deceit] | Failure to notice cues, e.g., speed of reply [cognitive] |

We found that, in situations of "mild" dissatisfaction, customers tried to bypass the chatbot and solve the problem via another channel, which results in resource inefficiency for the firm. This includes resorting to social media to express their frustration, which can affect how other customers perceive the brand. However, when customers experienced greater dissatisfaction, and especially when they suffered emotional losses (e.g., affective problems), they pursued harsher measures, such as terminating the service or moving to a competitor, which represents a loss of revenue for the firm.

So, when measuring the success of a chatbot, don't think only in terms of "cost savings" or "convenience". Think also about how it can affect the firm's image, and how customers reason about the causes of the problems that they experience.

The paper reporting this study was published in The Service Industries Journal. You can find a free pre-print version of the paper here.
