Accountability for chatbot errors: Air Canada’s legal battle

When Air Canada’s chatbot gave a customer incorrect information and the customer complained, the airline argued that it was not responsible for the chatbot’s mistakes. It tried to avoid honouring the information the chatbot had provided, saying that the customer should have consulted the company’s website instead. A Canadian tribunal disagreed and ordered Air Canada to honour what the chatbot had told the customer. You can read about the case here.


Air Canada’s attempt to evade responsibility for the chatbot’s mistake suggests that the company failed to grasp something discussed in the paper “The dark side of AI-powered service interactions: exploring the process of co-destruction from the customer perspective”, which I co-authored with Daniela Castillo and Emanuel Said. As we write in that paper:

“When AI applications, such as chatbots, are introduced to the frontline, customers view such applications as a substitute for human (employees). As a result, customers hold similar (…) expectations regarding service levels”.

Specifically, this kind of mistake is an example of a cognition challenge, one of five sources of value destruction for customers who interact with company chatbots:

Category      | Description
Functionality | Chatbot is deemed to be of limited assistance
Affective     | Chatbot lacks empathy
Integration   | Loss of information during the interaction, or during handover to a human assistant
Cognition     | Chatbot cannot understand the query
Authenticity  | Unclear whether the service is being provided by a chatbot or a human

Customers attribute blame for cognition challenges to the firm, not to the technology itself, because chatbots are deemed to lack agency and, therefore, to be unable to act autonomously. Customers see this kind of failure as incompetence on the firm’s part (and, in some cases, blame their own inability to communicate effectively with the chatbot).

The qualitative study reported in that paper was extended with two rounds of online experiments to test the effect of the type of interaction (forced vs voluntary), as well as the type (process vs outcome) and severity (high vs low) of the failure, on customers’ attribution of blame. Hopefully, the paper will be out soon. In the meantime, if you want to know more, please reach out to the talented Daniela Castillo, who is leading this work on customer perceptions of chatbot failures.

With chatbots becoming ubiquitous, I expect we will see more cases of chatbot mistakes proving costly for the companies that deploy them. Have you ever experienced a similar situation, where a company’s chatbot provided incorrect information? If so, how did the company handle it?

5 thoughts on “Accountability for chatbot errors: Air Canada’s legal battle”

  1. Did you come across the recent story where a chatbot working for DPD told a user that the company was “the worst delivery service”, among other failings?

    https://www.foxbusiness.com/technology/dpd-ai-error-causes-chatbot-swear-calls-itself-worst-delivery-service-disgruntled-user-report

    It was not that the bot necessarily provided misleading information, but that it didn’t perform to expectations. It didn’t disrespect customers directly, but it swore and undermined corporate values. In this example, to use your words, the chatbot had agency and was able to act autonomously.

