Earlier this week, the FT Adviser reported on a study by Openwork, which showed that “despite a surge in robo-solutions in the market consumers still prefer face-to-face advice when it comes to planning their finances”. According to the study, “71 per cent (of respondents) had concerns robo-advice would not be appropriate for their financial needs while 73 per cent said they would prefer to receive advice face to face”. However, younger consumers were much more positive about receiving financial advice from a robotic solution: 44% of under 25s have “no concerns that robo-advice might not be appropriate for their needs”.
A similar result was found by Arthur S. Jago. In a paper published in the Academy of Management Discoveries journal, Jago reports on a study in which he asked participants to judge the following ethically charged behaviours:
- Autonomously improving the safety features of a car
- Fixing a client’s electrical issue
- Correcting a payment error
- Signing up an underage patient for a useful medical trial
- Changing highways to avoid killing migrating crabs
In every scenario, the study participants were happier with the behaviour if they believed that it had been performed by a human than by AI.
The financial advice and Jago’s list of behaviours are all scenarios where, potentially, AI could perform better than humans. And, in theory, with the right level of relational and socio-emotional elements, customers would be willing to accept the deployment of AI in those scenarios. So, what else could be at play here?
Davenport and colleagues, drawing on literature from the field of the psychology of automation, argue that consumer acceptance of AI varies according to the type of task that is being performed. In the paper entitled “How artificial intelligence will change the future of marketing”, which was published in the Journal of the Academy of Marketing Science, they argue that consumers’ willingness to adopt and use AI is influenced by the task’s characteristics.
Davenport and colleagues argue that customers resist AI when the task is:
- Perceived as being subjective – This is because consumers perceive that intuition, affect and empathy are needed to perform the task well, and they deem that AI lacks those skills.
- Perceived as being unique – If the task is perceived as having unique, unrepeatable features, then customers are less willing to accept the use of AI in it. For instance, creating a bespoke event or dealing with the complex needs of a patient who suffers from various illnesses. Related to this, customers who score high on the ‘personal sense of uniqueness’ scale (yes, there is such a thing!) are more likely to resist interacting with AI (which helps explain why luxury services might not deploy AI even when it would be cost effective to do so).
- Very consequential for customers – A task that is very consequential for customers (e.g., remote interference with my car’s safety features) makes risks more salient to them. As a result, those customers are less willing to trust the AI in those circumstances.
- Related to autonomous goals – Tasks can be approached with a low-level construal mindset (focused on the ‘how’) or a high-level one (focused on the ‘why’). For instance, in an assembly line, an agent (human worker or machine) can attach part A to part B because it was tasked with doing that, or because it decided (autonomously) that attaching A to B was the best way of achieving a generic goal such as creating a safe / aerodynamic / fun / beautiful product. Consumers are more willing to accept AI in the first scenario (low-level construal mindset) than in the second one (high-level construal mindset).
- Salient to the customers’ identity. Customers may have different self-identities – for instance, being fit, being a good parent, making choices that are good for the environment, etc. Customers resist using AI in tasks that are seen as central to those identities, because they perceive it as ‘cheating’. 
Financial advice for things like funding one’s retirement or creating a trust fund for one’s children is likely to score relatively high on the characteristics above. I would like the advisor to show empathy for my goals, to appreciate my unique situation, to understand how very important the decision is for me, to know the best way of achieving my goal, and to recommend something that fits with my sense of self. And, for the time being, most of us are likely to feel that we can only get that from a human advisor, right?
Side note: that might help explain why we can easily imagine AI replacing aspects of other people’s jobs, but struggle to imagine it replacing any aspect of our own jobs.