Some time ago I read a paper by Ming-Hui Huang and Roland T. Rust, appropriately titled “AI as customer” and published in the Journal of Service Management, in which they present the idea that AI, in addition to being used to provide services, can also be used on the consumer side.
The authors identify three uses of AI in consumption:
- AI augmenting the human customer, for instance, by providing information needed to make a decision;
- AI replacing the human customer, for instance, by completing a specific task that the customer assigned to it;
- AI being the customer itself, by completing all the tasks related to a particular function, autonomously.
The paper goes on to argue that AI customers perform better than their human counterparts in narrowly defined tasks, where access to data and superior analytical capabilities are needed: “With accessible and available big data from the (Internet of Things), great computing power and the right algorithms and models, AI can be very powerful in generating analytics. AI today most often relies on neural network-based machine learning to “think.” It is “weak (or narrow) AI” that is designed to perform a narrowly defined task very well, such as providing movie recommendation for human customers to enjoy” (page 213).
Conversely, “Current AI is not good at tasks that are contextual, require intuition and involve biological feelings.” (page 213)
When applied in the right context, AI could enhance the consumption benefits, or reduce the effort required to generate those benefits, for the human customers that it supports or replaces. The table below summarises the examples and outcomes for each type of AI customer role discussed in the Huang and Rust paper [though, I think that some of the examples discussed in the paper – and listed below – are more akin to AI acting as a service provider rather than a customer]:
I was wondering what this typology implied in terms of the impact for service providers interacting with AI customers. And, because I am doing some work about online health information, I applied it to that context.
The AI application I chose for this scenario is an LLM like, say, ChatGPT (health is one of the hottest areas for the application of ChatGPT). To simplify the analysis, I am assuming that patients can order medication online, safely.
| Role | AI augments human customer | AI replaces human customer | AI is the customer |
| --- | --- | --- | --- |
| Task | The LLM produces lay summaries of medical research | The LLM compiles a list of questions to ask the consultant – e.g., a request for a specific treatment mentioned in recent medical papers | The LLM is connected to an API which orders medication mentioned in recent medical papers |
| Impact, if LLM is suitable for task | Reduces time spent educating the patient about the latest treatments; the consultant can focus on discussing treatment options instead. Outcome: saves provider’s time; enhances advice provided to the patient. | The consultant can focus on discussing suitable dosage, side effects of requested treatments, etc. Outcome: saves provider’s and patient’s time; patient may feel empowered. | Patient starts treatment on their own. Outcome: saves provider’s and patient’s time; speeds patient’s access to treatment. |
| Impact, if LLM is not suitable for task | Consultant needs to spend appointment time correcting the summary produced by the LLM, and may run out of time to discuss treatment options in that appointment and/or delay other patients’ appointments. Outcome: wastes provider’s time and, possibly, other patients’; may delay the start of treatment for the patient. | Consultant needs to spend appointment time explaining why that is not a suitable option, and (re)building trust with the patient. Outcome: wastes provider’s time; may undermine the relationship between consultant and patient; may delay the start of treatment for the patient. | Consultant is unaware that the patient started treatment. Outcome: consultant is unable to monitor side effects and advise accordingly (e.g., regarding possible interference with other treatments). |
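To make the “AI is the customer” column concrete, here is a minimal, entirely hypothetical sketch of that flow: an agent scans a paper abstract for a treatment name and places an order through a pharmacy API. The treatment lexicon, the `extract_treatment` step (a stand-in for an LLM call), and the `place_order` function are all illustrative assumptions, not a real system; a real deployment would need prescriptions, safety checks, and clinician sign-off precisely because of the monitoring gap the table describes.

```python
# Hypothetical "AI is the customer" flow. All names are illustrative.

KNOWN_TREATMENTS = {"semaglutide", "metformin"}  # toy drug lexicon


def extract_treatment(abstract: str):
    """Stand-in for an LLM extraction step: spot a known treatment name."""
    for word in abstract.lower().split():
        term = word.strip(".,;")
        if term in KNOWN_TREATMENTS:
            return term
    return None


def place_order(treatment: str) -> dict:
    """Mock pharmacy API call. A real system would require a prescription
    and clinician sign-off before anything is ordered."""
    return {"status": "ordered", "item": treatment}


def ai_customer(abstract: str):
    """Read an abstract and, if a treatment is mentioned, order it."""
    treatment = extract_treatment(abstract)
    if treatment is None:
        return None  # nothing actionable found
    return place_order(treatment)


order = ai_customer("A trial of semaglutide showed reduced cardiovascular risk.")
print(order)  # → {'status': 'ordered', 'item': 'semaglutide'}
```

Note how the consultant appears nowhere in this loop: that absence is exactly the failure mode in the last column of the table.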
This is just a rough, back-of-the-envelope analysis. I would need to run it by people providing these services, to see if it makes sense to them, and to look at other contexts. But a few things pop up:
- The benefits are fairly contained (accruing to the customer and the service provider), whereas the costs extend to others;
- The benefits are mostly tangible (time and costs savings), whereas the costs also include intangible factors such as damaged relationship;
- The costs escalate quickly as human intervention decreases.
Let me know if you find this approach helpful, and how you would build on this.