The quality of credence products, like car repairs or medical treatments, is very difficult to assess before purchase because those products have many intangible attributes that are difficult to observe and understand. For example, I can’t really see the components that make a vaccine effective – I need to trust (hence the descriptor “credence”) that vaccines are effective and/or that a particular laboratory or pharmacy is doing what it is meant to be doing.
Because of the risk of purchasing a product whose quality is difficult to assess, many customers delay the purchase or consider only offers from a very restricted set of providers. One way that managers try to reduce this risk, and overcome consumers’ resistance to purchasing a credence good, is to provide information about the “backstage” of credence goods – for instance, how the product is made or the components used. However, managers also need to be mindful of information overload, that is, the point beyond which the volume of information provided exceeds consumers’ capacity to process it and make a decision. It is a tricky balance.

Florence Nizette, Wafa Hammedi, Allard C.R. van Riel and Nadia Steils researched this tricky balance in relation to explanations provided by AI. The researchers examined how “the interplay between the level of detail provided in explanations and the degree of control consumers have over explanations” (p. 51) affected consumers’ decisions, and why.
The research findings are reported in the paper: Nizette, F., Hammedi, W., van Riel, A. C., & Steils, N. (2025). Why should I trust you? Influence of explanation design on consumer behavior in AI-based services. Journal of Service Management, 36(1), 50–74.
Somewhat surprisingly, the researchers identified a preference for highly detailed AI explanations. This goes against traditional information overload theory, which predicts that non-experts will be confused or overwhelmed by too much information.
The authors propose that this more-is-more effect occurs because of signalling: the detailed explanation, in addition to providing information about the product, also offers “assurance” that the AI is acting in the consumer’s interest. This is interesting because it suggests that explanations in credence markets are not just about transferring “data points”; they are also about building a “relationship” that makes the consumer feel secure in the AI’s integrity and competence.
There is, however, an important caveat: understanding was enhanced when consumers had limited control over how the explanation was displayed. Specifically, understanding was higher when the explanation was presented to consumers automatically than when they had to request it “on demand”. The researchers suggest that the effort required to navigate the system and “pull” detailed explanations can trigger cognitive overload, as well as undermine the assurance benefit of the detailed explanation itself.
In summary, this paper suggests that the less-is-more adage does not apply when the stakes are high, as is the case with credence products. If the decision is complex and high-risk, it is important to resist the urge to over-simplify the AI’s logic. With AI and credence products, more is more, provided that the consumer doesn’t have to work for it.
