With growing interest in AI-based service agents, an increasing number of studies compare how users perceive interactions with AI versus human agents. Yaqi Chen, Haizhong Wang, Sally Rao Hill and Binglian Li examined how users assess adverts created by a generative AI agent versus a human one. The full reference is: Chen, Y., Wang, H., Rao Hill, S., & Li, B. (2024). Consumer Attitudes Toward AI-Generated Ads: Appeal Types, Self-Efficacy and AI's Social Role. Journal of Business Research, 185, 114867.
In one round of studies, the researchers created adverts for chocolate. Some adverts emphasised the product's benefits for consumers themselves, such as how much they would enjoy a specific chocolate (i.e., agentic appeals). Others emphasised the product's relational benefits, such as how consumers could share the chocolate with friends (i.e., communal appeals).
Chen’s team found that for chocolate adverts, agentic appeals were rated more positively when created by AI, whereas communal appeals were preferred when created by humans.

However, this preference did not hold when the team tested adverts for mobile phones. In that case, consumers always preferred the AI adverts (light grey colour in the graph below). Unfortunately, the paper does not offer an explanation for the different effects in the mobile phone vs chocolate scenarios. Perhaps it has to do with the average consumer finding mobile phone plans so difficult to understand? I certainly struggle with them… much more than with choosing chocolates. An AI agent would definitely be better than me at comparing all the options across different mobile phones!
Another interesting effect found by this team is that, as in the study by Jin and Zhang that I discussed previously, attitudes towards the AI-generated ads improved when the AI's role was described as supportive (e.g., "ChatGPT is a friendly partner of humans, who works with humans and co-create a high-quality service experience"), represented by the blue column.
That said, the effect was observed only for the agentic appeals. For the communal ones, the AI + humans adverts performed less well than either of the other options. Again, the paper offers no explanation for why this is the case, which is frustrating (and something a bit of qualitative research would have solved).

While the lack of explanation for some of these effects is a little unsatisfying, the paper highlights a key takeaway: consumer preferences for AI aren't fixed. Our perceptions change depending on the product, the message, and how the AI is positioned. It's a useful reminder that we still have much to learn about perceptions of AI agents (and of the need for qualitative research).
Have you ever seen or interacted with an advert or service that clearly came from an AI agent? Did it feel different from a human one? I’d love to hear how it shaped your impression—or your decision to buy.


