Last week, I gave a presentation at the University of Birmingham related to the paper “Snakes and Ladders: Unpacking the Personalisation-Privacy Paradox in the Context of AI-Enabled Personalisation in the Physical Retail Environment”, co-authored with Brendan Keegan and Maria Ryzhikh. In that paper, we report on the results of a study that looked at young female customers’ perceptions of AI in fashion retail. Key findings included that:
- Customers generally welcomed AI-personalised offers, but only from firms they had already bought from.
- Customers did not trust AI’s ability to make relevant fashion recommendations, focusing instead on the discounts they might accrue.
- Customers showed little tolerance for mistakes.
During the Q&A, someone (I can’t remember their name) asked for my view on how exposure to recent technological developments, such as ChatGPT, might affect the results of the study.
It was an interesting question that I had not yet considered.
The study was conducted long before LLMs like ChatGPT became a household name. So, the interviewees compared their in-store personalisation experience with online personalisation or staff recommendations, not ChatGPT.
My intuition was that, because ChatGPT performs so well in terms of speed and style, users would develop high expectations of the tone and fluency of communications, and less tolerance for mistakes. Someone in the audience suggested the opposite: that seeing ChatGPT make mistakes would lower people’s expectations of what such a tool might achieve.
The truth is that I do not know. I do not have solid evidence (beyond the anecdotal) of how people perceive ChatGPT. But the question left me curious.
So, I ended up going down a rabbit hole trying to find studies that assessed users’ experiences with ChatGPT. I didn’t find many studies that reflected on actual use (as opposed to potential use), and the ones that I did come across were based on student populations.
For instance, the paper “Chatting with ChatGPT: Decoding the Mind of Chatbot Users and Unveiling the Intricate Connections between User Perception, Trust and Stereotype Perception on Self-Esteem and Psychological Well-being” used a sample from three universities. Participants in the study, conducted by Mohammed Salah, Hussam Alhalbusi, Maria Mohd Ismail and Fadi Abdelfattah, reported generally positive perceptions of, and high levels of trust in, ChatGPT. They also “did not perceive ChatGPT as perpetuating harmful stereotypes”, which could be a bit naïve.
Another paper looked at the perceptions of students in a Computer Engineering degree at a university in the UAE. The paper, entitled “Exploring Students’ Perceptions of ChatGPT: Thematic Analysis and Follow-Up Survey”, was published in IEEE Access. The methodology adopted by the author, Abdulhadi Shoufan, is outlined in Figure 1 below:

When asked “What do you think of ChatGPT? Think deeply and write down whatever comes into your mind!” (after completing a learning activity on ChatGPT), the students mentioned the following positive and negative experiences:

Then, after completing a few more activities, the students participated in a survey where they stated their level of agreement with a series of positive and negative statements. As you can see below, students had a very positive experience (items starting with PT):

They seemed delighted with ChatGPT’s capabilities, and found it helpful and effective.

Though, as per item NT4, “most of them believe that it requires good background knowledge to work with” (p. 38805).

So, the evidence emerging about these early experiences seems to be fairly positive and forgiving. But, then again, that could be because of the low expectations in place prior to those interactions.
Have you come across other studies looking systematically at users’ early experiences and perceptions of this tool?