May 2023 round-up

May was busy. There were several big things happening at the same time, and I constantly felt pulled between competing demands on my time and attention. As much as possible, I tried to focus only on the task at hand, rather than letting myself feel overwhelmed by everything that had to be done. This is …

In search of evidence of ChatGPT’s user experiences and perceptions

Last week, I gave a presentation at the University of Birmingham related to the paper “Snakes and Ladders: Unpacking the Personalisation-Privacy Paradox in the Context of AI-Enabled Personalisation in the Physical Retail Environment”, co-authored with Brendan Keegan and Maria Ryzhikh. In that paper, we report the results of a study which looked at young female …

What if AI was your customer? Some thoughts on LLMs as medical patients

Some time ago I read a paper by Ming-Hui Huang and Roland T. Rust where they introduced the idea of AI as a customer. In the paper, appropriately titled “AI as customer” and published in the Journal of Service Management, Huang and Rust present the idea that AI, in addition to being used to …

Quality and ethical concerns over the use of ChatGPT to analyse interview data in research

A few weeks ago, I was asked to review a paper that had used ChatGPT to code product reviews. The authors had entered the reviews into ChatGPT and instructed it to summarise the key reasons for complaints about the product. To assess the quality of ChatGPT's classification, the authors extracted a number of the complaints, …

Assessing the risk of misuse of language models for disinformation campaigns

The report “Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations” discusses how large language models like the one underpinning ChatGPT might be used for disinformation campaigns. It was authored by Josh A. Goldstein, Girish Sastry, Micah Musser, Renee DiResta, Matthew Gentzel and Katerina Sedova, and is available in the arXiv repository. …

January 2023 round-up

I was reeeeeeally tempted not to write this post. I had high hopes that I would be wrapping up a couple of projects, but that didn’t really happen… mostly because I failed to plan for some tech fails and logistical mishaps. You would think that after all these years I would be better at anticipating …

ChatGPT and university education – the opportunity, the challenge and the breakthrough

Image created using Dall-E

Like it or not, ChatGPT and other forms of generative conversational AI are here to stay. Last weekend, John Naughton, writing in the Guardian, compared ChatGPT to Excel*, noting that “[Excel] went from being an intriguing but useful augmentation of human capabilities to being a mundane accessory”. It would never occur to current …

Dear ChatGPT, your answer is convincing but it is a complete fabrication

I have been spending some time exploring ChatGPT, the new AI-powered conversational chatbot, which is attracting a lot of attention for the range and quality of its output. ChatGPT, by OpenAI, was launched at the end of November. It can do things as diverse as writing letters/e-mails, short answers, long articles …