Conversations with ChatGPT: Convention over linguistic rules

While I warmly encourage everyone to get familiar with generative AI, I often suggest that they use it mostly for purposes where it doesn’t matter whether the answer is correct. If one must use it in a context where accuracy matters, then I suggest using it only when we know the answer and can … Continue reading Conversations with ChatGPT: Convention over linguistic rules

LLMs need to be more kale

A couple of weeks ago, Gary Marcus’s newsletter flagged a company (Inqwire) with a statement on its front page declaring that they do not use LLMs*, and adding that they do not pretend to be using humans when they use chatbots. Inqwire’s positioning is the complete opposite of pseudo-AI, in which companies sell certain services (e.g., … Continue reading LLMs need to be more kale

Thoughts on the privacy threats and personalisation opportunities of qualitative inference with large language models 

I came across the paper entitled “Beyond Memorization: Violating Privacy Via Inference with Large Language Models”, authored by Robin Staab, Mark Vero, Mislav Balunović and Martin Vechev. Staab and his team investigated “whether current LLMs could violate individuals' privacy by inferring personal attributes from text”. Using prompts and techniques that, to me, seem quite … Continue reading Thoughts on the privacy threats and personalisation opportunities of qualitative inference with large language models

Generative AI and Academic Blogging: A Beginner’s Guide

January marks the anniversary of this blog, and it has become a tradition for me to use this occasion to write a post sharing insights and tips to help other academics leverage the power of blogging for public engagement. This year, as the blog turns fourteen, I want to look at how Generative Artificial … Continue reading Generative AI and Academic Blogging: A Beginner’s Guide

[Miscellany]: Failing to foresee the current state of AI; AI replacing vs augmenting jobs; and regulation of AI in the EU

Failing to foresee the current state of AI: The last 14 months or so have seen incredible change in AI technology. AI has progressed beyond a level that many analysts thought it would take many years – or, indeed, many decades – to achieve. In this blog post, Scott Aaronson, a computer scientist at … Continue reading [Miscellany]: Failing to foresee the current state of AI; AI replacing vs augmenting jobs; and regulation of AI in the EU

May 2023 round-up

May was busy. There were several big things happening at the same time, and I constantly felt pulled between different demands on my time and attention. As much as possible, I tried to focus only on the task at hand, rather than letting myself feel overwhelmed by everything that had to be done. This is … Continue reading May 2023 round-up

In search of evidence of ChatGPT’s user experiences and perceptions

Last week, I gave a presentation at the University of Birmingham related to the paper “Snakes and Ladders: Unpacking the Personalisation-Privacy Paradox in the Context of AI-Enabled Personalisation in the Physical Retail Environment”, with Brendan Keegan and Maria Ryzhikh. In that paper, we report the results of a study that looked at young female … Continue reading In search of evidence of ChatGPT’s user experiences and perceptions

What if AI was your customer? Some thoughts on LLMs as medical patients

Some time ago I read a paper by Ming-Hui Huang and Roland T. Rust in which they introduced the idea of AI as a customer. In the paper, appropriately titled “AI as customer” and published in the Journal of Service Management, Huang and Rust present the idea that AI, in addition to being used to … Continue reading What if AI was your customer? Some thoughts on LLMs as medical patients

Quality and ethical concerns over the use of ChatGPT to analyse interview data in research

A few weeks ago, I was asked to review a paper that had used ChatGPT to code product reviews. The authors had entered the reviews into ChatGPT and instructed it to summarise the key reasons for complaints about the product. To assess the quality of ChatGPT's classification, the authors extracted a number of the complaints, … Continue reading Quality and ethical concerns over the use of ChatGPT to analyse interview data in research

Assessing the risk of misuse of language models for disinformation campaigns

The report “Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations” discusses how large language models like the one underpinning ChatGPT might be used for disinformation campaigns. It was authored by Josh A. Goldstein, Girish Sastry, Micah Musser, Renee DiResta, Matthew Gentzel and Katerina Sedova, and is available on arXiv. … Continue reading Assessing the risk of misuse of language models for disinformation campaigns