A framework to decide whether and how to use generative AI chatbots

Since the early days of OpenAI’s release of ChatGPT (then powered by GPT-3.5), many voices have warned that, while generative AI tools produce very convincing answers, they are also prone to making up information. This propensity is referred to as hallucination. Concerns over hallucination are almost as prevalent as enthusiasm for …

Generative AI and Academic Blogging: A Beginner’s Guide

January marks the anniversary of this blog, and it has become a tradition for me to use this occasion to share insights and tips to help other academics leverage the power of blogging for public engagement. This year, as the blog turns fourteen, I want to look at how Generative Artificial …

The opposite of trust? 

According to the Oxford Languages dictionary, the opposite of trust is distrust. The two concepts are like two sides of the same coin. However, according to the paper “What Does the Brain Tell Us About Trust and Distrust? Evidence from a Functional Neuroimaging Study”, authored by Angelika Dimoka, trust and distrust are two very different …

New paper: Online information search by people with Multiple Sclerosis: a systematic review

Last week, I had a health check for the Our Future Health research programme (if you are in the UK, please consider joining this research programme, which aims to “find ways to prevent, detect and treat diseases earlier”). Looking at my blood results, the nurse suggested that I should take some steps to improve my cholesterol level.

Quality and ethical concerns over the use of ChatGPT to analyse interview data in research

A few weeks ago, I was asked to review a paper that had used ChatGPT to code product reviews. The authors had entered the reviews into ChatGPT and instructed it to summarise the key reasons for complaints about the product. To assess the quality of ChatGPT's classification, the authors extracted a number of the complaints, …
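
To make the workflow concrete, here is a minimal sketch of the kind of procedure the paper describes: sending a batch of reviews to a chat model and asking it to summarise the main reasons for complaints. This is my own illustration, not the authors' code; the model name, prompt wording, and example reviews are all assumptions.

```python
# Illustrative sketch only: prompt a chat model to summarise complaint themes.
# Model choice and prompt are assumptions; the paper simply says "ChatGPT".
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

reviews = [
    "The battery died after two weeks.",
    "Customer service never replied to my emails.",
    "The screen scratches far too easily.",
]

prompt = (
    "Below are product reviews. Summarise the key reasons for complaints "
    "as a short bulleted list.\n\n" + "\n".join(f"- {r}" for r in reviews)
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

Note that nothing in this sketch addresses the quality and ethical concerns the post raises: the reviews still leave the researcher's control, and the model's classification still needs independent validation.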

March 2023 round-up

The month started with a presentation about the online health information project at the Multiple Sclerosis patients’ day, to gather feedback about the idea from the very people who live with the condition. Later in the month, we did another two presentations for neurologists, again to gather feedback. I found these sessions really helpful to …

A big dataset is not necessarily a good dataset. And a sophisticated algorithm does not guarantee a sound decision

These are the notes from a talk that I recently delivered about the importance of data quality, and how to assess it.

Slides: https://www.slideshare.net/slideshow/embed_code/key/1ect9VUVbgOPt7

In my talk, I started by noting the critical role of data as a source of insight and, subsequently, as an enabler of service automation. I then went on to note that data …
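
As a small illustration of what assessing data quality can look like in practice, the sketch below runs three basic checks (completeness, uniqueness, validity) over a tabular dataset. It is my own example, not taken from the talk; the file name, column name, and age range are hypothetical.

```python
# Illustrative data-quality checks (hypothetical dataset and thresholds).
import pandas as pd

df = pd.read_csv("customers.csv")  # hypothetical file

report = {
    # Completeness: share of missing values per column
    "missing_ratio": df.isna().mean(),
    # Uniqueness: number of fully duplicated rows
    "duplicate_rows": df.duplicated().sum(),
    # Validity: values outside a plausible range, e.g. age
    "invalid_age": ((df["age"] < 0) | (df["age"] > 120)).sum(),
}

print(report)
```

Checks like these are cheap to run and make the point of the title concrete: size says nothing about completeness, uniqueness, or validity.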

Assessing the risk of misuse of language models for disinformation campaigns

The report “Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations” discusses how large language models like the one underpinning ChatGPT might be used for disinformation campaigns. It was authored by Josh A. Goldstein, Girish Sastry, Micah Musser, Renee DiResta, Matthew Gentzel and Katerina Sedova, and is available in the arXiv repository.