Last week, I came across various headlines about a study conducted by the New York Times, which found that readers preferred short stories generated by AI to those written by humans. Previous studies had found the same in relation to poetry and adverts. Before you decide to delegate your writing to generative AI, though, you should consider that texts may … Continue reading Choosing when to use generative AI for writing tasks
Month: March 2026
LLMs can flatter you into being wrong
Much has been written about the dangers of LLMs’ hallucinations. Hallucinations occur when a model confidently presents an incorrect answer, as when Copilot invented a match between Maccabi Tel Aviv and West Ham that never took place. This happens because LLMs are not knowledge systems. But hallucinations are not the only epistemic risk linked to … Continue reading LLMs can flatter you into being wrong
The Reputational Risk of Disclosing AI Use
I am currently marking coursework where students were allowed to use generative AI, but had to detail how they used it and include screenshots. This reminded me of a paper that I read some time ago, entitled "Competence Penalty Is a Barrier to the Adoption of New Technology". This paper reports a study … Continue reading The Reputational Risk of Disclosing AI Use
February 2026 round-up
Another month of contrasting travels: starting with a trip to Portugal for a funeral (on my birthday) and ending with a trip to Milan for the Olympics (a birthday present). There was also time with friends, coffee with colleagues, theatre, puzzles and flowers. Research: I had discussions with different colleagues about some potential research grant … Continue reading February 2026 round-up