Assessing the risk of misuse of language models for disinformation campaigns

The report “Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations” discusses how large language models like the one underpinning ChatGPT might be used for disinformation campaigns. It was authored by Josh A. Goldstein, Girish Sastry, Micah Musser, Renee DiResta, Matthew Gentzel and Katerina Sedova, and is available in the arXiv repository. …

January 2023 round-up

I was reeeeeeally tempted not to write this post. I had high hopes that I would be wrapping up a couple of projects, but that didn’t really happen… mostly because I failed to plan for some tech fails and logistical mishaps. You would think that after all these years I would be better at anticipating …

ChatGPT and university education – the opportunity, the challenge and the breakthrough

Image created using Dall-E

Like it or not, ChatGPT and other forms of generative conversational AI are here to stay. Last weekend, John Naughton, writing in the Guardian, compared ChatGPT to Excel*, noting that “[Excel] went from being an intriguing but useful augmentation of human capabilities to being a mundane accessory”. It would never occur to current …

Dear ChatGPT, your answer is convincing but it is a complete fabrication

I have been spending some time exploring ChatGPT, the new AI-powered conversational chatbot, which is attracting a lot of attention for the range and the quality of its output. ChatGPT, by OpenAI, was launched at the end of November. It can do things as diverse as writing letters / e-mails, short answers, long articles …