It is widely acknowledged that answers produced by large language models (LLMs) reproduce the biases that informed how they were trained. For instance, they may assume that doctors are male while nurses are female. Yet that bias doesn’t simply reflect the systemic inequality in our society. Rather, it reflects prevalent stereotypes, that is, the type of image commonly … Continue reading LLMs reflect data about how people believe the world to be, not data about how the world is
Finding the human in an AI world
You might have seen this graph already. It depicts an explosion in the use of the term “delving into” in academic papers and is used as evidence that authors are writing academic papers with LLMs. (Image source) Similar messages are circulating online, pointing to other words overused by AI, such as intricate, complexity, intersection, nuanced or stakeholders. … Continue reading Finding the human in an AI world
A framework to decide whether and how to use generative AI chatbots
Since the early days of OpenAI’s release of ChatGPT 3.5, many voices have warned that, while generative AI tools produce very convincing answers, they are also prone to making up information. This propensity is referred to as hallucination. Concerns over generative AI’s propensity for hallucination are almost as prevalent as enthusiasm for … Continue reading A framework to decide whether and how to use generative AI chatbots
Thoughts on the privacy threats and personalisation opportunities of qualitative inference with large language models
I have come across the paper entitled “Beyond Memorization: Violating Privacy Via Inference with Large Language Models”, authored by Robin Staab, Mark Vero, Mislav Balunović and Martin Vechev. Staab and his team investigated “whether current LLMs could violate individuals' privacy by inferring personal attributes from text”. Using prompts and techniques that, to me, seem quite … Continue reading Thoughts on the privacy threats and personalisation opportunities of qualitative inference with large language models