It is widely acknowledged that answers produced by large language models (LLMs) reproduce the biases present in their training data: for instance, the assumption that doctors are male while nurses are female. However, that bias doesn’t simply reflect the systemic inequality in our society. Rather, it reflects prevalent stereotypes, that is, the type of image commonly … Continue reading LLMs reflect data about how people believe the world to be, not data about how the world is
Tag: gender bias
Conversations with Chat GPT: Convention over linguistic rules
While I warmly encourage everyone to get familiar with generative AI, I often suggest using it mainly for purposes where it doesn’t matter whether the answer is correct. If one must use it in a context where accuracy matters, then I suggest using it only when we know the answer and can … Continue reading Conversations with Chat GPT: Convention over linguistic rules
The automation of sexism and racism
Four years ago, while preparing for a presentation, I searched Google for a generic image of a “person” to add to my slides. Of the first 25 results, one (4%) had long hair. Three (12%) were of people with dark skin (1 woman and 2 men, all with short or no hair). And, overall, there … Continue reading The automation of sexism and racism