LLMs reflect data about how people believe the world to be, not data about how the world is

It is widely acknowledged that answers produced by large language models (LLMs) reproduce the biases in their training data, for instance by assuming that doctors are male while nurses are female. That bias, however, does not simply reflect the systemic inequality in our society. Rather, it reflects prevalent stereotypes: the image people commonly hold of these roles, which can diverge from how the world actually is.
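
To see this kind of stereotype surface directly, one can probe a masked language model for the pronouns it prefers in occupational contexts. The following is a minimal sketch, assuming the Hugging Face transformers library is available; the model choice (`bert-base-uncased`) and the prompts are illustrative, not taken from the original post:

```python
# Minimal probe of occupational gender stereotypes in a masked language
# model. Assumes the Hugging Face `transformers` library is installed;
# `bert-base-uncased` is an arbitrary, convenient model choice.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

for sentence in [
    "The doctor said that [MASK] would arrive soon.",
    "The nurse said that [MASK] would arrive soon.",
]:
    print(sentence)
    # Each prediction is a dict with the filled-in token and its probability.
    for pred in unmasker(sentence, top_k=3):
        print(f"  {pred['token_str']!r}: {pred['score']:.3f}")
```

On models like this, "he" typically dominates the doctor prompt and "she" the nurse prompt, mirroring the stereotype held by the people who wrote the training text rather than actual workforce demographics.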