LLMs can flatter you into being wrong

Much has been written about the dangers of LLM hallucinations, which occur when a model confidently presents an incorrect answer. This happens because LLMs are not knowledge systems: for example, Copilot once invented a fictitious match between Maccabi Tel Aviv and West Ham. But hallucinations are not the only epistemic risk linked to …

A framework to decide whether and how to use generative AI chatbots

Since the early days of OpenAI's release of ChatGPT 3.5, many voices have warned that, while generative AI tools produce very convincing answers, they are also prone to making up information. This tendency is referred to as hallucination. Concerns over hallucination are almost as prevalent as enthusiasm for …