Much has been written about the dangers of LLMs’ hallucinations. A hallucination occurs when the model confidently presents an incorrect answer; this happens because LLMs are not knowledge systems. For example, Co-Pilot once invented a fictitious match between Maccabi Tel Aviv and West Ham. But hallucinations are not the only epistemic risk linked to LLMs: they can also flatter you into being wrong.