An AI “hallucination” occurs when a language model produces a confident, plausible-sounding statement that is false or unsupported by the evidence it was given. Hallucinations happen because models generate likely text (not verified facts) and can be incentivized to guess […]
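To make “likely text, not verified facts” concrete, here is a minimal Python sketch. The next-token probabilities are made-up numbers standing in for corpus frequency; they do not come from any real model:

```python
# Toy illustration (hypothetical numbers, not a real model):
# the probabilities mimic how often each word follows the prompt
# in training text, which need not track what is actually true.
next_token_probs = {
    "Sydney": 0.55,    # common in text, but factually wrong
    "Canberra": 0.30,  # the correct answer
    "Melbourne": 0.15,
}

# Greedy decoding picks the single most probable token.
# Nothing in this step checks that "likely" means "true".
prediction = max(next_token_probs, key=next_token_probs.get)
print(f"The capital of Australia is {prediction}.")  # -> Sydney (false)
```

Because decoding optimizes for plausibility, the fluent-but-false continuation wins by default; something outside the sampling step (retrieval, verification, or a calibrated option to abstain) has to intervene for the model to do better than a confident guess.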