LLM “hallucinations” are mathematically impossible to avoid

From Gyana Swain for Computerworld, on how “hallucinations,” a.k.a. computer errors, are inevitable with LLMs:

“Unlike human intelligence, it lacks the humility to acknowledge uncertainty,” said Neil Shah, VP for research and partner at Counterpoint Technologies. “When unsure, it doesn’t defer to deeper research or human oversight; instead, it often presents estimates as facts.”

The OpenAI research identified three mathematical factors that made hallucinations inevitable: epistemic uncertainty when information appeared rarely in training data, model limitations where tasks exceeded current architectures’ representational capacity, and computational intractability where even superintelligent systems could not solve cryptographically hard problems.

LLMs operate on data with uncertainty attached. For general-purpose use, in the human world, that will always be the case. That’s a problem when chatbots present everything as a sure thing.
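To make the calibration point concrete, here is a minimal, hypothetical sketch (not from the article or the OpenAI research) of what “deferring when unsure” could look like: if the model’s own token probabilities for an answer fall below a threshold, the system abstains instead of stating an estimate as fact. The function, threshold, and log-probability values are all invented for illustration.

```python
import math

def should_defer(token_logprobs, threshold=0.8):
    """Defer to further research or human oversight when the answer's
    geometric-mean token probability falls below the threshold."""
    avg_prob = math.exp(sum(token_logprobs) / len(token_logprobs))
    return avg_prob < threshold

# Made-up log-probabilities for the tokens of a generated answer
answer_logprobs = [-0.05, -0.9, -1.2, -0.1]

if should_defer(answer_logprobs):
    print("Low confidence: defer instead of presenting an estimate as fact.")
else:
    print("High confidence: return the answer.")
```

Real systems would need calibrated probabilities, which raw token likelihoods often are not, but the sketch shows the basic shape of acknowledging uncertainty rather than papering over it.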
