
Why AI Makes Things Up

Hallucination — when AI invents facts confidently, and how to catch it

1 · Real Hallucination Cases

When AI answers confidently but is flat wrong

AI doesn't "know" what's true: it predicts plausible words from patterns in its training data. Here are four real patterns where AI invents facts with confidence.

⚠️ Warning sign: high confidence paired with credible-looking references. The more structured and polished the answer, the more carefully you need to verify it.
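
One concrete check: hallucinated references often cite a DOI that doesn't resolve. The sketch below (a minimal illustration, assuming the third-party `requests` package) looks each DOI up in the public Crossref API. The example DOIs are placeholders, and a missing record only means "verify by hand," since not every genuine work is registered with Crossref.

```python
# Minimal sketch: check whether a DOI cited by an AI actually exists,
# using the public Crossref REST API (https://api.crossref.org).
import requests

def doi_exists(doi: str, timeout: float = 10.0) -> bool:
    """Return True if Crossref has a record for this DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=timeout)
    return resp.status_code == 200

# Placeholder DOIs pulled from a hypothetical AI-generated reference list.
for doi in ["10.1038/nature14539", "10.9999/made.up.12345"]:
    print(doi, "->", "found" if doi_exists(doi) else "NOT FOUND, verify by hand")
```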

2 · Why AI Makes Things Up

3 root causes

📝 Analogy: AI is like a student guessing on an exam. They didn't read this chapter, but they have to answer, so they guess from the probabilities in everything they have read. A close guess looks like a correct answer; a far-off guess is a confabulated one.

💡 AI isn't "lying on purpose": it's doing exactly what it was trained to do, predict the next word. Sometimes "sounds like a good answer" and "is a true answer" don't match.
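
A toy example makes the mechanism concrete. The probabilities below are invented for illustration (no real model is queried); they show how sampling by plausibility alone can confidently return the wrong answer whenever the wrong continuation was more common in the training text.

```python
# Toy sketch of next-word prediction; the numbers are made up for illustration.
# The model ranks continuations by plausibility, not truth: if a wrong
# completion dominates the training data, it dominates the output too.
import random

# Hypothetical learned probabilities for "The capital of Australia is ..."
next_word_probs = {
    "Sydney": 0.55,    # plausible and frequent in text, but wrong
    "Canberra": 0.35,  # correct, yet seen less often in this context
    "Melbourne": 0.10,
}

def sample_next_word(probs: dict[str, float]) -> str:
    """Sample one continuation in proportion to its assigned probability."""
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

print(sample_next_word(next_word_probs))  # often "Sydney": fluent, confident, wrong
```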

3 · How to Catch Hallucinations

5 practical techniques

🎯 Simple rule: for high-stakes work (medical, legal, finance, job applications), AI can draft, but never submit without human verification.
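
That rule can be wired into a workflow. The sketch below is deliberately naive (the domain keywords and substring matching are assumptions for illustration, not a real classifier); it only shows where a "hold for human review" gate would sit before an AI draft goes out.

```python
# Toy sketch of a human-verification gate for AI drafts. The keyword lists
# and substring matching are illustrative stand-ins for real topic detection.
HIGH_STAKES_KEYWORDS = {
    "medical": ["diagnosis", "dosage", "treatment"],
    "legal": ["contract", "liability", "lawsuit"],
    "finance": ["investment", "tax", "loan"],
}

def needs_human_review(draft: str) -> list[str]:
    """Return the high-stakes domains this draft appears to touch."""
    text = draft.lower()
    return [domain for domain, words in HIGH_STAKES_KEYWORDS.items()
            if any(word in text for word in words)]

draft = "Based on your symptoms, the recommended dosage is ..."
flagged = needs_human_review(draft)
if flagged:
    print(f"Hold for human verification (domains: {', '.join(flagged)})")
else:
    print("Low-stakes draft; still spot-check key facts.")
```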
