Glossary - AI & Machine Learning

Hallucination Rate

How often an AI model confidently generates false or made-up information as if it were fact.

Hallucination rate is the percentage of AI outputs that contain false or fabricated information presented as fact. The model isn't lying. It doesn't know it's wrong. That's what makes it a real problem.

For product managers building on top of AI, hallucination rate isn't a hypothetical concern. It's a metric you have to track.

How PMs measure it

You can't measure hallucination rate automatically unless you have ground truth to check against. The most common approach is human evaluation: sample AI outputs, verify them against a known source, and record the failure rate.

If you're building AI-generated summaries of customer feedback, pull a sample, read the originals, and check whether the summary invented any claims. Over time, you build a benchmark.
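That sampling workflow can be sketched in a few lines. The labels below are illustrative; in practice each flag comes from a reviewer checking an output against its source.

```python
def hallucination_rate(labeled_samples):
    """Fraction of sampled outputs a reviewer flagged as containing
    a fabricated claim. labeled_samples: list of (output_id, flagged)."""
    if not labeled_samples:
        raise ValueError("need at least one labeled sample")
    flagged = sum(1 for _, is_flagged in labeled_samples if is_flagged)
    return flagged / len(labeled_samples)

# Illustrative labels: 1 of 5 sampled summaries invented a claim.
labels = [
    ("out-1", False),
    ("out-2", True),
    ("out-3", False),
    ("out-4", False),
    ("out-5", False),
]
rate = hallucination_rate(labels)
print(f"hallucination rate: {rate:.1%}")  # hallucination rate: 20.0%
```

The benchmark is just this number tracked over repeated samples; the hard part is the labeling, not the arithmetic.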

Some teams use a second model call to evaluate the first, a pattern known as LLM-as-judge. It's faster than manual review, but you're trusting one model to catch another model's mistakes, which has its own failure modes.
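The judge pattern looks roughly like this. Here `call_llm` is a hypothetical stand-in for whatever model client you use, and the prompt wording is illustrative, not a recommended template.

```python
JUDGE_PROMPT = """You are verifying a summary against its source.
Source:
{source}

Summary:
{summary}

Does the summary contain any claim not supported by the source?
Answer only YES or NO."""

def judge_summary(source, summary, call_llm):
    """LLM-as-judge: ask a second model whether the summary fabricates
    claims. call_llm is a hypothetical stand-in for your model client."""
    verdict = call_llm(JUDGE_PROMPT.format(source=source, summary=summary))
    return verdict.strip().upper().startswith("YES")

# Stubbed model call for illustration; a real client replaces the lambda.
flagged = judge_summary(
    source="Users asked for dark mode and CSV export.",
    summary="Users demanded a mobile app.",
    call_llm=lambda prompt: "YES",
)
```

Spot-check the judge itself against human labels before trusting its numbers; a judge that misses the same class of errors as the generator tells you nothing.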

Why PMs care

Hallucination isn't just a technical metric. It's a trust metric. If users catch your AI feature saying something false once, they second-guess it forever.

The acceptable rate depends entirely on the stakes. Summarizing feature requests? Users will tolerate an occasional slip. Generating legal text? Zero tolerance.

Set your acceptable rate before you ship. Then instrument for it. "We'll watch it after launch" is not a plan.
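"Instrument for it" can be as simple as checking sampled counts against the pre-set ceiling, with a confidence interval so a small sample can't trip a false alarm. A minimal sketch, assuming you track flagged counts per sample batch; the 2% ceiling and the Wilson-interval check are illustrative choices, not a prescribed method.

```python
import math

def rate_exceeds_ceiling(flagged, sampled, ceiling, z=1.96):
    """True if the observed hallucination rate sits above the ceiling even
    at the lower edge of a ~95% Wilson confidence interval, i.e. the
    sample is large enough to say the rate is genuinely too high."""
    p = flagged / sampled
    denom = 1 + z * z / sampled
    center = (p + z * z / (2 * sampled)) / denom
    half = z * math.sqrt(p * (1 - p) / sampled
                         + z * z / (4 * sampled * sampled)) / denom
    return center - half > ceiling

# 12 flagged out of 200 sampled, against an illustrative 2% ceiling:
print(rate_exceeds_ceiling(12, 200, 0.02))  # confidently above the ceiling
# 3 flagged out of 100: observed 3% is above 2%, but the sample is too
# small to be confident, so no alarm yet.
print(rate_exceeds_ceiling(3, 100, 0.02))
```

The second case is the point of the interval: a raw threshold on the observed rate would page someone over three bad outputs.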
