False Positive Rate

The false positive rate is the percentage of human-written text that an AI detector incorrectly classifies as AI-generated. Among DetectArena's 6 tested tools, false positive rates range from 0.01% (Pangram) to 8.0% (ZeroGPT). For high-stakes applications like academic integrity, the false positive rate is the single most important metric because a false accusation can cause serious harm.

Understanding False Positive Rates

A false positive in AI detection occurs when the tool says "this text is AI-generated" but the text was actually written by a human. The false positive rate is the probability of this error across a large sample of human-written texts. Formally, FPR = FP / (FP + TN): the number of human-written texts wrongly flagged, divided by the total number of human-written texts evaluated.
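As a minimal sketch of that definition (the labels and predictions below are invented for illustration, not DetectArena data), the false positive rate can be computed from a labeled evaluation set as the fraction of human-written samples the detector flags:

```python
def false_positive_rate(labels, predictions):
    """FPR = FP / (FP + TN): the fraction of human-written texts
    (label 0) that the detector classifies as AI-generated (prediction 1)."""
    human = [p for y, p in zip(labels, predictions) if y == 0]
    if not human:
        raise ValueError("no human-written samples in the evaluation set")
    false_positives = sum(1 for p in human if p == 1)
    return false_positives / len(human)

# Hypothetical evaluation set: 0 = human-written, 1 = AI-generated
labels      = [0, 0, 0, 0, 1, 1, 1, 1]
predictions = [0, 1, 0, 0, 1, 1, 0, 1]  # one of four human texts wrongly flagged

print(false_positive_rate(labels, predictions))  # 0.25
```

Note that the denominator counts only human-written texts: a detector's FPR says nothing about how often it misses AI-generated text, which is the separate false negative rate.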

False positive rates among DetectArena's 6 tested tools span nearly three orders of magnitude, from 0.01% (Pangram) at the low end to 8.0% (ZeroGPT) at the high end.

Why It Matters

A false positive can lead to a student being wrongly accused of cheating, a freelance writer losing a client, or published content being unnecessarily flagged. The consequences of false positives are asymmetric: a false negative (missing AI text) is usually less harmful than a false positive (wrongly accusing a human).
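To make that asymmetry concrete, here is a hedged sketch with purely illustrative numbers (the error rates, document mix, and 10:1 cost ratio are assumptions, not DetectArena measurements) comparing the expected per-document cost of two hypothetical detectors when a false positive is costed far more heavily than a false negative:

```python
def expected_cost(fpr, fnr, human_fraction, cost_fp, cost_fn):
    """Expected per-document cost: false positives can only occur on
    human-written documents, false negatives only on AI-generated ones."""
    return human_fraction * fpr * cost_fp + (1 - human_fraction) * fnr * cost_fn

# Illustrative assumption: a wrongful accusation (FP) costs 10x a miss (FN),
# and 90% of submitted documents are human-written.
# Detector A: very low FPR but misses more AI text; Detector B: the reverse.
a = expected_cost(fpr=0.001, fnr=0.20, human_fraction=0.9, cost_fp=10, cost_fn=1)
b = expected_cost(fpr=0.080, fnr=0.05, human_fraction=0.9, cost_fp=10, cost_fn=1)
print(f"Detector A: {a:.3f}, Detector B: {b:.3f}")  # A: 0.029, B: 0.725
```

Under these assumptions the low-FPR detector wins decisively even though it misses four times as much AI text, which is the intuition behind treating false positive rate as the primary metric in high-stakes settings.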

For this reason, false positive rate should be the primary metric when choosing an AI detector for high-stakes applications. See the full guide on false positives for causes and mitigation strategies.