False Positive Rate
Understanding False Positive Rates
A false positive in AI detection occurs when a tool labels human-written text as AI-generated. The false positive rate is the proportion of human-written texts a detector wrongly flags, measured over a large sample.
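The definition above reduces to a simple ratio: wrongly flagged texts divided by total human-written texts tested. A minimal sketch (the function name and sample data are illustrative, not from any detector's API):

```python
def false_positive_rate(flags: list[bool]) -> float:
    """FPR over a sample of human-written texts.

    `flags` holds the detector's verdict for each human-written text
    (True = flagged as AI-generated, i.e. a false positive).
    """
    if not flags:
        raise ValueError("need at least one sample")
    return sum(flags) / len(flags)

# Hypothetical sample: 10,000 human-written texts, 1 wrongly flagged
verdicts = [False] * 9_999 + [True]
print(f"{false_positive_rate(verdicts):.2%}")  # 0.01%, i.e. 1 in 10,000
```

Note that a reliable estimate of a very low rate (such as 0.01%) requires a correspondingly large sample of human-written texts.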
False positive rates across the six tools tested by DetectArena:
- Pangram: 0.01% (1 in 10,000)
- Winston AI: 0.5% (1 in 200)
- Originality.ai: 1.5% (1 in 67)
- GPTZero: 2.0% (1 in 50)
- Sapling: 5.0% (1 in 20)
- ZeroGPT: 8.0% (1 in 12.5)
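To make these percentages concrete, one can estimate how many human-written submissions each rate would wrongly flag in a batch. A sketch using the DetectArena figures above and a hypothetical batch of 1,000 essays:

```python
# FPRs from the DetectArena comparison, expressed as fractions
fpr = {
    "Pangram": 0.0001,
    "Winston AI": 0.005,
    "Originality.ai": 0.015,
    "GPTZero": 0.02,
    "Sapling": 0.05,
    "ZeroGPT": 0.08,
}

ESSAYS = 1_000  # hypothetical batch of human-written essays

for tool, rate in fpr.items():
    # Expected number of wrongful flags = rate * batch size
    print(f"{tool:>15}: ~{rate * ESSAYS:g} wrongly flagged per {ESSAYS:,} essays")
```

At these volumes the spread is stark: roughly 0.1 expected false accusation for Pangram versus 80 for ZeroGPT.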
Why It Matters
A false positive can lead to a student being wrongly accused of cheating, a freelance writer losing a client, or published content being unnecessarily flagged. The two error types carry asymmetric costs: a false negative (missing AI-generated text) is usually far less harmful than a false positive (wrongly accusing a human).
For this reason, false positive rate should be the primary metric when choosing an AI detector for high-stakes applications. See the full guide on false positives for causes and mitigation strategies.