Why Antifraud Often Hurts Publishers More Than The Fraud Itself
Fraud is a real problem. Publishers know this. But here's what they're learning: the "solution" often costs more than the problem.
Antifraud systems operated by ad networks routinely reject, discount, or silently drop legitimate traffic—and publishers have no recourse.
The Silent Deduction Problem
Publisher sends 1,000,000 impressions. Dashboard shows 1,000,000 impressions. Payment arrives for 700,000 impressions.
Where did 300,000 impressions go? "Filtered for quality." What quality issues? "Proprietary methodology." Can you appeal? "No."
This isn't hypothetical. Publishers regularly report 20-40% discrepancies between sent traffic and credited traffic. The gap is attributed to fraud filtering, but:
- No impression-level data showing what was rejected
- No explanation of which signals triggered rejection
- No ability to dispute or verify
- No consistency—same traffic passes one day, fails the next
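None of this requires the network's cooperation to measure, at least in aggregate. Here is a minimal sketch of the reconciliation a publisher can run against their own logs; the numbers mirror the example above, the daily figures are invented for illustration, and nothing here is any network's real API:

```python
# Reconciliation a publisher can run from their own logs: compare impressions
# served against impressions the network credited. All numbers are illustrative.

def discrepancy(sent: int, credited: int) -> float:
    """Fraction of sent impressions that were never credited."""
    return (sent - credited) / sent if sent else 0.0

# Aggregate view: 1,000,000 sent, 700,000 paid -> 30% filtered.
print(f"{discrepancy(1_000_000, 700_000):.0%}")   # 30%

# Daily view: large day-to-day swings on similar traffic are themselves a signal
# that the filtering is inconsistent, not that the traffic suddenly changed.
daily = {"2024-05-01": (33_000, 29_400),
         "2024-05-02": (34_000, 22_100),
         "2024-05-03": (33_500, 30_200)}
for day, (sent, credited) in daily.items():
    print(day, f"{discrepancy(sent, credited):.0%}")
```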
Overaggressive Filtering
Networks have asymmetric incentives around fraud:
If they let fraud through: Buyers complain, demand refunds, and leave the platform.
If they filter too aggressively: Publishers... do what exactly? They don't know what was filtered. They can't prove it was legitimate. They have no leverage.
Given these incentives, networks err toward aggressive filtering. When in doubt, reject. The publisher absorbs the cost.
The False Positive Tax
Every antifraud system has false positives—legitimate traffic incorrectly flagged as fraud. Industry estimates suggest false positive rates of 5-15% for typical fraud detection.
For a publisher doing $10,000/month, a 10% false positive rate means $1,000/month lost to incorrect filtering. Over a year, that's $12,000—likely more than actual fraud would have cost.
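The arithmetic, written out. The revenue figure and the 10% false positive rate come from the example above; the 5% fraud rate used for comparison is purely an assumption for illustration:

```python
# Worked version of the false-positive tax. The 5% fraud rate is an
# illustrative assumption, not a measured industry figure.

monthly_revenue = 10_000        # USD, before any filtering
false_positive_rate = 0.10      # legitimate traffic wrongly rejected
assumed_fraud_rate = 0.05       # share of traffic that really is fraudulent (assumption)

monthly_fp_loss = monthly_revenue * false_positive_rate          # 1,000
annual_fp_loss = 12 * monthly_fp_loss                            # 12,000
annual_fraud_value = 12 * monthly_revenue * assumed_fraud_rate   # 6,000

print(f"Lost to false positives per year: ${annual_fp_loss:,.0f}")
print(f"Value of actual fraud per year:   ${annual_fraud_value:,.0f}")
```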
And unlike fraud (which at least sometimes gets caught and refunded), false positives are permanent losses. No one reviews them. No one refunds them.
Quality Score Manipulation
Many networks apply "quality scores" that modify publisher payouts:
- Score of 1.0 = full payout
- Score of 0.8 = 80% payout
- Score of 0.6 = 60% payout
How is the score calculated? Trade secret. What factors affect it? Not disclosed. How can you improve it? No guidance.
Publishers operate blindfolded, their revenue modified by an opaque algorithm they can't see, understand, or influence.
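Here is a sketch of what that multiplier does to a payout, using the tiers above, plus the only inference a publisher can make when the score is hidden: backing out an effective multiplier from their own numbers. The CPM and the function names are hypothetical:

```python
# How an opaque quality multiplier scales payout, and how a publisher can
# infer the multiplier that was effectively applied. Names are hypothetical.

def expected_payout(impressions: int, cpm: float, quality_score: float = 1.0) -> float:
    """Payout in USD: impressions priced per thousand, scaled by the score."""
    return impressions / 1000 * cpm * quality_score

# The tiers from the list above, applied to identical traffic at a $2.00 CPM.
for score in (1.0, 0.8, 0.6):
    print(score, expected_payout(impressions=1_000_000, cpm=2.00, quality_score=score))
# 1.0 2000.0
# 0.8 1600.0
# 0.6 1200.0

def implied_quality_score(actual_payout: float, impressions: int, cpm: float) -> float:
    """Infer the multiplier the network effectively applied to this payout."""
    return actual_payout / expected_payout(impressions, cpm)

print(implied_quality_score(actual_payout=1400.0, impressions=1_000_000, cpm=2.00))  # 0.7
```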
The Verification Paradox
Publishers can't verify fraud claims because they don't have access to:
- Buyer-side conversion data (did the traffic actually underperform?)
- Antifraud system internals (what triggered the flag?)
- Comparative data (how did other publishers' similar traffic score?)
They must trust the network's claim that traffic was fraudulent. The same network that profits from that classification.
What Good Antifraud Looks Like
Publisher-friendly fraud detection would include:
Transparency: Show what was filtered and why. If you reject an impression, log the reason.
Consistency: Same traffic should score the same way. Random variation indicates a system problem, not a fraud problem.
Appeal process: Let publishers dispute obvious errors. Review edge cases with human judgment.
Buyer configuration: Let buyers set their own thresholds. Aggressive filtering is fine if buyers choose it.
Fallback, not discard: Traffic that fails one buyer's antifraud might pass another's. Don't destroy the impression; redirect it (see the sketch after this list).
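Here is a sketch of what the transparency and fallback items could look like in practice: a filter decision that carries its reason and a stable rule identifier, and a router that tries the next buyer instead of discarding the impression. Every type and name is hypothetical; this illustrates the shape of the interface, not any network's actual system:

```python
# Sketch of a transparent, fallback-friendly filter pipeline. All names and
# types are hypothetical; they illustrate the idea, not a real network API.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class FilterDecision:
    accepted: bool
    reason: Optional[str] = None    # e.g. "datacenter_ip", "click_rate_anomaly"
    rule_id: Optional[str] = None   # stable identifier a publisher can dispute

@dataclass
class Impression:
    impression_id: str
    publisher_id: str

def route(impression: Impression,
          buyers: list[tuple[str, Callable[[Impression], FilterDecision]]],
          audit_log: list) -> Optional[str]:
    """Try each buyer's own filter, log every decision, never silently drop."""
    for buyer_id, buyer_filter in buyers:
        decision = buyer_filter(impression)
        # Every decision is recorded with its reason and is visible to the publisher.
        audit_log.append((impression.impression_id, buyer_id, decision))
        if decision.accepted:
            return buyer_id          # first buyer whose own thresholds accept it
    return None                      # unsold, but the log says exactly why
```

The point isn't this particular structure; it's that rejections become auditable records with reasons attached, instead of impressions that simply vanish between the dashboard and the payment.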
The Uncomfortable Truth
Networks position antifraud as protecting publishers from being blamed for bad traffic. The reality: antifraud often functions as a mechanism to reduce payouts under the unchallengeable banner of "fraud prevention."
Publishers deserve to know: what exactly is being filtered, by what criteria, with what false positive rate? Without this information, "fraud protection" is indistinguishable from "arbitrary revenue reduction."
The solution isn't less antifraud—fraud is real. The solution is antifraud that publishers can see, understand, and verify. Anything less is just another way to extract value from the supply side.