Antifraud Is Not a Switch: Why Fraud Detection Is Always a Tradeoff
Advertisers often think of antifraud as binary: on or off, protected or exposed. The reality is far more nuanced. Every antifraud decision involves tradeoffs between catching fraud and losing legitimate traffic.
The False Positive Problem
Antifraud systems make two types of errors:
False Negatives (Missed Fraud)
Fraud that slips through detection:
- Sophisticated bots that mimic human behavior
- Residential proxies that look like real users
- Click farms using real devices
- New fraud techniques not yet fingerprinted
False Positives (Blocked Legitimate Traffic)
Real users incorrectly flagged as fraud:
- Privacy-conscious users on VPNs
- Corporate users behind proxies
- Users with unusual but legitimate behavior
- New devices or browsers with thin history
The Sensitivity Dial
Imagine antifraud sensitivity as a dial from 1 to 10:
Low Sensitivity (1-3)
- Blocks: Only obvious, certain fraud
- Misses: Sophisticated fraud gets through
- False positives: Very few legitimate users blocked
- Use case: Maximize reach, accept some fraud cost
Medium Sensitivity (4-6)
- Blocks: Most common fraud patterns
- Misses: Some sophisticated attacks
- False positives: Some VPN users, edge cases blocked
- Use case: Balanced approach for most campaigns
High Sensitivity (7-10)
- Blocks: Anything suspicious
- Misses: Very little gets through
- False positives: Many legitimate users blocked
- Use case: High-value conversions where quality is critical
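A minimal sketch of how such a dial might work under the hood, as a threshold on a fraud-risk score. The function name, the 0-1 score, and the threshold formula are all illustrative assumptions, not any particular platform's API:

```python
def block_decision(risk_score: float, sensitivity: int) -> bool:
    """Return True if the traffic should be blocked.

    risk_score: 0.0 (clearly human) .. 1.0 (clearly fraudulent),
    produced by some upstream scoring model (assumed to exist).
    sensitivity: dial position 1 (loose) .. 10 (strict).
    """
    # Higher sensitivity -> lower threshold -> more blocking.
    # Dial 1 blocks only scores >= 0.91; dial 10 blocks scores >= 0.10.
    threshold = 1.0 - (sensitivity / 10) * 0.9
    return risk_score >= threshold

# The same borderline visitor (score 0.5) passes at a low
# setting and is blocked at a high one:
print(block_decision(0.5, 2))  # False: loose dial lets it through
print(block_decision(0.5, 9))  # True: strict dial blocks it
```

The gray zone is exactly the band of scores between the loosest and strictest thresholds: those visitors' fate depends entirely on where you set the dial.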
The Business Tradeoff
Choosing sensitivity is a business decision, not a technical one:
Example: CPA Campaign at $5 Payout
- Option A: Loose antifraud, 15% fraud rate, 100 conversions = $75 lost to fraud
- Option B: Strict antifraud, 5% fraud rate, 70 conversions = $17.50 lost to fraud
Option A nets 85 valid conversions. Option B nets only 66.5.
On these numbers, loose antifraud wins - and if many of the 30 conversions the strict setting blocked were actually valid (false positives), strict filtering is even worse than it looks.
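The arithmetic above can be checked directly. Payout, volumes, and fraud rates come from the example; the helper function is just for illustration:

```python
PAYOUT = 5.00  # CPA payout from the example

def summarize(conversions: float, fraud_rate: float):
    """Return (net valid conversions, dollars lost to fraud)."""
    fraudulent = conversions * fraud_rate
    return conversions - fraudulent, fraudulent * PAYOUT

valid_loose, lost_loose = summarize(100, 0.15)   # Option A: loose antifraud
valid_strict, lost_strict = summarize(70, 0.05)  # Option B: strict antifraud

print(round(valid_loose, 1), round(lost_loose, 2))    # 85.0 75.0
print(round(valid_strict, 1), round(lost_strict, 2))  # 66.5 17.5
```

Note what the dollar figure hides: strict filtering saved $57.50 in fraud payouts but delivered 18.5 fewer valid conversions.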
The Hidden Cost of Strict Filtering
When you block aggressively:
- Real customers are turned away
- Lifetime value is lost, not just one conversion
- Competitors capture users you rejected
- Publishers stop sending you traffic
Why "Just Block All Fraud" Doesn't Work
Perfect Detection Doesn't Exist
No algorithm can perfectly separate humans from bots. The sophistication arms race never ends. There's always a gray zone.
Fraud Definitions Vary
Is a VPN user fraud? What about someone who clicked accidentally? A user who converted but disputed the charge? "Fraud" isn't always clear-cut.
Context Matters
Traffic that looks fraudulent for one offer is fine for another:
- Datacenter IP might be fraud for local services but fine for B2B software
- Fast clicks might be bot fraud or just an eager user
- International IP might be VPN fraud or legitimate traveler
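A sketch of a context-dependent rule, taking the datacenter-IP case from the list above. The signal dict, offer-type labels, and verdict strings are hypothetical, not a real product's schema:

```python
def verdict(signal: dict, offer_type: str) -> str:
    """The same signal can be fine for one offer and suspect for another."""
    if signal.get("datacenter_ip"):
        # Datacenter IPs are normal for B2B software buyers (office
        # networks, cloud-hosted VPNs) but a red flag for local services.
        return "allow" if offer_type == "b2b_software" else "block"
    return "allow"

print(verdict({"datacenter_ip": True}, "b2b_software"))    # allow
print(verdict({"datacenter_ip": True}, "local_services"))  # block
```

A one-size-fits-all rule would have to pick a single verdict for that signal, guaranteeing either missed fraud on one offer or false positives on the other.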
The Right Approach
Match Sensitivity to Value
- High-value conversions ($100+): Strict filtering justified
- Low-value conversions ($1-5): Volume matters more
- Brand campaigns: Reach usually beats purity
Test Different Levels
Run parallel campaigns with different antifraud settings. Measure actual ROI, not just fraud rates. Let data decide.
Layer Your Approach
- Soft filtering at ad serving (block obvious fraud)
- Harder filtering at conversion (validate valuable actions)
- Post-analysis to refine rules
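The layering above can be sketched as two different bars against the same risk score. The thresholds are hypothetical, and a real system would likely use separate signals per stage:

```python
SERVE_THRESHOLD = 0.9    # soft: refuse impressions only for obvious fraud
CONVERT_THRESHOLD = 0.6  # harder: stricter bar before paying a conversion

def serve_ad(risk_score: float) -> bool:
    """Stage 1 (ad serving): block only near-certain fraud."""
    return risk_score < SERVE_THRESHOLD

def accept_conversion(risk_score: float) -> bool:
    """Stage 2 (conversion): validate before money changes hands."""
    return risk_score < CONVERT_THRESHOLD

# A borderline visitor (score 0.7) still sees the ad, but their
# conversion is held for review rather than paid automatically:
print(serve_ad(0.7))           # True
print(accept_conversion(0.7))  # False
```

The design choice: cheap, permissive checks where mistakes cost little (an impression), expensive strict checks where mistakes cost real money (a payout).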
Accept Imperfection
Budget for some fraud. Trying to eliminate 100% costs more (in lost legitimate traffic) than accepting a small fraud rate.
What This Means for Platforms
Good platforms let you choose your tradeoff:
- Adjustable sensitivity levels
- Visibility into what's being blocked and why
- Ability to recover false positives
- Data to optimize your settings
Bad platforms make the choice for you:
- One-size-fits-all antifraud
- No visibility into decisions
- No way to adjust sensitivity
- A "trust us" approach
Antifraud isn't about being "protected" or "exposed." It's about finding the right balance for your specific situation - and having the tools to adjust when that situation changes.