Case Study: A/B Testing Bid Strategies with Smart Rules Automation
Finding the right bid is usually trial and error. Bid too low and you miss good traffic; bid too high and you overpay for conversions. This case study shows how one buyer used Smart Rules to systematically test bid levels and find their optimal CPM.
The Problem
A media buyer running popunder campaigns faced a common challenge:
- Unclear optimal bid - Was $1.00 CPM too high? Too low?
- Manual testing was slow - Changing bids, waiting for data, comparing results
- Inconsistent conditions - Traffic quality varied day-to-day, making comparisons unreliable
- Time-consuming monitoring - Had to check stats constantly to evaluate tests
They needed a systematic way to test bid levels under controlled conditions.
What They Did
Step 1: Created Test Structure
Instead of one campaign, they created three identical campaigns:
- Campaign A: $0.80 CPM (conservative)
- Campaign B: $1.00 CPM (baseline)
- Campaign C: $1.20 CPM (aggressive)
Same targeting, same creatives, same landing page - only the bid differed.
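Conceptually, the setup is three copies of one campaign config that differ only in bid. Here is a minimal sketch of that structure in Python (the Campaign fields are illustrative, not the network's actual API):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Campaign:
    name: str
    cpm_bid: float     # USD per 1,000 impressions
    targeting: str     # identical across variants
    creative: str      # identical across variants
    landing_page: str  # identical across variants

# One baseline config, cloned with only the bid (and label) changed
baseline = Campaign("B (Baseline)", 1.00, "popunder, same geo/device",
                    "creative_v1", "https://example.com/lp")

variants = [
    replace(baseline, name="A (Conservative)", cpm_bid=0.80),
    baseline,
    replace(baseline, name="C (Aggressive)", cpm_bid=1.20),
]

for c in variants:
    print(c.name, c.cpm_bid)
```

Cloning from one baseline object guarantees the variants can't drift apart on any setting except the one being tested.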
Step 2: Set Up Smart Rules for Monitoring
They configured Smart Rules to automatically track performance:
Rule 1: Pause if CPA exceeds target
IF cost_per_conversion > $5.00 AND conversions > 10 THEN pause campaign
This stopped overspending campaigns automatically.
Rule 2: Alert on performance milestones
IF conversions >= 50 THEN send notification
This sent a notification once enough data had accumulated for a statistically meaningful comparison.
Rule 3: Scale winners automatically
IF cost_per_conversion < $3.00 AND conversions > 25 THEN increase budget by 50%
Winning bid levels got more budget without manual intervention.
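Smart Rules are configured in the platform's interface, but it helps to picture the logic they apply on each check. Here is a rough Python equivalent of the three rules above (the Stats structure and field names are our own; the thresholds mirror the rules):

```python
from dataclasses import dataclass

@dataclass
class Stats:
    conversions: int
    cost: float        # total spend in USD
    budget: float      # daily budget in USD
    paused: bool = False

def apply_smart_rules(stats: Stats) -> list[str]:
    actions = []
    cpa = stats.cost / stats.conversions if stats.conversions else float("inf")

    # Rule 1: pause if CPA exceeds target (with enough conversions to trust it)
    if cpa > 5.00 and stats.conversions > 10:
        stats.paused = True
        actions.append("pause campaign")

    # Rule 2: alert once enough data has accumulated for a fair comparison
    if stats.conversions >= 50:
        actions.append("send notification: 50+ conversions")

    # Rule 3: scale winners automatically
    if cpa < 3.00 and stats.conversions > 25:
        stats.budget *= 1.5
        actions.append(f"increase budget to ${stats.budget:.2f}")

    return actions

# Hypothetical mid-test snapshot: CPA of $2.83 on 30 conversions triggers Rule 3
print(apply_smart_rules(Stats(conversions=30, cost=85.0, budget=50.0)))
```

Note the guard conditions: each rule waits for a minimum conversion count, so a single early conversion (or lack of one) can't trigger a pause or a scale-up.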
Step 3: Ran the Test
All three campaigns ran simultaneously for two weeks:
- Equal starting budgets
- Same time period (controlled for day-of-week effects)
- Smart Rules handled monitoring
Step 4: Analyzed Results
After sufficient data accumulated:
| Campaign | Bid (CPM) | Impressions | Conversions | CPA |
|---|---|---|---|---|
| A (Conservative) | $0.80 | 45,000 | 38 | $4.21 |
| B (Baseline) | $1.00 | 62,000 | 71 | $3.52 |
| C (Aggressive) | $1.20 | 78,000 | 82 | $4.56 |
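CPA tells most of the story, but the conversion rate per impression, which you can derive from the table, shows why the cheap bid underperformed. A quick calculation over the table's numbers:

```python
# (bid CPM, impressions, conversions, CPA) straight from the table above
results = {
    "A (Conservative)": (0.80, 45_000, 38, 4.21),
    "B (Baseline)":     (1.00, 62_000, 71, 3.52),
    "C (Aggressive)":   (1.20, 78_000, 82, 4.56),
}

for name, (bid, imps, convs, cpa) in results.items():
    cvr = convs / imps * 100  # conversions per 100 impressions
    print(f"{name}: CVR {cvr:.3f}%  CPA ${cpa:.2f}")
```

Campaign A converts at roughly 0.084% per impression versus 0.115% for Campaign B: the low bid bought cheaper but weaker inventory.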
The Result
The data revealed clear insights:
- $0.80 was too low - Missed quality inventory, lower conversion rate
- $1.00 was optimal - Best balance of volume and efficiency
- $1.20 hit diminishing returns - More impressions, but worse ROI
The Smart Rules had already scaled Campaign B's budget by 50% during the test, recognizing it as the winner before manual analysis.
Key Takeaways
Simultaneous Testing Eliminates Variables
Running tests in parallel controls for market fluctuations. Sequential testing (week 1 at $0.80, week 2 at $1.00) introduces confounding variables.
Automation Removes Emotion
Smart Rules made objective decisions based on data. No second-guessing, no "let's give it more time" on losing variants.
Statistical Significance Matters
Waiting for 50+ conversions per variant before drawing conclusions avoided false signals from small samples.
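As a rough sanity check on that threshold: if conversions arrive roughly as a Poisson process (our assumption, not something the case study states), the relative noise on a count of n conversions is about 1/sqrt(n), so a 95% interval is about +/-1.96/sqrt(n):

```python
import math

# Approximate 95% relative error on a count of n conversions (Poisson assumption)
def rel_error(n: int) -> float:
    return 1.96 / math.sqrt(n)

for n in (10, 25, 50, 100):
    print(f"{n} conversions: estimate uncertain by roughly +/-{rel_error(n):.0%}")
```

At 10 conversions the estimate is still uncertain by around 60%; at 50 it tightens to roughly 28%. Close calls between variants deserve even more data before you commit.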
The Optimal Bid Isn't Always Intuitive
Higher bids don't always mean better results. The $1.20 bid got more traffic but worse quality (or faced more competition for worse inventory).
Implementing This Approach
Test Design
- Choose 3-4 bid levels to test
- Space them meaningfully, 20-30% apart (see the sketch after this list)
- Create identical campaigns except for bid
- Set equal budgets
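A small helper for generating that ladder of bids, using the 20-30% spacing rule of thumb above (a sketch, not platform code):

```python
def bid_ladder(base_cpm: float, levels: int = 3, step: float = 0.20) -> list[float]:
    """Bids centered on base_cpm, spaced `step` (e.g. 0.20 = 20%) apart."""
    offset = (levels - 1) / 2
    return [round(base_cpm * (1 + step * (i - offset)), 2) for i in range(levels)]

print(bid_ladder(1.00))            # [0.8, 1.0, 1.2] -- the case study's ladder
print(bid_ladder(1.00, step=0.30)) # [0.7, 1.0, 1.3] -- wider spacing
```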
Smart Rules Setup
- Pause rule: Stop losers before they waste budget
- Alert rule: Know when data is sufficient
- Scale rule: Automatically amplify winners
Analysis Framework
- Compare CPA (primary metric)
- Check conversion volume (sufficient scale?)
- Calculate ROI if revenue data is available (see the sketch below)
- Consider traffic quality indicators
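If revenue data is available, ROI follows directly from spend and payout per conversion. A minimal sketch (the $4.50 payout below is hypothetical; the $250 spend roughly matches Campaign B's table-implied spend of CPA x conversions):

```python
def roi(spend: float, conversions: int, payout: float) -> float:
    """Return on investment: (revenue - spend) / spend."""
    revenue = conversions * payout
    return (revenue - spend) / spend

# Hypothetical: $250 spent, 71 conversions at a $4.50 payout
print(f"ROI: {roi(250.0, 71, 4.50):.1%}")  # ROI: 27.8%
```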
When to Use This Approach
Bid testing is valuable when:
- Entering a new market or vertical
- Launching campaigns on new traffic sources
- Market conditions have changed significantly
- Current performance has plateaued
- You want to verify assumptions about optimal bids
Let Smart Rules do the monitoring while you focus on strategy. Data-driven bid optimization beats guesswork every time.