Technology & Architecture

Inside PopTrade's RTB Architecture: How We Handle Real-Time Bidding at Scale

December 9, 2025 · 4 min read

Real-time bidding happens in milliseconds. When a user loads a page, an entire auction must complete before they notice any delay. This article explains how PopTrade's architecture handles this challenge without requiring massive server infrastructure.

The RTB Challenge

Every ad request triggers a complex sequence:

  1. Publisher's page sends ad request
  2. Platform receives and parses request data
  3. Eligible campaigns are identified
  4. Fraud checks are performed
  5. Auction runs among qualified bidders
  6. Winner is selected and ad is served

All of this must happen in under 100 milliseconds to avoid impacting user experience. Traditional approaches throw hardware at the problem: more servers, more memory, more compute. We took a different approach.
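The sequence above can be sketched as a single handler with a latency budget. All function names and values here are illustrative stubs, not PopTrade's actual internals:

```python
import time

# Hypothetical stubs for each pipeline stage (illustrative only).
def parse_request(raw): return {"geo": "US", "device": "mobile"}
def passes_fraud_checks(req): return True
def eligible_campaigns(req): return [{"id": 1, "bid": 2.5}, {"id": 2, "bid": 1.8}]
def run_auction(campaigns): return max(campaigns, key=lambda c: c["bid"])

def handle_ad_request(raw, budget_ms=100):
    start = time.monotonic()
    req = parse_request(raw)              # step 2: parse request data
    if not passes_fraud_checks(req):      # step 4: fraud checks
        return None
    campaigns = eligible_campaigns(req)   # step 3: find eligible campaigns
    winner = run_auction(campaigns)       # step 5: auction
    elapsed_ms = (time.monotonic() - start) * 1000
    assert elapsed_ms < budget_ms         # step 6: serve within the budget
    return winner

print(handle_ad_request({})["id"])  # → 1
```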

Pre-Bid Processing

The key insight is that most of the work can happen before the bid request arrives.

Campaign Indexing

Instead of querying the database for every request, we maintain in-memory indexes:

  • Geo index - Campaigns pre-sorted by target countries
  • Device index - Separate lists for desktop, mobile, tablet
  • Budget index - Only campaigns with available budget
  • Schedule index - Only currently active campaigns

When a request comes in, we intersect these pre-built sets rather than filtering from scratch. This reduces query time from potentially hundreds of milliseconds to single-digit milliseconds.
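As a minimal sketch of the idea (campaign IDs and index shapes are assumptions, not the production schema), eligibility becomes a set intersection over pre-built indexes:

```python
# Pre-built in-memory indexes, refreshed outside the request path.
geo_index = {"US": {1, 2, 3}, "DE": {2, 4}}
device_index = {"mobile": {1, 2, 4}, "desktop": {3}}
budget_ok = {1, 2, 4}       # campaigns with remaining budget
schedule_ok = {1, 2, 3, 4}  # campaigns active right now

def eligible(geo, device):
    # Intersect pre-built sets instead of filtering the database per request.
    return (geo_index.get(geo, set())
            & device_index.get(device, set())
            & budget_ok
            & schedule_ok)

print(eligible("US", "mobile"))  # → {1, 2}
```

Set intersection on small in-memory sets is effectively constant time per candidate, which is why this path stays in single-digit milliseconds.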

Placement Pre-Qualification

Similarly, placements are pre-evaluated:

  • Floor CPM cached and indexed
  • Allowed categories pre-computed
  • Blocked advertisers maintained in fast-lookup sets
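A pre-qualified placement check then reduces to a few O(1) lookups. The record below is a hypothetical shape; field names are assumptions for illustration:

```python
# Hypothetical pre-computed placement record.
placement = {
    "floor_cpm": 1.50,
    "allowed_categories": frozenset({"tech", "finance"}),
    "blocked_advertisers": frozenset({42, 77}),
}

def qualifies(bid_cpm, category, advertiser_id, p=placement):
    # All three checks are constant-time lookups against pre-computed data.
    return (bid_cpm >= p["floor_cpm"]
            and category in p["allowed_categories"]
            and advertiser_id not in p["blocked_advertisers"])

print(qualifies(2.0, "tech", 7))   # → True
print(qualifies(2.0, "tech", 42))  # → False (blocked advertiser)
```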

Queue Isolation

Not all requests need the same processing path. We separate traffic into isolated queues:

Fast Path (90% of requests)

Standard requests with clean signals:

  • Known geo, known device type
  • No suspicious fraud indicators
  • Multiple eligible campaigns available

These go through an optimized pipeline with minimal checks.

Evaluation Path (8% of requests)

Requests needing additional analysis:

  • Borderline fraud scores
  • New or unrated placements
  • Complex targeting intersections

Slightly longer processing, but still sub-100ms.

Deep Analysis Path (2% of requests)

Suspicious requests requiring full evaluation:

  • External antifraud provider calls
  • Historical pattern matching
  • Manual review flagging

May exceed 100ms but prevents fraud from entering the system.
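The routing decision itself can be a cheap branch on request signals. The thresholds below are illustrative, not PopTrade's actual cutoffs:

```python
def route(request):
    """Pick a processing path from request signals (thresholds are illustrative)."""
    if request["fraud_score"] >= 0.8:
        return "deep_analysis"   # external antifraud, pattern matching
    if request["fraud_score"] >= 0.4 or request.get("new_placement"):
        return "evaluation"      # extra checks, still sub-100ms
    return "fast"                # clean signals, minimal checks

print(route({"fraud_score": 0.1}))  # → fast
print(route({"fraud_score": 0.5}))  # → evaluation
print(route({"fraud_score": 0.9}))  # → deep_analysis
```

Because each path has its own queue, a burst of suspicious traffic saturates only the deep-analysis workers and never delays the fast path.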

Auction Mechanics

Once eligible campaigns are identified, the auction itself is straightforward:

Bid Collection

Each campaign's effective bid is calculated:

  • Base bid from campaign settings
  • Geo-specific adjustments applied
  • Smart Rule modifiers calculated
  • Quality score factors included
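Combining the factors multiplicatively is one plausible scheme; the article doesn't specify how the adjustments compose, so treat this as a sketch:

```python
def effective_bid(base, geo_adj=1.0, smart_rule=1.0, quality=1.0):
    # Assumption: adjustments compose multiplicatively on the base bid.
    # geo_adj    - geo-specific multiplier from campaign settings
    # smart_rule - Smart Rule modifier
    # quality    - quality score factor
    return round(base * geo_adj * smart_rule * quality, 4)

print(effective_bid(2.00, geo_adj=1.2, smart_rule=0.9, quality=1.05))  # → 2.268
```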

Second-Price Logic

We use second-price auction mechanics:

  • Highest bidder wins
  • Winner pays second-highest bid + minimum increment
  • This encourages truthful bidding

Tiebreakers

When bids are equal:

  • Campaign quality score
  • Historical performance
  • Random selection as final fallback
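The second-price selection and the tiebreaker chain fit in a few lines. The bid tuples and the minimum increment below are illustrative values:

```python
import random

# Each tuple: (campaign_id, effective_bid, quality_score, historical_ctr)
bids = [(1, 2.50, 0.9, 0.012), (2, 2.50, 0.7, 0.015), (3, 1.80, 0.8, 0.010)]
MIN_INCREMENT = 0.01

def run_auction(bids):
    # Rank by bid, then quality score, then historical performance,
    # with a random key as the final fallback tiebreaker.
    ranked = sorted(bids, key=lambda b: (b[1], b[2], b[3], random.random()),
                    reverse=True)
    winner = ranked[0]
    second_price = ranked[1][1] if len(ranked) > 1 else winner[1]
    # Winner never pays more than its own bid.
    clearing_price = min(winner[1], second_price + MIN_INCREMENT)
    return winner[0], clearing_price

winner_id, price = run_auction(bids)
print(winner_id, price)  # → 1 2.5  (tie broken by quality score)
```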

Latency Optimization

Connection Pooling

Database connections are pre-established and pooled. No connection setup overhead per request.

Redis Caching

Frequently accessed data lives in Redis:

  • Campaign configurations
  • Placement details
  • Frequency cap counters
  • Recent fraud scores
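A frequency cap counter, for instance, is just an increment against a key with a TTL. In production this would be a Redis INCR/EXPIRE pair; the dict below is a self-contained stand-in for that pattern, and the key format and cap are assumptions:

```python
import time

# In-memory stand-in for a Redis counter with a TTL window.
_counters = {}  # key -> (count, expires_at)

def incr_with_ttl(key, ttl_seconds=3600, now=None):
    now = now or time.time()
    count, expires = _counters.get(key, (0, now + ttl_seconds))
    if now >= expires:  # window elapsed: start a fresh count
        count, expires = 0, now + ttl_seconds
    _counters[key] = (count + 1, expires)
    return count + 1

def under_frequency_cap(user_id, campaign_id, cap=3):
    # Hypothetical key format: "freq:<user>:<campaign>"
    return incr_with_ttl(f"freq:{user_id}:{campaign_id}") <= cap

print([under_frequency_cap("u1", 7) for _ in range(4)])  # → [True, True, True, False]
```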

Async Where Possible

Non-critical operations happen after response:

  • Statistics recording
  • Log aggregation
  • Notification triggers

The user gets their ad while we finish bookkeeping.
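One simple way to defer that bookkeeping is a background worker draining a task queue; the handler enqueues and returns immediately. This is a generic sketch of the pattern, not PopTrade's actual pipeline:

```python
import queue
import threading

tasks = queue.Queue()
done = []

def worker():
    # Drains bookkeeping tasks (stats, logs, notifications) in the background.
    while True:
        task = tasks.get()
        if task is None:
            break
        done.append(task)  # stand-in for writing stats/logs
        tasks.task_done()

threading.Thread(target=worker, daemon=True).start()

def serve_ad():
    ad = {"creative": "banner-1"}    # critical path: pick and return the ad
    tasks.put(("record_stats", ad))  # non-critical: deferred to the worker
    tasks.put(("aggregate_logs", ad))
    return ad

serve_ad()
tasks.join()      # only for this demo; production never blocks on the queue
print(len(done))  # → 2
```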

Scaling Strategy

Our architecture scales horizontally:

Stateless Request Handlers

Each server can handle any request. No session affinity required. Add servers to add capacity linearly.

Shared Nothing

Servers don't communicate with each other during request processing. All shared state lives in Redis/PostgreSQL.

Geographic Distribution

Request handlers deployed in multiple regions. Users hit the nearest endpoint, reducing network latency.

Why This Matters

This architecture lets us:

  • Stay fast - Sub-100ms response times for most requests
  • Stay efficient - No over-provisioning of expensive infrastructure
  • Stay reliable - Queue isolation prevents bad traffic from affecting good traffic
  • Stay scalable - Linear scaling without architectural changes

Real-time bidding doesn't require billion-dollar infrastructure. It requires smart architecture that does expensive work ahead of time and keeps the critical path lean.
