Six diagrams plus a PM components playbook on auction quality, identity durability, supply trust, and attribution integrity — so this reads like an operating manual, not just a glossary.
Horizontal topology: demand on the left, exchange intelligence in the middle, and supply on the right with directional flow.
Advertisers and agencies set goals and budgets; DSPs choose impressions and bid prices.
Marketplace auctions match bid demand with inventory supply and pick winners in milliseconds.
Audience and contextual signals improve bid quality and targeting relevance.
SSPs package publisher inventory, run yield optimization, and return winning creatives for rendering.
Post-render systems verify viewability, brand safety, and invalid traffic across the whole chain.
PM lens: the middle layer (exchange + data + verification) is where optimization leverage and margin extraction both concentrate.
One impression clears in about 100–150ms: request fan-out, bidding, auction, render, then async verification.
| Step | Actor | Action | Time |
|---|---|---|---|
| 1 | User | Loads page | 0ms |
| 2 | Publisher | Ad tag sends bid request to SSP | 5ms |
| 3 | SSP | Forwards bid request to exchanges | 10ms |
| 4 | Exchange | Sends requests to DSPs with user signals | 15ms |
| 5 | DSP | Scores user and computes bid | 20-50ms |
| 6 | Exchange | Runs auction, selects winner | 55ms |
| 7 | SSP/Publisher | Winner returned and creative rendered | 60-70ms |
| 8 | User | Ad visible | ~100ms |
| 9 | Verification | Viewability + brand safety checks | 200ms+ |
Core constraint: every participant draws down the same latency budget. If your step is slow, someone else's optimization work gets canceled by user abandonment.
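The shared latency budget can be made concrete with a toy bid-collection step. The timeout value and DSP names are assumptions for illustration; real exchange timeouts vary by integration:

```python
# Each DSP reports (name, bid_cpm, response_ms); bids arriving after the
# exchange-side timeout are dropped, mirroring the shared latency budget.
TIMEOUT_MS = 50  # illustrative bidder timeout

def collect_bids(dsp_responses):
    """Keep only bids that arrived within the timeout window."""
    return [(dsp, cpm) for dsp, cpm, ms in dsp_responses if ms <= TIMEOUT_MS]

responses = [
    ("fast_dsp", 3.0, 30),   # in time
    ("slow_dsp", 6.0, 80),   # highest bid, but too slow: dropped
    ("ok_dsp",   4.0, 45),   # in time
]
in_time = collect_bids(responses)
winner = max(in_time, key=lambda b: b[1])
```

Note the failure mode: the highest-value bid loses not on price but on latency, which is why bid-compute time is a first-class product metric for DSPs.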
As third-party cookies fade, targeting shifts to first-party data, identity graphs, and contextual signals.
Third-party cookies
How: A browser cookie tracks behavior across sites for profile-based targeting.
Pros: Universal, simple, cheap.
Cons: Privacy backlash, browser blocks, unstable future.
Status: Dying.
First-party data
How: Logged-in publishers collect consented user data and activate it through controlled environments.
Pros: High quality, durable, consent-based.
Cons: Hard for small publishers without authentication scale.
Winners: Amazon, NYT, large logged-in platforms.
Hashed email IDs
How: A hashed email identity is passed across the ecosystem as a cookie alternative.
Pros: Cross-site addressability with consent pathways.
Cons: Fragmented standards and uneven adoption.
Players: TTD (UID2), LiveRamp, Google PAIR.
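A simplified sketch of the hashed-email idea: normalize the address, then hash it so the same user yields the same ID across sites. Real schemes such as UID2 add salting, encryption, and rotation on top; this is only the core intuition:

```python
import hashlib

def hashed_email_id(email: str) -> str:
    """Normalize, then SHA-256. Simplified: production ID schemes
    (e.g. UID2) layer salting, encryption, and rotation on top."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# The same normalized address yields the same ID across sites:
a = hashed_email_id("User@Example.com ")
b = hashed_email_id("user@example.com")
```

The fragmentation problem in the "Cons" line follows directly: two vendors with different normalization or salting rules produce different IDs for the same person.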
Contextual targeting
How: Target by page/topic semantics instead of user-level tracking.
Pros: Privacy-safe, broad scale, no personal identifiers required.
Cons: Lower precision for niche intent targeting.
Revival: GumGum, Peer39, NLP-based contextual providers.
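A toy contextual relevance score shows why this approach trades precision for privacy. Production contextual engines use NLP and embeddings; this keyword-overlap version is only a sketch of the shape of the problem:

```python
def contextual_score(page_text: str, campaign_topics: set[str]) -> float:
    """Toy relevance score: fraction of campaign topics found on the page.
    Real providers use semantic models, not raw keyword overlap."""
    words = set(page_text.lower().split())
    if not campaign_topics:
        return 0.0
    return len(campaign_topics & words) / len(campaign_topics)

score = contextual_score(
    "review of hybrid car batteries and charging",
    {"car", "charging", "loans"},
)
```

No user identifier appears anywhere in the function, which is the whole point, and also why niche intent ("actively shopping for loans") is hard to recover from page text alone.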
Ad tech tax: for a $10 CPM open-auction buy, the publisher often receives only ~50% of advertiser spend. The other half is fragmented across intermediaries and service layers.
The same impression transaction looks efficient to one side and margin-destructive to the other.
Asymmetry: advertisers buy outcomes; publishers sell attention. Programmatic intermediaries arbitrate between those objectives and capture spread in the middle.
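The ~50% take can be made concrete as a fee waterfall. The individual percentages below are illustrative placeholders chosen to sum to the article's ~50% figure; real fee splits vary widely by deal type and vendor:

```python
# Illustrative fee ladder for a $10.00 CPM open-auction buy.
# Percentages are placeholders, not quoted vendor rates.
FEES = [
    ("agency",            0.08),
    ("dsp",               0.15),
    ("data/verification", 0.08),
    ("exchange",          0.09),
    ("ssp",               0.10),
]

def publisher_net(advertiser_cpm: float) -> float:
    """Gross spend minus each intermediary's cut of the gross."""
    remaining = advertiser_cpm
    for _, rate in FEES:
        remaining -= advertiser_cpm * rate
    return round(remaining, 2)

net = publisher_net(10.0)  # roughly half of gross reaches the publisher
```

Note the modeling assumption: each fee is taken off gross spend. Some intermediaries instead take a cut of the remaining balance, which compounds differently, so the waterfall shape itself is a product decision worth auditing.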
Attribution choice changes budget allocation behavior as much as targeting model choice does.
Last-click attribution
How: The final clicked ad gets 100% of conversion credit.
Pros: Simple and deterministic.
Cons: Ignores awareness assist touches.
Used by: Smaller advertisers, default channel reports.
Multi-touch attribution (MTA)
How: Credit is distributed across touchpoints (linear/time-decay/position).
Pros: Better journey visibility than single-touch models.
Cons: Data heavy, privacy-constrained, modeling fragile.
Used by: Sophisticated brands and agencies.
Media mix modeling (MMM)
How: A statistical model estimates channel contribution at the aggregate spend level.
Pros: Privacy-safe and includes offline media.
Cons: Slow cadence; weak for intraday optimization.
Used by: Large enterprise advertisers.
Incrementality testing
How: A holdout A/B design compares exposed vs. control conversion outcomes.
Pros: Causal signal, not just correlation.
Cons: Operationally expensive and slower to run.
Used by: Advanced teams (often with platform-native experiments).
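The holdout comparison reduces to a simple lift calculation. The numbers below are invented for illustration, and a real test would also need a significance check on the difference:

```python
def incremental_lift(exposed_conv: int, exposed_n: int,
                     control_conv: int, control_n: int) -> float:
    """Relative lift of the exposed group's conversion rate
    over the holdout (control) group's rate."""
    cvr_exposed = exposed_conv / exposed_n
    cvr_control = control_conv / control_n
    return (cvr_exposed - cvr_control) / cvr_control

# 3.0% exposed CVR vs 2.0% control CVR -> +50% relative lift
lift = incremental_lift(300, 10_000, 200, 10_000)
```

The causal claim in the "Pros" line depends entirely on the holdout being randomly assigned; a self-selected control group turns this back into correlation.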
Practical stack: many mature teams run MMM for strategic budget setting, MTA for tactical optimization, and incrementality tests to validate both.
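The single-touch vs multi-touch distinction is easiest to see in code. A minimal sketch of three credit rules, assuming each channel appears once in the path and a hypothetical `half_life` parameter for the decay curve:

```python
def last_click(touches: list[str]) -> dict[str, float]:
    """All credit to the final touch."""
    credit = {t: 0.0 for t in touches}
    credit[touches[-1]] = 1.0
    return credit

def linear(touches: list[str]) -> dict[str, float]:
    """Equal credit to every touch."""
    share = 1.0 / len(touches)
    return {t: share for t in touches}

def time_decay(touches: list[str], half_life: float = 2.0) -> dict[str, float]:
    """Later touches earn exponentially more credit (normalized to 1)."""
    n = len(touches)
    weights = [0.5 ** ((n - 1 - i) / half_life) for i in range(n)]
    total = sum(weights)
    return {t: w / total for t, w in zip(touches, weights)}

path = ["display", "social", "search"]
lc = last_click(path)
td = time_decay(path)
```

Running all three on the same path shows why model choice moves budget: last-click sends 100% to search, while time-decay still leaves meaningful credit on the upper-funnel display touch.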
Where product choices in adtech move revenue, margin, and trust: auction quality, identity strategy, and measurement credibility.
How this connects to the diagrams: Programmatic Stack defines control points, RTB flow shows real-time decisions, Revenue Flow quantifies take-rate impact, and Measurement reveals attribution risk. This section converts the map into a PM action model.
Bid decisioning
What it does: Decides which impressions to bid on and how aggressively to price each one.
PM metrics: Win rate, CPM efficiency, cost per outcome (CPA/CPI), spend pacing vs budget.
Pitfall: Optimizing only for cheap inventory often degrades downstream conversion quality.
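Of the metrics above, spend pacing is the most mechanical, so it makes a good sketch. The linear-delivery assumption and the `sensitivity` clamp values are illustrative choices, not a standard formula:

```python
def pacing_error(spent: float, budget: float, elapsed_frac: float) -> float:
    """Positive -> ahead of schedule; negative -> behind.
    Assumes an even (linear) delivery target over the flight."""
    target = budget * elapsed_frac
    return (spent - target) / budget

def bid_multiplier(error: float, sensitivity: float = 2.0) -> float:
    """Throttle bids when ahead of schedule, boost when behind (clamped)."""
    return min(2.0, max(0.1, 1.0 - sensitivity * error))

# Halfway through the flight, $600 of a $1000 budget is spent:
err = pacing_error(spent=600.0, budget=1000.0, elapsed_frac=0.5)
mult = bid_multiplier(err)  # <1.0: bid less aggressively
```

The pitfall in the line above shows up here too: a pacing controller that only chases schedule will happily fill the budget with cheap, low-converting inventory.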
Supply quality and curation
What it does: Filters inventory for fraud, viewability, and contextual fit before buyers pay for it.
PM metrics: Viewability %, IVT %, brand safety incidents, effective CPM lift for curated paths.
Pitfall: Overly broad supply creates apparent scale but weak business outcomes.
Measurement and attribution
What it does: Connects impression exposure to conversion outcomes under model assumptions.
PM metrics: Attributed ROAS, incremental lift, modeled-vs-observed delta, reporting lag.
Pitfall: Last-click comfort can hide incrementality collapse.
Identity and addressability
What it does: Maintains targeting and frequency control as deterministic IDs become scarcer.
PM metrics: Match rate, reachable audience %, frequency control error, consented signal coverage.
Pitfall: Treating identity as a vendor checkbox instead of a product capability.
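Frequency control is the capability most directly hurt by ID scarcity. A minimal in-memory cap sketch, with the caveat stated in the comment being the actual product problem:

```python
from collections import defaultdict

class FrequencyCap:
    """In-memory sketch. Production systems key on a durable identity
    graph: when an ID churns (e.g. cookie loss), the same person looks
    new again and the true frequency silently exceeds the cap."""
    def __init__(self, cap: int):
        self.cap = cap
        self.seen = defaultdict(int)

    def allow(self, user_id: str) -> bool:
        if self.seen[user_id] >= self.cap:
            return False
        self.seen[user_id] += 1
        return True

cap = FrequencyCap(cap=2)
decisions = [cap.allow("u1") for _ in range(3)]  # third impression blocked
```

"Frequency control error" from the metrics line is exactly the gap between this counter's view and how many times the person actually saw the ad.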
Interview shortcut: frame adtech tradeoffs as efficiency vs control vs trust. Buyers chase outcomes, sellers chase yield, and regulators/users demand privacy. Great PM narratives show how your product balances all three instead of maximizing only one.
Teams optimize media cost while conversion quality and incrementality collapse.
Inventory expansion without fraud/viewability controls inflates spend but weakens outcomes.
Last-touch reporting claims wins that were already likely; budget gets misallocated.
No first-party strategy means targeting performance drops with every privacy policy shift.
| Situation | Optimize For | Guardrail Metrics | Avoid |
|---|---|---|---|
| New campaign ramp | Learning speed + signal quality | Win rate, CVR trend, pacing error | Over-constraining bid strategy too early |
| Brand safety incidents rise | Inventory trust + controls | IVT %, unsafe placement rate | Blindly widening exchanges to recover scale |
| ROAS pressure from finance | Incremental efficiency | Lift tests, iROAS, CPA by cohort | Only tuning last-click rules |
| Cookie/signal degradation | Identity resilience | Match rate, addressable reach, frequency error | Treating privacy changes as one-off incidents |
Angle: segment by channel/signal class, shift budget to high-confidence inventory, re-calibrate attribution with holdouts.
Angle: show net yield impact (not fee % alone), add transparency controls, and test curated paths that improve effective CPM.
Angle: propose tiered inventory quality lanes with explicit spend caps and quality SLAs.