The Incrementality Playbook
Your Dashboard Says Every Channel Is Profitable. Your P&L Disagrees.

In Issues #51-53, we built the operational system: allocate budget (70/20/10), diversify creative (60/30/10), test new channels (Select → Test → Decide).
But every decision in that system depends on measurement. And your measurement is probably lying to you.
Attribution models overestimate channel contribution by 20-40% on average. A 2025 Marketing Science Institute study found that brands without incrementality testing waste 23% of marketing spend on non-incremental activities.
The fix isn't better attribution. It's incrementality testing — measuring what actually changes when you turn a channel on or off.
The Attribution Trap
Last-click gives 100% credit to the last touchpoint. Multi-touch distributes it. Both measure correlation, not causation.
Uber (2017): Kevin Frisch, head of performance marketing, cut $100M of Uber's ~$150M programmatic ad budget. App installs didn't drop. The "conversions" were organic users that ad networks were falsely claiming credit for.
eBay (2015): Researchers turned off branded search ads in random markets. Near-zero incremental sales lost. They were paying for "$12.28 return per dollar" — on conversions that would have happened organically.
Edmunds.com (2017): Replicated the eBay study. Found ~50% of branded search traffic was non-incremental.
Attribution tells you who touched the ball. Incrementality tells you who scored the goal.
Why This Matters Now
iOS ATT opt-in at ~35% — deterministic attribution is broken for 65% of iOS users
SKAN 5.0 is an improvement, but its data is still aggregated and delayed
Google Privacy Sandbox closing Android device-level tracking through 2026
52% of US marketers already use incrementality testing (eMarketer) — if you're not, competitors have data you don't
73% of marketing leaders call it essential, up from 41% in 2023 (Gartner)
Platform tests are biased — Braun & Schwartz showed that Meta's and Google's lift tests don't create truly randomized test groups. Independent testing is the only reliable path.
44% of marketers cite accuracy concerns as the #1 barrier to incrementality adoption (eMarketer) — which is exactly why starting simple with geo-lift matters
The measurement gap isn't shrinking. Teams that switch now have a 12-18 month advantage.
The Incrementality Playbook
Three methods, ordered by complexity.
Method 1: Holdout Tests
Turn off a channel for 2-4 weeks. Measure what happens to total conversions.
Example math:
Conversions with channel ON: 1,000/week
Channel's attributed conversions: 300/week
Conversions with channel OFF: 850/week
Actual incremental value: 150/week (not 300)
Over-attribution: 2×
Simple. Cheap. But you lose real revenue during the test.
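The math above fits in a few lines. This is a minimal sketch of the holdout calculation, using the example numbers from this section; swap in your own weekly figures.

```python
# Holdout test math: compare total conversions with the channel ON vs. OFF.
# Numbers mirror the worked example above.

def holdout_incrementality(conv_on, conv_off, attributed):
    """Return (true incremental conversions, over-attribution factor)."""
    incremental = conv_on - conv_off          # what you actually lose when the channel is off
    over_attribution = attributed / incremental
    return incremental, over_attribution

incremental, factor = holdout_incrementality(conv_on=1000, conv_off=850, attributed=300)
print(incremental)  # 150 truly incremental conversions/week, not 300
print(factor)       # 2.0 -> the channel is claiming 2x the credit it earns
```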
Method 2: Geo-Lift Tests
Run ads in some geographic markets, pause in others. Compare conversion lift.
How to run it:
Select 3-5 comparable markets (similar demographics, baseline rates)
2-3 test markets (ads on), 1-2 control (ads off)
Run placebo test first (A/A) to validate your model
Run for 3-4 weeks minimum
Works on budgets as low as $10K. Meta's GeoLift tool (open source) automates the statistical design. This is the workhorse method.
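The core readout of a geo-lift test is a difference-in-differences: how much did test markets move beyond the trend in control markets? Meta's GeoLift package handles market matching, synthetic controls, and significance testing; the sketch below (with made-up numbers) only illustrates the underlying comparison.

```python
# Simplified geo-lift readout: conversion change in test markets (ads on)
# vs. the baseline trend in control markets (ads off).
# All numbers are illustrative, summed over the test window.

def geo_lift(test_pre, test_post, control_pre, control_post):
    """Relative lift in test markets beyond the control-market trend."""
    test_change = test_post / test_pre            # e.g. 1.15 = +15%
    control_change = control_post / control_pre   # organic trend with ads off
    return test_change / control_change - 1

lift = geo_lift(test_pre=4000, test_post=4600,        # test markets, ads on
                control_pre=2000, control_post=2100)  # control markets, ads off
print(round(lift, 3))  # 0.095 -> ~9.5% incremental lift
```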
Method 3: Ghost Ads / PSA Tests
Show real ads to treatment group, PSAs to control group. Compare conversions.
True randomized experiment. Gold standard. But requires platform support (Meta Conversion Lift, Google Brand Lift) and $50K+ budget.
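Because assignment is randomized, the treatment-vs-control difference in conversion rates is the causal lift. The platforms compute this for you in their lift products; this sketch shows the underlying math with illustrative numbers, including a two-proportion z-test to check the lift is statistically real.

```python
import math

# Ghost-ads / PSA readout: treatment group saw real ads, control saw PSAs.
# Randomization makes the rate difference causal; a two-proportion z-test
# tells you whether it clears noise. All counts below are illustrative.

def lift_and_z(conv_t, n_t, conv_c, n_c):
    p_t, p_c = conv_t / n_t, conv_c / n_c
    lift = p_t / p_c - 1                      # relative incremental lift
    p_pool = (conv_t + conv_c) / (n_t + n_c)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))
    return lift, (p_t - p_c) / se

lift, z = lift_and_z(conv_t=1300, n_t=100_000, conv_c=1000, n_c=100_000)
print(round(lift, 2), round(z, 1))  # 0.3 6.3 -> 30% lift, z well past 1.96
```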
Which Method? Match Your Situation
| Situation | Method | Min Budget | Duration |
|---|---|---|---|
| Suspect over-attribution | Holdout | $0 | 2-4 weeks |
| Ongoing channel measurement | Geo-Lift | $10K | 3-4 weeks |
| Precise lift measurement | Ghost Ads | $50K+ | 4-6 weeks |
| Validating reallocation | Geo-Lift | $10K | 3-4 weeks |
Start with geo-lift. Best balance of rigor, cost, and actionability.

What This Looks Like in Practice
A CPG brand. $12M annual spend across Meta, Google, TV, email. Attribution said everything was balanced:
| Channel | Attributed | Incremental | Gap |
|---|---|---|---|
| Meta | 35% | 28% | Over by 7 pts |
| Google Search | 28% | 12% | Over by 16 pts |
| TV | 22% | 38% | Under by 16 pts |
| Email | 15% | 8% | Over by 7 pts |
Google Search was getting 2.3× the credit it deserved. Those "conversions" were high-intent users who would have found the product anyway — Search intercepted the last click.
TV was getting half the credit it deserved. It drove awareness that converted through other channels. Attribution gave it zero credit for that.
They shifted $2.3M from Search to TV. Result: 11% improvement in total conversions over 6 months.
Your "best" channel by attribution might be your most over-credited. Your "worst" might be your biggest hidden driver.
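The over- and under-credit ratios quoted above fall straight out of the table: attributed share divided by incremental share shows how inflated (>1) or discounted (<1) each channel's credit is.

```python
# Reproducing the gap analysis from the CPG table above:
# attributed share / incremental share = credit vs. true contribution.

channels = {            # channel: (attributed %, incremental %)
    "Meta":          (35, 28),
    "Google Search": (28, 12),
    "TV":            (22, 38),
    "Email":         (15, 8),
}

for name, (attributed, incremental) in channels.items():
    ratio = attributed / incremental
    print(f"{name}: {ratio:.1f}x credit vs. true contribution")
# Google Search: 2.3x (over-credited); TV: 0.6x (under-credited)
```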
The 5-Minute Incrementality Audit
Step 1: List your top 4 channels with their attributed conversion share.
Step 2: Flag any channel where attributed share ≈ budget share. Suspiciously convenient — attribution is probably just reflecting your spend.
Step 3: Identify your "last click magnet." Usually branded search or email. These channels intercept conversions other channels created.
Step 4: Pick one channel to test. Start with the one you suspect is most over-attributed.
Step 5: Choose your method: Holdout (simplest), Geo-Lift (most practical), or Ghost Ads (most rigorous).
Your first incrementality test will almost certainly reveal at least one channel getting 2× more credit than it deserves. That's not a failure — that's the beginning of actually understanding your marketing.
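Step 2 of the audit is mechanical enough to script. This is a minimal sketch of the "suspiciously convenient" flag — the channel names, shares, and 3-point tolerance are all illustrative, not a prescription.

```python
# Audit step 2: flag channels whose attributed share tracks their budget
# share too closely -- a sign attribution is just echoing your spend.
# Tolerance and example mix are illustrative; tune for your own channels.

def audit(channels, tolerance=0.03):
    """channels: {name: (budget_share, attributed_share)} as fractions."""
    flags = []
    for name, (budget, attributed) in channels.items():
        if abs(budget - attributed) <= tolerance:
            flags.append(name)  # attribution may just be reflecting spend
    return flags

mix = {
    "Meta":           (0.40, 0.41),
    "Branded Search": (0.25, 0.35),  # big gap: likely a last-click magnet
    "TV":             (0.35, 0.24),
}
print(audit(mix))  # ['Meta'] -> a good first candidate to test
```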
Run your own audit: I built a free interactive tool that does this analysis for you. Input your channels, get instant flags. Try the Incrementality Audit Calculator →
Connecting the Dots
Four issues. One system.
Issue #51: How to allocate your budget (70/20/10). Where the money goes.
Issue #52: How to allocate your creative (60/30/10). What the ads say.
Issue #53: How to test new channels. Where to grow next.
Issue #54: How to measure what's actually working. Whether any of the above is true.
Budget allocation without incrementality is optimizing a broken map. The teams winning in 2026 don't just spend differently — they measure differently.
One Thing To Do This Week
Pick the channel you spend the most on. Ask yourself: "If I turned this off for two weeks, how many conversions would I actually lose?"
If you don't know the answer, that's the problem. Run the test. Find the truth.
Your first incrementality test won't give you better data. It'll give you real data — probably for the first time.
Daniel Avshalom
Diversified UA
P.S. — This issue completes the four-part system: Budget (Issue #51), Creative (Issue #52), Channels (Issue #53), Measurement (Issue #54). If you missed any, they build on each other. Read Issue #51. Read Issue #52. Read Issue #53.
Sources:
Blake, Nosko & Tadelis (2015): "Consumer Heterogeneity and Paid Search Effectiveness: A Large-Scale Field Experiment" — eBay branded search incrementality study
Uber/Kevin Frisch (2017): Programmatic ad fraud discovery — $100M in non-incremental spend identified
Coviello et al. (2017): Edmunds.com branded search replication — ~50% non-incremental traffic
eMarketer (2025): MMM, Incrementality & Measurement Trends — 52% of US marketers using incrementality testing
Gartner (2025): State of Marketing Analytics — 73% of leaders view incrementality as essential
Marketing Science Institute (2025): Non-incremental spend waste estimated at 23% for brands without testing
Adjust (Q2 2025): Global ATT opt-in rates at ~35%
Apple (2025): SKAN 5.0 / AdAttributionKit — expanded conversion values to 1,024 levels