The Gap Between the Promise and Reality of CTV

Last week I joined Sam Khoury on stage at Marketecture Live III for a fireside chat titled “Deterministic? Prove It.” The session covered CTV identity, supply chain signal integrity, and incrementality.

CTV is supposed to be the premium, authenticated, addressable dream. Brands have been sold on that narrative for years, and for good reason: when the signal, identity, and measurement layers actually hold together, CTV becomes one of the most powerful channels available to marketers.

Sam opened with a story I’m still thinking about.

He spent a Sunday morning hunting down MacKenzie-Childs home accessories at Targets across Southern California. His wife had seen a CTV ad for an exclusive collaboration, interrupted his quiet morning before their daughters woke up, and sent him out to find the limited-edition items before they were sold out.

It was the right household. The right message. The right moment. A clear line from impression to physical purchase. No click. No last-touch event. Just a campaign that actually moved someone.

That story is the promise of CTV.

The reason that promise is still not the norm is the gap between what CTV could be and what the identity layer beneath it actually delivers.

How Identity Precision Gets Lost

Premium CTV is primarily an authenticated environment. When someone opens Peacock, Disney+, or Paramount+, they log in with an email address associated with a subscriber account. The publisher therefore begins with a declared first-party identity tied to a real audience relationship.

The problem is what happens next.

Publishers are not willing, nor should they be, to push that authenticated data into the public pipes of programmatic advertising. Any signal they are willing to pass gets translated, abstracted, and diluted by intermediaries with their own economic incentives around the impression.

What begins as a placement informed by a publisher’s authenticated relationship with a subscriber rarely survives the journey intact. By the time the bid request reaches the buy side, the original signal has often been reduced to a handful of proxy signals.

In practice that often means the decision is being made using an IP address and a collection of derived attributes. At best those signals are probabilistic. At worst they are attributes added somewhere along the supply chain because they increase the clearing price of the impression.

A recent CIMM and Go Addressable study found that IP-to-postal-address matches are accurate only 13% of the time, and IP-to-email matches only 16%. Those are not rounding errors. They point to something fundamentally broken in the programmatic supply chain.

Jon Watts, Managing Director of CIMM, described it this way: there’s a “profound anxiety that marketers are, in a sense, being sold a bit of a pup.” He continued: there is “this promise in the industry of completely accurate, deterministic data…and unsurprisingly, the truth is much more complicated than that.” 

Further analysis of bid request attribute patterns over time makes the compounding error clear. Assigning an advertiser’s desired audience, or a third-party segment, to a CTV device often requires several hops and identity matches across the programmatic supply chain. Each step introduces its own margin of error.
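The compounding effect is easy to see with a little arithmetic. The sketch below is illustrative, not a model of any specific supply chain: the per-hop accuracies are hypothetical, apart from the IP-to-postal figure, which echoes the roughly 13% finding from the CIMM study cited above.

```python
# Illustrative sketch: how per-hop match accuracy compounds across a
# programmatic supply chain. Hop accuracies are hypothetical, except the
# IP-to-postal figure, which echoes the ~13% CIMM finding.

def end_to_end_accuracy(hop_accuracies):
    """Probability an identity match survives every hop intact,
    assuming errors at each hop are independent."""
    p = 1.0
    for acc in hop_accuracies:
        p *= acc
    return p

# e.g. device ID -> IP (90%), IP -> postal address (13%), postal -> segment (85%)
hops = [0.90, 0.13, 0.85]
print(f"End-to-end accuracy: {end_to_end_accuracy(hops):.1%}")  # 9.9%
```

Even with two hops at high accuracy, a single weak link drags the end-to-end match rate into single digits, which is exactly the fragmentation pattern described above.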

Over time, what should appear as a stable network pattern for a household can fragment into a much larger set of IP addresses than basic network behavior would suggest. Some of those addresses belong to entirely different homes, meaning impressions intended for one audience can end up delivered to another.

The implications are not theoretical. If the matching layer is wrong, everything built on top of it begins to drift. Frequency is miscounted. Reach is overstated. Incrementality tests start measuring noise instead of outcomes.

Deterministic means a declared and verifiable association to a real person or household, maintained not just at the moment of impression but across the full lifecycle of a campaign, including any lookback windows. No inference. No modeling. If an educated guess exists anywhere in the chain, the result is no longer deterministic. It is simply a guess with better branding.

The publisher often starts with the truth. The farther the signal travels, the more expensive the lie becomes.

Every intermediary in the programmatic stack (SSPs, exchanges, resellers, and curation seats) has an economic incentive to increase the perceived value of the impression passing through it. A low-value impression becomes more attractive when it carries premium attributes: the right device type, the right app label or bundle ID, the right audience signal. Those attributes often appear in the bidstream whether they were observed or not.

The most popular example of this is now known as ID bridging. Ian Trider, VP of Product at Basis Technologies, was direct about it: “Whenever an ID is inserted into the buyer ID field, and it’s not the native browser cookie ID, as far as I’m concerned it’s a spoofed ID. You can’t just say, ‘It’s related’ — that’s not how the ID field works.” 

Ari Paparo at Marketecture made the same point with less patience: “I don’t see a meaningful difference between spoofing the URL an ad is coming from (which we all agree is fraud) versus spoofing the ID of the user the ad will be served to.”

The industry has framed supply path optimization primarily as a cost discussion: shorter paths mean lower fees and cheaper CPMs. That framing misses the real point. The shorter and more direct the path from publisher to buyer, the higher the probability that the signal on the other end is the same signal that left the publisher. Signal fidelity is the goal. Lower CPMs are just the side effect.

Georgia-Pacific ran this exact playbook in 2025. Their senior director of digital media, Paras Shah, explained they reduced the number of SSPs they worked with by around 70% specifically to enforce quality at the source. The cleaner the auction, the more predictable the supply becomes.

A Clean Supply Path Is Not Enough

Shorter supply paths improve signal fidelity. They do not solve identity alignment.

A direct connection between publisher and buyer helps preserve the signal that originated with the publisher’s authenticated relationship with the viewer. But it does not automatically reconcile the identity systems on both sides of the transaction.

The advertiser has its view of the customer. The publisher has its view of the subscriber. If those two systems are not reconciled before the impression is served, a clean supply path simply accelerates the delivery of misaligned media.

This is where identity graphs often create confusion. The presence of a graph integration in the supply chain is often interpreted as proof that identity resolution has occurred. In practice it often means only that the observed identifier has been mapped to a cluster of other identifiers, without transparency into how those associations were created or validated over time.

As a result, if we ran a test using the exact same audience segment across multiple publishers or platforms, the overlap would likely be far smaller than expected. What should feel like a single, unified campaign targeting a common segment often ends up serving several loosely related versions of that audience with limited overlap.
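One way to quantify that drift is to compare the resolved household sets behind the "same" segment on two platforms. This is a hypothetical sketch using Jaccard similarity; the household IDs are invented for illustration.

```python
# Hypothetical sketch: how much do two platforms' versions of the "same"
# audience segment actually overlap? Household IDs are invented.

def jaccard(a, b):
    """Jaccard similarity: shared members over total distinct members."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

platform_a = {"hh-001", "hh-002", "hh-003", "hh-004"}
platform_b = {"hh-003", "hh-004", "hh-005", "hh-006", "hh-007"}

print(f"Segment overlap: {jaccard(platform_a, platform_b):.0%}")  # 29%
```

A buyer expecting near-total overlap between two activations of one segment would, under these assumed numbers, actually be reaching mostly disjoint households.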

Digital identifiers frequently represent the application or browser, not the person using it. An email address captures only one slice of a person’s digital footprint. And a household is not an individual at all. It is multiple people involved in, and influencing, a purchase decision.

This is where the earlier story becomes more precise.

Sam’s wife saw the ad. Sam was the one who drove along Route 1 to find the MacKenzie-Childs product in the store.

A person-level identifier alone, like UID2, would struggle to connect those two events. The identity layer needs to understand them as part of the same household and attribute accordingly.

Households anchored to a terrestrial address remain one of the most durable identity units available at scale. Cookies expire. IP addresses rotate and are increasingly truncated. Email addresses change, are masked, and multiply as people change jobs. A household relationship tends to persist far longer than any of those signals.

What that requires in practice is reconciliation between the publisher’s authenticated user data and an identity spine capable of clustering identifiers, devices, emails, and people under a common household construct. 

The Right Household Isn’t Always the Right Moment

The best illustration of what complete addressability looks like is Meta.

What makes Meta effective is not simply that it knows who you are. It knows who you are and what you are consuming at that exact moment.

Identity and context operate simultaneously.

The ad feels timely rather than intrusive because both signals are present at the same time.

CTV has the potential to operate the same way, but only if the data layer connecting those signals is robust enough to support it.

Imagine the same MacKenzie-Childs advertisement running in the middle of a dark crime thriller. The household may still be correct. The moment is not.

Buyers purchase CTV inventory by application name. That is largely a design limitation of the ecosystem. But an application is not the same as content.

Paramount+ is not a single viewing environment. Disney+ is not a single viewing environment. An interior design show and a UFC fight do not create the same audience mindset simply because they appear inside the same app.

When buyers cannot see the environment clearly, the market tends to default to price. And when the market defaults to price, the inventory that clears most efficiently is rarely the inventory that delivers the strongest advertising outcome.

Context changes outcomes.

Studies of contextual targeting in CTV have shown meaningful lifts in ad recall, brand favorability, and downstream outcomes like store visits and sales when ads are aligned with the surrounding content rather than simply the platform delivering them (Upwave + IRIS.TV have published several examples of this).

Identity answers the question of who. Content intelligence answers the question of whether this is the right moment. Both matter.

Proving It Actually Worked

Attribution asks who converted. Incrementality asks what additional conversions occurred because the campaign existed. Those are very different questions.

Digital advertising has become extremely good at measuring demand capture. When a consumer is already close to purchase, showing up near the end of the journey and claiming credit is straightforward. It is measurable and often efficient.

It is not the same as demand creation.

CTV at its best operates as a demand creation channel: the one that introduces, persuades, and moves someone who was not already halfway to conversion toward action.

But proving that requires identity continuity across the entire lifecycle of a campaign. The identity used to select the audience must be the same identity used to measure the outcome.

If the identity layer changes between targeting and measurement, the test is already broken. Exposure counts cannot be trusted. Frequency is misread. And most critically, holdout groups become contaminated.

In the MacKenzie-Childs example, Sam's wife, Mary, saw the ad. Sam went to the store and made the purchase. There was no click. No digital breadcrumb connecting the exposure to the transaction.

In many measurement frameworks, that campaign receives no credit. Not because it failed, but because the system measuring it was never capable of observing the relationship in the first place.

Connecting that impression to that purchase requires an identity framework capable of maintaining the relationship between exposure and outcome over time. Not just at the moment of impression, but across the entire lifecycle of the campaign, including the lookback windows used in measurement.

When that continuity exists, incrementality tests can begin to measure true causal impact rather than correlation.
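The core arithmetic of a holdout test is simple; what makes it trustworthy is the identity continuity described above. This is a minimal sketch, assuming exposure assignment stayed stable from targeting through measurement, with invented counts.

```python
# Minimal sketch of an incrementality (holdout) lift calculation,
# assuming the identity used for targeting is the same one used for
# measurement. All counts are invented for illustration.

def incremental_lift(exposed_conv, exposed_n, holdout_conv, holdout_n):
    """Relative lift of the exposed conversion rate over the holdout baseline."""
    exposed_rate = exposed_conv / exposed_n
    holdout_rate = holdout_conv / holdout_n
    return (exposed_rate - holdout_rate) / holdout_rate

# 1.2% conversion among exposed households vs 1.0% in the holdout
lift = incremental_lift(exposed_conv=120, exposed_n=10_000,
                        holdout_conv=100, holdout_n=10_000)
print(f"Incremental lift: {lift:.0%}")  # 20%
```

If holdout households are contaminated by misattributed exposures, the baseline rate inflates and the measured lift shrinks toward zero, regardless of how well the campaign actually performed.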

Attention measurement represents the next layer on top of this framework. Delivery is not exposure. Exposure is not attention. And attention is not guaranteed impact.

The industry has begun to incorporate attention metrics into cross-platform measurement models because simply counting impressions does not fully explain advertising outcomes. Understanding whether an ad was actually viewed and cognitively registered improves the quality of incrementality models.

Measurement infrastructure is evolving, but structural reform cannot move faster than the identity systems underneath it.

What Actually Has to Be True

Mary sees the ad.

It reaches the right household in the right content environment at the right moment. She remembers it the next morning. Sam drives to Target. The purchase happens.

And the system is capable of connecting that outcome back to the exposure that created it. None of that happens by accident.

Every layer underneath it has to work. The supply path has to preserve the original signal. The identity layer has to remain stable from targeting through measurement. The content environment has to be visible to the buyer. And the measurement framework has to observe the relationship between exposure and outcome.

If any of those layers break, the outcome still happens in the real world but disappears from the data meant to measure it.

That is the central problem the open internet still has to solve. Fixing it requires structural changes to both the technology stack and the business relationships that govern how data moves through it.

The identity used to select the audience must be the same identity used to measure the outcome. Not translated halfway through the supply chain. Not swapped out between exposure and attribution. The same identity system must persist across the entire lifecycle of the campaign.

That starts with how the publisher signal enters the system.

A deterministic data match between a publisher’s authenticated user and the identity spine creates that foundation. When a publisher’s first-party user data is securely matched against a persistent identity framework, the signal entering the system no longer relies on the translation layers that introduce uncertainty into the bidstream. The relationship between the publisher and the audience is preserved at the point of activation.

From there, identity must resolve at the household level.

People buy things as individuals, but advertising outcomes frequently occur across the household. The person who sees the ad is not always the person who completes the purchase. A durable identity spine anchored to a terrestrial address provides the stable reference point required to connect those outcomes over time.

Underneath that spine sits the clustering layer. Individual identifiers, operating system level IDs, emails, devices, and alternative identifiers resolve upward into the household construct rather than competing as separate identity systems. The household becomes the persistent anchor, while the surrounding identifiers provide the observable signals that connect exposures, devices, and outcomes across the campaign lifecycle.
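The clustering layer described above can be sketched as a simple data structure: observed identifiers resolve upward to a household anchor rather than competing as separate identity systems. This is a hypothetical sketch; the class, names, and IDs are invented for illustration.

```python
# Hypothetical sketch of a household-anchored clustering layer: observed
# identifiers (emails, device IDs, CTV app IDs) resolve upward to a
# household keyed by terrestrial address. All names and IDs are invented.

from collections import defaultdict

class HouseholdGraph:
    def __init__(self):
        self._households = defaultdict(set)  # address -> identifiers
        self._index = {}                     # identifier -> address

    def link(self, identifier, address):
        """Cluster an observed identifier under its household anchor."""
        self._households[address].add(identifier)
        self._index[identifier] = address

    def household_of(self, identifier):
        """Resolve an identifier to its household, or None if unknown."""
        return self._index.get(identifier)

graph = HouseholdGraph()
graph.link("mary@example.com", "123 Main St")
graph.link("sam@example.com", "123 Main St")
graph.link("ctv-device-42", "123 Main St")

# An exposure on the CTV device and a purchase tied to Sam's email
# resolve to the same household anchor.
assert graph.household_of("ctv-device-42") == graph.household_of("sam@example.com")
```

Under this construct, the CTV exposure and the in-store purchase from the opening story land on the same anchor, which is exactly what lets measurement connect them without a click.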

Content identification and alignment must be equally precise.

Understanding the household is only half of the equation. The system must also understand the content environment in which the ad appears. IRIS_ID provides that layer, creating a standardized identity for video content itself so buyers can evaluate the actual programming environment rather than relying on application-level labels.

Together, these layers create continuity.

Performance is only as credible as the identity system underneath it.

If the open internet cannot preserve identity continuity from targeting through measurement, budgets will continue flowing toward platforms that already can. The companies with closed ecosystems do not face these problems. The window to solve this is not unlimited.
