The dairy farmer in Normandy wakes at five. His herd grazes on salt-meadow grass, the kind that grows only in pastures flooded twice daily by Atlantic tides. The milk his cows produce has a higher butterfat content, a distinct mineral profile, a flavour you can taste in the finished cheese.
It is, by any objective measure, exceptional.
At collection, it goes into the same tank as standard industrial milk. No grading. No clear labelling. No traceability at the decision point. The cooperative tests for basic standards — both pass. Both farmers are paid $0.38 per litre.
The Normandy farmer protests: "My milk is different. Anyone who tastes it can tell." The cooperative shrugs: "Maybe. But buyers cannot verify that at scale in the tank."
That is exactly what happens in programmatic.
Publisher value exists before the auction. It exists in context, attention conditions, user dynamics, quality controls, and supply behaviour. But when that value is weakly expressed in the decisioning layer, premium supply is treated as average supply.
The open web is not broken. It is unstructured.
The issue is not that value does not exist. The issue is that the value is not clear enough for buyers to evaluate where pricing decisions are made.
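To make that concrete, here is a minimal Python sketch of the same impression seen two ways: as most buyers see it today, and carrying decision-useful supply signals. Every field name, the `quality_signals` block, and the toy valuation are hypothetical illustrations, not an existing standard.

```python
# Hypothetical sketch: the same impression, expressed two ways.
# Nothing here is a real spec; it only illustrates the argument.

# What the decisioning layer typically sees: enough to transact,
# not enough to differentiate.
opaque_request = {
    "domain": "premium-news.example",
    "ad_format": "banner_300x250",
    "bidfloor": 0.50,
}

# The same impression with its value expressed where pricing happens.
expressive_request = {
    "domain": "premium-news.example",
    "ad_format": "banner_300x250",
    "bidfloor": 0.50,
    "quality_signals": {              # hypothetical extension
        "viewability_p50": 0.87,      # measured, not merely claimed
        "active_attention_sec": 11.2,
        "ads_per_page": 3,
        "refresh_policy": "none",
    },
}

def toy_buyer_value(request: dict) -> float:
    """Toy valuation: with no signals, fall back to a flat category prior."""
    signals = request.get("quality_signals")
    if signals is None:
        return 0.60  # blunt proxy: average CPM for the category
    # With signals, the model prices this impression, not the average one.
    return 0.60 * (1 + signals["viewability_p50"]) * (
        1 + signals["active_attention_sec"] / 30
    )

print(toy_buyer_value(opaque_request))      # 0.60 -> priced as average
print(toy_buyer_value(expressive_request))  # ~1.54 -> premium recognised
```

The point is not the arithmetic; it is that the second request gives a buyer model something to act on at the decision point.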
Programmatic advertising is structurally imbalanced. The buy side holds most of the decisioning intelligence; the supply side is left reactive and too often commoditised. This imbalance is what keeps the supply chain from pricing value correctly.
It did not happen by accident. The buy side invested in algorithmic decisioning: modelling, optimisation, and machine-driven execution. The supply side invested in access: more integrations, more routes, more pipes.
The assumption was that more routes would create more competition and better outcomes. In practice, route multiplication increased duplication and noise faster than it improved value recognition. It expanded access, but did not reliably improve how value is understood.
So decisioning remains one-sided. Buyers optimise with buy-side intelligence for buy-side objectives. Publishers participate, but with limited influence over how their value is recognised and priced.
This is not a minor optimisation issue. It is structural.
The market does not evaluate all available supply. It evaluates what survives filtering.
Throttling is not inherently wrong. The problem is throttling under weak or noisy signals. When high-value and low-value opportunities are difficult to distinguish early, filtering logic removes both. Premium opportunities disappear before they can be correctly evaluated and priced.
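A small simulation shows the mechanism. It assumes a throttle that keeps only the top-scoring 30% of traffic, and that the score the filter sees is true value plus noise; all parameters are illustrative, not drawn from any real system.

```python
# Sketch: pre-auction filtering under noisy signals. The throttle keeps
# the top-scoring share of traffic; when the observed score is mostly
# noise, premium impressions are dropped almost as readily as average
# ones. All parameters are illustrative.
import random

random.seed(7)

def survival_rate(signal_noise: float, keep_fraction: float = 0.3,
                  n: int = 100_000) -> float:
    """Fraction of genuinely premium impressions that survive the throttle."""
    # True value: 10% of impressions are premium (value 1), rest average (0).
    imps = [(1 if random.random() < 0.10 else 0) for _ in range(n)]
    # Observed score = true value + noise; the filter only sees the score.
    scored = [(v + random.gauss(0, signal_noise), v) for v in imps]
    scored.sort(reverse=True)
    kept = scored[: int(n * keep_fraction)]
    return sum(v for _, v in kept) / sum(imps)

print(f"clean signal: {survival_rate(0.05):.0%} of premium survives")
print(f"noisy signal: {survival_rate(2.00):.0%} of premium survives")
```

In this toy setup, roughly half of the premium inventory vanishes under the noisy signal before any buyer evaluates it, even though the filter was never designed to target it.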
Value leakage happens before auction.
Even when impressions pass filtering, buyer algorithms only optimise across what they can see and interpret.
If high-value opportunities are not surfaced in decision-useful ways, they remain invisible. This is why publishers with comparable quality can generate very different outcomes: one publisher’s value is discoverable in the decisioning layer, the other’s is not.
Buyers cannot consistently bid on value they cannot clearly identify.
When signals are weak, inconsistent, or easy to imitate, decision quality deteriorates. Systems fall back on blunt proxies. Precision drops. Waste rises. Fraud increases. Commoditisation accelerates.
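One way to picture the imitation problem: if buyers discount a claimed signal by how often it has historically survived verification, an easily imitated claim converges toward zero influence on the bid. The trust model and numbers below are illustrative assumptions, not any real DSP's logic.

```python
# Sketch: why easy-to-imitate signals stop moving bids. If a claimed
# signal is not verified, buyers learn to discount it toward zero, and
# pricing collapses back onto the blunt proxy. All numbers illustrative.

def trust_weight(verified_rate: float) -> float:
    """Buyer-side discount: how often the claim survived verification."""
    return max(0.0, min(1.0, verified_rate))

def bid(base_cpm: float, claimed_uplift: float, verified_rate: float) -> float:
    """Blend the claimed premium with the proxy, weighted by trust."""
    return base_cpm * (1 + claimed_uplift * trust_weight(verified_rate))

# A genuinely premium publisher claiming +60% value:
print(bid(0.60, 0.60, verified_rate=0.95))  # ~0.94: the claim is priced in
# The same claim in a market where anyone can assert it:
print(bid(0.60, 0.60, verified_rate=0.10))  # ~0.64: the claim is ignored
```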
This is the paradox publishers live with: creating real value while struggling to convert that value into consistent pricing outcomes.
In Normandy terms, the milk can be exceptional. But if the market cannot verify it at the transaction point, it is priced as ordinary.
The farmer could provide documentation: veterinary records, grazing schedules, butterfat lab tests, testimonials from Michelin-starred chefs. But if the cooperative doesn't transmit that information to buyers, or if buyers have learned not to trust producer claims after being burned before, the certification becomes decorative. The signal exists. It's just not credible or accessible where pricing decisions actually happen.
Publishers lose first and most directly. Like the Normandy farmer, they carry the production cost of quality — investigative journalism, rigorous fact-checking, active community moderation — but cannot consistently convert that investment into pricing power because the transaction layer cannot distinguish their inventory from cheaper substitutes. The math is brutal: spend more to produce quality, receive the same CPM as content farms, watch margins compress until quality becomes economically irrational. They become price-takers in a system that rewards what it can algorithmically verify, not what is actually best.
SSPs are also affected. They are asked to create value in a noisy chain where upstream inputs are hard to interpret. Match quality suffers. Meaningful opportunities are missed. Margin pressure increases.
Buyers feel it too. Campaigns can still hit surface metrics, but not always the outcomes they actually want. Trust erodes when decision-making is forced through incomplete proxies.
Everyone loses when supply value is hard to read. Publishers lose most because they are the ones holding the underpriced asset.
What is required now is not incremental. It is structural.
The open web is missing a Supply Intelligence layer: one that makes supply identifiable, differentiable, and decisionable where market decisions are made.
In agriculture, this problem was solved through certification systems like Appellation d'Origine Contrôlée. The AOC label doesn't make the milk better — it makes the value visible to buyers at scale, allowing certified producers to command premium pricing even in commodity markets.
Programmatic needs the equivalent. In practical terms, this means: signals that make supply identifiable, packaging that makes premium inventory discoverable, and quality standards that travel to the bid request — so buyer algorithms can recognise value at the speed of decisioning.
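As a sketch of what "quality standards that travel to the bid request" could look like: OpenRTB reserves `ext` objects for extensions, and a supply-intelligence payload could ride there. The `supply_intel` namespace and every field inside it are hypothetical illustrations, not a published specification.

```python
# Hypothetical OpenRTB-style bid request carrying supply signals.
# The "supply_intel" extension and its fields are invented for
# illustration; only the surrounding structure follows OpenRTB 2.x.
import json

bid_request = {
    "id": "8f2c1a",
    "imp": [{
        "id": "1",
        "banner": {"w": 300, "h": 250},
        "bidfloor": 0.50,
        "ext": {
            "supply_intel": {                # hypothetical namespace
                "tier": "premium",           # packaging: a discoverable segment
                "viewability_p50": 0.87,     # measured at placement level
                "attention_sec_p50": 11.2,
                "verified_by": "measurement-provider.example",
            }
        },
    }],
    "site": {"domain": "premium-news.example"},
}

print(json.dumps(bid_request, indent=2))
```

The design choice that matters is location: the signal sits inside the request itself, so it is available at the only moment a buyer algorithm can act on it.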
Three questions worth asking about your own supply:
1. Can buyers reliably identify our best inventory in the bidstream?
Having premium content is not enough; the question is whether models can distinguish our high-value opportunities at the decision point.
2. How much value are we losing before auction through filtering and throttling?
If high-value opportunities are bundled with weaker ones, leakage is inevitable; a measurement sketch follows these questions.
3. Are our signals decision-useful or merely descriptive?
If signals do not change buyer behaviour, they are not protecting the margin.
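On the second question, here is a rough way to quantify pre-auction leakage, assuming you can log an estimated value for opportunities entering the funnel and for those surviving filtering. The helper and the numbers are illustrative.

```python
# Sketch: quantifying pre-auction leakage, assuming impressions can be
# logged before and after filtering/throttling with an estimated value
# attached to each. Names and numbers are illustrative.

def leakage(pre_filter: list[float], post_filter: list[float]) -> float:
    """Share of estimated value removed before the auction ever runs."""
    return 1 - sum(post_filter) / sum(pre_filter)

# Estimated CPM value of opportunities entering vs surviving the funnel:
entering = [2.4, 0.3, 1.9, 0.4, 0.5, 2.1, 0.3]
surviving = [0.3, 0.4, 0.5, 0.3, 1.9]  # two premium opportunities dropped

print(f"value leaked pre-auction: {leakage(entering, surviving):.0%}")
```

In this toy example, 57% of estimated value never reaches an auction, even though most impressions do.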
The Normandy farmer's milk was always exceptional. The problem was never quality. The problem was that the market had no mechanism to recognise quality at the moment pricing decisions were made.
For decades, this was accepted as inevitable: small-scale quality cannot survive in commodity markets. Then the AOC system proved otherwise. Proper labelling infrastructure — standardised, verified, transaction-level — allowed premium products to capture premium pricing at scale.
Publishers are at the same inflection point.
Publishers’ inventory is not underperforming because it lacks quality. It's underperforming because quality is invisible in the decisioning layer. The buyers exist. The willingness to pay exists. What's missing is the intelligence infrastructure that makes differentiation clear and verifiable at auction speed. And unlike quality, which publishers spent years building, labelling infrastructure can be implemented at the supply chain level.
That infrastructure is called supply intelligence. And the gap increasingly determines whether premium inventory captures premium pricing or drifts toward commodity rates: every auction, every day, from every DSP that cannot tell your best inventory from average supply.
This is the direction behind Pubstack’s Supply Intelligence work and the thinking behind Spark.