The Fifty-Year Problem at the Heart of Innovation

The innovation industry has known about its core problem for a long time. Not years. Decades.

Date / Category: Transformation / Writer: Chris Johns

In the 1970s, behavioural economists began documenting a consistent and uncomfortable finding: what people say they will do, and what they actually do, are not the same thing. The gap isn't marginal. In some contexts it's enormous. And it's not because consumers are dishonest. It's because the conditions of a survey — no real alternatives, no real stakes, no real money leaving a real wallet — don't produce real decisions. They produce guesses. Thoughtful, well-intentioned guesses. But guesses.

Economists gave this a name. Stated preference. It describes the data you get when you ask people what they would do. And for fifty years, it has been the primary evidence base on which product launches are built.

That's the problem.

What the research actually says

The field that emerged to challenge stated preference is called revealed preference — a concept formalised by Paul Samuelson in the late 1930s and developed extensively through the latter half of the twentieth century. The core idea is straightforward: if you want to understand what someone values, don't ask them. Watch what they choose when the choice is real.

This isn't a fringe position. It's mainstream behavioural economics. Kahneman and Tversky's work on decision-making under uncertainty. John List's field experiments demonstrating that laboratory findings often collapse in real market conditions. Decades of research showing that hypothetical willingness-to-pay consistently overstates actual willingness-to-pay — sometimes by a factor of three or more.

The industry knows this. The research has been there for a long time. And yet the dominant consumer research toolkit — surveys, focus groups, concept testing — remains almost entirely built on asking people what they'd do, rather than observing what they do.

The reason is partly practical and partly structural. Getting revealed preference data used to mean launching. And launching meant committing: production runs, retailer listings, distribution agreements, marketing budgets. The evidence arrived when the bets were already irreversible.

So the industry made a pragmatic compromise. It kept using stated preference, because that was the data you could get before the commitment. And it hoped the gap between what consumers said and what they did would be narrow enough not to matter.

Sometimes it is. Often it isn't.

What happens in practice

Picture a brand that has done everything right by conventional standards. They've run a quantitative concept test. Strong purchase intent scores — 40% definitely or probably would buy. They've run focus groups. Positive response. Good energy in the room. Internal alignment is high. The senior team is confident. The launch goes ahead.

Six months later, the product is underperforming. Reorder rates are low. Distribution is pulling back. The post-mortem begins.

What they find, eventually, is that the purchase intent scores were real — as a measure of what consumers said in a survey. But when those same consumers stood in front of a shelf with twenty alternatives, at a real price, with real money, they made a different decision. The category was more competitive than the concept test had accounted for. The price point felt different in context. The packaging didn't stand out the way the mock-up had suggested.
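The arithmetic behind that gap can be made concrete with a toy calibration. Some forecasting practice discounts stated intent scores before projecting trial; the split of the 40% score and the discount weights below are illustrative assumptions, not measured figures from this article.

```python
# Toy illustration: why a 40% "definitely or probably would buy" score
# can imply a much smaller real-world trial rate.
# The say-do discount weights are hypothetical assumptions for
# illustration, not calibrated industry values.

def expected_trial_rate(definitely: float, probably: float,
                        w_definitely: float = 0.5,
                        w_probably: float = 0.15) -> float:
    """Discount each stated-intent band by an assumed say-do weight."""
    return definitely * w_definitely + probably * w_probably

# Suppose the 40% score splits into 15% "definitely" and 25% "probably".
stated = 0.15 + 0.25                        # 40% stated intent
implied = expected_trial_rate(0.15, 0.25)   # roughly 11% implied trial

print(f"Stated intent: {stated:.0%}, implied trial: {implied:.1%}")
```

Under these assumed weights, a 40% stated score shrinks to an implied trial rate of around 11% — the same order of gap the post-mortem above describes.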

None of this was hidden. It was just invisible to the tools being used to look for it.

This story is not unusual. The 70-80% failure rate for new consumer product launches has been a persistent industry figure for decades. It hasn't moved meaningfully, despite better research tools, better data, and more sophisticated internal processes. The methodology at the centre of the decision hasn't changed.

The gap that can be closed

Here's what I've come to believe, after two decades of working on launches across categories: the problem is not that revealed preference is unavailable pre-launch. It's that nobody built the infrastructure to get it there.

The tools exist. Real paid media can be run against a Minimum Viable Brand — a believable but uncommitted version of the concept — before a single unit is produced. Real consumers can be served real ads, driven to a real landing page, and asked to make a real decision with their attention, their data, and increasingly their money. The behavioural signal that used to arrive six months after launch can be generated six months before it.

This is not a replacement for qualitative research. Consumer interviews still surface the texture — the language people use, the emotional associations, the friction points that numbers alone don't reveal. Synthetic audience testing can identify directional signals at speed and low cost before committing to a full programme. Both are part of a rigorous process.

But neither produces the thing that matters most: observed behaviour under real conditions. That requires actually putting the concept in front of real consumers with real choices to make. And that can be done before the irreversible commitments are placed.

Why this matters now

The case for moving revealed preference earlier isn't new in theory. Field experiments and live demand testing have existed as concepts for years. What's changed is the infrastructure.

Paid social media has made it possible to reach precisely defined audiences at low cost. Landing page tools have made it possible to simulate a purchasing environment in days rather than months. Analytics have made it possible to measure not just what people say when they see a concept, but what they do — click rates, conversion rates, dwell time, sign-up behaviour. The data that was once only available post-launch is now available pre-launch, if you build the right process around it.
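A process built around those metrics can be sketched in a few lines. The structure, field names, and the 2% conversion threshold below are illustrative assumptions, not a prescribed methodology from the article.

```python
# Minimal sketch of scoring a pre-launch demand test against a
# go/no-go threshold. Names and the 2% bar are hypothetical.

from dataclasses import dataclass

@dataclass
class DemandTest:
    impressions: int   # people served the real ad
    clicks: int        # people who clicked through to the landing page
    conversions: int   # people who signed up or pre-ordered

    @property
    def ctr(self) -> float:
        """Click-through rate: interest shown under real conditions."""
        return self.clicks / self.impressions

    @property
    def conversion_rate(self) -> float:
        """Share of visitors who took the real decision."""
        return self.conversions / self.clicks

def decision(test: DemandTest, min_conversion: float = 0.02) -> str:
    """Green-light only when observed behaviour clears the bar."""
    return "go" if test.conversion_rate >= min_conversion else "stop"

test = DemandTest(impressions=50_000, clicks=1_200, conversions=36)
print(f"CTR {test.ctr:.2%}, conversion {test.conversion_rate:.1%} "
      f"-> {decision(test)}")
```

The point of the sketch is the shape of the decision, not the numbers: the input is observed behaviour from real ads and a real landing page, and a "stop" result is treated as a valid outcome rather than a failure.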

The brands that recognise this shift earliest will make better launch decisions. Not because they'll always get a green light — sometimes the evidence will tell them to stop, and that's a success too. But because they'll make those decisions on the basis of what consumers actually do, rather than what they say they might.

Fifty years of research told us stated preference wasn't enough. The infrastructure to do something about it finally exists.

The question is whether the industry is ready to use it.