It's time to innovate the innovation process

Marketing got a revolution. Innovation is still waiting for one.

Date

Category

Transformation

Writer

Chris Johns

Over the past decade, the marketing function has been fundamentally rebuilt. Real-time data. Behavioural targeting. Multivariate testing. Attribution modelling. We know more about how consumers respond to messages than at any point in history.

And yet the process that decides what to launch — the validation methodology sitting upstream of all that precision — has not changed.

It still looks like this:

— Internal stakeholder sessions where the most senior opinion wins
— A consumer survey asking people what they'd be likely to buy
— A focus group producing rich qual that gets filed in a deck
— A launch decision made on gut feel and competitive pressure

The problem is not the research. It's the architecture.

Most brands going into a launch have more research than they can act on. Surveys. Qual. Internal alignment documents. Category data. They've spent real money on it.

But it all sits in silos. There's no integrating layer. No scoring. No mechanism that takes everything they already have and produces a clear, weighted verdict.

So when the launch decision arrives, it defaults to the same dynamic it always has: whoever makes the best argument in the room wins. The HiPPO dynamic — Highest Paid Person's Opinion — isn't a cultural failure. It's a structural one. In the absence of an integrated decision framework, seniority fills the gap.

The missing piece is behavioural.

Survey data tells you what consumers say they'll do. Focus groups tell you how they feel about a concept. Neither tells you what they actually do when a real decision is in front of them.

Behavioural testing does. Real paid media. Real landing pages. Real purchase signals. Not hypothetical intent — observed behaviour.

This isn't a new idea in isolation. Field experiments, revealed preference methodology, live demand testing — these have existed for years. What hasn't existed is a structured programme that integrates behavioural testing with everything else a brand already has, scores it across consistent dimensions, and produces a clear go/no-go verdict in a timeframe that's usable.

The irreversibility problem.

The moment that makes all of this matter is the one just before a brand commits to a production run, agrees a retail listing, or signs off a launch budget. That commitment is largely irreversible. You can't un-book the factory. You can't un-agree the listing terms.

Everything upstream of that moment is a cost. Everything downstream is a consequence.

The validation process sits right at that boundary — and right now, most brands arrive at it with fragmented, unintegrated, non-behavioural evidence. The first real signal of whether the launch will work arrives as a sales number. At the one point where it can't change anything.

What the transformation actually looks like.

It starts with the architecture, not the research. Most brands already have the components — surveys, qual interviews, internal alignment sessions. The issue is that none of it is connected, none of it is scored, and none of it is behavioural.

The transformation means adding the missing layer: synthetic audience testing that models real consumer response at scale. Consumer interviews that add depth to the signal. Live behavioural campaigns that measure what people actually do — not what they say they'll do.

All connected. All scored. One output that tells you whether to go, refine, or stop — before the irreversible commitment is made.
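The "connected and scored" layer can be pictured as a small weighted-scoring function. This is a hypothetical sketch only: the evidence streams, weights, and thresholds below are illustrative assumptions, not the actual methodology described in this article.

```python
# Illustrative only: evidence streams, weights, and thresholds are assumptions.
# Behavioural evidence is weighted most heavily, stated intent least.
EVIDENCE_WEIGHTS = {
    "survey_intent": 0.15,       # what people say they'll buy
    "qual_depth": 0.15,          # interview / focus-group findings
    "synthetic_audience": 0.25,  # modelled consumer response
    "behavioural": 0.45,         # what people actually do
}

def launch_verdict(scores: dict[str, float]) -> str:
    """Combine 0-100 scores per evidence stream into one verdict:
    'go', 'refine', or 'stop'. Thresholds are illustrative."""
    weighted = sum(EVIDENCE_WEIGHTS[k] * scores[k] for k in EVIDENCE_WEIGHTS)
    if weighted >= 70:
        return "go"
    if weighted >= 45:
        return "refine"
    return "stop"

# High stated intent but weak observed behaviour pulls the verdict down.
print(launch_verdict({
    "survey_intent": 80,
    "qual_depth": 70,
    "synthetic_audience": 60,
    "behavioural": 40,
}))  # prints "refine" (weighted score 55.5)
```

The point of the sketch is the asymmetry: because behavioural evidence carries the largest weight, strong survey intent alone cannot produce a "go".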

Not a deck of findings. A decision.

Brands have been investing in the components for years. The gap is the system that connects them.

That's what needs to change.

And that's what we've built.