
In paid media, one size never fits all. Whether you’re marketing a luxury product, a service in a highly regulated sector, or a charity initiative, each industry comes with its own challenges: diverse audiences, behaviours and budgets. The ability to adapt your campaign to the reality of the market you’re in is invaluable.
However, even the most tailored campaigns run into a common issue: not everything that influences a consumer can be tracked. Most attribution models rely on clicks and other clear, measurable interactions. Yet we know that not every engagement results in a click. Influence often begins long before a user takes any measurable action. For instance, a user may see an ad on Facebook, scroll past a TikTok, or spot your brand name in a Digital Out-Of-Home advert, and only convert weeks later through a direct search.
Those moments matter. They foster awareness and familiarity and in some cases, they’re the reason a user eventually converts. Unfortunately, this influence is rarely credited in platform dashboards.
I want to explore that disconnect and how marketers can approach attribution with a more realistic lens. We need to find ways to recognise influence that doesn’t result in clicks, and methods like Marketing Mix Modelling (MMM) can help us move beyond the limitations of click-based performance. Because when we understand what we’re not seeing, we can make more informed decisions, regardless of the market we are working in. It’s time to look beyond the click.
The problem with click-based attribution
Attribution should help us make smart decisions. Where to invest, what to optimise and which channels are really pulling their weight. But most attribution models put clicks on a pedestal.
Yes, clicks are easy to track. They come with timestamps and conversion paths. But while they tell us something, they don’t tell us everything.
In tools such as Google Analytics, attribution paths often present a narrow picture of the journey through overemphasis on the final click. But clicks alone don’t fully reflect how people make decisions.
In reality, user journeys are rarely linear. Someone might:
- Watch your YouTube ad, but never click
- Swipe through a Meta carousel without taking action
- Absorb your brand tone from a single LinkedIn scroll
They might not click on any of these. But each exposure still plays a role, particularly in the subconscious. It builds recognition, reinforces messaging and shapes perception, all of which can contribute to a future decision.
And yet, when that user converts a week later via a brand search, the final touchpoint gets all the credit, while everything else is forgotten.
When we rely solely on click-based attribution, we risk undervaluing brand activity, early-stage engagement, and platform synergy. It’s not that clicks don’t matter; they just don’t tell the full story. And if we want to be more strategic with how we allocate budget and interpret performance, we need to start acknowledging the signals we can’t always measure.
What happens if we over-rely on clicks
When attribution is based purely on clicks, we fall into the trap of optimising for what’s easiest to measure, not necessarily what’s most effective.
This can lead to a few common pitfalls:
- Over-investing in lower-funnel channels, because they show quick wins
- Under-funding upper-funnel activity, not because it isn’t working, but because it’s harder to prove
- Misjudging channel performance, especially when platforms over-claim conversions they played only a small role in
This challenge is even greater when working with limited budgets, where it’s natural to lean on what’s trackable. However, that can lead to short-term thinking: campaigns that perform well on the platform and in reporting but struggle to move the needle over time.
It’s not about ignoring constraints, it’s about finding smarter ways to account for impact, even when it isn’t directly clickable. And that starts with broadening our view of what performance looks like.
Moving beyond clicks without forgetting practicality
You don’t need a seven-figure media budget or a custom MMM to do attribution better. You just need to start looking for signals beyond the obvious ones.
MMM, when used by large brands, gives a high-level view of what’s working across all channels — even when people don’t click. It blends in sales, external factors (like seasonality), and media spend to model (estimate) the true impact of each channel. It’s far from perfect, and certainly not cheap, but it’s a reminder that clicks aren’t the only signal of success.
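At its core, MMM is a regression problem: explain sales as a function of spend per channel plus external factors. The sketch below illustrates that idea with made-up weekly numbers and a plain least-squares fit from NumPy; real MMM work involves far more data, adstock and saturation effects, and careful validation, none of which is shown here.

```python
import numpy as np

# Hypothetical weekly data: sales alongside spend per channel and a
# seasonality flag (1 = holiday week). All numbers are invented.
sales = np.array([120, 135, 160, 150, 210, 190, 140, 130], dtype=float)
search_spend = np.array([10, 12, 15, 14, 20, 18, 12, 11], dtype=float)
social_spend = np.array([5, 6, 8, 7, 12, 10, 6, 5], dtype=float)
holiday = np.array([0, 0, 0, 0, 1, 1, 0, 0], dtype=float)

# Design matrix: intercept (baseline sales) plus one column per driver.
X = np.column_stack([np.ones_like(sales), search_spend, social_spend, holiday])

# Ordinary least squares: estimate how much each driver contributes,
# including channels that never produced a tracked click.
coefs, *_ = np.linalg.lstsq(X, sales, rcond=None)
baseline, search_coef, social_coef, holiday_lift = coefs

print(f"Baseline weekly sales: {baseline:.1f}")
print(f"Sales per unit of search spend: {search_coef:.2f}")
print(f"Sales per unit of social spend: {social_coef:.2f}")
print(f"Holiday-week lift: {holiday_lift:.1f}")
```

The point isn’t the exact coefficients; it’s that the model attributes outcomes to spend and context rather than to whoever captured the last click.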
So, how do we apply that thinking on a smaller scale?
Below are some examples of how to begin to fill the gaps in click-based attribution:
- Look at trends, not just conversions
- Instead of only measuring CPA or ROAS, observe how brand search volume, direct traffic, or engagement metrics shift over time. These can indicate influence, even if they’re not directly attributed
- Run micro-tests to develop a deeper perspective of external factors
- Pause a campaign briefly, exclude a small region from a specific channel or phase out spending to a certain demographic. What happens to performance? These light-touch experiments can reveal what’s driving results — no big model required
- Ask customers how they found you
- A quick post-purchase question like “Where did you hear about us?” can reveal channels that played a role but didn’t get clicked, like a Facebook ad they scrolled past or a YouTube video they watched earlier
- Keep track of offline influence
- If your client is running events, print ads or earning press coverage, note that in your reporting. You might not have data on it, but a spike in searches or conversions after these moments is often no coincidence
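The micro-test idea above can be made concrete with a little arithmetic. This hypothetical sketch uses a difference-in-differences style comparison: pause a channel in one region (the holdout) and compare its change in conversions against a similar region where the channel keeps running. The function name and all figures are illustrative.

```python
# Hypothetical geo holdout: pause a channel in one region and compare
# conversions against a similar region where the channel keeps running.
def holdout_lift(control_before, control_after, holdout_before, holdout_after):
    """Rough difference-in-differences estimate of the paused channel's impact."""
    control_change = control_after / control_before
    holdout_change = holdout_after / holdout_before
    # If the holdout region fell further behind than the control region,
    # the gap is a rough estimate of what the channel was driving.
    return control_change - holdout_change

# Made-up weekly conversion counts before and after pausing the channel
lift = holdout_lift(control_before=500, control_after=520,
                    holdout_before=480, holdout_after=430)
print(f"Estimated relative lift from the channel: {lift:.1%}")
# → Estimated relative lift from the channel: 14.4%
```

Even a crude estimate like this can tell you whether a channel is doing more than its click-based numbers suggest.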
Try alternative attribution models in Google Analytics
Instead of relying on last click (which gives all the credit to the final step), try models like:
- “Position-based” (the U-shaped model) gives 40% of the credit to the first click, 40% to the last click, and spreads the remaining 20% across everything in between. It suits journeys with multiple steps, as it gives a more holistic view of how different channels contribute to conversions
- “Time decay” gives more weight to touchpoints that happened closer to the conversion. This is useful if you’re running short bursts of activity or promotions, as the model assumes that recent interactions are more impactful than those further in the past
They’re not perfect, but they’re a step closer to reality.
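For illustration, the arithmetic behind these two models can be sketched in a few lines of Python. The journey and helper names below are hypothetical, not Google Analytics’ implementation; the point is simply how the credit splits differ.

```python
def position_based(path):
    """U-shaped credit: 40% first, 40% last, 20% split across the middle."""
    n = len(path)
    if n == 1:
        return {path[0]: 1.0}
    if n == 2:
        return {path[0]: 0.5, path[1]: 0.5}
    credit = {}
    middle_share = 0.2 / (n - 2)
    for i, channel in enumerate(path):
        share = 0.4 if i in (0, n - 1) else middle_share
        credit[channel] = credit.get(channel, 0.0) + share
    return credit

def time_decay(path, half_life_days, days_before_conversion):
    """Weight each touchpoint by 2^(-days/half_life), then normalise."""
    weights = [2 ** (-d / half_life_days) for d in days_before_conversion]
    total = sum(weights)
    credit = {}
    for channel, w in zip(path, weights):
        credit[channel] = credit.get(channel, 0.0) + w / total
    return credit

# Example journey: YouTube ad → Meta carousel → brand search (converts)
journey = ["youtube", "meta", "brand_search"]
print(position_based(journey))
# → {'youtube': 0.4, 'meta': 0.2, 'brand_search': 0.4}

# Touchpoints 14, 7 and 0 days before conversion, with a 7-day half-life
print(time_decay(journey, 7, [14, 7, 0]))
```

Under last click, the YouTube ad and the Meta carousel would each get zero; either model above at least acknowledges they happened.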
Data-led, human-informed
Performance marketing is built on measurement. But that doesn’t mean we should blindly follow the numbers. Not every ad gets clicked. Not every view is logged. And not everything worth knowing can be found in a dashboard.
At Passion, we believe in marrying performance with imagination. That means trusting the data but also your instinct, your strategy and your understanding of how people behave.
Rethinking attribution isn’t about abandoning numbers. It’s about broadening our view. When we start recognising the value in what we don’t see, we get closer to the truth and better outcomes for our clients.
Imagine better. Measure smarter. That’s the Passion way.