
Why Your Marketing Keeps Failing (And It's Probably Not Your Strategy)

By Oren Greenberg, March 28, 2026

I keep hearing the same thing dressed up differently: we know what we should be doing, but we can't make it actually work.

The revenue leader reckons the strategy needs fixing. The marketing leader reckons they're doing the work but something's off. Both of them are looking at the strategy, and from what I'm seeing, that's almost never where the problem actually lives.

The market is full of folk selling better GTM playbooks. Tighter ICP definitions. More sophisticated demand gen frameworks. And organisations keep buying them, because the problem genuinely looks like a strategy problem from the outside.

But go into the actual system - the ad account, the CRM, the team's day-to-day workflow, the data infrastructure - and you almost always find something else entirely. Something that has nothing to do with the quality of the plan.

I've found five patterns that come up again and again. None of them are the strategy.

1. The marketer is trying. That's what makes it hard to see.

The most common pattern, and the slowest to diagnose.

A revenue leader has a series of disappointing moments. The marketer said they'd do something and didn't, or they're oblivious to why a deadline matters, or the results keep coming in below expectation. But the marketer is clearly doing 'stuff.' Some of that stuff is doing 'something.' So there's not enough pain to force a change - until eventually there is.

Junior effort cannot substitute for senior pattern recognition, no matter how hard the junior works. That's the core of this one. A junior or mid-level marketer can run campaigns, produce content, test variations - but they can't see which variable is actually the problem. They optimise what's in front of them. Someone with fifteen years of experience walks in and changes the thing they didn't know to look at.

I worked with one company where the marketer had been at it for eight months - tweaking creative, testing different audiences on Meta, iterating on copy. Proper busy. But they were running variants on the wrong channel entirely, targeting audiences that weren't in market. I came in, shifted spend to Google Ads to reach people actively searching, changed the messaging around the value prop and the customer journey, and got them from zero to 500 enquiries a day in six weeks. The marketer was genuinely learning - but he didn't have the experience to see that the channel was the problem, not the creative.

That's what makes this invisible. The output - the campaigns, the content, the reports - all looks like strategy work. So the revenue leader commissions a new strategy. New positioning. New messaging framework. None of it changes anything because the person executing it still can't tell which lever to pull. The actual gap never gets named, because naming it feels brutal.

2. The strategy was built for a team that doesn't exist yet.

This one is related but distinct. The team isn't necessarily bad at what they do - there just aren't enough of them to cover what's been scoped.

A three-person team with a junior-to-mid skill mix cannot run signal-based outbound, programmatic SEO, and paid social simultaneously - and do any of them properly. But that's often exactly what gets planned. Everyone nods in the meeting. Nobody does the maths on whether the capacity exists. The gap doesn't show up in the planning document. It shows up three months later as slow progress and missed commitments.

The strategy calls for a proper outbound engine. The execution is one person doing manual data entry. From the outside, outbound is 'running.' The gap between what was planned and what's actually happening is enormous, and nobody's looking at the operational level closely enough to see it.

The activity exists. The infrastructure doesn't. And because nobody's comparing the strategy document against what's actually happening day-to-day, the gap quietly compounds.

So the board concludes the GTM motion isn't working and brings in a consultant to redesign it. The motion was fine. There just weren't enough people to run it.

The fix isn't always "hire more people." Sometimes it's narrowing the scope to what the team can genuinely execute with quality. But you can't make that call until you've looked at what's actually happening on the ground, not just what the plan says should be happening.

3. The marketing leader has a plan. The CEO doesn't believe in marketing.

Sometimes the strategy is right, the team is capable, the execution is happening - and the results still don't come. Not because of anything in the marketing function. Because of what's happening above it.

The pattern looks like this: a marketing leader who has a budget, has a plan, has genuine capability - but is fighting for meeting time, chasing sign-offs, and watching alignment drag on for weeks. Every decision takes longer than it should. Every initiative needs another round of approval. From the outside it looks like slow execution. What it actually is: an organisation that doesn't believe marketing contributes meaningfully to revenue.

I had a contact describe it to me recently. The CEO keeps hiring more salespeople. The marketing budget is modest. The channels that are 'working' are partners and outbound sales - channels that exist completely independently of anything the marketing function does. Marketing is tolerated, not believed in.

The marketing leader knows this, by the way (at least the ones I've worked with do). They're not oblivious. But they think they can fix it with results, or they think belief will build over time, or they're just keeping the function alive until the political winds shift.

From what I've seen, most of them are wrong about all three.

No amount of better execution fixes a fundamental belief problem at leadership level. The organisation won't give marketing the resources, the authority, or the runway to actually succeed - and so marketing keeps underdelivering, which confirms the original belief that it doesn't work. It's a closed loop. And it's invisible because it looks exactly like a marketing execution problem from every angle.

The CEO looks at the pipeline numbers, concludes marketing isn't delivering, and cuts the budget further. Which confirms exactly what they already believed. Nobody is looking at whether the organisation was ever set up to let marketing succeed in the first place.

You can have the right strategy and the right team and still get nowhere if the belief isn't there above you. That's not a marketing problem. That's an organisational one. And it's genuinely one of the harder things to name, because naming it means questioning decisions that were made well above the marketing function.

4. The thing got built. Nobody's using it.

This one I've seen up close, and it's maddening.

Loads of time and real effort goes into building something genuinely good - a scoring model, a dashboard, a set of automations that would actually work - and then the team routes around it, doing things manually because that's what they know.

The output exists. The behaviour doesn't change.

What I've seen specifically: a sales team handed a predictive scoring model and proper data infrastructure, and their response was essentially that they'd ignore it and manually hack leads on LinkedIn instead. Not because the model was wrong. Because using it required them to change how they worked, and nobody had made that case, set that expectation, or built any accountability around it. The tool was real. The adoption plan wasn't.

Building something is satisfying and visible - you can show it, demo it, stick it in a board update. Changing how a team actually works day-to-day is slow and invisible and genuinely difficult. So organisations fund the first thing and assume the second will follow.

The cost of this is invisible in a specific way: the organisation spent the money, so they think they made the investment. The tool exists, so they think they have the capability. But the capability is theoretical. The behaviour hasn't changed, which means the results won't change either. Eventually they'll conclude the tool didn't work. The real conclusion should be that adoption was never actually planned for - it was just assumed.

So leadership looks at the flat results and concludes the tool was a bad investment. They buy a different one. Same outcome. Driving adoption is a different project from building the tool, and most organisations treat the two as the same project.

5. The measurement is broken and everyone's making decisions on it anyway.

Technical, this one. And more common than anyone wants to admit.

Conversion tracking firing on the wrong events. Double-counting across platforms. A form collecting leads into a database nobody's checked since February. These aren't edge cases - I've seen all three running for weeks before anyone noticed.

One situation I saw: conversion tracking was broken for three consecutive weeks. Double-counting, wrong events firing, Chrome preview mode giving unreliable results. Three weeks of a company spending real money on ads with no reliable measurement. And the problem kept sliding because fixing it is unglamorous work - it's not a new campaign, it's not a new strategy, it's not anything you can put in a board update with a sense of momentum.

So it slides. And the reason it slides is often structural: the fix sits in the cross-functional gap between teams. I saw this recently with a company where tracking had been broken for months. Finance didn't own it. Web ops didn't own it. Engineering had capacity issues. The problem lived in the overlap between three teams, and because none of them fully owned it, it just sat there. That's the mechanism - it's not that nobody cares, it's that nobody's job description includes fixing it.

And while it slides, decisions get made on bad data. Channels get cut that were actually working. Budgets shift toward campaigns that look good in the dashboard but are benefiting from attribution errors. The team is optimising against the wrong signal.

The specific invisible quality of broken measurement is that it doesn't look broken. It looks like a working dashboard. The numbers are there. The graphs are moving. The reports get sent. Everyone in the meeting is looking at data and making decisions and feeling like they're running a data-driven marketing function (they're not - they're running a data-decorated one, which is a very different thing).

The CMO reviews the dashboard, sees paid isn't performing, and shifts budget to organic. Except paid was performing - the tracking was just broken. The strategy adapts to a reality that doesn't exist, and nobody questions whether the data underneath was ever trustworthy.

Why this keeps happening

Every one of these five patterns looks like a strategy problem from inside the organisation.

The capability gap looks like the strategy needing more refinement. The capacity problem looks like slow execution. The political dynamic looks like a marketing performance issue. The adoption failure looks like the tool not working. The broken measurement looks like the campaigns underperforming. Same surface symptoms, completely different underlying causes.

That's what makes this frustrating to watch. Organisations keep hiring strategy consultants and refining their GTM playbooks because that's what the problem looks like. And every time the strategy improves and the results don't follow, it confirms the belief that the strategy just needs more work. It's a loop that can run for years.

What breaks the loop isn't a better strategy. It's someone going into the actual system and finding where the operational or technical reality has diverged from what everyone assumed was happening. The ad account. The CRM. The team's actual day-to-day workflow. The data infrastructure. The org chart and who actually has authority to make decisions.

That diagnostic step is the thing most organisations skip - not because they don't care, but because it requires someone who understands marketing well enough to know what good looks like, and understands the operational and technical layer well enough to see where it's broken. Those two things don't often live in the same person.

If your marketing isn't working and the strategy looks fine, the strategy is probably fine. Stop refining the plan. Start looking at what's happening between the plan and the result.

Article by Oren Greenberg

A fractional CMO who specialises in turning marketing chaos into strategic success. Featured in over 110 marketing publications, including OpenView Partners, Forbes, Econsultancy, and HubSpot's blog. You can follow him on LinkedIn.
