A few months ago, a CMO I work with killed two thirds of her paid search spend on a Friday afternoon. Her board called the next Monday in mild panic. The HubSpot dashboard had been showing paid search as the top "first touch" source for 18 months. Last touch was direct traffic. Both told her almost nothing.
Pipeline went up the next quarter. Not because paid search was useless, but because the attribution model had been pointing her at the wrong inputs.
This is the part of B2B marketing that does not get said out loud often enough. Multi-touch attribution, the thing every CMO has been told to set up since 2018, is mostly broken in 2026. Not because the math is wrong, but because the data going in is wrong. Most of the buying journey happens in places your tracking pixel cannot see, and the touches you can see are the ones that matter least.
I have spent the last decade running RevOps for B2B SaaS companies, most recently as Founding GTM Engineer at Peec AI. I have built attribution models in HubSpot, Salesforce, and custom data warehouses on Snowflake. I have watched marketing teams optimize their way to record traffic and zero pipeline. Here is what I think is actually going on, and what to do about it.
What multi-touch attribution was built for
Multi-touch attribution came out of digital advertising in the late 2000s. The job was simple: a person clicks on a Google ad, then sees a Facebook retargeting ad, then comes back via email and buys a $40 pair of shoes. Which channel deserves credit? Linear, time decay, U-shaped, W-shaped, every model is a way to slice the credit across that visible click stream.
It works because in B2C, the click stream is most of the story. The buying decision is fast, mostly individual, and almost entirely on the device the marketer can track.
B2B is the opposite of every part of that.
A typical B2B SaaS deal in 2026 looks like this: 6 to 10 people on the buying committee. Average 27 touchpoints with vendor content before a sales conversation, per Forrester. 67% of the buying journey done before talking to sales, per Gartner's 2024 buyer survey. Buyers spend 17% of their evaluation time on vendor websites and 27% talking to peers in private channels.
The website touchpoints, the ones your HubSpot tracks, are the smallest slice. The peer conversations, the Slack DMs, the LinkedIn comments your buyer scrolled past three weeks ago, the podcast they were half listening to on a run, none of it shows up in the model. So the model fills in the credit with whatever it can see, which is usually paid search or organic search. And the CMO optimizes more spend toward exactly the wrong channel.
[Stat callout] Average share of the B2B buying journey that happens before a buyer ever touches your tracked website (Gartner, 2024). This is what your attribution model is missing.
Why your model is lying to you
There are three structural reasons multi-touch attribution gives B2B teams bad answers in 2026, and they have all gotten worse since 2020.
The first is the dark funnel. This is a phrase from Chris Walker at Refine Labs that has stuck because it describes something every B2B marketer has felt. Buyers research in places that do not pass UTM parameters. Slack communities, Discord servers, private LinkedIn DMs, podcast episodes, internal vendor lists shared in Notion docs, coffee chats with a friend who already bought your competitor. None of this attaches to a contact record. None of it shows up in your model.
The second is third-party cookie collapse. Safari started restricting in 2017. Firefox in 2019. Chrome's third-party cookie deprecation, while delayed several times, has effectively reduced the trackable identity graph by 60% or more depending on whose data you trust. iOS 14.5 broke Facebook's pixel for half the audience. The cross-device, cross-session click stream that multi-touch was built on is now a fraction of what it used to be.
The third is bot traffic and AI agents. By the start of 2026, an estimated 30% of B2B website traffic is non-human. Most of it is AI assistants pulling pages on behalf of buyers, plus traditional bots. They click links, they hit landing pages, they trigger the pixel, but they are not buying. Your model thinks a session happened. Your funnel says nothing happened.
Add it up and you have a model trying to assign credit across an incomplete, partly fake, partly anonymous trail of digital crumbs.
The metrics that look fine but are not
When I audit a B2B marketing org, I usually find the same set of dashboards. They look healthy. They are deceptive in specific, predictable ways.
Marketing sourced pipeline by first touch channel. This is the worst offender. First touch is almost always either organic search, direct, or paid brand search, because that is what the cookie sees first. None of these are causal. They are the last digital surface someone tapped before becoming a known contact.
Last touch attribution on closed won deals. Slightly less bad, but still mostly captures whatever the buyer did 5 minutes before filling out the demo form. Usually a branded search or a direct visit. The 4 weeks of dark funnel research that actually drove the decision get zero credit.
CPL by channel. Cost per lead is fine for comparing paid social campaigns, but it does not tell you which leads close. A channel can have a $40 CPL and a 0.2% close rate and still be a disaster. I have seen this with cold outbound list buys more than once.
MQL volume. Marketing qualified leads as a top-line metric became popular because it is easy to count. It is also one of the easiest metrics to game with low intent traffic. A 30% increase in MQLs that comes from gated content downloads tells you almost nothing about pipeline.
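The CPL trap above is just arithmetic. Dividing cost per lead by close rate gives the blended cost per closed-won customer, which is the number that actually matters. A minimal sketch, with illustrative numbers rather than real campaign data:

```python
# Why CPL alone misleads: a cheap lead with a terrible close rate costs far
# more per customer than an expensive lead that actually converts.
# All numbers below are illustrative.

def cost_per_customer(cpl: float, close_rate: float) -> float:
    """Blended cost to acquire one closed-won customer from a channel."""
    return cpl / close_rate

cheap_leads = cost_per_customer(cpl=40, close_rate=0.002)   # e.g. a cold list buy
pricey_leads = cost_per_customer(cpl=400, close_rate=0.10)  # e.g. niche podcast ads

print(f"$40 CPL @ 0.2% close -> ${cheap_leads:,.0f} per customer")
print(f"$400 CPL @ 10% close -> ${pricey_leads:,.0f} per customer")
```

The "cheap" channel costs $20,000 per customer; the "expensive" one costs $4,000. A CPL dashboard shows the exact opposite ranking.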
What actually works in 2026
The honest answer is that no single attribution model is going to give you the truth. What works is a stack of three imperfect signals that, read together, tell you most of what you need to know.
Self reported attribution
The simplest, most underused tool in B2B marketing right now. Add a single required field to your demo request form: "How did you hear about us?" Make it free text or a short list. Look at the answers monthly.
When Refine Labs ran this on their own audience, they found that LinkedIn and podcasts were the dominant drivers, while their multi-touch model had been crediting paid search. HockeyStack's 2024 attribution study across 600+ B2B companies found that 41% of closed won deals self-reported a channel that the multi-touch model gave less than 5% credit to. That is the gap.
Self reported attribution has its own biases. People misremember. They name what they remember last, not what actually moved them. But it gives you signal on the dark funnel that no pixel can. If 30% of your closed won deals self report "podcast" or "I saw your CEO on LinkedIn" and your model gives those channels 2% credit, you know where the model is wrong.
Channel cohort analysis
Forget per-deal attribution. Look at cohorts. Take everyone who first hit your site in March 2025 and split them by acquisition channel. Watch how each cohort progresses through the funnel over the next 6 to 12 months. Which channel cohort produces the most pipeline at the lowest CAC? Which converts fastest?
This is harder to set up than a HubSpot dashboard. You need a data warehouse, or at least a Hex or Mode notebook pulling from your CRM. But the answer is more honest. You are not slicing credit across a fake click stream. You are watching real cohorts of real people behave over real time.
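The cohort roll-up itself is simple once the CRM data is exported. A minimal sketch in plain Python, assuming contact rows with illustrative field names (`created_at`, `channel`, `pipeline_usd`):

```python
# Group contacts by (cohort month, acquisition channel) and sum the pipeline
# each cohort produced. Field names and figures are illustrative.
from collections import defaultdict
from datetime import date

contacts = [
    {"created_at": date(2025, 3, 4),  "channel": "organic",     "pipeline_usd": 0},
    {"created_at": date(2025, 3, 9),  "channel": "podcast",     "pipeline_usd": 60_000},
    {"created_at": date(2025, 3, 21), "channel": "paid_search", "pipeline_usd": 0},
    {"created_at": date(2025, 3, 25), "channel": "podcast",     "pipeline_usd": 45_000},
]

cohorts = defaultdict(lambda: {"contacts": 0, "pipeline": 0})
for c in contacts:
    key = (c["created_at"].strftime("%Y-%m"), c["channel"])
    cohorts[key]["contacts"] += 1
    cohorts[key]["pipeline"] += c["pipeline_usd"]

for (month, channel), agg in sorted(cohorts.items()):
    per_contact = agg["pipeline"] / agg["contacts"]
    print(f"{month} {channel:12s} {agg['contacts']} contacts, "
          f"${agg['pipeline']:,} pipeline (${per_contact:,.0f}/contact)")
```

In practice this runs as a warehouse query or a notebook over your CRM export, but the shape is the same: no credit splitting, just watching what each channel's cohort actually produced.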
Marketing mix modeling, the lite version
For larger spend, the comeback story of 2024 to 2026 is marketing mix modeling, MMM. The big version requires statisticians and 2 years of data. The lite version is simpler: spend up, spend down, watch what happens to pipeline. Run a structured holdout. Cut paid social by 50% in one region for 6 weeks and see if pipeline falls. If it does not, you have your answer. If it does, you have your answer.
Google and Meta both maintain open source MMM tools, Meridian and Robyn. They are not magic, but for any B2B team spending more than $50K a month on ads, they are a more honest input than last touch attribution.
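The holdout readout does not need a statistician either. A back-of-envelope difference-in-differences calculation tells you whether the region where you cut spend moved differently from a matched control region. The weekly pipeline figures below are illustrative:

```python
# Difference-in-differences on a spend holdout: compare how pipeline changed
# in the region where paid social was cut vs a matched control region.
# All figures are illustrative weekly pipeline numbers.

holdout_pre  = [220_000, 240_000, 210_000, 230_000]  # before the 50% cut
holdout_test = [215_000, 235_000, 205_000, 225_000]  # during the cut
control_pre  = [300_000, 310_000, 290_000, 305_000]
control_test = [305_000, 315_000, 295_000, 310_000]

def avg(xs):
    return sum(xs) / len(xs)

# Relative change in each region, then the gap between them.
holdout_delta = avg(holdout_test) / avg(holdout_pre) - 1
control_delta = avg(control_test) / avg(control_pre) - 1
lift = holdout_delta - control_delta

print(f"holdout change: {holdout_delta:+.1%}, control change: {control_delta:+.1%}")
print(f"estimated effect of the cut: {lift:+.1%}")
```

A lift near zero means the cut channel was not driving pipeline; a clearly negative lift means it was. This is the "you have your answer" moment, in about fifteen lines.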
How to set this up in HubSpot
Most of my clients are on HubSpot, so here is the practical version. Salesforce setup is similar with different field names.
On the contact object, add these custom properties: self_reported_source (single line text or dropdown), first_touch_channel_clean (calculated, defaulting to self reported when present), cohort_month (calculated date trunc on created_at), and last_inbound_meeting_source (single line text, populated by the SDR, who asks the same question on the discovery call).
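If you prefer to create these via the API rather than the UI, HubSpot's CRM v3 Properties endpoint (POST /crm/v3/properties/contacts) takes payloads like the ones below. The option list and labels here are illustrative, not a recommendation; the actual POST and auth token are left out of the sketch:

```python
# Sketch of the property payloads for HubSpot's CRM v3 Properties API.
# Property names match the ones described above; labels and the dropdown
# option list are illustrative placeholders.
import json

properties = [
    {
        "name": "self_reported_source",
        "label": "How did you hear about us?",
        "groupName": "contactinformation",
        "type": "enumeration",
        "fieldType": "select",
        "options": [
            {"label": l, "value": l.lower().replace(" ", "_")}
            for l in ["Podcast", "LinkedIn", "Peer referral", "Search", "Other"]
        ],
    },
    {
        "name": "last_inbound_meeting_source",
        "label": "Source named on discovery call",
        "groupName": "contactinformation",
        "type": "string",
        "fieldType": "text",
    },
]

# Each payload would be POSTed to
# https://api.hubapi.com/crm/v3/properties/contacts with a private app token.
for p in properties:
    print("would create contact property:", p["name"])
    print(json.dumps({k: p[k] for k in ("type", "fieldType")}))
```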
On the demo form, add the "How did you hear about us?" question as required. Keep the options short: 5 to 7 channels plus an "Other" with free text. Review the Other answers monthly; you will spot patterns.
Build a dashboard that shows pipeline created by self_reported_source rather than the standard "First Touch" report. Compare it side by side with the HubSpot first touch report for 90 days. Show both to the leadership team. The deltas are the conversation you need to have.
For closed won analysis, use the deal property original_source and original_source_drill_down_2 together with self reported source from the contact. When they disagree, trust self reported. HubSpot's first touch is right about half the time and wrong in predictable ways.
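The side-by-side comparison is a small aggregation once you have both fields on the deal. A sketch over illustrative deal rows, showing how far apart the two views can sit for the same pipeline:

```python
# Pipeline credited per channel by HubSpot first touch vs the self-reported
# field, for the same set of deals. All rows are illustrative.
from collections import Counter

deals = [
    {"amount": 50_000, "first_touch": "paid_search", "self_reported": "podcast"},
    {"amount": 80_000, "first_touch": "direct",      "self_reported": "linkedin"},
    {"amount": 30_000, "first_touch": "paid_search", "self_reported": "paid_search"},
    {"amount": 60_000, "first_touch": "organic",     "self_reported": "peer_referral"},
]

first_touch = Counter()
self_rep = Counter()
for d in deals:
    first_touch[d["first_touch"]] += d["amount"]
    self_rep[d["self_reported"]] += d["amount"]

channels = sorted(set(first_touch) | set(self_rep))
print(f"{'channel':15s}{'first touch':>12s}{'self reported':>15s}{'delta':>10s}")
for ch in channels:
    delta = self_rep[ch] - first_touch[ch]
    print(f"{ch:15s}{first_touch[ch]:>12,}{self_rep[ch]:>15,}{delta:>+10,}")
```

The delta column is the 90-day conversation with leadership: same deals, same dollars, completely different story about where they came from.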
Set up a monthly RevOps review where you read the actual self reported answers out loud. Not the bucketed data. The actual sentences buyers wrote. "I saw Abhishek post about this on LinkedIn." "My CFO mentioned you in our QBR." "I heard about you on the SaaStr podcast." These are gold for marketing. They tell you what to do more of.
If you want help wiring this up properly, that is the kind of HubSpot RevOps work we do most. The setup itself is a 2 to 3 week project; the value comes from running the review religiously.
Stop assigning credit. Start asking buyers and reading cohorts.
The point of attribution is not to be precise. It is to know what to do more of and what to cut. Self reported answers plus cohort behavior beat any multi-touch model in B2B in 2026.
What changes when you do this
Three things tend to happen with the teams I take through this.
Paid search gets cut, often dramatically, because once you stop crediting it for first touch, you can see what it actually does. For most B2B SaaS in the $1M to $20M ARR range, paid brand search is necessary defense. Paid non-brand search is mostly setting fire to money. The cohort analysis usually shows it.
Content and community spend goes up. Once self reported answers start showing podcasts, newsletters, LinkedIn posts, and Slack communities as real drivers, the question shifts from "how much CPL on this channel" to "how do we show up in more of those rooms." This usually means hiring a person whose job is community and creator presence, not running more retargeting.
Sales and marketing alignment improves. The reason marketing and sales argue about lead quality is that marketing's metrics (MQLs, traffic, MTAs) are not the same as sales metrics (pipeline, closed won). When you replace MQL goals with sales accepted opportunities and channel cohort pipeline, the conversation changes. Both teams start looking at the same scoreboard.
The mistakes I see most often
Treating self reported as a replacement instead of a layer. Self report is one input. Cohort and holdout are the other two. Running on self report alone is just survey data, which has its own biases.
Bucketing the answers too early. For the first 90 days, do not bucket. Read the raw answers. You will spot channels you did not know you had, like a specific podcast appearance or a viral LinkedIn post. Bucket only after you understand the long tail.
Using first touch from HubSpot as the source of truth. HubSpot's first touch is technically accurate; it is just measuring the wrong thing. It captures the first time a cookie was set, which is almost never the first time a buyer heard of you.
Ignoring the asymmetry between deal sizes. Self reported answers from $5K ACV deals tell you very different things than $200K ACV deals. Always cut the analysis by deal size or ICP segment. The dark funnel is way bigger for enterprise than for SMB.
Letting the marketing ops team be the only ones who read the data. The CMO needs to be in the monthly review. So does the head of sales. Attribution is a leadership conversation, not a reporting one.
A note on AI search and zero-click research
One trend worth watching that is rewriting attribution again is AI assisted research. Buyers ask ChatGPT, Claude, or Perplexity for a shortlist of vendors. The AI returns 3 to 5 names with reasons. The buyer goes direct to one of those websites and books a demo. No click trail. No referrer. No UTM. Just a "direct" hit on your site.
In 2024, this was a curiosity. By mid 2026, multiple agencies including ours are seeing 5 to 12% of B2B demo requests come from this path. The signal is buyers writing "ChatGPT recommended you" or "Claude said you were the best fit" in self reported fields. If you are not asking that question, you are not seeing it. If you are running pure multi-touch, this is a 100% direct traffic event with zero credit assigned anywhere.
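Because the only trace of this path is what buyers type into the self reported field, it is worth flagging those answers explicitly. A minimal sketch; the keyword list and sample answers are illustrative, and a real version would be tuned to what your buyers actually write:

```python
# Flag self-reported answers that mention an AI assistant, to size the
# zero-click AI research path. Keywords and answers are illustrative.
import re

AI_PATTERN = re.compile(r"\b(chatgpt|claude|perplexity|gemini|copilot)\b", re.I)

answers = [
    "ChatGPT recommended you when I asked for attribution tools",
    "Saw your CEO on LinkedIn",
    "Claude said you were the best fit",
    "A friend at a SaaStr dinner",
]

ai_sourced = [a for a in answers if AI_PATTERN.search(a)]
share = len(ai_sourced) / len(answers)
print(f"{share:.0%} of demo requests mention an AI assistant:")
for a in ai_sourced:
    print(" -", a)
```

Track that share monthly. If it climbs the way it has for our clients, it changes where you invest: being the name the model recommends matters more than being the ad the buyer clicks.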
The fix is the same one this whole post is about. Ask buyers. Read what they say. Trust them more than your pixel.
Want to fix your attribution stack?
If your HubSpot dashboards say one thing and your gut says another, that is usually the model lying. We rebuild B2B attribution stacks that tell you the truth in 2 to 3 weeks.
Book a free 30-minute audit →

FAQ
Is multi-touch attribution dead in B2B?
Not dead, just deprioritized. As one input alongside self reported answers and cohort analysis, it still has value. As a single source of truth, it gives misleading answers in 2026 because most of the B2B journey is invisible to it.
How accurate is self reported attribution?
Less precise than a pixel, more accurate at the channel level. Buyers misremember specific touchpoints, but they reliably name the channel that influenced them most. For B2B, channel level direction is more useful than per-touch precision.
What about HubSpot's built-in attribution reports?
They are technically correct and strategically misleading. First touch and last touch in HubSpot measure what the cookie saw, not what changed the buyer's mind. Use them as one data point, not the answer.
How long does it take to see real attribution data?
The self reported field starts producing useful data in week 2 once you have 30+ answers. Cohort analysis needs at least 6 months of data, ideally 12, because B2B sales cycles are long. Plan for a year before the picture is complete.
Should I still run paid ads if I cannot attribute them well?
Yes, but with structured holdouts. Run paid for 6 weeks, pause for 4, run again. Watch what happens to pipeline in each window. If pipeline tracks the spend, paid is working. If it does not, you have an answer.
For more on the GTM stack we build and the AI automation that runs underneath, those pages have specifics. Or just book a call and we can audit your setup directly.