A founder I work with told me last quarter that 60% of their losses were "price." I asked how he knew. He said the reps put it in the closed-lost reason field in HubSpot.
We pulled 12 of those deals and called the buyers. Three of them had no idea what we were charging by the time they made the call. Two said the rep never showed up to the second meeting. One said our product was "too complicated for the team," which had nothing to do with price. Only four were genuinely budget calls, and even those had a deeper story: the buyer didn't trust the ROI math we sent over.
That's what win/loss analysis actually is. Not a dropdown in your CRM. A process for finding out what really happened so you can stop fixing the wrong things.
This post is how I run it for B2B SaaS teams. What questions to ask, who should run the interviews, how to spot the bias, and how to feed what you learn back into the funnel without it becoming a slide deck nobody reads.
What win/loss analysis actually is
Win/loss analysis is the practice of going back to deals you closed (won and lost) and reconstructing what really drove the decision. Not the rep's version. The buyer's version.
Most teams confuse it with three other things:
- The closed-lost reason field in their CRM.
- An NPS survey to recent customers.
- A quarterly retro where sales tells the story to itself.
None of those are win/loss analysis. They're internal artifacts of internal opinions. Win/loss work needs the buyer in the conversation. Without that, you're just compounding your own blind spots.
Your closed-lost reason field is a survey of your reps, not your buyers.
Reps have an incentive to underreport process failure and overreport price and product gaps. The first time you talk to actual buyers, you find out the data you've been making decisions on is mostly fiction.
Why most B2B teams get the wrong answer
Three patterns I see over and over.
The rep self-report problem
Sales reps fill out the lost reason field. The reasons cluster around things outside the rep's control: price, product gap, timing, "no budget." Almost never around things inside the rep's control: didn't follow up, lost momentum, sent a generic deck, never spoke to the actual decision maker.
This is not because reps are dishonest. It's because the form is asking them to grade their own homework. Even with the best intent, you get a softened version.
When we re-interview a sample of those same buyers, the breakdown shifts. Across about 200 interviews I've run or sat in on, real losses look closer to:
- 20% price or budget
- 15% genuine product gap
- 25% process failure on our side (slow follow-up, wrong stakeholders, weak demo)
- 20% trust gap (buyer not convinced ROI was real)
- 10% timing
- 10% incumbent or competitor moved faster
The first two add up to 35%. Everything else adds up to 65%, and the two biggest pieces of it, process failure and the trust gap, are entirely on our side. That 65% is the version that should be running your roadmap and your enablement plan, not the CRM dropdown.
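If you want to sanity-check that split, it's arithmetic over the buckets. A minimal sketch in Python, using the buyer-reported numbers above; the grouping into "internal" is my read, not a standard taxonomy:

```python
# Buyer-reported loss reasons from ~200 re-interviews (numbers from the text).
buyer_reported = {
    "price_or_budget": 20,
    "product_gap": 15,
    "process_failure": 25,
    "trust_gap": 20,
    "timing": 10,
    "competitor_moved_faster": 10,
}

# Buckets the deal team can fix without touching pricing or the roadmap.
internal = {"process_failure", "trust_gap"}

external_share = buyer_reported["price_or_budget"] + buyer_reported["product_gap"]
internal_share = sum(v for k, v in buyer_reported.items() if k in internal)

print(f"price + product gap: {external_share}%")        # 35%
print(f"process + trust:     {internal_share}%")        # 45%, all internal
print(f"everything else:     {100 - external_share}%")  # 65%
```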
The wrong person doing the interviews
If your VP of Sales calls a lost buyer and asks why the deal died, the buyer will say "price." If your product lead calls, the buyer will say "the product was great, just bad timing." Neither is true. Buyers are polite. They will not tell the person who built the thing that the thing was confusing, and they will not tell the person who manages the rep that the rep dropped the ball.
You need a third party or at least someone outside the deal team. I've seen the cleanest answers come from a junior PMM, a contract analyst, or a founder who isn't the one who sold.
Sample size theater
A team will do six interviews, see two mentions of "onboarding feels heavy," and rebuild onboarding. Six is not enough to act on. The rule of thumb I use: 20 interviews to start, then a steady drumbeat of 8 to 10 per month. Anything less and you're acting on noise.
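If you want the statistical version of that argument, put a confidence interval around the theme count instead of eyeballing it. A quick sketch using the standard Wilson score interval (stdlib only); the 2-of-6 onboarding example is the one from above:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return (max(0.0, center - half), min(1.0, center + half))

# 2 of 6 interviews mention "onboarding feels heavy"
print(wilson_interval(2, 6))   # roughly (0.10, 0.70)
# 7 of 20 interviews mention it
print(wilson_interval(7, 20))  # roughly (0.18, 0.57)
```

Two mentions in six interviews is consistent with anything from a 10% edge case to a 70% epidemic. Seven in twenty is a theme you can put an owner on.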
A scope that actually works
When I set up a win/loss program, the first decision is scope. Get this wrong and the rest of the program drifts.
The trap is wanting to cover everything. SMB churn looks nothing like enterprise loss. A self-serve trial loss has different signals than a 6-month sales cycle loss. Pick one segment, get sharp insights, then expand.
The interview that gets real answers
I keep the structure simple. 30 minutes. 10 to 12 open questions. The job is to get the buyer talking about their process, not to grade us.
The opening matters more than people think. I never start with "why did you choose us" or "why didn't you choose us." That puts the buyer in defense mode. I start with the problem they were trying to solve and walk them through the timeline.
The questions I actually use
- Take me back to when you started looking at this. What was happening in the business that made it a priority?
- Who else was involved in the decision on your side? When did each of them get pulled in?
- Walk me through the vendors you considered. How did each one get on the list?
- What stood out about each of them in your first conversation?
- Where did the process slow down or get harder?
- What was the conversation inside your team when you were narrowing it down?
- What finally tipped the decision?
- If you could replay one part of the process, what would you change about how the vendors approached you?
- Six months in (for wins), what's working and what's surprised you, good or bad?
- If a peer asked you about us tomorrow, what would you say?
Question 6 is the gold one. The conversation inside the buyer's team is where the real decision happens, and reps almost never see it. Once you've heard 20 versions of that internal conversation, you start to see what the buyer's champion actually has to argue against.
How to actually run the program
This is where most teams fall apart. They hire someone to do 10 interviews, get a deck, file it, and three months later nobody remembers what was in it. That isn't a program. It's a content piece.
A program runs on rhythm.
The piece nobody talks about is the routing. If insights aren't owned by someone with the authority to change something, the program is a journal entry. Sales gets a deck about messaging gaps and assumes marketing will handle it. Marketing gets a deck about onboarding pain and assumes product will handle it. Nothing moves.
I make every theme map to one named owner with a deadline. If a theme can't get an owner, we don't put it in the report. Better to have three real actions than 12 pretty observations.
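The ownership rule is easy to encode so it can't be quietly skipped. A minimal sketch, with made-up theme names, owners, and dates:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Theme:
    name: str
    mentions: int        # how many interviews hit this theme
    owner: str | None    # a named person, not a team
    due: date | None

themes = [
    Theme("Demo not tailored to use case", 9, "VP Sales", date(2025, 3, 31)),
    Theme("ROI math not trusted", 7, "Head of PMM", date(2025, 4, 15)),
    Theme("Onboarding feels heavy", 3, None, None),  # no owner yet
]

# Rule from the text: a theme without a named owner doesn't go in the report.
report = [t for t in themes if t.owner and t.due]
parking_lot = [t for t in themes if not (t.owner and t.due)]

for t in report:
    print(f"{t.name} -> {t.owner}, due {t.due} ({t.mentions} mentions)")
```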
Tools and stack
You don't need a $40K platform to run this. You need a clean stack that captures, transcribes, tags, and reports.
If you want a dedicated platform, Klue and Clozd are the two I see most often in the mid-market. Both are good. Neither replaces the discipline of actually running the program. I'd rather see a team run it well in a Notion table than badly in Klue.
For the CRM and pipeline side of this, the workflow setup matters more than the tooling. We've written more on that in the HubSpot workflows playbook and the HubSpot deal stages guide, and how it all sits inside the broader CRM and RevOps build.
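To make the first step concrete: if your deals live in HubSpot, pulling last quarter's closed-lost list for interview outreach is a short script against the CRM search API. A sketch, assuming a private app token; the dealstage ID and the lost-reason property name vary by portal, so check yours under Settings before running:

```python
import os
import requests

# Pull recent closed-lost deals to build the interview sample.
TOKEN = os.environ["HUBSPOT_TOKEN"]  # private app token
URL = "https://api.hubapi.com/crm/v3/objects/deals/search"

payload = {
    "filterGroups": [{
        "filters": [
            {"propertyName": "dealstage", "operator": "EQ", "value": "closedlost"},
            {"propertyName": "closedate", "operator": "GTE", "value": "2025-01-01"},
        ]
    }],
    "properties": ["dealname", "amount", "closed_lost_reason", "closedate"],
    "limit": 100,  # pagination via paging.next.after omitted for brevity
}

resp = requests.post(URL, json=payload, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()

for deal in resp.json()["results"]:
    p = deal["properties"]
    print(p["dealname"], p.get("amount"), "| rep says:", p.get("closed_lost_reason"))
```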
What you do with what you learn
Here's the part that separates programs that change the business from programs that produce reports.
For sales
The most common pattern I find: reps lose deals at stage 3, the demo. They show the product in a generic flow, the buyer doesn't see how it solves their specific problem, and the deal stalls. Win/loss reveals this clearly because winners say "the demo was tailored to our use case" and losers say "the demo was a tour of the product."
The fix is not a new demo deck. It's a discovery checkpoint before the demo where the rep confirms 3 specific use cases and builds the demo around them. Simple, mechanical, and shows up in stage 3 conversion within 60 days.
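If you want the checkpoint to be mechanical rather than aspirational, gate the stage move on the data. A toy sketch of the logic; the field name is invented, and in HubSpot you'd enforce this with a required property plus a workflow:

```python
# A deal doesn't advance to the demo stage until the rep has logged
# three confirmed use cases from discovery.
def ready_for_demo(deal: dict) -> bool:
    use_cases = [u for u in deal.get("confirmed_use_cases", []) if u.strip()]
    return len(use_cases) >= 3

deal = {
    "name": "Acme expansion",
    "confirmed_use_cases": ["SOC 2 evidence collection", "vendor reviews"],
}

if not ready_for_demo(deal):
    print(f"Block stage move: {deal['name']} has "
          f"{len(deal['confirmed_use_cases'])} of 3 confirmed use cases")
```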
For product
If three buyers in a quarter mention the same missing feature, that's a roadmap input. If one buyer mentions it, that's noise. Win/loss is the volume control on product feedback that everyone says they want and nobody actually filters.
We had a client cancel a 3-month integration build because win/loss showed only 2 out of 25 lost deals mentioned it as a real blocker. The other 23 had different problems. They saved a quarter of engineering time by listening to the data.
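The volume control itself is just a counter with a quorum. A toy sketch, with invented feature names:

```python
from collections import Counter

# One row per lost deal: the feature gaps the buyer named as real blockers.
blockers_by_deal = [
    ["salesforce_sync"], [], ["sso"], [], ["sso", "audit_log"], [],
    ["salesforce_sync"], ["sso"], [], [],  # ... 25 lost deals in the real set
]

mentions = Counter(f for deal in blockers_by_deal for f in deal)
QUORUM = 3  # rule from the text: three buyers in a quarter, or it's noise

roadmap_inputs = {f: n for f, n in mentions.items() if n >= QUORUM}
watch_list = {f: n for f, n in mentions.items() if n < QUORUM}
print("build:", roadmap_inputs)  # {'sso': 3}
print("watch:", watch_list)      # {'salesforce_sync': 2, 'audit_log': 1}
```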
For marketing
The first thing marketing usually fixes is messaging. If 6 out of 10 buyers describe what you do differently than your homepage does, the homepage is wrong. The buyers are right. They're under no obligation to read your copy the way you meant it.
The second thing is content. If buyers say "we couldn't find proof you'd done this in our industry," that's a case study problem, not a sales problem.
Teams that run win/loss properly and act on the top 3 themes see the win-rate lift within 6 months. The math comes from fixing trust gaps and demo flow, not adding features.
The bias check
Before any win/loss program goes live, I run one bias check on the team.
Ask 5 people internally to write down what they think the top 3 reasons for losses are. Sales, product, marketing, customer success, founder. Don't let them see each other's answers.
You'll get 5 different lists. That's the point. The interview program is the tiebreaker. After 20 interviews, you'll find the answer matches none of the 5 lists exactly, and that's the moment people realize the program is doing real work. They were all wrong, just in different ways.
Without this exercise, every internal team thinks the win/loss data is "confirming what we already knew." It's not. They're cherry-picking the parts that match their pre-existing story.
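If you want to score the bias check instead of arguing about it, it's a set comparison. A sketch with illustrative predictions; the "actual" list is the shape I typically see, not data from any one client:

```python
# Each team writes down its top-3 loss reasons before interviews start.
predictions = {
    "sales":     {"price", "product_gap", "timing"},
    "product":   {"missing_integrations", "price", "ux"},
    "marketing": {"brand_awareness", "price", "proof"},
    "cs":        {"onboarding", "price", "support"},
    "founder":   {"price", "competitor", "timing"},
}

# Top-3 themes after 20 buyer interviews.
actual = {"process_failure", "trust_gap", "price"}

for team, guess in predictions.items():
    hits = guess & actual
    print(f"{team:<9} got {len(hits)}/3 right: {sorted(hits) or '-'}")
```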
When not to run win/loss
Worth saying: if you're under 50 closed deals a year, this is overkill. The interview discipline matters less than the act of just calling 5 lost buyers a quarter and listening. You don't need a program. You need a habit.
If you're over 200 closed deals a year, the question isn't whether to run it. It's whether to staff it internally or hire a third party. I generally recommend a third party for the first 30 interviews because the answers are sharper, then bring it in-house once the program has rhythm.
The middle (50 to 200 deals a year) is where most of my clients sit, and where this whole post is aimed.
Want help setting this up?
If your closed-lost data feels suspicious, it probably is. Book a 30-minute audit and we'll walk through what your reps are reporting vs. what your buyers are actually saying, and where the gap will cost you the most.
Book an audit →
FAQ
How many interviews do I need before the data is useful?
For a single segment, 20 is the floor. Below that, you're reading patterns into noise. After 20, you'll have 4 to 6 themes that repeat and you can act on. Then keep a steady cadence of 8 to 10 per month so the data stays current.
Should I interview wins, losses, or both?
Both. Wins tell you what to keep doing and what your real differentiators are (which are usually not what your marketing claims). Losses tell you where the funnel breaks. Skipping wins is a common mistake because teams assume they already know why they win. They almost never do.
Should sales reps do the interviews?
No. The rep involved in the deal should never lead the interview. Buyers will not be honest with the person who sold to them. Best case is a neutral interviewer (third party, junior PMM, an analyst, or the founder if they weren't on the deal). Reps can listen to the recordings after.
What incentive works to get buyers on a call?
A $100 gift card or a $100 donation to a charity of their choice gets us a 40% to 50% acceptance rate from lost buyers. For wins, the incentive matters less because they already like you. For losses, skip the incentive at your peril: you'll get 10% acceptance and a biased sample.
How do I tag the interviews so the data is usable?
Keep the tag list short. 8 to 12 tags total, grouped into Sales process, Product, Pricing, Trust/Proof, Timing, Competitor. Add tags as themes emerge. Don't start with 40 tags trying to be precise. You'll never tag consistently and the data will be useless. Better to have 8 tags applied consistently than 40 applied randomly.
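In practice I keep the taxonomy in one place and reject anything outside it, which is how 8 tags stay 8 tags. A minimal sketch, tag names illustrative:

```python
# Six groups, eleven tags. New tags go through this dict first,
# so the list can't silently sprawl.
TAGS = {
    "sales_process": ["slow_follow_up", "wrong_stakeholders", "generic_demo"],
    "product":       ["missing_feature", "too_complex"],
    "pricing":       ["price_too_high", "packaging_mismatch"],
    "trust_proof":   ["roi_not_believed", "no_industry_proof"],
    "timing":        ["deprioritized"],
    "competitor":    ["incumbent_kept"],
}
VALID = {t for group in TAGS.values() for t in group}

def tag_interview(interview_id: str, tags: list[str]) -> list[str]:
    """Reject tags outside the taxonomy instead of quietly accepting them."""
    unknown = [t for t in tags if t not in VALID]
    if unknown:
        raise ValueError(f"{interview_id}: not in taxonomy: {unknown}. "
                         "Add deliberately or map to an existing tag.")
    return tags

tag_interview("loss-2025-014", ["generic_demo", "roi_not_believed"])
```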