Busy SDRs, Empty Pipeline: You Have a Funnel Qualification Problem
- Avner Baruch
- Jan 2

In most organizations, this problem is identified far too late.
Teams sense that something isn’t working - but they look in the wrong direction. By the time leadership realizes the issue isn’t skills, it’s already expensive to fix. Headcount has been churned. Tools have been swapped. Processes have been “tightened.” And still, nothing moves.
This article is meant to help TOFU teams identify overqualification early - and remediate it before it calcifies into a systemic failure.
The reason most businesses don’t catch it sooner is uncomfortable, but common: we default to blaming the human layer.
When results disappoint, we question execution. We replace SDRs. We retrain teams. We introduce new playbooks.
What we rarely do is stop and look in the mirror - and ask whether the system itself was poorly designed.
Overqualification doesn’t always announce itself loudly. It shows up in subtle, corrosive ways:
A long tail of “qualified” leads that generate no real activity
Weak engagement signals - low reply rates, shallow interactions, chronic ghosting
Leads keep coming in and keep reps busy, yet nothing materializes downstream
Every contact gets attention… and yet - nothing really moves
The funnel doesn’t narrow. The conversations don’t deepen. The pipeline doesn’t grow.
At this point, attention shifts - almost instinctively - to the TOFU teams. SDRs. BDRs. Agentic campaigns.
But this is not a demand problem. It’s not even a volume problem.
It’s what happens when qualification exists in name - but not in function.
This article explores the very real risk of overqualifying without actually qualifying, and why CMOs, CROs, Sales Development leaders, and Enablement teams should treat it as an urgent, system-level issue - not a performance one.
1. What is overqualifying?

Put simply, overqualification is letting non-buyers in - and exhausting the system to the point where we can no longer give proper attention to those who actually matter.
It’s not about being too strict. It’s about being too permissive without differentiation.
The funnel fills with contacts that look qualified on paper but carry no real buying intent - until signal, focus, and energy are diluted beyond usefulness.
More often than not, overqualification is the result of qualification that exists in name, but not in function.
It happens when:
Qualification stages do not materially change lead volume
Scoring does not influence prioritization or handling
Every lead receives roughly the same treatment
In other words, qualification becomes a labeling exercise - not a decision-making mechanism.
When qualification doesn’t narrow the funnel, shape behavior, or signal probability, it stops qualifying. It merely creates overhead.
2. Overqualifying vs. effective B2B SaaS qualification
In healthy B2B SaaS systems:
NEW → PQL shows a meaningful drop
PQL → MQL shows another clear reduction
Each stage change alters priority, SLA, and depth of engagement

In overqualified (flat) systems:
NEW ≈ PQL ≈ MQL
Scoring exists, but does not influence action
Conversion rates between stages are almost identical
This isn’t a volume issue. It’s a system design failure.
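To make the difference concrete, here is a minimal sketch of how you might spot a flat funnel from stage volumes. The stage names, numbers, and the 20% drop-off threshold are illustrative assumptions, not benchmarks - calibrate against your own historical data.

```python
# Minimal sketch: flag a "flat" funnel from stage volumes.
# Stage names, volumes, and the 20% drop-off threshold are illustrative
# assumptions - tune them against your own historical data.

def stage_drop_offs(volumes: dict) -> dict:
    """Fractional drop between consecutive funnel stages."""
    stages = list(volumes)
    return {
        f"{prev} -> {curr}": 1 - volumes[curr] / volumes[prev]
        for prev, curr in zip(stages, stages[1:])
    }

volumes = {"NEW": 1000, "PQL": 960, "MQL": 930}  # hypothetical CRM export
for transition, drop in stage_drop_offs(volumes).items():
    status = "flat - qualification isn't filtering" if drop < 0.20 else "ok"
    print(f"{transition}: {drop:.0%} drop ({status})")
```

If every transition prints "flat", the stages are labels, not decisions.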
3. The real symptoms of overqualification
Overqualification reveals itself through patterns, not isolated metrics.
Funnel-level symptoms
The funnel does not narrow
Very little difference between NEW, PQL, and MQL volumes
Stages exist, but almost no natural drop-off occurs between them
The funnel stops behaving like a funnel.
Scoring symptoms
Lead scoring is flat and ineffective
Any signal of interest produces a similar score
No meaningful separation between casual curiosity and buying intent
MQL and PQL labels may exist - but in practice, every lead is treated the same
This typically happens because:
Real intent and buying signals are not factored in
PQL (Persona Qualified Lead) logic was never properly engineered
MQL definitions reward activity, not context
Operational symptoms
Every lead gets rep attention
There is no tiering system for handling
Humans and agentic SDRs are forced to treat all leads as equally important
4. The predictable outcome
The most visible aftermath of overqualification is false funnel health:
No healthy drop-off between early funnel stages
A distorted perception of execution and performance
Too many low-intent leads get qualified - you’re catering to window shoppers
Meanwhile, real ICP prospects get lost in the noise
On top of this, businesses typically experience:
Massive handling overhead
Shallow engagement
High false positives
Very little pipeline generation
Declining morale among handling teams - driven by a false narrative of poor execution or lack of accountability
5. The impact on Sales Development - human and agentic
Flat qualification systems quietly damage everyone operating within them.
Human SDRs
Constant context switching
No clear signal on where to focus
Cognitive overload driven by “everything matters”
Over time:
Conversations become shallow
Judgment erodes
Burnout accelerates
Performance declines and sick days begin to accumulate
Agentic SDRs
Automation faithfully executes flawed logic
Weak signals are treated as high-priority triggers
Noise scales faster than learning
Agentic systems don’t fix bad qualification. They amplify it.
6. Downstream implications for pipeline management
When qualification doesn’t differentiate early:
Pipeline reviews become opinion-driven
Forecasting relies on hope instead of signal
Sales teams chase volume rather than probability
Marketing feels the impact too:
Campaigns attract every possible persona
Budget fuels activity, not traction
The funnel becomes a leaky bucket - endlessly refilled, never pressurized
Money isn’t just wasted. It’s cannibalized by noise.
And quietly, the hidden wall between Marketing and Sales grows taller than ever.
7. Recommendations - how to avoid overqualification
This is a system problem, and it must be solved by engineering a tiered qualification model - working backwards from the outcomes you want.
Start by auditing what good actually looks like:
Analyze closed-won opportunities and top revenue-contributing customers
Extract the attributes of success
Compare them against your existing scoring criteria (personas, PQL, MQL)
Break success down by:
Persona
Segment
Company size
Market
Trigger events
Buying context
Then reverse-engineer:
What actually mattered early
Which signals predicted success
Which signals created noise
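As a rough illustration of that reverse-engineering step, the sketch below compares how often each early signal shows up in closed-won deals versus everything else. The signal names and sample records are assumptions for illustration, not real data.

```python
# Sketch: compare how often each early signal appears in closed-won deals
# vs. everything else, to separate predictive signals from noise.
# Signal names and the sample records are illustrative assumptions.

from collections import Counter

def prevalence(deals):
    counts = Counter(signal for deal in deals for signal in set(deal["early_signals"]))
    return {signal: counts[signal] / len(deals) for signal in counts}

won = [{"early_signals": ["pricing_page", "demo_request"]},
       {"early_signals": ["pricing_page", "security_docs"]}]
other = [{"early_signals": ["blog_only"]},
         {"early_signals": ["blog_only", "pricing_page"]}]

won_rate, other_rate = prevalence(won), prevalence(other)
for signal in sorted(set(won_rate) | set(other_rate)):
    lift = won_rate.get(signal, 0) - other_rate.get(signal, 0)
    print(f"{signal}: won {won_rate.get(signal, 0):.0%} vs. rest "
          f"{other_rate.get(signal, 0):.0%} (lift {lift:+.0%})")
```

Signals with high prevalence in both groups are noise; signals concentrated in closed-won deserve weight.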
Redesign scoring into a tiered model with clear definitions of high, medium, and low priority - not a single flat threshold.
Route low-score leads to education and/or automation
Route high-score leads to immediate, human engagement
Align desired outcomes with headcount reality - for example, how many accounts a rep can meaningfully handle per week or per month.
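A minimal sketch of what tiered routing plus a capacity check could look like; the tier names, actions, and the 30-accounts-per-rep figure are assumptions to illustrate the shape of the logic.

```python
# Sketch: route leads by tier and sanity-check Tier 1 volume against rep capacity.
# Tier names, actions, and the capacity figure are illustrative assumptions.

ROUTING = {
    "tier_1": "immediate human engagement (tight SLA)",
    "tier_2": "fast follow-up + light automation",
    "tier_3": "nurture / education sequence",
}

def route(lead):
    return ROUTING[lead["tier"]]

def capacity_ok(tier_1_leads_per_week, reps, accounts_per_rep_per_week=30):
    """True if weekly Tier 1 volume fits what the team can meaningfully handle."""
    return tier_1_leads_per_week <= reps * accounts_per_rep_per_week

print(route({"tier": "tier_1"}))
print("Capacity ok?", capacity_ok(tier_1_leads_per_week=140, reps=4))  # 140 > 120 -> False
```

When the capacity check fails, the fix is tighter thresholds upstream, not more hours from the same reps.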
Finally, introduce a 2D MQL model that distinguishes between:
Intent signals (e.g., website activity, gated content)
Buying signals (e.g., chat interactions, form behavior, explicit requests)
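Sketched in code, a 2D MQL gate might look roughly like this. The signal names, weights, and thresholds are illustrative assumptions - the point is that intent and buying signals are scored separately and both must clear a minimum.

```python
# Sketch of a 2D MQL gate: intent and buying signals are scored separately,
# and a lead only becomes an MQL when BOTH dimensions clear a minimum.
# Signal names, weights, and thresholds are illustrative assumptions.

INTENT_WEIGHTS = {"pricing_page_visit": 3, "gated_content_download": 2, "webinar_attended": 2}
BUYING_WEIGHTS = {"demo_request": 5, "chat_pricing_question": 3, "contact_form": 4}

def score(signals, weights):
    return sum(weights.get(s, 0) for s in signals)

def is_mql(signals, intent_min=4, buying_min=3):
    return (score(signals, INTENT_WEIGHTS) >= intent_min
            and score(signals, BUYING_WEIGHTS) >= buying_min)

print(is_mql(["pricing_page_visit", "gated_content_download"]))                  # intent only -> False
print(is_mql(["pricing_page_visit", "gated_content_download", "demo_request"]))  # both -> True
```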
Shave off friction wherever possible by automating for efficiency and visibility:
Lead enrichment via waterfall processes
LinkedIn connection requests and messages
Personalized outbound emails
SLA tracking and reporting
Closing thought

If your funnel doesn’t narrow, your system isn’t qualifying - it’s just relabeling noise.
If you’re looking for concrete examples, teardown frameworks, and practical guidance on rebuilding qualification systems that actually work, you’ll find them throughout the Project Moneyball book series.
Avner Baruch, Founder & Author, Project Moneyball
Bonus section - PQL/MQL Blueprint:
1) Fit criteria (who they are)
These answer: Should we even care if they engage?
Company / account fit
Industry / vertical match (target vs. non-target)
Employee size
Revenue
Geo / region served
Ownership / compliance needs (public, regulated, gov, etc.)
Tech maturity (cloud-native vs. legacy-heavy)
Known “bad fit” exclusions (students, consultants, competitors)
2) Persona / role criteria (who they are in the org)
These answer: Can they buy, influence, or champion?
Role alignment
Target personas (HR / VP Eng / RevOps / VP Sales / IT Director, etc.)
Seniority (Manager / Director / VP / C-level)
Department match (Security/IT vs. random)
“Wrong persona” penalties (HR for a security product, etc.)
Buying committee coverage
Multiple relevant personas from same account engaged (strong)
One champion + one economic/influencer engaged (very strong)
3) Intent criteria (what they do)
These answer: Are they leaning in, or just grazing? (High leverage when done right.)
High-intent web behavior
Pricing page visits (especially repeated)
Integration docs / API docs
Security / compliance pages (SOC2, DPA, legal)
Case studies in a relevant vertical
Product pages beyond homepage (depth)
Returning sessions within 7–14 days
Time on site AND depth (avoid time alone)
Content intent
BOFU assets: “evaluation guide”, “RFP template”, “migration”, “buyer’s guide”
Webinar attendance live (higher than on-demand)
“Comparison” pages (X vs Y) or “alternatives” content
Downloading implementation/security docs vs. generic ebooks
Email intent
Replies (strongest)
Clicks on “book a meeting” / “request demo” (very strong)
Multiple opens aren’t enough alone (weak signal)
4) Buying signals (explicit “I’m in-market”)
These answer: Are they signaling motion, urgency, or procurement reality?
Direct request behaviors
“Request demo” submission
“Talk to sales” / “contact us” form
Trial signup / product access request (if you offer it)
“Book a meeting” completed (obviously)
Evaluation behaviors
Inviting colleagues / adding teammates in trial
Creating projects/workspaces
Connecting integrations
Hitting activation milestones (for PQL/MQL hybrid)
Asking questions in chat with evaluation language (“pricing”, “timeline”, “implementation”, “security review”)
Procurement language
Budget, timing, vendor list, RFP, renewal date, deadline
“Need this for audit”, “board asked”, “incident happened” (trigger events)
5) Tech stack fit
Must-have integrations present (Salesforce, HubSpot, Okta, AWS, etc.)
“Plays well with” indicators (CDP, data warehouse, SIEM, ticketing tools)
6) Engagement quality (not just activity)
These answer: Is this engagement meaningful or noisy?
Positive signals
Multi-touch across channels (web + email + event + chat)
Fast follow-through (e.g., returns within 48 hours)
Specific page clusters (pricing + docs + case study)
Negative signals / de-scoring
Only top-of-funnel content consumption (blogs only)
Job seekers / students
Competitors
Agencies / consultants (unless you sell to them)
Personal email domains (sometimes a penalty, not an auto-DQ)
One-and-done visits with no depth
7) Account-level signals (especially for ABM / mid-market / enterprise)
These answer: Is the account warming up even if one contact is imperfect?
Target account list match (Tier 1/2/3)
Account-wide engagement spikes (multiple visitors, multiple sessions)
ICP account surging on intent tools (if you use them)
Existing customer / expansion motion (different model, but powerful)
8) Patterns (the underrated multiplier)
These answer: Is it happening now?
Recency weighting (last 3–7 days > last 30)
Velocity (multiple actions in short time window)
Sequence patterns (e.g., pricing → case study → demo request)
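For illustration, a minimal sketch of recency weighting and a velocity check; the half-life, window, and thresholds are assumptions to tune against your own data.

```python
# Sketch: recency-weighted scoring and a simple velocity check.
# Half-life, window, and thresholds are illustrative assumptions.

from datetime import datetime, timedelta

def recency_weight(event_time, now, half_life_days=5.0):
    """Exponential decay: an event ~5 days old counts half as much as one from today."""
    age_days = (now - event_time).total_seconds() / 86400
    return 0.5 ** (age_days / half_life_days)

def weighted_score(events, now):
    return sum(points * recency_weight(t, now) for t, points in events)

def velocity_spike(events, now, window_days=3, min_actions=3):
    return sum(1 for t, _ in events if now - t <= timedelta(days=window_days)) >= min_actions

now = datetime(2025, 1, 2)
events = [(now - timedelta(days=1), 3.0),
          (now - timedelta(days=2), 2.0),
          (now - timedelta(days=3), 2.0),
          (now - timedelta(days=20), 5.0)]
print(round(weighted_score(events, now), 2), velocity_spike(events, now))
```

The 20-day-old event barely moves the score; three actions inside the window trip the velocity flag.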
9) “Benchmark-style” scoring patterns (common in SaaS)
A. Two-layer model (recommended)
Fit score (firmographic + persona) determines whether we care
Intent score (behavior + buying signals) determines how fast we act
B. Three tiers output (what ops teams like)
Tier 1 (high priority): immediate human engagement (SLA minutes/hours)
Tier 2 (medium): fast follow-up + light automation + monitor
Tier 3 (low): nurture/education + retargeting
C. Explicit “MQL must have BOTH” rule
Must pass a minimum fit threshold
Must show a minimum intent/buying threshold
This is how you prevent “everyone is qualified.”
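Pulling the blueprint together, here is a minimal sketch of the two-layer, three-tier output; every field name, weight, and cut-off is an illustrative assumption, not a recommended value.

```python
# Sketch of the two-layer blueprint: fit decides whether we care,
# intent/buying decides how fast we act, and the output is a tier.
# Field names, weights, and cut-offs are illustrative assumptions.

def fit_score(lead):
    return (3 * bool(lead.get("industry_match"))
            + 2 * bool(lead.get("target_persona"))
            + 1 * bool(lead.get("company_size_in_range")))

def intent_score(lead):
    return (5 * bool(lead.get("demo_request"))
            + 3 * (lead.get("pricing_page_visits", 0) >= 2)
            + 2 * bool(lead.get("bofu_content")))

def tier(lead, fit_min=4):
    if fit_score(lead) < fit_min:
        return "tier_3"   # poor fit: nurture regardless of activity
    intent = intent_score(lead)
    if intent >= 7:
        return "tier_1"   # immediate human engagement, SLA in minutes/hours
    if intent >= 3:
        return "tier_2"   # fast follow-up + light automation
    return "tier_3"       # good fit, no motion yet: educate and monitor

lead = {"industry_match": True, "target_persona": True,
        "pricing_page_visits": 3, "demo_request": True}
print(tier(lead))  # tier_1
```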



