---
title: B2B Intent Data Is BS (Unless You Do This)
description: Most B2B intent data fails because companies treat single signals as buying proof.
canonical: https://deep-y.com/blog/intent-data-b2b
author: Maor Raichman
date: 2026-04-13
---

# B2B Intent Data Is BS (Unless You Do This)

**Category:** Strategy | **Read time:** ~9 min | **Author:** Maor Raichman

B2B intent data has a reputation problem. Companies buy it, use it for 90 days, see mediocre results, and conclude that intent data does not work. The vendors respond by improving their data quality. The results stay mediocre. The debate continues.

The actual problem is not data quality. The problem is that companies treat a single intent signal as a buying indicator and act on it alone. That is not how buying intent works - and the misunderstanding is what makes intent data perform far below its potential.

## What Is B2B Intent Data?

B2B intent data is information about companies' online behaviors that suggests interest in a specific topic, product category, or problem. It comes in two forms:

**Third-party intent:** Aggregated behavioral data from content networks (Bombora, G2, TechTarget) showing which companies' employees are reading articles, comparing products, and visiting competitor pages across the web.

**First-party intent:** Behavioral data from your own properties - website visits, content downloads, webinar attendance, email opens, demo requests.

Both forms have value. Neither form alone is sufficient.

## Why Intent Data Fails: 4 Root Causes

### Failure 1: The Single-Signal Trap

A company visits your G2 page. That is a signal. But it could mean:

- A junior analyst doing competitive research for a presentation
- An existing customer comparing you to a competitor
- A student writing a paper on your industry
- Someone considering buying from you

The signal is real. The meaning is ambiguous. When companies treat a single intent signal as buying intent and launch aggressive sequences immediately, they reach a mix of genuine buyers and irrelevant lookers - diluting their reply rates and burning contacts who would have been warm if approached later.

### Failure 2: The Vendor Incentive Problem

Intent data vendors are incentivized to sell volume. Their metrics are intent scores, topic surges, and companies "in-market." What they are not measuring is whether those companies actually bought from anyone in the category - or whether the "intent signal" turned into a sale for their customers.

The incentive gap means vendor intent data quality is hard to verify independently - which is why results from these campaigns vary wildly from vendor to vendor and from campaign to campaign.

### Failure 3: The Timing Problem

Third-party intent data has a 2-4 week lag. By the time you act on an intent signal, the prospect may have already made their decision. A company that was actively comparing vendors in week 1 and receives your outreach in week 4 is past the evaluation window.

The solution is combining third-party intent with real-time first-party signals that have no lag - website visits, email engagements, content downloads - to reach prospects while the evaluation is still active.
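One way to operationalize that combination is a freshness check that back-dates third-party signals by their reporting lag. A minimal sketch, assuming a 3-week average lag and a 30-day evaluation window (both figures are illustrative, not vendor-published numbers):

```python
from datetime import date, timedelta

# Illustrative assumptions: third-party intent arrives ~3 weeks after the
# underlying activity; first-party signals arrive in real time.
THIRD_PARTY_LAG = timedelta(weeks=3)
EVALUATION_WINDOW = timedelta(days=30)

def evaluation_likely_open(received_on: date, is_third_party: bool, today: date) -> bool:
    """Estimate whether the prospect's evaluation window is still open.

    Back-dates third-party signals by the average reporting lag so the
    check reflects when the activity actually happened.
    """
    activity_date = received_on - (THIRD_PARTY_LAG if is_third_party else timedelta(0))
    return today - activity_date <= EVALUATION_WINDOW
```

A first-party pricing-page visit received today passes the check; a third-party surge received two weeks ago is back-dated five weeks and fails it.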

### Failure 4: The Context Problem

Intent signals tell you that a company is interested in a topic. They do not tell you why, at what stage of the buying process, with what budget, under what constraints, or with what alternatives being considered. Acting on raw intent signals without context produces outreach that is vaguely relevant but lacks the specificity that drives replies.

## The Multi-Signal Approach: What Actually Works

Instead of a single intent signal, effective intent-based targeting uses 6 categories of signals simultaneously and requires a company to match multiple signals before outreach begins.

**Signal Category 1: Hiring signals.** Job postings in roles that indicate a buying-window need. A company posting for a "Revenue Operations Manager" is building sales infrastructure. A company posting for a "Director of Customer Success" is scaling post-sales operations. These are observable, current signals with direct implications for what the company needs right now.

**Signal Category 2: Funding and growth signals.** Series A-D funding announcements, executive hires, office expansions, press releases about new initiatives. Post-funding companies have budget and are in active build mode.

**Signal Category 3: Technographic signals.** Tools the company is currently using (detectable via job postings and LinkedIn profiles), tools they are replacing, and tools their team is certified in. A company replacing their CRM is in an evaluation cycle that extends to adjacent tools.

**Signal Category 4: Behavioral signals.** G2 page visits, competitor comparison activity, review page visits, and similar third-party intent data. Used as one signal layer, not the primary signal.

**Signal Category 5: Social signals.** LinkedIn posts from decision-makers that indicate active pain, strategy changes, or initiative launches. A VP of Sales posting about building an outbound motion is explicitly signaling a need.

**Signal Category 6: First-party signals.** Website visits, content downloads, email opens, event registrations. The highest-confidence signal layer because you control the data and there is no lag.
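To make the six categories concrete, here is a minimal sketch of how observed signals could be recorded per account. The field names and the example account are hypothetical, not a vendor schema:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class SignalCategory(Enum):
    HIRING = "hiring"
    FUNDING = "funding"
    TECHNOGRAPHIC = "technographic"
    BEHAVIORAL = "behavioral"
    SOCIAL = "social"
    FIRST_PARTY = "first_party"

@dataclass
class Signal:
    company: str
    category: SignalCategory
    detail: str          # e.g. "posted: Revenue Operations Manager"
    observed_on: date    # first-party has no lag; third-party may lag 2-4 weeks

# Example: one hypothetical account accumulating signals across categories
signals = [
    Signal("acme.example", SignalCategory.HIRING, "posted: RevOps Manager", date(2026, 4, 1)),
    Signal("acme.example", SignalCategory.FIRST_PARTY, "pricing page visit", date(2026, 4, 10)),
]
distinct_categories = {s.category for s in signals if s.company == "acme.example"}
```

Tracking the distinct categories an account hits, not just a raw event count, is what lets the scoring step below require *correlated* signals rather than repeats of the same one.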

## The 3% Rule and Why Multi-Signal Matters

At any given moment, approximately 3% of your ICP is in an active buying cycle. That 3% is where your outreach should land. The remaining 97% is not ready, and reaching them aggressively produces low reply rates, spam complaints, and burned domain reputation.

The math:

- Market size: 1,000 ICP-fit companies
- Companies with any single intent signal (Bombora surge): 180-250
- Actual buyers in those 180-250: 18-25 (10-14% conversion from signal to actual buyer)
- Companies you missed who were actually buying: 5-12 (buyers who did not trigger the specific signal)
- False positive rate: ~90%

With multi-signal stacking:

- Companies matching 3+ correlated signals: 60-80
- Actual buyers in those 60-80: 28-35 (40-50% conversion from multi-signal to actual buyer)
- False positive rate: ~55%

The multi-signal approach cuts the false positive rate from roughly 90% to roughly 55% and captures most of the actual buyers in the market. That improvement is what drives the gap between 1-3% reply rates (single signal) and 8-15% reply rates (multi-signal stacking).
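The arithmetic above, computed from the midpoints of the stated ranges:

```python
# Worked numbers from the scenario above (midpoints of the stated ranges).
icp_companies = 1000

# Single-signal targeting (e.g. one Bombora surge)
single_flagged = 215   # midpoint of 180-250
single_buyers = 21     # midpoint of 18-25
single_fp_rate = 1 - single_buyers / single_flagged

# Multi-signal stacking (3+ correlated signals)
multi_flagged = 70     # midpoint of 60-80
multi_buyers = 31      # midpoint of 28-35
multi_fp_rate = 1 - multi_buyers / multi_flagged

print(f"single-signal false positives: {single_fp_rate:.0%}")  # 90%
print(f"multi-signal false positives:  {multi_fp_rate:.0%}")   # 56%
```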

## 5-Step Operational Guide

**Step 1: Define buying triggers, not demographics.** "VP of Sales at a 200-person SaaS company" is a demographic. "VP of Sales at a 200-person SaaS company that hired a new CRO in the last 60 days" is a buying trigger. The trigger is the event that makes your solution relevant right now.

**Step 2: Select 5-8 signals that correlate with your buying trigger.** For each trigger, identify which observable signals reliably predict a buying window. Test these against your closed-won data: what signals were present in your best customers 30-90 days before they bought?

**Step 3: Build a scoring model that requires 2-3 minimum signals.** No single signal qualifies a company for outreach. Set a threshold score and only contact companies above it. This eliminates the false positive problem.

**Step 4: Set up real-time monitoring, not periodic list pulls.** Intent data used monthly is already stale. Configure Clay or a similar tool to monitor signals continuously and add qualifying companies to your outreach queue automatically as they cross the threshold.

**Step 5: Personalize to the trigger, not just the demographic.** The first email references the specific signal that qualified the contact - "Saw you hired a new CRO last month" is more specific and more credible than "I help VP Sales at growing SaaS companies." The signal is the reason for the outreach; reference it explicitly.
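Steps 2 and 3 can be sketched as a small scoring model. The signal names, weights, and thresholds below are placeholders to be calibrated against your own closed-won data, not recommended values:

```python
# Hypothetical signal weights - calibrate against closed-won accounts (Step 2).
SIGNAL_WEIGHTS = {
    "new_cro_hired_60d": 3,     # buying trigger
    "revops_job_posting": 2,    # hiring signal
    "recent_funding_round": 2,  # funding signal
    "g2_page_visit": 1,         # behavioral signal
    "pricing_page_visit": 2,    # first-party signal
}

MIN_SIGNALS = 2  # Step 3: no single signal qualifies a company
MIN_SCORE = 4    # threshold score required before outreach begins

def qualifies(observed: set[str]) -> bool:
    """True when an account crosses both the signal-count and score thresholds."""
    matched = observed & SIGNAL_WEIGHTS.keys()
    score = sum(SIGNAL_WEIGHTS[s] for s in matched)
    return len(matched) >= MIN_SIGNALS and score >= MIN_SCORE
```

A lone G2 visit (one signal, score 1) never qualifies; a new CRO hire plus a pricing-page visit (two signals, score 5) does. Requiring both a count and a score keeps one strong signal from qualifying an account on its own.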

## Deep-Y's 50-Signal Intelligence Layer

The campaigns we run for clients typically monitor 50+ signals per target account across the 6 categories above. For AirCentral, the signal stack was built around 3 primary triggers: building ownership changes, facilities management hires, and HVAC-related maintenance coordinator postings. Those 3 triggers (out of 50 monitored) identified 312 of 8,000 ICP-fit accounts in AirCentral's territory that were in active buying windows.

312 out of 8,000. That is the filtering power of signal stacking versus demographic outreach alone.

Reaching those 312 companies with targeted, trigger-personalized outreach produced an 89% open rate and a 64% reply-to-meeting conversion. Reaching all 8,000 with the same copy would have produced an open rate of approximately 22% and a reply rate of 1-2% - and burned AirCentral's sending infrastructure from the volume.

## Proof Numbers

- $540K pipeline generated in 90 days from signal-targeted outreach
- 89% open rate (industry average: 22%)
- $0.30 cost per qualified lead (industry average: $80-200)
- 18,247+ qualified leads generated over a 12-week campaign

## Frequently Asked Questions

**What is B2B intent data?**
B2B intent data is information about companies' online behaviors - content consumption, competitor page visits, product research - that suggests interest in a specific topic or buying category. It comes in two forms: third-party intent (behavioral data aggregated from content networks like Bombora and G2) and first-party intent (behavioral data from your own properties - website visits, content downloads, demo requests). Intent data is most valuable when combined with other signal types rather than used in isolation.

**Does B2B intent data actually work?**
It depends on how it is used. Companies that use a single intent signal as their primary targeting criterion see mediocre results because the false positive rate is 85-90% - most of the companies triggering an intent signal are not actually in a buying cycle. Companies that combine 3-5 correlated signals in a scoring model see significantly better results - false positive rates drop to 50-60%, and the contacts that do qualify convert at much higher rates.

**What is the best B2B intent data provider?**
Bombora is the most comprehensive for third-party intent data at enterprise scale. G2 is the best for software categories where review comparison behavior is a strong buying signal. TechTarget is strongest for technology infrastructure buyers. For most B2B programs, first-party signals (your own website and engagement data) combined with real-time hiring and firmographic data from Clay produce better results than any third-party intent provider alone.

**How do you use intent data for cold email?**
Build a signal scoring model that requires 2-3 correlated signals before adding a company to your outreach queue. Configure real-time monitoring to catch companies as they cross the threshold. Write first-touch emails that reference the specific signal that triggered the outreach - "Saw your team posted for a Revenue Operations Manager" rather than generic ICP-based outreach. The signal is the reason for the email; make it explicit.

**What signals work better than intent data?**
Real-time buying triggers typically outperform aggregated intent data because they are current (no 2-4 week lag), specific (tied to a concrete event rather than general topic interest), and verifiable (the prospect can confirm the trigger is accurate). Hiring signals, funding announcements, leadership changes, and technology stack changes are all real-time observable events that indicate a buying window more reliably than content consumption patterns alone.
