Case Study · April 12, 2026 · 10 min read

How We Generated 18,000 B2B Leads at $0.30 Each (Full Breakdown)

The industry average B2B cost-per-lead sits between $50 and $200. For Aliro Immigration, we hit $0.30. This is the exact method - the signal stack, the scoring model, the sequence, and the numbers - with nothing held back.

Maor Raichman
Marketing Systems Builder, Deep-Y

Let me give you a number to hold in your head: $137. That's the Forrester Research median cost-per-lead for B2B organizations across all industries. In professional services - the category immigration law falls squarely into - that number climbs to $160 - $200. It's the kind of figure that gets accepted as a cost of doing business, built into budget models, defended in board decks.

We don't accept it. For Aliro Immigration, a boutique B2B immigration law firm targeting corporate HR teams, we built a complete outbound lead generation system that produced 18,247 qualified contacts at $0.30 per lead - a 99.8% reduction versus the industry benchmark. This post is the full breakdown: what we built, how we scored leads, what the emails looked like, and why this worked when most outbound programs don't.

"The scoring model meant the list was already pre-qualified before the first email went out. We weren't spraying and praying - every contact in the queue had already raised their hand three different ways."

The Client and the Problem

Aliro Immigration is a B2B immigration law firm specializing in work authorization for international employees. Their clients aren't individuals seeking green cards - they're corporate HR teams and people managers at mid-size technology and finance companies who sponsor H-1B visas, Canadian work permits, and global mobility cases on behalf of their employees.

It's a well-defined, high-value niche. The average retainer value is significant, the client relationship is sticky (companies don't change immigration counsel mid-petition season), and the total addressable market is measurable. On paper, it's an ideal outbound target.

The reality was less clean. When Aliro came to us, their entire pipeline ran on referrals. The founding partner was well-networked. A few enterprise clients had sent warm introductions. Word had spread through the local tech community. But the pipeline was entirely reactive - it moved only when someone else moved first. There was no scalable outbound system, no way to predictably fill the calendar, and no mechanism to grow faster than the referral network allowed.

The brief was straightforward: build outbound from scratch. Target HR managers and People Operations leaders at mid-size companies with active or anticipated international hiring needs. Generate qualified conversations at a defensible cost. Don't buy a list.

Step 1: Building the Signal Stack

The phrase "don't buy a list" isn't philosophical - it's practical. Purchased lists are stale by the time you pay for them. Contact data changes. Company context evaporates. And more importantly, a list tells you nothing about timing. You might have the right company, right title, right industry - and still send an email to someone who has zero reason to care right now.

So instead of a list, we built a monitoring system. A signal stack. A set of five observable data streams that, when they fire, indicate a company is actively navigating international hiring - or is about to.

1. International job postings

Companies actively listing roles that require or offer international relocation - engineering roles in Toronto, finance positions in London, product roles flagged as "global." These are the most direct buying signal: they're staffing internationally right now.

2. Recent Series A/B funding events

Early-stage funding rounds correlate directly with headcount growth. Companies that close a Series A or B typically hire 30 - 80% more staff in the following 12 months. International hiring scales alongside domestic hiring - and new-money companies don't yet have immigration counsel on retainer.

3. H-1B employer history

The U.S. Department of Labor publishes H-1B employer data annually. Companies that have filed petitions before are almost certain to file again. We filtered for tech and finance employers in the 50 - 500 employee range with recent filing history - companies that clearly hire internationally but are small enough not to have in-house immigration teams.

4. LinkedIn posts mentioning "visa sponsorship"

When a recruiter or HR manager posts a job on LinkedIn and includes "we offer visa sponsorship," they are broadcasting their need. We monitored these at scale - not just the formal job posts, but the conversational updates from people managers discussing international hiring challenges.

5. Hiring velocity spikes

A sudden 40%+ increase in open roles at a company (measured week-over-week against their trailing 90-day average) predicts chaos inside the HR function. Fast-growing companies hire internationally because the local talent pool can't fill the funnel fast enough. A spike is a buying trigger.

The signal monitoring ran continuously. Every qualifying event was timestamped, tagged to a company record, and scored within 24 hours of firing.
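The event-logging step described above can be sketched as a small record type. This is a minimal illustration, not Deep-Y's actual implementation - the signal identifiers and class names are assumptions made for the example:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical identifiers for the five signal streams described above.
SIGNALS = {
    "intl_job_posting",
    "visa_sponsorship_post",
    "h1b_filing_history",
    "series_ab_funding",
    "hiring_velocity_spike",
}

@dataclass
class SignalEvent:
    """One qualifying event: timestamped and tagged to a company record."""
    company_id: str
    signal: str
    observed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def __post_init__(self):
        # Reject anything outside the known signal taxonomy.
        if self.signal not in SIGNALS:
            raise ValueError(f"unknown signal: {self.signal}")

# Usage: a monitoring job appends events as they fire.
event = SignalEvent("acme-co", "intl_job_posting")
```

The point of the structure is that every event carries its own timestamp, which is what makes the 30-day scoring window in the next step possible.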

Step 2: The Scoring Model

Signals alone don't produce leads - scoring does. A company that hit one signal might be coincidental. A company that stacked two or more signals past the threshold in the same 30-day window was almost certainly in active motion around international staffing. That's who we wanted in the queue.

We built a weighted scoring model. The weights weren't arbitrary - each signal sits at a different distance from the actual purchase decision, and the points reflect that distance. An active international job posting means the company is already spending money to find international candidates right now; immigration counsel is the next call they need to make. A visa sponsorship post on LinkedIn is the same immediacy but slightly lower confidence because a single post could be exploratory. H-1B filing history tells you they've solved this problem before through a law firm - high confidence in behavior pattern, lower confidence that the timing is right this quarter. A Series A/B close predicts hiring growth 6 - 12 months out - real signal, but future-facing, so it gets a lower weight. A hiring velocity spike is corroborating context, not standalone intent; it tells you a company is under staffing pressure but doesn't confirm international scope on its own.

| Signal | Point value | Why this weight | Tier |
|---|---|---|---|
| International job posting (active) | 40 pts | Current spend = current need. They're paying to recruit cross-border today - authorization is an immediate downstream problem. | High |
| "Visa sponsorship" LinkedIn post | 35 pts | Explicit, self-declared need - the HR manager or recruiter is broadcasting the problem publicly in real time. One signal gap: could be a single role, not a program. | High |
| H-1B filing history (last 24 months) | 25 pts | Prior behavior is the strongest predictor of future behavior. Companies that filed H-1Bs before will file again - but "before" isn't the same as "right now," hence lower than active signals. | Medium |
| Series A/B closed (last 6 months) | 20 pts | New capital precedes headcount growth by 3 - 9 months. The need is coming - it's not here yet. Enough to queue for monitoring; not enough to trigger outreach alone. | Medium |
| Hiring velocity spike ≥40% | 15 pts | Scaling pressure is real, but a domestic hiring surge doesn't confirm international scope. Valuable as a corroborating signal - it pushes a borderline account over the threshold, not across it alone. | Support |

The threshold for active outreach queue: 60 points or more. To hit 60, a company needs at least two signals firing in the same 30-day window - and the combinations matter. A company with an active international posting (40 pts) plus H-1B history (25 pts) scores 65: they're hiring cross-border now and have handled immigration before, which means they understand the category and are likely to act on an introduction. A company with only a Series A (20 pts) and a velocity spike (15 pts) scores 35 - they might be heading toward international hiring in six months, but they aren't there today. They go into the passive monitoring queue.

Companies that scored 40 - 59 points stayed in passive monitoring: we watched them, logged each new signal event, and waited for the threshold crossing. This matters operationally - it meant the active outreach queue stayed focused and our sending infrastructure was never diluted by marginal accounts that would have dragged down reply rates and domain health simultaneously.
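The scoring and routing logic can be sketched in a few lines. The weights and both thresholds come directly from the model above; the function names and event tuples are illustrative assumptions:

```python
from datetime import datetime, timedelta

# Weights from the scoring table; thresholds from the post.
WEIGHTS = {
    "intl_job_posting": 40,
    "visa_sponsorship_post": 35,
    "h1b_filing_history": 25,
    "series_ab_funding": 20,
    "hiring_velocity_spike": 15,
}
ACTIVE_THRESHOLD = 60   # active outreach queue
MONITOR_THRESHOLD = 40  # passive monitoring queue

def score(events, now, window_days=30):
    """Sum weights for distinct signals that fired inside the window."""
    cutoff = now - timedelta(days=window_days)
    recent = {sig for sig, ts in events if ts >= cutoff}
    return sum(WEIGHTS[sig] for sig in recent)

def route(events, now):
    """Place a company in one of three queues based on its score."""
    s = score(events, now)
    if s >= ACTIVE_THRESHOLD:
        return "active_outreach"
    if s >= MONITOR_THRESHOLD:
        return "passive_monitoring"
    return "out_of_scope"

now = datetime(2026, 3, 1)
events = [
    ("intl_job_posting", datetime(2026, 2, 20)),    # 40 pts
    ("h1b_filing_history", datetime(2026, 2, 10)),  # 25 pts
]
print(route(events, now))  # active_outreach (65 pts)
```

Note that the Series A + velocity spike combination from the example above (20 + 15 = 35 pts) routes to neither queue under this sketch, which matches the post's "they aren't there today" reasoning.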

The payoff was this: by the time a contact entered the outreach sequence, they had already raised their hand in at least two observable ways. The email wasn't cold. It was the first time they'd heard from Aliro - but it was the second or third time they'd signaled the need.

Step 3: The Outreach System

Most cold outreach fails for one of three reasons: wrong audience, wrong timing, or generic messaging. The signal stack handled audience and timing. Messaging was the final variable - and it's where most teams revert to copy-paste templates that undo everything the targeting work achieved.

We built signal-based personalization. Each email referenced the specific signal that triggered the contact's inclusion in the queue. This wasn't merge-field personalization ("Hi {{first_name}}, I noticed you're at {{company}}"). It was contextual - the opening line reflected actual, specific, real-time context about what the company was doing right now.
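Mechanically, signal-based personalization is a lookup keyed by the triggering signal plus its stored context. The template wording below is invented for illustration - it is not the campaign's actual copy - and the field names are assumptions:

```python
# Hypothetical opening-line templates keyed by the triggering signal.
OPENERS = {
    "intl_job_posting": (
        "Saw you're hiring for {role} in {location} - "
        "work authorization usually becomes the bottleneck fast."
    ),
    "visa_sponsorship_post": (
        "Your post about offering visa sponsorship for {role} caught my eye."
    ),
    "series_ab_funding": (
        "Congrats on the {round} - cross-border hiring tends to follow."
    ),
}

def opening_line(contact):
    """Render the opener from the signal stored on the contact record."""
    template = OPENERS[contact["triggering_signal"]]
    return template.format(**contact["signal_context"])

contact = {
    "triggering_signal": "intl_job_posting",
    "signal_context": {"role": "Senior Backend Engineer",
                       "location": "Toronto"},
}
print(opening_line(contact))
```

The key design point: the opener is derived from observed behavior, not from merge fields, so every rendered line is specific to what the company actually did.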


The personalization token wasn't just cosmetic - it signaled research. And in B2B outreach, the perception of genuine research is the single biggest driver of reply rate.

The sequence ran three touches across 10 business days.

Volume ran at 400 contacts per week across a rotating set of warmed sending domains. Each domain was strictly managed for deliverability: custom SPF/DKIM/DMARC, warm-up period enforced before any cold volume, bounce monitoring, and daily infrastructure health checks. Open rates across the campaign averaged 90% - nearly 4x the B2B cold email industry benchmark of 22 - 25%.
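One way to picture the volume management: the weekly queue is round-robined across warmed domains with a hard per-domain daily cap. The post doesn't specify the domain count or caps, so the numbers below (4 domains, 20 sends/day, 5 sending days) are assumptions chosen only because they multiply out to 400:

```python
import itertools

def distribute(contacts, domains, per_domain_daily_cap=20, days=5):
    """Round-robin weekly volume across domains, capped per domain per day."""
    capacity = len(domains) * per_domain_daily_cap * days
    if len(contacts) > capacity:
        raise ValueError("weekly volume exceeds sending capacity")
    # Cycle through (day, domain) slots so load spreads evenly.
    slots = itertools.cycle(
        (day, dom) for day in range(days) for dom in domains
    )
    return [(day, dom, contact)
            for contact, (day, dom) in zip(contacts, slots)]

domains = ["mail-a.example.com", "mail-b.example.com",
           "mail-c.example.com", "mail-d.example.com"]
week = [f"contact-{i}" for i in range(400)]
schedule = distribute(week, domains)
# 400 sends: 4 domains x 20/day x 5 days, no domain ever over its cap.
```

The hard capacity check is the part that matters for domain health: the system refuses to over-send rather than silently exceeding a cap.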

A common question at this stage: should outreach like this be run by a human SDR team or an AI system? The 400-contacts-per-week figure above was fully automated - no SDR headcount required. We cover the AI SDR vs human SDR question in full here →

The Numbers

Here's the full campaign performance breakdown, measured over a 12-week active campaign period:

- 18,247 total qualified leads
- $0.30 cost per lead (CPL)
- 90% email open rate
- 64% reply-to-meeting rate*
- 400 contacts per week
- 12-week campaign duration

*64% reply-to-meeting rate measured on contacts who replied positively - not total sequence volume. This metric reflects lead quality, not raw outreach performance.

The $0.30 CPL reflects total campaign cost - data infrastructure, outreach tooling, deliverability stack, and management overhead - divided across all qualified leads generated. A "qualified lead" was defined as any contact that either (a) replied positively to the sequence or (b) booked a call directly from the email. Passive opens were not counted as leads.

To contextualize the gap from industry benchmarks: if Aliro had run this campaign at the $137 median B2B CPL, the same 18,247 leads would have cost $2.5 million. At $0.30, the campaign cost was under $5,500 in total. That's a $2.49M cost delta on a single campaign.
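The arithmetic behind those figures is easy to verify:

```python
leads = 18_247
benchmark_cpl = 137.00  # Forrester median B2B cost-per-lead
actual_cpl = 0.30       # this campaign

benchmark_cost = leads * benchmark_cpl  # what the median CPL would cost
actual_cost = leads * actual_cpl        # what the campaign actually cost
delta = benchmark_cost - actual_cost

print(f"benchmark: ${benchmark_cost:,.0f}")  # $2,499,839
print(f"actual:    ${actual_cost:,.0f}")     # $5,474
print(f"delta:     ${delta:,.0f}")           # $2,494,365
```

Which rounds to the $2.5M / under-$5,500 / $2.49M figures quoted above.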

"At the industry median CPL of $137, these 18,247 leads would have cost $2.5 million. We delivered them for under $5,500. That's not an efficiency gain - it's a different category of economics."

What Made This Work

The instinct most growth teams have when they want more leads is to increase volume. Send more emails. Buy a bigger list. Hire more SDRs. More inputs, more outputs - that's the mental model.

It's the wrong model. The Aliro campaign worked because we inverted it: we made the list smaller before we touched it. The scoring model was a precision filter, not an amplifier. The 18,247 contacts who received outreach weren't drawn from a pool of 18,247 - they were selected from a universe of roughly 380,000 companies that matched Aliro's broad ICP on paper. The scoring model reduced that pool by 95%+ before the first email went out.

This changes what outreach does. When you send to 18,000 pre-qualified contacts instead of 380,000 generic ones, several things happen simultaneously: reply rates climb because relevance is high, sending domains stay healthy because far fewer recipients flag the mail, and signal-based personalization stays specific because every contact has a known trigger on record.

Precision doesn't limit scale. It enables it. The Aliro campaign proves that 18,000 qualified leads is achievable at volume - but only if the work upstream of volume is done correctly.

Could You Replicate This?

Honest answer: yes, if you have the right signals for your ICP. The Aliro framework is not industry-specific or proprietary in its structure. What makes it work is the signal design - the identification of observable, timely data points that indicate active intent in your target market. Every B2B company has these. Most haven't mapped them.

Here's how we'd think about applying this framework to a different business:

Start with the buying trigger, not the persona

Most ICP exercises start with a job title or company size. Start instead with the question: what has to be true about a company's situation for them to buy from me right now? For Aliro, the answer was "they are actively navigating international hiring." That's the buying trigger. The signals are just observable proxies for that trigger. Map the trigger first - the signals will follow.

Find signals that are public or semi-public

The five signals we tracked for Aliro were all observable without buying any proprietary data. Job posts are public. LinkedIn activity is public. Funding announcements are public. H-1B filings are public record. Not every signal stack will be this clean, but the principle holds: before reaching for expensive intent data, map what's already visible.

Score ruthlessly

The threshold - 60 points, which in practice requires at least two concurrent signals - is what created the economics. A lower threshold would have flooded the queue with marginally relevant contacts, driven up volume, driven down quality, and destroyed the CPL. The scoring cutoff is a business decision, not just a technical one. If your team can't handle the conversation the email will generate, the contact shouldn't be in the queue.

Build personalization into the sequence infrastructure

Signal-based personalization requires that the triggering signal be stored against the contact record and available at send time. This is a data architecture decision, not a copywriting decision. Plan for it before you write a single email.
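A minimal sketch of that data-architecture decision: capture the triggering signal and its context on the contact record at queue time, so send time needs no second lookup. The field names here are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class QueuedContact:
    """Contact record frozen at the moment it enters the outreach queue."""
    email: str
    company_id: str
    triggering_signal: str   # e.g. "intl_job_posting"
    signal_context: dict     # role, location, funding round, etc.
    queued_at: datetime
    score_at_queue_time: int

c = QueuedContact(
    email="hr-lead@acme.example",
    company_id="acme-co",
    triggering_signal="intl_job_posting",
    signal_context={"role": "Backend Engineer", "location": "Toronto"},
    queued_at=datetime(2026, 2, 21),
    score_at_queue_time=65,
)
```

Freezing the record matters: the signal that justified the outreach is preserved even if the company's live score changes between queueing and sending.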

The complete framework - signal mapping, scoring model design, sequence architecture, deliverability infrastructure, and personalization logic - is what Deep-Y builds for clients. The Aliro campaign was built in under three weeks from brief to first send. Results started arriving in week two.

Want this for your pipeline?

We'll map the right signals for your ICP on a free 60-minute call - no pitch, just the framework applied to your market. Two spots available this month.

The Takeaway

The $137 B2B CPL benchmark is a symptom of how most outbound programs are designed: start with a large list, spray it with generic messaging, and filter the survivors. The math looks reasonable until you account for the waste embedded in every stage of that process.

The signal-based approach inverts the model. You do the filtering work before outreach begins - not after. The result is a smaller list of dramatically better contacts, personalization that lands because it's specific, and economics that don't require a million-dollar budget to generate meaningful pipeline.

18,247 leads. $0.30 each. 12 weeks. The model works.

If you run a B2B lead generation program and want to benchmark your current CPL against what's achievable with the right infrastructure, the free strategy call is the right first step. We'll show you exactly what signals exist for your market and what a scoring model for your ICP would look like - before you've committed to anything.