"Is my reply rate good?" is the single most asked question we hear from new customers. And it's a surprisingly hard one to answer honestly, because nearly every vendor you'll read has an incentive to quote a number that makes their product look magical. Forty percent. Sixty percent. Higher.
Those numbers aren't lies, exactly. They describe the top decile of carefully curated campaigns. But they're not what you should expect in month one with a cold list, and benchmarking against them only makes a healthy campaign feel like a failure.
Here is an honest look at LinkedIn reply rate benchmarks as of Q2 2026 — what "good" looks like, how it varies by industry and ICP, and how to tell whether your numbers point to a targeting problem, a message problem, or a profile problem.
1. Start With What "Reply Rate" Actually Means
Before we talk numbers, let's align on definitions. Most reporting lumps everything together, and the result is a metric that measures nothing useful.
- Raw reply rate: Any reply, including auto-responders, "unsubscribe," and "wrong person." Useful as a sanity check but not as a performance metric.
- Human reply rate: Strips out automated replies and out-of-offices. This is the number most platforms quote.
- Positive reply rate: Replies that express interest, ask for more info, or book a call. This is the only number that predicts pipeline.
- Reply-to-connection rate: Of the prospects who accepted your request, what share eventually replied to a message? This is the real yardstick for message quality.
When somebody quotes a "reply rate" without specifying which of these definitions they mean, assume they mean raw reply rate. It's the biggest-looking number. Ask which definition they're using before you compare yourself to it.
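One way to keep the four definitions from blurring together is to compute them side by side. A minimal sketch, assuming a hypothetical per-prospect log — the field names and reply categories here are illustrative, not any platform's API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Prospect:
    accepted: bool               # accepted the connection request
    reply: Optional[str] = None  # None, "auto", "ooo", "wrong_person",
                                 # "not_interested", "neutral", "positive"

AUTOMATED = {"auto", "ooo"}  # stripped out of the human reply rate

def rates(prospects):
    n = len(prospects)
    # Human replies: anything written by a person, including "wrong person".
    human = [p for p in prospects if p.reply and p.reply not in AUTOMATED]
    accepted = [p for p in prospects if p.accepted]
    return {
        "raw_reply_rate": sum(p.reply is not None for p in prospects) / n,
        "human_reply_rate": len(human) / n,
        "positive_reply_rate": sum(p.reply == "positive" for p in prospects) / n,
        # Denominator changes here: accepted connections, not everyone messaged.
        "reply_to_connection_rate":
            sum(p.accepted for p in human) / len(accepted) if accepted else 0.0,
    }
```

Note that only the last metric changes the denominator; the first three differ in which replies count in the numerator.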
2. The Honest Baselines by Channel
Across the campaigns we see run through Infonet, the following ranges represent the middle 60% of performance — not the cherry-picked tops, not the failing bottom.
- Connection request with a note: 25–45% acceptance rate. Above 45% means exceptional targeting or an unusually strong profile. Below 20% means the list, the note, or the profile needs work.
- First message after acceptance: 18–32% human reply rate. This is the metric that reflects message craft.
- Full sequence (4–6 touches over 3 weeks): 28–48% human reply rate. Roughly 60% of replies come from follow-ups, not the first message.
- InMail (Sales Nav Premium): 10–22% human reply rate. Lower than post-accept messaging because the recipient has no prior context.
- Positive reply rate (any channel): 6–14% of prospects messaged. Every reply beyond those is noise you'll have to filter.
If your numbers land inside these ranges, the campaign is working. Optimizing from "within range" to "top of range" is a different problem than rescuing a campaign that's outside the range entirely.
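To check your own numbers against these ranges programmatically, the list above can be encoded as data. A small sketch — the metric keys are my own labels, not a standard vocabulary:

```python
# The middle-60% ranges quoted above, as (low, high) fractions.
BASELINES = {
    "connection_accept":   (0.25, 0.45),
    "first_message_reply": (0.18, 0.32),
    "full_sequence_reply": (0.28, 0.48),
    "inmail_reply":        (0.10, 0.22),
    "positive_reply":      (0.06, 0.14),
}

def position(metric: str, value: float) -> str:
    """Place a measured rate relative to its baseline range."""
    lo, hi = BASELINES[metric]
    if value < lo:
        return "below_range"   # rescue mode: diagnose before optimizing
    if value > hi:
        return "above_range"   # exceptional targeting or a warm list
    return "within_range"      # working: optimize, don't rebuild
```

The point of the three buckets is the one made above: moving from "within range" to "top of range" is a different project than getting out of "below range."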
3. How Industry Shifts the Benchmark
Industry matters more than most people acknowledge. Reply rates in high-signal, high-spam verticals (crypto, agency services to SMBs, generic "AI" tools) are structurally lower because recipients are being pitched constantly. Rates in quieter verticals are higher because every message is a novelty.
Rough adjustments we see consistently:
- SaaS to SaaS: At or slightly below baseline. Buyers get a lot of mail.
- Agency / services to SMB: 30–40% below baseline. Extremely crowded inbox.
- Enterprise-only tools: 10–20% above baseline. Fewer senders, higher signal.
- Manufacturing, logistics, industrial: 20–40% above baseline. Far less outreach competition.
- Healthcare administration (not clinical): At baseline. Long cycles but engaged buyers.
- Legal, accounting: Below baseline. Professional skepticism is high. But replies convert well when they come.
4. Seniority Changes the Dynamic
The executive myth — "C-suite never replies" — is half true. Executives reply less often in absolute terms, but the replies that do come are more qualified. A 9% reply rate to VPs with a 60% positive-reply ratio beats a 28% reply rate to managers with an 18% positive ratio (5.4 qualified replies per 100 prospects versus roughly 5.0), every time.
- Individual contributors and SDRs: 30–45% reply rate. Noisy. Lots of "not my role."
- Managers: 22–35% reply rate. Your sweet spot for volume plays.
- Directors: 14–24% reply rate. Highest-signal layer for most B2B campaigns.
- VPs: 8–16% reply rate. Brief messages win. Anything over 150 words is auto-archived.
- C-suite: 4–11% reply rate. Referrals and peer intros beat cold by orders of magnitude.
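The expected-value arithmetic behind that VP-versus-manager comparison is worth making explicit, because it's the calculation to run on any seniority band before dismissing a "low" reply rate:

```python
def positives_per_100(reply_rate: float, positive_ratio: float) -> float:
    """Expected qualified (positive) replies per 100 prospects messaged."""
    return 100 * reply_rate * positive_ratio

# The example from the text: VPs reply less, but more of those replies qualify.
vp  = positives_per_100(0.09, 0.60)  # ~5.4 qualified replies per 100
mgr = positives_per_100(0.28, 0.18)  # ~5.0 qualified replies per 100
```

The headline reply rate differs by a factor of three; the pipeline-relevant number is nearly identical.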
If you're pitching C-suite with the same message you pitched managers, you are silently selecting for the bottom of that 4–11% range when the top of it is within reach.
5. Message Type Also Matters
Not all first messages carry the same conversion load. A "question" message and a "pitch" message produce very different numbers, even when both are personalized.
- Question-led opener: "Curious how your team handles X?" Typical range: 24–38%. Lowest friction.
- Observation-led opener: Referencing a recent post or announcement. Typical range: 22–34%. Requires real research to avoid feeling fake.
- Pitch-led opener: "We help teams like yours do Y." Typical range: 11–19%. Feels like a sales pitch because it is one.
- Resource-led opener: "Put together a short teardown of your approach to X." Typical range: 20–32%. Works well when the resource is actually useful.
6. How to Measure Yours Correctly
Most outreach teams measure reply rate wrong in one of three ways: they include auto-responders, they mix send dates with reply dates, or they lump new and follow-up replies together. The cleanest measurement:
- Pick a closed cohort — every prospect messaged between two specific dates.
- Wait at least 21 days after the last message in the sequence to let late replies land.
- Classify every reply into positive, neutral, not interested, out-of-office, wrong person.
- Report human reply rate and positive reply rate separately. Don't collapse them.
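The four steps above can be sketched as code. This assumes hypothetical field names for the prospect records and treats "wrong person" as a human reply (a person wrote it), consistent with the definitions in section 1:

```python
from datetime import date, timedelta

# Human replies exclude only automated responses and out-of-offices.
HUMAN = {"positive", "neutral", "not_interested", "wrong_person"}

def closed_cohort(prospects, start: date, end: date):
    # Step 1: everyone first messaged between two specific dates.
    return [p for p in prospects if start <= p["first_messaged"] <= end]

def is_settled(cohort, today: date, wait_days: int = 21) -> bool:
    # Step 2: wait at least 21 days after the last send for late replies.
    last_send = max(p["last_messaged"] for p in cohort)
    return today >= last_send + timedelta(days=wait_days)

def report(cohort):
    # Steps 3-4: classify every reply, then report the two rates separately.
    n = len(cohort)
    human = sum(p.get("reply") in HUMAN for p in cohort)
    positive = sum(p.get("reply") == "positive" for p in cohort)
    return {"human_reply_rate": human / n, "positive_reply_rate": positive / n}
```

The discipline is in `closed_cohort` and `is_settled`: no prospect joins or leaves after the window closes, and no report ships before late replies have landed.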
Campaigns measured this way look different from campaigns measured loosely. You'll typically see 20–30% fewer "replies" but the number will be actionable. You can now tell whether a change moved the metric or just moved noise.
7. Reply Rate Is Sometimes the Wrong North Star
A 35% reply rate that produces zero meetings is a worse outcome than a 14% reply rate that produces a quarter of your pipeline. The higher rate usually means you're attracting a lot of "thanks but no thanks" politeness, or you're filtering for people who reply to everyone — neither predicts buying.
Healthier metrics once you're out of the cold-start phase:
- Meetings per 100 prospects messaged — the true pipeline metric.
- Cost per meeting booked — captures both volume and labor.
- Reply quality score — a simple 1-to-5 rubric applied to every reply, averaged weekly.
- Days from first touch to meeting — a shortening number means your sequence is working, even if reply rate is flat.
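The first two of those metrics are simple ratios, but writing them down ensures everyone on the team computes them the same way:

```python
def meetings_per_100(meetings: int, prospects_messaged: int) -> float:
    # The true pipeline metric: meetings booked per 100 prospects messaged.
    return 100 * meetings / prospects_messaged

def cost_per_meeting(total_spend: float, meetings: int) -> float:
    # Captures both volume and labor; total_spend should include
    # tooling plus the time spent writing and reviewing messages.
    return total_spend / meetings
```

A campaign with 3 meetings from 150 prospects is at 2.0 meetings per 100 — a number you can compare across cohorts regardless of their size.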
8. Where Most Campaigns Leak
If your numbers are meaningfully below the ranges above, the leak is almost always in one of four places. Diagnose in this order, top to bottom:
- Targeting: If acceptance rate is below 20%, you're contacting people who don't recognize themselves as a fit for what you sell. No message fixes this.
- Profile: If acceptance is fine but first-message reply is under 15%, prospects are accepting out of politeness and then visiting your profile and bouncing. Fix the headline and About section before changing the message.
- Message craft: If acceptance and visits are fine but replies are low, the first message is either too long, too pitchy, or indistinguishable from every other vendor's opener.
- Follow-up cadence: If message 1 performs okay but total sequence reply rate is flat, your follow-ups are probably "just checking in" and adding no value. Every follow-up needs a new angle.
Almost nobody's reply-rate problem is actually a reply-rate problem. It's usually a targeting problem or a profile problem dressed up as a messaging problem.
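The top-to-bottom order above can be written as a single ordered check. The thresholds are the ones quoted in this section and section 2; the follow-up test (sequence rate barely above the message-one rate) uses an illustrative 5-point margin that you should tune to your own data:

```python
def diagnose(acceptance: float, first_msg_reply: float,
             sequence_reply: float) -> str:
    """Return the most likely leak, checked in the order above."""
    if acceptance < 0.20:
        return "targeting"        # the list doesn't fit the offer
    if first_msg_reply < 0.15:
        return "profile"          # accepted out of politeness, then bounced
    if first_msg_reply < 0.18:    # below the 18-32% first-message baseline
        return "message_craft"    # too long, too pitchy, or generic
    if sequence_reply <= first_msg_reply + 0.05:
        return "follow_up_cadence"  # follow-ups adding no new angle
    return "within_range"
```

Note the order matters: a bad list produces a low first-message rate too, so checking targeting first keeps you from rewriting a message that was never the problem.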
9. Where AI Personalization Changes the Math
AI-generated personalization — when it's grounded in the prospect's actual recent activity and not just a merge field with a name — lifts first-message reply rate by roughly 8–14 percentage points in the campaigns we've instrumented. It moves a 20% baseline campaign to 30%, not to 60%.
Platforms like Infonet pull the prospect's recent posts, the company's recent news, and the context of how you found them to produce a first line that could only have been written to that specific person. That single line is doing most of the lift — not the rest of the message.
Be honest with yourself about where AI actually helps: openers, first-line hooks, and follow-up angles. It does not fix targeting. It does not fix a profile that looks like a resume. It does not fix asking for a 30-minute call in message one.
10. A Sane Target Curve for a New Campaign
If you're starting a new sequence from scratch, don't expect top-of-range numbers in week one. A realistic curve:
- Weeks 1–2: Below baseline. You're still fixing wrong targeting assumptions and awkward message wording. Review every reply yourself.
- Weeks 3–4: Baseline range. You've cut the obviously bad segments and rewritten the opener twice.
- Weeks 5–8: Upper half of baseline. You've stabilized which segments convert and which don't.
- Weeks 9–12: Potentially top-of-range for your best ICP segment. The worst segments are excluded entirely.
Campaigns that start above baseline in week one are usually working from a warm list or a referral graph. Cold from scratch takes a month to tune. If you treat that tuning period as failure, you'll change everything twice and learn nothing. Give each hypothesis a real cohort and three weeks.
Where to Go From Here
Benchmarks are useful only as a diagnostic — they tell you whether you have a problem worth solving. The harder work is figuring out which problem you actually have. Start by splitting your reply rate into the four diagnostic buckets above. One of them will be clearly worse than the others. That's where the next week of effort goes.
And resist the urge to A/B test everything at once. Change one variable per cohort. Let the data breathe. A month from now you'll have a campaign that hits the top of the range for your industry and ICP, and you'll know why.