AI Killed the ICP
Everyone can now be your customer
Common startup advice of the B2B SaaS era says:
pick your ICP and focus
That was right.
But it’s not anymore.
The Constraint was never the Strategy
Let’s be precise about what that advice was actually doing, because the reason it worked matters.
Meaningful personalization required human effort—researchers who found the relevant detail on a prospect, writers who crafted a message for a specific pain, SDRs who spent 40 minutes per contact making the outreach feel considered rather than templated. A five-person team running that work across 20 different segments simultaneously would collapse under the weight of it. So the strategic advice was: pick your best bet, go deep, don’t dilute.
The data validated it. Companies with tightly-defined ICPs spend 50% less on sales and marketing and see 24% shorter CAC payback. The advice worked because the constraint and the results were both real.
But here’s the thing — the constraint was doing all the work, not the principle. The goal was never narrowness for its own sake. It was finding the customers who genuinely want what you’re building and serving them well. ICP focus was the most resource-efficient path to that goal given what was operationally possible. When you couldn’t afford to personalize at scale, picking one segment and going deep was the right call.
Now? The operational ceiling changed. So the advice needs to change.
GTM as a “Yes, and” machine
There’s a principle in improv called “Yes, and.” When your scene partner makes a move, you don’t block it—you accept what they’ve given you and build on it. That’s how scenes go somewhere. The moment someone says “No” or “but,” the scene dies.
Traditional GTM is the opposite. It runs on “No, but.” The ICP is a filter, and the filter does most of the work before any real conversation starts. Wrong industry, wrong company size, wrong title—the answer is no, before the prospect ever sees a message tailored to their situation. The filter was necessary because building something relevant for the people it filtered out would have cost more than it was worth.
When the cost of personalization approaches zero, the filter becomes the liability.
Every visitor can get a landing page built for their industry, their role, their specific framing of the problem. Every keyword gets an ad written for the intent behind it. Every outbound prospect gets a sequence that speaks to what their company is actually working through right now. The filter disappears because the reason for having it disappears.
The conventional wisdom was “you can’t boil the ocean—pick your cup.” The real constraint was that heating any cup took expensive human effort, so you could only afford to heat a few.
But now you can heat every cup precisely, simultaneously, for nearly nothing—and when you do, you find customers that the company still running the old filter never knew existed.
Can you hear me now?
Introducing the Frequency, a new framework for modern GTM.
The ICP was always a declaration. You sit in a room, look at your first handful of customers, and announce who your ideal customer is. Then you build a filter from that declaration—the profile drives the content, the content drives the channel—three sequential stages, each derived from the last. Months of work optimized around one hypothesis before the market responds.
What AI makes possible is a different atomic unit entirely: the GTM Frequency.
A Frequency is defined by three elements, collapsed into one thing:
WHO: the individual it’s tuned for: their role, their company context, their individual signals.
WHAT: the value proposition, message, or call to action carried to that person.
HOW: the channel it travels through: outbound email, paid ad, landing page, LinkedIn, or an onboarding flow.
In the old model, these were separate decisions made in sequence. The profile (who) determined the content (what), which determined the channel (how). Three stages, each dependent on the last. But when the cost of each stage drops to nearly zero, the sequence stops being necessary.
A Frequency collapses them into a single horizontal element—one testable thing that either resonates or doesn’t.
The permutation space this opens is enormous.
10 WHO × 10 WHAT × 10 HOW = 1,000 distinct Frequencies
With AI creating and executing them simultaneously, you can test all 1,000 in the time it used to take to run one campaign for one declared ICP. And you don’t declare which ones will work. You can just… find out.
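As a toy illustration of the arithmetic (the dimension names below are placeholders, not a recommended taxonomy), the Frequency space is just a cross product:

```python
from itertools import product

# Hypothetical example dimensions -- a real portfolio would be far richer.
WHO = [f"who_{i}" for i in range(10)]    # roles, company types, intent signals
WHAT = [f"what_{i}" for i in range(10)]  # value props, angles, calls to action
HOW = [f"how_{i}" for i in range(10)]    # email, paid, landing page, onboarding

# A Frequency is one (who, what, how) triple: a single testable unit.
frequencies = list(product(WHO, WHAT, HOW))
print(len(frequencies))  # 1000
```

Each triple, not each dimension on its own, is the thing that either resonates or doesn’t.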
You don’t declare a Frequency before testing. You generate thousands of them, broadcast simultaneously across every GTM touchpoint, and let the market tell you which ones hit.
The ones that work aren’t a “segment” you’ve identified or a “profile” you’ve declared. They’re just the frequencies that resonate, for the individuals they resonate with. The market did the ranking.
What does that change?
When we were building Tarka’s second product, an AI SDR, we ran a manual version of this without having the framework to name it. We had a product with real horizontal value — AI-powered professional relationship management — and instead of picking one ICP and building everything around it, we ran parallel framings simultaneously. Customer discovery teams, market researchers, product teams running continuous discovery, people building expert networks — each was a real use case, each needed completely different language for the same underlying capability.
We built all of them.
Custom landing pages, custom sequences, custom messaging for each framing. Individuals responded differently to each one. The language that worked for a product manager had no resonance with a sales leader.
What came back wasn’t just conversion data — it was a map of which frequencies were hitting and which were broadcasting into silence. The market told us who our customers were, and how they thought about what we’d built. Then we built the product dynamically to match what the signal confirmed.
That was a manual process, run by people, tracked in spreadsheets.
What’s possible now is that system running automatically — generating and testing new frequencies continuously, retiring the ones that don’t resonate, surfacing the ones that do, and feeding that signal directly into product decisions. The gap between “declare an ICP” and “run a Frequency portfolio” is the gap between one upfront guess and a continuous market truth-telling system.
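One way to picture that retire-and-surface loop is a minimal allocation sketch. Everything here is illustrative: `observe_reply` stands in for real channel integrations, and the thresholds are invented for the example.

```python
import random

def run_portfolio(frequencies, observe_reply, rounds=1000,
                  retire_below=0.02, min_trials=30):
    """Toy discovery loop: broadcast, measure, retire non-resonant Frequencies.

    observe_reply(freq) stands in for a real channel integration and
    returns True when an individual responds to that Frequency.
    """
    stats = {f: {"sent": 0, "replies": 0} for f in frequencies}
    live = set(frequencies)
    for _ in range(rounds):
        if not live:
            break
        freq = random.choice(sorted(live))  # naive uniform exploration
        stats[freq]["sent"] += 1
        stats[freq]["replies"] += int(observe_reply(freq))
        s = stats[freq]
        # Retire once there is enough signal and the reply rate is below threshold.
        if s["sent"] >= min_trials and s["replies"] / s["sent"] < retire_below:
            live.discard(freq)
    # Surface survivors ranked by observed reply rate: the market did the ranking.
    return sorted(live, reverse=True,
                  key=lambda f: stats[f]["replies"] / max(stats[f]["sent"], 1))
```

A production system would allocate traffic with a proper bandit algorithm such as Thompson sampling rather than uniform exploration, but the shape is the same: the market’s responses, not an upfront declaration, decide which Frequencies stay live.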
Broadcasting at Scale
Most teams who discover AI personalization treat it as a single technique. They adopt Clay and start running personalized outbound. Or they implement Mutiny and start varying landing pages by segment. They pick one, measure it, and call that their AI GTM motion.
But that’s just one frequency.
The shift is broadcasting across the full spectrum simultaneously — every channel generating Frequencies in parallel, across every WHO hypothesis you have. The WHO varies by job title, company type, and intent signal. The WHAT varies by value proposition, message angle, and call to action. The HOW varies by channel — paid, outbound, landing page, onboarding. AI writes the variants in seconds and executes them across every touchpoint at once, one variant per individual.
The reason this compounds is that signal from every channel feeds the same discovery loop. A Frequency that converts in paid acquisition but falls flat in outbound tells you something specific about timing and intent. A landing page variant that holds attention but doesn’t convert tells you the framing is landing but the offer isn’t. When every touchpoint is instrumented and running simultaneously, you learn faster than any sequential experiment on a single declared ICP ever could.
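To make that diagnostic concrete, here is a sketch with invented conversion numbers: grouping the same WHO and WHAT pairing across channels exposes the paid-versus-outbound gap that signals timing and intent rather than a messaging failure.

```python
from collections import defaultdict

# Hypothetical results: conversion rate per Frequency, keyed (who, what, how).
results = {
    ("founder", "save_time", "paid"): 0.08,
    ("founder", "save_time", "outbound"): 0.01,
    ("founder", "cut_cost", "paid"): 0.02,
    ("founder", "cut_cost", "outbound"): 0.03,
}

# Group the same (who, what) pair across channels. A wide spread between
# channels for one pairing points at timing/intent, not the message itself.
by_pair = defaultdict(dict)
for (who, what, how), rate in results.items():
    by_pair[(who, what)][how] = rate

for pair, rates in by_pair.items():
    spread = max(rates.values()) - min(rates.values())
    if spread > 0.05:
        print(pair, "diverges across channels:", rates)
```

With these made-up numbers, only the ("founder", "save_time") pairing is flagged: the framing converts in paid but not outbound, which is a different lesson than a pairing that fails everywhere.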
The numbers from companies that have adopted parts of this are not marginal improvements on the old model.
LaunchDarkly built more than 100 account-specific microsites in 60 days and hit 150% of their quarterly pipeline goal.
Zapier built 50,000 programmatic landing pages and now drives 16 million organic visitors a month.
Teams running AI-personalized outbound at the right level of specificity are hitting 15–25% reply rates against an industry average below 5%. These results come from a categorically different operating model — one where the market is constantly responding to you, and you have the infrastructure to hear it at the individual level.
It doesn’t stop at growth
This Changes What “Product” Means
When your Frequency portfolio is running continuously — when the market is giving you real-time signal about which combinations of who, what, and how are converting — that signal doesn’t just tell you who to target. It tells you what the product should be for the individuals who respond. The GTM discovery loop and the product development loop start to merge.
Product-market fit stops being a milestone you declare after enough iteration and becomes something you’re reading and responding to continuously, at the individual level.
What this enables over time is building more like a platform than a single-use SaaS product.
SaaS says: declare your ICP, build the features they need, hold that surface stable enough to sell and support.
Platform says: build a flexible core, let your Frequency portfolio tell you how different individuals and markets want to use it, and let the product evolve in direct response to what you’re learning. You’re not building features for a profile you declared. You’re building in response to the frequencies that are actually resonating.
“Sell to everyone” used to describe an unfocused company that hadn’t done the hard work of finding product-market fit. We think it’s becoming a description of an AI-native company that has built the infrastructure to find fit continuously — across many markets, at the individual level — and serve each of them well.
⚠️ This doesn’t mean every company should attempt to serve every market immediately. If the individuals responding to different frequencies genuinely need different core products with no shared foundation, that’s multiple products or even multiple companies, not one platform.
But if you have a product with real horizontal value and you’ve been conservative about your ICP because personalization at scale seemed operationally impossible — the thing holding you back isn’t your product. It’s an assumption about what’s feasible that AI has already made obsolete.
So what do you do, today?
Get started.
Your competitors with five-person GTM teams might already be running hundreds of Frequencies.
Custom experience for every visitor type. Custom message for every meaningful keyword combination. Custom sequence for every individual’s context and signals. And when certain Frequencies start resonating — when the market starts responding — they’re already building the product toward what those individuals actually need, not what a declared ICP would have predicted.
The ICP declaration made sense when personalization was expensive. When running one GTM motion well required the full capacity of a small team. The mental model it created — pick one customer type, declare them ideal, filter everything else out — was a rational response to a real constraint.
The constraint is gone. The mental model is the last thing holding most teams back. Prove to your investors and stakeholders that this is the future, or be left explaining why you lost.
We don’t fully know yet what the ceiling is for companies that get this right — that build the infrastructure to generate, test, and refine thousands of Frequencies continuously, with the product loop feeding back into the GTM loop feeding back into the product. We think it’s significantly higher than what the ICP model could ever reach.
Let’s find out.
Tarka builds AI-first infrastructure for GTM teams — including the discovery loops that make Frequency-driven GTM possible. See our playbook and more at tarka.ai.



