by Scott Tewel
Head of Product
3/24/26
Key Takeaways
You already know that inactive customers are more likely to churn. Everyone knows that. The harder questions are: at what threshold does inactivity become a near-certain indicator? How much more likely is churn compared to healthy accounts? And is inactivity even the strongest signal, or does something like declining usage trajectory or depth of adoption matter more?
Those are the questions I could never get answered fast enough in previous roles. And I think that experience is pretty universal for Product and GTM leaders.
I spent years as a product leader navigating the same cycle most of you have probably lived through. You have a strategic question — something like, "what are the leading product usage indicators of churn for customers who've churned or downgraded in the last 12–24 months?" — and you know the answer could reshape how your team prioritizes their roadmap or book of business. So you submit a ticket to the BI team.
Then you wait... Days go by before an analyst picks it up. When they do, they often have shallow knowledge about the data in your area of the product. You’ve filled out a form describing your goal, but you end up repeating yourself in a kickoff meeting anyway, re-explaining what you're actually looking for and why it matters. Then the analyst goes off to explore: what data is available, what it means, and how to turn it into something useful.
If you're lucky, you get a first pass back in a week. And the painful reality is that the first pass is seldom ready for use. Maybe the analyst pulled the wrong time window, or the metric normalization doesn't account for customer size, or you realize the data needs to be sliced differently to be actionable. Each round of feedback adds days. The total cycle from question to an answer you'd actually act on routinely stretches into weeks.
What happens in practice is predictable: you stop asking the question. You rely on instinct instead, because the cost of rigor starts to feel like perfect becoming the enemy of good. You know product usage patterns matter for retention — you just can't prove it with enough specificity to set thresholds, build alerts, or hold teams accountable to leading indicators. The temptation to just go with your gut becomes rational, which is a problem when you're trying to run a data-informed organization.
We ran this exact analysis recently with a customer using Magnify's AI assistant. They wanted to understand the product usage signals most strongly correlated with churn across their install base. Within minutes, the research agent came back with eight findings like this:
Days since last visit.
Is the headline surprising? Not really. Inactivity is intuitively a churn signal. But the difference between "we think inactivity is bad" and "here's a 5x differential with a specific threshold you can operationalize" is the difference between a slide in a QBR deck and a trigger in an automated workflow. One is an observation; the other is something your team can act on this week.
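To make "trigger in an automated workflow" concrete, here's roughly what that check looks like against a usage rollup. This is an illustrative sketch, not the query the agent actually ran; the column names (account_id, last_visit_at) and the as-of date are stand-ins for whatever your own schema uses.

```python
import pandas as pd

# Illustrative only: account_id and last_visit_at are hypothetical columns,
# standing in for whatever your own usage rollup calls them.
visits = pd.DataFrame({
    "account_id": ["a1", "a2", "a3"],
    "last_visit_at": pd.to_datetime(["2026-01-05", "2026-03-20", "2025-12-01"]),
})

as_of = pd.Timestamp("2026-03-24")
visits["days_since_last_visit"] = (as_of - visits["last_visit_at"]).dt.days

# Flag accounts past the 60-day inactivity threshold discussed above.
at_risk = visits[visits["days_since_last_visit"] >= 60]
print(at_risk[["account_id", "days_since_last_visit"]])
```

In practice a check like this would run on a schedule and feed whatever alerting or workflow tooling your CS team already uses.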
In the same session, the AI assistant surfaced several other indicators with the same level of statistical grounding.
Pre-renewal usage trend (final 90 days vs. prior 90 days).
This one was striking. The assistant compared product engagement in the 90-day window immediately before renewal against the prior quarter, and the two populations weren't just different — they were moving in opposite directions. Churned accounts showed a median ~30% decline in usage heading into renewal. Renewed accounts showed a median ~40% increase. About 60% of churned accounts were trending down; only ~30% of renewed accounts were. And roughly 15% of churned accounts had dropped to zero usage in their final 90 days — they'd effectively left before the contract expired. A team watching this metric in real time could intervene while there's still a chance to change the outcome.
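For readers who want to reproduce this kind of comparison on their own data, here's a minimal sketch of the windowing logic, assuming a simple per-account event log. The table, dates, and renewal date are invented for illustration.

```python
import pandas as pd

# Hypothetical event log: one row per product event per account.
events = pd.DataFrame({
    "account_id": ["a1", "a1", "a1", "a2", "a2", "a2"],
    "event_at": pd.to_datetime([
        "2025-10-15", "2025-11-20", "2026-02-25",
        "2025-10-20", "2026-01-15", "2026-03-01",
    ]),
})
renewal_date = pd.Timestamp("2026-03-31")
final_start = renewal_date - pd.Timedelta(days=90)   # final 90 days before renewal
prior_start = renewal_date - pd.Timedelta(days=180)  # the 90 days before that

final = events[(events["event_at"] > final_start) & (events["event_at"] <= renewal_date)]
prior = events[(events["event_at"] > prior_start) & (events["event_at"] <= final_start)]

# Percent change in event volume, final window vs. prior window, per account.
trend = (
    (final.groupby("account_id").size() - prior.groupby("account_id").size())
    / prior.groupby("account_id").size() * 100
)
print(trend)
```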
Total page events and time-in-product, normalized per $1M ARR.
Raw usage numbers are misleading when you're comparing a $50K account to a $2M account, and the agent handled that automatically. Renewed accounts generated roughly 2.5x more page events per $1M ARR and spent about 2.5x more time in-product per dollar. One detail I found interesting: session depth (minutes per event) was nearly identical between the two populations. Churned accounts didn't engage less deeply when they showed up — they just showed up far less often. That's an important distinction if you're building health scores.
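Here's the shape of that normalization, again with invented numbers and column names. The point is simply dividing raw usage by ARR in millions so a $150K account and a $2M account land on the same scale.

```python
import pandas as pd

# Hypothetical per-account rollup: raw usage plus contract value.
accounts = pd.DataFrame({
    "account_id": ["a1", "a2"],
    "page_events": [12_000, 90_000],
    "minutes_in_product": [4_000, 22_000],
    "arr_usd": [150_000, 2_000_000],
})

arr_millions = accounts["arr_usd"] / 1_000_000

# Normalize usage per $1M ARR so small and large accounts are comparable.
accounts["events_per_1m_arr"] = accounts["page_events"] / arr_millions
accounts["minutes_per_1m_arr"] = accounts["minutes_in_product"] / arr_millions

# Session depth (minutes per event) needs no ARR normalization:
# it's a ratio of two usage measures, so account size largely cancels out.
accounts["minutes_per_event"] = accounts["minutes_in_product"] / accounts["page_events"]
print(accounts)
```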
Unique users per $1M ARR.
Accounts with fewer than 20 active users were roughly 3x more likely to churn. Narrow adoption concentrated in one or two power users is a high-risk pattern — when your product's value lives in one person's workflow, it leaves when they do.
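The same normalization applies to adoption breadth. A sketch, with hypothetical numbers, of how you'd compute users per $1M ARR and flag the narrow-adoption pattern:

```python
import pandas as pd

# Hypothetical per-account adoption rollup.
adoption = pd.DataFrame({
    "account_id": ["a1", "a2", "a3"],
    "active_users": [4, 35, 12],
    "arr_usd": [200_000, 1_500_000, 600_000],
})

adoption["users_per_1m_arr"] = adoption["active_users"] / (adoption["arr_usd"] / 1_000_000)

# Flag the narrow-adoption pattern described above: fewer than 20 active users.
adoption["narrow_adoption"] = adoption["active_users"] < 20
print(adoption)
```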
What makes this powerful for me as a product leader isn't any single finding. It's that the agent produced a multi-signal picture — with population comparisons, thresholds, and interpretive context for each — in the time it used to take me to write the BI ticket. And the analyst I was waiting weeks for would have been unlikely to think about how best to normalize usage metrics.
I get the skepticism. "An AI told me" isn't something you'd put in a strategic doc or deck. So it's worth understanding what's actually happening when you ask the research agent a question.
The agent isn't generating answers from a language model's training data. When you ask a question in natural language, it kicks off a structured research workflow. It determines which data sources are relevant — product usage logs, support tickets, CRM fields, engagement data — and formulates a plan. It generates and executes SQL queries against your actual data warehouse, pulling real numbers from your environment. Then it iterates: examining initial results, refining its approach, exploring adjacent dimensions, cross-referencing findings. It's doing the same exploratory work a good analyst does — understanding what data is available, figuring out what it means, testing different ways to cut it — just compressing weeks of that work into minutes.
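To be clear about what I mean by a structured research workflow, here's a conceptual sketch of that loop. It is not Magnify's implementation; it's just the plan, query, inspect, refine shape expressed in code, and every function name here is a placeholder I made up.

```python
# Conceptual sketch only -- not Magnify's actual implementation. Every callable
# passed in here is a made-up placeholder for a step described in the text.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Assessment:
    summary: str        # what this round of queries showed
    sufficient: bool    # is there enough evidence to answer the question?
    refined_plan: str   # if not, how to slice the data next

def research(question: str,
             plan: Callable[[str], str],        # pick relevant sources, outline the analysis
             to_sql: Callable[[str], str],      # turn the plan into a warehouse query
             run_query: Callable[[str], list],  # execute against your actual data
             assess: Callable[[str, str, list], Assessment],
             max_rounds: int = 5) -> list[str]:
    """Plan / query / inspect / refine loop, compressed into a few lines."""
    current_plan = plan(question)
    findings: list[str] = []
    for _ in range(max_rounds):
        rows = run_query(to_sql(current_plan))   # real numbers, not model recall
        result = assess(question, current_plan, rows)
        findings.append(result.summary)
        if result.sufficient:
            break
        current_plan = result.refined_plan       # explore adjacent dimensions, cross-check
    return findings
```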
The output isn't a raw data dump, either. The agent contextualizes findings with population comparisons, flags signal strength, suggests operational thresholds, and explains why a pattern matters. Every number traces back to a query against your data, and you can ask follow-up questions or push back on the approach in the same conversation.
This is the part that changed my perspective on the whole workflow. In the old model, iteration was the silent killer. You'd wait a week for an initial analysis, review it, realize you need it sliced by customer segment or normalized differently, send it back, and wait again. Another few days. Each individual round trip seemed reasonable; in aggregate, they added up to weeks. By the time you had something you'd act on, the business context had sometimes shifted.
With the AI assistant, you review the initial findings and say "break this down by customer tier" or "what does this look like over 24 months instead of 12," and you have an updated analysis in minutes. You can explore five variations of an analysis in a single sitting. That simply wasn't feasible before, and it changes how you operate — when the cost of a follow-up question drops to near zero, you stop settling for the first adequate answer. You pressure-test from multiple angles in real time, and you end up with analyses you're genuinely confident acting on.
The goal was never just faster analysis — it was faster action.
When you can identify that 60+ days of inactivity is a churn threshold in under an hour, your CS team can operationalize that signal the same week. When you can see that churned accounts were declining 30% heading into renewal while retained accounts were growing 40%, you can build a pre-renewal intervention trigger instead of hoping someone notices the trend in a QBR.
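If you want to see how those two signals combine into a single trigger, here's a rough sketch. The cutoffs and column names are illustrative, not prescriptions; your own thresholds should come from your own data.

```python
import pandas as pd

# Hypothetical health rollup combining the two leading indicators above.
health = pd.DataFrame({
    "account_id": ["a1", "a2", "a3"],
    "days_since_last_visit": [75, 12, 30],
    "usage_pct_change_90d": [-35.0, 42.0, -10.0],  # final 90 days vs. prior 90
})

# Fire an intervention when either indicator trips (illustrative cutoffs).
needs_intervention = (
    (health["days_since_last_visit"] >= 60)
    | (health["usage_pct_change_90d"] <= -30)
)
print(health[needs_intervention])
```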
The bottleneck in most organizations isn't a lack of ideas about what to measure. It's the cost and latency of proving those ideas with data. Remove that, and leaders stop managing by instinct — not because their instincts were wrong, but because they no longer have to choose between speed and evidence.
See how Magnify helps revenue and product teams uncover churn risk, identify expansion signals, and act on insights in minutes—not weeks.
Book a Demo