AI metrics
Two AI-related KPIs live on /analytics: AI deflection and AI savings. They sit alongside the rest of the headline tiles and respect the same 7d / 14d / 30d / 90d period selector.
This page explains what each one counts, what a healthy number looks like, and how to read them together.
AI deflection
The share of period conversations the AI auto-resolved without any human reply.
A conversation counts as "deflected" if:
- It closed during the period.
- The AI sent at least one reply.
- No human ever replied.
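The three criteria above can be sketched as a predicate over a conversation record. This is a minimal illustration, not the product's actual data model: the `Conversation` fields and the choice of conversations closed in the period as the denominator are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Conversation:
    # Hypothetical fields for illustration; the real schema will differ.
    closed_in_period: bool
    ai_reply_count: int
    human_reply_count: int

def is_deflected(c: Conversation) -> bool:
    # All three criteria must hold: closed in the period,
    # at least one AI reply, and zero human replies.
    return c.closed_in_period and c.ai_reply_count >= 1 and c.human_reply_count == 0

def deflection_rate(conversations: list[Conversation]) -> float:
    # Denominator assumed to be conversations closed during the period.
    closed = [c for c in conversations if c.closed_in_period]
    if not closed:
        return 0.0
    return sum(is_deflected(c) for c in closed) / len(closed)
```

Note that a conversation where the AI replied but a human later stepped in fails the third criterion, so handoffs never count as deflections.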
The aiAutoSent count next to the percentage is the absolute number of auto-resolved conversations. A 40% deflection on 5,000 conversations is a different story from 40% on 50.
Healthy deflection rates depend on your product. Self-serve SaaS can hit 40-60%; complex B2B sits around 15-25%. Track the trend, not the absolute number.
AI savings
The dollar value of the auto-resolved conversations, computed against your provider key. The number is the cost the team would have incurred handling those conversations manually, minus the AI's actual usage cost.
This is the "did the AI pay for itself this period" tile. For most teams, the answer is "yes, by a wide margin," but the number is honest — if your AI cost spikes one month, savings drops accordingly.
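The arithmetic behind the tile reduces to one line. A minimal sketch, assuming a flat per-conversation cost of manual handling; the parameter names are illustrative, not the product's actual internals:

```python
def ai_savings(deflected_count: int,
               cost_per_manual_resolution: float,
               ai_usage_cost: float) -> float:
    # Value of the conversations the AI resolved,
    # net of what the AI itself cost on your BYOK key.
    return deflected_count * cost_per_manual_resolution - ai_usage_cost
```

So 100 deflected conversations at an assumed $5 manual cost, against $120 of AI usage, nets $380 of savings; if usage cost spikes, the tile drops by exactly that amount.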
How to read them together
The AI tiles tell you two different things:
- Deflection is operational: how many conversations did the AI handle on its own?
- Savings is financial: what did that save you, net of cost?
A high deflection with low savings means the AI is handling a lot of cheap conversations. A lower deflection with high savings means the AI is taking on the expensive ones (long threads, multi-touch tickets) where the value-per-deflection is bigger.
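One way to make the contrast concrete is value per deflection, a derived ratio the page does not show but which you can compute yourself from the two tiles:

```python
def value_per_deflection(net_savings: float, deflected_count: int) -> float:
    # Average net dollar value each auto-resolved conversation delivered.
    if deflected_count == 0:
        return 0.0
    return net_savings / deflected_count
```

For example, $500 of savings across 250 cheap conversations is $2 per deflection, while $900 across 60 expensive threads is $15 per deflection: the second AI is deflecting less but saving more per ticket.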
When the numbers look wrong
A few common reasons:
- A volume spike across the period. More conversations means more opportunities to deflect, but the AI may not scale linearly.
- A confidence-threshold change. If you tightened or loosened the AI's auto-send threshold, deflection moves with it.
- A model change. Switching models changes per-reply cost, which moves savings.
Auditing what the AI handled
Filter the inbox to AI-resolved conversations to see exactly which threads contributed to the deflection number. Spot-check them: if the AI is closing things customers would actually rate as bad, the percentage is hollow. Cross-reference with CSAT — AI-resolved conversations that earn 4-5 stars are real deflection; ones earning 1-2 are technically closed but not actually resolved.
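If you export AI-resolved conversations (the export shape here is hypothetical), the spot-check split is straightforward. Conversations rated 3, or not rated at all, are left out of both buckets in this sketch:

```python
def split_by_csat(ai_resolved: list[dict]) -> tuple[list[dict], list[dict]]:
    # "Real" deflection earned a 4-5 star rating;
    # "hollow" deflection was closed but rated 1-2.
    real = [c for c in ai_resolved if c.get("rating") in (4, 5)]
    hollow = [c for c in ai_resolved if c.get("rating") in (1, 2)]
    return real, hollow
```

A large hollow bucket means the deflection percentage is overstating what the AI actually resolved.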
What we don't ship today
A few things teams sometimes ask about that aren't in the product:
- No per-model cost breakdown on the analytics page.
- No spend-cap UI.
- No auto-clustering of "top topics" by frequency.
- No confidence-histogram view.
If those matter to you, tell us.
Period selector
Same 7d / 14d / 30d / 90d as everywhere else. Both tiles recompute when the period changes.