
CSAT trends

Average score over time, response rate, top positive and negative comments, and a low-score drill-down. The signal that says whether your support is actually working.

By Christopher · Updated · 3 min read


CSAT is the only metric on /analytics that comes from customers, not from the system. Treat it accordingly. The score is the headline; the comments are where the work is.

Where it lives

The CSAT card sits on /analytics alongside the volume KPIs and charts. It uses the same 7d / 14d / 30d / 90d period selector as the rest of the page.

What the card shows

  • Average score, on the 1-5 scale.
  • Response rate — responses divided by invites.
  • Promoter percentage — share of responses that scored 4 or 5.
  • Detractor percentage — share that scored 1 or 2.
  • Recent responses — the most recent CSAT ratings, color-coded (red / amber / green) so you can see at a glance whether the latest customers were happy.
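The metrics above are all simple arithmetic over the raw responses. A minimal sketch, assuming you have the 1-5 scores and the invite count (the function and field names are illustrative, not Ochre's schema):

```python
def csat_summary(scores, invites_sent):
    """Compute the CSAT card metrics.

    scores: list of 1-5 ratings received in the period.
    invites_sent: total survey invites sent in the period.
    """
    responses = len(scores)
    if responses == 0:
        return {"average": None, "response_rate": 0.0,
                "promoter_pct": 0.0, "detractor_pct": 0.0}
    return {
        # Average score: mean rating across all responses
        "average": round(sum(scores) / responses, 2),
        # Response rate: responses divided by invites
        "response_rate": responses / invites_sent if invites_sent else 0.0,
        # Promoters scored 4 or 5; detractors scored 1 or 2
        "promoter_pct": sum(s >= 4 for s in scores) / responses,
        "detractor_pct": sum(s <= 2 for s in scores) / responses,
    }

summary = csat_summary([5, 4, 2, 5, 1, 4], invites_sent=30)
```

With those six scores and thirty invites, `summary` comes back with a 3.5 average and a 20% response rate.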

There's no separate per-channel chart on the page. CSAT data is per-channel-aware in the database (each response records the channel it came back on), but the card surfaces the aggregate. To investigate per-channel trends, export the raw responses.
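Since each exported response records its channel, a per-channel trend is a short script away. A sketch, assuming the export is a CSV with `channel` and `score` columns (column names are an assumption about the export format, not documented):

```python
import csv
from collections import defaultdict

def per_channel_average(path):
    """Average CSAT per channel from an exported responses CSV.

    Assumes columns named "channel" and "score" (1-5).
    """
    totals = defaultdict(lambda: [0, 0])  # channel -> [score sum, count]
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            t = totals[row["channel"]]
            t[0] += int(row["score"])
            t[1] += 1
    return {ch: s / n for ch, (s, n) in totals.items()}
```

Run it per period and diff the results to see which channel is moving the aggregate.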

Reading the numbers

Average score

Mean rating across all responses in the period. Most teams sit between 4.4 and 4.8. A 4.6 with 200 responses is more meaningful than a 4.9 with 8.

The trend matters more than the absolute. A team holding steady at 4.5 is in better shape than a team that drifted down from 4.7 to 4.5.

Response rate

Healthy response rates are 15-25% for email, 40-60% for widget, and 30-60% for Slack Connect. Below 10% across all channels, the sample is too small to trust the average.

If response rate is dropping, suspect the survey before the support. See Survey deliverability.

Recent responses

The most recent ratings, color-coded. This is your fastest "is anything on fire today" read. A red in the strip is worth clicking into and reading the comment.

Reading low scores

Click any low-score response in the recent strip to jump to the conversation. Read the thread, read the comment. For each, you have three options:

  1. Reach back out. Most low-score customers are surprised when you do. Conversion to high-score on a follow-up is around 40%.
  2. Tag for review. Add a tag like csat-low so the conversation flows into your QA queue.
  3. File product feedback. If three customers in a week complained about the same thing, that's product, not support.

Per-agent breakdown

The agent leaderboard on the same page shows per-agent CSAT. Sort by score with the response count visible (a 4.6 with 50 responses beats a 4.9 with 5 responses).

A few guardrails:

  • Don't grade individuals on small samples.
  • Read the comments before drawing conclusions. Customers often blame the closest agent for upstream issues.
  • Use trends, not snapshots.

See Agent leaderboard.
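The small-sample guardrail can be applied mechanically when sorting exported per-agent numbers. A sketch, assuming you have each agent's average and response count (`MIN_RESPONSES` is an illustrative cutoff, not an Ochre setting):

```python
MIN_RESPONSES = 20  # illustrative threshold; tune for your volume

def rank_agents(stats):
    """Rank agents by CSAT, pushing small samples to the bottom.

    stats: {agent_name: (avg_score, response_count)}
    Returns a list of (agent_name, (avg_score, response_count)),
    best first; agents under MIN_RESPONSES sort below the rest
    regardless of score.
    """
    return sorted(
        stats.items(),
        # Primary key: has a trustworthy sample; secondary key: score
        key=lambda kv: (kv[1][1] >= MIN_RESPONSES, kv[1][0]),
        reverse=True,
    )

ranking = rank_agents({"ana": (4.6, 50), "bo": (4.9, 5)})
```

Here `ana` ranks first: her 4.6 on 50 responses beats `bo`'s 4.9 on 5, exactly as the guardrails suggest.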

What CSAT is not telling you

A few honest limits:

  • CSAT is biased toward responders. Happy and angry customers respond. Indifferent customers do not. Your true average is probably 0.1-0.2 lower than what you see.
  • CSAT lags. A bad week today shows up in scores next week, after surveys go out and responses trickle in. Pair it with the response medians on the same page for real-time signal.
  • One bad reply can sink a conversation. A 1-star score on a 12-message thread is rarely about all 12 messages. Read the comment, find the moment.

CSAT and the AI

Conversations the AI auto-resolved are part of the same CSAT pool. If AI-resolved conversations are pulling the average down, it's a signal that the AI is closing things customers are not actually happy with. Tighten the AI's confidence threshold or reduce its auto-send scope.
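To check whether AI-resolved conversations are dragging the pool, split the average by resolver. A sketch, assuming each exported response can be paired with who resolved the conversation (the `resolved_by` field is an assumption, not a documented export column):

```python
def split_csat(responses):
    """Average CSAT for AI-resolved vs human-resolved conversations.

    responses: list of (score, resolved_by) tuples,
    with resolved_by in {"ai", "human"}.
    """
    buckets = {"ai": [], "human": []}
    for score, resolved_by in responses:
        buckets[resolved_by].append(score)
    # None means that bucket had no responses in the period
    return {k: sum(v) / len(v) if v else None for k, v in buckets.items()}
```

A clearly lower `"ai"` average is the tightening signal: raise the confidence threshold or narrow the auto-send scope.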
