Analytics
Track your AI agent's performance with conversation metrics, user satisfaction data, top questions analysis, and source performance insights.
Analytics Dashboard
The Analytics dashboard is your command center for understanding how your AI agent performs in the real world. It transforms raw conversation data into actionable insights, helping you identify what is working, what is not, and where to focus your optimization efforts. Effective use of analytics is the difference between an agent that stagnates and one that continuously improves.
Every conversation your agent handles generates data. The Analytics dashboard aggregates that data into metrics, trends, and visualizations that inform decisions about training data, system prompts, model selection, and deployment strategy.
Dashboard Overview and Navigation
Access Analytics by selecting your agent from the dashboard and clicking the Analytics tab. The dashboard is organized into several sections, each focusing on a different aspect of agent performance.
| Section | What It Shows |
|---|---|
| Overview | High-level summary cards showing total conversations, messages, satisfaction score, and resolution rate for the selected period. |
| Conversation Metrics | Charts and tables tracking conversation volume, trends, peak hours, and duration. |
| Message Metrics | Detailed message-level data including totals, averages, and response times. |
| User Satisfaction | Feedback scores from thumbs up/down ratings and satisfaction trends over time. |
| Top Questions | The most frequently asked questions ranked by volume, with satisfaction data for each. |
| Source Performance | Which training sources are cited most often and which have gaps. |
The date picker at the top of every section allows you to filter all data by time period. All charts and tables update dynamically when the date range changes.
Conversation Metrics
Conversation metrics give you a macro view of how your agent is being used.
Volume and Trends
The conversation volume chart shows the number of unique chat sessions initiated over time. Use this to identify growth trends, seasonal patterns, and the impact of marketing campaigns or product launches on support volume.
A rising conversation count with stable or improving satisfaction scores indicates healthy growth. A rising count with declining satisfaction may signal that your agent's training data has not kept pace with new user questions.
Peak Hours
The peak hours heatmap shows when your agent receives the most traffic, broken down by day of the week and hour of the day. This data is valuable for:
- Staffing decisions -- knowing when to have human agents available for escalations.
- Maintenance scheduling -- planning source updates or configuration changes during low-traffic windows.
- Marketing alignment -- understanding how campaigns drive support volume.
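The heatmap above is just an aggregation of conversation start times into (weekday, hour) buckets. A minimal sketch of that bucketing, using hypothetical timestamps (the real dashboard computes this for you):

```python
from collections import Counter
from datetime import datetime

# Hypothetical conversation start timestamps (ISO 8601), for illustration only.
starts = [
    "2024-05-06T09:15:00",  # a Monday morning
    "2024-05-06T09:40:00",  # same Monday hour
    "2024-05-07T14:05:00",  # a Tuesday afternoon
]

# Bucket each conversation by (weekday, hour) -- the two axes of the heatmap.
heatmap = Counter()
for ts in starts:
    dt = datetime.fromisoformat(ts)
    heatmap[(dt.strftime("%A"), dt.hour)] += 1

print(heatmap[("Monday", 9)])  # 2 conversations in the Monday 9:00 bucket
```

The same shape of aggregation works on data pulled from the CSV export or the API if you want to rebuild the heatmap in your own tooling.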
Average Conversation Duration
This metric tracks how long conversations typically last. Short conversations (1-3 messages) that end with positive feedback suggest the agent is resolving questions efficiently. Long conversations may indicate the agent is struggling to understand the user's intent or the question requires multiple clarifications.
Resolution Rate
The resolution rate measures the percentage of conversations that are resolved by the agent without requiring human escalation. This is one of the most important metrics for measuring agent effectiveness.
| Resolution Rate | Interpretation | Recommended Action |
|---|---|---|
| 90%+ | Excellent. Your agent handles the vast majority of queries independently. | Monitor and maintain. Focus on edge cases. |
| 70-89% | Good. There is room for improvement in specific topic areas. | Review escalated conversations to identify patterns. |
| 50-69% | Needs work. Significant gaps in training data or instructions. | Audit top questions, add sources, refine system prompt. |
| Below 50% | Critical. The agent is not providing sufficient value. | Major review of sources, instructions, and model settings required. |
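As a sketch, the resolution rate is simply resolved conversations over total conversations, and the interpretation tiers in the table can be expressed directly (the numbers below are made up for illustration):

```python
def resolution_rate(resolved: int, total: int) -> float:
    """Percentage of conversations resolved without human escalation."""
    return 100 * resolved / total if total else 0.0

def interpretation(rate: float) -> str:
    # Tiers mirror the table above.
    if rate >= 90:
        return "Excellent"
    if rate >= 70:
        return "Good"
    if rate >= 50:
        return "Needs work"
    return "Critical"

rate = resolution_rate(resolved=612, total=800)
print(rate, interpretation(rate))  # 76.5 Good
```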
Message-Level Metrics
While conversation metrics show volume, message-level metrics reveal depth and efficiency.
| Metric | Description | What Good Looks Like |
|---|---|---|
| Total Messages | Total number of messages exchanged (user + agent) across all conversations. | Tracks overall engagement volume. |
| Avg. Messages per Conversation | The average number of back-and-forth exchanges in a single session. | 2-4 messages for FAQ-style agents; 5-8 for complex support. |
| Avg. Response Time | How quickly the agent generates a response after receiving a user message. | Under 3 seconds for standard queries. |
| First Response Time | Time from conversation start to the agent's first reply. | Under 2 seconds. Users expect instant acknowledgment. |
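If you pull raw conversation data out via export or API, these four metrics fall out of a simple aggregation. A sketch over hypothetical records (the field names here are illustrative, not the export schema):

```python
from statistics import mean

# Hypothetical per-conversation records: message count and agent response
# latencies in seconds, in order. Field names are assumptions for this sketch.
conversations = [
    {"messages": 4, "response_times": [1.2, 1.8]},
    {"messages": 6, "response_times": [2.1, 1.5, 2.4]},
    {"messages": 2, "response_times": [0.9]},
]

total_messages = sum(c["messages"] for c in conversations)
avg_per_conversation = total_messages / len(conversations)
avg_response = mean(t for c in conversations for t in c["response_times"])
# First response time: the first agent latency in each conversation.
first_response = mean(c["response_times"][0] for c in conversations)

print(total_messages, avg_per_conversation)  # 12 4.0
```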
User Satisfaction Tracking
If your agent has feedback collection enabled (thumbs up/down buttons on responses), satisfaction data appears in this section.
Satisfaction Score
The satisfaction score is calculated as the percentage of positive ratings out of all rated responses. Not every user will rate every response, so this metric reflects the sentiment of users who actively engage with the feedback mechanism.
Satisfaction Trend
The trend chart shows how satisfaction changes over time. Look for:
- Drops after source changes -- a new source may have introduced inaccurate or conflicting information.
- Improvements after prompt refinements -- validates that your changes are having the desired effect.
- Consistent low scores on specific topics -- signals that certain areas of your knowledge base need attention.
Negative Feedback Review
Click on any negatively rated response to see the full conversation context. This is the fastest path to identifying and fixing quality issues. For each negative rating, ask:
- Was the response factually incorrect?
- Was the response incomplete?
- Was the tone inappropriate?
- Did the agent misunderstand the question?
The answer determines whether you need to fix a source, add a Q&A pair, adjust the system prompt, or refine your instructions.
Top Questions Analysis
The Top Questions section ranks the most frequently asked questions across all conversations. This is one of the most actionable sections of the analytics dashboard.
Identifying Patterns
Cluster similar questions together to understand themes. If you see "How do I cancel?", "Cancel my account," and "Where is the cancellation button?" appearing separately, they represent the same underlying intent. Ensure your agent handles all variations of high-volume questions with consistent, high-quality answers.
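A production pipeline would cluster questions with embeddings or intent classification, but even a toy keyword rule shows the idea: map surface variants onto one intent bucket before counting. Everything below (the intents, the matching rule, the sample questions) is illustrative:

```python
from collections import defaultdict

def intent(question: str) -> str:
    # Toy substring rule: "cancel" also matches "cancellation".
    # Real systems would use embeddings or a trained classifier.
    q = question.lower()
    if "cancel" in q:
        return "cancellation"
    if "refund" in q:
        return "refunds"
    return "other"

clusters = defaultdict(list)
for q in ["How do I cancel?", "Cancel my account",
          "Where is the cancellation button?", "Can I get a refund?"]:
    clusters[intent(q)].append(q)

print(len(clusters["cancellation"]))  # 3 variants of one intent
```

Once variants are grouped, the volume and satisfaction numbers for the whole intent are far more meaningful than the per-phrasing counts.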
Knowledge Gaps
Questions with low satisfaction scores or high escalation rates represent knowledge gaps. These are areas where your training data is insufficient, outdated, or conflicting. Prioritize adding or updating sources for the top 10-20 questions with the lowest satisfaction.
Trending Topics
Compare top questions across different time periods to identify trending topics. A sudden spike in questions about a specific feature may indicate a bug, a confusing UI change, or a successful marketing campaign driving interest.
Source Performance
Source performance metrics show which training sources are contributing most to your agent's responses and which are underperforming.
| Metric | Description |
|---|---|
| Citation Frequency | How often each source is referenced in agent responses. High-citation sources are the backbone of your agent's knowledge. |
| Satisfaction by Source | Average satisfaction score for responses that cite a specific source. Low scores indicate the source content may be inaccurate or poorly structured. |
| Unused Sources | Sources that have never been cited in a response. These may be irrelevant, poorly formatted, or covering topics users never ask about. |
| Gap Indicators | Topics where the agent falls back to generic responses because no source adequately covers the question. |
Use this data to prioritize source maintenance. High-citation, low-satisfaction sources need immediate attention. Unused sources should be reviewed and either improved or removed to reduce noise in the training data.
Filtering and Segmentation
All analytics data can be filtered and segmented to focus your analysis.
Date Range Filters
- Last 7 days -- for monitoring recent changes and their impact.
- Last 30 days -- for monthly performance reviews.
- Last 90 days -- for quarterly trend analysis.
- Custom range -- for specific campaigns, launches, or A/B test periods.
Additional Filters
- Source -- Filter conversations by which training source was cited, useful for evaluating the impact of a specific source addition.
- Channel -- Filter by deployment channel (widget, iframe, API, shareable link) to compare performance across touchpoints.
- Satisfaction -- Filter to show only positively or negatively rated conversations.
Exporting Data
Analytics data can be exported for further analysis in external tools like spreadsheets, BI platforms, or custom dashboards.
CSV Export
Click the Export button in the top-right corner of any analytics section to download the current view as a CSV file. The export respects your current filters and date range.
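The exported CSV can be loaded with any standard CSV reader. A sketch using Python's stdlib, with an invented two-row snippet (the actual column names in the export may differ):

```python
import csv
import io

# Hypothetical export snippet -- column names are illustrative,
# not the exact schema of the downloaded file.
export = """date,conversations,satisfaction
2024-05-01,120,84.2
2024-05-02,133,81.7
"""

rows = list(csv.DictReader(io.StringIO(export)))
total = sum(int(r["conversations"]) for r in rows)
print(total)  # 253 conversations across the exported period
```

For a real file, replace `io.StringIO(export)` with `open("analytics.csv", newline="")`.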
API Access
For programmatic access to analytics data, use the Analytics API endpoints:
```
curl -X GET "https://api.chatsby.co/v1/agents/YOUR_AGENT_ID/analytics?period=30d" \
  -H "Authorization: Bearer YOUR_API_KEY"
```

The API returns JSON with all metrics, allowing you to build custom dashboards or pipe data into tools like Grafana, Metabase, or Looker.
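The same call can be built in Python with the stdlib. This sketch only constructs the request (the endpoint and header come from the curl example; the response schema is not documented here, so parsing is left out):

```python
import urllib.request

def analytics_request(agent_id: str, api_key: str, period: str = "30d"):
    """Build the GET request for the analytics endpoint shown above."""
    url = f"https://api.chatsby.co/v1/agents/{agent_id}/analytics?period={period}"
    return urllib.request.Request(
        url,
        headers={"Authorization": f"Bearer {api_key}"},
        method="GET",
    )

req = analytics_request("YOUR_AGENT_ID", "YOUR_API_KEY")
print(req.full_url)
```

To execute it, pass `req` to `urllib.request.urlopen` and decode the JSON body with `json.load`.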
Setting Up Alerts and Thresholds
Configure alerts to be notified when key metrics cross thresholds you define.
| Alert Type | Example Threshold | Use Case |
|---|---|---|
| Satisfaction Drop | Score falls below 70% | Early warning that something has degraded. |
| Volume Spike | Conversations exceed 2x daily average | Detects unusual traffic, possibly from a viral post or an outage. |
| Escalation Rate | Escalations exceed 30% of conversations | Signals the agent is struggling with current question volume or topics. |
| Response Time | Average response time exceeds 5 seconds | May indicate model performance issues or rate limiting. |
Alerts can be delivered via email or webhook. Configure them in Settings > Notifications.
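If you pull metrics via the API into your own monitoring, the thresholds from the table reduce to a few comparisons. A sketch, where the metric snapshot and its field names are hypothetical:

```python
def triggered_alerts(metrics: dict) -> list[str]:
    """Evaluate the example thresholds from the table above."""
    alerts = []
    if metrics["satisfaction"] < 70:
        alerts.append("Satisfaction Drop")
    if metrics["conversations"] > 2 * metrics["daily_average"]:
        alerts.append("Volume Spike")
    if metrics["escalations"] / metrics["conversations"] > 0.30:
        alerts.append("Escalation Rate")
    if metrics["avg_response_time"] > 5:
        alerts.append("Response Time")
    return alerts

# Hypothetical snapshot: low satisfaction plus unusual traffic.
sample = {
    "satisfaction": 64,
    "conversations": 210,
    "daily_average": 90,
    "escalations": 40,
    "avg_response_time": 2.8,
}
print(triggered_alerts(sample))  # ['Satisfaction Drop', 'Volume Spike']
```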
Using Analytics to Improve Agent Performance
Analytics is only valuable if it drives action. Here is a concrete workflow for using analytics data to improve your agent.
Review Top Questions Weekly
Every week, check the top 20 questions. Identify any new questions that appeared and any existing questions where satisfaction dropped. These are your immediate action items.
Audit Low-Satisfaction Responses
Filter conversations by negative feedback. Read through the full conversation context for each. Categorize issues as source gaps, tone problems, or instruction failures.
Update Sources and Instructions
For source gaps, add new training data (Q&A pairs for precision, text or documents for broader coverage). For tone issues, refine the system prompt. For instruction failures, add specific behavioral rules.
Test Changes in the Playground
Before deploying changes, test the exact questions that triggered negative feedback in the Playground. Verify the responses now meet your quality bar.
Monitor the Impact
After deploying changes, monitor the satisfaction score and resolution rate for the following 7 days. Confirm that the improvements hold and no new issues were introduced.
Best Practices for Analytics Reviews
- Establish a weekly review cadence. Set aside 30 minutes each week to review analytics. Consistency is more important than depth -- small, frequent improvements compound over time.
- Track metrics over time, not just snapshots. A 75% satisfaction score is meaningless without context. Is it trending up or down? Did it change after a recent source update?
- Focus on the lowest-performing areas first. Improving a topic from 40% to 70% satisfaction has more impact than improving one from 85% to 90%.
- Share analytics with your team. Customer-facing teams can provide context that raw data cannot. A sudden spike in questions about billing might correlate with a pricing change they know about.
- Use analytics to justify investment. Track resolution rate and conversation volume to quantify the value your agent provides. These metrics translate directly into support costs avoided and customer satisfaction improved.
- Set quarterly goals. Define targets for resolution rate, satisfaction score, and response time. Review progress each quarter and adjust your optimization strategy accordingly.