Elvin Analytics

Elvin Analytics provides administrators with two ways to analyze data from Elvin Copilot interactions. These analytics tools help you understand user behavior, improve response quality, and drive continuous improvement of your AI assistant.

Both analytics sections are accessible through: Elvin Copilot → Learn

1. Conversations

Location: Elvin Copilot → Learn → Conversations

The Conversations Analytics section provides detailed monitoring of how users interact with Elvin Copilot. It allows you to review conversation volumes, measure resolution rates, check user satisfaction, and analyze specific conversations. This helps you verify that Elvin is functioning as expected and identify areas where improvements may be required.

Time Range Selection

By default, the analytics dashboard displays data for the last 7 days, offering a quick way to view the most recent usage patterns.

The time range can be changed using the selector in the top-right corner of the screen:

  • Predefined ranges: 30, 60, or 90 days
  • Custom range: select a specific start and end date to focus on a particular period

Selecting a shorter range (for example, 7 or 30 days) is useful when you want to check the effect of recent changes or updates. Longer ranges (such as 60 or 90 days) are better when you want to identify patterns over time or compare activity across multiple months.

Key Metrics

The metrics panel provides three essential performance indicators:

Total Conversations: The number of conversations initiated during the selected time range. This counts conversations as units, not the number of individual messages exchanged within them.

Chatting Users: The total number of distinct users who had at least one conversation with Elvin Copilot in the selected time range. Multiple conversations from the same user are counted once.

Resolved Conversations: The number of conversations that ended with a resolved status. This is shown both as an absolute number and as a percentage of total conversations.
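
To make these definitions concrete, here is a minimal sketch of how the three metrics could be derived from a list of conversation records. The record shape and field names are hypothetical; the dashboard computes these values for you automatically.

```python
from datetime import date

# Hypothetical conversation records; the dashboard derives these metrics
# from Elvin Copilot's stored conversation data automatically.
conversations = [
    {"id": 1, "user": "alice", "date": date(2024, 5, 2), "status": "resolved"},
    {"id": 2, "user": "alice", "date": date(2024, 5, 3), "status": "escalated"},
    {"id": 3, "user": "bob", "date": date(2024, 5, 3), "status": "resolved"},
    {"id": 4, "user": "carol", "date": date(2024, 5, 4), "status": "pending"},
]

# Total Conversations: each conversation counts once, regardless of how many
# messages were exchanged within it.
total_conversations = len(conversations)

# Chatting Users: distinct users, so several conversations by one user count once.
chatting_users = len({c["user"] for c in conversations})

# Resolved Conversations: shown as an absolute number and as a share of the total.
resolved = sum(1 for c in conversations if c["status"] == "resolved")
resolved_pct = 100 * resolved / total_conversations if total_conversations else 0

print(total_conversations, chatting_users, resolved, f"{resolved_pct:.0f}%")
# Output: 4 3 2 50%
```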

Data Visualization

This section displays graphs that represent conversation activity and user interactions over time.

You can choose to display either total conversations or chatting users. The Show Values option overlays the numerical values directly onto the graph. Data can be grouped by different levels of granularity: daily, weekly, monthly, quarterly, or yearly.

This makes it possible to view both short-term fluctuations and long-term patterns depending on the selected range and grouping.
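
As an illustration of what the grouping levels mean, the sketch below buckets hypothetical conversation dates by day, week, and month; the graph applies the same kind of aggregation automatically for whichever granularity you select.

```python
from collections import Counter
from datetime import date

# Hypothetical conversation start dates; the dashboard aggregates these for you.
conversation_dates = [
    date(2024, 5, 2), date(2024, 5, 3), date(2024, 5, 10),
    date(2024, 6, 1), date(2024, 6, 15),
]

# Daily grouping: one bucket per calendar day.
by_day = Counter(d.isoformat() for d in conversation_dates)

# Weekly grouping: one bucket per ISO year and week number.
by_week = Counter(
    f"{d.isocalendar()[0]}-W{d.isocalendar()[1]:02d}" for d in conversation_dates
)

# Monthly grouping: one bucket per calendar month.
by_month = Counter(d.strftime("%Y-%m") for d in conversation_dates)

print(by_month)  # Counter({'2024-05': 3, '2024-06': 2})
```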

Filtering Options

By default, all conversations are shown. To focus on specific cases, filters can be applied:

Upvotes / Downvotes

  • Upvote: the user indicated the response was useful
  • Downvote: the user indicated the response was not useful

Filtering by votes makes it easier to identify successful responses or to isolate problem areas.

Status Categories

Every conversation is automatically assigned a status:

  • Resolved: Multi-message conversations are considered resolved if the last few messages show neutral or positive sentiment, or if the user does not downvote, does not request a human agent, and leaves the conversation inactive for at least 24 hours. Single-message conversations are considered resolved if there is no downvote and no human support request within 24 hours of Elvin's reply.
  • Escalated: The user explicitly requested a human agent. Escalations indicate that Elvin did not provide a sufficient answer.
  • Pending Evaluation: The conversation has ended, but the user has not yet given feedback (upvote or downvote).

These filters allow you to focus on problem conversations, unresolved interactions, or cases where no feedback has been given.
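
For readers who prefer to see the resolution rules spelled out, the following sketch approximates the status assignment described above. The field names and the exact ordering of checks are assumptions made for illustration; the actual status is assigned automatically by Elvin Copilot.

```python
from datetime import datetime, timedelta

def classify_conversation(conv, now=None):
    """Approximate the status rules described above for a single conversation.

    `conv` is a hypothetical dict with fields such as message_count,
    last_sentiments, has_downvote, requested_human, and last_activity.
    The real status is assigned automatically by Elvin Copilot.
    """
    now = now or datetime.now()
    inactive_24h = now - conv["last_activity"] >= timedelta(hours=24)

    # Escalated: the user explicitly asked for a human agent.
    if conv["requested_human"]:
        return "escalated"

    if conv["message_count"] > 1:
        # Multi-message: resolved if the closing messages read neutral or positive,
        # or if there is no downvote, no human request, and 24 hours of inactivity.
        positive_ending = all(
            s in ("neutral", "positive") for s in conv["last_sentiments"]
        )
        if positive_ending or (not conv["has_downvote"] and inactive_24h):
            return "resolved"
    else:
        # Single-message: resolved if there is no downvote and no human support
        # request within 24 hours of Elvin's reply.
        if not conv["has_downvote"] and inactive_24h:
            return "resolved"

    # Otherwise the conversation has ended without feedback yet.
    return "pending evaluation"

# Example: a three-message conversation that ended positively 30 hours ago.
print(classify_conversation({
    "message_count": 3,
    "last_sentiments": ["neutral", "positive"],
    "has_downvote": False,
    "requested_human": False,
    "last_activity": datetime.now() - timedelta(hours=30),
}))  # resolved
```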

Conversation Table

Below the graphs, a detailed conversation table lists each recorded conversation individually. The table includes the following columns:

  • Conversation Title: the initial user query that started the conversation
  • Date: the date when the conversation occurred
  • Messages Count: the total number of messages exchanged in the conversation
  • Upvotes and Downvotes: the recorded user feedback on response quality
  • Status: resolved, escalated, or pending evaluation

Detailed Conversation Analysis

Clicking on a conversation entry opens two tabs with further information:

Feedback Tab: Displays any votes the user gave (upvote or downvote) along with the user's name, the exact time of the vote, and any written feedback they provided.

Conversation Tab: Shows the full transcript of the conversation, including both user messages and Elvin's responses, so the entire exchange can be reviewed in context and you can assess both the user experience and Elvin's performance.

You can also review the feedback left on individual messages. Each message may have its own rating, so one response might be upvoted while the next is downvoted.

Quality Control Tools

You can evaluate the quality of Elvin's answers by reviewing the sources that were used to generate them. By tracing responses back to their sources, you can better understand why certain answers were produced, improve the underlying source material, and reduce hallucinations.

If you find that your sources are correct, coherent, and clear, but Elvin has still generated a hallucinated response, you can report it directly using the Report hallucination button at the bottom of the response.

2. Outcomes (Early Access)

Location: Elvin Copilot → Learn → Outcomes

The Outcomes section transforms your conversation data into actionable insights through natural language queries. This feature allows administrators to gain strategic understanding of user interactions without manually reviewing individual conversations.

How Outcomes Work

Query-Based Analysis: Instead of manually sifting through conversation data, you can ask Elvin questions about your conversation analytics using natural language. The system analyzes all past conversations between end-users and both Elvin Copilot and the legacy Copilot to provide comprehensive insights.

Example Questions You Can Ask:

  • "What are the top 5 most discussed topics with Elvin?"
  • "What are the top 3 actions you would recommend based on past user chats?"
  • "Which products are mentioned most frequently in conversations?"
  • "What are the main pain points users are experiencing?"
  • "How has user satisfaction changed over the past month?"

Report Generation Process

Outcome Creation: When you submit a question, Elvin generates a human-readable report called an "Outcome." These reports are formatted in clear text and are typically generated within 20 seconds, making it easy to get quick insights into your conversation data.

Data Sources: All reports are based on comprehensive analysis of end-user conversations with both Elvin Copilot and the legacy Copilot, ensuring you get a complete picture of user interactions across your AI assistant ecosystem.

Report Management Features

Historical Access: The Outcomes section maintains a complete history of your past queries and generated reports. You can easily return to previous analytical outputs without needing to regenerate them, making it simple to track insights over time or reference earlier findings.

Organization Tools

  • View all past Outcome reports chronologically
  • Access the original questions that generated each report
  • Delete outdated or unnecessary reports to keep your workspace organized

Requirements and Limitations

Minimum Data Threshold: To generate meaningful Outcome reports, you need at least 10 end-user conversations in your system. This ensures there's sufficient data for Elvin to identify patterns and provide valuable insights.

Early Access Considerations

Since Outcomes is currently in the Early Access phase, please be aware that:

  • Some issues and bugs may still exist in the system
  • This feature is primarily intended for administrator testing and feedback
  • Functionality may not be fully stable in all scenarios
  • You should report any issues encountered during your testing

Best Practices for Early Access

  • Test the feature with various types of questions to understand its capabilities
  • Compare Outcomes insights with your manual observations for accuracy
  • Provide feedback on any inconsistencies or technical issues
  • Use results as guidance rather than definitive conclusions while the feature is being refined

Getting Started with Elvin Analytics

Step-by-Step Approach

  1. Begin with Conversations: Explore the Conversations section first to familiarize yourself with individual user interactions and overall conversation patterns.
  2. Apply Filters Strategically: Use the filtering tools to identify specific areas of interest, such as conversations with negative feedback or unresolved status.
  3. Monitor Key Metrics: Regularly track your resolution rates, user satisfaction trends, and conversation volumes to establish baseline performance.
  4. Report Quality Issues: When you encounter hallucinations or inaccurate responses, use the reporting feature to help improve Elvin's accuracy.
  5. Experiment with Outcomes: Once you have sufficient conversation data, test the Outcomes feature to gain broader strategic insights.

Recommended Workflow

Daily Monitoring

  • Check recent conversation metrics for any unusual patterns
  • Review conversations with downvotes or escalations
  • Monitor resolution rates and user satisfaction trends

Weekly Analysis

  • Use Outcomes to identify recurring topics or issues
  • Analyze conversation patterns over longer time periods
  • Review and address any reported hallucinations

Monthly Strategic Review

  • Generate comprehensive Outcomes reports on user needs and satisfaction
  • Compare performance across different time periods
  • Plan improvements based on identified patterns and user feedback

Best Practices for Effective Analytics

Conversation Analysis

  • Regular Review Schedule: Establish a consistent routine for reviewing conversation data to catch issues early
  • Focus on Problem Areas: Prioritize conversations with downvotes, escalations, or unresolved status
  • Source Verification: When reviewing responses, always check the sources Elvin used to understand the reasoning behind answers
  • Pattern Recognition: Look for recurring themes in both successful and unsuccessful interactions

Quality Improvement

  • Documentation: Keep records of common hallucination patterns to address root causes systematically
  • Source Management: Use conversation insights to identify gaps in your knowledge base and improve source materials
  • Feedback Integration: Regularly incorporate user feedback and conversation learnings into your AI assistant's training

Strategic Decision Making

  • Data-Driven Improvements: Use both granular conversation data and high-level Outcomes insights to guide enhancement priorities
  • User-Centric Focus: Let conversation patterns and user feedback drive decisions about feature updates and content improvements
  • Continuous Monitoring: Establish metrics benchmarks and track improvement over time using both analytics sections

This comprehensive approach to Elvin Analytics ensures you have both detailed conversation-level insights and strategic understanding of your AI assistant's performance, enabling continuous improvement and optimal user experience.
