Chat
This report provides a comprehensive view of interactions and transfers. It allows for a detailed examination of interactions transferred to human agents, as well as the characteristics of those transfers, such as how many were transferred, through which transfer engine, which agents handled them, and session duration.
All charts in the report include an interactive legend that enables dynamic data exploration. Each legend item represents a data group. Clicking an item hides its associated values from the chart, helping users focus on other categories and improving interpretability.
Interaction Performance
This bar chart helps analyze the distribution and evolution of handled interactions over time, aiming to identify the balance between interactions resolved automatically by the bot and those requiring human intervention. This analysis supports evaluating the efficiency of the automated system, identifying demand patterns across time periods, and optimizing human and technical resource allocation to improve customer experience.
Data Interpretation
- X-Axis: Represents time, divided into specific intervals based on the selected reporting range. This helps observe how interactions vary over different periods.
- Y-Axis: Indicates the total number of interactions, making it easy to compare volumes across intervals.
Each bar represents the total number of interactions in a given time period, split into two color-coded segments to distinguish:
- Interactions transferred to agents: Cases that required human intervention.
- Automatically resolved interactions: Cases the bot resolved without human assistance.
The chart offers an integrated view of automation efficiency and human resource load.
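As a rough sketch of the aggregation behind this chart, the snippet below groups hypothetical interaction records (the field names `period` and `transferred` are illustrative, not the report's actual schema) into the two colored segments of each bar:

```python
from collections import Counter

# Hypothetical sample records: each interaction carries the time bucket it
# falls into and whether it was transferred to a human agent.
interactions = [
    {"period": "09:00", "transferred": False},
    {"period": "09:00", "transferred": True},
    {"period": "10:00", "transferred": False},
    {"period": "10:00", "transferred": False},
]

def stack_by_period(rows):
    """Count bot-resolved vs. agent-transferred interactions per interval,
    mirroring the two segments of each stacked bar."""
    totals = {}
    for row in rows:
        bucket = totals.setdefault(row["period"], Counter())
        bucket["agent" if row["transferred"] else "bot"] += 1
    return totals
```

The ratio of `bot` to `agent` counts per bucket is the automation-efficiency signal the section describes.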
Use Case
Description: A financial services company implements an automated support system with a chatbot for frequent queries and human agents for complex cases. The team wants to measure the bot’s effectiveness, optimize resources, and enhance customer experience.
Chart Use:
Assess chatbot efficiency:
- A high proportion of automatically resolved interactions indicates the bot is successfully handling frequent queries.
- If transfers to agents increase, it may suggest more complex customer needs or a need to adjust the bot.
Identify demand peaks: Time interval analysis reveals high-volume periods (e.g., during promotions or key dates), aiding resource planning.
Optimize customer experience: If demand peaks align with more transfers to agents, it may be necessary to improve bot flows or agent training.
Monitor improvements: Compare data before and after system adjustments to measure the effectiveness of implemented changes.
Session Lifetime
This chart analyzes metrics related to session duration and queue wait time (in seconds) for tenant interactions over time. It provides key insights to assess system or process performance, identify trends, and detect improvement areas.
Data Interpretation
- X-Axis: Represents time in days (date format), allowing for trend analysis over various periods.
- Y-Axis: Represents metric values in seconds, enabling comparison of duration or wait time over time.
Lines with data points:
- Each line represents a specific metric, such as:
- max queue: Maximum queue time.
- min queue: Minimum queue time.
- avg queue: Average queue time.
- max duration: Maximum session duration.
- min duration: Minimum session duration.
- avg duration: Average session duration.
- Data points highlight specific values for each day.
Shaded areas:
- Represent ranges between specific metrics (e.g., min to max values).
Legend:
- Clearly identifies each line and shaded area, associating colors with their corresponding metrics.
Insights that can be drawn from this chart:
General trends:
- Queue metrics may show, for example, a steady decrease in max and average values over time.
- Duration metrics may remain more stable, with relatively consistent average and minimum values.
Anomalies or spikes:
- Sudden peaks (e.g., an increase in max queue time) may indicate system issues.
Variability:
- Shaded areas reveal gaps between min and max values, indicating consistency—or lack thereof—in system performance.
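The six lines of this chart reduce to per-day summary statistics. A minimal sketch, assuming per-session records keyed by day (the `queue`/`duration` field names are illustrative):

```python
# Hypothetical per-session records in seconds, grouped by day, used to derive
# the chart's six series: min/avg/max for queue time and session duration.
sessions = {
    "2024-01-01": [{"queue": 12, "duration": 180}, {"queue": 30, "duration": 240}],
    "2024-01-02": [{"queue": 8, "duration": 200}],
}

def daily_metrics(by_day):
    out = {}
    for day, rows in by_day.items():
        queues = [r["queue"] for r in rows]
        durations = [r["duration"] for r in rows]
        out[day] = {
            "min queue": min(queues),
            "max queue": max(queues),
            "avg queue": sum(queues) / len(queues),
            "min duration": min(durations),
            "max duration": max(durations),
            "avg duration": sum(durations) / len(durations),
        }
    return out
```

The gap between each day's min and max is what the shaded areas visualize; a wide gap signals inconsistent performance.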
Use Case
Description: A customer service operations team uses this chart to monitor wait time and session duration metrics in an automated queue system.
Chart Analysis:
- Identify bottlenecks: Rising max queue times may signal system overloads, requiring capacity adjustments.
- Measure efficiency: A consistent drop in average and max queue times shows system improvement.
- Evaluate consistency: Comparing max, min, and average values reveals whether recent improvements are reducing wait time variability.
- Resource planning: Detected patterns help allocate staff and technical resources during high-demand periods.
Transfer Engine
This chart shows the distribution of sessions handled by agents according to the chat engine used. Its purpose is to identify which engines are most used and analyze their relevance in interactions requiring human attention.
Data Interpretation
The pie chart segments sessions handled by agents based on the chat engine used (e.g., Kyubo, Open Messaging, Pure Cloud). Each slice represents one engine and includes the total number of processed sessions, making usage comparisons easy.
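The pie segmentation is a simple share calculation. A sketch with made-up session counts (the engine names come from the example above; the numbers are hypothetical):

```python
def engine_shares(session_counts):
    """Convert raw session counts per chat engine into pie-slice percentages."""
    total = sum(session_counts.values())
    return {engine: round(100 * n / total, 1) for engine, n in session_counts.items()}

# Hypothetical counts for the engines mentioned in the section.
counts = {"Kyubo": 120, "Open Messaging": 30, "Pure Cloud": 50}
```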
Use Case
Description: A customer service manager can use this chart to evaluate the workload and performance of deployed chat engines.
Chart Use:
- Identify the dominant engine: Analyze which engine handles the most sessions (e.g., Kyubo), which may reflect reliability or configuration for higher volume.
- Detect underutilized engines: Identify engines with low session counts (e.g., Open Messaging) to assess potential configuration issues or lower relevance in certain channels.
- Plan resources: Allocate more resources to engines handling greater volumes to maintain performance and reduce wait times.
- Optimize the system: Distribute load more evenly across engines to prevent saturation and improve overall efficiency.
Transfer to Agent
This chart shows how transfers to agents are distributed over time and allows for performance comparisons across different transfer engines. By analyzing the curves, patterns such as spikes or drops in transfers can be identified—indicating when users most require human assistance and which engines are most effective or popular. This information helps optimize engine use, allocate resources efficiently, and enhance customer experience.
Data Interpretation
The line chart illustrates the number of agent transfers over the selected period, with each line representing a different transfer engine.
- X-Axis: Represents time, segmented into defined intervals for tracking transfer evolution and patterns.
- Y-Axis: Indicates the number of interactions transferred to agents, allowing volume comparisons across time.
Each line is color-coded to distinguish transfer engines. This shows how many sessions resulted in a transfer, which engines handled them, and when. It helps detect usage and performance patterns.
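One line per engine amounts to a count of transfer events per time interval. A minimal sketch over hypothetical (interval, engine) event pairs:

```python
from collections import defaultdict

# Hypothetical transfer events: the interval they fall into and the engine
# that handled the transfer.
transfers = [
    ("2024-01-01", "Kyubo"),
    ("2024-01-01", "Pure Cloud"),
    ("2024-01-02", "Kyubo"),
    ("2024-01-02", "Kyubo"),
]

def transfer_series(events):
    """Build one series per engine: interval -> number of agent transfers."""
    series = defaultdict(dict)
    for interval, engine in events:
        series[engine][interval] = series[engine].get(interval, 0) + 1
    return dict(series)
```

Spikes in any one series correspond to the peaks the section says to watch for.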
Use Case
Description: On a customer support platform using multiple engines to manage transfers, the goal is to optimize engine efficiency and improve customer experience.
- Identify the dominant engine: Helps determine which engine is handling more sessions—indicating reliability or better configuration.
- Detect underutilized engines: Understand why certain engines handle fewer sessions (e.g., due to misconfiguration or low channel relevance).
- Plan resources: Allocate more agents or engine capacity where interaction volume is higher.
- System optimization: Redistribute load across engines to balance work and improve system efficiency.
Channel Transfer
The goal of this chart is to show how transferred sessions are distributed based on their source channel, offering a clear view of which channels generate more interactions requiring human attention. This helps identify where self-service may need improvements and which channels need additional support. By analyzing transfer proportions per channel, both automation and human resource allocation can be optimized to improve efficiency and reduce wait times.
Data Interpretation
This pie chart analyzes the proportion of sessions transferred to agents from different origin channels. Each slice represents a channel and its size reflects the percentage of total transfers from that channel.
- Large segments: Represent channels with higher volumes of human-assisted interactions—indicating areas where automation may need improvement or more resources.
- Small segments: Indicate fewer transfers, possibly due to effective automation or lower usage.
Use Case
Description: On a customer service platform operating across web chat, social media, email, and mobile apps, the goal is to analyze transfers by channel to optimize resources and enhance user experience.
- Identify the dominant channel: Determine which channel drives the most agent transfers (e.g., web chat), indicating higher demand or weaker automation.
- Detect underutilized channels: Smaller segments (e.g., social media) may show either better automation or lower usage—guiding whether evaluation or adjustment is needed.
- System optimization: High-transfer channels may indicate automation gaps or process failures—highlighting improvement areas to balance resources and maximize efficiency.
Agent Transfer Details
This table provides a detailed breakdown of sessions transferred to agents, showing key information like agent name, support date, source channel, and chat engine used. It helps analyze each agent’s workload, identify performance patterns by channel and schedule, and assess resource assignment efficiency. It also supports detecting improvement areas in transfer distribution and optimizing customer support processes.
Data Interpretation
This table lists all sessions transferred to agents, complementing the previous pie chart with detailed records by date and agent.
Table columns include:
- Agent Name: Name of the agent who handled the session.
- Date: Date of the interaction.
- Channel: Channel from which the session originated.
- Total Transferred: Number of transfers received by the agent on the given date.
- Chat Engine: Engine through which the agent handled the interaction.
- Last Transfer: Time of the most recent transfer to the agent.
The table clearly shows how transferred sessions are distributed among agents, helping identify workload and performance by channel.
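Each table row aggregates the raw transfer records for one agent on one date. A sketch under that assumption (field names are illustrative, not the report's actual schema):

```python
# Hypothetical raw transfer records; the table shows one row per (agent, date).
records = [
    {"agent": "Ana", "date": "2024-01-01", "channel": "WhatsApp",
     "engine": "Kyubo", "time": "09:15"},
    {"agent": "Ana", "date": "2024-01-01", "channel": "WhatsApp",
     "engine": "Kyubo", "time": "16:40"},
]

def table_rows(recs):
    """Roll raw transfers up into Total Transferred and Last Transfer per row."""
    rows = {}
    for r in recs:
        key = (r["agent"], r["date"])
        row = rows.setdefault(key, {"total": 0, "last": r["time"],
                                    "channel": r["channel"], "engine": r["engine"]})
        row["total"] += 1
        # HH:MM strings compare correctly in lexicographic order.
        row["last"] = max(row["last"], r["time"])
    return rows
```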
Use Case
Description: In a multi-channel customer service platform, the goal is to optimize resource allocation, detect support trends, and evaluate agent performance.
Table Use:
- Workload evaluation: Helps identify agents handling more transfers—highlighting workload imbalances and informing task redistribution.
- Channel pattern detection: Analyzing transferred sessions by channel shows where demand is higher—guiding automation efforts or resource reinforcement.
- Agent efficiency monitoring: With data on chat engine and last transfer time, it helps identify high-performing agents and replicate best practices.
- Shift and resource optimization: Reviewing transfer dates and times helps detect peak demand periods for better scheduling.
Session Time Details
This table provides a daily breakdown of sessions transferred to agents, focusing on interaction duration and support process efficiency. With columns such as Session ID, Channel, Agent, and Queue Time, it allows evaluation of wait times, agent workload, and potential system bottlenecks. Additional views like Dialogs, Debug, and Health support detailed review of each session—helping identify technical issues, interaction quality, and session status to improve support efficiency and resource usage.
Data Interpretation
Table columns:
- Session ID: Unique identifier to track interactions.
- Channel: Channel through which the session originated (Teams, WhatsApp, Telegram, etc.).
- Agent: Name of the agent who handled the interaction.
- Queue Time: Time the session spent waiting before being handled.
Additional session views:
- Dialogs: Shows chat history in a message format. Clicking a dialog opens a key-value detail table.
- Debug: Displays technical logs and error details.
- Health: Shows session status (success or issues).
Each column includes filters for searching and organizing data efficiently.
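The column filters boil down to predicates over the rows. For instance, flagging long-wait sessions with a hypothetical 300-second threshold (the threshold and field names are assumptions, not values from the product):

```python
# Hypothetical session rows mirroring the table's columns.
sessions = [
    {"session_id": "s1", "channel": "Teams", "agent": "Ana", "queue_time": 45},
    {"session_id": "s2", "channel": "WhatsApp", "agent": "Luis", "queue_time": 310},
]

def long_waits(rows, threshold_seconds=300):
    """Mimic a column filter: keep sessions whose queue time exceeds the threshold."""
    return [r["session_id"] for r in rows if r["queue_time"] > threshold_seconds]
```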
Dialogs
This view shows the session conversation. On the right, messages are displayed like a chat UI. Clicking a message opens a detailed key-value table.
- conversationPart: Who is speaking (Client, Bot, Agent).
- text: The message content from the speaker.
- options: Response options offered by the bot (if the response is a menu).
- date: Exact date and time the message was sent.
- currentIntent: Intent currently being executed.
- previousIntent: Previously executed intent.
- confidence: Confidence level of the intent match.
- cognitiveEngine: Engine managing the conversation.
- minConfidence: Minimum confidence threshold required.
- isOnChat: Indicates if the session is in a live chat.
- conversationId: Unique ID to link all parts of the session.
- partName: Name of the flow part being handled.
- correlationTrace: Correlation tracking value.
- subLevel: Level within the flow structure.
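A single dialog part is effectively a key-value record built from the fields above. One practical check this detail view enables, sketched with a hypothetical message, is comparing `confidence` against `minConfidence`:

```python
# Hypothetical dialog part using a subset of the key-value fields listed above.
part = {
    "conversationPart": "Bot",
    "text": "How can I help you?",
    "currentIntent": "greeting",
    "confidence": 0.92,
    "minConfidence": 0.60,
}

def intent_matched(p):
    """An intent match is confident when its confidence reaches the
    engine's minimum threshold (minConfidence)."""
    return p.get("confidence", 0) >= p.get("minConfidence", 1)
```

Parts that fail this check point at flows worth refining before they escalate to an agent.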
Debug
Provides technical insight into errors or issues within the session.
Health
Monitors indicators of success or problems to identify improvement opportunities.
Use Case
Description: On a multi-channel customer support platform requiring agent intervention for complex cases, a detailed session breakdown helps improve service efficiency, detect bottlenecks, and optimize resource allocation.
Table Use:
- Wait time evaluation: Reviewing "Queue Time" identifies long-wait sessions, which may signal delays or capacity issues—guiding improvements in workload distribution.
- Agent workload analysis: Shows which agents handle more sessions and how long they take—helping prevent overload and improve team efficiency.
- Cognitive engine performance monitoring: The "Dialogs" view reveals how AI engines interact with users. By analyzing intents and confidence, administrators can refine flows before escalation.
- Technical issue detection: The "Debug" view helps spot recurring problems affecting performance, enabling proactive resolution.
- Session quality monitoring: The "Health" view flags problematic sessions. Investigating causes (e.g., chat engine errors or flow issues) helps improve user experience.