
Performance


Last updated 3 months ago

The Performance tab provides a comprehensive overview of your individual agents' performance, offering insights and areas for improvement. It is powered by Wayfound's AI Manager, which continually monitors your active agents using state-of-the-art LLMs; it currently uses OpenAI's o3-mini model to analyze agent performance.

This view is designed to help you quickly understand the strengths and weaknesses of your agents and identify directions for improvement.

The Performance tab provides insights for all agents in the organization with at least 5 user interactions. Each agent's performance can be shown or hidden by clicking the toggle next to that agent's name.

Assessment Outcomes

The AI Manager evaluates the performance of your agents by reading and rating their interactions with users. The overall results are displayed on the Performance tab to the right of each agent's name. Possible outcomes include:

Hot to go!: The agent is meeting expectations in its interactions. However, the AI Manager can still raise potential issues and provide suggestions for improvement.

Needs review: The agent's performance is satisfactory, but there are areas that require closer attention and potential improvement. The AI Manager will flag an agent as "Needs review" when any of the following are triggered:

  • User Rating of 1-3

  • Negative sentiment

  • Agent goal was not successful

  • 1 or 2 Knowledge Gaps

  • Guideline violation where the user specifies a "Needs review" priority

Needs attention: The agent is facing significant challenges or issues that require immediate focus and resolution. The AI Manager will flag an agent as "Needs attention" when any of the following are triggered:

  • Action Failure

  • 3+ Knowledge Gaps

  • Guideline violation where the user specifies a "Needs attention" priority

More information can be accessed using the link to the Suggestions tab, which provides specific recommendations for improvement.

Assessment components

During its review of agent performance, the AI Manager identifies each agent's knowledge gaps, considers user satisfaction scores, and searches transcripts for specific issues. These components are all displayed in the tab.

User Satisfaction

Wayfound collects ratings given by users at the end of their interactions with agents. It summarizes them here with an overall average score and a distribution of scores. Click the summary of user satisfaction scores to view a detailed breakdown of sessions by score. For each session, click to view the corresponding transcript.

Knowledge Gaps:

As part of its assessment, the AI Manager identifies the knowledge gaps that emerged in the agent's recent interactions. The Performance tab displays a graph of recordings that indicate knowledge gaps as a share of total recordings. Clicking the graph displays specific knowledge gaps on the right-hand side of the page.

Click each theme to expand it with more details and summaries of relevant sessions. For each session, click to view the corresponding transcript.

Guideline Violations:

The AI Manager assesses the performance of each agent according to the custom Guidelines you set for it. The Performance tab displays the count and percentage of recordings where the agent was compliant with the guidelines versus in violation of them. Clicking this chart provides more detail for each guideline, along with example sessions for each guideline violation. For each example session, click to view the corresponding transcript.

Users can provide feedback to improve the AI Manager's application of agent guidelines by opening a session containing a guideline violation. See User Feedback for more information.

Action Failures:

The AI Manager monitors your agents' action calls and calculates their success rates. Click the overall summary of action failures to open a more detailed view of failure rates by action. Each action links to sessions where that action was called. For each example session, click to view the corresponding transcript. Session transcripts display where an action was called; click the action to show more detail about the request.

Potential issues:

As part of its review, the AI Manager identifies potential issues in the agent's interactions with users. These issues may concern the agent's behaviors or the overall outcome of its interactions. The AI Manager includes references to specific recordings that demonstrate these issues. Clicking a reference opens it on the right side of the page.

Each recording is also given a status, an explanation of that status, and suggestions for improvement based on the individual interaction. Below the suggestions is the transcript itself.

Follow-Up Analysis

The Performance tab displays a chat window where you can interact with the AI Manager agent for additional analysis. You can probe further on any of the insights it provides or ask questions about other aspects of the agent's performance. This allows the AI Manager to explain its assessment, enhancing your understanding of and confidence in its analysis.

Based on the feedback it provides, the AI Manager suggests custom follow-up questions. These suggestions are found below the key topics. In addition to the suggested questions, you can also prompt the AI Manager to:

  • Provide more details or evidence for a specific insight

  • Compare this agent's performance to similar agents in the organization

  • Suggest ways to leverage and expand on the highlighted strengths

  • Offer recommendations for resolving specific issues
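The assessment triggers described above amount to a simple precedence check: any "Needs attention" trigger outranks the "Needs review" triggers, and an agent with no triggers is in good standing. The sketch below illustrates that logic only; the field names and data model are hypothetical for illustration and are not Wayfound's actual API.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SessionReview:
    """Hypothetical per-session review data, for illustration only."""
    user_rating: Optional[int] = None   # end-of-session rating, if the user gave one
    negative_sentiment: bool = False
    goal_successful: bool = True
    knowledge_gaps: int = 0
    guideline_violations: List[str] = field(default_factory=list)  # violated guideline priorities
    action_failure: bool = False

def assessment_outcome(review: SessionReview) -> str:
    """Map a reviewed session to one of the three statuses, most severe first."""
    # "Needs attention" triggers take precedence over "Needs review" triggers.
    if (review.action_failure
            or review.knowledge_gaps >= 3
            or "Needs attention" in review.guideline_violations):
        return "Needs attention"
    if ((review.user_rating is not None and review.user_rating <= 3)
            or review.negative_sentiment
            or not review.goal_successful
            or review.knowledge_gaps in (1, 2)
            or "Needs review" in review.guideline_violations):
        return "Needs review"
    return "Hot to go!"
```

Checking the most severe condition first mirrors how the Performance tab rolls up a single status per agent: one action failure is enough to outweigh an otherwise clean set of sessions.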