
New Virtual Supervisor Automated Evaluation features that automate evaluation generation and expand evaluation limits

  • 1.  New Virtual Supervisor Automated Evaluation features that automate evaluation generation and expand evaluation limits

    Posted 12-08-2025 13:29
    Edited by Jose Ruiz 12-11-2025 20:52
      |   view attached

    Now available: the ability to automatically generate agent auto-complete evaluations on a recurring basis, at scale. With the new Agent Evaluation Scoring Rules API, you can define rule-based logic that triggers evaluations as soon as interactions close, eliminating manual work and ensuring consistent quality assurance coverage. 

    You can also now raise the default daily evaluation limit from 50 to up to 200 evaluations per agent per day. To request an increase, submit a Genesys Cloud Customer Care ticket with a clear business case explaining why you need the higher volume. 

    Together, these enhancements unlock powerful flexibility to automate and scale your evaluation programs like never before. 

    What you can do with the new Agent Scoring Rules APIs 

    These public APIs let administrators and developers: 

    • Create, update, and validate rule configurations 

    • Automate rule deployment and lifecycle management 

    • Control access using role-based permissions for Quality and Speech and Text Analytics (STA) admins 

    • Programmatically enable or disable scoring rules 

    • Generate and submit evaluations in real time 

    How it works 

    Agent scoring rules rely on two required inputs: 

    • The agent auto-complete evaluation form to use 

    • The percentage of interactions that should be evaluated 

    Once defined and enabled, these rules generate evaluations automatically for qualifying interactions. 
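    Conceptually, percentage sampling behaves like an independent per-interaction draw, capped by the per-agent daily evaluation limit. The following is a simplified sketch of that behavior, not the actual service implementation; the limit value and draw logic here are illustrative:

    ```python
    import random

    DAILY_LIMIT = 50  # default per-agent limit; raisable to 100 or 200 on request

    def should_evaluate(sampling_percentage: float, evals_today: int) -> bool:
        """Each qualifying interaction gets a samplingPercentage% chance of an
        automatic evaluation, unless the agent's daily limit is already reached."""
        if evals_today >= DAILY_LIMIT:
            return False
        return random.random() * 100 < sampling_percentage
    ```

    A rule with samplingType "All" behaves like a 100% draw, so every qualifying interaction is evaluated until the daily limit is hit.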

    For more information about Agent auto-complete evaluation forms, see Agent auto-complete form type. 

    Step-by-step: Creating, editing, or deleting a rule

    1. Select a program 
      Programs are containers for Speech and Text Analytics settings (for example, Topics) and agent auto-complete evaluation rules, and are mapped to specific queues or flows. For more information about programs, see the Work with a program article. You need a program ID to associate with your rule; you can obtain it with GET /api/v2/speechandtextanalytics/programs.
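      As a sketch, the program lookup is a plain authenticated HTTP GET. The region host and token below are placeholders you would replace with your own:

      ```python
      import urllib.request

      BASE_URL = "https://api.mypurecloud.com"  # placeholder: use your region's API host

      def build_programs_request(token: str) -> urllib.request.Request:
          """Build the GET request for the programs endpoint; each returned
          entity's "id" is the programId used by the agent scoring rule APIs."""
          return urllib.request.Request(
              f"{BASE_URL}/api/v2/speechandtextanalytics/programs",
              headers={"Authorization": f"Bearer {token}"},
          )

      req = build_programs_request("YOUR_OAUTH_TOKEN")
      # urllib.request.urlopen(req) would return the JSON page of programs.
      ```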

    2. Use the POST /api/v2/quality/programs/{programId}/agentscoringrules API to create a rule: 
      Define key parameters in your API request: 

      • Sampling type: Set to All, or set Percentage with a sampling value (for example, 50) 
      • Submission type: Set to Automated 
      • Evaluation form context ID: Retrieved via the evaluation form API 
      • Enabled: Set to true to activate the rule  
    • POST Body Contract

      {
       "programId": "string (UUID)",
       "samplingType": "Percentage | All",
       "samplingPercentage": "number (0–100, required if samplingType = 'Percentage')",
       "submissionType": "Automated",
       "evaluationFormContextId": "string (UUID)",
       "enabled": true,
       "published": true
      }

    • Field descriptions:
      • programId (UUID, required): Program ID for which the rule is created
      • samplingType (string, required): Allowed values: "Percentage" or "All"
      • samplingPercentage (number, conditional): Required only if samplingType = "Percentage"
      • submissionType (string, required): Must be "Automated"
      • evaluationFormContextId (UUID, required): The contextId retrieved from the Evaluation Form API
      • enabled (boolean, required): Turns the rule on/off
      • published (boolean, required): Controls whether the rule is active for execution
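    To make the conditional samplingPercentage requirement concrete, here is a small helper that assembles and sanity-checks a POST body matching the contract above. The function name and validation are illustrative, not part of an official SDK:

    ```python
    import json

    def build_rule_body(program_id, form_context_id, sampling_type="Percentage",
                        sampling_percentage=None, enabled=True, published=True):
        """Assemble a body for POST .../agentscoringrules, enforcing that
        samplingPercentage accompanies samplingType "Percentage"."""
        if sampling_type not in ("Percentage", "All"):
            raise ValueError('samplingType must be "Percentage" or "All"')
        body = {
            "programId": program_id,
            "samplingType": sampling_type,
            "submissionType": "Automated",
            "evaluationFormContextId": form_context_id,
            "enabled": enabled,
            "published": published,
        }
        if sampling_type == "Percentage":
            if sampling_percentage is None or not 0 <= sampling_percentage <= 100:
                raise ValueError("samplingPercentage (0-100) is required for Percentage sampling")
            body["samplingPercentage"] = sampling_percentage
        return json.dumps(body)

    # Placeholder IDs; use a real programId and evaluation form contextId.
    payload = build_rule_body("my-program-id", "my-form-context-id", sampling_percentage=50)
    ```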
       
    3. Update or disable rules as needed: 
      Use the PUT /api/v2/quality/programs/{programId}/agentscoringrules/{ruleId} API to adjust sampling percentages or temporarily disable a rule by toggling enabled to false. 

    • PUT body contract and field descriptions: identical to the POST request above; send the complete body with your changes (for example, set "enabled" to false to pause a rule).
       
    4. Monitor rule activity 
      Use the GET /api/v2/quality/programs/{programId}/agentscoringrules/{ruleId} API to confirm rule creation and view associated rules per program. 

    5. Remove rules if necessary 
      Use the DELETE /api/v2/quality/programs/{programId}/agentscoringrules/{ruleId} API to remove any rule by its ID. 
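    The update, read, and delete steps above all target the same per-rule URL and differ only in HTTP method, so they can be sketched with one request helper. The host, token, and IDs are placeholders:

    ```python
    import urllib.request

    BASE = "https://api.mypurecloud.com"  # placeholder: use your region's API host

    def rule_request(token, program_id, rule_id, method="GET", body=None):
        """Build a request against .../agentscoringrules/{ruleId}.
        PUT with a full JSON body updates a rule (e.g. "enabled": false pauses it),
        GET reads its configuration back, and DELETE removes it."""
        return urllib.request.Request(
            f"{BASE}/api/v2/quality/programs/{program_id}/agentscoringrules/{rule_id}",
            data=body.encode() if body else None,
            headers={"Authorization": f"Bearer {token}",
                     "Content-Type": "application/json"},
            method=method,
        )

    delete_req = rule_request("YOUR_TOKEN", "prog-id", "rule-id", method="DELETE")
    # urllib.request.urlopen(delete_req) would remove the rule.
    ```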

    What to expect 

    Once active, rules operate continuously. Evaluations are created and submitted automatically after each interaction ends, with no delay and no manual work. You can define up to five rules per program, each using different forms or sampling logic. 

    You can also combine rules. For example, apply one rule with a low sampling percentage for random checks, and another at 100% for high-priority interactions. Keep in mind that the daily evaluation limit per agent applies. The default is 50, but you can request an increase up to 200 evaluations per day by submitting a Genesys Cloud Customer Care ticket that includes your business justification. 
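    The combination described above might look like the following two rule bodies. The form context IDs are placeholders, and the five-rules-per-program and daily per-agent limits still apply:

    ```python
    # Two complementary rules for the same program: a light random sample
    # plus full coverage with a stricter form for high-priority interactions.
    random_check = {
        "samplingType": "Percentage",
        "samplingPercentage": 10,          # spot-check 10% of interactions
        "submissionType": "Automated",
        "evaluationFormContextId": "random-check-form-id",   # placeholder
        "enabled": True,
        "published": True,
    }
    high_priority = {
        "samplingType": "All",             # evaluate 100% of matched interactions
        "submissionType": "Automated",
        "evaluationFormContextId": "priority-form-id",       # placeholder
        "enabled": True,
        "published": True,
    }
    rules = [random_check, high_priority]
    ```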

    This expanded automation, combined with configurable limits, means your team can scale quality assurance without scaling effort. 

    For more information, see the attached recorded demo, which showcases the functionality of the newly available APIs. 

    What's next 

    By the end of January, we'll release public APIs for accessing question-level evaluation data, including auto-complete evaluations. This will enhance your ability to analyze QA results and improve data operations across your environment. 

    Regards,


    #AIScoring(VirtualSupervisor)
    #SupervisorCopilot(AIInsights)
    #QualityEvaluations

    ------------------------------
    Jose Ruiz
    Genesys - Employees
    Product Manager
    jose.ruiz@genesys.com



  • 2.  RE: New Virtual Supervisor Automated Evaluation features that automates evaluation generation and expands evaluation limits
    Best Answer

    Posted 12-08-2025 14:12

    Great news!!! 



    ------------------------------
    David Betoni
    Principal PS Consultant
    ------------------------------



  • 3.  RE: New Virtual Supervisor Automated Evaluation features that automates evaluation generation and expands evaluation limits

    Posted 29 days ago

    @Jose Ruiz, a couple of questions:

    • Is the percentage sampling done on a daily basis?
    • How would we cap the maximum number of evaluations? Customers could get bill shock if there is no upper limit defined.
    • How do we ensure that evaluations are evenly distributed across agents?


    ------------------------------
    Anish
    ------------------------------



  • 4.  RE: New Virtual Supervisor Automated Evaluation features that automates evaluation generation and expands evaluation limits

    Posted 28 days ago

    Hello @Anish Sharma

    • The percentage sampling is done on an ongoing basis. Say you set it at 25%: every matched interaction then has a 25% chance of having an evaluation generated for it, until the daily limit is reached.
    • We still have the 50 evaluations per agent per day limit, which can be increased to either 100 or 200 evaluations per day at customer request. 
    • By evenly distributed, do you mean the same number of evaluations per agent, per timeframe?

    Regards,



    ------------------------------
    Jose Ruiz
    Genesys - Employees
    Product Manager
    jose.ruiz@genesys.com
    ------------------------------



  • 5.  RE: New Virtual Supervisor Automated Evaluation features that automates evaluation generation and expands evaluation limits

    Posted 26 days ago

    Hi everyone,

    Thanks for sharing the update - this is a really exciting enhancement.

    I tried to locate the Agent Evaluation Scoring Rules APIs, but I couldn't find them yet, even after checking the Genesys Preview APIs site. I may be missing something, but I wasn't able to identify the endpoints or documentation for these APIs.

    Could someone please help by pointing me to the correct documentation or API references?
    Any guidance would be greatly appreciated.

    Thanks in advance!



    ------------------------------
    Mateus Nunes
    Tech Leader Of CX at Solve4ME
    Brazil
    ------------------------------



  • 6.  RE: New Virtual Supervisor Automated Evaluation features that automates evaluation generation and expands evaluation limits

    Posted 26 days ago

    @Mateus Nunes,

    There was a hiccup getting the API documentation published before the holidays. It should be available in the Developer Center / API Explorer by late next week. 

    Regards,



    ------------------------------
    Jose Ruiz
    Genesys - Employees
    Product Manager
    jose.ruiz@genesys.com
    ------------------------------



  • 7.  RE: New Virtual Supervisor Automated Evaluation features that automates evaluation generation and expands evaluation limits

    Posted 26 days ago

    Thank you for the update.

    I appreciate the clarification and I'm looking forward to having access to the documentation so I can start testing it as soon as it becomes available.



    ------------------------------
    Mateus Nunes
    Tech Leader Of CX at Solve4ME
    Brazil
    ------------------------------



  • 8.  RE: New Virtual Supervisor Automated Evaluation features that automates evaluation generation and expands evaluation limits

    Posted 26 days ago

    Is there any possibility of having this option available through the user interface?



    ------------------------------
    Hugo Reboucas
    Opportunity Brazil - RPA
    ------------------------------



  • 9.  RE: New Virtual Supervisor Automated Evaluation features that automates evaluation generation and expands evaluation limits

    Posted 26 days ago

    Hello @Hugo Reboucas,

    Yes, the ability to create an Agent Auto-Complete rule that will generate evaluations will be available in the UI as part of Programs. It is scheduled to be released in early February.

    Regards,



    ------------------------------
    Jose Ruiz
    Genesys - Employees
    Product Manager
    jose.ruiz@genesys.com
    ------------------------------



  • 10.  RE: New Virtual Supervisor Automated Evaluation features that automates evaluation generation and expands evaluation limits

    Posted 22 days ago
    Edited by Mateus Nunes 22 days ago

    Hi Jose,


    I followed your guidance using the new AI Scoring APIs to distribute and complete Quality forms via Supervisor Copilot, and I can confirm the whole process worked exactly as described.

    This is going to be extremely useful for our quality operations, especially to scale evaluations and reduce manual effort while keeping consistency.

    As an extra validation, we also ran a quick test to understand the timing behavior: after the interaction is closed, it took around 15 minutes for the Quality form to become available and linked to the interaction. This helps a lot to set the right expectations for supervisors and QA teams.

    One limitation we noticed during the process is that the agent is not notified when the evaluation is created, and the form does not appear in the agent’s completed evaluations view or agent evaluation reports. From a change-management and transparency perspective, this could be a challenge, especially for operations that rely on agent visibility and acknowledgment of evaluations.

    Is this the expected behavior for evaluations created via the AI Scoring APIs and Supervisor Copilot? If so, are there any plans on the roadmap to improve this flow, specifically allowing agents to see and be notified about these evaluations in the future?

    Really appreciate you taking the time to document and share this with the community - great contribution 🙌
    Thanks again!



    ------------------------------
    Mateus Nunes
    Tech Leader Of CX at Solve4ME
    Brazil
    ------------------------------



  • 11.  RE: New Virtual Supervisor Automated Evaluation features that automates evaluation generation and expands evaluation limits

    Posted 14 days ago

    Mateus, 

    For evaluations to be visible in the portal, agents must view, sign, and confirm them; automated evaluations do not require this confirmation. 
    Currently, these automated evaluations can only be accessed through the interactions screen. 
    To optimize this workflow, we need to add a score tab that appears when clicking the "+" symbol directly on that screen.
    Additionally, when selecting a specific interaction, the system should allow the full form to open so the evaluation details can be reviewed.



    ------------------------------
    David Betoni
    Principal PS Consultant
    ------------------------------