Genesys Cloud - Main

New Virtual Supervisor Automated Evaluation features that automate evaluation generation and expand evaluation limits

  • 1.  New Virtual Supervisor Automated Evaluation features that automate evaluation generation and expand evaluation limits

    Posted 4 hours ago

    Now available: The ability to automatically generate agent auto-complete evaluations on a recurring basis, at scale. With the new Agent Evaluation Scoring Rules API, you can define rule-based logic to trigger evaluations as soon as interactions close, eliminating manual tasks and ensuring consistent quality assurance coverage. 

    You can also now increase the default daily evaluation limit from 50 to up to 200 evaluations per agent per day. To request an increase, submit a Genesys Cloud Customer Care ticket with a clear business case explaining why you need the higher volume. 

    Together, these enhancements unlock powerful flexibility to automate and scale your evaluation programs like never before. 

    What you can do with the new Agent Scoring Rules APIs 

    These public APIs let administrators and developers: 

    • Create, update, and validate rule configurations 

    • Automate rule deployment and lifecycle management 

    • Control access using role-based permissions for Quality and Speech and Text Analytics (STA) admins 

    • Programmatically enable or disable scoring rules 

    • Generate and submit evaluations in real time 

    How it works 

    Agent scoring rules rely on two required inputs: 

    • The agent auto-complete evaluation form to use 

    • The percentage of interactions that should be evaluated 

    Once defined and enabled, these rules generate evaluations automatically for qualifying interactions. 

    For more information about Agent auto-complete evaluation forms, see Agent auto-complete form type. 
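    As a rough sketch, the two required inputs can be pictured as a minimal rule definition. The field names below are illustrative assumptions, not the official Genesys Cloud schema; check the API Explorer for the authoritative contract.

```python
# Minimal sketch of an agent scoring rule's two required inputs.
# Field names are illustrative assumptions, not the official schema.
rule = {
    "evaluationFormContextId": "my-form-context-id",  # the agent auto-complete evaluation form to use
    "samplingPercentage": 50,                         # evaluate 50% of qualifying interactions
}

print(sorted(rule))
```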

    Step-by-step: Create a scoring rule 

    1. Select a program 
      Programs are containers of Speech and Text Analytics settings (for example, topics) and agent auto-complete evaluation rules. Programs are mapped to specific queues or flows. For more information about programs, see the Work with a program article. Ensure that you have a program ID to associate with your scoring rule; you can obtain it using the GET /api/v2/speechandtextanalytics/programs API. 

    2. Use the POST /api/v2/quality/programs/{programId}/agentscoringrules API to create a rule 
      Define key parameters in your API request: 

      • Sampling type: Set to all or define a percentage (e.g., 50%) 
      • Submission type: Set to automated 
      • Evaluation form context ID: Retrieved via the evaluation form API 
      • Enabled: Set to true to activate the rule  
    3. Update or disable rules as needed 
      Use the PUT /api/v2/quality/programs/{programId}/agentscoringrules API to adjust sampling percentages or temporarily disable a rule by toggling enabled to false. 

    4. Monitor rule activity 
      Use the GET /api/v2/quality/programs/{programId}/agentscoringrules API to confirm rule creation and view associated rules per program. 

    5. Remove rules if necessary 
      Use the DELETE /api/v2/quality/programs/{programId}/agentscoringrules API to remove any rule by its ID. 
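    The steps above can be sketched end to end in Python. This is a hedged illustration, not official sample code: the host, token, IDs, and request-body field names (samplingType, evaluationFormContextId, and so on) are placeholder assumptions, and how the rule ID is supplied to DELETE should be confirmed in the API reference. The script only builds the requests; sending them requires real credentials.

```python
import json
from urllib import request

# Placeholder values; replace with your region's API host, a real OAuth
# token, and IDs from your org. Field names in the request bodies are
# illustrative assumptions, not the official Genesys Cloud schema.
BASE = "https://api.mypurecloud.com"
TOKEN = "YOUR_OAUTH_TOKEN"
PROGRAM_ID = "my-program-id"  # from GET /api/v2/speechandtextanalytics/programs
RULES_PATH = f"{BASE}/api/v2/quality/programs/{PROGRAM_ID}/agentscoringrules"

def build(method, url, body=None):
    """Build an authenticated request without sending it."""
    data = json.dumps(body).encode() if body is not None else None
    return request.Request(
        url,
        data=data,
        headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"},
        method=method,
    )

# Step 2: create a rule (50% sampling, automated submission, enabled).
create = build("POST", RULES_PATH, {
    "samplingType": "percentage",             # or "all"
    "samplingPercentage": 50,
    "submissionType": "automated",
    "evaluationFormContextId": "my-form-context-id",
    "enabled": True,
})

# Step 3: update a rule, e.g. temporarily disable it.
update = build("PUT", RULES_PATH, {"enabled": False})

# Step 4: list rules for the program to confirm creation.
monitor = build("GET", RULES_PATH)

# Step 5: delete a rule by ID; a path segment is assumed here, which
# should be confirmed against the API reference.
delete = build("DELETE", f"{RULES_PATH}/my-rule-id")

for req in (create, update, monitor, delete):
    print(req.method, req.full_url)
    # request.urlopen(req)  # uncomment with real credentials to send
```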

    What to expect 

    Once active, rules operate continuously. Evaluations are created and submitted automatically after each interaction ends: no delay, no manual work. You can define up to five rules per program, each using different forms or sampling logic. 

    You can also combine rules. For example, apply one rule with a low sampling percentage for random checks, and another at 100% for high-priority interactions. Keep in mind that the daily evaluation limit per agent applies. The default is 50, but you can request an increase up to 200 evaluations per day by submitting a Genesys Cloud Customer Care ticket that includes your business justification. 
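    That combination might look like the following hypothetical pair of rule bodies (field names assumed for illustration, not the official schema):

```python
# Two hypothetical rules for one program: a low-percentage random spot
# check plus full coverage for a high-priority form. Field names are
# illustrative assumptions, not the official schema.
rules = [
    {"evaluationFormContextId": "baseline-form-context", "samplingPercentage": 10, "enabled": True},
    {"evaluationFormContextId": "priority-form-context", "samplingPercentage": 100, "enabled": True},
]

assert len(rules) <= 5  # a program supports up to five rules
print(len(rules), "rules configured")
```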

    This expanded automation, combined with configurable limits, means your team can scale quality assurance without scaling effort. 

    For further information, see the attached recorded demo, which showcases the functionality of the newly available APIs. 

    What's next 

    By the end of January, we'll release public APIs for accessing question-level evaluation data, including auto-complete evaluations. This will enhance your ability to analyze QA results and improve data operations across your environment. 

    Regards,


    #AIConfiguration
    #WEM-Quality,WFM,Gamification,etc

    ------------------------------
    Jose Ruiz
    Genesys - Employees
    Product Manager
    jose.ruiz@genesys.com
    ------------------------------


  • 2.  RE: New Virtual Supervisor Automated Evaluation features that automate evaluation generation and expand evaluation limits

    Posted 3 hours ago

    Great news!!! 



    ------------------------------
    David Betoni
    Principal PS Consultant
    ------------------------------