For evaluations to be visible on the portal, agents must sign and confirm them after viewing; automated evaluations are the exception and do not require this confirmation.
Currently, these automated evaluations can only be accessed through the interactions screen.
To streamline this workflow, we need to add a score tab that appears when clicking the "+" symbol directly on that screen.
Additionally, when a specific interaction is selected, the system should open the full form so the evaluation details can be reviewed.
Original Message:
Sent: 01-12-2026 07:54
From: Mateus Nunes
Subject: New Virtual Supervisor Automated Evaluation features that automate evaluation generation and expand evaluation limits
Hi Jose,
I followed your guidance using the new AI Scoring APIs to distribute and complete Quality forms via Supervisor Copilot, and I can confirm the whole process worked exactly as described.
This is going to be extremely useful for our quality operations, especially to scale evaluations and reduce manual effort while keeping consistency.
As an extra validation, we also ran a quick test to understand the timing behavior: after the interaction is closed, it took around 15 minutes for the Quality form to become available and linked to the interaction. This helps a lot to set the right expectations for supervisors and QA teams.
One limitation we noticed during the process is that the agent is not notified when the evaluation is created, and the form does not appear in the agent's completed evaluations view or agent evaluation reports. From a change-management and transparency perspective, this could be a challenge, especially for operations that rely on agent visibility and acknowledgment of evaluations.
Is this the expected behavior for evaluations created via the AI Scoring APIs and Supervisor Copilot? If so, are there any plans on the roadmap to improve this flow, specifically allowing agents to see and be notified about these evaluations in the future?
Really appreciate you taking the time to document and share this with the community - great contribution 🙌
Thanks again!
------------------------------
Mateus Nunes
Tech Leader Of CX at Solve4ME
Brazil
Original Message:
Sent: 12-08-2025 13:29
From: Jose Ruiz
Subject: New Virtual Supervisor Automated Evaluation features that automate evaluation generation and expand evaluation limits
Now available: The ability to automatically generate agent auto-complete evaluations on a recurring basis, at scale. With the new Agent Evaluation Scoring Rules API, you can define rule-based logic to trigger evaluations as soon as interactions close, eliminating manual tasks and ensuring consistent quality assurance coverage.
You can also now increase the default daily evaluation limit from 50 to as many as 200 evaluations per agent per day. To request an increase, submit a Genesys Cloud Customer Care ticket with a clear business case explaining why you need the higher volume.
Together, these enhancements unlock powerful flexibility to automate and scale your evaluation programs like never before.
What you can do with the new Agent Scoring Rules APIs
These public APIs let administrators and developers:
- Create agent scoring rules for a program
- Update or temporarily disable existing rules
- Retrieve rules to confirm their configuration
- Delete rules that are no longer needed
Agent scoring rules rely on two required inputs:
- A program ID, obtained from the Speech and Text Analytics programs API
- An evaluation form context ID, obtained from the evaluation form API
Once defined and enabled, these rules generate evaluations automatically for qualifying interactions.
Step-by-step: Creating, editing or deleting a rule
Select a program
Programs are containers of Speech and Text Analytics settings (e.g., Topics) and agent auto-evaluation rules, and they are mapped to specific queues or flows. For more information about programs, see the Work with a program article. Ensure you have a program ID to associate with your rule; the program ID can be obtained using the GET /api/v2/speechandtextanalytics/programs API.
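Below is a minimal Python sketch of that lookup using the requests library. The base URL (https://api.mypurecloud.com, US East) and the bearer token are placeholders you would replace for your own region and OAuth client, and the "entities" wrapper assumes the usual Genesys Cloud list response shape:

import requests

BASE_URL = "https://api.mypurecloud.com"  # adjust for your Genesys Cloud region
HEADERS = {"Authorization": "Bearer <your-oauth-token>"}  # placeholder token

resp = requests.get(f"{BASE_URL}/api/v2/speechandtextanalytics/programs", headers=HEADERS)
resp.raise_for_status()

# Each entity carries the program ID to pass to the agent scoring rules APIs.
for program in resp.json().get("entities", []):
    print(program["id"], program.get("name"))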
Use the POST /api/v2/quality/programs/{programId}/agentscoringrules API to create a rule:
Define key parameters in your API request (a request sketch follows the field descriptions below):
- Sampling type: Set to "All", or set to "Percentage" and supply a sampling percentage (e.g., 50)
- Submission type: Set to "Automated"
- Evaluation form context ID: Retrieved via the evaluation form API
- Enabled: Set to true to activate the rule
POST Body Contract
{
"programId": "string (UUID)",
"samplingType": "Percentage | All",
"samplingPercentage": "number (0–100, required if samplingType = 'Percentage')",
"submissionType": "Automated",
"evaluationFormContextId": "string (UUID)",
"enabled": true,
"published": true
}
- Field descriptions:
| Field | Type | Required | Description |
| programId | UUID | Yes | Program ID for which the rule is created |
| samplingType | string | Yes | Allowed values: "Percentage" or "All" |
| samplingPercentage | number | Conditional | Required only if samplingType = Percentage |
| submissionType | string | Yes | Must be "Automated" |
| evaluationFormContextId | UUID | Yes | Retrieved from the Evaluation Form API (contextId) |
| enabled | boolean | Yes | Turns the rule on/off |
| published | boolean | Yes | Controls whether the rule is active for execution |
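Here is a minimal Python sketch of the create call, reusing the BASE_URL and HEADERS setup from the earlier sketch; the program and form context IDs are placeholders you retrieve beforehand:

import requests

BASE_URL = "https://api.mypurecloud.com"  # adjust for your region
HEADERS = {
    "Authorization": "Bearer <your-oauth-token>",  # placeholder token
    "Content-Type": "application/json",
}
PROGRAM_ID = "<program-uuid>"              # from the programs API above
FORM_CONTEXT_ID = "<form-context-uuid>"    # contextId from the evaluation form API

body = {
    "programId": PROGRAM_ID,
    "samplingType": "Percentage",   # or "All" to cover every interaction
    "samplingPercentage": 50,       # required only when samplingType = "Percentage"
    "submissionType": "Automated",
    "evaluationFormContextId": FORM_CONTEXT_ID,
    "enabled": True,                # activates the rule
    "published": True,              # makes the rule eligible for execution
}

resp = requests.post(
    f"{BASE_URL}/api/v2/quality/programs/{PROGRAM_ID}/agentscoringrules",
    headers=HEADERS,
    json=body,
)
resp.raise_for_status()
print("Created rule:", resp.json().get("id"))  # keep the ID for later updates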
Update or disable rules as needed:
Use the PUT /api/v2/quality/programs/{programId}/agentscoringrules/{ruleId} API to adjust sampling percentages or temporarily disable a rule by toggling enabled to false.
PUT Body Contract
The PUT request body is identical to the POST body contract above; the same fields, types, and requirements apply.
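A short Python sketch of disabling a rule, assuming (as is typical for PUT) that the full body is resubmitted with enabled toggled to false; the rule ID comes from the create response or the GET endpoint below:

import requests

BASE_URL = "https://api.mypurecloud.com"  # adjust for your region
HEADERS = {
    "Authorization": "Bearer <your-oauth-token>",  # placeholder token
    "Content-Type": "application/json",
}
PROGRAM_ID = "<program-uuid>"
RULE_ID = "<rule-uuid>"

body = {
    "programId": PROGRAM_ID,
    "samplingType": "Percentage",
    "samplingPercentage": 50,
    "submissionType": "Automated",
    "evaluationFormContextId": "<form-context-uuid>",
    "enabled": False,   # the rule stops generating evaluations until re-enabled
    "published": True,
}

resp = requests.put(
    f"{BASE_URL}/api/v2/quality/programs/{PROGRAM_ID}/agentscoringrules/{RULE_ID}",
    headers=HEADERS,
    json=body,
)
resp.raise_for_status()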
Monitor rule activity
Use the GET /api/v2/quality/programs/{programId}/agentscoringrules/{ruleId} API to confirm rule creation and view associated rules per program.
Remove rules if necessary
Use the DELETE /api/v2/quality/programs/{programId}/agentscoringrules/{ruleId} API to remove any rule by its ID.
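Both endpoints share the same URL, so a combined Python sketch looks like this (same placeholder setup as the earlier sketches):

import requests

BASE_URL = "https://api.mypurecloud.com"  # adjust for your region
HEADERS = {"Authorization": "Bearer <your-oauth-token>"}  # placeholder token
PROGRAM_ID = "<program-uuid>"
RULE_ID = "<rule-uuid>"

rule_url = f"{BASE_URL}/api/v2/quality/programs/{PROGRAM_ID}/agentscoringrules/{RULE_ID}"

# Confirm the rule exists and review its current configuration.
resp = requests.get(rule_url, headers=HEADERS)
resp.raise_for_status()
print(resp.json())

# Remove the rule by ID once it is no longer required.
resp = requests.delete(rule_url, headers=HEADERS)
resp.raise_for_status()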
Once active, rules operate continuously. Evaluations are created and submitted automatically after each interaction ends: no delay, no manual work. You can define up to five rules per program, each using different forms or sampling logic.
You can also combine rules. For example, apply one rule with a low sampling percentage for random checks, and another at 100% for high-priority interactions. Keep in mind that the daily evaluation limit per agent applies. The default is 50, but you can request an increase up to 200 evaluations per day by submitting a Genesys Cloud Customer Care ticket that includes your business justification.
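For illustration, here is a sketch of two complementary rule bodies; the form context IDs are hypothetical placeholders, and which interactions each rule sees is determined by the queues or flows mapped to the program:

spot_check_rule = {
    "programId": "<program-uuid>",
    "samplingType": "Percentage",
    "samplingPercentage": 10,       # random 10% sample for routine spot checks
    "submissionType": "Automated",
    "evaluationFormContextId": "<spot-check-form-context-uuid>",
    "enabled": True,
    "published": True,
}

full_coverage_rule = {
    "programId": "<priority-program-uuid>",  # a program mapped to high-priority queues
    "samplingType": "All",          # evaluate 100% of qualifying interactions
    "submissionType": "Automated",
    "evaluationFormContextId": "<priority-form-context-uuid>",
    "enabled": True,
    "published": True,
}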
This expanded automation, combined with configurable limits, means your team can scale quality assurance without scaling effort.
For further information, please refer to the attached recorded demo, which showcases the functionality of the newly available APIs.
By the end of January, we'll release public APIs for accessing question-level evaluation data, including auto-complete evaluations. This will enhance your ability to analyze QA results and improve data operations across your environment.
Regards,
#AIScoring(VirtualSupervisor)
#SupervisorCopilot(AIInsights)
#QualityEvaluations
------------------------------
Jose Ruiz
Genesys - Employees
Product Manager
jose.ruiz@genesys.com