You can now access form, question group, question, and answer-level evaluation data for completed Quality evaluations programmatically through Genesys Cloud. These new capabilities extend beyond aggregating form scores and unlock deep visibility into how evaluations perform at every level, helping Quality leaders, analysts, and developers pinpoint trends, gaps, and coaching opportunities with precision.
What you can do with the Question-Level Reporting APIs
These public APIs allow administrators and developers to:
- Filter results by date, evaluator, agent, queue, media type, submission type, and evaluation status
- Query by conversation dates, evaluation lifecycle dates, agents, teams, queues, and more

Question-level reporting builds on the existing Evaluation Aggregates and Evaluation Search APIs, expanding them to expose granular evaluation metrics once data processing is complete. Once queried, results return aggregated data that reflects evaluation performance across the selected dimensions.
API Use Cases
API Endpoint: /api/v2/quality/evaluations/search
Query evaluation data (form, group, question, and answer level)
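As a sketch of how a client might call this endpoint, the helper below POSTs a search body with an OAuth bearer token. The helper names, base URL, and token handling are illustrative assumptions, not part of the announced API:

```python
import json
import urllib.request

# Region-specific API base URL (assumption; substitute your org's region).
GENESYS_API = "https://api.mypurecloud.com"

def search_url(base: str = GENESYS_API) -> str:
    """Build the full URL for the evaluations search endpoint."""
    return f"{base}/api/v2/quality/evaluations/search"

def search_evaluations(token: str, body: dict) -> dict:
    """POST a search request body and return the parsed JSON response."""
    req = urllib.request.Request(
        search_url(),
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",  # OAuth client-credentials token
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```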
Example: Get all evaluations released in the last month for a specified form
The following query returns all released evaluations for the specified form, ordered by most recently released. The body below is a reconstruction in the Genesys Cloud search query style: the filter values come from the original example, while the field names (releaseDate, evaluationFormId), filter types, and sort clause are representative:

{
  "sortBy": "releaseDate",
  "sortOrder": "DESC",
  "query": [
    {
      "type": "DATE_RANGE",
      "field": "releaseDate",
      "startValue": "2025-12-01T00:00:00.000Z",
      "endValue": "2026-01-01T00:00:00.000Z"
    },
    {
      "type": "EXACT",
      "field": "evaluationFormId",
      "value": "fbca658b-47be-4912-93f3-7dcc69c433a0"
    }
  ]
}
Building on this with a simple aggregation, you can request the average total score of all evaluations returned by the previous query. Again, the filter values come from the original example, while the aggregation clause (name, type, field) is representative:

{
  "pageSize": 0,
  "query": [
    {
      "type": "DATE_RANGE",
      "field": "releaseDate",
      "startValue": "2025-12-01T00:00:00.000Z",
      "endValue": "2026-01-01T00:00:00.000Z"
    },
    {
      "type": "EXACT",
      "field": "evaluationFormId",
      "value": "fbca658b-47be-4912-93f3-7dcc69c433a0"
    }
  ],
  "aggregations": [
    {
      "name": "avgScore",
      "type": "AVERAGE",
      "field": "totalScore"
    }
  ]
}
This request returns a response like the following:
{
  "pageSize": 0,
  "pageNumber": 1,
  "results": [],
  "aggregations": {
    "avgScore": {
      "value": 85
    }
  }
}
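Reading the average back out of such a response takes only a couple of lines; a minimal Python sketch, assuming the response shape shown above:

```python
def average_score(response: dict) -> float:
    """Extract the avgScore aggregation value from a search response."""
    return response["aggregations"]["avgScore"]["value"]

# Sample response mirroring the example above.
sample = {
    "pageSize": 0,
    "pageNumber": 1,
    "results": [],
    "aggregations": {"avgScore": {"value": 85}},
}
print(average_score(sample))  # prints 85
```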
Example: Agreement rate for all AI-configured questions within a specified evaluation form
Below is a query that counts, for each question of a specified form, how many times the question was successfully AI-scored: a bucket key of true corresponds to the AI-selected option matching the evaluator's actual selection, and false corresponds to a mismatch. As above, the body is reconstructed around the original fields and values; the filter types, the evaluationStatus value, and the aggregation types are representative:

{
  "pageSize": 0,
  "query": [
    {
      "type": "DATE_RANGE",
      "field": "submittedDate",
      "startValue": "2025-12-01T00:00:00.000Z",
      "endValue": "2025-12-28T00:00:00.000Z"
    },
    {
      "type": "EXACT",
      "field": "evaluationFormId",
      "value": "fbca658b-47be-4912-93f3-7dcc69c433a0"
    },
    {
      "type": "EXACT",
      "field": "evaluationStatus",
      "value": "FINISHED"
    }
  ],
  "aggregations": [
    {
      "name": "by_question_id",
      "type": "TERM",
      "field": "questionId",
      "subAggregations": [
        {
          "name": "questionAiScored",
          "type": "TERM",
          "field": "questionAiScored"
        }
      ]
    }
  ]
}
For the request above, the response would look like this:
{
  "pageSize": 0,
  "pageNumber": 1,
  "results": [],
  "aggregations": {
    "byQuestionId": {
      "documentCountErrorUpperBound": 0,
      "sumOtherDocumentCount": 0,
      "buckets": [
        {
          "key": "bac11fb1-3483-42c8-a393-e848bbc84ab8",
          "documentCount": 10,
          "subAggregations": {
            "questionAiScored": {
              "documentCountErrorUpperBound": 0,
              "sumOtherDocumentCount": 0,
              "buckets": [
                {
                  "key": "false",
                  "documentCount": 2,
                  "subAggregations": {}
                },
                {
                  "key": "true",
                  "documentCount": 8,
                  "subAggregations": {}
                }
              ]
            }
          }
        },
        {
          "key": "769a010e-6f32-41b6-83d6-3a89182242be",
          "documentCount": 10,
          "subAggregations": {
            "questionAiScored": {
              "documentCountErrorUpperBound": 0,
              "sumOtherDocumentCount": 0,
              "buckets": [
                {
                  "key": "false",
                  "documentCount": 5,
                  "subAggregations": {}
                },
                {
                  "key": "true",
                  "documentCount": 5,
                  "subAggregations": {}
                }
              ]
            }
          }
        }
      ]
    }
  }
}
The results indicate that the questions with IDs bac11fb1-3483-42c8-a393-e848bbc84ab8 and 769a010e-6f32-41b6-83d6-3a89182242be have agreement rates of 80% (8 of 10) and 50% (5 of 10) respectively, across the 10 completed evaluations in which each question was answered.
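That per-question arithmetic can be reproduced mechanically from the bucketed response; a minimal Python sketch, assuming the response shape shown above:

```python
def agreement_rates(response: dict) -> dict:
    """Map each question ID to the fraction of answers where the AI score matched."""
    rates = {}
    for bucket in response["aggregations"]["byQuestionId"]["buckets"]:
        counts = {
            b["key"]: b["documentCount"]
            for b in bucket["subAggregations"]["questionAiScored"]["buckets"]
        }
        total = sum(counts.values())
        rates[bucket["key"]] = counts.get("true", 0) / total if total else 0.0
    return rates

# Sample mirroring the response above, abbreviated to the fields the function reads.
sample = {
    "aggregations": {
        "byQuestionId": {
            "buckets": [
                {
                    "key": "bac11fb1-3483-42c8-a393-e848bbc84ab8",
                    "documentCount": 10,
                    "subAggregations": {
                        "questionAiScored": {
                            "buckets": [
                                {"key": "false", "documentCount": 2},
                                {"key": "true", "documentCount": 8},
                            ]
                        }
                    },
                },
                {
                    "key": "769a010e-6f32-41b6-83d6-3a89182242be",
                    "documentCount": 10,
                    "subAggregations": {
                        "questionAiScored": {
                            "buckets": [
                                {"key": "false", "documentCount": 5},
                                {"key": "true", "documentCount": 5},
                            ]
                        }
                    },
                },
            ]
        }
    }
}
print(agreement_rates(sample))  # agreement of 0.8 and 0.5 per question
```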