Workforce Engagement Management


Now available: Quality Evaluation Question-level APIs

  • 1.  Now available: Quality Evaluation Question-level APIs

    Posted 01-21-2026 11:19
    Edited by Tracy Vickers 01-30-2026 03:41

You can now programmatically access form, question group, question, and answer-level evaluation data for completed Quality evaluations through Genesys Cloud. These new capabilities extend beyond aggregating form scores and unlock deep visibility into how evaluations perform at every level, helping Quality leaders, analysts, and developers pinpoint trends, gaps, and coaching opportunities with precision.

    What you can do with the Question-Level Reporting APIs 

    These public APIs allow administrators and developers to: 

    • Retrieve evaluation data at form, question group, question, and answer level 

    • Analyze average, lowest, and highest scores across questions and groups 

    • Track critical score performance independently from total score 

    • Measure answer selection counts and percentages 

    • Identify how often questions or answers are marked N/A 

    • Analyze AI Scoring and Evaluation Assistance metrics, including:

      • Generated rate

      • Agreement rate

      • Failure rate

    • Filter results by date, evaluator, agent, queue, media type, submission type, and evaluation status 

    • Power custom reporting, BI tools, and analytics pipelines 

     

    How it works 

    Question-level reporting builds on the existing Evaluation Aggregates and Evaluation Search APIs, expanding them to expose granular evaluation metrics once data processing is complete. 

    Key concepts include: 

    • Evaluation form hierarchy

      • Form → Question Group → Question → Answer

    • Aggregated metrics

      • Scores, counts, percentages, and AI performance metrics are calculated across evaluations

    • Flexible filtering

      • Query by conversation dates, evaluation lifecycle dates, agents, teams, queues, and more

    Once queried, results return aggregated data that reflects evaluation performance across the selected dimensions. 
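In practice, the search is a single POST of a JSON body to the endpoint. A minimal Python sketch using only the standard library; the region host and OAuth token are placeholders you must supply from your own Genesys Cloud configuration:

```python
import json
import urllib.request

BASE_URL = "https://api.mypurecloud.com"   # substitute your region's API host
ACCESS_TOKEN = "<oauth-access-token>"      # obtained via your OAuth flow

def build_search_request(body: dict) -> urllib.request.Request:
    """Build a POST request for the evaluation search endpoint."""
    return urllib.request.Request(
        BASE_URL + "/api/v2/quality/evaluations/search",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + ACCESS_TOKEN,
        },
        method="POST",
    )

def search_evaluations(body: dict) -> dict:
    """Send the request and decode the JSON response."""
    with urllib.request.urlopen(build_search_request(body)) as resp:
        return json.load(resp)
```

Any of the query bodies shown below can be passed to `search_evaluations` as a plain dictionary.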

     

    API Use Cases 

    API Endpoint: /api/v2/quality/evaluations/search

    Query evaluation data (form, group, question, and answer level) 

    Common use cases 

    • Identify low-performing questions across a population 

    • Analyze answer distribution for specific questions 

    • Compare AI-scored vs evaluator-selected answers 

    • Track critical failures tied to fatal or high-impact questions 

    • Build custom QA dashboards outside of Genesys Cloud 

    Example: Get all evaluations released in the last month for a specified form 

    The following query returns all released evaluations for the specified form, ordered by most recently released. 

    {
      "query": [
        {
          "type": "DATE_RANGE",
          "field": "releaseDate",
          "startValue": "2025-12-01T00:00:00.000Z",
          "endValue": "2026-01-01T00:00:00.000Z"
        },
        {
          "type": "EXACT",
          "field": "formId",
          "operator": "AND",
          "value": "fbca658b-47be-4912-93f3-7dcc69c433a0"
        },
        {
          "type": "EXACT",
          "field": "released",
          "values": [
            "true"
          ],
          "operator": "AND"
        }
      ],
      "pageNumber": 1,
      "pageSize": 50,
      "sortOrder": "DESC",
      "sortBy": "releaseDate"
    }
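Because results come back in pages (pageSize 50 above), a client typically loops until it receives a short page. A small sketch of that loop; `fetch_page` is a hypothetical stand-in for whatever function POSTs the body to the search endpoint and returns the decoded JSON:

```python
def iterate_evaluations(query_body, fetch_page, page_size=50):
    """Yield every evaluation matching query_body, one page at a time.

    fetch_page(body) must POST body to /api/v2/quality/evaluations/search
    and return the decoded JSON response.
    """
    page = 1
    while True:
        body = dict(query_body, pageNumber=page, pageSize=page_size)
        results = fetch_page(body).get("results", [])
        yield from results
        if len(results) < page_size:   # short (or empty) page: done
            return
        page += 1
```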

     

    Building on this with a simple aggregation, the following request returns the average total score of all evaluations matched by the previous query:

     

    {
      "query": [
        {
          "type": "DATE_RANGE",
          "field": "releaseDate",
          "startValue": "2025-12-01T00:00:00.000Z",
          "endValue": "2026-01-01T00:00:00.000Z"
        },
        {
          "type": "EXACT",
          "field": "formId",
          "operator": "AND",
          "value": "fbca658b-47be-4912-93f3-7dcc69c433a0"
        },
        {
          "type": "EXACT",
          "field": "released",
          "values": [
            "true"
          ],
          "operator": "AND"
        }
      ],
      "pageNumber": 1,
      "aggregations": [
        {
          "field": "totalScore",
          "type": "AVERAGE",
          "name": "avgScore"
        }
      ]
    }

     

    Which would return a response like this: 

     

    { 
     "pageSize": 0, 
     "pageNumber": 1, 
     "results": [], 
     "aggregations": { 
       "avgScore": { 
         "value": 85 
       } 
     } 
    } 
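Reading the aggregate out of that response is a simple dictionary walk; a sketch assuming the response shape shown above:

```python
import json

# Sample response, as returned by the aggregation request above.
response_text = """
{
  "pageSize": 0,
  "pageNumber": 1,
  "results": [],
  "aggregations": {
    "avgScore": { "value": 85 }
  }
}
"""

response = json.loads(response_text)
avg_score = response["aggregations"]["avgScore"]["value"]
print(avg_score)  # 85
```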

    Example: Agreement rate for all AI-configured questions within a specified evaluation form

    Below is a query that counts, for each question on a specified form, how often it was successfully AI Scored (meaning the AI-selected option matched the option the evaluator actually selected). In the response, true buckets count cases where the AI selection matched, and false buckets count cases where it did not.

    {
      "query": [
        {
          "type": "DATE_RANGE",
          "field": "submittedDate",
          "startValue": "2025-12-01T00:00:00.000Z",
          "endValue": "2025-12-28T00:00:00.000Z"
        },
        {
          "type": "EXACT",
          "field": "formId",
          "operator": "AND",
          "value": "fbca658b-47be-4912-93f3-7dcc69c433a0"
        },
        {
          "type": "EXACT",
          "field": "evaluationStatus",
          "values": [
            "Finished"
          ],
          "operator": "AND"
        }
      ],
      "aggregations": [
        {
          "name": "by_question_id",
          "field": "questionId",
          "type": "TERM",
          "subAggregations": [
            {
              "name": "questionAiScored",
              "field": "questionAiScored",
              "type": "TERM"
            }
          ]
        }
      ],
      "pageNumber": 1
    }

     

    For the request above, the response would look like this: 

     

    {
      "pageSize": 0,
      "pageNumber": 1,
      "results": [],
      "aggregations": {
        "byQuestionId": {
          "documentCountErrorUpperBound": 0,
          "sumOtherDocumentCount": 0,
          "buckets": [
            {
              "key": "bac11fb1-3483-42c8-a393-e848bbc84ab8",
              "documentCount": 10,
              "subAggregations": {
                "questionAiScored": {
                  "documentCountErrorUpperBound": 0,
                  "sumOtherDocumentCount": 0,
                  "buckets": [
                    {
                      "key": "false",
                      "documentCount": 2,
                      "subAggregations": {}
                    },
                    {
                      "key": "true",
                      "documentCount": 8,
                      "subAggregations": {}
                    }
                  ]
                }
              }
            },
            {
              "key": "769a010e-6f32-41b6-83d6-3a89182242be",
              "documentCount": 10,
              "subAggregations": {
                "questionAiScored": {
                  "documentCountErrorUpperBound": 0,
                  "sumOtherDocumentCount": 0,
                  "buckets": [
                    {
                      "key": "false",
                      "documentCount": 5,
                      "subAggregations": {}
                    },
                    {
                      "key": "true",
                      "documentCount": 5,
                      "subAggregations": {}
                    }
                  ]
                }
              }
            }
          ]
        }
      }
    }

     

    The results indicate that the questions with ids bac11fb1-3483-42c8-a393-e848bbc84ab8 and 769a010e-6f32-41b6-83d6-3a89182242be have agreement rates of 80% and 50% respectively, each based on 10 completed evaluations in which the question was answered.
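Turning those buckets into agreement percentages is mechanical; a sketch assuming the response shape shown above:

```python
def agreement_rates(response: dict) -> dict:
    """Map each questionId to the fraction of AI-scored answers that
    matched the evaluator's selection: true / (true + false)."""
    rates = {}
    for question in response["aggregations"]["byQuestionId"]["buckets"]:
        # Collect the true/false counts from the sub-aggregation buckets.
        counts = {b["key"]: b["documentCount"]
                  for b in question["subAggregations"]["questionAiScored"]["buckets"]}
        total = counts.get("true", 0) + counts.get("false", 0)
        if total:
            rates[question["key"]] = counts.get("true", 0) / total
    return rates
```

Applied to the response above, this would yield 0.8 for the first question and 0.5 for the second.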

     

    What's next 

    Building on these new question-level reporting APIs, we are targeting Q2 to introduce a new Evaluation Performance analytics view that gives Quality leaders and supervisors a clear, intuitive way to explore evaluation results without relying on custom reporting or API integrations.

    This upcoming UI experience will allow users to: 

    • Start from an aggregate evaluation form view to understand overall performance at a glance 

    • Drill down into question groups to identify sections that are driving results 

    • Further zoom into individual questions and answers to pinpoint specific behaviors, trends, and gaps 

    • Navigate seamlessly across levels using filters, trends, distributions, and drill-through links 

    • Analyze score, critical score, answer selection, and AI scoring metrics in a user-friendly, visual format 


    #AIScoring(VirtualSupervisor)
    #QualityEvaluations

    ------------------------------
    Jose Ruiz
    Genesys - Employees
    Product Manager
    jose.ruiz@genesys.com
    ------------------------------



  • 2.  RE: Now available: Quality Evaluation Question-level APIs

    Posted 01-21-2026 13:00

    This is an excellent feature - really powerful.
    Having question- and answer-level visibility unlocks a whole new level of analysis for QA, especially around AI Scoring performance and critical questions.
    I'll definitely be testing this soon. Thanks for the very detailed and clear explanation in the post!



    ------------------------------
    Mateus Nunes
    Tech Leader Of CX at Solve4ME
    Brazil
    ------------------------------



  • 3.  RE: Now available: Quality Evaluation Question-level APIs

    Posted 01-22-2026 03:33

    Hi Jose,

    This is great but it seems to be missing the actual API being used. Only the existing POST /api/v2/analytics/evaluations/aggregates/query seems to be available in API Explorer which does not have any documentation for these items.



    ------------------------------
    Richard Chandler
    Connect
    ------------------------------



  • 4.  RE: Now available: Quality Evaluation Question-level APIs

    Posted 01-23-2026 03:14

    I am also unable to find an API requesting this level of information, and can only see POST /api/v2/analytics/evaluations/aggregates/query which does not provide the level of information this release provides. I had hoped it would be available this morning, and was perhaps just delayed but I am still unable to see it. Please can you provide the API we should use for this?  

    Thanks, Heather 



    ------------------------------
    Heather Henderson
    ------------------------------



  • 5.  RE: Now available: Quality Evaluation Question-level APIs

    Posted 01-23-2026 12:57
    Edited by Jose Ruiz 01-23-2026 13:02

    Hello @Heather Henderson,

    The endpoint is: /api/v2/quality/evaluations/search

    The updated API Explorer link is: https://developer.genesys.cloud/devapps/api-explorer#post-api-v2-quality-evaluations-search

    Regards,



    ------------------------------
    Jose Ruiz
    Genesys - Employees
    Product Manager
    jose.ruiz@genesys.com
    ------------------------------



  • 6.  RE: Now available: Quality Evaluation Question-level APIs

    Posted 01-23-2026 12:54
    Edited by Jose Ruiz 01-23-2026 13:03

    Hello @Richard Chandler

    The endpoint is: /api/v2/quality/evaluations/search

    The updated API Explorer link is: https://developer.genesys.cloud/devapps/api-explorer#post-api-v2-quality-evaluations-search

    Regards,



    ------------------------------
    Jose Ruiz
    Genesys - Employees
    Product Manager
    jose.ruiz@genesys.com
    ------------------------------



  • 7.  RE: Now available: Quality Evaluation Question-level APIs

    Posted 01-22-2026 04:31
    Hello Jose, great news - this was a much-needed feature. Could you please specify which endpoints we should use?


    ------------------------------
    Jérémy LE MORVAN
    CHEF DE PROJET
    ------------------------------



  • 8.  RE: Now available: Quality Evaluation Question-level APIs

    Posted 01-23-2026 12:59

    Hello @Jérémy LE MORVAN,

    The endpoint is: /api/v2/quality/evaluations/search

    The updated API Explorer link is: https://developer.genesys.cloud/devapps/api-explorer#post-api-v2-quality-evaluations-search

    Regards,



    ------------------------------
    Jose Ruiz
    Genesys - Employees
    Product Manager
    jose.ruiz@genesys.com
    ------------------------------



  • 9.  RE: Now available: Quality Evaluation Question-level APIs

    Posted 26 days ago

    Evaluation query endpoints still lack an option to retrieve data by last modification. This is a huge gap for my org as we occasionally have evals rescored well after they are released and we have to iterate across all evals in a wide range to find them via the current endpoints. Is there any consideration being made to this endpoint to filter on the last modified date on the eval? Thanks!



    ------------------------------
    Benjamin Wyatt
    ------------------------------



  • 10.  RE: Now available: Quality Evaluation Question-level APIs

    Posted 26 days ago

    Hello @Benjamin Wyatt,

    Thank you for your feedback. @Hari Dasaratharaman can give you a better response on plans to add reporting using the last modification endpoint.

    With that said, have you created or voted for an idea that clearly states the business use case for the product team to review? If not, please do so at your earliest convenience.

    Regards,



    ------------------------------
    Jose Ruiz
    Genesys - Employees
    Product Manager
    jose.ruiz@genesys.com
    ------------------------------



  • 11.  RE: Now available: Quality Evaluation Question-level APIs

    Posted 26 days ago

    Thank you Jose. 

    Hello @Benjamin Wyatt, while we don't have plans to address this specific use case immediately, I will investigate its feasibility. As Jose mentioned, it will help to have an idea filed so we can float it for wider community reach and track it. You can also reach me at my email hari.dasaratharaman@genesys.com



    ------------------------------
    Hari Dasaratharaman
    Principal Product Manager
    ------------------------------



  • 12.  RE: Now available: Quality Evaluation Question-level APIs

    Posted 26 days ago

    https://genesyscloud.ideas.aha.io/ideas/DARAR-I-2742  Thanks

    Evaluation Query By Last Modified

    Modify existing evaluation query/search endpoints to allow date range for the query to be the last modified datetime, including if any attribute in the evaluation was modified after the evaluation was released. As an optional stretch, also improve performance of /api/v2/quality/evaluations/query to limit query timeouts, particularly when querying more than one conversation at a time.

    Use Cases
      • Primary use case is to populate evaluation results in a data warehouse for reporting where required data includes Genesys and non-Genesys facts.

      • Evaluations in my org are occasionally modified or rescored after the initial release, and we need to capture updates.

      • This would also allow us to reliably capture updates to the Agent Reviewed flag long after the eval is released without having to requery the full set of evals over the date range.

      • Currently, batch eval retrieval is not performant outside of requesting one eval at a time, thus requiring several thousand calls for my org to get all evals released or assigned in a reasonable timeframe. Querying only recently modified evals will significantly reduce the number of calls required.



    ------------------------------
    Benjamin Wyatt
    ------------------------------



  • 13.  RE: Now available: Quality Evaluation Question-level APIs

    Posted 26 days ago

    Thank you, Benjamin. I have added this idea to the Ideas portal for our Genesys community to give feedback on.



    ------------------------------
    Hari Dasaratharaman
    Principal Product Manager
    ------------------------------