Workforce Engagement Management


  • 1.  Now available: Quality Evaluation Question-level APIs

    Posted 8 days ago
    Edited by Jose Ruiz 6 days ago

    You can now programmatically access form, question group, question, and answer-level evaluation data for completed Quality evaluations through Genesys Cloud. These new capabilities extend beyond aggregating form scores and unlock deep visibility into how evaluations perform at every level, helping Quality leaders, analysts, and developers pinpoint trends, gaps, and coaching opportunities with precision.

    What you can do with the Question-Level Reporting APIs 

    These public APIs allow administrators and developers to: 

    • Retrieve evaluation data at form, question group, question, and answer level 

    • Analyze average, lowest, and highest scores across questions and groups 

    • Track critical score performance independently from total score 

    • Measure answer selection counts and percentages 

    • Identify how often questions or answers are marked N/A 

    • Analyze AI Scoring and Evaluation Assistance metrics, including:

        • Generated rate

        • Agreement rate

        • Failure rate

    • Filter results by date, evaluator, agent, queue, media type, submission type, and evaluation status 

    • Power custom reporting, BI tools, and analytics pipelines 

     

    How it works 

    Question-level reporting builds on the existing Evaluation Aggregates and Evaluation Search APIs, expanding them to expose granular evaluation metrics once data processing is complete. 

    Key concepts include: 

    • Evaluation form hierarchy

        • Form → Question Group → Question → Answer

    • Aggregated metrics

        • Scores, counts, percentages, and AI performance metrics are calculated across evaluations

    • Flexible filtering

        • Query by conversation dates, evaluation lifecycle dates, agents, teams, queues, and more

    Once queried, results return aggregated data that reflects evaluation performance across the selected dimensions. 
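    As an illustrative sketch, a query like the ones below can be sent to the search endpoint from this post using only the Python standard library. The region host, OAuth token, and form ID are placeholders — substitute your own values:

```python
import json
import urllib.request

API_HOST = "https://api.mypurecloud.com"  # placeholder: substitute your region's API host
ACCESS_TOKEN = "YOUR_OAUTH_TOKEN"         # placeholder: obtain via a Genesys Cloud OAuth client

def build_search_body(form_id: str, start: str, end: str) -> dict:
    """Build a search body filtering evaluations for one form by release date."""
    return {
        "query": [
            {"type": "DATE_RANGE", "field": "releaseDate",
             "startValue": start, "endValue": end},
            {"type": "EXACT", "field": "formId",
             "operator": "AND", "value": form_id},
        ],
        "pageNumber": 1,
        "pageSize": 50,
    }

def post_search(body: dict) -> dict:
    """POST the body to the evaluation search endpoint and decode the JSON reply."""
    req = urllib.request.Request(
        API_HOST + "/api/v2/quality/evaluations/search",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + ACCESS_TOKEN,
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

body = build_search_body(
    "fbca658b-47be-4912-93f3-7dcc69c433a0",
    "2025-12-01T00:00:00.000Z",
    "2026-01-01T00:00:00.000Z",
)
# post_search(body) would execute the request against a live org.
```

    Building the body separately from sending it keeps the filter logic easy to unit test without network access.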

     

    API Use Cases 

    API Endpoint: /api/v2/quality/evaluations/search

    Query evaluation data (form, group, question, and answer level) 

    Common use cases 

    • Identify low-performing questions across a population 

    • Analyze answer distribution for specific questions 

    • Compare AI-scored vs evaluator-selected answers 

    • Track critical failures tied to fatal or high-impact questions 

    • Build custom QA dashboards outside of Genesys Cloud 

    Example: Get all evaluations released in the last month for a specified form 

    The following query returns all released evaluations for the specified form, ordered by most recently released. 

    {
      "query": [
        {
          "type": "DATE_RANGE",
          "field": "releaseDate",
          "startValue": "2025-12-01T00:00:00.000Z",
          "endValue": "2026-01-01T00:00:00.000Z"
        },
        {
          "type": "EXACT",
          "field": "formId",
          "operator": "AND",
          "value": "fbca658b-47be-4912-93f3-7dcc69c433a0"
        },
        {
          "type": "EXACT",
          "field": "released",
          "values": [
            "true"
          ],
          "operator": "AND"
        }
      ],
      "pageNumber": 1,
      "pageSize": 50,
      "sortOrder": "DESC",
      "sortBy": "releaseDate"
    }

     

    Building on this with a simple aggregation, the following request returns the average total score of all evaluations matched by the previous query:

     

    {
      "query": [
        {
          "type": "DATE_RANGE",
          "field": "releaseDate",
          "startValue": "2025-12-01T00:00:00.000Z",
          "endValue": "2026-01-01T00:00:00.000Z"
        },
        {
          "type": "EXACT",
          "field": "formId",
          "operator": "AND",
          "value": "fbca658b-47be-4912-93f3-7dcc69c433a0"
        },
        {
          "type": "EXACT",
          "field": "released",
          "values": [
            "true"
          ],
          "operator": "AND"
        }
      ],
      "pageNumber": 1,
      "aggregations": [
        {
          "field": "totalScore",
          "type": "AVERAGE",
          "name": "avgScore"
        }
      ]
    }

     

    This request would return a response like the following:

     

    { 
     "pageSize": 0, 
     "pageNumber": 1, 
     "results": [], 
     "aggregations": { 
       "avgScore": { 
         "value": 85 
       } 
     } 
    } 
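    Reading the aggregate out of a response shaped like the one above is straightforward; the dict here simply mirrors the sample payload:

```python
# Sample response, mirroring the payload shown above.
response = {
    "pageSize": 0,
    "pageNumber": 1,
    "results": [],
    "aggregations": {"avgScore": {"value": 85}},
}

# Named aggregations come back keyed by the "name" given in the request.
avg_score = response["aggregations"]["avgScore"]["value"]
print(avg_score)  # 85
```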

    Example: Agreement rate for all AI-configured questions within a specified evaluation form

    Below is a query that returns, for each question of a specified form, how often the question was successfully AI-scored: a bucket key of true corresponds to the AI-selected option matching the evaluator's actual selection, and false to a mismatch.

    {
      "query": [
        {
          "type": "DATE_RANGE",
          "field": "submittedDate",
          "startValue": "2025-12-01T00:00:00.000Z",
          "endValue": "2025-12-28T00:00:00.000Z"
        },
        {
          "type": "EXACT",
          "field": "formId",
          "operator": "AND",
          "value": "fbca658b-47be-4912-93f3-7dcc69c433a0"
        },
        {
          "type": "EXACT",
          "field": "evaluationStatus",
          "values": [
            "Finished"
          ],
          "operator": "AND"
        }
      ],
      "aggregations": [
        {
          "name": "byQuestionId",
          "field": "questionId",
          "type": "TERM",
          "subAggregations": [
            {
              "name": "questionAiScored",
              "field": "questionAiScored",
              "type": "TERM"
            }
          ]
        }
      ],
      "pageNumber": 1
    }

     

    For the request above, the response would look like this: 

     

    {
      "pageSize": 0,
      "pageNumber": 1,
      "results": [],
      "aggregations": {
        "byQuestionId": {
          "documentCountErrorUpperBound": 0,
          "sumOtherDocumentCount": 0,
          "buckets": [
            {
              "key": "bac11fb1-3483-42c8-a393-e848bbc84ab8",
              "documentCount": 10,
              "subAggregations": {
                "questionAiScored": {
                  "documentCountErrorUpperBound": 0,
                  "sumOtherDocumentCount": 0,
                  "buckets": [
                    {
                      "key": "false",
                      "documentCount": 2,
                      "subAggregations": {}
                    },
                    {
                      "key": "true",
                      "documentCount": 8,
                      "subAggregations": {}
                    }
                  ]
                }
              }
            },
            {
              "key": "769a010e-6f32-41b6-83d6-3a89182242be",
              "documentCount": 10,
              "subAggregations": {
                "questionAiScored": {
                  "documentCountErrorUpperBound": 0,
                  "sumOtherDocumentCount": 0,
                  "buckets": [
                    {
                      "key": "false",
                      "documentCount": 5,
                      "subAggregations": {}
                    },
                    {
                      "key": "true",
                      "documentCount": 5,
                      "subAggregations": {}
                    }
                  ]
                }
              }
            }
          ]
        }
      }
    }

     

    The results indicate that the questions with IDs bac11fb1-3483-42c8-a393-e848bbc84ab8 and 769a010e-6f32-41b6-83d6-3a89182242be have agreement rates of 80% and 50%, respectively, across the 10 completed evaluations in which each question was answered.
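    A small helper, assuming the bucket shape shown above, can turn those counts into per-question agreement rates:

```python
def agreement_rates(aggregations: dict) -> dict:
    """Map each questionId bucket to its AI agreement rate: true / (true + false)."""
    rates = {}
    for bucket in aggregations["byQuestionId"]["buckets"]:
        counts = {b["key"]: b["documentCount"]
                  for b in bucket["subAggregations"]["questionAiScored"]["buckets"]}
        total = counts.get("true", 0) + counts.get("false", 0)
        rates[bucket["key"]] = counts.get("true", 0) / total if total else None
    return rates

# Bucket counts taken from the sample response above.
sample = {"byQuestionId": {"buckets": [
    {"key": "bac11fb1-3483-42c8-a393-e848bbc84ab8",
     "subAggregations": {"questionAiScored": {"buckets": [
         {"key": "false", "documentCount": 2},
         {"key": "true", "documentCount": 8}]}}},
    {"key": "769a010e-6f32-41b6-83d6-3a89182242be",
     "subAggregations": {"questionAiScored": {"buckets": [
         {"key": "false", "documentCount": 5},
         {"key": "true", "documentCount": 5}]}}},
]}}

rates = agreement_rates(sample)
# rates holds 0.8 for the first question and 0.5 for the second.
```

    Guarding against a zero total matters because a question may appear in the TERM buckets without any AI-scored occurrences.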

     

    What's next 

    Building on these new question-level reporting APIs, we are targeting Q2 to introduce a new Evaluation Performance analytics view. It will give Quality leaders and supervisors a clear, intuitive way to explore evaluation results without relying on custom reporting or API integrations.

    This upcoming UI experience will allow users to: 

    • Start from an aggregate evaluation form view to understand overall performance at a glance 

    • Drill down into question groups to identify sections that are driving results 

    • Further zoom into individual questions and answers to pinpoint specific behaviors, trends, and gaps 

    • Navigate seamlessly across levels using filters, trends, distributions, and drill-through links 

    • Analyze score, critical score, answer selection, and AI scoring metrics in a user-friendly, visual format 


    #AIScoring(VirtualSupervisor)
    #QualityEvaluations

    ------------------------------
    Jose Ruiz
    Genesys - Employees
    Product Manager
    jose.ruiz@genesys.com
    ------------------------------



  • 2.  RE: Now available: Quality Evaluation Question-level APIs

    Posted 8 days ago

    This is an excellent feature - really powerful.
    Having question- and answer-level visibility unlocks a whole new level of analysis for QA, especially around AI Scoring performance and critical questions.
    I'll definitely be testing this soon. Thanks for the very detailed and clear explanation in the post!



    ------------------------------
    Mateus Nunes
    Tech Leader Of CX at Solve4ME
    Brazil
    ------------------------------



  • 3.  RE: Now available: Quality Evaluation Question-level APIs

    Posted 7 days ago

    Hi Jose,

    This is great but it seems to be missing the actual API being used. Only the existing POST /api/v2/analytics/evaluations/aggregates/query seems to be available in API Explorer which does not have any documentation for these items.



    ------------------------------
    Richard Chandler
    Connect
    ------------------------------



  • 4.  RE: Now available: Quality Evaluation Question-level APIs

    Posted 6 days ago

    I am also unable to find an API requesting this level of information, and can only see POST /api/v2/analytics/evaluations/aggregates/query which does not provide the level of information this release provides. I had hoped it would be available this morning, and was perhaps just delayed but I am still unable to see it. Please can you provide the API we should use for this?  

    Thanks, Heather 



    ------------------------------
    Heather Henderson
    ------------------------------



  • 5.  RE: Now available: Quality Evaluation Question-level APIs

    Posted 6 days ago
    Edited by Jose Ruiz 6 days ago

    Hello @Heather Henderson,

    The endpoint is: /api/v2/quality/evaluations/search

    The updated API Explorer link is: https://developer.genesys.cloud/devapps/api-explorer#post-api-v2-quality-evaluations-search

    Regards,



    ------------------------------
    Jose Ruiz
    Genesys - Employees
    Product Manager
    jose.ruiz@genesys.com
    ------------------------------



  • 6.  RE: Now available: Quality Evaluation Question-level APIs

    Posted 6 days ago
    Edited by Jose Ruiz 6 days ago

    Hello @Richard Chandler

    The endpoint is: /api/v2/quality/evaluations/search

    The updated API Explorer link is: https://developer.genesys.cloud/devapps/api-explorer#post-api-v2-quality-evaluations-search

    Regards,



    ------------------------------
    Jose Ruiz
    Genesys - Employees
    Product Manager
    jose.ruiz@genesys.com
    ------------------------------



  • 7.  RE: Now available: Quality Evaluation Question-level APIs

    Posted 7 days ago
    Hello Jose, great news - this was a much‑needed feature. Could you please specify which endpoints we should use?


    ------------------------------
    Jérémy LE MORVAN
    CHEF DE PROJET
    ------------------------------



  • 8.  RE: Now available: Quality Evaluation Question-level APIs

    Posted 6 days ago

    Hello @Jérémy LE MORVAN,

    The endpoint is: /api/v2/quality/evaluations/search

    The updated API Explorer link is: https://developer.genesys.cloud/devapps/api-explorer#post-api-v2-quality-evaluations-search

    Regards,



    ------------------------------
    Jose Ruiz
    Genesys - Employees
    Product Manager
    jose.ruiz@genesys.com
    ------------------------------