

  • 1.  Release Automated agent scoring evaluations

    Posted 13 hours ago

    I have been testing Automated agent scoring using programs. I found that the evaluations do not release, and it's difficult to report on them.

    Is there a way to release these evaluations? 

    Will there be a new dashboard specific to Automated agent scoring? 

    If not, will we be able to filter on them? The evals show Virtual Supervisor as the Evaluator, but you cannot filter by it.

    The PUT /api/v2/quality/conversations/{conversationId}/evaluations/{evaluationId} API does not allow me to update an eval scored by Virtual Supervisor. I was thinking I could call that API to release the evals.
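
    For reference, here is roughly what I tried, as a minimal Python sketch. The GET and PUT routes are the documented evaluation endpoints, but the fields I set to attempt the release ("status", "agentHasRead") are my guesses at the release mechanics, and the host/token values are placeholders:

        import requests

        BASE = "https://api.mypurecloud.com"   # use your region's API host
        TOKEN = "<access-token>"               # OAuth token with quality permissions
        HEADERS = {"Authorization": f"Bearer {TOKEN}",
                   "Content-Type": "application/json"}

        def release_evaluation(conversation_id, evaluation_id):
            url = (f"{BASE}/api/v2/quality/conversations/"
                   f"{conversation_id}/evaluations/{evaluation_id}")

            # Fetch the current evaluation so the update carries its existing fields.
            current = requests.get(url, headers=HEADERS)
            current.raise_for_status()
            body = current.json()

            # Assumed release mechanics: mark the evaluation finished so it
            # becomes visible to the agent. These field values are guesses.
            body["status"] = "FINISHED"
            body["agentHasRead"] = False

            # This is where it fails today for evals scored by Virtual Supervisor.
            resp = requests.put(url, headers=HEADERS, json=body)
            resp.raise_for_status()
            return resp.json()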


    #AIConfiguration
    #Roadmap/NewFeatures
    #WEM-Quality,WFM,Gamification,etc

    ------------------------------
    Nick Argeson
    Telephony Admin
    ------------------------------


  • 2.  RE: Release Automated agent scoring evaluations

    Posted 13 hours ago

    Good day, Nick

    I found the following on the community regarding this challenge - https://community.genesys.com/discussion/automatic-submission-of-completed-ai-evaluations

    According to that discussion, there will be some changes released in mid-February.

    Regards



    ------------------------------
    Stephan Taljaard
    EMBEDIT s.r.o
    ------------------------------



  • 3.  RE: Release Automated agent scoring evaluations

    Posted 13 hours ago

    I'm not sure if it's related to your question, but Genesys released an API for the quality module in January and mentioned on the Ideas portal that they will release reports in the user interface by the end of the year. 

    link: https://genesyscloud.ideas.aha.io/ideas/WEM-I-316
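
    In the meantime, for pulling the data out for reporting, something along these lines works against the evaluations query endpoint. This is a rough sketch: the parameter names ("startTime", "endTime", "evaluatorUserId") are from the public API docs as I remember them, and filtering on Virtual Supervisor assumes it resolves to a queryable evaluator ID, which may not hold today:

        import requests

        BASE = "https://api.mypurecloud.com"   # your region's API host
        TOKEN = "<access-token>"
        HEADERS = {"Authorization": f"Bearer {TOKEN}"}

        def fetch_evaluations(start, end, evaluator_user_id=None):
            """Page through evaluations in an ISO-8601 date window."""
            url = f"{BASE}/api/v2/quality/evaluations/query"
            params = {"startTime": start, "endTime": end,
                      "pageSize": 100, "pageNumber": 1}
            if evaluator_user_id:
                # Assumption: Virtual Supervisor evals carry a filterable evaluator ID.
                params["evaluatorUserId"] = evaluator_user_id
            results = []
            while True:
                resp = requests.get(url, headers=HEADERS, params=params)
                resp.raise_for_status()
                page = resp.json()
                results.extend(page.get("entities", []))
                if params["pageNumber"] >= page.get("pageCount", 1):
                    break
                params["pageNumber"] += 1
            return results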



    ------------------------------
    Kaio Oliveira
    Sr Systems Analyst
    GCP - GCQM - GCS - GCA - GCD - GCO - GPE & GPR - GCWM
    ------------------------------



  • 4.  RE: Release Automated agent scoring evaluations

    Posted 13 hours ago

    Hi Nick, great questions. I've been testing Automated Agent Scoring using programs as well and wanted to share a few observations from hands-on usage.

    The automated assignment and scoring flow itself is very solid and promising. The ability to score at scale without manual intervention is a big win.

    That said, from a quality process perspective, a few gaps became very clear during testing:

    Distribution logic
    Today the distribution feels very generic. It would be extremely valuable if automated scoring could better respect existing quality policies, especially in environments where not all forms are 100% AI-driven and still follow specific assignment rules.

    Release to agent (auto-feedback)
    Evaluations scored by Virtual Supervisor are not released to the agent. This becomes a major limitation if the goal is auto-feedback and continuous improvement. If the evaluation is final enough to score performance, it should also support controlled release.

    Contestation workflow
    There is currently no contestation path for AI-scored evaluations. This is especially critical for AI forms, where misinterpretations can happen. A human-in-the-loop contestation step feels essential to build trust and adoption.

    Given these points, I wanted to ask the Genesys team:
    Is there an action plan or roadmap to evolve Automated Agent Scoring into a more complete quality workflow, including release, contestation, and richer distribution logic?

    Overall, the foundation is very strong. With these additions, Automated Agent Scoring could truly become a first-class quality process rather than just a scoring mechanism.

    Thanks for opening the discussion.



    ------------------------------
    Mateus Nunes
    Tech Leader Of CX at Solve4ME
    Brazil
    ------------------------------