
AI scoring - implementation and use cases

  • 1.  AI scoring - implementation and use cases

    Posted 09-29-2025 17:32

    Hi everyone,

    I am interested in understanding how others have implemented AI scoring as part of their QA process, and in how the standard QA role has evolved to include AI, noting that AI scoring (gen AI) is not designed for decision making and that the recommendation is to have a human in the loop to validate AI responses.

    There is some concern that the use of AI scoring may cause QA team members to become disengaged as AI would do the heavy lifting.

    Also, with AI scoring, does it score 100% of interactions if it is switched on for a queue? How are others managing the human-in-the-loop review process for this with their current resourcing levels?

    Cheers,


    #QualityManagement

    ------------------------------
    Deepa Galaiya
    Product Owner, Customer Interactions
    ------------------------------


  • 2.  RE: AI scoring - implementation and use cases

    Posted 09-30-2025 13:26

    It is currently limited to 50 evaluations per agent per day, not unlimited, so it may or may not evaluate all conversations for a queue each day.

    Also, it is still limited to 20 AI-scored questions per evaluation, which could be a limitation if you have really long evaluation forms.
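
    A quick back-of-the-envelope way to sanity check coverage against that cap, in Python. The 50-per-agent-per-day limit is the only real number here; the function and example volumes are just illustrative:

        DAILY_EVAL_CAP = 50   # current AI-scored evaluations per agent per day

        def coverage(interactions_per_day: int, cap: int = DAILY_EVAL_CAP) -> float:
            """Fraction of an agent's daily interactions the cap can evaluate."""
            if interactions_per_day <= 0:
                return 1.0
            return min(1.0, cap / interactions_per_day)

        print(coverage(30))  # 1.0   -> full coverage at 30 interactions/day
        print(coverage(80))  # 0.625 -> only ~63% of interactions evaluated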

    Currently, an Evaluator still has to review the AI-scored evaluations before they are released to the Agent. Then the Agent can give feedback, so the human is not totally out of the loop, even after automatic release of such evaluations becomes active.



    ------------------------------
    George Ganahl GCCX-AI, GCP, GCSME
    Technical Adoption Champion
    Genesys
    2024 Community Member of the Year
    ------------------------------



  • 3.  RE: AI scoring - implementation and use cases

    Posted 09-30-2025 17:36
    Edited by Deepa Galaiya 10-01-2025 02:41

    Thanks George,

    Our agents typically do 30-ish interactions a day, so this would equate to 100% coverage.

    I wonder if others are finding that they have to expand their Quality teams to review all of the AI scores, and how the QA role has evolved. I also wonder whether others in the community are finding they capture more incidents, compliance issues or complaints, which would be excellent for risk maturity, and how they are setting themselves up to manage the ripple effect.

    Does Genesys have a vision to eventually remove the need for a human-in-the-loop review, which would make the short-term effort really valuable for a long-term efficiency goal, or will the human in the loop always be required?

    With AI scoring, is it that:

    • AI scoring completes 100% interaction reviews (or up to 50 per agent per day and 20 questions per form)
    • It alerts the QA team on certain flags, phrases, low scores or other preset rules for the team to review
    • QA do a full review of these flagged calls, or ones with concerns, to validate the AI score
    • Complaints, incidents and compliance issues are captured by the QA team based on AI scores (I wonder if we could design a rule to trigger this alert and a workflow to the complaints, compliance and risk teams; a rough sketch of such a rule follows this list)
    • Feedback goes to Genesys to fine-tune the AI scoring rules if required, and a sample goes back to the agent
    • The agent has the ability to contest the result and provide feedback
    • Insights are used to develop coaching plans to uplift capability
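
    To make the rule idea concrete, here is a rough Python sketch of the triage step from the list above. The evaluation record shape, field names, score threshold and queue names are all my own assumptions for illustration, not the Genesys Cloud data model; it is only the routing logic I have in mind:

        def triage(evaluation: dict) -> str:
            """Decide where an AI-scored evaluation goes for human review."""
            score = evaluation.get("totalScore", 100)   # assumed field name
            flags = set(evaluation.get("flags", []))    # assumed field name

            # Compliance or complaint flags go straight to the risk teams.
            if "compliance_breach" in flags or "complaint" in flags:
                return "compliance_and_risk_queue"
            # A preset low-score threshold triggers a full QA validation.
            if score < 60:
                return "qa_full_review_queue"
            # Everything else gets a light-touch spot check.
            return "light_touch_sample"

        example = {"totalScore": 45, "flags": ["long_hold"]}
        print(triage(example))  # -> qa_full_review_queue

    In practice the queue names would map to work routing or a ticket to the complaints, compliance and risk teams; the point is that the rules are simple, preset and auditable.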

    What are your thoughts on the above process? Is this what it is designed to do?

    Cheers,

    Deepa 



    ------------------------------
    Deepa Galaiya
    Product Owner, Customer Interactions
    ------------------------------



  • 4.  RE: AI scoring - implementation and use cases
    Best Answer

    Posted 10-01-2025 12:17

    Hello Deepa,

    The current limits on the number of AI scoring questions per form and number of evaluations per agent/day are soft limits, and we are working on getting them increased. 

    Yes, the described process largely reflects what AI scoring is designed to do: automate coverage of interactions at scale, surface signals for human validation, and provide actionable insights for coaching, compliance, and process improvements. AI scoring saves QA resources and supervisors from spending many hours listening to recordings and reading transcripts, and lets them focus their time where it matters most: coaching agents, reviewing processes, and improving training procedures and documents.



    ------------------------------
    Jose Ruiz
    Genesys - Employees
    Product Manager
    jose.ruiz@genesys.com
    ------------------------------



  • 5.  RE: AI scoring - implementation and use cases

    Posted 10-01-2025 18:03

    Thanks Jose

    That's great to confirm that it is designed to automate coverage of interactions at scale. Given the requirement for a human-in-the-loop review, it would still be interesting to see how we evolve to make the review efficient enough for the increased scale of interactions. If we followed the current process, the reviewer would listen to the call to validate that the AI responses are accurate. I can imagine this might be easy for standard questions like "proper greeting", "closing the interaction" or "checking the customer is satisfied with the responses provided", but still time consuming for some of the more technical questions.

    The coaching, compliance and process-improvement actionable insights will be very beneficial for uplift in multiple areas. Is there an easy Genesys solution to quickly report on these categories, or will this be through manual root-cause analysis?
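
    In case we end up doing the manual route, here is a minimal Python sketch of root-cause aggregation, assuming the AI-scored results can be exported as rows tagged with a category. The export shape and the category labels are my assumptions, not a Genesys format:

        from collections import Counter

        # Hypothetical export: one row per failed/flagged question.
        rows = [
            {"agent": "a1", "category": "coaching",   "question": "greeting"},
            {"agent": "a2", "category": "compliance", "question": "disclosure"},
            {"agent": "a1", "category": "compliance", "question": "disclosure"},
        ]

        by_category = Counter(r["category"] for r in rows)                   # which insight areas dominate
        by_question = Counter((r["category"], r["question"]) for r in rows)  # recurring root causes

        print(by_category.most_common())
        print(by_question.most_common(5))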

    Cheers



    ------------------------------
    Deepa Galaiya
    Product Owner, Customer Interactions
    ------------------------------



  • 6.  RE: AI scoring - implementation and use cases

    Posted 10-01-2025 19:41

    Note that each AI-scored answer includes the AI explanation of why it chose the answer it did, so the supervisor does not have to do a lot of listening to the recording unless really desired.

    The main thing I usually spend a little time on is when the AI cannot figure out an answer. Then I look at the transcript at that time marker to see what was said, and if necessary listen to the recording. I then also need to spend a little time tuning the question itself so it can better be answered by AI.
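
    To show what I mean, here is a rough Python sketch of that triage: surface only the answers the AI could not score, with their time markers, so you can jump straight to the right spot in the transcript. The answer record shape and field names are made up for illustration, not the actual payload:

        def needs_human_review(answers):
            """Return AI answers with no score, ordered by position in the call."""
            unanswered = [a for a in answers if a.get("score") is None]
            return sorted(unanswered, key=lambda a: a.get("timeMarkerSec", 0))

        answers = [
            {"question": "Proper greeting",   "score": 100,  "timeMarkerSec": 5},
            {"question": "Verified identity", "score": None, "timeMarkerSec": 42},
        ]
        for a in needs_human_review(answers):
            print(f'{a["timeMarkerSec"]}s: review "{a["question"]}"')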



    ------------------------------
    George Ganahl GCCX-AI, GCP, GCSME
    Technical Adoption Champion
    Genesys
    2024 Community Member of the Year
    ------------------------------



  • 7.  RE: AI scoring - implementation and use cases

    Posted 10-01-2025 22:00

    Thanks George, 

    It might be that we evolve our processes to fully review the "exceptions" and complete a light-touch review on the rest.

    Have you had much success with using this feature? How are you finding the reporting aspect for drawing out actionable insights on coaching, compliance and process improvements?

    Cheers



    ------------------------------
    Deepa Galaiya
    Product Owner, Customer Interactions
    ------------------------------



  • 8.  RE: AI scoring - implementation and use cases

    Posted 10-02-2025 12:51

    I don't use it in production in a contact center, so I cannot speak to the reporting aspects.



    ------------------------------
    George Ganahl GCCX-AI, GCP, GCSME
    Technical Adoption Champion
    Genesys
    2024 Community Member of the Year
    ------------------------------



  • 9.  RE: AI scoring - implementation and use cases

    Posted 10-03-2025 00:09

    Thanks George - hopefully others from the community will share some of their experience with reporting :)



    ------------------------------
    Deepa Galaiya
    Product Owner, Customer Interactions
    ------------------------------



  • 10.  RE: AI scoring - implementation and use cases

    Posted 10-03-2025 13:32

    I have a question related to AI scoring for quality assurance. 

    Are there any controls over which evaluation forms go to the agents and count in their scoring performance? I ask because if we were scoring, for example, 50 calls each day for each agent, we wouldn't want all 50 evaluation forms being sent to the agent to review and sign off on; they wouldn't have time for this daily. So how is this controlled?
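
    One way I could imagine controlling it, purely as a sketch: release only a small daily sample of each agent's AI-scored evaluations for sign-off and keep the rest for aggregate reporting. The sampling rule and numbers below are hypothetical, not a Genesys feature; the release step itself would happen in Genesys Cloud:

        import random

        def pick_for_release(evaluation_ids, n=3, seed=None):
            """Randomly sample up to n of today's evaluations to send to the agent."""
            rng = random.Random(seed)
            return rng.sample(evaluation_ids, k=min(n, len(evaluation_ids)))

        todays_evals = [f"eval-{i}" for i in range(50)]     # 50 AI-scored calls
        print(pick_for_release(todays_evals, n=3, seed=7))  # only these 3 go to the agent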

    Thanks,



    ------------------------------
    James Starling
    Member Service Center Quality Assurance Manager
    Global Federal Credit Union
    j.starling@globalcu.org
    Anchorage, AK
    United States
    ------------------------------



  • 11.  RE: AI scoring - implementation and use cases

    Posted 10-03-2025 14:17

    Hello James,

    As of now, AI scoring evaluations require a user to review and submit the evaluation. At the time of submission, the user decides whether the evaluation should be released to the agent or not.

    For the soon-to-be-released fully automated evaluation process, the plan is for the release-to-agent functionality not to be available.



    ------------------------------
    Jose Ruiz
    Genesys - Employees
    Product Manager
    jose.ruiz@genesys.com
    ------------------------------



  • 12.  RE: AI scoring - implementation and use cases

    Posted 10-03-2025 20:05

    Hi Jose,

    For the soon-to-be-released fully automated evaluation process, will it still require a human-in-the-loop review of all scored evaluations before they are 'officially' counted? Are there any smart ways to flag evaluations with a low score, or with an issue, for human-in-the-loop review?

    Cheers



    ------------------------------
    Deepa Galaiya
    Product Owner, Customer Interactions
    ------------------------------