Workforce Engagement Management

Feedback Wanted: Your Experience with Virtual Supervisor (AI Scoring)

  • 1.  Feedback Wanted: Your Experience with Virtual Supervisor (AI Scoring)

    Posted 10-30-2025 11:06

    Hi everyone,

    The Virtual Supervisor (AI Scoring) feature has now been available in Genesys Cloud for several months, and we'd love to hear from you!

    We're looking to understand how this capability has impacted your organization - what benefits have you seen so far, and how has it influenced your quality management or performance processes?

    At the same time, we'd like to know about any barriers or challenges that have limited your use or expansion of the feature.

    To help you get the most out of Virtual Supervisor, we've also just launched a new AI Scoring Best Practices Guide. This resource shares recommendations and practical tips for optimizing AI scoring accuracy, aligning it with your evaluation forms, and driving better agent performance outcomes. We encourage you to review it and share your feedback or additional insights from your own experience.

    Your input will help us identify opportunities to improve the experience and guide future enhancements.

    Looking forward to hearing your thoughts - both the wins and the pain points!


    #AIScoring(VirtualSupervisor)

    ------------------------------
    Jose Ruiz
    Genesys - Employees
    Product Manager
    jose.ruiz@genesys.com
    ------------------------------


  • 2.  RE: Feedback Wanted: Your Experience with Virtual Supervisor (AI Scoring)

    Posted 11-06-2025 02:58

    Let's help our PM team - don't be shy, share your thoughts



    ------------------------------
    Tracy
    Genesys
    ------------------------------



  • 3.  RE: Feedback Wanted: Your Experience with Virtual Supervisor (AI Scoring)

    Posted 11-10-2025 03:25

    Hi.

    One of our customers has spent quite a lot of time refining the question wording to get what they need from this, but with patience and experience in adjusting the prompts, they are getting there. One issue encountered was that the AI scoring cannot differentiate between the ACD messaging and the agent's spoken words, which is affecting the AI's responses to some of the questions.

    Heather



    ------------------------------
    Heather Henderson
    ------------------------------



  • 4.  RE: Feedback Wanted: Your Experience with Virtual Supervisor (AI Scoring)

    Posted 11-10-2025 16:47

    Hello @Heather Henderson, thank you for your feedback. Adjusting an existing evaluation form for AI Scoring does take some time, trial and error included. We published this AI Scoring best practices article a few weeks ago; hopefully you and your customers will find it useful.

    Segment-based evaluations are an enhancement we are planning to release by mid-2026. You will be able to focus on an individual agent, on virtual agents (bots), or on the entire interaction (customer experience, etc.).

    Regards,



    ------------------------------
    Jose Ruiz
    Genesys - Employees
    Product Manager
    jose.ruiz@genesys.com
    ------------------------------



  • 5.  RE: Feedback Wanted: Your Experience with Virtual Supervisor (AI Scoring)

    Posted 11-12-2025 03:56

    Hi @Jose Ruiz

    In our company we have rather strict policies for Digital and AI Ethics, and one of the no-go rules is evaluation of co-worker behavior with the use of AI. For us to even start assessing the feature, some shifts would need to be made in how scores are assigned to evaluations.
    It would be great if we could use AI scoring for general customer experience assessment without assigning a score to a single agent. Let's say we could have the scores per queue, per skill, or per flow, but not tied to agent performance.
    Do you think something like this might come later?



    ------------------------------
    Ekaterina (Kate) Kononova
    Product Development | Data, Analytics & Quality Management
    ------------------------------



  • 6.  RE: Feedback Wanted: Your Experience with Virtual Supervisor (AI Scoring)

    Posted 11-12-2025 12:43

    Hello @Ekaterina Kononova, thank you for your feedback. Yes, we do have AI-driven interaction assessment in our roadmap for 2026.

    Based on the evaluation form that you create, it will answer the questions and calculate a score, but assign it to the interaction itself rather than to an agent. You can pursue use cases like customer churn score, customer effort score, etc.

    Regards,



    ------------------------------
    Jose Ruiz
    Genesys - Employees
    Product Manager
    jose.ruiz@genesys.com
    ------------------------------



  • 7.  RE: Feedback Wanted: Your Experience with Virtual Supervisor (AI Scoring)

    Posted 11-11-2025 10:05

    Jose - I really appreciate the expansion of documentation and support for AI scoring. One question I had: in the best practices document, it references "Review AI scoring reports monthly to identify low-confidence questions" under continuous improvement. Can you point me to this reporting in Genesys? I have been digging through resources, and the tool itself, and could not find another mention of it.



    ------------------------------
    Russell Donald
    InflowCX
    ------------------------------



  • 8.  RE: Feedback Wanted: Your Experience with Virtual Supervisor (AI Scoring)

    Posted 11-11-2025 11:00

    Very good catch. That portion of the document is a bit forward-looking, as in mid-January we will be releasing a feature where you will be able to view the agreement rate per AI Scoring or Evaluation Assistance question. The rate is calculated from the number of times a user changes an answer from what AI Scoring or Evaluation Assistance selected. A low agreement rate conveys that there are issues with how the question and/or help text are worded.
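
    For those curious how such a rate works out in practice, here is a rough illustration of the idea (my own sketch; the field names and data shapes are illustrative, not the actual Genesys Cloud schema):

```python
# Illustrative sketch of an agreement rate per question: the share of
# AI-selected answers that the evaluator kept unchanged.
from dataclasses import dataclass

@dataclass
class QuestionResult:
    question_id: str
    ai_answer: str      # answer selected by AI Scoring / Evaluation Assistance
    final_answer: str   # answer after any evaluator edits

def agreement_rate(results: list[QuestionResult]) -> dict[str, float]:
    totals: dict[str, int] = {}
    kept: dict[str, int] = {}
    for r in results:
        totals[r.question_id] = totals.get(r.question_id, 0) + 1
        if r.final_answer == r.ai_answer:
            kept[r.question_id] = kept.get(r.question_id, 0) + 1
    return {qid: kept.get(qid, 0) / n for qid, n in totals.items()}
```

    Questions with a low agreement rate are the ones whose wording or help text most likely need rework.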



    ------------------------------
    Jose Ruiz
    Genesys - Employees
    Product Manager
    jose.ruiz@genesys.com
    ------------------------------



  • 9.  RE: Feedback Wanted: Your Experience with Virtual Supervisor (AI Scoring)

    Posted 11-11-2025 11:59

    Thanks Jose!  I am very excited about the enhanced reporting for Quality.  Looking forward to seeing this in action!



    ------------------------------
    Russell Donald
    InflowCX
    ------------------------------



  • 10.  RE: Feedback Wanted: Your Experience with Virtual Supervisor (AI Scoring)

    Posted 16 days ago

    Hi Jose - can I clarify an item in the AI best practices guide? Under the AI scoring playbook it has a suggestion for Dead Air management, with help text as below.

    Help Text:
    Silences longer than 15 seconds are acceptable only when explained to the customer (for example, "I'll place you on a brief hold while I check this").

    I was under the impression that AI scoring cannot look at timestamps, so it would not be able to detect 15 seconds of silence. Can you clarify?
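
    To be concrete about what I mean, a check like that would look something like the following (purely illustrative; it assumes utterance-level start/end timestamps are available, which is exactly what I'm unsure about):

```python
# Hypothetical illustration only: find "dead air" gaps from utterance timestamps.
# The (start_s, end_s) pairs are an assumed data shape, not a Genesys Cloud structure.
def dead_air_gaps(utterances: list[tuple[float, float]], threshold_s: float = 15.0) -> list[float]:
    """Return the lengths (in seconds) of silences longer than threshold_s."""
    ordered = sorted(utterances)  # order utterances by start time
    gaps = []
    for (_, prev_end), (next_start, _) in zip(ordered, ordered[1:]):
        gap = next_start - prev_end
        if gap > threshold_s:
            gaps.append(gap)
    return gaps
```

    If only the transcript text is available, I don't see how this could be evaluated reliably.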


    Also, another question if I may. We are looking at implementing this in the future and I have been testing it out to see if it's suitable. You mention segment-based scoring is coming next year, and I have found through testing that not only does it confuse the agent with the ACD messages or digital bot flow messages, but I have also been able to get the customer's leg of the call to confuse the scoring as if it were the agent. Am I correct that the AI is inferring who is the customer and who is the agent based on the transcript, and there are no other 'smarts' or metadata being used to differentiate?



    ------------------------------
    Michael Ball
    Workforce Planner
    ------------------------------



  • 11.  RE: Feedback Wanted: Your Experience with Virtual Supervisor (AI Scoring)

    Posted 11-11-2025 12:16

    This is primarily a UI-related observation. The help text section is crucial for providing additional guidance (especially for cases where we want to ensure it understands what is factually correct), but the input field does not expand when lengthy guidance is included. This makes it difficult for users to review and confirm the full content.



    ------------------------------
    Melissa Callender
    Senior Operations Specialist
    Ontario Teachers Pension Plan
    ------------------------------



  • 12.  RE: Feedback Wanted: Your Experience with Virtual Supervisor (AI Scoring)

    Posted 11-12-2025 13:02

    @Melissa Callender thank you for your feedback. We are planning to update the help text box in the forms so it is more user friendly when entering an AI Scoring prompt. 



    ------------------------------
    Jose Ruiz
    Genesys - Employees
    Product Manager
    jose.ruiz@genesys.com
    ------------------------------



  • 13.  RE: Feedback Wanted: Your Experience with Virtual Supervisor (AI Scoring)

    Posted 11-12-2025 08:31

    We did a trial of both Agent Copilot and Supervisor Copilot. We found the AI scoring to be the weakest aspect of Supervisor Copilot.
    We found the reliance on the transcription service to be the major issue. We are using the Genesys native transcription service and had Genesys teams engaged to assist with tuning for the scoring, but when the transcription service can't accurately capture something like the greeting, which should be the same for every agent, it makes the automated scoring not worth it.
    Ultimately, at least at this point, we have decided not to rely on the AI scoring, but we could revisit it in the future. We are also considering using a different transcription service rather than the Genesys native one.


    ------------------------------
    Bob Hall
    .
    ------------------------------



  • 14.  RE: Feedback Wanted: Your Experience with Virtual Supervisor (AI Scoring)

    Posted 11-12-2025 13:05

    Thank you for your feedback @Bob Hall. May I know what language/dialect is being used in your case for transcription?

    We have made considerable updates to the native transcription quality for core dialects and will release a net new transcription model in the coming weeks. Feel free to send more information on the issues you faced directly to me so we can review them in detail.

    Regards,



    ------------------------------
    Jose Ruiz
    Genesys - Employees
    Product Manager
    jose.ruiz@genesys.com
    ------------------------------



  • 15.  RE: Feedback Wanted: Your Experience with Virtual Supervisor (AI Scoring)

    Posted 25 days ago

    Sorry for the late reply. We were using English (US). We are in the financial services sector (banking and insurance), but when it came to scoring for quality that should have been irrelevant, and accuracy did not improve with the financial terminology that was added. The scoring was done for the basic aspects of agent interaction: proper greeting and closing, and whether a specific statement was made, for example "my agent ID is 13245". The transcription service did not pick these up accurately enough to make the AI scoring reliable. Our quality team still had to manually score all of the questions due to the inaccuracy of the scoring, driven by the transcription service.

    We worked with Genesys professional services and solutions consultants but just couldn't get it to be accurate.  

    Our use case was to get the simple pieces of the scoring done by the AI that would be agnostic of line of business or customer servicing, in order to speed along the scoring and allow the quality team to still score the client specific or more difficult questions.



    ------------------------------
    Bob Hall
    .
    ------------------------------



  • 16.  RE: Feedback Wanted: Your Experience with Virtual Supervisor (AI Scoring)

    Posted 21 days ago

    Hello @Bob Hall,

    Thank you for the information. We have a brand new model for our native transcription engine under limited availability. The customers that have tested it so far have given very positive feedback, stating that they have seen a marked improvement in transcription accuracy.

    As you rightly pointed out, with better transcription, AI Scoring can be more accurate when answering questions.

    If you are interested in participating in the LA, feel free to reach out to @Liore Finkelstein directly.

    Regards,



    ------------------------------
    Jose Ruiz
    Genesys - Employees
    Product Manager
    jose.ruiz@genesys.com
    ------------------------------



  • 17.  RE: Feedback Wanted: Your Experience with Virtual Supervisor (AI Scoring)

    Posted 20 days ago

    Liore @Liore Finkelstein and Jose, I would like to test the new native transcription model, or understand the improvements over the existing one. We have a customer in the financial vertical who is struggling with the existing model, especially around number recognition. They are using EN (US). Thanks.



    ------------------------------
    Martin Bunting
    New Era Technology
    Senior Solutions Consultant
    ------------------------------



  • 18.  RE: Feedback Wanted: Your Experience with Virtual Supervisor (AI Scoring)

    Posted 11-12-2025 09:13

    I think it's pretty good. We have tested it on a few of our calls and it seems to be pretty accurate. We support multiple languages, and I haven't tested it on any other than English so far. Tuning the AI "agent" to get what you need makes big improvements, but out of the box with some simple questions it also seems to work fairly well.



    ------------------------------
    Andy Jackson
    Senior Unified Communications Engineer
    ------------------------------



  • 19.  RE: Feedback Wanted: Your Experience with Virtual Supervisor (AI Scoring)

    Posted 11-12-2025 13:09

    @Andy Jackson thank you for your feedback. Once you test it for more languages, please let us know how it goes!



    ------------------------------
    Jose Ruiz
    Genesys - Employees
    Product Manager
    jose.ruiz@genesys.com
    ------------------------------



  • 20.  RE: Feedback Wanted: Your Experience with Virtual Supervisor (AI Scoring)

    Posted 21 days ago

    Thank you for your feedback @Andy Jackson!



    ------------------------------
    Jose Ruiz
    Genesys - Employees
    Product Manager
    jose.ruiz@genesys.com
    ------------------------------



  • 21.  RE: Feedback Wanted: Your Experience with Virtual Supervisor (AI Scoring)

    Posted 24 days ago

    AI Scoring is great, but we need to be able to have more than 3 answers. Most clients are looking to have 5 or 6 answers for many of their questions. Also, there needs to be a way to provide prompts for each of the answers, especially when all you have is something like "all the time", "sometimes", "not at all". The same applies to yes/no questions: a way to define what is actually required for a yes or a no.

    The other thing is the time - most clients complain about the amount of time it takes to score anything over about 3 questions.

    I believe many question when the charge happens - when it scores or when the evaluation is submitted. This needs better clarification in the Resource Center.

    The Best Practice Guide is great, but I had to write a more comprehensive, yet easier-to-understand guide for our clients.



    ------------------------------
    Robert Wakefield-Carl
    ttec Digital
    Sr. Director - Innovation Architects
    Robert.WC@ttecdigital.com
    https://www.ttecDigital.com
    https://RobertWC.Blogspot.com
    ------------------------------



  • 22.  RE: Feedback Wanted: Your Experience with Virtual Supervisor (AI Scoring)

    Posted 21 days ago

    Hello @Robert Wakefield-Carl

    Thank you for the feedback. Expanding the number of answers for AI Scoring questions from 3 to 5 is in our roadmap for 2026.

    In terms of when the AI Scoring charge takes place, we are adding an FAQ that specifically covers that question. It will be published by the end of the week.

    I would love to review the guide you put in place if you don't mind. Feel free to send it directly to me.

    Regards,



    ------------------------------
    Jose Ruiz
    Genesys - Employees
    Product Manager
    jose.ruiz@genesys.com
    ------------------------------



  • 23.  RE: Feedback Wanted: Your Experience with Virtual Supervisor (AI Scoring)

    Posted 22 days ago

    The AI form either was not able to provide scores or did not work well with the attributes on our current monitoring forms. Currently these attributes can be open-ended, and scoring is not always straightforward. There is a bruise-vs-broken mindset. We rely on scoring alignment documents for each LOB to guide QAs, Team Leads, and People Leaders in determining what qualifies as a Yes, No, or NA. We are interested in a way to score interactions with qualitative scaled scoring rather than just quantitative (Y/N) scoring.



    ------------------------------
    Jon Dreier
    Solution Architect
    ------------------------------



  • 24.  RE: Feedback Wanted: Your Experience with Virtual Supervisor (AI Scoring)

    Posted 22 days ago

    @Jon, if you are interested in an alternative that offers automatic evaluation scoring with qualitative scaled scoring and automatic calibration, feel free to reach out.



    ------------------------------
    Hichem Agrebi
    hichem.agrebi@cc-expertise.com
    CC-Expertise Ltd
    ------------------------------



  • 25.  RE: Feedback Wanted: Your Experience with Virtual Supervisor (AI Scoring)

    Posted 21 days ago

    Hello Jon,

    I would appreciate the chance to review your form to see how it can best utilize the automated scoring options we have available. Feel free to reach out to me directly.

    Regards,



    ------------------------------
    Jose Ruiz
    Genesys - Employees
    Product Manager
    jose.ruiz@genesys.com
    ------------------------------



  • 26.  RE: Feedback Wanted: Your Experience with Virtual Supervisor (AI Scoring)

    Posted 19 days ago

    Hi everyone,

    We've been using Virtual Supervisor (AI Scoring) in a large retail operation in Brazil, and the impact on our Quality Management process has been very significant.

    Before AI Scoring, the entire quality process was fully manual and fragmented. Customer interactions happened in one platform, while quality evaluations were performed in another, with no native integration between them. This resulted in high operational effort, limited scalability, and long evaluation times.

    With the introduction of Genesys Quality Management, Policies, and AI Scoring, we were able to redesign the end-to-end process and achieve substantial gains:

    • Scale quality evaluations from ~2,000 forms/month to over 15,000 forms/month, without increasing operational cost

    • Reduce the number of quality analysts from ~30 to ~15, while significantly expanding evaluation coverage

    • Use Genesys policies to automatically distribute evaluation forms, ensuring consistent and unbiased sampling

    • Leverage AI Scoring to pre-fill and score form questions, accelerating the evaluation process

    • Use AI Insights to quickly understand the interaction context, highlights, and opportunities for improvement

    Even when human review and validation were still required, the efficiency gains were substantial:

    • The average evaluation time dropped from ~18m30s per form to ~4m30s, enabling the quality process to keep up with the actual volume of interactions being handled by the operation
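
    To put those figures in perspective, here is a rough back-of-envelope calculation (approximate values only, not exact operational data):

```python
# Rough arithmetic on the approximate figures above; illustrative only.
before_forms, before_min_per_form, before_analysts = 2_000, 18.5, 30
after_forms, after_min_per_form, after_analysts = 15_000, 4.5, 15

before_hours = before_forms * before_min_per_form / 60  # ~617 evaluator-hours/month
after_hours = after_forms * after_min_per_form / 60      # ~1,125 evaluator-hours/month

print(f"coverage: {after_forms / before_forms:.1f}x more forms per month")  # 7.5x
print(f"hours per analyst/month: {before_hours / before_analysts:.0f} -> {after_hours / after_analysts:.0f}")  # ~21 -> ~75
```

    In other words, a team half the size now spends more of its time evaluating, yet covers roughly 7.5x the volume because each form takes about a quarter of the time.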

    From a performance and management perspective, AI Scoring became a strong enabler for:

    • Faster and more frequent feedback cycles for agents

    • Greater consistency and standardization across evaluations, reducing subjectivity

    • A shift in the QA team's role from manual scoring to calibration, coaching, and quality strategy

    There were also important learnings and challenges along the way:

    • Evaluation form design is critical. Clear, objective questions are essential for good AI Scoring accuracy.

    • Initial calibration and continuous tuning are mandatory, especially during the early stages.

    • Change management plays a big role: positioning AI Scoring as an accelerator for quality teams, not a replacement, was key for adoption.

    Overall, Virtual Supervisor enabled a transition from a manual, sample-based quality model to a scalable, integrated, and data-driven quality strategy, which would not be feasible with traditional processes alone.

    We're looking forward to reviewing the AI Scoring Best Practices Guide and continuing to evolve this model as the product matures.

    Happy to exchange experiences with others who are also scaling AI Scoring in high-volume environments.



    ------------------------------
    Mateus Nunes
    Pre sales
    ------------------------------



  • 27.  RE: Feedback Wanted: Your Experience with Virtual Supervisor (AI Scoring)

    Posted 18 days ago

    Hello @Mateus Nunes

    Thank you so much for sharing this thoughtful and detailed post - your success story is incredibly insightful and a fantastic example of the impact AI Scoring can have in a high-volume operation.

    The scale and efficiency gains you've achieved in your quality process are truly impressive - especially going from 2,000 to 15,000 evaluations per month while reducing analyst workload. It's also great to hear how AI Scoring not only accelerated form completion but helped shift the QA team's focus toward higher-value activities like coaching and strategy. Your emphasis on form design, calibration, and change management really highlights key success factors for others looking to adopt AI Scoring at scale.

    Regards,



    ------------------------------
    Jose Ruiz
    Genesys - Employees
    Product Manager
    jose.ruiz@genesys.com
    ------------------------------



  • 28.  RE: Feedback Wanted: Your Experience with Virtual Supervisor (AI Scoring)

    Posted 14 days ago

    Overall it's been really well received, but you do need the transcription to be clear and, I find, the best-matching dialect selected, as well as the points below:

    It's a real skill getting the wording of a question right. I think having some standard, out‑of‑the‑box examples for common questions that most companies use would be a good idea - things like DPA or agent empathy.

    The recently released auto‑submit feature wasn't explained very clearly in the release notes. As I understand it, it doesn't work with policies because programmes are released only via the API, which many people don't use.

    The summaries explaining why a score has been given are useful, but they can be quite wordy. Having an option to shorten them would be helpful, especially when dealing with large forms.

    I believe policies are being phased out, but when a client has multiple forms across different departments it can get confusing. It's not always clear which forms are fully AI‑scored and which are a mix of evaluation assistance or manual scoring, let alone which department they belong to. At the moment you have to rely on naming conventions, but I think something like colour‑coding would make it much clearer.






    ------------------------------
    Neil Draycott
    ------------------------------