Original Message:
Sent: 01-07-2025 09:27
From: Anik Dey
Subject: Insights/ Experiences on Genesys Agent Copilot
@Leor Grebler can take this one. For Copilot, the model is not self-learning. We update the global models periodically based on the feedback we receive in the field. Most of the feedback comes to us directly from customers through CARE tickets, and some comes from the thumbs-up/thumbs-down controls. Other fine-tuning capabilities are coming soon: the ability to customize the interaction summary across several dimensions, including conciseness, formatting, how agents and customers are referred to, masking of PII, and more.
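To make those dimensions concrete, here is a purely illustrative sketch of what such per-org summary controls could look like. None of these keys are real Genesys settings or API fields; they are assumptions for illustration only:

    # Illustrative only: a hypothetical shape for interaction summary
    # customization. None of these keys are real Genesys settings.
    summary_settings = {
        "conciseness": "brief",            # e.g. "brief" | "standard" | "detailed"
        "formatting": "bullet_points",     # e.g. "bullet_points" | "paragraph"
        "participant_labels": {            # how agents and customers are referred to
            "agent": "Agent",
            "customer": "Customer",
        },
        "mask_pii": True,                  # redact PII in the generated summary
    }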
------------------------------
Anik Dey
Genesys - Employees
Original Message:
Sent: 01-06-2025 01:01
From: Robert Wakefield-Carl
Subject: Insights/ Experiences on Genesys Agent Copilot
@Anik Dey or @Leor Grebler, what is the best way to answer how training works when nothing is shared outside of the customer org, or the individual bot for that matter?
------------------------------
Robert Wakefield-Carl
ttec Digital
Sr. Director - Innovation Architects
Robert.WC@ttecdigital.com
https://www.ttecDigital.com
https://RobertWC.Blogspot.com
Original Message:
Sent: 01-06-2025 00:04
From: Ramsha Shaikh
Subject: Insights/ Experiences on Genesys Agent Copilot
How is the feedback mechanism for Summarization and Wrap-Up codes incorporated into the model? Is the summarization model retrained quarterly using this feedback, or is the feedback utilized in a different way?
If Genesys doesn't use customer data to train the model, does this policy also apply to the feedback provided for Summarization?
------------------------------
Ramsha Shaikh
Telecom/AI Engineer
Newfold Digital
Original Message:
Sent: 01-05-2025 23:54
From: Robert Wakefield-Carl
Subject: Insights/ Experiences on Genesys Agent Copilot
So, the speaker problem is being worked on. You need to be sure dual-channel recording is enabled so that the system does not have to guess between the internal and external speaker.
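As a quick local sanity check, assuming you have exported an interaction recording as a WAV file (the file name below is a placeholder), you can confirm it actually contains two channels before troubleshooting speaker attribution:

    import wave

    # Placeholder path to an exported interaction recording.
    with wave.open("exported_recording.wav", "rb") as recording:
        channels = recording.getnchannels()

    if channels == 2:
        print("Dual-channel recording: agent and customer are on separate channels.")
    else:
        print(f"Only {channels} channel(s): the system has to guess who is speaking.")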
As for the learning, the only mechanisms now are the feedback for the summarization and the wrap-up prediction. Genesys hopes to add more controls and the ability for you to bring your own LLM into the picture at a later date. If you really want it to learn better, consider donating 200 hours of recordings to Genesys to be privately used to train the models.
One thing I would like to test soon is the Email Intent feature. That might address your last issue.
------------------------------
Robert Wakefield-Carl
ttec Digital
Sr. Director - Innovation Architects
Robert.WC@ttecdigital.com
https://www.ttecDigital.com
https://RobertWC.Blogspot.com
Original Message:
Sent: 01-05-2025 23:41
From: Ramsha Shaikh
Subject: Insights/ Experiences on Genesys Agent Copilot
Hi Robert,
Thank you for taking the time to share such detailed information!
My initial question was intended to gather general insights. However, it might be helpful to focus specifically on summarization as a feature. If it's not too much trouble, could you kindly provide more insight into the questions I raised earlier, specifically within the context of conversation/interaction summarization? For reference, we are currently utilizing only the summarization feature of Agent Copilot.
Looking forward to your reply!
------------------------------
Ramsha Shaikh
Telecom/AI Engineer
Newfold Digital
Original Message:
Sent: 12-04-2024 21:08
From: Robert Wakefield-Carl
Subject: Insights/ Experiences on Genesys Agent Copilot
1. With Copilot, the learning is based on the knowledge base itself. While some of the model may be trained across the knowledge bases themselves, that data is never sent to Genesys to train their overarching LLM. So it comes down to design: do you store all your articles in one KB and train that, or keep them in separate repositories? I believe, and Genesys has confirmed, that the generative AI portion of Copilot comes from a larger LLM at Genesys that uses the individual articles to produce the summary. Again, nothing is brought into or shared out of the larger Genesys LLM.
2. I like to use the Knowledge Optimizer every day for the first couple of weeks, and then once or twice a week after that. Usually by then, most of the "I don't know what I don't know" issues will have been exposed and added to the KB. Of course, this goes back to your first question and whether you consolidate or not. I prefer the fast-and-furious start to get it right, then later checking and refining individual instances of utterance failures or recognition issues.
3. While the feedback helps the AI learn a correlation between questions and the answers provided, I don't see how, in its current form, it can really learn anything of substance that would correct what is presented to the agent. The feedback is really useful for the content team to curate the knowledge.
As for your last three points, four words for you: Garbage In, Garbage Out. If your articles don't have extra phrasings for the questions or answers longer than five words, you are just asking for all of the above. Luckily, you can easily catch these in the Optimizer and correct the articles with a drag and drop.
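As a rough illustration of catching garbage in before it becomes garbage out, here is a minimal sketch that lints an exported set of articles for short answers and missing alternate phrasings. The JSON layout (an "articles" list with "title", "alternate_phrasings", and "answer" fields) is an assumption about your own export, not the Genesys schema:

    import json

    # Assumed export format:
    # {"articles": [{"title": ..., "alternate_phrasings": [...], "answer": ...}, ...]}
    with open("kb_export.json") as f:
        articles = json.load(f)["articles"]

    for article in articles:
        title = article.get("title", "<untitled>")
        answer_words = len(article.get("answer", "").split())
        phrasings = article.get("alternate_phrasings", [])

        if answer_words <= 5:
            print(f"{title}: answer is only {answer_words} words - consider expanding it")
        if len(phrasings) < 3:
            print(f"{title}: only {len(phrasings)} alternate phrasings - add more question variants")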
------------------------------
Robert Wakefield-Carl
ttec Digital
Sr. Director - Innovation Architects
Robert.WC@ttecdigital.com
https://www.ttecDigital.com
https://RobertWC.Blogspot.com
Original Message:
Sent: 12-04-2024 11:25
From: Ramsha Shaikh
Subject: Insights/ Experiences on Genesys Agent Copilot
Hello All,
I'm reaching out to gather your experiences and insights on Genesys Agent Copilot. Specifically, I have a few questions:
How does the model learn and improve over time?
What steps have you found effective in improving its accuracy?
Continuous Learning vs Periodic Updates
Is the model capable of continuous learning in a live environment, or do updates and improvements typically happen in batches?
Incorporating Feedback
How is feedback incorporated into the model? In your experience, how quickly have you seen feedback reflected in performance improvements?
To provide some context, during our Copilot testing, we encountered some challenges in generating accurate summaries, such as:
- Misattributions (e.g., customer statements interpreted as coming from the agent)
- Assumptions about actions taken
- Missed references to critical issues, such as malware or spam emails
Have any of you experienced similar issues? If so, how did you address them, and what steps helped improve accuracy in your testing?
Looking forward to hearing about your experiences; any insights would be appreciated.
#ConversationalAI(Bots,AgentAssist,etc.)
#Roadmap/NewFeatures
------------------------------
Ramsha Shaikh
Telecom/AI Engineer
------------------------------