Hello all,
I'm doing some testing of the new Knowledge Fabric capabilities via Copilot and have a few questions/observations.
On this page: https://help.genesys.cloud/articles/create-a-knowledge-configuration/m, it states: "When enabled, previous conversation turns are included in the search query, allowing the AI to generate more relevant, context-aware answers. Contextual answers are especially useful in multi-turn conversations while non-contextual search includes only the latest user query."
My question is: what exactly does this entail?
Say, for example, the conversation starts off by talking about the weather, or, more commonly, the agent asks the customer for authentication details. Will those utterances factor into the search query when an actual question is asked?
And when the topic changes from Address Update to Account Balance, how does the system "clear" the memory and start a "fresh search"? I'd love to better understand this flow so I can accurately communicate the expected behavior.
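For anyone following along, here is a purely conceptual sketch of how a contextual search query *might* be assembled from recent turns. To be clear, this is my own illustration of the question, not Genesys Cloud's actual implementation; the window size, the choice to keep only user turns, and the absence of any topic-reset logic are all assumptions:

```python
# Hypothetical illustration only -- NOT the Genesys Cloud implementation.
# Assumption: the last few user utterances are concatenated into one query.

def build_search_query(user_turns, max_turns=3):
    """Combine the most recent user utterances into a single search query.

    max_turns is an assumed context window; the real product may weight,
    filter, or normalize turns entirely differently.
    """
    recent = user_turns[-max_turns:]
    return " ".join(recent)

turns = [
    "Nice weather today, isn't it?",
    "I need to update my address",
    "Actually, what's my account balance?",
]
print(build_search_query(turns, max_turns=2))
```

The open questions above boil down to: which turns make it into `user_turns`, and what event (if any) truncates that list when the topic changes.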
An aside: it would be really useful to see in the UI the normalized search query built from the conversation context, so that we can understand what exactly was searched. The Copilot UI still shows the "utterances" that led to the search, but if the actual search query differs from those, that should be reflected in the UI.
I've also noticed that we no longer have control over the confidence levels of the search when a Copilot uses a Knowledge Fabric Configuration, and I've created an idea for it accordingly: Genesys Cloud Ideas Portal
Thanks,
Peter
#AICopilot(Agent,Supervisor,Admin)
------------------------------
Peter Stoltenberg
Engineer
------------------------------