I believe I have found the issue causing our low self-service %, so I am posting here again in case it helps someone else.
Using the browser developer tools, I pulled the JSON response behind the Interactions view for a single day and searched it for "selfServed" to count how many interactions were marked "selfServed": true, then read through the transcript of each chat. When the engineer from our Genesys partner built our digital bot flows, they used a Card Carousel to display the top three FAQs for each bot. So if I opened the bot on the Mesa Fire website, it would show a carousel of the most common questions asked on that site.
If the customer doesn't interact with the bot at all, that session appears to be counted as a negative interaction. Which, OK, I can understand; that really doesn't fit the criteria of "self served". The bigger problem for us is that if the customer selects one of the card carousel buttons but never types out a query, that is also counted as a negative interaction. That describes the majority of our customers, and it is exactly why we use the FAQ carousel. So if I want these types of interactions counted as positive going forward, I will need to rethink and re-engineer all of our bot flows.
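To make the logic concrete, here is a rough sketch in Python (my own reading, not Genesys code; the session fields are hypothetical stand-ins) of how the dashboard seems to classify a session against the three criteria quoted in my original message below:

    def is_self_served(session):
        # "At least one user query was answered" - a carousel button
        # click alone never registers as an answered user query
        if session["answered_queries"] == 0:
            return False
        # "It was not escalated"
        if session["escalated"]:
            return False
        # "It does not include negative feedback"
        if session["negative_feedback"]:
            return False
        return True

    # A customer who only taps a carousel button looks like this:
    carousel_only = {"answered_queries": 0, "escalated": False,
                     "negative_feedback": False}
    print(is_self_served(carousel_only))  # False -> counted against us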
JSON Response from v2/analytics/conversations/details/query:
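(The full response is far too large to paste, so below is a heavily trimmed, illustrative sketch of the shape I searched through, plus the quick count I ran. The exact nesting of selfServed is from memory, so treat it as approximate.)

    import json

    # Heavily trimmed, illustrative shape only; real responses carry far
    # more fields, and the exact nesting of selfServed may differ.
    raw = json.dumps({
        "conversations": [
            {"conversationId": "00000000-0000-0000-0000-000000000001",
             "participants": [{"sessions": [{"selfServed": True}]}]},
            {"conversationId": "00000000-0000-0000-0000-000000000002",
             "participants": [{"sessions": [{"selfServed": False}]}]},
        ]
    })

    # The same quick-and-dirty count I did by hand in the dev tools:
    data = json.loads(raw)
    flags = [s["selfServed"]
             for c in data["conversations"]
             for p in c["participants"]
             for s in p["sessions"]
             if "selfServed" in s]
    print(sum(flags), "of", len(flags), "sessions marked selfServed")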
------------------------------
Nicole VanWie
State of Arizona - City of Mesa
------------------------------
Original Message:
Sent: 04-16-2024 13:49
From: Nicole VanWie
Subject: Digital Bot Sessions Self Served Confusion
Like many here, we recently migrated from the Bold360/Genesys DX platform and have since implemented several Cloud CX bots using the same knowledge base. On the Knowledge Performance view, the Sessions Self Service metric is consistently low (~25%). I have manually reviewed all transcripts from interactions within a date range and I cannot validate this low number; by my calculations it should be more in the 75-80% range, which is what we saw on the old platform. I am having a difficult time interpreting what goes into this metric, even after reviewing it with a consultant provided by our Genesys partner/VAR.
The metric definition states "The total number of successfully self served intent and/or knowledge consultations made during the bot session", and the additional info on the dashboard itself states "Sessions without negative signals, divided by the total amount of sessions. A session is self-served if: At least one user query was answered. It was not escalated. It does not include negative feedback." For what it's worth, we do not have feedback turned on, so that criterion shouldn't affect our metric.
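As I read that definition, the math is simply the following (hypothetical numbers, chosen only to show how a score in our range could arise):

    # "Sessions without negative signals, divided by the total amount of
    # sessions." Numbers are hypothetical, for illustration only.
    total_sessions = 1000
    negative_signal_sessions = 750  # no answered query, escalated,
                                    # or negative feedback
    print((total_sessions - negative_signal_sessions) / total_sessions)  # 0.25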
To get a better idea of what the dashboard is doing, and to build our own Power BI reports, we are calling the v2/analytics/bots/aggregates/query API to pull the tBotSession, oBotSessionQuery, and oBotSessionQuerySelfServed metrics. I had hoped for better granularity, but this endpoint really just returns the SUM and COUNT aggregates, which only lets me reproduce the same metric I believe is off in the built-in views.
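For reference, our request looks roughly like this (a sketch: the interval and granularity are example values, token handling is omitted, and you would substitute your own region's API host):

    import requests

    # Sketch of our aggregates call; body shape follows the standard
    # Genesys Cloud analytics aggregate query pattern.
    body = {
        "interval": "2024-04-01T00:00:00.000Z/2024-04-08T00:00:00.000Z",
        "granularity": "P1D",  # daily buckets
        "metrics": ["tBotSession", "oBotSessionQuery",
                    "oBotSessionQuerySelfServed"],
    }
    resp = requests.post(
        "https://api.mypurecloud.com/api/v2/analytics/bots/aggregates/query",
        headers={"Authorization": "Bearer <token>"},
        json=body,
    )
    print(resp.json())  # SUM/COUNT rollups only, no per-session rows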
Does anybody have experience with the Self Served metrics, or a better understanding of what could cause our low score? I am starting to think something in our actual bot build is affecting this, but I am not getting anywhere with my analysis. Any insight is greatly appreciated.
#ConversationalAI(Bots,AgentAssist,etc.)
#Reporting/Analytics
------------------------------
Nicole VanWie
State of Arizona - City of Mesa
------------------------------