Hi All,
Can anyone confirm whether the following statement is accurate today, and whether the capability is generally available?
"Machine learning algorithms analyze tone, pitch, word choice, and pacing to detect emotional cues like frustration or satisfaction. Trained on large datasets, these models continuously refine their accuracy, helping distinguish between neutral and emotionally charged interactions."
In other words, does speech analytics actually do this today?
And if so, when did it begin analyzing tone directly from the audio rather than relying solely on speech transcription?
Thanks,
Suren
#Reporting/Analytics #Roadmap/NewFeatures
------------------------------
Suren Nathan
Product Owner - Telecom and Contact Centre
SUNWING VACATIONS INC.
------------------------------