Legacy Dev Forum Posts


Conversation endTime Issue


    Chris_Phillips | 2018-11-02 20:19:28 UTC | #1

    Hello

    We need to do work on conversations when they have ended.

    We are processing all conversations by checking the system for any updated conversations every N seconds (where N currently == 60). (calling /api/v2/analytics/conversations/details/query with an interval)

    For each conversation we load the full conversation from 'v2/conversations/{id}' and check for the 'endTime' to see if it's finished for now.
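    A minimal sketch of this polling loop in Python; `query_details` and `get_conversation` are hypothetical stand-ins for the two API calls, not functions from any SDK:

    ```python
    from datetime import datetime, timezone

    def make_interval(start: datetime, end: datetime) -> str:
        """Format an analytics query interval as 'start/end' in ISO-8601 UTC."""
        fmt = "%Y-%m-%dT%H:%M:%S.000Z"
        return f"{start.strftime(fmt)}/{end.strftime(fmt)}"

    def poll_once(last_checkpoint: datetime, query_details, get_conversation):
        """One polling pass: query conversations updated since the last
        checkpoint, fetch each full conversation, and keep only the
        ones that have ended (endTime present)."""
        now = datetime.now(timezone.utc)
        interval = make_interval(last_checkpoint, now)
        ended = []
        for summary in query_details({"interval": interval}):
            conv = get_conversation(summary["conversationId"])
            if conv.get("endTime"):  # skip conversations still in progress
                ended.append(conv)
        return now, ended
    ```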

    We have now seen cases where we are missing conversations because the act of updating the "endTime" value in the conversation doesn't seem to mark the conversation as "changed", so it's not appearing in our query again for us to discover it now has an endTime.

    Is this expected? Any ideas what we can do about this?

    Thanks

    Chris


    Becky_Powell | 2018-11-02 21:20:28 UTC | #2

    Hi there Chris,

    I think you could simplify your workflow here by adjusting your query to return only those conversations for which 'conversationEnd' exists. This would save you the step of fetching the conversation API object just to check it for endTime.

    As an example, here's a sample query that I generated in our Developer Tools:

    {
     "interval": "2018-10-25T15:29:42.000Z/2018-10-25T15:30:42.000Z",
     "order": "asc",
     "orderBy": "conversationStart",
     "paging": {
      "pageSize": 25,
      "pageNumber": 1
     },
     "conversationFilters": [
      {
       "type": "and",
       "predicates": [
        {
         "type": "dimension",
         "dimension": "conversationEnd",
         "operator": "exists",
         "value": null
        }
       ]
      }
     ]
    }

    If using this query filter does not resolve your issue, could you please explain your workflow in a bit more detail so that I can better understand your problem?

    Thanks!

    -Becky


    Chris_Phillips | 2018-11-02 21:58:08 UTC | #3

    Hi Becky

    Thanks for replying!

    I was trying to simplify my explanation a bit, a few extra points..

    1. We also process emails which don't end (and for which endTime isn't important)
    2. We also process voice calls which usually end (and for which endTime is important)
    3. For the voice calls we wait until we see an endTime to do any processing. Our system is robust so if we get the same conversation twice as having ended (eg callbacks) we can handle it.
    4. For our processing we need the full conversation objects details, not just the abbreviated ones provided by the analytics query, so we have to make the second call.

    What we are seeing is this.

    1:59:47 pm: phone call c2 ends.
    2:00:00 pm: server queries all calls in the previous minute that have changed and gets back the array [c1, c2].
    2:00:01 pm: server queries GET /conversations/c2; the conversation object returned has endTime == null, so the conversation is ignored.
    2:00:?? pm: conversation endTime is set to 1:59:47 pm.
    2:01:00 pm: server queries all calls in the previous minute that have changed and gets back the array [c3, c4]. Conversation c2 never appears in the list again, so we never discover it now has an end time.

    Thoughts?

    Thanks

    Chris


    tim.smith | 2018-11-02 22:11:41 UTC | #4

    It sounds like you're running into a bit of data latency. Does this issue usually happen at the top of the hour or on the half hour? A lot of people like to do a lot of querying at exactly those times all across PureCloud, and it can cause a very brief spike in latency to process new data in certain cases (talking a handful of seconds). If you need an investigation and definitive answer for specific cases of this, please open a case with PureCloud Care to investigate. They'll need the conversation IDs and correlation IDs of your API requests, along with a description of the data you got vs. the data you expected for each request.

    To work around this, my first suggestion is to use notifications instead of polling. This will give you notifications of conversation updates in real-time and will prevent you from having to poll the API and deal with rate limiting; IIRC, the conversation notifications have the same schema/data as what is returned from the conversations APIs.

    If notifications aren't an option, I'd recommend delaying your interval by an additional minute. If your interval is from 1 to 2 minutes ago, vs. 0 to 1 minutes ago, that should give any data latency ample time to get caught up.
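    The suggested workaround amounts to shifting the query window back in time; a sketch, assuming a 60-second poll period and a 60-second delay:

    ```python
    from datetime import datetime, timedelta

    def delayed_window(now: datetime, delay_s: int = 60, span_s: int = 60):
        """Return (start, end) for a query window that ends `delay_s`
        seconds in the past, giving analytics data time to catch up
        before the interval is queried."""
        end = now - timedelta(seconds=delay_s)
        start = end - timedelta(seconds=span_s)
        return start, end
    ```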


    Chris_Phillips | 2018-11-02 22:22:52 UTC | #5

    Hi Tim

    Thanks for the quick response.

    We are polling approximately every minute, looking up from the end of the previous lookup to 'now()'. So it's approximately 1 minute.

    In the case that I noticed it did occur around the hour mark (though it was a fluke I noticed it). We are doing this on the server side so I don't know of a mechanism to receive notifications of events?

    We have discussed lagging our lookup as a possible solution, but wanted to make sure that inconsistent latency was responsible, and that the maximum latency would be a few seconds, so that lagging by 60 seconds would let us be confident we were not running into the issue.

    Thanks

    Chris


    Chris_Phillips | 2018-11-02 22:27:02 UTC | #6

    Hi Tim

    We were also wondering what will mark a conversation as "changed". We have seen cases where a call being marked as "flagged" (using the UI button) didn't trigger for us either.

    Is that likely the same issue?

    Thanks

    Chris


    tim.smith | 2018-11-05 14:03:25 UTC | #7

    We are doing this on the server side so I don't know of a mechanism to receive notifications of events?

    It's no different than for client side applications. Every mainstream language has websocket support. https://developer.mypurecloud.com/api/rest/v2/notifications/notification_service.html
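    Subscribing from a server process follows the same shape as from a client: create a notification channel, subscribe it to topics, then listen on the channel's websocket. A Python sketch of the topic-building step; the topic format shown is my recollection and should be confirmed against the available-topics list in the notification service docs:

    ```python
    def queue_conversation_topics(queue_ids):
        """Build the subscription body for
        PUT /api/v2/notifications/channels/{channelId}/subscriptions,
        one conversation topic per queue. Topic naming here is an
        assumption; verify it against the documented topic list."""
        return [{"id": f"v2.routing.queues.{qid}.conversations"}
                for qid in queue_ids]
    ```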

    We have discussed lagging our lookup as a possible solution, but wanted to make sure that inconsistent latency was responsible

    That's my best guess, but if you need a definitive answer, the Care team can investigate and verify.

    We were also wondering what will mark a conversation as "changed". We have seen cases where a call being marked as "flagged" (using the UI button) didn't trigger for us either.

    Something in the analytics data model has to change. If it's not a property that analytics tracks, analytics won't know there was activity for that conversation at that time. If there are analytics properties that are changing, and those changes are not being included in results for the interval containing the change, I'd report that to Care, as that's inconsistent with my understanding of how analytics is supposed to work.


    Chris_Phillips | 2018-11-05 19:19:50 UTC | #8

    Hi Tim

    Okay, it's true that the libraries support it, but it's not a common use case one sees in documentation :slight_smile:

    Anyway, I reviewed the notification options again and remembered they do not cover our use case: you need to explicitly listen to conversation changes by user or by queue.

    We want all conversations regardless of Queues (of which we have more than 20 already).

    I will do some more digging.

    Chris


    Becky_Powell | 2018-11-05 21:10:14 UTC | #9

    Hi Chris,

    Yes, you must subscribe to conversation topics either by user or by queue.

    I would echo Tim's suggestion to delay your interval by an additional minute, and let our Care team know if you continue to miss conversations.

    Have a great day!


    Chris_Phillips | 2018-11-29 23:01:49 UTC | #10

    Hello Becky

    So we made a change to delay by 1 minute. Since then we have detected at least one instance of a conversation missed this way. There may be many more missed, but we aren't set up to easily scan for misses...

    At the time we ran the lookup the query returned 3 conversations. When the same was run the next day it returned 4 conversations.

    We do not know what to do now.... should we delay longer? What range would guarantee we would NOT miss conversations like this?

    We are calling

    https://api.mypurecloud.com/api/v2/analytics/conversations/details/query

    And our query is like ...

    {
     "interval": "2018-11-28T21:50:13/2018-11-28T21:51:14",
     "paging": {
      "pageSize": 100,
      "pageNumber": 1
     }
    }

    We have talked about processing the last 30 minute range every 1 minute, and keeping track of identical payloads so as not to issue duplicates. But this is pushing a bunch of work onto us to work around this issue with the API.
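    The re-query-and-dedupe idea above can be sketched with a seen-set keyed on a hash of each payload; all names here are illustrative:

    ```python
    import hashlib
    import json

    class DedupingPoller:
        """Re-query a wide trailing window every minute, but emit each
        distinct conversation payload only once. A payload that later
        changes (e.g. endTime gets set) hashes differently, so it is
        emitted again."""

        def __init__(self):
            self._seen = set()

        def new_results(self, conversations):
            fresh = []
            for conv in conversations:
                # Hash a canonical JSON encoding so identical payloads dedupe.
                key = hashlib.sha256(
                    json.dumps(conv, sort_keys=True).encode()
                ).hexdigest()
                if key not in self._seen:
                    self._seen.add(key)
                    fresh.append(conv)
            return fresh
    ```

    Note that the seen-set grows without bound as sketched; in practice entries older than the trailing window could be expired.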

    Please advise

    Thanks

    Chris


    tim.smith | 2018-11-30 15:19:48 UTC | #11

    At this point, it would probably be best to open a case with PureCloud Care to investigate why data isn't showing up quickly enough. Without knowing what's causing it (that's what Care can determine) my only answer is to add additional delay to a point where you don't miss any data.


    system | 2018-12-31 15:19:50 UTC | #12

    This topic was automatically closed 31 days after the last reply. New replies are no longer allowed.


    This post was migrated from the old Developer Forum.

    ref: 3894