Keep in mind it's not that it's going to be unreliable; it's just that there is no 100% guarantee. Coupled with your point that these flows run fully independently, with no knowledge of each other, it gives me pause to recommend such an approach for a mission-critical operation, especially when there are other options available that avoid those potential pitfalls.
Original Message:
Sent: 09-16-2025 11:03
From: Marcello Jabur
Subject: Parallelize API / Data Actions
Interesting Richard, thank you for sharing! It's good to keep that in mind.
I guess that confirms my idea wouldn't really be reliable!
------------------------------
Marcello Jabur
Original Message:
Sent: 09-16-2025 11:00
From: Richard Schott
Subject: Parallelize API / Data Actions
@Marcello Jabur see: https://help.mypurecloud.com/faqs/how-quickly-a-trigger-invokes-the-workflow/. Most of the time triggers run near-instantaneously, but occasionally internal systemic conditions may slow their processing. This is partly due to the distributed nature of trigger processing, as well as the inherent nature of the pub/sub eventing process. In practical terms it's not likely to make a difference, but it's worth noting, especially for a use case like this, where we're attempting to aggregate information that will be used throughout the conversation lifecycle, so it simply has to work.
------------------------------
Richard Schott
Product Manager
Original Message:
Sent: 09-16-2025 10:52
From: Marcello Jabur
Subject: Parallelize API / Data Actions
Hi Richard,
Yes, I absolutely agree that my solution is probably not a great idea haha. Just trying to give him some ideas. It could be useful for someone who wants to try to "parallelize" some things like that, but like you said, it might not be the greatest approach.
But regarding your point about getting the current participant data (that the flow might not have access to it): if that's the case, it could be achieved by using a Data Action instead of the "Get Participant Data" component. Through the action, I believe the Participant Data returned would be the latest update, so this would be a way to avoid requiring a transfer to another flow.
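To illustrate the idea: if such a Data Action wrapped a live lookup of the conversation (I'm assuming something like GET /api/v2/conversations/{conversationId} on the Platform API, which returns participants with their attributes), the response translation would just pull the customer participant's attributes out. A minimal sketch, using a hand-made stand-in payload rather than a real API response:

```javascript
// Sketch: extract the customer's current attributes from a conversation
// payload, the way a custom Data Action's response translation might.
// The payload below is a simplified stand-in, not a real response shape.

function customerAttributes(conversation) {
  const customer = conversation.participants.find(
    (p) => p.purpose === 'customer'
  );
  return customer ? customer.attributes : {};
}

const sample = {
  participants: [
    { purpose: 'customer', attributes: { fraudScore: '12', lang: 'en-US' } },
    { purpose: 'ivr', attributes: {} },
  ],
};

console.log(customerAttributes(sample)); // { fraudScore: '12', lang: 'en-US' }
```

Because the lookup happens at call time, it would reflect whatever the Workflow has written so far, rather than the snapshot the flow took at ingest.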
Also, can you expand on what you mentioned regarding "triggers are not guaranteed to run in real time"?
Is that a common issue, and is it expected to stay that way? I wasn't aware of that.
------------------------------
Marcello Jabur
Original Message:
Sent: 09-16-2025 10:41
From: Richard Schott
Subject: Parallelize API / Data Actions
@Marcello Jabur, while I applaud the ingenuity of the approach, I'm not sure I'd hang my hat on that; triggers are not guaranteed to run in real time, and if there is a delay in system processing, it's entirely possible that the trigger runs later than your attempt to fetch participant data. I also believe that the flow consumes the conversation at the ingest/start of the flow and works off the data available on the conversation at that point; if the conversation's participant data is updated while the flow is executing, I don't believe the flow has access to those updated attributes. I have seen people resolve this last point by transferring to another flow to "reset" the participant data to current, although I can't advise as to whether this is a good strategy.
IMHO, the most effective way to solve this is to leverage Jerome's suggestion. Functions were specifically designed to handle scatter/gather use cases: a single function call can parallelize multiple HTTP calls and aggregate all of the responses into a single cohesive response that is returned to the flow. It takes a little more setup (getting all the appropriate credentials onto the integration, structuring the request template, and developing/deploying the node.js code to handle all the HTTP requests), but once complete it becomes a really elegant way to handle the use case: the action gets a name like "Customer Intake Query", its usage within the flow is easy to understand, and all the outputs can be bound to flow variables for later use. Depending on the particulars, this might need to be broken into two Functions that run serially, simply to fit under the package size limitation (see: https://help.mypurecloud.com/articles/add-function-configuration/; the package must remain below 256 MB), but that is still substantially better and more readable within the flow layout than 10-12 serial API requests.
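As a rough sketch of the scatter/gather shape in node.js (the lookups here are stubbed with timers so it's self-contained; field names, values, and timings are made up, and real Function code would issue HTTP requests using the credentials and request template on the integration):

```javascript
// Stub lookup: resolves after `ms` milliseconds, standing in for a
// ~100-300 ms API call (fraud score, profile, language pref, etc.).
const lookup = (name, value, ms) =>
  new Promise((resolve) => setTimeout(() => resolve({ name, value }), ms));

// Scatter: start every call at once; gather: merge the results into one
// cohesive response object the flow binds to variables in a single action.
async function customerIntakeQuery(customerId) {
  const t0 = Date.now();
  const results = await Promise.all([
    lookup('fraudScore', 12, 150),
    lookup('languagePref', 'en-US', 200),
    lookup('tenureYears', 7, 120),
    lookup('repeatCalls', 2, 180),
  ]);
  const response = Object.fromEntries(results.map((r) => [r.name, r.value]));
  response.elapsedMs = Date.now() - t0;
  return response;
}

customerIntakeQuery('cust-123').then((r) => console.log(r));
```

The total elapsed time tracks the slowest call (~200 ms here) rather than the sum of all of them (~650 ms), which is the whole point versus running the actions serially in the flow.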
------------------------------
Richard Schott
Product Manager
Original Message:
Sent: 09-15-2025 13:15
From: Marcello Jabur
Subject: Parallelize API / Data Actions
Hello Alex,
I have thought about this in the past, and I believe there is something you can try.
I'm not saying it's a good solution or anything (it probably isn't, to be honest), but you can take a look and see if it helps.
You could trigger a Workflow to run in parallel with your Inbound Call Flow, maybe on the "Customer Start" event.
In the Workflow you can run all the APIs (one by one, not in parallel), while your Call Flow plays a "Welcome" prompt or something similar with barge-in disabled. That way you can be sure the customer spends at least X seconds (say 5, or 10) listening to that first prompt, and in that time the Workflow should be able to finish running all the APIs. (So you should make sure this first prompt is long enough that all the API requests will always have finished.)
When the Workflow is done, it could add all the relevant information as Participant Data (multiple attributes, or a single one with everything in it).
The Inbound Call Flow would then use "Get Participant Data" to retrieve all that information and set the necessary variables.
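For the "single one with everything in it" variant: Participant Data values are strings, so one way is to JSON-encode all the Workflow's results into a single attribute and unpack it on the Call Flow side. A small sketch (attribute and field names are made up for illustration; in Architect itself the unpacking would use the flow's JSON expression helpers rather than plain JavaScript):

```javascript
// Workflow side: pack every lookup result into one string attribute,
// avoiding a dozen separate Set/Get Participant Data round trips.
const results = {
  fraudScore: 12,
  languagePref: 'en-US',
  tenureYears: 7,
};
const attributes = { intakeData: JSON.stringify(results) };

// Call Flow side: after Get Participant Data, parse the JSON back out
// and read the individual fields into flow variables.
const intake = JSON.parse(attributes.intakeData);
console.log(intake.languagePref); // en-US
```
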
---------
I don't think it's a great solution... but it's something you could try to improve.
The reason it isn't a great solution is that the Workflow and the Call Flow have no knowledge of each other running. (The Call Flow doesn't actually know that the Workflow started and is running the APIs, and the Workflow doesn't know whether the Call Flow is still running or the customer has already hung up, etc.)
But I don't think there is any other solution; you can't really play an audio prompt in the call flow across those 10 API calls, because the prompt will be interrupted each time an API call finishes.
Let me know if this helps in any way, or if I misunderstood your needs!
------------------------------
Marcello Jabur
Original Message:
Sent: 09-15-2025 08:43
From: Alex Whyatt
Subject: Parallelize API / Data Actions
Hi all
We have around 10-12 API data actions pulling data from systems as a call is answered by the IVR: fraud score, customer profile, language preference, tenure, repeat call counter, etc.
We have optimized each to take just a few hundred milliseconds, but when several run in series it adds up, and we can hit 4 or 5 seconds to accumulate everything before the flow can make a routing choice.
Question: is there a way to parallelize the data tasks? What have other people done? We are investigating collapsing a few of the smaller calls into one larger one, but we can't really leave any out or push them later in the flow, as we are aiming for a good level of personalization in the IVR.
Any tips or tricks? Would this be something to log on the ideas platform?
Thanks,
Alex
#API/Integrations
------------------------------
Alex Whyatt
Scrum Master
------------------------------