Jack,
I'm not sure if even Genesys can give you a complete answer here.
Obviously, way behind the scenes there are servers, but Genesys Cloud is composed of hundreds of microservices that run in what is known as a "serverless" mode. Additional instances of the microservice that handles each function are spun up automatically when demand requires, and are terminated again when they are no longer needed. This is all managed by AWS.
AFAIK you cannot be sure that successive calls to the lookup will even be processed by the same machine in the background, so whether it's served from disk (which, I believe, is mostly SSD) or from memory is largely irrelevant. The whole idea is that the platform auto-scales as load increases, so performance should be fairly consistent regardless of load.
The article Robert references makes a good read; I would also look up the Developer Engagement channel on YouTube (https://www.youtube.com/@developerengagement6052). That being said, a lot of that material is concerned with ways of optimizing access via the API (in many cases from an external system). Internally, the data table lookups should already be optimized for you.
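If you do end up hitting the table from an external system via the Platform API rather than from inside a flow, the usual advice boils down to caching results on your side so you aren't repeating the same lookup thousands of times an hour. Just as a rough sketch in Python - the table ID, token, and region host are placeholders, and the row endpoint is written from memory, so verify the path and parameters against the API Explorer before relying on it:

    import time
    import requests

    # Placeholder values - substitute your own region host, data table ID, and OAuth token.
    API_HOST = "https://api.mypurecloud.com"
    DATATABLE_ID = "your-datatable-id"
    ACCESS_TOKEN = "your-oauth-token"

    _cache = {}               # key -> (row, fetched_at)
    CACHE_TTL_SECONDS = 300   # only refetch a given row every 5 minutes

    def lookup_row(key):
        """Fetch a data table row by key, serving repeats from a local TTL cache."""
        cached = _cache.get(key)
        if cached and time.time() - cached[1] < CACHE_TTL_SECONDS:
            return cached[0]

        resp = requests.get(
            f"{API_HOST}/api/v2/flows/datatables/{DATATABLE_ID}/rows/{key}",
            headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
            params={"showbrief": "false"},  # ask for all columns, not just the key
            timeout=10,
        )
        resp.raise_for_status()
        row = resp.json()
        _cache[key] = (row, time.time())
        return row

For lookups done inside an Architect flow with the built-in Data Table Lookup action, none of that is needed - as above, the platform handles it for you.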
HTH.
------------------------------
Paul Simpson
Eventus Solutions Group
------------------------------
Original Message:
Sent: 05-11-2023 13:37
From: Jack Liu
Subject: Performance of Data table behind the scenes
Hi all,
We plan on creating a large data table that will pretty much hit every limit. My question is how the data table is accessed on the cloud server. Is the entire table read into memory, or does it require a dip to a DB? Also, is the lookup done through a hashed dictionary based on the key (which is pretty fast), or is the row located through a sequential search?
I'm very concerned about performance, since it will be accessed on the order of thousands of times per hour. We need the lookup to be as fast as possible.
Thanks!
#ArchitectureandDesign
------------------------------
Jack Liu
TTEC Digital, LLC fka Avtex Solutions, LLC
------------------------------