Genesys Cloud - Main


  Thread closed by the administrator, not accepting new replies.
  • 1.  Performance of Data table behind the scenes

    Posted 05-11-2023 13:38

    Hi all,

    We plan on creating a large data table that will pretty much hit every limit.  My question is how the data table is accessed on the cloud server.  Is the entire table read into memory, or does each lookup require a dip to a database?  Also, is the lookup done through a hashed dictionary based on the key, which is pretty fast, or is the row located through a sequential search?

    I'm very concerned about performance, since the table will be accessed on the order of thousands of times per hour.  We need the lookup to be as fast as possible.

    Thanks!


    #ArchitectureandDesign

    ------------------------------
    Jack Liu
    TTEC Digital, LLC fka Avtex Solutions, LLC
    ------------------------------


  • 2.  RE: Performance of Data table behind the scenes

    Posted 05-11-2023 13:56

    Aside from the size limitations, which you already know about, data table operations are limited to 5000/min.  I believe this is adjustable, possibly up to 25K, but I'm not sure on that.

    I would look at this blueprint for building a data table cache:  Design Architect flow data actions for resiliency (genesys.cloud)
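    The caching idea from that blueprint can be sketched in a few lines: hold each row lookup in memory for a short TTL so repeated lookups within that window never hit the data table at all. This is only an illustrative sketch; `fetch_row` is a hypothetical stand-in for whatever actually performs the data table lookup (a data action, an API call, etc.).

```python
import time

class TTLCache:
    """Time-based cache placed in front of a data table row lookup."""

    def __init__(self, fetch_row, ttl_seconds=60, clock=time.monotonic):
        self._fetch = fetch_row      # hypothetical: wraps the real lookup
        self._ttl = ttl_seconds
        self._clock = clock
        self._store = {}             # key -> (expires_at, row)

    def get(self, key):
        now = self._clock()
        entry = self._store.get(key)
        if entry is not None and entry[0] > now:
            return entry[1]          # fresh entry: no lookup performed
        row = self._fetch(key)       # stale or missing: one real lookup
        self._store[key] = (now + self._ttl, row)
        return row
```

    With thousands of lookups per hour against a mostly static table, even a short TTL keeps the operations-per-minute count well under the documented limit.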



    ------------------------------
    Robert Wakefield-Carl
    ttec Digital
    Sr. Director - Innovation Architects
    Robert.WC@ttecdigital.com
    https://www.ttecDigital.com
    https://RobertWC.Blogspot.com
    ------------------------------



  • 3.  RE: Performance of Data table behind the scenes

    Posted 05-12-2023 10:57

    Jack,
    I'm not sure if even Genesys can give you a complete answer here.
    Obviously, way behind the scenes, there are servers, but Genesys Cloud is composed of hundreds of microservices that run in what is known as a "serverless" mode. Additional instances of the microservice that handles each function are automatically spun up when demand requires, and are terminated again automatically when no longer required. This is done by AWS.
    AFAIK you cannot be sure that successive lookup calls will even be processed by the same machine in the background, so whether the read is done from disk (which, I believe, is mostly SSD) or from memory is largely irrelevant.  The whole idea is that the service auto-scales as load increases, so performance should be fairly consistent regardless of load.

    The article Robert references is a good read; I would also look up the Developer Engagement channel on YouTube (https://www.youtube.com/@developerengagement6052).  That being said, a lot of those resources are concerned with optimizing access via the API (in many cases from an external system).  Internally, the data-table lookups should already be optimized for you.
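    For external API access specifically, the usual resiliency pattern is to treat a rate-limit response (HTTP 429) as a signal to retry with exponential backoff and jitter. A minimal sketch, assuming a caller-supplied `call_api` function that returns a status code and body (this is not a specific Genesys Cloud SDK call, just the general pattern):

```python
import random
import time

def backoff_delay(attempt, base=0.5, cap=30.0):
    """Exponential backoff with full jitter: grows as base * 2^attempt, capped."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

def call_with_backoff(call_api, max_attempts=5, sleep=time.sleep):
    """Retry call_api while it reports rate limiting (HTTP 429)."""
    for attempt in range(max_attempts):
        status, body = call_api()
        if status != 429:
            return status, body
        sleep(backoff_delay(attempt))   # wait before the next attempt
    return status, body                  # give up after max_attempts
```

    Combined with a cache like the one Robert's blueprint describes, this keeps occasional 429s from cascading into failed flow executions.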

    HTH.



    ------------------------------
    Paul Simpson
    Eventus Solutions Group
    ------------------------------