Inefficient Export: genesyscloud_tf_export fetches thousands of rows from all datatables despite specific include_filter_resources

    Posted 19 hours ago

    Hi everyone,

    We are using the Genesys Cloud Terraform Provider to build a data extraction tool. We've hit a performance wall when exporting datatable rows with the genesyscloud_tf_export resource: even with granular filters, the provider's execution time scales with the total size of the org rather than with the filtered data.

    We are reporting a significant performance issue and inefficient resource retrieval in genesyscloud_tf_export. When attempting to export a single datatable's rows using a specific filter, the provider still fetches and pages through all rows of every datatable in the organization.

    Example of filter configuration:

    _resource "genesyscloud_tf_export" "datatables_row" {
    directory = "./datatablesrow"
    include_filter_resources = ["genesyscloud_architect_datatable_row::Prioridades.ATR_AGENTES"]
    export_as_hcl = false
    include_state_file = true
    }

    The Issue: Even with this specific filter, the process takes 10+ minutes. Our TRACE logs reveal that the provider is performing GET requests for dozens of unrelated datatables and paginating through them entirely.

    Evidence from Logs: We see the provider paginating through high page numbers (e.g., pageNumber=40) for datatables that are NOT in our filter:

    • GET /api/v2/flows/datatables/0f2436e5-.../rows?pageNumber=40&pageSize=100

    • GET /api/v2/flows/datatables/ce0f73c0-.../rows?pageNumber=9&pageSize=100

    Multiple "Read Datatable Row" operations for IDs that should have been excluded by the filter.

    Ineffective Filtering: The include_filter_resources filter appears to be applied locally, only after the provider has already downloaded the entire organization's datatable data (a sketch of this apparent flow follows these points).

    Performance Scaling: The export time scales with the total volume of data in the org, not with the filtered subset. In our case, 40,000 rows across 100+ tables (at pageSize=100, roughly 400 paginated GET requests) cause an ~11-minute wait for a result that contains only a few rows.

    API Usage: While we aren't hitting rate limits, the sheer volume of unnecessary GET requests is inefficient and prone to timeouts in larger environments.
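    For illustration, here is a minimal Python sketch of the flow our TRACE logs suggest. This is only a model of the apparent behavior, not the provider's actual (Go) code, and every name in it is hypothetical:

    # Hypothetical model of the behavior suggested by the TRACE logs; all
    # names are illustrative, not the provider's real internals.

    def fetch_all_pages(table):
        """Stand-in for paging through every row of one datatable."""
        return table["rows"]

    def export_datatable_rows(all_datatables, include_filter):
        downloaded = []
        for table in all_datatables:            # every table, filtered or not
            for row in fetch_all_pages(table):  # pageNumber=1..N per table
                downloaded.append((table["name"], row))
        # include_filter_resources appears to trim results only AFTER the
        # full org-wide download has completed:
        return [(n, r) for n, r in downloaded if n in include_filter]

    # Toy data: two tables, but the filter targets only one of them.
    tables = [
        {"name": "Prioridades.ATR_AGENTES", "rows": ["r1", "r2"]},
        {"name": "Unrelated.TABLE", "rows": ["r"] * 4000},  # still downloaded
    ]
    print(len(export_datatable_rows(tables, {"Prioridades.ATR_AGENTES"})))  # 2

    What we would expect instead is for the filter to be resolved first, so that pages are downloaded only for the tables that match it.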

    Why is the provider retrieving rows from all datatables when the filter explicitly targets only one?

    Is there a way for the tf_export resource to apply the filter before starting the API discovery/download phase?

    Is there a way to optimize the data download process?
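    As a point of comparison for the last question: pulling only the target table's rows directly from the Platform API takes just a handful of requests. Below is a minimal Python sketch, assuming a valid OAuth token; the rows endpoint and paging parameters are taken from the TRACE logs above, while the region host, the placeholder IDs, and the "entities"/"pageCount" response fields (the usual Platform API listing shape) are our assumptions:

    import requests

    BASE = "https://api.mypurecloud.com"                 # adjust to your region
    HEADERS = {"Authorization": "Bearer <oauth-token>"}  # placeholder token

    def fetch_rows(datatable_id):
        """Page through the rows of a single datatable, as seen in the logs."""
        rows, page = [], 1
        while True:
            resp = requests.get(
                f"{BASE}/api/v2/flows/datatables/{datatable_id}/rows",
                headers=HEADERS,
                params={"pageNumber": page, "pageSize": 100},
            )
            resp.raise_for_status()
            body = resp.json()
            rows.extend(body.get("entities", []))
            if page >= body.get("pageCount", 1):
                return rows
            page += 1

    # Only the single table named in include_filter_resources is touched:
    rows = fetch_rows("<id-of-Prioridades.ATR_AGENTES>")
    print(f"{len(rows)} rows fetched")

    For a table of this size that is one or two requests, versus the hundreds the export currently issues org-wide.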

    Thank you very much.


    #CXasCode

    ------------------------------
    Baeza Guardon
    ------------------------------