
Inefficient Export: genesyscloud_tf_export fetches thousands of rows from all datatables despite specific include_filter_resources

  • 1.  Inefficient Export: genesyscloud_tf_export fetches thousands of rows from all datatables despite specific include_filter_resources

    Posted 01-09-2026 03:44

    Hi everyone,

    We are using the Genesys Cloud Terraform Provider to build a data extraction tool. We've hit a performance wall when exporting Datatable Rows with the genesyscloud_tf_export resource. Even with granular filters, the provider's execution time scales with the total Org size rather than the filtered data.

    We are reporting a significant performance issue and inefficient resource retrieval in genesyscloud_tf_export. When attempting to export a single datatable's rows using a specific filter, the provider still fetches and pages through all rows of every datatable in the organization.

    Example of filter configuration:

    resource "genesyscloud_tf_export" "datatables_row" {
      directory                = "./datatablesrow"
      include_filter_resources = ["genesyscloud_architect_datatable_row::Prioridades.ATR_AGENTES"]
      export_as_hcl            = false
      include_state_file       = true
    }

    The Issue: Even with this specific filter, the process takes 10+ minutes. Our TRACE logs reveal that the provider is performing GET requests for dozens of unrelated datatables and paginating through them entirely.

    Evidence from Logs: We see the provider paginating through high page numbers (e.g., pageNumber=40) for datatables that are NOT in our filter:

    • GET /api/v2/flows/datatables/0f2436e5-.../rows?pageNumber=40&pageSize=100

    • GET /api/v2/flows/datatables/ce0f73c0-.../rows?pageNumber=9&pageSize=100

    We also see multiple "Read Datatable Row" operations for IDs that should have been excluded by the filter.

    Ineffective Filtering: include_filter_resources seems to be applied locally, after the provider has already downloaded the entire organization's datatable data (a sketch of this pattern follows below).

    Performance Scaling: The export time scales with the total volume of data in the Org, not with the filtered subset. In our case, 40,000 rows across 100+ tables lead to an ~11-minute wait for an export that contains only a few rows.

    API Usage: While we aren't hitting rate limits, the sheer volume of unnecessary GET requests is inefficient and prone to timeouts in larger environments.
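    To make the diagnosis concrete, here is a minimal Go sketch of the pattern we infer from the TRACE logs. The function names and types are hypothetical, not the provider's actual code; only the endpoint shapes in the comments come from our logs:

    package main

    import "fmt"

    type row struct{ name string }

    // Stubs standing in for the endpoints seen in the TRACE logs.
    func listAllDatatables() []string { return []string{"0f2436e5-...", "ce0f73c0-..."} }

    func listRowPage(tableID string, page int) (rows []row, more bool) { return nil, false }

    // exportDatatableRows sketches the observed behavior: every datatable is
    // paged in full, and the include filter is only consulted afterwards.
    func exportDatatableRows(include func(row) bool) []row {
        var all []row
        for _, id := range listAllDatatables() { // every table, filtered or not
            for page := 1; ; page++ { // GET .../datatables/{id}/rows?pageNumber=N&pageSize=100
                rows, more := listRowPage(id, page)
                all = append(all, rows...)
                if !more {
                    break
                }
            }
        }
        var kept []row
        for _, r := range all {
            if include(r) { // include_filter_resources applied only after the full download
                kept = append(kept, r)
            }
        }
        return kept
    }

    func main() {
        got := exportDatatableRows(func(r row) bool { return r.name == "Prioridades.ATR_AGENTES" })
        fmt.Println(len(got)) // a few rows, after tens of thousands have been fetched
    }

    With this structure, the request count is fixed by the Org's total row volume, which matches the scaling we observe.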

    Why is the provider retrieving rows from all datatables when the filter explicitly targets only one?

    Is there a way for the tf_export resource to take the filter into account before starting the API discovery/download phase?

    Is there a way to optimize the data download process?
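    For reference, the targeted retrieval we are hoping for needs only two request types against the same API. A rough Go sketch; the region host and token handling are placeholders for our environment, and the table lookup relies on the list endpoint's name query parameter:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "os"
    )

    // get performs one authenticated GET against the Platform API.
    // The host and token are placeholders; adjust for region and auth flow.
    func get(path string) []byte {
        req, _ := http.NewRequest("GET", "https://api.mypurecloud.com"+path, nil)
        req.Header.Set("Authorization", "Bearer "+os.Getenv("GENESYS_TOKEN"))
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        return body
    }

    func main() {
        // 1. Resolve the one table by name instead of listing every table.
        fmt.Println(string(get("/api/v2/flows/datatables?name=Prioridades")))

        // 2. Page only that table's rows (the same endpoint our logs show being
        //    hit for every table in the Org). The ID comes from step 1.
        tableID := "<id-from-step-1>"
        fmt.Println(string(get("/api/v2/flows/datatables/" + tableID + "/rows?pageNumber=1&pageSize=100")))
    }

    That is one lookup plus a handful of row pages, instead of paging through every table in the Org.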

    Thank you very much.


    #CXasCode

    ------------------------------
    Baeza Guardon
    ------------------------------


  • 2.  RE: Inefficient Export: genesyscloud_tf_export fetches thousands of rows from all datatables despite specific include_filter_resources

    Posted 01-14-2026 04:03

    Hi Baeza, we've forwarded your inquiry to the team. Thank you.



    ------------------------------
    Catherine Agnes Corpuz
    Software Development Engineering Manager
    ------------------------------



  • 3.  RE: Inefficient Export: genesyscloud_tf_export fetches thousands of rows from all datatables despite specific include_filter_resources

    Posted 01-20-2026 03:03

    Hi Baeza,

    Thank you for the detailed query and examples. We've investigated this issue.

    The performance issue you're experiencing stems from a fundamental limitation in the Genesys Cloud Architect API, not just the Terraform provider.

    The available API endpoints for datatable rows are:
    • GET /api/v2/flows/datatables (lists all datatables; supports a name filter for tables)
    • GET /api/v2/flows/datatables/{datatableId}/rows (lists one table's rows, page by page)
    • GET /api/v2/flows/datatables/{datatableId}/rows/{rowId} (reads a single row by its key)

    The API does not provide:
    • A way to search for rows by name across all datatables
    • A way to filter rows within a table by row key during retrieval
    • A cross-table row lookup endpoint

    This means the provider must iterate through all datatables and all their rows to discover what exists before any filtering can be applied.

    We could implement an optimization using the name parameter of the GetFlowsDatatables API: because the provider names row resources as TableName.RowKey, we can parse the table name out of the filter and list only that table. The limitation is that this relies on the provider's naming convention, so it would miss rows added outside the provider and would not be a generic solution.
    I will check with the API team whether there is a plan to support filtering datatable rows by name in the future.
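    To be clear about what that heuristic would and would not cover, here is a minimal sketch (illustrative Go, not the provider's implementation):

    package main

    import (
        "fmt"
        "strings"
    )

    // tableNameFromRowFilter extracts the datatable name from a filter such as
    // "genesyscloud_architect_datatable_row::Prioridades.ATR_AGENTES", relying
    // on the provider's TableName.RowKey naming convention for row resources.
    func tableNameFromRowFilter(filter string) (string, bool) {
        _, name, ok := strings.Cut(filter, "::")
        if !ok {
            return "", false // no resource name in the filter; fall back to a full scan
        }
        table, _, ok := strings.Cut(name, ".")
        // Limitation: a row added outside the provider may not follow the
        // convention, or a table name may itself contain a ".", so this
        // split can be wrong. That is why the optimization is not generic.
        return table, ok
    }

    func main() {
        table, ok := tableNameFromRowFilter("genesyscloud_architect_datatable_row::Prioridades.ATR_AGENTES")
        fmt.Println(table, ok) // "Prioridades" -> GetFlowsDatatables with name=Prioridades
    }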

    Thanks
    Hemanth




    ------------------------------
    Hemanth Dogiparthi
    Manager, Software Engineering
    ------------------------------