Genesys Cloud - Main


  • 1.  Clarification on Knowledge Fabric AI Experience Token Consumption Model

    Posted 2 days ago

    I'm seeking clarification specifically about Knowledge Fabric pricing, to ensure an accurate understanding of the commercial model.
    At first glance, Knowledge Fabric appears to be free, but we all understand that adding a RAG solution has a cost; some transparency around cost and billing is therefore necessary to ensure usage fits into predictable budgets.

    Based on Genesys published information, my understanding is as follows:

    My Current Understanding

    1. Core Knowledge Fabric capabilities (Knowledge Portal and Knowledge Search) are included in GC2, GC3, and AI Experience bundles at no additional cost.
    2. There is no separate metering for search or retrieval within the Knowledge Portal or Knowledge App.
    3. However, when Knowledge Fabric is used by:
      • Agent Copilot
      • Virtual Agents
      • Other AI-powered tools

    the interaction is billed through AI Experience Tokens.

    4. Genesys states that:

    "Each time Knowledge is used to generate an answer, perform reasoning, or execute an action through an AI-powered tool, the interaction consumes a predefined number of tokens based on complexity and compute usage."

    From this, it appears that:

    • AI-powered use of Knowledge Fabric is metered.
    • Token consumption is interaction-based.
    • Token usage depends on complexity and compute usage.

    Clarification Questions

    To better understand the commercial implications, could you please clarify:

    1. How many AI Experience Tokens are consumed per Agent Copilot interaction when Knowledge Fabric is used to generate an answer?
      • Is there a published token consumption table?
      • Is token usage fixed per interaction or variable?
    2. Does the Copilot license allocation (e.g., 40 tokens named / 60 tokens concurrent) represent:
      • A baseline entitlement that covers typical usage? In that case, is a fair-use allowance of Knowledge Fabric usage included, or is it all charged on top?
      • Or is it simply a feature activation cost, with additional tokens consumed per interaction?
    3. When Knowledge Fabric (RAG-based reasoning) is used within Copilot,
      • Does this consume more tokens than legacy Copilot text search or knowledge retrieval?
      • Is there a separate complexity tier for RAG-powered responses, or is any RAG answer considered complex by default and billed extra?
    4. Is there any fair-use guidance available?
      • For example, expected number of Copilot interactions per user-month before additional token purchases are required?
    5. For forecasting purposes, is there documentation that outlines:
      • Token burn rates by AI feature
      • Complexity tiers
      • Overage modeling examples
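
    In the absence of published burn rates, the forecasting exercise in point 5 can be sketched with placeholder numbers. To be clear, the per-interaction rates below are purely hypothetical assumptions for illustration, not published Genesys figures, and the feature names are invented for the example:

    ```python
    # Hypothetical AI Experience Token burn forecast.
    # NONE of these rates are published Genesys figures -- they are
    # placeholder assumptions to illustrate the forecasting model.
    ASSUMED_RATES = {
        "copilot_answer": 0.5,      # assumed tokens per RAG-generated Copilot answer
        "virtual_agent_turn": 0.2,  # assumed tokens per Virtual Agent turn
    }

    def monthly_token_burn(interactions_per_month: dict) -> float:
        """Sum the assumed token cost across AI features for one month."""
        return sum(ASSUMED_RATES[feature] * count
                   for feature, count in interactions_per_month.items())

    def users_covered(burn_per_user: float, entitlement: float = 40.0) -> bool:
        """Check whether one named user's monthly burn fits the 40-token allocation."""
        return burn_per_user <= entitlement

    # Example: 60 Copilot answers and no Virtual Agent turns per user-month.
    burn = monthly_token_burn({"copilot_answer": 60, "virtual_agent_turn": 0})
    print(burn)                 # 30.0 tokens under these assumptions
    print(users_covered(burn))  # True: fits inside the assumed 40-token entitlement
    ```

    Official per-feature rates and complexity tiers would simply replace the assumed values in this table, which is exactly why published burn rates matter for budgeting.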

    Objective

    The goal is simply to ensure accurate cost forecasting and avoid misunderstandings around AI Experience Token consumption when Knowledge Fabric is powering Copilot or Virtual Agents.

    Clear guidance on token consumption per AI interaction would be extremely helpful for planning and customer discussions.

    Thank you in advance for the clarification.



    ------------------------------
    Hichem Agrebi
    ------------------------------


  • 2.  RE: Clarification on Knowledge Fabric AI Experience Token Consumption Model

    Posted yesterday

    Hi Hichem,

    The Tokens in this case are LLM Tokens not AI Experience Tokens. This is referenced in https://help.genesys.cloud/articles/understand-genesys-agent-copilot-ai-models-and-llm-input/ "Large language model (LLM) tokens are not the same as Genesys Cloud AI Experience tokens."

    The LLM token costs are covered by the feature itself and are not metered per customer or passed on. The only case at present where there is some level of correlation is AI Guide creation, where 1 AI Experience Token is used to cover the variable underlying cost.



    ------------------------------
    Richard Chandler
    Connect
    ------------------------------



  • 3.  RE: Clarification on Knowledge Fabric AI Experience Token Consumption Model

    Posted yesterday

    Thanks Richard,

    Appreciate the clarification regarding LLM tokens vs AI Experience Tokens.
    However, I'm specifically referring to the wording in the Knowledge Fabric FAQ, which states:
     
    "Each time Knowledge is used to generate an answer, perform reasoning, or execute an action through an AI-powered tool, the interaction consumes a predefined number of tokens based on complexity and compute usage."
    And also:
    "Knowledge Fabric can only be used through Virtual Agents or Agent Copilot, and any AI consumption through those touchpoints is billed according to their respective usage rates (e.g., interaction-based pricing or AI Experience Tokens)."
     
    I fully understand that LLM tokens are not the same as Genesys Cloud AI Experience Tokens. Any AI query generates input, output, and inference token usage anyway, and there is an add-on cost to setting up and running a RAG pipeline. Why would the FAQ mention billing if it referred only to LLM tokens that are already included? Maybe it's my English, but the nuance seems pretty obvious to me.
     
    My question is whether the "predefined number of tokens" referenced above refers to customer-billable AI Experience Tokens, or only to internal LLM tokens that are not metered per customer. It is clear that setting up the RAG pipeline and syncing documents with the data source is not charged; my question concerns usage and knowledge surfacing in Copilot/VA.

    The wording suggests AI Experience Token consumption, but I would appreciate explicit confirmation from Genesys to avoid any misunderstanding. If Genesys confirms that this is all included in GC2/GC3/GC4 licences and covered by the $40/named or $60/concurrent user price then it's perfect and good news for all customers. If however there's an add-on cost, we need a way to measure and track it and budget it somehow upfront to avoid any surprises.

    There's also a mention of a limit of 17 SharePoint folders, but no indication at all of any maximum number of documents (nor any hint that only text documents are supported).

    Thank you.



    ------------------------------
    Hichem Agrebi
    ------------------------------



  • 4.  RE: Clarification on Knowledge Fabric AI Experience Token Consumption Model

    Posted yesterday

    Hi Hichem,

    For an official response you'll need to go to your account/partner manager. However, there is no separate line item or AI Token charge for RAG etc. I have been running a number of Knowledge Fabric and RAG-based interactions with no additional charges beyond the expected AI Tokens.



    ------------------------------
    Richard Chandler
    Connect
    ------------------------------



  • 5.  RE: Clarification on Knowledge Fabric AI Experience Token Consumption Model

    Posted 36 minutes ago

    Hi Hichem,

    The tokens in this sentence refer to LLM tokens, not AI pricing tokens.

    "Each time Knowledge is used to generate an answer, perform reasoning, or execute an action through an AI-powered tool, the interaction consumes a predefined number of tokens based on complexity and compute usage."

    Apologies for the confusion; I'll request that this be made clearer. Can you send me the URL that you're referencing?

    Thanks,

    Amanda 



    ------------------------------
    Amanda Halpin
    Principal Product Manager, Knowledge @ Genesys
    ------------------------------



  • 6.  RE: Clarification on Knowledge Fabric AI Experience Token Consumption Model

    Posted a minute ago

    Thanks Amanda, the reference document isn't in the Resource Center but in the Genesys Cloud Knowledge Fabric - FAQ.

    So this confirms, then, that there is no limit on using RAG pipelines through Knowledge Fabric within Copilot, and hence that all usage costs are included in the 40-token/named and 60-token/concurrent user allocations, respectively?

    LLM token consumption is evident for any AI query; the main ask was about the associated AI Experience Token usage, i.e., any add-on cost.
    Thanks for your confirmation.



    ------------------------------
    Hichem Agrebi
    ------------------------------