Genesys Cloud - Main


  • 1.  Send an enriched prompt to the Virtual Agent

    Posted 20 days ago

    Hi everyone,

    I'm looking for guidance around a specific enterprise orchestration pattern involving Architect and the Virtual Agent, and whether there is a supported way to satisfy this requirement in Genesys Cloud.

    High‑level flow

    1. A customer interacts with a Digital Bot.
    2. Architect detects a generic intent (for example: password).
    3. The Architect task associated with the intent starts a guided disambiguation dialog, asking for additional structured information, such as:

      "Do you need to reset the password for the website or for the mobile app?"

    4. The customer's response is captured in an Architect variable (e.g. passwordContext = website | mobile).
    5. The flow then hands off to the Virtual Agent, sending a prompt enriched with this additional context, so the Virtual Agent has a clearer and more constrained understanding of the request.

    Conceptual example of the prompt passed to the Virtual Agent:

    The customer is asking about a password reset.
    The reset applies to the mobile application.
    Please guide the customer through the correct resolution steps.
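To make the enrichment step concrete, it could be sketched as a simple template fill, where `passwordContext` stands in for the Architect variable from step 4. All names here are illustrative; this is not a Genesys Cloud API, just a sketch of the idea:

```python
# Illustrative sketch: assembling an enriched prompt from context
# collected in Architect before handing off to the Virtual Agent.
# `password_context` mirrors the Architect variable passwordContext.

PROMPT_TEMPLATE = (
    "The customer is asking about a password reset.\n"
    "The reset applies to the {target}.\n"
    "Please guide the customer through the correct resolution steps."
)

# Map the disambiguation result onto wording for the prompt.
TARGETS = {"website": "website", "mobile": "mobile application"}

def build_enriched_prompt(password_context: str) -> str:
    """Fill the template using the disambiguation result."""
    target = TARGETS.get(password_context, "account")  # fallback if unset
    return PROMPT_TEMPLATE.format(target=target)

print(build_enriched_prompt("mobile"))
```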

    Is there a supported way to enrich the prompt sent to the Virtual Agent with information collected in Architect (variables, disambiguation results, context), after intent resolution?

    I know that Genesys Cloud can ask the customer "Which KB article are you interested in?", with the articles retrieved through the RAG process. I'd be interested to know whether disambiguation can be managed in another way.

    Thanks


    #ConversationalAI(Bots,VirtualAgent,etc.)

    ------------------------------
    Vittorio Iessi
    Senior Consultant
    ------------------------------


  • 2.  RE: Send an enriched prompt to the Virtual Agent

    Posted 20 days ago

    Hello Vittorio, 

    Great question! This is actually a really solid use case, and yes, there's a supported way to do exactly what you're describing. The typical approach is to handle the initial intent detection and any disambiguation in Architect first, using things like Ask for Slot, Yes/No, or menus, store that info in variables, and then pass it into your Virtual Agent using the "Call Guide" action.

    When you configure that action, you can map your Architect variables directly to input variables in your AI Guide, so the Guide starts with all the context you already collected and doesn't need to ask the same questions again. This works well for scenarios like narrowing down "password" issues into something more specific before handing off to the Guide.

    From there, the Guide can use that context to give more tailored responses and even pass data back to Architect if needed. It's essentially a hybrid model: Architect handles the routing and data collection, and the AI Guide takes over for the more dynamic, conversational part.

    If you're using Agentic Virtual Agents, there's also a similar option with the Call Agentic Virtual Agent action that supports passing input/output variables the same way. Overall, the key idea is that context is passed explicitly between components, which gives you a lot of control and keeps the experience smooth without repeating questions.
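As a rough mental model of the hybrid pattern described above, the flow can be sketched as two steps with explicit context passing between them. Everything here is illustrative (the function names and variable keys are made up, not Genesys Cloud API calls); the point is only that the Guide starts with the full context instead of re-asking:

```python
# Conceptual sketch of the hybrid pattern: Architect detects the intent
# and disambiguates, then hands explicit input variables to the Guide,
# which can return output variables the same way.

def architect_flow(utterance: str) -> dict:
    """Simulate Architect-side intent detection and disambiguation."""
    context = {"intent": "password_reset"}
    # Disambiguation step (e.g. an Ask for Slot action in Architect):
    context["passwordContext"] = "mobile" if "app" in utterance else "website"
    return context

def call_guide(input_variables: dict) -> dict:
    """Stand-in for the 'Call Guide' action: receives explicit context."""
    # The Guide starts with full context and need not re-ask questions.
    topic = f"{input_variables['intent']}:{input_variables['passwordContext']}"
    return {"resolvedTopic": topic, "handledByGuide": True}

ctx = architect_flow("I can't log in to the app")
result = call_guide(ctx)
print(result["resolvedTopic"])  # password_reset:mobile
```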

    Hope this helps!



    ------------------------------
    Cameron
    Online Community Manager/Moderator
    ------------------------------



  • 3.  RE: Send an enriched prompt to the Virtual Agent

    Posted 5 days ago

    Hi Cameron,
    thanks for the detailed reply - that's very helpful and aligns with the general orchestration pattern we had in mind.

    We are indeed following the approach you described: handling intent detection and guided disambiguation in Architect, storing contextual information in variables, and passing that context into the AI Guide via Call Guide input variables.

    The challenge we are currently facing is around how and when the Knowledge Base is actually queried once the AI Guide is invoked.

    From our testing, it appears that:

    • there is no explicit or deterministic way to invoke the KB from within the AI Guide;
    • the KB is queried implicitly by the Guide only when no other instruction applies;
    • most importantly, this implicit KB query seems to occur only in response to a new user utterance.

    In practice, this means that even if we pass a fully enriched context (from Architect variables) into the AI Guide, the Guide does not retrieve KB content unless it receives a conversational input from the user - effectively behaving as if a "wait for input" step were required.

    This makes it difficult to implement a true disambiguation-driven handoff, where Architect collects and resolves ambiguity upfront and then hands control to the Virtual Agent to proceed directly with a KB-guided resolution, without re-prompting the user.

    Given this behavior, I wanted to clarify:

    • is this dependency on a user input to trigger KB retrieval an expected / current design behavior of AI Guides?
    • are there any recommended patterns to leverage KB retrieval using only pre-collected context (Architect variables), without requiring an additional user turn?
    • or should this be considered a current limitation of the platform?

    Appreciate any additional guidance or confirmation - happy to share more details from our tests if useful.

    Thanks again!
    Vittorio



    ------------------------------
    Vittorio Iessi
    Senior Consultant
    ------------------------------