Genesys Cloud - Main


  • 1.  Send an enriched prompt to the Virtual Agent

    Posted 8 hours ago

    Hi everyone,

    I'm looking for guidance around a specific enterprise orchestration pattern involving Architect and the Virtual Agent, and whether there is a supported way to satisfy this requirement in Genesys Cloud.

    High‑level flow

    1. A customer interacts with a Digital Bot.
    2. Architect detects a generic intent (for example: password).
    3. The Architect task associated with the intent starts a guided disambiguation dialog, asking for additional structured information, such as:

      "Do you need to reset the password for the website or for the mobile app?"

    4. The customer's response is captured in an Architect variable (e.g. passwordContext = website | mobile).
    5. The flow then hands off to the Virtual Agent, sending a prompt enriched with this additional context, so the Virtual Agent has a clearer and more constrained understanding of the request.

    Conceptual example of the prompt passed to the Virtual Agent:

    The customer is asking about a password reset.
    The reset applies to the mobile application.
    Please guide the customer through the correct resolution steps.
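Conceptually, the enrichment step amounts to templating the disambiguation result into the prompt before handoff. A minimal sketch in Python (variable names are hypothetical; in practice Architect assembles this with its own actions and expressions, not custom code):

```python
# Illustrative only: context collected during the Architect disambiguation dialog.
# In the platform this lives in flow variables, not Python.
context = {
    "intent": "password_reset",
    "passwordContext": "mobile",  # "website" | "mobile", captured from the customer
}

TARGET_LABELS = {"website": "the website", "mobile": "the mobile application"}

def build_enriched_prompt(ctx):
    """Template the collected context into the prompt handed to the Virtual Agent."""
    target = TARGET_LABELS[ctx["passwordContext"]]
    return (
        "The customer is asking about a password reset.\n"
        f"The reset applies to {target}.\n"
        "Please guide the customer through the correct resolution steps."
    )

print(build_enriched_prompt(context))
```

The point of the sketch is only that the prompt becomes a function of the structured context, so the Virtual Agent starts from a constrained request rather than the raw utterance.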

    Is there a supported way to enrich the prompt sent to the Virtual Agent with information collected in Architect (variables, disambiguation results, context), after intent resolution?

    I know that Genesys Cloud provides the ability to ask the customer "which KB article are you interested in?", where the articles are those retrieved through the retrieval-augmented generation (RAG) process. I'd be interested to see whether disambiguation can be managed in another way.

    Thanks


    #ConversationalAI(Bots,VirtualAgent,etc.)

    ------------------------------
    Vittorio Iessi
    Senior Consultant
    ------------------------------


  • 2.  RE: Send an enriched prompt to the Virtual Agent

    Posted 8 hours ago

    Hello Vittorio, 

    Great question! This is actually a really solid use case, and yes, there is a supported way to do exactly what you're describing. The typical approach is to handle the initial intent detection and any disambiguation in Architect first, using actions like Ask for Slot, Yes/No, or menus; store that information in variables; and then pass it into your Virtual Agent using the "Call Guide" action.

    When you configure that action, you can map your Architect variables directly to input variables in your AI Guide, so the Guide starts with all the context you already collected and doesn't need to ask the same questions again. This works well for scenarios like narrowing down "password" issues into something more specific before handing off to the Guide.

    From there, the Guide can use that context to give more tailored responses and even pass data back to Architect if needed. It's essentially a hybrid model: Architect handles the routing and data collection, and the AI Guide takes over for the more dynamic, conversational part.

    If you're using Agentic Virtual Agents, there's also a similar option with the Call Agentic Virtual Agent action that supports passing input/output variables the same way. Overall, the key idea is that context is passed explicitly between components, which gives you a lot of control and keeps the experience smooth without repeating questions.
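To make the "context is passed explicitly" idea concrete, here is a hedged sketch of the variable mapping as plain functions. All field names are hypothetical; the real mapping is configured in the Call Guide (or Call Agentic Virtual Agent) action's settings, not written as code:

```python
# Hypothetical handoff: Architect flow variables mapped to a Guide's input
# variables, and Guide output variables mapped back for downstream routing.
architect_vars = {"intent": "password", "passwordContext": "website"}

def to_guide_inputs(flow_vars):
    """Map flow variables to the Guide's input variables (names illustrative)."""
    return {
        "topic": flow_vars["intent"],
        "target_system": flow_vars["passwordContext"],
    }

def from_guide_outputs(outputs):
    """Map Guide output variables back into flow variables after the Guide runs."""
    return {"resolution": outputs.get("resolution", "unresolved")}

inputs = to_guide_inputs(architect_vars)
# ... the Guide runs with `inputs` and eventually returns its outputs ...
flow_update = from_guide_outputs({"resolution": "reset_link_sent"})
```

Because the mapping is explicit in both directions, neither side has to re-ask questions the other already answered, which is what keeps the handoff smooth.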

    Hope this helps!



    ------------------------------
    Cameron
    Online Community Manager/Moderator
    ------------------------------