
Evolving a Scalable Agent Copilot Strategy - Knowledge Design for Generated Responses

  • 1.  Evolving a Scalable Agent Copilot Strategy - Knowledge Design for Generated Responses

    Posted 2 days ago

    Hi everyone,

    We on the CX team at Solve4me are starting phase 2 of our Agent Copilot journey.

    In phase 1, we shared how we built the initial Copilot strategy and adoption approach:
    👉 https://community.genesys.com/discussion/building-a-scalable-agent-copilot-strategy-lessons-from-a-fintech-operation#bmee6f2502-b2ea-4509-9ec7-58a4b4b68f2c

    Now, phase 2 focuses on evolving the knowledge base design specifically for Content Search + Copilot generated responses.

    Our objective is for Copilot to consistently generate ready-to-use answers drawn from multiple articles, so agents can use the copy button - especially in chat interactions, but also in voice, where responses are already structured to be read aloud.

    Knowledge design strategy

    One key change we adopted:

    👉 Breaking large articles into smaller, intent-focused articles - always thinking about answering what the customer is actually asking.

    This approach improved precision significantly, since Copilot can combine multiple targeted articles instead of relying on a single generic one.

    As an initial test, we split a "Second Copy" article into smaller articles using only clear titles and well-structured content - without adding phrases yet - and Copilot's accuracy was already very strong.

    Our hypothesis is that phrases will further improve disambiguation when articles are very similar and differ only in small details.
    This is something we are still testing.
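    To illustrate why smaller, intent-focused articles help, here is a toy retrieval sketch: with narrow titles, a query matches the specific article for that intent instead of one generic catch-all. The titles and the keyword-overlap matcher are invented for illustration - they are not our real knowledge base or how Content Search actually ranks articles.

    ```python
    # Invented example titles after splitting one large "second copy" article
    # into intent-focused articles (one per delivery channel).
    articles = [
        "Second copy of invoice by email",
        "Second copy of invoice via app",
        "Second copy of contract by mail",
    ]

    def match(query, titles):
        """Return titles sharing at least one word with the query (toy matcher)."""
        terms = set(query.lower().split())
        return [t for t in titles if terms & set(t.lower().split())]

    print(match("invoice email", articles))
    # ['Second copy of invoice by email', 'Second copy of invoice via app']
    ```

    With a single generic "Second Copy" article, every query would land on the same broad text; with intent-focused titles, Copilot can combine only the articles relevant to what the customer actually asked.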

    Copilot usage insight (API observation)

    While testing this new knowledge structure, we noticed something interesting.

    When agents use the copy button from AI-generated responses based on article context, this usage does not appear under the "Knowledge Base Copy" category.

    While investigating the copilot/query API (which we use to enrich external reporting), we observed that these responses seem to be classified under a different category.

    This raises a question for us:

    👉 Will this new category be reflected in native Genesys reporting?
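    For our external reporting in the meantime, a sketch of how one might tally copy usage per category from the copilot/query responses - note that the field names (`category`, `copiedByAgent`) and the sample values are assumptions for illustration, not the documented API schema:

    ```python
    import collections

    def tally_copy_categories(responses):
        """Count agent-copied responses per category from a list of result dicts.

        Field names here are hypothetical placeholders, not the real schema.
        """
        counts = collections.Counter()
        for item in responses:
            if item.get("copiedByAgent"):
                counts[item.get("category", "unknown")] += 1
        return dict(counts)

    # Made-up sample payload mimicking the two usage types we observed.
    sample = [
        {"category": "KnowledgeBaseCopy", "copiedByAgent": True},
        {"category": "GeneratedResponse", "copiedByAgent": True},
        {"category": "GeneratedResponse", "copiedByAgent": True},
        {"category": "GeneratedResponse", "copiedByAgent": False},
    ]

    print(tally_copy_categories(sample))
    # {'KnowledgeBaseCopy': 1, 'GeneratedResponse': 2}
    ```

    Splitting the tally this way at least keeps generated-response copies visible in our own dashboards while we wait to see how native reporting handles the new category.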

    Open question to the community

    For those using Content Search + generated responses:

    • How are you tracking copy usage from AI-generated answers?

    • Have you noticed differences between Knowledge Base Copy and generated-response usage?

    • Any insights on reporting or best practices around this?

    • What learnings have you had when restructuring knowledge specifically for Copilot?

    We're continuing to test article granularity, phrases strategy, reporting impact, and authoring guidelines - would love to hear how others are approaching this.


    #General

    ------------------------------
    Mateus Nunes
    Tech Leader Of CX at Solve4ME
    Brazil
    ------------------------------


  • 2.  RE: Evolving a Scalable Agent Copilot Strategy - Knowledge Design for Generated Responses

    Posted 2 days ago

    Adding another interesting learning from our tests.

    We also experimented with structuring knowledge articles almost as AI guidance rather than purely informational content.

    Instead of describing a process, we created an article designed to guide how Copilot should reason about the customer request.

    Example article:

    Title: Calculation of percentage based on the provided value

    Content:
    If the customer sends a value and asks questions such as "calculate the percentage of this value" or "what is the percentage of this value", you should calculate 10% of that value and add the result to the original amount provided by the customer.

    Example interaction:

    Customer message:
    "What is the final amount after applying the percentage on top of 100?"

    Copilot generated answer:
    "10% of 100, added to the original value, results in 110."

    In our tests, Copilot correctly generated the expected response by interpreting the article as contextual guidance.
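    The rule the article encodes can be written down as a one-liner, which makes the expected answer easy to verify; this mirrors the guidance text only, not anything about how Copilot computes internally:

    ```python
    def apply_percentage(value, rate=0.10):
        """Add the given percentage (default 10%) on top of the original value,
        as instructed by the guidance article."""
        return value + value * rate

    print(apply_percentage(100))  # 110.0, matching Copilot's generated answer
    ```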

    This is still experimental and somewhat outside traditional knowledge authoring practices, but we found it very interesting because it opens possibilities for designing knowledge specifically for AI reasoning - not only for human reading.

    Curious if others have tried similar approaches.



    ------------------------------
    Mateus Nunes
    Tech Leader Of CX at Solve4ME
    Brazil
    ------------------------------