Legacy Dev Forum Posts

How I created automated tests that use WebRTC softphone

    Lucas1 | 2024-10-24 10:09:51 UTC | #1

    Hey,

    I just thought I'd share a blog article I posted today about how I wrote (and continue to write) automated tests that simulate a call between an agent and a customer in order to test a feature in the agent's UI.

    https://makingchatbots.com/p/automated-tests-using-genesys-clouds

    Although I've created tests at varying levels, I wanted a few end-to-end tests to give me confidence that I am correctly understanding the structure and lifecycle of:

    1. Live transcript events
    2. Agent's active conversations (and attributes of conversations)
    3. Embedded Framework events

    All of which my feature uses.

    Unfortunately, though, these are unlikely to become CI/CD tests, since they rely on browser-based auth of the user.
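
    For the live transcript events, subscribing through the notification service boils down to something like the sketch below. Treat it as a sketch only: the access token and conversation ID are assumed to come from elsewhere (e.g. the browser-based auth mentioned above), the region host is a placeholder, and the exact topic name is worth checking against the available notification topics list.

    ```typescript
    // Sketch only: subscribe to live transcript events for a known conversation
    // via the Genesys Cloud notification service. The access token, conversation
    // ID and region host are all assumed inputs here (Node 18+ for global fetch).
    import WebSocket from "ws";

    const API_HOST = "https://api.mypurecloud.com"; // placeholder: use your region's API host

    interface NotificationChannel {
      id: string;
      connectUri: string;
    }

    async function listenForTranscripts(accessToken: string, conversationId: string): Promise<void> {
      const authHeaders = { Authorization: `Bearer ${accessToken}` };

      // 1. Create a notification channel for this client.
      const channelResponse = await fetch(`${API_HOST}/api/v2/notifications/channels`, {
        method: "POST",
        headers: authHeaders,
      });
      const channel = (await channelResponse.json()) as NotificationChannel;

      // 2. Subscribe the channel to the conversation's transcription topic
      //    (verify the exact topic name against the available topics list).
      await fetch(`${API_HOST}/api/v2/notifications/channels/${channel.id}/subscriptions`, {
        method: "PUT",
        headers: { ...authHeaders, "Content-Type": "application/json" },
        body: JSON.stringify([{ id: `v2.conversations.${conversationId}.transcription` }]),
      });

      // 3. Connect the channel's websocket and log transcript events as they arrive.
      const socket = new WebSocket(channel.connectUri);
      socket.on("message", (raw) => {
        const message = JSON.parse(raw.toString());
        if (typeof message.topicName === "string" && message.topicName.endsWith(".transcription")) {
          console.log("Transcript event:", JSON.stringify(message.eventBody, null, 2));
        }
      });
    }
    ```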

    Q. Is there a better way to stream audio in/out of the WebRTC softphone?

    In my solution I'm using Puppeteer to override the behaviour of the browser's MediaDevices API so I can intercept the audio sent to and from the agent, but I've always wondered if there is a better way.
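
    To make that a bit more concrete, here's a stripped-down sketch of the idea: before any page script runs, getUserMedia is replaced so the softphone picks up a synthetic audio stream instead of a real microphone. Capturing the audio coming back to the agent (e.g. by also wrapping RTCPeerConnection) is left out here.

    ```typescript
    // Sketch only: launch the agent's browser with Puppeteer and swap the real
    // microphone for a synthetic audio stream before any page script runs.
    import puppeteer from "puppeteer";

    async function launchAgentBrowser() {
      const browser = await puppeteer.launch({
        headless: false,
        // Auto-accept the microphone permission prompt so the softphone can start.
        args: ["--use-fake-ui-for-media-stream"],
      });
      const page = await browser.newPage();

      await page.evaluateOnNewDocument(() => {
        const originalGetUserMedia = navigator.mediaDevices.getUserMedia.bind(navigator.mediaDevices);

        navigator.mediaDevices.getUserMedia = async (constraints) => {
          // Only intercept audio-only requests; anything else falls through untouched.
          if (!constraints || !constraints.audio || constraints.video) {
            return originalGetUserMedia(constraints);
          }

          // Fake "microphone": a quiet tone routed into a MediaStream that the
          // softphone treats as the agent's audio. Swap the oscillator for a
          // buffer of recorded speech to drive transcription in a real test.
          const audioContext = new AudioContext();
          const oscillator = audioContext.createOscillator();
          const gain = audioContext.createGain();
          const destination = audioContext.createMediaStreamDestination();
          oscillator.frequency.value = 440;
          gain.gain.value = 0.05;
          oscillator.connect(gain).connect(destination);
          oscillator.start();

          return destination.stream;
        };
      });

      return { browser, page };
    }
    ```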

    I've noticed the 'WebRTC Media Helper' toggle in the Phone Management area before, and I'm sure I saw a reference to something similar when logging out the Embedded Framework object, but I can't find any docs referencing JS SDK APIs for it.

    Sorry if I posted this in the wrong section.


    system | 2024-11-23 15:00:23 UTC | #2

    This topic was automatically closed 31 days after the last reply. New replies are no longer allowed.


    This post was migrated from the old Developer Forum.

    ref: 30029