Class VirtualBackgroundProcessor

The VirtualBackgroundProcessor, when added to a VideoTrack, replaces the background in each video frame with a given image, and leaves the foreground (person(s)) untouched. Each instance of VirtualBackgroundProcessor should be added to only one VideoTrack at a time to prevent overlapping of image data from multiple VideoTracks.

Example

import { createLocalVideoTrack } from 'twilio-video';
import { Pipeline, VirtualBackgroundProcessor } from '@twilio/video-processors';

let virtualBackground;
const img = new Image();

img.onload = () => {
  virtualBackground = new VirtualBackgroundProcessor({
    assetsPath: 'https://my-server-path/assets',
    backgroundImage: img,
    pipeline: Pipeline.WebGL2,

    // Desktop Safari and iOS browsers do not support SIMD.
    // Set debounce to true to achieve acceptable performance.
    debounce: isSafari(),
  });

  virtualBackground.loadModel().then(() => {
    createLocalVideoTrack({
      // Increasing the capture resolution decreases the output FPS,
      // especially on browsers that do not support SIMD,
      // such as desktop Safari and iOS browsers, or on Chrome
      // with capture resolutions above 640x480 for webgl2.
      width: 640,
      height: 480,
      // Any frame rate above 24 fps on desktop browsers increases CPU
      // usage without a noticeable increase in quality.
      frameRate: 24
    }).then(track => {
      track.addProcessor(virtualBackground, {
        inputFrameBufferType: 'video',
        outputFrameBufferContextType: 'webgl2',
      });
    });
  });
};
img.src = '/background.jpg';
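
Because each VirtualBackgroundProcessor instance should be attached to only one VideoTrack at a time, create a separate processor for every track you want to process. A minimal sketch, assuming two local tracks and the already-loaded img from the example above (the attachVirtualBackground helper is hypothetical):

import { createLocalVideoTrack } from 'twilio-video';
import { Pipeline, VirtualBackgroundProcessor } from '@twilio/video-processors';

// Hypothetical helper: builds and loads a dedicated processor, then
// attaches it to a newly created local video track.
async function attachVirtualBackground(img) {
  const processor = new VirtualBackgroundProcessor({
    assetsPath: 'https://my-server-path/assets',
    backgroundImage: img,
    pipeline: Pipeline.WebGL2,
  });
  await processor.loadModel();

  const track = await createLocalVideoTrack({ width: 640, height: 480, frameRate: 24 });
  track.addProcessor(processor, {
    inputFrameBufferType: 'video',
    outputFrameBufferContextType: 'webgl2',
  });
  return track;
}

// One processor instance per VideoTrack; instances are never shared.
attachVirtualBackground(img).then(cameraTrack => { /* publish cameraTrack */ });
attachVirtualBackground(img).then(secondTrack => { /* publish secondTrack */ });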

Hierarchy

  • BackgroundProcessor
    • VirtualBackgroundProcessor

Constructors

  • new VirtualBackgroundProcessor(options): VirtualBackgroundProcessor
  • Construct a VirtualBackgroundProcessor. The options object accepts the properties shown in the example above (assetsPath, backgroundImage, and other optional settings such as pipeline and debounce).

Accessors

  • get backgroundImage(): HTMLImageElement
  • The HTMLImageElement representing the current background image.

    Returns HTMLImageElement

  • set backgroundImage(image): void
  • Set an HTMLImageElement as the new background image. An error will be raised if the image hasn't been fully loaded yet. Additionally, when the image is loaded from a different origin, it must follow CORS security guidelines; failing to do so will result in an empty output frame.

    Parameters

    • image: HTMLImageElement

    Returns void
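
    A minimal sketch of swapping the background at runtime, assuming the virtualBackground processor from the example above and an image host that allows cross-origin requests (the URL is illustrative):

    const newBackground = new Image();
    // Request the image with CORS enabled so a cross-origin image
    // does not produce an empty output frame.
    newBackground.crossOrigin = 'anonymous';
    newBackground.onload = () => {
      // The image is fully loaded here, so the setter will not raise an error.
      virtualBackground.backgroundImage = newBackground;
    };
    newBackground.src = 'https://my-server-path/assets/beach.jpg';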

  • get fitType(): ImageFit
  • The current [[ImageFit]] used for positioning the background image in the viewport.

    Returns ImageFit

  • set fitType(fitType): void
  • Set a new [[ImageFit]] to be used for positioning the background image in the viewport.

    Parameters

    • fitType: ImageFit

    Returns void
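
    For example, assuming ImageFit is exported by @twilio/video-processors alongside the processor, the fit can be changed at runtime on the virtualBackground instance from the example above:

    import { ImageFit } from '@twilio/video-processors';

    // Scale the background image to cover the whole viewport,
    // cropping it when its aspect ratio differs from the frame's.
    virtualBackground.fitType = ImageFit.Cover;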

  • get maskBlurRadius(): number
  • The current blur radius when smoothing out the edges of the person's mask.

    Returns number

  • set maskBlurRadius(radius): void
  • Set a new blur radius to be used when smoothing out the edges of the person's mask.

    Parameters

    • radius: number

    Returns void
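
    A small sketch of tuning the edge smoothing at runtime on the virtualBackground instance from the example above (the radius value is illustrative; larger values soften the person's outline at some extra CPU cost):

    // Inspect the current radius, then soften the mask edges slightly.
    console.log(virtualBackground.maskBlurRadius);
    virtualBackground.maskBlurRadius = 8;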

Methods

  • loadModel(): Promise<void>
  • Load the segmentation model. Call this method before attaching the processor to ensure video frames are processed correctly.

    Returns Promise<void>
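
    A minimal sketch of this ordering with async/await, assuming the virtualBackground processor from the example above and an existing LocalVideoTrack named track (both variable names are assumptions); the fallback on failure is illustrative:

    async function startVirtualBackground(track) {
      try {
        // The model must finish loading before the processor is attached;
        // otherwise frames cannot be segmented correctly.
        await virtualBackground.loadModel();
        track.addProcessor(virtualBackground, {
          inputFrameBufferType: 'video',
          outputFrameBufferContextType: 'webgl2',
        });
      } catch (error) {
        // Illustrative fallback: keep publishing the unprocessed track.
        console.error('Failed to load the segmentation model:', error);
      }
    }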

  • processFrame(inputFrameBuffer, outputFrameBuffer): Promise<void>
  • Apply a transform to the background of an input video frame, leaving the foreground (person(s)) untouched. Any exception detected will result in the frame being dropped.

    Parameters

    • inputFrameBuffer: OffscreenCanvas | HTMLCanvasElement | HTMLVideoElement

      The source of the input frame to process.

      OffscreenCanvas - Good for canvas-related processing that can be rendered off screen. Only works when using [[Pipeline.Canvas2D]].

      HTMLCanvasElement - This is recommended on browsers that don't support OffscreenCanvas, or if you need to render the frame on the screen. Only works when using [[Pipeline.Canvas2D]].

      HTMLVideoElement - Recommended when using [[Pipeline.WebGL2]] but works for both [[Pipeline.Canvas2D]] and [[Pipeline.WebGL2]].

    • outputFrameBuffer: HTMLCanvasElement

      The output frame buffer to use to draw the processed frame.

    Returns Promise<void>
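
    processFrame is invoked by the SDK for every captured frame; the application only selects which buffer types it receives through the options passed to addProcessor. A sketch of a Canvas2D configuration matching the guidance above, assuming the 'offscreencanvas' and '2d' option values mirror the buffer types listed here, and reusing img and track from the example above:

    const canvasProcessor = new VirtualBackgroundProcessor({
      assetsPath: 'https://my-server-path/assets',
      backgroundImage: img,
      pipeline: Pipeline.Canvas2D,
    });

    canvasProcessor.loadModel().then(() => {
      track.addProcessor(canvasProcessor, {
        // Assumed option values: an OffscreenCanvas input frame buffer
        // and a 2D canvas rendering context for the output.
        inputFrameBufferType: 'offscreencanvas',
        outputFrameBufferContextType: '2d',
      });
    });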
