Background blur function using MediaPipe's Selfie Segmentation model. Based on volcomix/virtual-background.
Here is the performance observed for the whole rendering pipeline, including inference and post-processing, when using the device camera on a Pixel 3 smartphone (Chrome).
| Model | Input resolution | Backend | Pipeline | FPS |
| --- | --- | --- | --- | --- |
| Meet | 256x144 | WebAssembly | Canvas 2D + CPU | 14 |
| Meet | 256x144 | WebAssembly | WebGL 2 | 16 |
| Meet | 256x144 | WebAssembly SIMD | Canvas 2D + CPU | 26 |
| Meet | 256x144 | WebAssembly SIMD | WebGL 2 | 31 |
| Meet | 160x96 | WebAssembly | Canvas 2D + CPU | 29 |
| Meet | 160x96 | WebAssembly | WebGL 2 | 35 |
| Meet | 160x96 | WebAssembly SIMD | Canvas 2D + CPU | 48 |
| Meet | 160x96 | WebAssembly SIMD | WebGL 2 | 60 |
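For reference, these FPS figures translate into a per-frame time budget. The helper below is an illustrative sketch, not part of the project code:

```typescript
// Illustrative helper (not part of the project): convert a measured FPS
// value into the per-frame time budget in milliseconds.
function frameBudgetMs(fps: number): number {
  if (fps <= 0) throw new RangeError("fps must be positive");
  return 1000 / fps;
}

// At 31 FPS (Meet 256x144, WebAssembly SIMD + WebGL 2) the whole pipeline,
// inference included, must fit in roughly 32 ms per frame.
console.log(frameBudgetMs(31).toFixed(1)); // "32.3"
console.log(frameBudgetMs(60).toFixed(1)); // "16.7"
```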
- Rely on the alpha channel to save texture fetches from the segmentation mask.
- Blur the background image outside of the rendering loop and use it for light wrapping instead of the original background image. This should produce better rendering results for large light wrapping masks.
- Optimize the joint bilateral filter shader to avoid unnecessary variables, calculations, and costly functions like `exp`.
- Try a separable approximation for the joint bilateral filter.
- Compute everything on lower source resolution (scaling down at the beginning of the pipeline).
- Build TFLite and XNNPACK with multithreading support. A few configuration examples can be found in the TensorFlow.js WASM backend.
- Detect WASM features to automatically load the right TFLite WASM runtime. Inspiration could be taken from the TensorFlow.js WASM backend, which is based on GoogleChromeLabs/wasm-feature-detect.
- Experiment with DeepLabv3+ and maybe retrain the `MobileNetv3-small` model directly.
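The separable-approximation idea above can be sketched outside the shader: a 2D k×k blur is replaced by a horizontal then a vertical 1D pass, reducing per-pixel work from k² to 2k samples. The sketch below uses a plain Gaussian kernel on a grayscale image; a joint bilateral filter is not exactly separable, so in the real shader this would only be an approximation, and the function names here are hypothetical:

```typescript
// Separable blur sketch: apply a 1D kernel along rows, then along columns.
// Operates on a row-major grayscale image stored in a Float32Array.
function convolve1D(
  src: Float32Array, width: number, height: number,
  kernel: number[], horizontal: boolean,
): Float32Array {
  const half = (kernel.length - 1) / 2;
  const dst = new Float32Array(src.length);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      let acc = 0;
      for (let k = -half; k <= half; k++) {
        // Clamp sample coordinates at the image border.
        const sx = horizontal ? Math.min(width - 1, Math.max(0, x + k)) : x;
        const sy = horizontal ? y : Math.min(height - 1, Math.max(0, y + k));
        acc += kernel[k + half] * src[sy * width + sx];
      }
      dst[y * width + x] = acc;
    }
  }
  return dst;
}

function separableBlur(src: Float32Array, width: number, height: number): Float32Array {
  const kernel = [0.25, 0.5, 0.25]; // normalized 1D Gaussian-like kernel
  const horizontalPass = convolve1D(src, width, height, kernel, true);
  return convolve1D(horizontalPass, width, height, kernel, false);
}
```

Two 3-tap passes cost 6 samples per pixel instead of 9 for the full 3x3 kernel; the gap widens quickly for larger radii.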
You can learn more about BodyPix, a pre-trained TensorFlow.js segmentation model, in the BodyPix repository.
Here is a technical overview of the background features in Google Meet, which rely on:
- MediaPipe
- WebAssembly
- WebAssembly SIMD
- WebGL
- XNNPACK
- TFLite
- Custom segmentation ML models from Google
- Custom rendering effects through OpenGL shaders from Google
In the project directory, you can run:

### `npm start`

Runs the app in development mode.
Open http://localhost:3000 to view it in the browser.

The page will reload if you make edits.
You will also see any lint errors in the console.

### `npm test`

Launches the test runner in interactive watch mode.
See the section about running tests for more information.

### `npm run build`

Builds the app for production to the `build` folder.
It correctly bundles React in production mode and optimizes the build for the best performance.

The build is minified and the filenames include the hashes.
Your app is ready to be deployed!

See the section about deployment for more information.
Docker is required to build the TensorFlow Lite inference tool locally.

Builds the WASM functions that run inference for the Meet and ML Kit segmentation models. The TFLite tool is built both with and without SIMD support.
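Choosing between the SIMD and non-SIMD builds at load time can be done with a WebAssembly feature check, the same approach GoogleChromeLabs/wasm-feature-detect uses: validating a tiny module that contains a v128 instruction. A sketch, with hypothetical runtime file names:

```typescript
// Detect WebAssembly SIMD support by validating a minimal module that uses
// a v128 instruction (same technique as GoogleChromeLabs/wasm-feature-detect).
const SIMD_TEST_MODULE = new Uint8Array([
  0, 97, 115, 109, 1, 0, 0, 0,                   // "\0asm" header, version 1
  1, 5, 1, 96, 0, 1, 123,                        // type section: () -> v128
  3, 2, 1, 0,                                    // function section: one func of type 0
  10, 10, 1, 8, 0, 65, 0, 253, 15, 253, 98, 11,  // code section: i32.const 0; i8x16.splat; ...
]);

function simdSupported(): boolean {
  // validate() returns true only if the engine accepts the SIMD opcodes.
  return WebAssembly.validate(SIMD_TEST_MODULE);
}

// Hypothetical file names: pick the SIMD build of the TFLite runtime when available.
const tflitePath = simdSupported() ? "tflite-simd.js" : "tflite.js";
```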