robust-video-matting

Need to separate the background from the foreground of your video? The Robust Video Matting model will do the magic for you so you can keep being creative.

Required: 25,000 $wRAI
21k runs
Input

input_video
Video to segment.
output_type
Output format for the matted video (for example, green-screen).
Example input
output_type: green-screen
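A green-screen output composites the predicted foreground over a solid green background using the standard alpha compositing equation. A minimal NumPy sketch of that step (array names and shapes are illustrative, not the platform's code):

```python
import numpy as np

# Hypothetical matting outputs: fgr is the foreground RGB image,
# pha is the per-pixel alpha matte, both scaled to [0, 1].
h, w = 4, 4
fgr = np.random.rand(h, w, 3)        # foreground RGB
pha = np.random.rand(h, w, 1)        # alpha matte

green = np.array([0.0, 1.0, 0.0])    # solid green background color

# Standard compositing: foreground weighted by alpha, background by (1 - alpha).
com = fgr * pha + green * (1 - pha)  # shape (h, w, 3), values stay in [0, 1]
```

Swapping `green` for any other image of the same size yields an arbitrary background replacement instead of a green screen.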

Readme

RVM is specifically designed for robust human video matting. Unlike existing neural models that process frames as independent images, RVM uses a recurrent neural network to process videos with temporal memory. RVM can perform matting in real time on any video without additional inputs. It achieves 4K at 76 FPS and HD at 104 FPS on an Nvidia GTX 1080 Ti GPU. The project was developed at ByteDance Inc.
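The recurrent design means inference carries hidden state across frames instead of treating each frame independently. A minimal sketch of that loop, with a hypothetical stub standing in for the network (the real model's interface and state tensors differ):

```python
def matting_model(frame, rec):
    # Hypothetical stand-in for the RVM network: returns a foreground,
    # an alpha matte, and an updated recurrent state.
    fgr = frame
    pha = [[1.0 for _ in row] for row in frame]
    new_rec = 0 if rec is None else rec + 1  # toy "memory": a frame counter
    return fgr, pha, new_rec

def mat_video(frames):
    rec = None                 # recurrent state starts empty
    outputs = []
    for frame in frames:       # frames must be processed in temporal order
        fgr, pha, rec = matting_model(frame, rec)
        outputs.append((fgr, pha))
    return outputs
```

The key point is that `rec` is threaded from one frame to the next, which is what gives the model temporal memory and stable mattes without auxiliary inputs such as a pre-captured background.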

Showreel

Watch the showreel video to see the model's performance.