Convert your adult VR videos into passthrough AR videos using advanced background removal.
- Difference from v1: v3 solves mask merge jitter problems by implementing a custom ArVideoWriter. See vr2ar-converter.
- Difference from v2: v3 uses more modern models for background removal (MatAnyone) and replaces the v2 container. See vr2ar-converter-v2.
The only supported deployment method is via Docker to ensure all CUDA, FFmpeg, and model dependencies are correctly bundled.
Depending on your Docker/NVIDIA setup (CDI vs standard), you may need to use either the cuda1 or cuda2 example. Copy the appropriate one for your hardware:
```sh
cp docker-compose.cuda1.yaml.example docker-compose.yaml
# OR (for CDI setups)
cp docker-compose.cuda2.yaml.example docker-compose.yaml
```

Deploy using Docker Compose:

```sh
docker compose up -d
```

- Upload: Upload your VR video chunks (recommended length < 3 minutes).
- Configuration: Select the source projection (e.g., Equirectangular `eq`) and adjust the mask size based on your available VRAM (e.g., 1440px requires ~20GB).
- Masking:
  - Extract projection frames.
  - Generate initial masks using text prompts or point selections.
  - Refine masks using the "add" or "subtract" tools.
- Process: Add the video to the job queue. The system will propagate the mask temporally and merge the result.
- Download: Download the converted AR video once processing is complete.
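The upload step above recommends chunks under 3 minutes. A long recording can be pre-split losslessly before upload; the sketch below builds an ffmpeg segment-muxer command for that. The ffmpeg invocation, the filenames, and the helper name are assumptions for illustration, not part of this project:

```python
import subprocess  # needed only if you uncomment the run line below


def build_split_cmd(src: str, chunk_seconds: int = 180) -> list[str]:
    """Build an ffmpeg command that stream-copies `src` into
    chunks of at most `chunk_seconds` (default 180 s = 3 min)."""
    return [
        "ffmpeg", "-i", src,
        "-c", "copy",              # no re-encode: fast and lossless
        "-f", "segment",           # ffmpeg's segment muxer
        "-segment_time", str(chunk_seconds),
        "-reset_timestamps", "1",  # each chunk's timestamps start at 0
        "chunk_%03d.mp4",          # chunk_000.mp4, chunk_001.mp4, ...
    ]


if __name__ == "__main__":
    cmd = build_split_cmd("my_vr_video.mp4")
    print(" ".join(cmd))
    # Uncomment to actually split (requires ffmpeg on PATH):
    # subprocess.run(cmd, check=True)
```

Stream copy keeps the original encoding intact, so splitting is fast and introduces no quality loss; chunks cut this way land on keyframe boundaries, which is usually fine for upload-sized pieces.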
- NVIDIA GPU with CUDA support.
- Docker and NVIDIA Container Toolkit.
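The two compose examples differ mainly in how the GPU is handed to the container. As a rough sketch of what such a file can contain (service and image names here are placeholders, not the project's actual ones), a standard NVIDIA Container Toolkit setup reserves the device through the `nvidia` driver, while a CDI setup addresses it by its CDI name:

```yaml
services:
  vr2ar:                        # placeholder service name
    image: vr2ar-converter:v3   # placeholder image name
    deploy:
      resources:
        reservations:
          devices:
            # Standard NVIDIA Container Toolkit setup:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
            # A CDI setup would instead request the device like:
            # - driver: cdi
            #   device_ids:
            #     - nvidia.com/gpu=0
```

Both forms are part of the Compose specification's device-request syntax; which one works depends on how your NVIDIA Container Toolkit is configured on the host.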