
Stable Virtual Camera: Generative View Synthesis with Diffusion Models

Jensen (Jinghao) Zhou*, Hang Gao*
Vikram Voleti, Aaryaman Vasishta, Chun-Han Yao, Mark Boss
Philip Torr, Christian Rupprecht, Varun Jampani

Overview

Stable Virtual Camera (Seva) is a 1.3B-parameter generalist diffusion model for Novel View Synthesis (NVS): given any number of input views and target cameras, it generates 3D-consistent novel views of a scene.

πŸŽ‰ News

  • March 2025 - Stable Virtual Camera is out everywhere.

πŸ”§ Installation

git clone --recursive https://github.com/Stability-AI/stable-virtual-camera
cd stable-virtual-camera
pip install -e .

Please note that you will need python>=3.10 and torch>=2.6.0.
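To quickly confirm your environment meets these requirements, a one-liner like the following works (it simply prints the installed Python and PyTorch versions, plus whether CUDA is visible):

# Sanity check: print Python and torch versions and CUDA availability.
python -c "import sys, torch; print(sys.version.split()[0], torch.__version__, torch.cuda.is_available())"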

Check INSTALL.md for other dependencies if you want to use our demos or develop from this repo. Windows users should use WSL, as Flash Attention isn't supported on native Windows yet.

πŸ“– Usage

You need to authenticate with Hugging Face to download our model weights; once set up, our code will handle the download automatically on your first run. You can authenticate by running

# This will prompt you to enter your Hugging Face credentials.
huggingface-cli login
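If you are running in a non-interactive environment (e.g., a remote server or CI), the CLI also accepts a token directly; HF_TOKEN below is a placeholder for your own Hugging Face access token:

# Non-interactive login; $HF_TOKEN is your Hugging Face access token.
huggingface-cli login --token $HF_TOKEN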

Once authenticated, visit our model card on Hugging Face and enter your information to request access.

We provide two demos for you to interact with Stable Virtual Camera.

πŸš€ Gradio demo

This Gradio demo is a GUI that requires no expert knowledge and is suitable for general users. Simply run

python demo_gr.py
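If the demo runs on a remote machine, Gradio's standard environment variables can bind the server to an external interface. This is a generic Gradio sketch rather than a demo_gr.py-specific option (the script may expose its own flags; see GR_USAGE.md):

# Bind the Gradio server to all interfaces on port 7860 (adjust as needed).
GRADIO_SERVER_NAME=0.0.0.0 GRADIO_SERVER_PORT=7860 python demo_gr.py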

For a more detailed guide, follow GR_USAGE.md.

πŸ’» CLI demo

This CLI demo lets you pass in more options and control the model in a fine-grained way, making it suitable for power users and academic researchers. An example command looks as simple as

python demo.py --data_path <data_path> [additional arguments]
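For illustration only, a concrete call might look like the following; the path is a hypothetical placeholder, and CLI_USAGE.md documents the real options:

# Hypothetical example: <data_path> replaced with a local scene directory.
python demo.py --data_path assets/my_scene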

For a more detailed guide, follow CLI_USAGE.md.

If you are interested in benchmarking NVS models from the command line, check the benchmark directory, which contains details about the scenes, splits, and input/target views reported in the paper.

πŸ“š Citing

If you find this repository useful, please consider giving it a star ⭐ and citing our work.

@article{zhou2025stable,
    title={Stable Virtual Camera: Generative View Synthesis with Diffusion Models},
    author={Jensen (Jinghao) Zhou and Hang Gao and Vikram Voleti and Aaryaman Vasishta and Chun-Han Yao and Mark Boss and Philip Torr and Christian Rupprecht and Varun Jampani},
    journal={arXiv preprint},
    year={2025}
}