
Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos

[🏠 Sa2VA] [📜 arXiv] [🤗 HuggingFace] [🎥 Introduction] [🧑‍💻 GitHub] [Gradio Demo (Ours internal: Sa2VA-4B)] [Gradio Demo (By HuggingFace Official)] [🤖 Replicate Demo]

Haobo Yuan1* · Xiangtai Li2*† · Tao Zhang2,3* · Zilong Huang2 · Shilin Xu4 · Shunping Ji3 · Yunhai Tong4 ·

Lu Qi2 · Jiashi Feng2 · Ming-Hsuan Yang1

1UC Merced    2ByteDance Seed    3WHU    4PKU

† Project lead. * The first three authors contributed equally to this work.

Teaser

Open-source progress

  • Release Qwen2.5-VL related models.
  • Release open-sourced training datasets.
  • Release the Ref-SAV dataset.
  • Release evaluation code for each dataset.
  • Release the 1B, 4B, 8B, and 26B models.
  • Release training code for the 1B, 4B, and 8B models.
  • Release inference and test code.
  • Release demo code.

Overview

This repository contains the code for the paper "Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos".

Sa2VA is the first unified model for the dense grounded understanding of both images and videos. Unlike existing multi-modal large language models, which are often limited to specific modalities and tasks, Sa2VA supports a wide range of image and video tasks, including referring segmentation and conversation, with minimal one-shot instruction tuning. Sa2VA combines SAM-2, a foundation video segmentation model, with LLaVA, an advanced vision-language model, and unifies text, image, and video into a shared LLM token space.

Model Zoo

We provide the following models:

| Model Name | Base MLLM | Language Part | HF Link |
| :--- | :--- | :--- | :--- |
| Sa2VA-1B | InternVL2.5-1B | Qwen2.5-0.5B-Instruct | 🤗 link |
| Sa2VA-4B | InternVL2.5-4B | Qwen2.5-3B-Instruct | 🤗 link |
| Sa2VA-8B | InternVL2.5-8B | internlm2_5-7b-chat | 🤗 link |
| Sa2VA-26B | InternVL2.5-26B | internlm2_5-20b-chat | 🤗 link |
| Sa2VA-InternVL3-2B | InternVL3-2B | Qwen2.5-1.5B | 🤗 link |
| Sa2VA-InternVL3-8B | InternVL3-8B | Qwen2.5-7B | 🤗 link |
| Sa2VA-InternVL3-14B | InternVL3-14B | Qwen2.5-14B | 🤗 link |
| Sa2VA-Qwen2_5-VL-3B | Qwen2.5-VL-3B-Instruct | Qwen2.5-3B | 🤗 link |
| Sa2VA-Qwen2_5-VL-7B | Qwen2.5-VL-7B-Instruct | Qwen2.5-7B | 🤗 link |
| Sa2VA-Qwen3-VL-4B | Qwen3-VL-4B-Instruct | Qwen3-4B | 🤗 link |

🤗 Gradio Demos

We provide a script that implements an interactive chat interface with Gradio (this requires installing gradio). You can use it to quickly set up a local chat demo.

PYTHONPATH=. python projects/sa2va/gradio/app.py ByteDance/Sa2VA-4B

🚀 Quick Start

Our Sa2VA model is available on 🤗 HuggingFace. With very few steps, you can try it on your own data. You can install only demo/requirements.txt to avoid installing the training-only packages.

Option 1 - Scripts:

Suppose you have a folder (PATH_TO_FOLDER) that contains the frames of a video. You can use the following script to chat with the Sa2VA model or segment objects in the video.

python demo/demo.py PATH_TO_FOLDER --model_path ByteDance/Sa2VA-8B --work-dir OUTPUT_DIR --text "<image>Please describe the video content."

If the output contains segmentation results, they will be saved to OUTPUT_DIR.

Option 2 - Jupyter Notebook:

Please refer to demo.ipynb.
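
Alternatively, you can load the released HuggingFace weights directly from Python. The snippet below is a minimal sketch for single-image referring segmentation; the predict_forward call and its return keys follow the usage shown on the HuggingFace model cards and may differ between checkpoints, so please check the card of the model you use.

import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

model_path = "ByteDance/Sa2VA-4B"
model = AutoModel.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

image = Image.open("example.jpg").convert("RGB")  # your own image
# "<image>" marks where the image is placed in the prompt.
result = model.predict_forward(
    image=image,
    text="<image>Please segment the person wearing sunglasses.",
    tokenizer=tokenizer,
)
# Keys below follow the model card; adjust if the remote code changes.
print(result["prediction"])              # text answer
masks = result.get("prediction_masks")   # segmentation masks, when a [SEG] token is produced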

🎥 Demo

Demo 1 Input Video (Source: La La Land 2016):

Instruction: "Please segment the girl wearing the yellow dress."

Demo 2 Input Video (Source: La La Land 2016):

Instruction: "Please segment the main character."

Demo 3 Input Video (Source: Internet):

Instruction: "Please segment the person wearing sun glasses."

Demo 4 Input Video (Source: Internet):

Instruction: "Instruction: "Please segment the singing girl."

Demo 5 Input Video:

Instruction: "What is the atmosphere of the scene?"

Answer: "The scene has a dark and mysterious atmosphere, with the men dressed in suits and ties, and the dimly lit room."

Training

Installation

We provide two installation options. Using uv is recommended for a faster and more reliable setup.

Option 1: Using uv (Recommended)

First, install uv:

curl -LsSf https://astral.sh/uv/install.sh | sh

Then, create a virtual environment and sync the dependencies:

uv sync --extra=latest # or uv sync --extra=legacy for Sa2VA based on InternVL2/2.5
source .venv/bin/activate
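
After activating the environment, a quick sanity check (a generic snippet, not a repository script) confirms that the CUDA build of PyTorch is installed and your GPUs are visible before launching training:

import torch

# Verify the PyTorch build and that CUDA devices are visible.
print(torch.__version__)
print("CUDA available:", torch.cuda.is_available(), "| GPUs:", torch.cuda.device_count())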

Option 2: Using conda and pip

Deprecated.

Pretrained Model Preparation

You are expected to download the following pretrained models and place them in the ./pretrained directory. The remaining models can be downloaded from the InternVL2.5 HuggingFace collection:

./ # project root
pretrained/
├── sam2_hiera_large.pt
├── InternVL2_5-1B
├── InternVL2_5-4B
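
If you prefer to script the downloads, the sketch below (assuming the OpenGVLab/InternVL2_5-1B and OpenGVLab/InternVL2_5-4B repository ids on HuggingFace) populates ./pretrained; the SAM-2 checkpoint is released separately by the official SAM-2 repository:

from huggingface_hub import snapshot_download

# Assumed repository ids; adjust this list to the checkpoints you actually need.
for repo_id in ["OpenGVLab/InternVL2_5-1B", "OpenGVLab/InternVL2_5-4B"]:
    snapshot_download(repo_id=repo_id, local_dir=f"pretrained/{repo_id.split('/')[-1]}")

# Download sam2_hiera_large.pt from the official SAM-2 release and place it at
# pretrained/sam2_hiera_large.pt.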
Data Preparation

Please download the training datasets and place them in the data directory. The download link is here.

Put the zip files directly into the data directory and unzip them there. For example, after downloading video_datas_mevis.zip, unzip it inside the data directory:

unzip video_datas_mevis.zip
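
If you downloaded several archives, a small helper like the one below (a sketch, not a repository script) unzips everything in place:

import glob
import zipfile

# Extract every downloaded archive inside the data directory.
for path in glob.glob("data/*.zip"):
    with zipfile.ZipFile(path) as zf:
        zf.extractall("data")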

The final data structure should be like:

data/
├── video_datas
|   ├── revos
|   ├── mevis
|   ├── davis17
|   ├── chat_univi
|   ├── sam_v_full # [!important] please download this from sam-2 directly.
|   └── Ref-SAV.json
├── ref_seg
|   ├── refclef
|   ├── refcoco
|   ├── refcoco+
|   ├── refcocog
|   ├── 
├── glamm_data
|   ├── images
|   └── annotations
├── osprey-724k
|   ├── Osprey-724K
|   └── coco
├── llava_data
|   ├── llava_images
|   ├── LLaVA-Instruct-150K
|   └── LLaVA-Pretrain

sam_v_full is the SA-V dataset, which is not included in the download link. You can download it from here.

Training Script

Please run the following script to train with 8 GPUs; we suggest using at least 8 A100 GPUs:

bash tools/dist.sh train projects/sa2va/configs/sa2va_in30_8b.py 8
Fine-tuning

We provide a simple example for fine-tuning Sa2VA on an image referring segmentation task. For detailed instructions, please refer to our fine-tuning guide.

The example dataset is constructed from a few images from RefCOCO. To fine-tune on your own data, you can organize it in the same format as our example annotations.json. You can download the example dataset from Hugging Face.

For other types of data, you may need to customize the dataloader and configuration. Please refer to projects/sa2va/datasets/sa2va_data_finetune.py and projects/sa2va/configs/sa2va_finetune.py for guidance.

Convert the trained model to HuggingFace format

Please run the following script to convert the trained checkpoint:

python tools/convert_to_hf.py projects/sa2va/configs/sa2va_in30_8b.py --pth-model PATH_TO_PTH_MODEL --save-path PATH_TO_SAVE_FOLDER
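
The converted folder can then be loaded like the released checkpoints, for example (mirroring the loading sketch in the Quick Start section; PATH_TO_SAVE_FOLDER is the folder produced above):

import torch
from transformers import AutoModel, AutoTokenizer

# Load the converted checkpoint from its local folder.
model = AutoModel.from_pretrained("PATH_TO_SAVE_FOLDER", torch_dtype=torch.bfloat16, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("PATH_TO_SAVE_FOLDER", trust_remote_code=True)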

Evaluation

Image/Video Referring Segmentation Evaluation

Please use the following scripts to test Sa2VA on image and video referring segmentation benchmarks with 8 GPUs.

You can use the following command to evaluate Sa2VA on all segmentation benchmarks at once:

python projects/sa2va/evaluation/run_all_evals.py /path/to/SA2VA/model --gpus 8

Or you can evaluate Sa2VA on a single segmentation benchmark (such as ReVOS):

./projects/llava_sam2/evaluation/dist_test.sh projects/llava_sam2/evaluation/ref_vos_eval.py path-to-hf-model 8 --work-dir path-to-output
Image/Video QA Evaluation

We use sa2va_eval (a modified version of VLMEvalKit) for Image/Video Chat benchmark evaluation.

Single-GPU Evaluation Example:

python run.py --data MMBench_DEV_EN MME SEEDBench_IMG --model Sa2VA-1B --verbose

Multi-GPU Evaluation Example:

torchrun --nproc-per-node=8 run.py --data MMBench_DEV_EN SEEDBench_IMG MMStar AI2D_TEST MMMU_DEV_VAL ScienceQA_TEST --model Sa2VA-4B Sa2VA-8B --verbose

References

If you find this repository useful, please consider citing the following paper:

@article{sa2va,
  title={Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos},
  author={Yuan, Haobo and Li, Xiangtai and Zhang, Tao and Huang, Zilong and Xu, Shilin and Ji, Shunping and Tong, Yunhai and Qi, Lu and Feng, Jiashi and Yang, Ming-Hsuan},
  journal={arXiv},
  year={2025}
}
