2025/03/14: We released LatentSync 1.5, which (1) improves temporal consistency by adding a temporal layer, (2) improves performance on Chinese videos, and (3) reduces the VRAM requirement of stage 2 training to 20 GB through a series of optimizations. Learn more details here.
We present LatentSync, an end-to-end lip-sync method based on audio-conditioned latent diffusion models without any intermediate motion representation, diverging from previous diffusion-based lip-sync methods that rely on pixel-space diffusion or two-stage generation. Our framework can leverage the powerful capabilities of Stable Diffusion to directly model complex audio-visual correlations.
LatentSync uses Whisper to convert the mel-spectrogram into audio embeddings, which are then integrated into the U-Net via cross-attention layers. The reference and masked frames are channel-wise concatenated with the noised latents as the input to the U-Net. During training, we use a one-step method to obtain estimated clean latents from the predicted noise, which are then decoded to obtain the estimated clean frames. The TREPA, LPIPS, and SyncNet losses are added in the pixel space.
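To make the input assembly and the one-step estimation concrete, here is a minimal PyTorch sketch. The tensor shapes and function names are illustrative assumptions, not the repository's actual code:

```python
import torch

def assemble_unet_input(noised_latents, masked_latents, reference_latents):
    """Channel-wise concatenation of noised, masked, and reference latents.

    Each tensor is assumed to have shape (batch, 4, h, w) from the VAE encoder;
    the U-Net's first convolution is widened to accept the concatenated channels.
    """
    return torch.cat([noised_latents, masked_latents, reference_latents], dim=1)

def estimate_clean_latents(noised_latents, predicted_noise, alpha_cumprod_t):
    """One-step estimate of the clean latents from the predicted noise:
        x0 ~= (x_t - sqrt(1 - a_bar_t) * eps_hat) / sqrt(a_bar_t)
    The estimate is then decoded by the VAE so that the TREPA, LPIPS, and
    SyncNet losses can be computed on pixel-space frames.
    """
    sqrt_a = alpha_cumprod_t.sqrt()
    sqrt_one_minus_a = (1.0 - alpha_cumprod_t).sqrt()
    return (noised_latents - sqrt_one_minus_a * predicted_noise) / sqrt_a
```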
| Original video | Lip-synced video |
| --- | --- |
| demo1_video.mp4 | demo1_output.mp4 |
| demo2_video.mp4 | demo2_output.mp4 |
| demo3_video.mp4 | demo3_output.mp4 |
| demo4_video.mp4 | demo4_output.mp4 |
| demo5_video.mp4 | demo5_output.mp4 |
(Photorealistic videos are filmed by contracted models, and anime videos are from VASA-1 and EMO)
- Inference code and checkpoints
- Data processing pipeline
- Training code
Install the required packages and download the checkpoints via:
source setup_env.sh
If the download is successful, the checkpoints should appear as follows:
./checkpoints/
|-- latentsync_unet.pt
|-- stable_syncnet.pt
|-- whisper
| `-- tiny.pt
|-- auxiliary
| |-- 2DFAN4-cd938726ad.zip
| |-- i3d_torchscript.pt
| |-- koniq_pretrained.pkl
| |-- s3fd-619a316812.pth
| |-- sfd_face.pth
| |-- syncnet_v2.model
| |-- vgg16-397923af.pth
| `-- vit_g_hybrid_pt_1200e_ssv2_ft.pth
These include all the checkpoints required for LatentSync training and inference. If you just want to try inference, you only need to download `latentsync_unet.pt` and `tiny.pt` from our HuggingFace repo.
There are two ways to perform inference, and both require 6.8 GB of VRAM.
Run the Gradio app for inference:
python gradio_app.py
Run the script for inference:
./inference.sh
You can try adjusting the following inference parameters to achieve better results:
- `inference_steps` [20-50]: A higher value improves visual quality but slows down generation.
- `guidance_scale` [1.0-3.0]: A higher value improves lip-sync accuracy but may cause video distortion or jitter.
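For intuition, `guidance_scale` is the usual classifier-free guidance weight: at each denoising step the audio-conditioned and unconditional noise predictions are combined, and larger weights push the result more strongly toward the audio condition. A minimal sketch of that combination (the names are illustrative, not the repo's API):

```python
import torch

def apply_guidance(noise_uncond: torch.Tensor, noise_cond: torch.Tensor, guidance_scale: float) -> torch.Tensor:
    """Classifier-free guidance: extrapolate from the unconditional prediction
    toward the audio-conditioned one. A scale of 1.0 reduces to the purely
    conditional prediction; larger values strengthen the audio condition,
    improving lip-sync at the risk of distortion or jitter.
    """
    return noise_uncond + guidance_scale * (noise_cond - noise_uncond)
```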
The complete data processing pipeline includes the following steps:
- Remove the broken video files.
- Resample the video FPS to 25, and resample the audio to 16000 Hz.
- Detect scene cuts via PySceneDetect.
- Split each video into 5-10 second segments.
- Affine transform the faces according to the landmarks detected by face-alignment, then resize to 256 × 256.
- Remove videos with a sync confidence score lower than 3, and adjust the audio-visual offset to 0.
- Calculate the hyperIQA score, and remove videos with scores lower than 40.
Run the script to execute the data processing pipeline:
./data_processing_pipeline.sh
You should change the `input_dir` parameter in the script to specify the data directory to be processed. The processed videos will be saved in the `high_visual_quality` directory. Each step generates a new directory, so you do not need to redo the entire pipeline if the process is interrupted by an unexpected error.
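As a rough illustration of the resampling step above, the same effect can be achieved with a plain ffmpeg call; this is a hedged sketch, not the pipeline's actual implementation:

```python
import subprocess

def resample_clip(input_path: str, output_path: str) -> None:
    """Re-encode a clip to 25 fps video and 16000 Hz audio using ffmpeg.

    This mirrors the resampling step only in spirit; the pipeline in this
    repo may use different tools or flags.
    """
    subprocess.run(
        ["ffmpeg", "-y", "-i", input_path,
         "-r", "25",        # video frame rate -> 25 fps
         "-ar", "16000",    # audio sample rate -> 16000 Hz
         output_path],
        check=True,
    )
```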
Before training, you must process the data as described above and download all the checkpoints. We released a pretrained SyncNet with 94% accuracy on both VoxCeleb2 and HDTF datasets for the supervision of U-Net training. If all the preparations are complete, you can train the U-Net with the following script:
./train_unet.sh
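For background on the SyncNet supervision mentioned above: a SyncNet scores audio-visual synchronization by comparing an audio embedding with a visual embedding of the mouth region, and the U-Net is penalized when its generated frames score poorly. A hypothetical sketch of such a loss (not the repository's exact formulation):

```python
import torch
import torch.nn.functional as F

def sync_loss(audio_emb: torch.Tensor, visual_emb: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy on the cosine similarity between audio and visual
    embeddings from a frozen SyncNet, treating every pair as 'in sync'.

    audio_emb, visual_emb: (batch, dim) embeddings.
    """
    similarity = F.cosine_similarity(audio_emb, visual_emb, dim=1)  # in [-1, 1]
    probability = ((similarity + 1.0) / 2.0).clamp(1e-6, 1 - 1e-6)  # map to (0, 1)
    target = torch.ones_like(probability)  # generated frames should match the audio
    return F.binary_cross_entropy(probability, target)
```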
We prepared three U-Net configuration files in the `configs/unet` directory, each corresponding to a different training setup:
- `stage1.yaml`: Stage 1 training, requires 23 GB VRAM.
- `stage2.yaml`: Stage 2 training with optimal performance, requires 30 GB VRAM.
- `stage2_efficient.yaml`: Efficient stage 2 training, requires 20 GB VRAM. It may lead to slight degradation in visual quality and temporal consistency compared with `stage2.yaml`, but is suitable for users with consumer-grade GPUs, such as the RTX 3090.
Also remember to change the parameters in the U-Net config file to specify the data directory, checkpoint save path, and other training hyperparameters. For convenience, we prepared a script for writing a list of data files. Run the following command:
python -m tools.write_fileslist
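If you want to see what the tool produces, the expected output is essentially a text file listing the processed video paths. A hypothetical equivalent is sketched below; check the tool itself for the exact arguments and output format it uses:

```python
from pathlib import Path

def write_fileslist(data_dir: str, output_file: str) -> None:
    """Write the absolute path of every .mp4 under data_dir, one per line.

    Illustrative stand-in for tools.write_fileslist; the real tool may expect
    a different directory layout or produce a different format.
    """
    video_paths = sorted(Path(data_dir).rglob("*.mp4"))
    with open(output_file, "w") as f:
        for path in video_paths:
            f.write(f"{path.resolve()}\n")
```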
If you want to train SyncNet on your own datasets, you can run the following script. The data processing pipeline for SyncNet is the same as for the U-Net.
./train_syncnet.sh
After `validation_steps` training steps, the loss charts will be saved in `train_output_dir`; they contain both the training and validation loss. If you want to customize the architecture of SyncNet for different image resolutions and input frame lengths, please follow the guide.
You can evaluate the sync confidence score of a generated video by running the following script:
./eval/eval_sync_conf.sh
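For reference, the sync confidence score follows the original SyncNet evaluation protocol: audio-visual feature distances are computed over a range of temporal offsets, and the confidence is the gap between the median and the minimum distance. A schematic sketch, assuming the per-offset distances have already been computed:

```python
import numpy as np

def sync_confidence(distances_by_offset: np.ndarray) -> float:
    """Confidence = median distance minus minimum distance across offsets.

    distances_by_offset: 1-D array of mean audio-visual feature distances,
    one entry per candidate temporal offset (e.g. -15..+15 frames). A larger
    gap means the best offset stands out clearly, i.e. better lip sync.
    """
    return float(np.median(distances_by_offset) - np.min(distances_by_offset))
```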
You can evaluate the accuracy of SyncNet on a dataset by running the following script:
./eval/eval_syncnet_acc.sh
Note that our released SyncNet is trained on data processed through our data processing pipeline, which includes special operations such as affine transformation and audio-visual adjustment. Therefore, before evaluation, the test data must first be processed using the provided pipeline.
- Our code is built on AnimateDiff.
- Some code is borrowed from MuseTalk, StyleSync, SyncNet, and Wav2Lip.
Thanks for their generous contributions to the open-source community.
If you find our repo useful for your research, please consider citing our paper:
@article{li2024latentsync,
title={LatentSync: Taming Audio-Conditioned Latent Diffusion Models for Lip Sync with SyncNet Supervision},
author={Li, Chunyu and Zhang, Chao and Xu, Weikai and Lin, Jingyu and Xie, Jinghui and Feng, Weiguo and Peng, Bingyue and Chen, Cunjian and Xing, Weiwei},
journal={arXiv preprint arXiv:2412.09262},
year={2024}
}