Implement DDPM inversion #24

Open · wants to merge 2 commits into master
67 changes: 48 additions & 19 deletions README.md
@@ -1,11 +1,11 @@
# TokenFlow: Consistent Diffusion Features for Consistent Video Editing

## [<a href="https://diffusion-tokenflow.github.io/" target="_blank">Project Page</a>]

[![arXiv](https://img.shields.io/badge/arXiv-TokenFlow-b31b1b.svg)](https://arxiv.org/abs/2307.10373) [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/weizmannscience/tokenflow)
![Pytorch](https://img.shields.io/badge/PyTorch->=1.10.0-Red?logo=pytorch)



[//]: # ([![Replicate]&#40;https://replicate.com/cjwbw/multidiffusion/badge&#41;]&#40;https://replicate.com/cjwbw/multidiffusion&#41;)

[//]: # ([![Hugging Face Spaces]&#40;https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue&#41;]&#40;https://huggingface.co/spaces/weizmannscience/text2live&#41;)
@@ -15,15 +15,28 @@

https://github.com/omerbt/TokenFlow/assets/52277000/93dccd63-7e9a-4540-a941-31962361b0bb

**TokenFlow** is a framework that enables consistent video editing, using a pre-trained text-to-image diffusion model, without any further training or finetuning.
## 🚨 Updates 🚨
- **DDPM inversion:** The latest version now supports DDPM inversion, reducing inversion times by up to 95% and improving reconstruction quality. <br>
DDPM inversion is now the default. **The directory structure has been updated with this change.**

[//]: # (as described in <a href="https://arxiv.org/abs/2302.08113" target="_blank">&#40;link to paper&#41;</a>.)

[//]: # (. It can be used for localized and global edits that change the texture of existing objects or augment the scene with semi-transparent effects &#40;e.g. smoke, fire, snow&#41;.)

[//]: # (### Abstract)
> The generative AI revolution has been recently expanded to videos. Nevertheless, current state-of-the-art video
> models are still lagging behind image models in terms of visual quality and user control over the generated content. In
> this work, we present a framework that harnesses the power of a text-to-image diffusion model for the task of
> text-driven video editing. Specifically, given a source video and a target text-prompt, our method generates a
> high-quality video that adheres to the target text, while preserving the spatial layout and dynamics of the input
> video. Our method is based on our key observation that consistency in the edited video can be obtained by enforcing
> consistency in the diffusion feature space. We achieve this by explicitly propagating diffusion features based on
> inter-frame correspondences, readily available in the model. Thus, our framework does not require any training or
> fine-tuning, and can work in conjunction with any off-the-shelf text-to-image editing method. We demonstrate
> state-of-the-art editing results on a variety of real-world videos.

For more see the [project webpage](https://diffusion-tokenflow.github.io).

@@ -32,50 +45,66 @@
<td><img src="assets/videos.gif"></td>

## Environment

```
conda create -n tokenflow python=3.9
conda activate tokenflow
pip install -r requirements.txt
```

## Preprocess

Preprocess your video by running the following command:

```
python preprocess.py --data_path <data/myvideo.mp4> \
--inversion_prompt <'' or a string describing the video content>
```

Additional arguments:
```
--H <video height, defaults to 512>
--W <video width, defaults to 512>
--save_dir <directory for the saved latents, defaults to 'latents'>
--sd_version <Stable-Diffusion version, defaults to 2.1>
--reconstruct <will reconstruct the original video if the flag is set>
--steps <number of inversion steps; should be between 50 and 100 for DDPM inversion and between 500 and 1000 for DDIM inversion, defaults to 50>
--batch_size <batch size used, defaults to 40>
--save_steps <number of sampling steps that will be used later for editing; ignored for DDPM inversion>
--n_frames <number of frames, defaults to 40>
--inversion <inversion method, DDPM (default) or DDIM>
--skip_steps <initial diffusion steps to skip at inference>
```
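
For example, a 50-step DDPM inversion that also saves a sanity-check reconstruction of the input video might look like this (the video path and prompt below are placeholders):

```
python preprocess.py --data_path data/myvideo.mp4 \
                     --inversion_prompt 'a description of the video content' \
                     --save_dir latents \
                     --steps 50 \
                     --reconstruct
```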

More information on the arguments can be found in `preprocess.py`.

### Note:

The video reconstruction will be saved as `inverted.mp4`. A good reconstruction is required for successful editing with our method.

Contrary to DDIM, edit-friendly DDPM inversion guarantees perfect reconstruction for any number of timesteps, eliminating reconstruction errors.
Consequently, the number of inversion steps should be chosen equal to the number of inference steps.
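
To make this concrete, below is a minimal sketch of the idea behind edit-friendly DDPM inversion, not the code added by this PR: the noisy latents are sampled independently from the forward marginals, and the noise map of every reverse step is solved for so that replaying the stored maps reproduces the input exactly. Here `denoiser_mean`, `alphas_cumprod` and `sigmas` are placeholders for the model's predicted posterior mean and its noise schedule.

```
import torch

def ddpm_invert(x0, alphas_cumprod, sigmas, denoiser_mean):
    """Return x_T and per-step noise maps z_t that reproduce x0 exactly."""
    T = len(sigmas)
    # 1) Sample x_1..x_T independently from the forward marginals q(x_t | x_0).
    xs = [x0]
    for t in range(1, T + 1):
        a_bar = alphas_cumprod[t - 1]
        noise = torch.randn_like(x0)  # fresh, independent noise per timestep
        xs.append(a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise)

    # 2) Solve x_{t-1} = mu_t(x_t) + sigma_t * z_t for z_t and store it.
    zs = []
    for t in range(T, 0, -1):
        mu = denoiser_mean(xs[t], t)  # placeholder for the model's posterior mean
        zs.append((xs[t - 1] - mu) / sigmas[t - 1])
    return xs[T], zs

def ddpm_reconstruct(xT, zs, sigmas, denoiser_mean):
    """Replaying the stored noise maps recovers the input by construction."""
    x = xT
    T = len(sigmas)
    for i, t in enumerate(range(T, 0, -1)):
        x = denoiser_mean(x, t) + sigmas[t - 1] * zs[i]
    return x  # equals the original x0 up to floating-point error
```

Because each z_t is computed from the actual latent trajectory rather than assumed to be Gaussian, the replayed reverse process reproduces the input for any number of timesteps, which is why 50 inversion steps suffice here.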

## Editing

- TokenFlow is designed for structure-preserving video edits.
- Our method is built on top of an image editing technique (e.g., Plug-and-Play, ControlNet, etc.) - therefore, it is
  important to ensure that the edit works with the chosen base technique.
- The LDM decoder may introduce some jitter, depending on the original video.

To edit your video, first create a yaml config as in ``configs/config_pnp.yaml``.
Then run

```
python run_tokenflow_pnp.py
```

Similarly, if you want to use ControlNet or SDEdit, create a yaml config as in ``configs/config_controlnet.yaml``
or ``configs/config_SDEdit.yaml`` and run ``python run_tokenflow_controlnet.py``
or ``python run_tokenflow_SDEdit.py`` respectively.
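
Putting it together, a typical run with the default DDPM inversion might look like the following sketch; the video path is a placeholder, `n_inversion_steps` in the yaml config should equal the `--steps` value used during preprocessing, and `latents_path` should match `--save_dir`:

```
# invert once (DDPM inversion and 50 steps are the defaults; latents go to 'latents')
python preprocess.py --data_path data/myvideo.mp4 --inversion_prompt ''

# edit with Plug-and-Play using the settings in configs/config_pnp.yaml
python run_tokenflow_pnp.py
```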

## Citation

```
@article{tokenflow2023,
title = {TokenFlow: Consistent Diffusion Features for Consistent Video Editing},
5 changes: 3 additions & 2 deletions configs/config_pnp.yaml
@@ -6,15 +6,16 @@ output_path: 'tokenflow-results'
# data
data_path: 'data/woman-running'
latents_path: 'latents' # should be the same as 'save_dir' arg used in preprocess
n_inversion_steps: 50 # for retrieving the latents of the inversion
n_frames: 40

# diffusion
sd_version: '2.1'
guidance_scale: 7.5
n_timesteps: 50
skip_steps: 5
prompt: "a marble sculpture of a woman running, Venus de Milo"
negative_prompt: "ugly, blurry, low res, unrealistic, unaesthetic"
negative_prompt: ""
batch_size: 8

# pnp params -- injection thresholds ∈ [0, 1]
1 change: 1 addition & 0 deletions configs/config_sdedit.yaml
@@ -13,6 +13,7 @@ n_frames: 40
sd_version: '2.1'
guidance_scale: 7.5
n_timesteps: 50
skip_steps: 5
prompt: a shiny silver robotic wolf
negative_prompt: "ugly, blurry, low res, unrealistic, unaesthetic"
batch_size: 8