A demo for automated video style transfer based on the DCT-Net portrait stylization algorithm. Source project: https://github.com/menyifang/DCT-Net
DCT-Net: Domain-Calibrated Translation for Portrait Stylization
Yifang Men¹, Yuan Yao¹, Miaomiao Cui¹, Zhouhui Lian², Xuansong Xie¹
¹ DAMO Academy, Alibaba Group, Beijing, China
² Wangxuan Institute of Computer Technology, Peking University, China
In: SIGGRAPH 2022 (TOG), arXiv preprint
- python >= 3.7
- tensorflow >=1.14
- CUDA == 11.3.1
- CuDNN == 8.1.0
- easydict
- numpy
- Both CPU and GPU are supported
- Download and install the ModelScope library
conda create -n dctnet python=3.8
conda activate dctnet
conda install tensorflow==2.10
conda install "modelscope[cv]" -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html
- Model loading and inference demo
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

# Build the portrait stylization pipeline; the default model is downloaded on first use.
p = pipeline(Tasks.image_portrait_stylization, model='damo/cv_unet_person-image-cartoon_compound-models')
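The pipeline can then be applied to a portrait image and the stylized result saved; a minimal sketch (the input/output file names are placeholders):

```python
import cv2
from modelscope.outputs import OutputKeys

result = p('input.png')  # accepts an image path or a numpy array
cv2.imwrite('result.png', result[OutputKeys.OUTPUT_IMG])  # write the stylized portrait
```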
- Run the video-to-painting-style demo
python demo.py
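For reference, the video demo boils down to stylizing each frame with the same pipeline and re-encoding the frames into a video. The sketch below illustrates that structure; it is an assumption about what demo.py does rather than its exact code, and the file names are placeholders:

```python
# Frame-by-frame stylization of a video (illustrative sketch, not the exact demo.py code).
import cv2
from modelscope.outputs import OutputKeys
from modelscope.pipelines import pipeline

p = pipeline('image-portrait-stylization', 'damo/cv_unet_person-image-cartoon_compound-models')

cap = cv2.VideoCapture('input.mp4')
fps = cap.get(cv2.CAP_PROP_FPS) or 25
writer = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    styled = p(frame)[OutputKeys.OUTPUT_IMG].astype('uint8')  # stylize one frame
    if writer is None:
        h, w = styled.shape[:2]
        writer = cv2.VideoWriter('output.mp4', cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
    writer.write(styled)
cap.release()
if writer is not None:
    writer.release()
```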
Multi-style models and their usage are provided below.
git clone https://github.com/menyifang/DCT-Net.git
cd DCT-Net
- Upgrade modelscope to >= 0.4.7
conda activate dctnet
pip install --upgrade "modelscope[cv]" -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html
- Download the pretrained models with specific styles [option: anime, 3d, handdrawn, sketch, artstyle]
python multi-style/download.py --style 3d
- Quick infer with the python SDK, style choice [option: anime, 3d, handdrawn, sketch, artstyle] (a Python sketch of this call follows the list below)
python multi-style/run_sdk.py --style 3d
- Infer from source code & downloaded models
python multi-style/run.py --style 3d
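The quick-infer step above essentially selects a style-specific ModelScope model and runs the same pipeline call as the single-style demo. A minimal sketch, assuming the style name is appended to the base model id (check multi-style/run_sdk.py for the exact ids):

```python
import cv2
from modelscope.outputs import OutputKeys
from modelscope.pipelines import pipeline

style = '3d'  # one of: anime, 3d, handdrawn, sketch, artstyle
# Assumed naming pattern: 'anime' uses the base model, other styles append the style name.
suffix = '' if style == 'anime' else f'-{style}'
p = pipeline('image-portrait-stylization',
             f'damo/cv_unet_person-image-cartoon{suffix}_compound-models')
result = p('input.png')
cv2.imwrite(f'result_{style}.png', result[OutputKeys.OUTPUT_IMG])
```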
@article{men2022dct,
title={DCT-Net: Domain-Calibrated Translation for Portrait Stylization},
author={Men, Yifang and Yao, Yuan and Cui, Miaomiao and Lian, Zhouhui and Xie, Xuansong},
journal={ACM Transactions on Graphics (TOG)},
volume={41},
number={4},
pages={1--9},
year={2022},
publisher={ACM New York, NY, USA}
}