This repository contains the core implementation of our paper:
Locally Attentional SDF Diffusion for Controllable 3D Shape Generation
Xin-Yang Zheng, Hao Pan, Peng-Shuai Wang, Xin Tong, Yang Liu, and Heung-Yeung Shum
The following is the suggested way to install the dependencies of our code:
conda create -n sketch_diffusion
conda activate sketch_diffusion
conda install pytorch=1.9.0 torchvision=0.10.0 cudatoolkit=10.2 -c pytorch -c nvidia
pip install tqdm fire einops pyrender pyrr trimesh ocnn timm scikit-image==0.18.2 scikit-learn==0.24.2 pytorch-lightning==1.6.1
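As a quick sanity check (not part of the repository), you can verify that the key dependencies import correctly and that CUDA is visible:

# simple environment check; just confirms the imports resolve and CUDA is usable
import torch
import pytorch_lightning as pl
import ocnn, trimesh, skimage  # noqa: F401
print("PyTorch:", torch.__version__)               # expected: 1.9.0
print("CUDA available:", torch.cuda.is_available())
print("PyTorch Lightning:", pl.__version__)        # expected: 1.6.1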
Please refer to SDF-StyleGAN for generating the SDF fields from ShapeNet data or your own custom data.
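The exact file layout of the prepared SDF fields depends on the SDF-StyleGAN preprocessing. As an illustration only (the path, file format, and resolution below are assumptions, not the repository's actual format), a dense SDF grid stored as a .npy file can be inspected by extracting its zero level set:

# hypothetical check of a prepared SDF sample
import numpy as np
import trimesh
from skimage import measure

sdf = np.load("path/to/sdf_sample.npy")      # assumed: dense grid, e.g. 64 x 64 x 64
verts, faces, normals, _ = measure.marching_cubes(sdf, level=0.0)
trimesh.Trimesh(vertices=verts, faces=faces, vertex_normals=normals).export("check.obj")

If the exported surface resembles the source shape, the SDF field was generated correctly.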
Please refer to prepare_sketch.py for details on preparing the sketch data.
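For illustration only, a sketch-like input can be approximated by running Canny edge detection on a rendered view; this is a generic stand-in, and prepare_sketch.py defines the actual procedure used in the paper:

# generic edge-map "sketch" from a rendered image (illustrative, not the paper's method)
import numpy as np
from skimage import io, color, feature

img = io.imread("render.png")                               # hypothetical rendered view
edges = feature.canny(color.rgb2gray(img[..., :3]), sigma=2.0)
io.imsave("sketch.png", ((~edges) * 255).astype(np.uint8))  # dark strokes on white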
We provide pretrained models for both category-conditioned and sketch-conditioned generation. Please download them from Google Drive and place them in checkpoints/.
Please refer to the scripts in scripts/ for the usage of our code:
bash scripts/train_sketch.sh       # train the sketch-conditioned model
bash scripts/train_category.sh     # train the category-conditioned model
bash scripts/generate_category.sh  # category-conditioned shape generation
bash scripts/generate_sketch.sh    # sketch-conditioned shape generation
If you find our work useful in your research, please consider citing:
@article{zheng2023lasdiffusion,
title = {Locally Attentional SDF Diffusion for Controllable 3D Shape Generation},
author = {Zheng, Xin-Yang and Pan, Hao and Wang, Peng-Shuai and Tong, Xin and Liu, Yang and Shum, Heung-Yeung},
journal = {ACM Transactions on Graphics (SIGGRAPH)},
volume = {42},
number = {4},
year = {2023},
}