- [2025/4/18] 🔥 We released the technical report of IMAGGarment.
- [2025/4/18] 🔥 We released the training and inference code of IMAGGarment.
- [2025/4/17] 🎉 We launched the project page of IMAGGarment.
IMAGGarment-1 addresses the challenges of multi-conditional controllability in personalized fashion design and digital apparel applications.
Specifically, IMAGGarment-1 employs a two-stage training strategy to separately model global appearance and local details, while enabling unified and controllable generation through end-to-end inference.
In the first stage, we propose a global appearance model that jointly encodes silhouette and color using a mixed attention module and a color adapter.
In the second stage, we present a local enhancement model with an adaptive appearance-aware module to inject user-defined logos and spatial constraints, enabling accurate placement and visual consistency.
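The schematic below sketches how the two stages described above could fit together at inference time (GAM = global appearance model, LEM = local enhancement model). It is only an illustrative outline: the class names, tensor shapes, and attention layout are assumptions for the sketch, not the repository's actual implementation.

```python
# Illustrative two-stage sketch of the pipeline described above (not the real code).
import torch
import torch.nn as nn

class GlobalAppearanceModel(nn.Module):
    """Stage 1 (sketch): jointly encode silhouette and color into coarse garment features."""
    def __init__(self, dim=64):
        super().__init__()
        self.sketch_encoder = nn.Conv2d(3, dim, kernel_size=3, padding=1)
        self.color_adapter = nn.Linear(3, dim)  # assumed: maps a target RGB color to an embedding
        self.mixed_attention = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, sketch, color):
        s = self.sketch_encoder(sketch).flatten(2).transpose(1, 2)   # (B, H*W, dim) sketch tokens
        c = self.color_adapter(color).unsqueeze(1)                   # (B, 1, dim) color token
        kv = torch.cat([s, c], dim=1)                                # mix sketch and color tokens
        fused, _ = self.mixed_attention(s, kv, kv)
        return fused                                                 # coarse appearance features

class LocalEnhancementModel(nn.Module):
    """Stage 2 (sketch): inject the logo at the mask-specified location."""
    def __init__(self, dim=64):
        super().__init__()
        self.logo_encoder = nn.Conv2d(3, dim, kernel_size=3, padding=1)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, coarse, logo, mask):
        l = self.logo_encoder(logo * mask).flatten(2).transpose(1, 2)  # masked logo tokens
        return self.fuse(torch.cat([coarse, l], dim=-1))               # refined features

# End-to-end inference: the stage-1 output conditions stage 2.
gam, lem = GlobalAppearanceModel(), LocalEnhancementModel()
sketch = torch.randn(1, 3, 32, 32)   # silhouette sketch
color = torch.rand(1, 3)             # target RGB color
logo = torch.randn(1, 3, 32, 32)     # logo image
mask = torch.ones(1, 1, 32, 32)      # placement mask
features = lem(gam(sketch, color), logo, mask)
print(features.shape)                # torch.Size([1, 1024, 64])
```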

- Python>=3.8
- PyTorch>=2.0.0
- CUDA>=11.8
conda create --name IMAGGarment python=3.8.8
conda activate IMAGGarment
pip install -U pip
# Install requirements
pip install -r requirements.txt
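A quick way to confirm the environment matches the requirements above (this check is just a convenience, not part of the repository):

```python
# Sanity-check the installed PyTorch/CUDA versions against the requirements above.
import torch

print(torch.__version__)            # expect >= 2.0.0
print(torch.version.cuda)           # expect >= 11.8
print(torch.cuda.is_available())    # expect True on a GPU machine
```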
You can download our models from Baidu Cloud. You can download the other component models from their original repositories, listed below; a huggingface_hub download sketch follows the list.
- stabilityai/sd-vae-ft-mse (https://huggingface.co/stabilityai/sd-vae-ft-mse)
- for training: stable-diffusion-v1-5/stable-diffusion-v1-5; for inference: SG161222/Realistic_Vision_V4.0_noVAE
- h94/IP-Adapter
- stable-diffusion-v1-5/stable-diffusion-inpainting
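For the Hugging Face components above, one option is to fetch them with `huggingface_hub`. The local directory layout below is an arbitrary choice for illustration, not a path the training or inference scripts require:

```python
# Download the Hugging Face component models listed above.
# The local_dir layout is only an example; point the scripts at wherever you store them.
from huggingface_hub import snapshot_download

repo_ids = [
    "stabilityai/sd-vae-ft-mse",
    "stable-diffusion-v1-5/stable-diffusion-v1-5",        # base model for training
    "SG161222/Realistic_Vision_V4.0_noVAE",               # base model for inference
    "h94/IP-Adapter",
    "stable-diffusion-v1-5/stable-diffusion-inpainting",
]
for repo_id in repo_ids:
    snapshot_download(repo_id, local_dir=f"checkpoints/{repo_id.split('/')[-1]}")
```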
# Please download the GarmentBench data first
# and modify the dataset paths in train_color_adapter.sh, train_GAM.sh, and train_LEM.sh
# train color adapter
sh train_color_adapter.sh
# Once training of the color adapter is complete, convert the weights into the required format (see the sketch after these commands).
python change.py
# train GAM model
sh train_GAM.sh
# train LEM model
sh train_LEM.sh
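For reference, a weight-format conversion like the `change.py` step usually amounts to re-packing the trained adapter parameters into a single state dict. The sketch below is a guess at that pattern; the input/output paths and the key prefix are assumptions, not what `change.py` actually does.

```python
# Hypothetical re-packing of trained color-adapter weights into one checkpoint file.
# Paths and the "color_adapter." key prefix are illustrative assumptions only.
import torch

ckpt = torch.load("output/color_adapter/pytorch_model.bin", map_location="cpu")
color_adapter_sd = {
    k[len("color_adapter."):]: v
    for k, v in ckpt.items()
    if k.startswith("color_adapter.")
}
torch.save(color_adapter_sd, "checkpoints/color_adapter.bin")
```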
python inference_IMAGGarment-1.py \
--GAM_model_ckpt [GAM checkpoint] \
--LEM_model_ckpt [LEM checkpoint] \
--sketch_path [your sketch path] \
--logo_path [your logo path] \
--mask_path [your mask path] \
--color_path [your color path] \
--prompt [your prompt] \
--output_path [your save path] \
--color_ckpt [color adapter checkpoint] \
--device [your device]
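A concrete invocation might look like the following; every path here is a placeholder for illustration, not a file shipped with the repository.

```bash
# Example run with placeholder paths; replace them with your own checkpoints and inputs.
python inference_IMAGGarment-1.py \
  --GAM_model_ckpt checkpoints/GAM.ckpt \
  --LEM_model_ckpt checkpoints/LEM.ckpt \
  --sketch_path assets/sketch.png \
  --logo_path assets/logo.png \
  --mask_path assets/mask.png \
  --color_path assets/color.png \
  --prompt "a short-sleeve T-shirt with a chest logo" \
  --output_path results/ \
  --color_ckpt checkpoints/color_adapter.bin \
  --device cuda:0
```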
We would like to thank the contributors to the IMAGDressing and IP-Adapter repositories for their open research and exploration.
The IMAGGarment code is available for both academic and commercial use. Users are permitted to generate images using this tool, provided they comply with local laws and exercise responsible use. The developers disclaim all liability for any misuse or unlawful activity by users.
If you find IMAGGarment-1 useful for your research and applications, please cite it using this BibTeX:
@article{shen2025imaggarment,
  title={IMAGGarment-1: Fine-Grained Garment Generation for Controllable Fashion Design},
  author={Shen, Fei and Yu, Jian and Wang, Cong and Jiang, Xin and Du, Xiaoyu and Tang, Jinhui},
  journal={arXiv preprint arXiv:2504.13176},
  year={2025}
}
- Paper
- Train Code
- Inference Code
- GarmentBench Dataset
- Model Weights
- Upgraded Version for High-resolution Images
- IMAGEdit: Training-Free Controllable Video Editing with Consistent Object Layout. [Controllable multi-object video editing]
- IMAGDressing: Controllable dressing generation. [Controllable dressing generation]
- IMAGGarment: Fine-grained controllable garment generation. [Controllable garment generation]
- IMAGHarmony: Controllable image editing with consistent object layout. [Controllable multi-object image editing]
- IMAGPose: Pose-guided person generation with high fidelity. [Controllable multi-mode person generation]
- RCDMs: Rich-contextual conditional diffusion for story visualization. [Controllable story generation]
- PCDMs: Progressive conditional diffusion for pose-guided image synthesis. [Controllable person generation]
- V-Express: Explores strong and weak conditional relationships for portrait video generation. [Controllable digital-human generation]
- FaceShot: Talking-face plugin for any character. [Controllable anime digital-human generation]
- CharacterShot: Controllable and consistent 4D character animation framework. [Controllable 4D character generation]
- StyleTailor: An agent for personalized fashion styling. [Personalized fashion agent]
- SignVip: Controllable sign language video generation. [Controllable sign-language generation]
If you have any questions, please feel free to contact us at [email protected] and [email protected].





