
[CVPR 2025] Official repository for “MagicArticulate: Make Your 3D Models Articulation-Ready”


MagicArticulate: Make Your 3D Models Articulation-Ready

Chaoyue Song1,2, Jianfeng Zhang2*, Xiu Li2, Fan Yang1, Yiwen Chen1, Zhongcong Xu2,
Jun Hao Liew2, Xiaoyang Guo2, Fayao Liu3, Jiashi Feng2, Guosheng Lin1*
*Corresponding authors
1 Nanyang Technological University 2 Bytedance Seed 3 A*STAR

CVPR 2025

Project | Paper | Video | Data: Articulation-XL2.0


News

  • 2025.4.18: We have updated the preprocessed dataset to exclude entries with skinning issues (118 from the training set and 3 from the test set, whose skinning-weight row sums fell below 1) and duplicated joint names (2 from the training set). You can download the cleaned data again or update it yourself by running: python data_utils/update_npz_rm_issue_data.py. Please still remember to normalize skinning weights in your dataloader.
  • 2025.4.16: Release weights for skeleton generation.
  • 2025.3.28: Release inference code for skeleton generation.
  • 2025.3.20: Release preprocessed data of Articulation-XL2.0 (now including vertex normals), split into a training set (46.7k) and a test set (2k). Try it now!
  • 2025.2.27: MagicArticulate was accepted to CVPR 2025, see you in Nashville! Data and code are coming soon, stay tuned! 🚀
  • 2025.2.16: Release the paper, metadata for Articulation-XL2.0, and data visualization code!
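The dataloader-side normalization mentioned in the news above can be sketched in a few lines of numpy (the helper name is illustrative, not part of the repository):

```python
import numpy as np

def normalize_skinning_weights(weights: np.ndarray) -> np.ndarray:
    """Row-normalize a dense (num_vertices, num_joints) skinning matrix.

    Each vertex's weights should sum to 1. Rows that sum to zero are left
    unchanged to avoid division by zero.
    """
    row_sums = weights.sum(axis=1, keepdims=True)
    safe = np.where(row_sums == 0.0, 1.0, row_sums)
    return weights / safe
```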

Dataset: Articulation-XL2.0

Overview

We introduce Articulation-XL2.0, a large-scale dataset featuring over 48K 3D models with high-quality articulation annotations, filtered from Objaverse-XL. Compared to version 1.0, Articulation-XL2.0 includes 3D models with multiple components. For further details, please refer to the statistics below.

Note: The rigged data (over 150K models) has been deduplicated, and the quality of most entries has been manually verified.

Metadata

We provide the following information in the metadata of Articulation-XL2.0.

uuid,source,vertex_count,face_count,joint_count,bone_count,category_label,fileType,fileIdentifier
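If the metadata ships as a CSV with the columns above (an assumption; check the actual release format), it can be read with the standard library alone. The sample row below is hypothetical:

```python
import csv
import io

# Hypothetical sample row; the real metadata ships with Articulation-XL2.0.
sample = (
    "uuid,source,vertex_count,face_count,joint_count,bone_count,"
    "category_label,fileType,fileIdentifier\n"
    "abc123,objaverse-xl,5000,9800,24,23,animal,glb,https://example.com/abc123.glb\n"
)

rows = list(csv.DictReader(io.StringIO(sample)))
# Filter by category; cast numeric columns with int(...) as needed.
animal_uuids = [r["uuid"] for r in rows if r["category_label"] == "animal"]
```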

Preprocessed data

The preprocessed data is saved in NPZ files, each containing the following fields:

'vertices', 'faces', 'normals', 'joints', 'bones', 'root_index', 'uuid', 'pc_w_norm', 'joint_names', 'skinning_weights_value', 'skinning_weights_row', 'skinning_weights_col', 'skinning_weights_shape'

Check here to see how the data is saved and how to read it.

Data visualization

We provide a method for visualizing 3D models with their skeletons using Pyrender, modified from Lab4D. For more details, please refer here.
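Independent of the renderer, the per-bone geometry a skeleton viewer typically needs can be computed with numpy alone (helper name is illustrative; the repository's Pyrender script handles the actual drawing):

```python
import numpy as np

def bone_geometry(joints: np.ndarray, bones: np.ndarray):
    """From joints (J, 3) and bone index pairs (B, 2), compute what a
    typical viewer needs: segment endpoints, midpoints, and lengths."""
    segments = joints[bones]                 # (B, 2, 3) endpoint pairs
    midpoints = segments.mean(axis=1)        # (B, 3) e.g. cylinder centers
    lengths = np.linalg.norm(segments[:, 1] - segments[:, 0], axis=1)
    return segments, midpoints, lengths
```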

Autoregressive skeleton generation

Overview

We formulate skeleton generation as a sequence modeling problem, leveraging an autoregressive transformer to naturally handle varying numbers of bones and joints within skeletons. If you are interested in autoregressive models in generative AI, check out this awesome list.
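To make the sequence-modeling framing concrete, here is a toy tokenizer in the spirit of the paper (not its exact scheme): joint coordinates, assumed normalized to [-1, 1], are quantized into discrete bins, and each bone contributes its two endpoints as 6 tokens:

```python
import numpy as np

NUM_BINS = 128  # quantization resolution; an illustrative choice

def skeleton_to_tokens(joints: np.ndarray, bones: np.ndarray) -> list:
    """Flatten a skeleton into a discrete token sequence.

    Each bone yields 6 tokens: the quantized (x, y, z) of its two endpoint
    joints. A transformer can then model the sequence autoregressively,
    whatever the bone count.
    """
    scaled = (joints + 1.0) / 2.0 * (NUM_BINS - 1)
    q = np.clip(np.round(scaled), 0, NUM_BINS - 1).astype(int)
    tokens = []
    for a, b in bones:
        tokens.extend(q[a].tolist())
        tokens.extend(q[b].tolist())
    return tokens
```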

Sequence ordering

We provide two sequence orderings: spatial and hierarchical. For more details, please refer to the paper.
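One plausible spatial ordering (a sketch; the paper's exact rule may differ) sorts bones lexicographically by their endpoints' coordinates, here z, then y, then x, with the lower endpoint first:

```python
import numpy as np

def spatial_order(joints: np.ndarray, bones: np.ndarray) -> np.ndarray:
    """Return bones sorted by a (z, y, x) lexicographic key.

    Within each bone the lower endpoint is put first, so the key does not
    depend on the stored endpoint order.
    """
    keys = []
    for a, b in bones:
        ka = tuple(joints[a][::-1])  # (z, y, x) of one endpoint
        kb = tuple(joints[b][::-1])
        lo, hi = (ka, kb) if ka <= kb else (kb, ka)
        keys.append(lo + hi)
    order = sorted(range(len(bones)), key=lambda i: keys[i])
    return bones[np.array(order)]
```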

Installation

git clone https://github.com/Seed3D/MagicArticulate.git --recursive && cd MagicArticulate
conda create -n magicarti python==3.10.13 -y
conda activate magicarti
pip install torch==2.1.1 torchvision==0.16.1 torchaudio==2.1.1 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
pip install flash-attn==2.6.3 --no-build-isolation

Then download checkpoints of Michelangelo and our released weights for skeleton generation:

python download.py

Evaluation

You can run the following command to evaluate our models on Articulation-XL2.0-test and ModelResource-test from RigNet. For your convenience, we also provide ModelResource-test in our format (download it here). Inference requires 4.6 GB of VRAM and takes 1–2 seconds per sample.

bash eval.sh

You can change save_name for different evaluation runs and check the quantitative results afterwards in evaluate_results.txt.

These are the numbers (metrics are in units of 10⁻²) that you should be able to reproduce using the released weights and the current version of the codebase:

Test set                         Articulation-XL2.0-test        ModelResource-test
                                 CD-J2J   CD-J2B   CD-B2B       CD-J2J   CD-J2B   CD-B2B
Paper (train on 1.0, spatial)    -        -        -            4.103    3.101    2.672
Paper (train on 1.0, hier)       -        -        -            4.451    3.454    2.998
Train on Arti-XL2.0 (spatial)    3.043    2.293    1.953        3.936    2.979    2.588
Train on Arti-XL2.0 (hier)       3.417    2.692    2.281        4.116    3.124    2.704
The comparison between models trained on Articulation-XL1.0 and 2.0 demonstrates the importance of scaling the dataset while maintaining high quality. If you wish to compare your method with MagicArticulate trained on Articulation-XL2.0, you may use these results as a baseline.

Demo

We provide some examples to test our models with the following command. You can also test our models on your own 3D objects; remember to change the input_dir.

bash demo.sh

Acknowledgment

We appreciate the insightful discussions with Zhan Xu regarding RigNet and with Biao Zhang regarding Functional Diffusion. The code is built on MeshAnything, Functional Diffusion, RigNet, Michelangelo, and Lab4D.

Citation

@inproceedings{song2025magicarticulate,
      title={MagicArticulate: Make Your 3D Models Articulation-Ready}, 
      author={Chaoyue Song and Jianfeng Zhang and Xiu Li and Fan Yang and Yiwen Chen and Zhongcong Xu and Jun Hao Liew and Xiaoyang Guo and Fayao Liu and Jiashi Feng and Guosheng Lin},
      booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
      year={2025},
}
