
3DCoMPaT++: An improved Large-scale 3D Vision Dataset for Compositional Recognition

Paper Jupyter Quickstart Documentation Download Website Workshop Challenge

📰 News

  • 19/08/2023: As our CVPR23 challenge has concluded (congratulations to Cattalyya Nuengsikapian!), our test set is now public. The dataloaders have been updated accordingly: using the "EvalLoader" classes is no longer necessary 😊

  • 18/06/2023: The 3DCoMPaT++ CVPR23 challenge has concluded. We would like to congratulate Cattalyya Nuengsikapian, winner of both the coarse-grained and fine-grained tracks, for her excellent performance in our challenge 🎉

Summary


[Figure: 3DCoMPaT models view]


📚 Introduction

3DCoMPaT++ is a multimodal 2D/3D dataset of 16 million rendered views of more than 10 million stylized 3D shapes, carefully annotated at the part-instance level, alongside matching RGB point clouds, 3D textured meshes, depth maps and segmentation masks. This work builds upon 3DCoMPaT, the first version of this dataset.
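
To make the modality mix concrete, here is a minimal, hypothetical sketch of what one rendered-view sample bundles together. Field names, shapes and the container class are illustrative assumptions for exposition only, not the dataset's actual API (see the official dataloaders for the real interface):

```python
from dataclasses import dataclass
import numpy as np


@dataclass
class CompatSample:
    """Hypothetical container for one 3DCoMPaT++ rendered view.

    Field names and shapes are illustrative only; refer to the
    official dataloaders for the actual interface.
    """
    rgb: np.ndarray         # (H, W, 3) rendered RGB view
    depth: np.ndarray       # (H, W) depth map
    seg_mask: np.ndarray    # (H, W) part-instance segmentation mask
    pointcloud: np.ndarray  # (N, 6) XYZ coordinates + RGB colors


def make_dummy_sample(h: int = 4, w: int = 4, n_points: int = 8) -> CompatSample:
    """Build a tiny synthetic sample with mutually consistent shapes."""
    return CompatSample(
        rgb=np.zeros((h, w, 3), dtype=np.uint8),
        depth=np.zeros((h, w), dtype=np.float32),
        seg_mask=np.zeros((h, w), dtype=np.int64),
        pointcloud=np.zeros((n_points, 6), dtype=np.float32),
    )
```

The point here is only that each rendered view pairs image-space annotations (depth, segmentation) with 3D data (colored point clouds) for the same stylized shape.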

We plan to further extend the dataset: stay tuned! 🔥


🔍 Browser

To explore our dataset, please check out our integrated web browser:

3DCoMPaT Browser

For more information about the shape browser, please check out our dedicated Wiki page.


🚀 Getting started

To get started straight away, here is a Jupyter notebook (no downloads required, just run and play!):

Jupyter Quickstart

For a deeper dive into our dataset, please check our online documentation:

Documentation


📊 Baselines

We provide baseline models for 2D and 3D tasks.
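
Part segmentation baselines of this kind are typically scored with mean intersection-over-union (mIoU) over part classes. As a rough sketch (this is not the repository's actual evaluation code, and the exact averaging convention may differ), the metric can be computed as:

```python
import numpy as np


def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> float:
    """Mean IoU over the classes that appear in either pred or gt.

    pred, gt: integer label arrays of identical shape.
    Classes absent from both arrays are skipped rather than
    counted as perfect matches.
    """
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0
```

For example, if one of three pixels of class 1 is mislabeled as class 0, both classes score IoU 0.5 and the mean is 0.5.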


🏆 Challenge

As part of the C3DV CVPR 2023 workshop, we organized a modeling challenge based on 3DCoMPaT++. To learn more about the challenge, check out this link:

Challenge


🙏 Acknowledgments

⚙️ For computer time, this research used the resources of the Supercomputing Laboratory at King Abdullah University of Science & Technology (KAUST). We extend our sincere gratitude to the KAUST HPC Team for their invaluable assistance and support during the course of this research project. Their expertise and dedication continue to play a crucial role in the success of our work.

💾 We also thank the Amazon Open Data program for providing us with free storage of our large-scale data on their servers. Their generosity and commitment to making research data widely accessible have greatly facilitated our research efforts.


Citation

If you use our dataset, please cite the following two references:

@article{slim2023_3dcompatplus,
    title={3DCoMPaT++: An improved Large-scale 3D Vision Dataset
    for Compositional Recognition},
    author={Habib Slim and Xiang Li and Yuchen Li and
    Mahmoud Ahmed and Mohamed Ayman and Ujjwal Upadhyay and
    Ahmed Abdelreheem and Arpit Prajapati and
    Suhail Pothigara and Peter Wonka and Mohamed Elhoseiny},
    year={2023}
}
@article{li2022_3dcompat,
    title={3D CoMPaT: Composition of Materials on Parts of 3D Things},
    author={Yuchen Li and Ujjwal Upadhyay and Habib Slim and
    Ahmed Abdelreheem and Arpit Prajapati and
    Suhail Pothigara and Peter Wonka and Mohamed Elhoseiny},
    journal={ECCV},
    year={2022}
}

This repository is owned and maintained by Habib Slim, Xiang Li, Mahmoud Ahmed and Mohamed Ayman, from the Vision-CAIR group.

References

  1. [Li et al., 2022] - 3DCoMPaT: Composition of Materials on Parts of 3D Things.
  2. [Xie et al., 2021] - SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers.
  3. [He et al., 2015] - Deep Residual Learning for Image Recognition.