
SUM Parts: Benchmarking Part-Level Semantic Segmentation of Urban Meshes

CVPR 2025

[Website]  [Hugging Face Data]  [YouTube Video]  [arXiv]  [License: MIT]


(Figure: dataset overview)

SUM Parts provides part-level semantic segmentation of urban textured meshes, covering 2.5 km² with 21 classes. From left to right: textured mesh, face-based annotations, and texture-based annotations. Classes include unclassified, terrain, high vegetation, water, car, boat, wall, roof surface, facade surface, chimney, dormer, balcony, roof installation, window, door, low vegetation, impervious surface, road, road marking, cycle lane, and sidewalk.
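For use in the code sketches later in this README, the class listing above can be kept as a simple lookup table. The numeric IDs implied by the ordering below are an assumption, not the official label encoding; verify against the dataset documentation on Hugging Face.

```python
# Assumed class ordering (index = label ID), taken verbatim from the
# listing above; check the official encoding before relying on it.
SUM_PARTS_CLASSES = [
    "unclassified", "terrain", "high vegetation", "water", "car", "boat",
    "wall", "roof surface", "facade surface", "chimney", "dormer",
    "balcony", "roof installation", "window", "door", "low vegetation",
    "impervious surface", "road", "road marking", "cycle lane", "sidewalk",
]
```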

📊 Benchmark Datasets

Our benchmark datasets include textured meshes and semantic point clouds sampled on the mesh surfaces using different methods. The textured meshes are stored as ASCII PLY files, while the semantic point clouds are stored as binary PLY files to save space. To download the dataset and view the corresponding instructions, please visit the Hugging Face repository.
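As a quick sanity check after downloading, the PLY files can be inspected with the plyfile package. The sketch below assumes the per-point semantic label is stored in a vertex property named `label`; verify the property name against the actual PLY header.

```python
# Minimal sketch: read points and per-point labels from a (binary or
# ASCII) PLY file with plyfile (pip install plyfile). The property name
# "label" is an assumption; inspect the file header to confirm.
import numpy as np
from plyfile import PlyData

def load_labeled_point_cloud(path):
    ply = PlyData.read(path)  # plyfile autodetects ASCII vs. binary
    vertex = ply["vertex"]
    points = np.stack([vertex["x"], vertex["y"], vertex["z"]], axis=-1)
    labels = np.asarray(vertex["label"])  # assumed property name
    return points, labels

# Hypothetical file name, for illustration only.
points, labels = load_labeled_point_cloud("sumv2_tile_example.ply")
print(points.shape, np.unique(labels))
```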

Visualization

Mapple

For rendering semantic textured meshes, use the 'Coloring' function in the Surface module of Mapple:

  • f:color or v:color displays per-face or per-point colors.
  • scalar - f:label or scalar - v:label shows legend colors for different semantic labels.
  • h:texcoord displays mesh texture colors, with corresponding texture images or semantic texture masks selectable via the 'Texture' dropdown.
(Screenshot: semantic textured mesh rendered in Mapple)

MeshLab

MeshLab can also visualize semantic textured meshes by displaying face colors or textures, but it cannot render scalar attributes such as labels; a small conversion sketch follows the screenshot below:

(Screenshot: semantic textured mesh rendered in MeshLab)
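As a workaround, per-face labels can be baked into per-face RGB colors before opening the mesh in MeshLab. The sketch below is a minimal, unofficial example with plyfile; it assumes the face element carries a `label` property, and the random palette stands in for the real legend colors.

```python
# Hedged sketch: convert per-face "label" scalars into per-face colors so
# viewers without scalar support (e.g., MeshLab) can display them.
import numpy as np
from plyfile import PlyData, PlyElement

def bake_face_label_colors(src_path, dst_path, palette):
    ply = PlyData.read(src_path)
    faces = ply["face"].data
    colors = palette[np.asarray(faces["label"])]  # (n_faces, 3) uint8

    # Rebuild the face element with extra red/green/blue properties.
    dtype = faces.dtype.descr + [("red", "u1"), ("green", "u1"), ("blue", "u1")]
    out = np.empty(len(faces), dtype=dtype)
    for name in faces.dtype.names:
        out[name] = faces[name]
    out["red"], out["green"], out["blue"] = colors.T

    PlyData([ply["vertex"], PlyElement.describe(out, "face")], text=True).write(dst_path)

# Illustrative palette: 21 random colors; replace with the real legend.
palette = np.random.default_rng(0).integers(0, 256, (21, 3), dtype=np.uint8)
bake_face_label_colors("mesh_with_labels.ply", "mesh_with_colors.ply", palette)
```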

🛠️ Code

Semantic segmentation

In the semantic_segmentation folder, we host deep learning semantic segmentation algorithms for point clouds. For each method described in the paper, we provide input/output interfaces and configuration files for SUM Parts data.

  • KPConv: Modified files include train_UrbanMesh.py and UrbanMesh.py.
  • PointNeXt_bundle: Contains PointNet, PointNet++, PointNext, and PointVector. Modified files: cfgs/sumv2_texture/, cfgs/sumv2_triangle/, openpoints/dataset/sumv2_triangle/sumv2_triangle.py, openpoints/dataset/sumv2_texture/sumv2_texture.py.
  • Open3D_ML: Includes SparseconvUNet and RandLaNet. Modified files: ml3d/configs/, ml3d/datasets/sumv2_texture.py, ml3d/datasets/sumv2_triangle.py.
  • SPG: Modified files: learning/custom_dataset.py, learning/main.py, partition/partition.py, partition/my_visualize.py.

Refer to each method's README for compilation and execution instructions.

For methods like RF_MRF, SUM_RF, and PSSNet, see the sumv2 branch of the PSSNet repository.

Evaluation

Due to the diverse point cloud sampling methods and the dual-track annotations (mesh face labels and texture pixel labels), evaluation is complex. For now, please use the built-in ground truth labels in each type of data for initial evaluation; a minimal IoU sketch follows below. For fine-grained test set evaluation consistent with the paper, send your predictions to our email for local assessment. Auto-evaluation code will be added to Hugging Face soon.
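For a quick local check against the built-in ground truth, per-class IoU and mIoU can be computed from a confusion matrix, assuming predictions and ground-truth labels are aligned one-to-one per point or per face. This is a minimal sketch, not the official benchmark metric script.

```python
# Minimal per-class IoU / mIoU sketch via a confusion matrix; assumes
# integer labels in [0, num_classes) aligned 1:1 with the ground truth.
import numpy as np

def iou_per_class(gt, pred, num_classes=21):
    cm = np.bincount(
        num_classes * gt + pred, minlength=num_classes**2
    ).reshape(num_classes, num_classes)
    inter = np.diag(cm)
    union = cm.sum(0) + cm.sum(1) - inter
    with np.errstate(divide="ignore", invalid="ignore"):
        return inter / union  # NaN for classes absent from gt and pred

# Synthetic demo data; substitute real label arrays loaded from PLY files.
rng = np.random.default_rng(0)
gt = rng.integers(0, 21, 100_000)
pred = rng.integers(0, 21, 100_000)
iou = iou_per_class(gt, pred)
print("mIoU:", np.nanmean(iou))
```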

Interactive annotation

The interactive_annotation folder provides code for SAM and SimpleClick, adapted for texture image segmentation with source code modifications. The Scripts folder includes scripts for annotation efficiency testing and image processing. For the mesh over-segmentation annotation tool, see 3D_Urban_Mesh_Annotator.
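For orientation, the sketch below shows promptable segmentation of a texture image with the publicly released segment-anything package (one positive click prompt). It illustrates vanilla SAM usage under assumed file names, not the modified version shipped in interactive_annotation.

```python
# Hedged sketch: vanilla SAM click-based segmentation on a texture image.
# Checkpoint and image paths are illustrative assumptions.
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("texture_atlas.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)  # expects an RGB uint8 image

# One positive click (label 1) on the region of interest.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[512, 384]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
best_mask = masks[np.argmax(scores)]  # boolean mask of the selected region
```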

✏️ Annotation Service

To prevent potential cheating in benchmark evaluations and future competitions, the annotation tool and its source code are temporarily withheld; we will release them later. The tool is designed for fine-grained annotation of textured meshes. Compared with 2D image or point cloud annotation tools, it is feature-complete but complex to operate, requiring at least 3 hours of professional training for proficiency. We will gradually publish help documents and tutorial videos. For users who need annotation services, we offer paid semantic annotation of textured meshes; contact us by email for a quotation.

📋 TODOs

  • Project page, code, and dataset
  • Evaluation script
  • Annotation tools, code, and manuals

🎓 Citation

If you use SUM Parts or SUM in a scientific work, please consider citing the following papers:

[paper]  [supplemental]  [arxiv]  [bibtex]

@InProceedings{Gao_2025_CVPR,
    author    = {Gao, Weixiao and Nan, Liangliang and Ledoux, Hugo},
    title     = {SUM Parts: Benchmarking Part-Level Semantic Segmentation of Urban Meshes},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {24474-24484}
}

[paper]  [project]  [arxiv]  [bibtex]

@article{Gao_2021_ISPRS,
    author  = {Gao, Weixiao and Nan, Liangliang and Boersma, Bas and Ledoux, Hugo},
    title   = {SUM: A benchmark dataset of Semantic Urban Meshes},
    journal = {ISPRS Journal of Photogrammetry and Remote Sensing},
    volume  = {179},
    pages   = {108-120},
    year    = {2021},
    issn    = {0924-2716},
    doi     = {10.1016/j.isprsjprs.2021.07.008},
    url     = {https://www.sciencedirect.com/science/article/pii/S0924271621001854}
}

⚖️ License

SUM Parts (including the software and dataset) is a free resource; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. The full text of the license can be found in the accompanying 'License' file.

If you have any questions, comments, or suggestions, please contact me at [email protected]

Weixiao GAO

Jun. 9, 2025
