Description
Hello,
First of all, thanks for your contribution!
I am trying to create, from scratch, a custom dataset of objects that are NOT present in the ShapeNet database. To do that, I am imitating the dataset structure you shared, using BlenderProc to render the synthetic data. I have also downloaded the dataset you used for training and analysed it.
I have several questions about how the different files in the dataset are created:
- How to create a depth image exactly like in the CAMERA dataset (e.g. CAMERA/train/0000_depth.png)? Is there a script in the repository to create such a depth image? I did not understand how the depth information is encoded in an RGB image.
- How to create a depth image exactly like in the camera_full_depth folder (e.g. camera_full_depth/train/0000/0000_composed.png)?
- How to create bbox.txt for each object file in obj_models?
- How to generate camera_train.pkl and camera_val.pkl in obj_models?
- Why is mug_meta.pkl present in the obj_models folder?
- How to create norm.txt and norm_vertices.txt in obj_models/real_train?
- How to generate the 'Results' folder and all the files in it?
- In the 'sdf_rgb_pretrained' folder, how to generate the 'Latent Codes' and the all_train_ids.json inside it?
- How to generate the .pkl files in the 'gts' folder?
- Do we need to store the 6D poses of all objects in the scene somewhere, as annotations for every image?
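For context, here is what I have tried so far. On the depth question, my current guess (based on reading the NOCS-style data loading code, so please correct me if I am wrong) is that depth in millimeters is split into a high byte stored in the green channel and a low byte stored in the blue channel of the PNG. This is how I am currently packing the BlenderProc depth output; the function names are my own:

```python
import numpy as np

def encode_depth_rgb(depth_m: np.ndarray) -> np.ndarray:
    """Pack metric depth (meters) into an 8-bit, 3-channel image.

    Assumption: depth in millimeters is split into a high byte (green
    channel) and a low byte (blue channel), with red left unused --
    this mirrors how the 3-channel depth PNGs appear to be read.
    """
    depth_mm = np.round(depth_m * 1000.0).astype(np.uint16)
    high = (depth_mm >> 8).astype(np.uint8)    # most significant byte
    low = (depth_mm & 0xFF).astype(np.uint8)   # least significant byte
    rgb = np.zeros((*depth_mm.shape, 3), dtype=np.uint8)
    rgb[..., 1] = high  # green
    rgb[..., 2] = low   # blue
    return rgb

def decode_depth_rgb(rgb: np.ndarray) -> np.ndarray:
    """Inverse of encode_depth_rgb: recover depth in millimeters."""
    return rgb[..., 1].astype(np.uint16) * 256 + rgb[..., 2].astype(np.uint16)
```

The round-trip (encode then decode) gives back the depth in millimeters, which is why I believe the images look the way they do, but I would appreciate confirmation that this matches your generation script.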
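For bbox.txt, my working assumption (I have not been able to verify the exact file layout against the released data) is that it stores the axis-aligned bounding box of the mesh vertices, one corner per row. This is the sketch I am using to produce it from an OBJ file; parse_obj_vertices is my own helper:

```python
import numpy as np

def parse_obj_vertices(obj_path: str) -> np.ndarray:
    """Collect the 'v x y z' vertex lines of a Wavefront OBJ file."""
    verts = []
    with open(obj_path) as f:
        for line in f:
            if line.startswith("v "):
                verts.append([float(t) for t in line.split()[1:4]])
    return np.asarray(verts, dtype=np.float64)

def bbox_lines(verts: np.ndarray) -> str:
    """Format the axis-aligned bounding box as two whitespace-separated
    rows (max corner, then min corner) -- my guess at the bbox.txt layout."""
    vmax = verts.max(axis=0)
    vmin = verts.min(axis=0)
    return "\n".join(" ".join(f"{c:.6f}" for c in row) for row in (vmax, vmin))
```

If bbox.txt actually stores something else (e.g. the eight box corners, or a box in a normalized frame), it would be great to know.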
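Similarly, for norm.txt I am assuming it holds the parameters that map the raw mesh into a zero-centered frame whose bounding-box diagonal has unit length, since NOCS-style pipelines usually work in such a normalized object space. Whether norm.txt stores exactly these values is a guess on my part; this is how I compute them:

```python
import numpy as np

def normalization_params(verts: np.ndarray):
    """Center and scale that map vertices into a zero-centered box whose
    bounding-box diagonal has unit length. Whether norm.txt stores
    exactly these values is an assumption on my part."""
    vmin, vmax = verts.min(axis=0), verts.max(axis=0)
    center = (vmin + vmax) / 2.0
    scale = float(np.linalg.norm(vmax - vmin))  # bounding-box diagonal
    normalized = (verts - center) / scale
    return center, scale, normalized
```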
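And on the last question, this is how I am currently dumping per-image ground-truth poses from BlenderProc: one pickle per image, with a rotation, translation and scale entry for every object instance. The key names here are my own invention, not necessarily what your training code expects:

```python
import pickle
import numpy as np

def save_pose_annotations(path, instances):
    """instances: list of dicts with 'class_id', 'rotation' (3x3),
    'translation' (3,) and 'scale' (float) per object in the scene.
    The key names in the saved dict are placeholders of my own choosing."""
    gts = {
        "class_ids": np.array([d["class_id"] for d in instances], dtype=np.int32),
        "rotations": np.stack([d["rotation"] for d in instances]),       # (N, 3, 3)
        "translations": np.stack([d["translation"] for d in instances]), # (N, 3)
        "scales": np.array([d["scale"] for d in instances], dtype=np.float32),
    }
    with open(path, "wb") as f:
        pickle.dump(gts, f)
    return gts
```

Is something like this what the .pkl files in 'gts' contain, or are the poses stored in a different layout?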
Thank you in advance for your answers.