1. Download and preprocess the dataset by following the instructions in [mvn/datasets/human36m_preprocessing/README.md](https://github.com/karfly/learnable-triangulation-pytorch/blob/master/mvn/datasets/human36m_preprocessing/README.md).
2. Place the preprocessed dataset in `./data/human36m`. If you don't want to store the dataset alongside the code, just create a symbolic link: `ln -s {PATH_TO_HUMAN36M_DATASET} ./data/human36m`.
3. Download the pretrained backbone weights from [here](https://drive.google.com/open?id=1TGHBfa9LsFPVS5CH6Qkcy5Jr2QsJdPEa) and place them at `./data/pretrained/human36m/pose_resnet_4.5_pixels_human36m.pth` (a ResNet-152 trained on the COCO dataset and finetuned jointly on MPII and Human3.6M).
4. If you want to train the Volumetric model, you need rough estimates of the 3D skeletons for both the train and val splits. In the paper we estimate 3D skeletons with the Algebraic model. You can use the [pretrained](#model-zoo) Algebraic model to produce predictions, or just take the [precalculated 3D skeletons](#model-zoo). A possible resulting layout of `./data` is sketched right after this list.
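
For reference, after these steps the `./data` directory might look roughly like this (the `precalculated_results` filenames are illustrative; adjust them to whatever your configs expect):

```
data/
├── human36m/                                    # preprocessed dataset (or a symlink to it)
├── pretrained/
│   └── human36m/
│       └── pose_resnet_4.5_pixels_human36m.pth  # backbone weights
└── precalculated_results/
    └── human36m/
        ├── results_train.pkl                    # rough 3D skeletons, train split
        └── results_val.pkl                      # rough 3D skeletons, val split
```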
#### CMU Panoptic
*Will be added soon*
## Model zoo

In this section we collect pretrained models and configs. All **pretrained weights** and **precalculated 3D skeletons** can be downloaded from [Google Drive](https://drive.google.com/open?id=1TGHBfa9LsFPVS5CH6Qkcy5Jr2QsJdPEa) and placed in the `./data` directory, so that the eval configs work out of the box (no additional path setup needed).

## Train

Every experiment is defined by a `.config` file. Configs for the experiments from the paper can be found in the `./experiments` directory (see the [model zoo](#model-zoo)).
#### Single-GPU
To train a Volumetric model with softmax aggregation using **1 GPU**, run:
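
A minimal sketch of the command (the config filename and the `--logdir` flag are assumptions; check `train.py --help` and the `./experiments` directory for the exact names):

```bash
# config filename is illustrative -- pick one from ./experiments;
# --logdir is an assumed flag for the output directory
python3 train.py \
  --config experiments/human36m/train/human36m_vol_softmax.yaml \
  --logdir ./logs
```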
The training will start with the config file specified by `--config`, and logs (including tensorboard files) will be stored in the log directory.
#### Multi-GPU (*in testing*)
Multi-GPU training is implemented with PyTorch's [DistributedDataParallel](https://pytorch.org/docs/stable/nn.html#distributeddataparallel). It can be used for both single-machine and multi-machine (cluster) training. To spawn the processes, use the PyTorch [launch utility](https://github.com/pytorch/pytorch/blob/master/torch/distributed/launch.py).
To train a Volumetric model with softmax aggregation using **2 GPUs on a single machine**, run:
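
A sketch using the launch utility linked above, with the same caveats as the single-GPU command (`--nproc_per_node` and `--master_port` are standard `torch.distributed.launch` arguments):

```bash
# one process per GPU; config filename is illustrative
python3 -m torch.distributed.launch --nproc_per_node=2 --master_port=2345 \
  train.py \
  --config experiments/human36m/train/human36m_vol_softmax.yaml \
  --logdir ./logs
```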