Hi, First, thank you for this project. It seems a nice alternative to the ad hoc code usually used with 3D image datasets. After running through the colab tutorial, my understanding is that the composed transformation(s) are applied to the dataset one time, producing a transformed dataset. Then, during training, patches are periodically sampled from this transformed dataset. My questions are about augmentation. For augmentation, is the suggested method to generate an augmented dataset all at once prior to training, or can the augmentation transforms be performed on the fly during training? Are there any examples of the latter? In the examples I have seen, the transformed dataset contains the same number of samples as the initial dataset. Is there a simple way to augment an entire SubjectDataset, i.e. applying the composed transform a fixed number of times to generate a variety of samples? |
Hi, @vinpa64.
That is not accurate. When the dataset is indexed during training, 1) the volume is loaded, 2) the composed transform is applied, and 3) patches are sampled and added to the queue. This means that the augmentation transforms are different at each iteration, which is consistent with typical `torchvision` applications.