Training model on 7T MP2RAGE images #65
## Obtaining GT labels and nnUNet model training

### 1. Obtaining GT labels

I ran the T2w r20240523 model on inverse UNIT1 data (5 subjects) to obtain initial rootlets segmentations (a sketch of the intensity inversion follows below). These segmentations needed manual corrections (see below) to be considered ground-truth labels.

NOTE: Each manually corrected label fits all 3 contrasts (INV1, INV2 and UNIT1), which is a big advantage for us.

### 2. Model training

I created 4 datasets for 4 model trainings:
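For reference, the inverse UNIT1 (UNIT1-neg) images can be produced with a simple intensity flip. This is a minimal sketch, assuming a `max - intensity` inversion; the exact formula used for the dataset, and the file names below, are assumptions:

```python
import nibabel as nib
import numpy as np

def invert_unit1(in_path: str, out_path: str) -> None:
    """Create an inverse (UNIT1-neg) image by flipping the intensity range.

    NOTE: "max - intensity" is an assumption and may need to be adapted
    to however the inverse UNIT1 images were actually generated.
    """
    img = nib.load(in_path)
    data = img.get_fdata()
    inverted = data.max() - data  # bright structures become dark and vice versa
    nib.save(nib.Nifti1Image(inverted.astype(np.float32), img.affine), out_path)

# Hypothetical file names following a BIDS-like convention:
invert_unit1("sub-01_UNIT1.nii.gz", "sub-01_desc-neg_UNIT1.nii.gz")
```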
### 3. Results - one testing subject

### 4. Results summary - one testing subject
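The results in this and the following sections are reported as per-level Dice scores. A minimal sketch of computing Dice per spinal level, assuming both the GT and the prediction encode each level as a distinct integer label (the level-to-integer mapping is an assumption):

```python
import nibabel as nib
import numpy as np

def dice_per_level(gt_path: str, pred_path: str) -> dict:
    """Dice score per spinal level, assuming both NIfTI label maps encode
    each level as a distinct integer value (e.g. 2 = C2, 3 = C3, ...)."""
    gt = np.rint(nib.load(gt_path).get_fdata()).astype(int)
    pred = np.rint(nib.load(pred_path).get_fdata()).astype(int)
    scores = {}
    for level in np.unique(gt[gt > 0]):
        g, p = gt == level, pred == level
        denom = g.sum() + p.sum()
        # Dice = 0 when the model predicts nothing at a level present in the GT
        scores[int(level)] = 2.0 * np.logical_and(g, p).sum() / denom if denom else 0.0
    return scores
```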
## nnUNet model training (10 vs 15 training subjects, default vs increased patch size)

More manual corrections were made (see hc-leipzig-7t-mp2rage_train-test_split.csv) and 6 subjects were excluded due to poor data quality. We then trained single-contrast models (based on UNIT1-neg images) and multi-contrast models (based on UNIT1, INV1 and INV2 images); a sketch of the multi-contrast dataset layout follows below. The new results are compared in the following sections.
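Since one manually corrected label fits all three contrasts, the multi-contrast (MIX) dataset can reuse the same GT file for each contrast, with each contrast entering nnUNet as a separate training image. A minimal sketch of this layout in the nnUNet raw-data convention; all folder and file names below are placeholders, not the actual dataset paths:

```python
import shutil
from pathlib import Path

# Hypothetical paths; adjust to the actual hc-leipzig-7t-mp2rage layout
# and nnUNet raw-data folder.
bids = Path("hc-leipzig-7t-mp2rage")
raw = Path("nnUNet_raw/Dataset027_MP2RAGEMix")
(raw / "imagesTr").mkdir(parents=True, exist_ok=True)
(raw / "labelsTr").mkdir(parents=True, exist_ok=True)

train_subjects = ["sub-01", "sub-02"]  # placeholder; 15 subjects x 3 contrasts = 45 images
for sub in train_subjects:
    for contrast in ["inv-1_MP2RAGE", "inv-2_MP2RAGE", "UNIT1"]:
        case = f"{sub}_{contrast}"
        # Each contrast is treated as its own training image (channel 0000)...
        shutil.copy(bids / sub / "anat" / f"{case}.nii.gz",
                    raw / "imagesTr" / f"{case}_0000.nii.gz")
        # ...while the subject's single manually corrected label is reused
        # for all three contrasts, since it fits INV1, INV2 and UNIT1 alike.
        shutil.copy(bids / "derivatives" / "labels" / sub / "anat" / f"{sub}_label-rootlets.nii.gz",
                    raw / "labelsTr" / f"{case}.nii.gz")
```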
### UNIT1-neg models (single-contrast)

#### Training with default vs increased patch size

Training log (Dataset026) graph, model settings: 15 training subjects, 1000 epochs, default patch size [192, 96, 128], fold 0.

Training log (Dataset028) graph, model settings: 15 training subjects, 1000 epochs, increased patch size [352, 96, 128], fold 0 (a sketch of the patch-size override follows at the end of this section).

#### Impact of default vs increased patch size (4 testing subjects)

Increasing the patch size (Dataset028) had a positive effect, particularly at the C2 and C3 levels. With the default patch size (Dataset026), training didn't start at these levels, resulting in a Dice score of 0 for the testing subjects. At the other levels, the performance on the testing data was similar between the two models.

#### Impact of increased patch size and different numbers of training subjects

Spinal level C2: Increasing the patch size led to earlier training initiation at the C2 level in both cases compared to the default patch size (lighter vs. darker colors). Additionally, increasing the number of training subjects also resulted in earlier training at the C2 level (dark blue vs. dark red).

Spinal level C3: Increasing the patch size and the number of subjects does not have as significant an impact at the C3 level as it does at the C2 level.

### Multi-contrast models (MIX)

NOTE: We considered the INV1, INV2 and UNIT1 images as a multi-contrast dataset.

#### Training with default vs increased patch size

Training log (Dataset027) graph, model settings: 45 training images, 2000 epochs, default patch size [192, 96, 128], fold 0.

Training log (Dataset029) graph, model settings: 45 training images, 2000 epochs, increased patch size [352, 96, 128], fold 0.

#### Impact of default vs increased patch size (4 testing subjects)

Increasing the patch size had a positive effect, particularly at the C2 and C3 levels. With the default patch size, training didn't start at these levels, resulting in a Dice score of 0 for the testing subjects. At the other levels, the performance on the testing data was similar between the two models.

#### Impact of increased patch size and different numbers of training subjects

Spinal level C2: Increasing the patch size led to earlier training initiation at the C2 level (lighter red vs. darker red).

Spinal level C3: Increasing the patch size led to earlier training initiation at the C3 level (lighter red vs. darker red). Additionally, increasing the number of training subjects also resulted in earlier training at the C3 level (dark blue vs. dark red).
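The increased patch size [352, 96, 128] is not something nnUNet derives on its own; one way to set it in nnUNetv2 is to edit the generated plans file before training. A minimal sketch, assuming nnUNetv2's default folder layout (the dataset folder name is a placeholder):

```python
import json
from pathlib import Path

# Hypothetical path: nnUNetv2 writes this file during nnUNetv2_plan_and_preprocess;
# the dataset folder name is an assumption.
plans_path = Path("nnUNet_preprocessed/Dataset028_Unit1Neg/nnUNetPlans.json")

plans = json.loads(plans_path.read_text())
# Replace the automatically derived patch size [192, 96, 128] with the larger
# [352, 96, 128], so more superior-inferior context (e.g. the C2/C3 region
# together with the lower levels) fits into a single patch.
plans["configurations"]["3d_fullres"]["patch_size"] = [352, 96, 128]
plans_path.write_text(json.dumps(plans, indent=2))
# Re-run preprocessing/training afterwards so the new patch size takes effect.
```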
### Comparison single-contrast vs multi-contrast model (UNIT1-neg vs UNIT1 data)

Model settings:

The performance of the single-contrast and multi-contrast models is similar, but the multi-contrast model has the advantage of being directly applicable to the original MP2RAGE data: unlike the UNIT1-neg single-contrast model, there is no need to create inverse images.
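To illustrate that last point, the multi-contrast model can be run directly on the original images with nnUNetv2's standard inference CLI; the dataset ID, fold, and folder names below are placeholders:

```python
import subprocess

# Hypothetical dataset ID and folders; nnUNetv2_predict is the standard nnUNetv2 CLI.
subprocess.run(
    [
        "nnUNetv2_predict",
        "-i", "mp2rage_images/",   # original UNIT1/INV1/INV2 images, no inversion needed
        "-o", "rootlets_predictions/",
        "-d", "029",               # multi-contrast model with increased patch size
        "-c", "3d_fullres",
        "-f", "0",
    ],
    check=True,
)
```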
### Summary
This issue tracks the training of the model on 7T MP2RAGE images (hc-leipzig-7t-mp2rage).

Steps (originally posted in #63 (comment)):
TODO: describe the training, tagging @KaterinaKrejci231054