
Testing cervical model on hc-leipzig-7t-mp2rage dataset #63

Closed
KaterinaKrejci231054 opened this issue Jul 5, 2024 · 8 comments · Fixed by #64
@KaterinaKrejci231054 (Contributor) commented Jul 5, 2024

This issue describes the application of the r20240523 model (for dorsal and ventral rootlets) to images from the hc-leipzig-7t-mp2rage dataset.

Related: #45

Processing steps

Testing rootlet segmentation on raw data

For each subject, 3 raw NIfTI files are provided (labeled UNIT1, inv-1_part-mag_MP2RAGE, and inv-2_part-mag_MP2RAGE). I tested the r20240523 model on these data, but no rootlet segmentation was produced (see below):

UNIT1:

sub-sspr20_UNIT1_label-rootlets_dseg

inv-1_part-mag_MP2RAGE:

sub-sspr20_inv-1_part-mag_MP2RAGE_label-rootlets_dseg

inv-2_part-mag_MP2RAGE:

sub-sspr20_inv-2_part-mag_MP2RAGE_label-rootlets_dseg
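
For reference, a minimal sketch of how inference with an nnU-Net v2 model such as r20240523 might be invoked from Python; the model folder path and the use of the `nnUNetPredictor` API (rather than the repository's wrapper scripts) are assumptions:

```python
# Minimal nnU-Net v2 inference sketch; paths are hypothetical, and the released
# r20240523 model may instead be run via the repository's own scripts.
import torch
from nnunetv2.inference.predict_from_raw_data import nnUNetPredictor

predictor = nnUNetPredictor(device=torch.device("cuda"))
predictor.initialize_from_trained_model_folder(
    "models/r20240523/nnUNetTrainer__nnUNetPlans__3d_fullres",  # hypothetical path
    use_folds=(0,),
    checkpoint_name="checkpoint_final.pth",
)
# One inner list per case (single channel here); output names are given
# without a file extension, which nnU-Net appends itself.
predictor.predict_from_files(
    [["sub-sspr20_UNIT1.nii.gz"]],
    ["sub-sspr20_UNIT1_label-rootlets_dseg"],
    save_probabilities=False,
    overwrite=True,
)
```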

Testing rootlet segmentation on inverted data

Then, I created inverted images (intensities multiplied by -1). A rootlet segmentation was produced only for the inverted UNIT1 image (see below):
sub-sspr20_UNIT1_neg_label-rootlets_dseg
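
For reproducibility, a minimal sketch of the inversion step, assuming nibabel and a hypothetical filename:

```python
# Multiply image intensities by -1 to create the "inverted" input.
import nibabel as nib

img = nib.load("sub-sspr20_UNIT1.nii.gz")  # hypothetical filename
inv = nib.Nifti1Image((-img.get_fdata()).astype("float32"), img.affine)
nib.save(inv, "sub-sspr20_UNIT1_neg.nii.gz")
```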

@KaterinaKrejci231054 (Contributor, Author) commented Jul 9, 2024

GIFs of 3 subjects (inverted UNIT1 data) with rootlet segmentation (sub-18, sub-22, and sub-30):

sub-18

sub-22

sub-30

@valosekj (Member) commented Jul 9, 2024

Thanks for testing the model @KaterinaKrejci231054! The predictions for the inverted UNIT1 image (multiplied by -1) look promising! I believe we can leverage them for the model training. I would do the following steps:

  1. run the model on inverted UNIT1 images for 5 subjects to get initial rootlets segmentations (done in Testing hc-leipzig-7t-mp2rage dataset #64)
  2. correct the predictions; push them to git-annex (done in https://data.neuro.polymtl.ca/datasets/hc-leipzig-7t-mp2rage/pulls/2)
  3. train an initial nnUNet model using all contrasts (UNIT1, inv-1_part-mag_MP2RAGE, inv-2_part-mag_MP2RAGE). A big advantage here is that the contrasts are already coregistered, so we can use the same GT for all 3 contrasts (see the sketch after this list).
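
A minimal sketch of how the shared-GT training cases could be laid out for nnU-Net; the dataset name, subject ID, and filenames are hypothetical. Each contrast becomes a separate single-channel training case pointing at the same label:

```python
import shutil
from pathlib import Path

raw = Path("nnUNet_raw/Dataset301_rootletsMP2RAGE")  # hypothetical dataset
(raw / "imagesTr").mkdir(parents=True, exist_ok=True)
(raw / "labelsTr").mkdir(parents=True, exist_ok=True)

sub = "sub-18"
gt = f"{sub}_label-rootlets_dseg.nii.gz"  # one corrected GT per subject
for c in ["UNIT1", "inv-1_part-mag_MP2RAGE", "inv-2_part-mag_MP2RAGE"]:
    # Each contrast is its own training case (channel 0 only).
    shutil.copy(f"{sub}_{c}.nii.gz", raw / "imagesTr" / f"{sub}_{c}_0000.nii.gz")
    # The contrasts are coregistered, so the same GT serves all three cases.
    shutil.copy(gt, raw / "labelsTr" / f"{sub}_{c}.nii.gz")
```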

Note that the recommended nnU-Net configuration is now nnU-Net ResEnc L; see here. We can then train two models: one with the default trainer and a second with the nnU-Net ResEnc L preset.

@jcohenadad, what do you think?

@jcohenadad (Member) commented

Excellent plan, thank you!

p.s. looks like the top right rootlets are missing for sub-30 but I'm sure you are aware

@valosekj (Member) commented Jul 9, 2024

> Excellent plan, thank you!

Great, thank you for the confirmation! @KaterinaKrejci231054 will work on it.

> p.s. looks like the top right rootlets are missing for sub-30 but I'm sure you are aware

Yes, we are aware of this. After running the inference, we will go through the predictions and correct them.

@jcohenadad (Member) commented

Just throwing this out there, since we're talking about doing additional GT: #59 (comment)

@KaterinaKrejci231054 (Contributor, Author) commented

Each contrast shows different rootlet shapes (see below), so we need to consider which contrast to use for manual correction of the rootlet segmentation. Maybe we can use all the contrasts and make a single corrected segmentation.
sub-18_different_contrasts

@valosekj (Member) commented

> Each contrast shows different rootlet shapes (see below)

Interesting! This can be caused by the different inversion times of inv-1 and inv-2.

> we need to consider which contrast to use for manual correction of the rootlet segmentation. Maybe we can use all the contrasts and make a single corrected segmentation.

Yeah, this sounds good. Let's try to leverage information from all contrasts to make one good segmentation (one possible starting point is sketched below). Then, this single segmentation can be reused for all contrasts to train a single model segmenting all MP2RAGE contrasts.
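
One hypothetical way to bootstrap that single segmentation before manual correction (not something decided in this thread): fuse the three per-contrast predictions with a voxel-wise majority vote. Note this simplifies the multi-level rootlet labels to binary, and the filenames are hypothetical:

```python
import nibabel as nib
import numpy as np

files = [
    "sub-18_UNIT1_label-rootlets_dseg.nii.gz",
    "sub-18_inv-1_label-rootlets_dseg.nii.gz",
    "sub-18_inv-2_label-rootlets_dseg.nii.gz",
]
preds = [nib.load(f) for f in files]
masks = np.stack([p.get_fdata() > 0 for p in preds])
# Keep a voxel if at least 2 of the 3 per-contrast predictions agree.
fused = (masks.sum(axis=0) >= 2).astype(np.uint8)
nib.save(nib.Nifti1Image(fused, preds[0].affine),
         "sub-18_label-rootlets_draft.nii.gz")
```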

@valosekj (Member) commented

Model training continues as part of #65

--> closing this issue
