merge main
ckolbPTB committed Nov 16, 2024
2 parents 9daee3f + 8d24ebb commit 90f3a9f
Showing 11 changed files with 251 additions and 116 deletions.
4 changes: 2 additions & 2 deletions .pre-commit-config.yaml
Original file line number Diff line number Diff line change
@@ -26,10 +26,10 @@ repos:
- id: typos

- repo: https://github.com/fzimmermann89/check_all
rev: v1.0
rev: v1.1
hooks:
- id: check-init-all
args: [--double-quotes]
args: [--double-quotes, --fix]
exclude: ^tests/

- repo: https://github.com/pre-commit/mirrors-mypy
46 changes: 23 additions & 23 deletions examples/qmri_sg_challenge_2024_t1.ipynb
@@ -5,7 +5,7 @@
"id": "0f82262f",
"metadata": {},
"source": [
"# QMRI Challenge ISMRM 2024 - T1 mapping"
"# QMRI Challenge ISMRM 2024 - $T_1$ mapping"
]
},
{
@@ -40,7 +40,7 @@
"source": [
"### Overview\n",
"The dataset consists of images obtained at 10 different inversion times using a turbo spin echo sequence. Each\n",
"inversion time is saved in a separate DICOM file. In order to obtain a T1 map, we are going to:\n",
"inversion time is saved in a separate DICOM file. In order to obtain a $T_1$ map, we are going to:\n",
"- download the data from Zenodo\n",
"- read in the DICOM files (one for each inversion time) and combine them in an IData object\n",
"- define a signal model and data loss (mean-squared error) function\n",
@@ -105,7 +105,7 @@
"fig, axes = plt.subplots(1, 3, squeeze=False)\n",
"for idx, ax in enumerate(axes.flatten()):\n",
" ax.imshow(torch.abs(idata_multi_ti.data[idx, 0, 0, :, :]))\n",
" ax.set_title(f'TI = {idata_multi_ti.header.ti[idx]:.0f}ms')"
" ax.set_title(f'TI = {idata_multi_ti.header.ti[idx]:.3f}s')"
]
},
{
@@ -116,9 +116,9 @@
"### Signal model and loss function\n",
"We use the model $q$\n",
"\n",
"$q(TI) = M_0 (1 - e^{-TI/T1})$\n",
"$q(TI) = M_0 (1 - e^{-TI/T_1})$\n",
"\n",
"with the equilibrium magnetization $M_0$, the inversion time $TI$, and $T1$. We have to keep in mind that the DICOM\n",
"with the equilibrium magnetization $M_0$, the inversion time $TI$, and $T_1$. We have to keep in mind that the DICOM\n",
"images only contain the magnitude of the signal. Therefore, we need $|q(TI)|$:"
]
},
@@ -162,7 +162,7 @@
"source": [
"Now we can simply combine the two into a functional to solve\n",
"\n",
"$ \\min_{M_0, T1} || |q(M_0, T1, TI)| - x||_2^2$"
"$ \\min_{M_0, T_1} || |q(M_0, T_1, TI)| - x||_2^2$"
]
},
{
@@ -187,11 +187,11 @@
"To increase our chances of reaching the global minimum, we can ensure that our starting\n",
"values are already close to the global minimum. We need a good starting point for each pixel.\n",
"\n",
"One option to get a good starting point is to calculate the signal curves for a range of T1 values and then check\n",
"One option to get a good starting point is to calculate the signal curves for a range of $T_1$ values and then check\n",
"for each pixel which of these signal curves fits best. This is similar to what is done for MR Fingerprinting. So we\n",
"are going to:\n",
"- define a list of realistic T1 values (we call this a dictionary of T1 values)\n",
"- calculate the signal curves corresponding to each of these T1 values\n",
"- define a list of realistic $T_1$ values (we call this a dictionary of $T_1$ values)\n",
"- calculate the signal curves corresponding to each of these $T_1$ values\n",
"- compare the signal curves to the signals of each voxel (we use the maximum of the dot-product as a metric of how\n",
"well the signals fit to each other)"
]
@@ -203,8 +203,8 @@
"metadata": {},
"outputs": [],
"source": [
"# Define 100 T1 values between 100 and 3000 ms\n",
"t1_dictionary = torch.linspace(100, 3000, 100)\n",
"# Define 100 T1 values between 0.1 and 3.0 s\n",
"t1_dictionary = torch.linspace(0.1, 3.0, 100)\n",
"\n",
"# Calculate the signal corresponding to each of these T1 values. We set M0 to 1, but this is arbitrary because M0 is\n",
"# just a scaling factor and we are going to normalize the signal curves.\n",
@@ -227,8 +227,8 @@
"metadata": {},
"outputs": [],
"source": [
"# The image with the longest inversion time is a good approximation of the equilibrium magnetization\n",
"m0_start = torch.abs(idata_multi_ti.data[torch.argmax(idata_multi_ti.header.ti), ...])"
"# The maximum absolute value observed is a good approximation for m0\n",
"m0_start = torch.amax(torch.abs(idata_multi_ti.data), 0)"
]
},
{
@@ -242,11 +242,11 @@
"fig, axes = plt.subplots(1, 2, figsize=(8, 2), squeeze=False)\n",
"colorbar_ax = [make_axes_locatable(ax).append_axes('right', size='5%', pad=0.05) for ax in axes[0, :]]\n",
"im = axes[0, 0].imshow(m0_start[0, 0, ...])\n",
"axes[0, 0].set_title('M0 start values')\n",
"axes[0, 0].set_title('$M_0$ start values')\n",
"fig.colorbar(im, cax=colorbar_ax[0])\n",
"im = axes[0, 1].imshow(t1_start[0, 0, ...], vmin=0, vmax=2500)\n",
"axes[0, 1].set_title('T1 start values')\n",
"fig.colorbar(im, cax=colorbar_ax[1])"
"im = axes[0, 1].imshow(t1_start[0, 0, ...], vmin=0, vmax=2.5)\n",
"axes[0, 1].set_title('$T_1$ start values')\n",
"fig.colorbar(im, cax=colorbar_ax[1], label='s')"
]
},
{
@@ -266,7 +266,7 @@
"source": [
"# Hyperparameters for optimizer\n",
"max_iter = 2000\n",
"lr = 1e0\n",
"lr = 1e-1\n",
"\n",
"# Run optimization\n",
"params_result = adam(functional, [m0_start, t1_start], max_iter=max_iter, lr=lr)\n",
@@ -283,7 +283,7 @@
"### Visualize the final results\n",
"To get an impression of how well the fit has worked, we are going to calculate the relative error between\n",
"\n",
"$E_{relative} = \\sum_{TI}\\frac{|(q(M_0, T1, TI) - x)|}{|x|}$\n",
"$E_{relative} = \\sum_{TI}\\frac{|(q(M_0, T_1, TI) - x)|}{|x|}$\n",
"\n",
"on a voxel-by-voxel basis"
]
@@ -304,11 +304,11 @@
"fig, axes = plt.subplots(1, 3, figsize=(10, 2), squeeze=False)\n",
"colorbar_ax = [make_axes_locatable(ax).append_axes('right', size='5%', pad=0.05) for ax in axes[0, :]]\n",
"im = axes[0, 0].imshow(m0[0, 0, ...])\n",
"axes[0, 0].set_title('M0')\n",
"axes[0, 0].set_title('$M_0$')\n",
"fig.colorbar(im, cax=colorbar_ax[0])\n",
"im = axes[0, 1].imshow(t1[0, 0, ...], vmin=0, vmax=2500)\n",
"axes[0, 1].set_title('T1')\n",
"fig.colorbar(im, cax=colorbar_ax[1])\n",
"im = axes[0, 1].imshow(t1[0, 0, ...], vmin=0, vmax=2.5)\n",
"axes[0, 1].set_title('$T_1$')\n",
"fig.colorbar(im, cax=colorbar_ax[1], label='s')\n",
"im = axes[0, 2].imshow(relative_absolute_error[0, 0, ...], vmin=0, vmax=1.0)\n",
"axes[0, 2].set_title('Relative error')\n",
"fig.colorbar(im, cax=colorbar_ax[2])"
46 changes: 23 additions & 23 deletions examples/qmri_sg_challenge_2024_t1.py
@@ -1,5 +1,5 @@
# %% [markdown]
# # QMRI Challenge ISMRM 2024 - T1 mapping
# # QMRI Challenge ISMRM 2024 - $T_1$ mapping

# %%
# Imports
@@ -22,7 +22,7 @@
# %% [markdown]
# ### Overview
# The dataset consists of images obtained at 10 different inversion times using a turbo spin echo sequence. Each
# inversion time is saved in a separate DICOM file. In order to obtain a T1 map, we are going to:
# inversion time is saved in a separate DICOM file. In order to obtain a $T_1$ map, we are going to:
# - download the data from Zenodo
# - read in the DICOM files (one for each inversion time) and combine them in an IData object
# - define a signal model and data loss (mean-squared error) function
@@ -53,15 +53,15 @@
fig, axes = plt.subplots(1, 3, squeeze=False)
for idx, ax in enumerate(axes.flatten()):
ax.imshow(torch.abs(idata_multi_ti.data[idx, 0, 0, :, :]))
ax.set_title(f'TI = {idata_multi_ti.header.ti[idx]:.0f}ms')
ax.set_title(f'TI = {idata_multi_ti.header.ti[idx]:.3f}s')

# %% [markdown]
# ### Signal model and loss function
# We use the model $q$
#
# $q(TI) = M_0 (1 - e^{-TI/T1})$
# $q(TI) = M_0 (1 - e^{-TI/T_1})$
#
# with the equilibrium magnetization $M_0$, the inversion time $TI$, and $T1$. We have to keep in mind that the DICOM
# with the equilibrium magnetization $M_0$, the inversion time $TI$, and $T_1$. We have to keep in mind that the DICOM
# images only contain the magnitude of the signal. Therefore, we need $|q(TI)|$:
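# The magnitude model described above can be sketched in plain PyTorch. This is a minimal
# stand-in, not mrpro's actual signal-model class (which is elided from this diff); the
# example T1 and TI values are made up, with times in seconds to match the unit change
# introduced by this commit.

```python
import torch

def q_magnitude(m0: torch.Tensor, t1: torch.Tensor, ti: torch.Tensor) -> torch.Tensor:
    """Magnitude inversion-recovery signal |q(TI)| = |M0 * (1 - exp(-TI / T1))|."""
    return torch.abs(m0 * (1 - torch.exp(-ti / t1)))

# Signal recovers towards |M0| as TI grows
signal = q_magnitude(torch.tensor(1.0), torch.tensor(1.0), torch.linspace(0.1, 10.0, 5))
```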

# %%
@@ -76,7 +76,7 @@
# %% [markdown]
# Now we can simply combine the two into a functional to solve
#
# $ \min_{M_0, T1} || |q(M_0, T1, TI)| - x||_2^2$
# $ \min_{M_0, T_1} || |q(M_0, T_1, TI)| - x||_2^2$
# %%
functional = mse @ model

@@ -88,17 +88,17 @@
# To increase our chances of reaching the global minimum, we can ensure that our starting
# values are already close to the global minimum. We need a good starting point for each pixel.
#
# One option to get a good starting point is to calculate the signal curves for a range of T1 values and then check
# One option to get a good starting point is to calculate the signal curves for a range of $T_1$ values and then check
# for each pixel which of these signal curves fits best. This is similar to what is done for MR Fingerprinting. So we
# are going to:
# - define a list of realistic T1 values (we call this a dictionary of T1 values)
# - calculate the signal curves corresponding to each of these T1 values
# - define a list of realistic $T_1$ values (we call this a dictionary of $T_1$ values)
# - calculate the signal curves corresponding to each of these $T_1$ values
# - compare the signal curves to the signals of each voxel (we use the maximum of the dot-product as a metric of how
# well the signals fit to each other)
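# The dictionary-matching steps above can be sketched as follows. This is a self-contained
# illustration with synthetic voxels; the tensor shapes, the M0 = 1 normalization, and the
# two test T1 values are assumptions, not the notebook's actual (elided) code.

```python
import torch

ti = torch.linspace(0.1, 2.0, 10)[:, None]      # inversion times in s
t1_dictionary = torch.linspace(0.1, 3.0, 100)   # candidate T1 values in s

# Signal curve for each dictionary entry (M0 = 1); normalize each curve (column)
dictionary = torch.abs(1 - torch.exp(-ti / t1_dictionary))
dictionary = dictionary / dictionary.norm(dim=0)

# Two synthetic 'voxels' with known T1 of 0.8 s and 1.5 s
signals = torch.abs(1 - torch.exp(-ti / torch.tensor([[0.8, 1.5]])))

# Best match = maximum dot product between normalized curves and measured signals
idx_best_match = torch.argmax(dictionary.T @ signals, dim=0)
t1_start = t1_dictionary[idx_best_match]
```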

# %%
# Define 100 T1 values between 100 and 3000 ms
t1_dictionary = torch.linspace(100, 3000, 100)
# Define 100 T1 values between 0.1 and 3.0 s
t1_dictionary = torch.linspace(0.1, 3.0, 100)

# Calculate the signal corresponding to each of these T1 values. We set M0 to 1, but this is arbitrary because M0 is
# just a scaling factor and we are going to normalize the signal curves.
@@ -114,27 +114,27 @@
t1_start = rearrange(t1_dictionary[idx_best_match], '(y x)->1 1 y x', y=n_y, x=n_x)

# %%
# The image with the longest inversion time is a good approximation of the equilibrium magnetization
m0_start = torch.abs(idata_multi_ti.data[torch.argmax(idata_multi_ti.header.ti), ...])
# The maximum absolute value observed is a good approximation for m0
m0_start = torch.amax(torch.abs(idata_multi_ti.data), 0)

# %%
# Visualize the starting values
fig, axes = plt.subplots(1, 2, figsize=(8, 2), squeeze=False)
colorbar_ax = [make_axes_locatable(ax).append_axes('right', size='5%', pad=0.05) for ax in axes[0, :]]
im = axes[0, 0].imshow(m0_start[0, 0, ...])
axes[0, 0].set_title('M0 start values')
axes[0, 0].set_title('$M_0$ start values')
fig.colorbar(im, cax=colorbar_ax[0])
im = axes[0, 1].imshow(t1_start[0, 0, ...], vmin=0, vmax=2500)
axes[0, 1].set_title('T1 start values')
fig.colorbar(im, cax=colorbar_ax[1])
im = axes[0, 1].imshow(t1_start[0, 0, ...], vmin=0, vmax=2.5)
axes[0, 1].set_title('$T_1$ start values')
fig.colorbar(im, cax=colorbar_ax[1], label='s')

# %% [markdown]
# ### Carry out fit

# %%
# Hyperparameters for optimizer
max_iter = 2000
lr = 1e0
lr = 1e-1

# Run optimization
params_result = adam(functional, [m0_start, t1_start], max_iter=max_iter, lr=lr)
@@ -146,7 +146,7 @@
# ### Visualize the final results
# To get an impression of how well the fit has worked, we are going to calculate the relative error between
#
# $E_{relative} = \sum_{TI}\frac{|(q(M_0, T1, TI) - x)|}{|x|}$
# $E_{relative} = \sum_{TI}\frac{|(q(M_0, T_1, TI) - x)|}{|x|}$
#
# on a voxel-by-voxel basis
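# The relative-error formula can be written out directly. The tiny tensors below are
# hypothetical measured signals x and model predictions q, stacked along the TI dimension;
# the notebook's real computation is elided from this diff.

```python
import torch

# Two 'voxels', two inversion times: rows index TI, columns index voxels
x = torch.tensor([[1.0, 2.0], [2.0, 4.0]])   # measured signals
q = torch.tensor([[1.1, 1.9], [2.2, 3.8]])   # model predictions

# E_relative = sum over TI of |q - x| / |x|, evaluated per voxel
relative_absolute_error = torch.sum(torch.abs(q - x) / torch.abs(x), dim=0)
```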

@@ -158,11 +158,11 @@
fig, axes = plt.subplots(1, 3, figsize=(10, 2), squeeze=False)
colorbar_ax = [make_axes_locatable(ax).append_axes('right', size='5%', pad=0.05) for ax in axes[0, :]]
im = axes[0, 0].imshow(m0[0, 0, ...])
axes[0, 0].set_title('M0')
axes[0, 0].set_title('$M_0$')
fig.colorbar(im, cax=colorbar_ax[0])
im = axes[0, 1].imshow(t1[0, 0, ...], vmin=0, vmax=2500)
axes[0, 1].set_title('T1')
fig.colorbar(im, cax=colorbar_ax[1])
im = axes[0, 1].imshow(t1[0, 0, ...], vmin=0, vmax=2.5)
axes[0, 1].set_title('$T_1$')
fig.colorbar(im, cax=colorbar_ax[1], label='s')
im = axes[0, 2].imshow(relative_absolute_error[0, 0, ...], vmin=0, vmax=1.0)
axes[0, 2].set_title('Relative error')
fig.colorbar(im, cax=colorbar_ax[2])
28 changes: 15 additions & 13 deletions examples/qmri_sg_challenge_2024_t2_star.ipynb
@@ -5,7 +5,7 @@
"id": "5efa18e9",
"metadata": {},
"source": [
"# QMRI Challenge ISMRM 2024 - T2* mapping"
"# QMRI Challenge ISMRM 2024 - $T_2^*$ mapping"
]
},
{
@@ -39,7 +39,7 @@
"source": [
"### Overview\n",
"The dataset consists of gradient echo images obtained at 11 different echo times, each saved in a separate DICOM file.\n",
"In order to obtain a T2* map, we are going to:\n",
"In order to obtain a $T_2^*$ map, we are going to:\n",
"- download the data from Zenodo\n",
"- read in the DICOM files (one for each echo time) and combine them in an IData object\n",
"- define a signal model (mono-exponential decay) and data loss (mean-squared error) function\n",
@@ -100,6 +100,8 @@
"source": [
"te_dicom_files = data_folder.glob('**/*.dcm')\n",
"idata_multi_te = IData.from_dicom_files(te_dicom_files)\n",
"# scaling the signal down to make the optimization easier\n",
"idata_multi_te.data[...] = idata_multi_te.data / 1500\n",
"\n",
"# Move the data to the GPU\n",
"if flag_use_cuda:\n",
@@ -120,7 +122,7 @@
"fig, axes = plt.subplots(1, 3, squeeze=False)\n",
"for idx, ax in enumerate(axes.flatten()):\n",
" ax.imshow(torch.abs(idata_multi_te.data[idx, 0, 0, :, :]).cpu())\n",
" ax.set_title(f'TE = {idata_multi_te.header.te[idx]:.0f}ms')"
" ax.set_title(f'TE = {idata_multi_te.header.te[idx]:.3f}s')"
]
},
{
@@ -131,9 +133,9 @@
"### Signal model and loss function\n",
"We use the model $q$\n",
"\n",
"$q(TE) = M_0 e^{-TE/T2^*}$\n",
"$q(TE) = M_0 e^{-TE/T_2^*}$\n",
"\n",
"with the equilibrium magnetization $M_0$, the echo time $TE$, and $T2^*$"
"with the equilibrium magnetization $M_0$, the echo time $TE$, and $T_2^*$"
]
},
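The mono-exponential decay model admits a simple log-linear least-squares estimate, which is one common way to obtain starting values for such a fit. The sketch below is an assumption-laden illustration (echo times and ground-truth values are made up), not the notebook's actual method, which uses the shortest-TE image and a fixed 20 ms guess.

```python
import torch

te = torch.linspace(0.005, 0.05, 11)   # assumed echo times in s
m0_true, t2star_true = 2.0, 0.02
signal = m0_true * torch.exp(-te / t2star_true)

# Log-linearize: log q(TE) = log M0 - TE / T2*, then solve ordinary least squares
A = torch.stack([torch.ones_like(te), -te], dim=1)
coeffs = torch.linalg.lstsq(A, torch.log(signal).unsqueeze(-1)).solution.squeeze(-1)
m0_est = torch.exp(coeffs[0])
t2star_est = 1 / coeffs[1]
```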
{
@@ -176,7 +178,7 @@
"source": [
"Now we can simply combine the two into a functional which will then solve\n",
"\n",
"$ \\min_{M_0, T2^*} ||q(M_0, T2^*, TE) - x||_2^2$"
"$ \\min_{M_0, T_2^*} ||q(M_0, T_2^*, TE) - x||_2^2$"
]
},
{
@@ -207,11 +209,11 @@
"# The shortest echo time is a good approximation of the equilibrium magnetization\n",
"m0_start = torch.abs(idata_multi_te.data[torch.argmin(idata_multi_te.header.te), ...])\n",
"# 20 ms as a starting value for T2*\n",
"t2star_start = torch.ones(m0_start.shape, dtype=torch.float32, device=m0_start.device) * 20\n",
"t2star_start = torch.ones(m0_start.shape, dtype=torch.float32, device=m0_start.device) * 20e-3\n",
"\n",
"# Hyperparameters for optimizer\n",
"max_iter = 20000\n",
"lr = 1e0\n",
"lr = 1e-3\n",
"\n",
"if flag_use_cuda:\n",
" functional.cuda()\n",
@@ -235,7 +237,7 @@
"### Visualize the final results\n",
"To get an impression of how well the fit has worked, we are going to calculate the relative error between\n",
"\n",
"$E_{relative} = \\sum_{TE}\\frac{|(q(M_0, T2^*, TE) - x)|}{|x|}$\n",
"$E_{relative} = \\sum_{TE}\\frac{|(q(M_0, T_2^*, TE) - x)|}{|x|}$\n",
"\n",
"on a voxel-by-voxel basis."
]
@@ -257,12 +259,12 @@
"colorbar_ax = [make_axes_locatable(ax).append_axes('right', size='5%', pad=0.05) for ax in axes[0, :]]\n",
"\n",
"im = axes[0, 0].imshow(m0[0, 0, ...].cpu())\n",
"axes[0, 0].set_title('M0')\n",
"axes[0, 0].set_title('$M_0$')\n",
"fig.colorbar(im, cax=colorbar_ax[0])\n",
"\n",
"im = axes[0, 1].imshow(t2star[0, 0, ...].cpu(), vmin=0, vmax=500)\n",
"axes[0, 1].set_title('T2*')\n",
"fig.colorbar(im, cax=colorbar_ax[1])\n",
"im = axes[0, 1].imshow(t2star[0, 0, ...].cpu(), vmin=0, vmax=5)\n",
"axes[0, 1].set_title('$T_2^*$')\n",
"fig.colorbar(im, cax=colorbar_ax[1], label='s')\n",
"\n",
"im = axes[0, 2].imshow(relative_absolute_error[0, 0, ...].cpu(), vmin=0, vmax=0.1)\n",
"axes[0, 2].set_title('Relative error')\n",
