brainstorm

Implementation of the paper "Data Augmentation Using Learned Transformations for One-Shot Medical Image Segmentation".

Paper: arXiv link (https://arxiv.org/abs/1902.09383)

This project depends on the following repositories. Please place all of them in the same parent directory as this one.
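
A minimal layout sketch, assuming a hypothetical parent directory and placeholder names for the dependency repos (substitute the actual repositories):

mkdir -p ~/code && cd ~/code
git clone https://github.com/Lzf-Peter/brainstorm.git
git clone https://github.com/<user>/<dependency-repo>.git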

Training transform models

Spatial and appearance transform models can be trained by specifying the GPU ID, dataset name, and model name.

python main.py trans --gpu 0 --data mri-100-csts2 --model flow-bds
python main.py trans --gpu 0 --data mri-100-csts2 --model color-unet

Each experiment will create a results directory under ./experiments by default, so make sure that location exists.
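
If it does not already exist, it can be created up front (a plain shell step, not part of the repo's scripts):

mkdir -p ./experiments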

Training a segmentation network

A segmentation network can be trained with the following:

python main.py fss --gpu 0 --data mri-100-csts2

Again, results will be placed under ./experiments. To evaluate trained segmenters, see the code in evaluate_segmenters.py; you will need to modify it to point at your trained models.
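
For example, the result directories created during training can be listed to find the paths to reference (assuming the default ./experiments location described above):

ls ./experiments/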

Repo name inspired by the Magic: The Gathering card Brainstorm.
