Releases: andreped/livermask
v1.5.0
What's Changed
- Added test demo app for HuggingFace space by @andreped in #22
- Added deploy (CD) and filesize (CI) workflows relevant for HF space by @andreped in #23
- Experimental support for using `livermask` as a Python package
Full Changelog: https://github.com/andreped/livermask/commits/v1.5.0
v1.4.1
What's Changed
- Added *.nii.gz file extension support by @jpdefrutos in #20
- Fixed file extension check by @jpdefrutos in #21
- Added unit tests for the *.nii.gz format to the test CI
- Fixed a bug affecting liver parenchyma masks with only one (or no) connected component
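The connected-component handling mentioned above can be illustrated with a minimal, dependency-free sketch. This is illustrative only, not livermask's actual implementation: it uses a 2D mask for brevity (real liver masks are 3D volumes) and keeps only the largest 4-connected component, degrading gracefully when the mask has one or zero components.

```python
from collections import deque

def largest_component(mask):
    """Return a copy of a 2D binary mask keeping only its largest
    4-connected component. Masks with zero or one components come
    back with the same foreground."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    best = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # Breadth-first flood fill of one component.
                comp, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    # Rebuild the mask from the largest component only.
    out = [[0] * cols for _ in range(rows)]
    for y, x in best:
        out[y][x] = 1
    return out
```

In practice one would use an optimized labeling routine (e.g. from scipy.ndimage) on the 3D volume, but the edge cases are the same: an empty mask yields an empty mask, and a single-component mask is returned unchanged.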
New Contributors
- @jpdefrutos made their first contribution in #20
Full Changelog: v1.4.0...v1.4.1
v1.4.0
trained-models-v1
This release simply makes the pretrained models available through GitHub Releases.
This change was made due to instabilities when hosting large files on Google Drive.
v1.3.1
Minor bug fixes, mostly relevant for macOS, plus a fix for a critical bug in the vessel model.
Changes:
- Included the yaml file as a dependency, as it was previously missing from the package
- Updated the path to the yaml file to work across operating systems
- Changed the multiprocessing start method to "spawn" to work on macOS
- livermask now works without CuPy installed and without needing to specify the "--cpu" flag
Full Changelog: v1.3.0...v1.3.1
v1.3.0
Changes:
- Added option for vessel segmentation (hepatic vascular system)
- Added a pretrained deep vessel model
- Added Chainer and CuPy as dependencies to support the new model
- Added both CPU and GPU support for the new model
- Fixed GPU memory leakage so that two models can be run sequentially
- Major refactoring
Full Changelog: v1.2.0...v1.3.0
v1.3.0-alpha
Update README.md
v1.2.0
v1.1.0
v1.0.0
First major release of the livermask command line tool.
Features:
- Simply use pip to install the program
- The program can then be used as a command line tool
- Supports and has been tested on Ubuntu Linux, Windows, and macOS
- Supports inference using a dedicated graphics card (NVIDIA)
- Automatically runs inference on the GPU if one is available
- Option to force computation on the CPU (if GPU resources are scarce)
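As a usage sketch of the features above (the `--input`/`--output` flag names are assumptions based on common CLI conventions; only the `--cpu` flag is confirmed by these notes):

```shell
# Install the command line tool from PyPI
pip install livermask

# Hypothetical invocation: segment a CT volume, forcing computation on the CPU.
# The --input/--output flag names are assumptions; --cpu is documented above.
livermask --input patient.nii --output liver-mask.nii --cpu
```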