Framework to support deep-learning-based computer-vision research in microscopy image analysis. It leverages and extends several PyTorch-based frameworks and tools.

## Installation

### Install prerequisites

### Install qute

```console
$ git clone https://github.com/aarpon/qute
$ cd qute
$ conda create -n qute-env python  # Minimum supported version is 3.11
$ conda activate qute-env
$ pip install -e .
```

On Windows, PyTorch with CUDA acceleration has to be installed explicitly:

```console
$ python -m pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
```

### Test if GPU acceleration is available

- Linux and Windows:

  ```console
  $ python -c "import torch; print(torch.cuda.is_available())"
  True
  ```

- macOS M1:

  ```console
  $ python -c "import torch; print(torch.backends.mps.is_available())"
  True
  ```
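In your own scripts, the two checks above can be combined to pick the fastest available device. The sketch below shows one way to do this; `select_device` is a hypothetical helper of ours, not part of the qute API.

```python
def select_device(cuda_available: bool, mps_available: bool) -> str:
    """Return the name of the fastest available torch device.

    The availability flags would normally come from
    torch.cuda.is_available() and torch.backends.mps.is_available().
    """
    if cuda_available:
        return "cuda"   # NVIDIA GPU (Linux/Windows)
    if mps_available:
        return "mps"    # Apple Silicon GPU (macOS)
    return "cpu"        # Fallback: no accelerator found


# Hypothetical usage with torch installed:
# import torch
# device = torch.device(
#     select_device(torch.cuda.is_available(),
#                   torch.backends.mps.is_available())
# )
print(select_device(cuda_available=False, mps_available=True))  # mps
```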

## How to use

### High-level API

The high-level qute API provides easy-to-use objects that manage complete training, fine-tuning, and prediction workflows following a user-defined configuration file. Configuration templates can be found in `config_samples/`.


To get started with the high-level API, try:

```console
$ python qute/examples/cell_segmentation_demo_unet.py
```

Configuration parameters are explained in `config_samples/`.

To follow the training progress in TensorBoard, run:

```console
$ tensorboard --logdir ${HOME}/Documents/qute/
```

and then open TensorBoard at http://localhost:6006/.

### Low-level API

The low-level API allows easy extension of qute for research and prototyping. You can find the detailed API documentation here.

### Hyperparameter optimization

For an example of how to use `ray[tune]` to optimize hyperparameters, see `examples/cell_segmentation_demo_unet_hyperparameters.py`.
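Independently of `ray[tune]`, the core idea of hyperparameter search can be sketched in a few lines of plain Python. The objective function and search space below are purely illustrative (they do not train a model and are not taken from qute); in a real run the objective would return a validation metric from a training job.

```python
import random


def objective(lr: float, batch_size: int) -> float:
    """Stand-in for a validation loss; illustrative only.

    Pretends the optimum lies at lr=1e-3 and batch_size=16.
    """
    return abs(lr - 1e-3) + abs(batch_size - 16) / 100


def random_search(n_trials: int, seed: int = 0) -> dict:
    """Sample random configurations and keep the best-scoring one."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        trial = {
            "lr": 10 ** rng.uniform(-5, -1),       # log-uniform learning rate
            "batch_size": rng.choice([4, 8, 16, 32]),
        }
        loss = objective(**trial)
        if best is None or loss < best["loss"]:
            best = {**trial, "loss": loss}
    return best


print(random_search(n_trials=50))
```

`ray[tune]` automates exactly this loop at scale, adding schedulers, early stopping, and distributed execution on top of a user-supplied objective.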