Framework to support deep-learning-based computer-vision research in microscopy image analysis. It leverages and extends several PyTorch-based frameworks and tools.
- Install Miniconda from: https://docs.anaconda.com/miniconda/ (for Linux, macOS, and Windows)
- Install CUDA:
  - Linux: https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64
  - Windows: https://developer.nvidia.com/cuda-downloads?target_os=Windows&target_arch=x86_64
  - macOS does not support CUDA; PyTorch will use `mps` on M1 processors.
$ git clone https://github.com/aarpon/qute
$ cd qute
$ conda create -n qute-env python  # Minimum supported version is 3.11
$ conda activate qute-env
$ pip install -e .
On Windows, PyTorch with CUDA acceleration must be installed explicitly:
$ python -m pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
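To confirm that the editable install succeeded, a quick import check is enough (this assumes nothing beyond the package being importable as qute):

$ python -c "import qute"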
- Linux and Windows:
$ python -c "import torch; print(torch.cuda.is_available())"
True
- macOS M1:
$ python -c "import torch; print(torch.backends.mps.is_available())"
True
The high-level qute API provides easy-to-use objects that manage whole training, fine-tuning, and prediction workflows following a user-defined configuration file. Configuration templates can be found in config_samples/.
To get started with the high-level API, try:
$ python qute/examples/cell_segmentation_demo_unet.py
Configuration parameters are explained in config_samples/.
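To illustrate the shape of a config-driven run, here is a minimal sketch; the file name, section, and key names below are hypothetical stand-ins rather than qute's actual API, so consult the demo script and config_samples/ for the real entry points:

```python
# Illustrative sketch of a config-driven workflow; all names below
# (file name, section, keys) are hypothetical stand-ins.
from configparser import ConfigParser

def run_workflow(config_path: str) -> None:
    config = ConfigParser()
    config.read(config_path)
    # A high-level workflow object would now build the data module,
    # model, and trainer from these settings and launch the requested
    # workflow (training, fine-tuning, or prediction).
    mode = config.get("settings", "trainer_mode", fallback="train")
    print(f"Would launch a '{mode}' workflow configured by {config_path}")

run_workflow("config_samples/example.ini")  # hypothetical file name
```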
To follow the training progress in Tensorboard, run:
$ tensorboard --logdir ${HOME}/Documents/qute/
and then open TensorBoard at http://localhost:6006/.
The low-level API allows easy extension of qute for research and prototyping. You can find the detailed API documentation here.
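As a taste of low-level prototyping, the sketch below defines a custom intensity-normalization transform in plain PyTorch that could be composed into a preprocessing pipeline; the class is illustrative and not part of qute's documented API:

```python
import torch

class MinMaxNormalize:
    """Illustrative transform: rescale image intensities to [0, 1]."""

    def __init__(self, eps: float = 1e-8):
        self.eps = eps  # guards against division by zero on flat images

    def __call__(self, image: torch.Tensor) -> torch.Tensor:
        lo, hi = image.min(), image.max()
        return (image - lo) / (hi - lo + self.eps)

# Example: normalize a fake 16-bit microscopy image.
img = torch.randint(0, 65536, (1, 256, 256)).float()
norm = MinMaxNormalize()(img)
print(norm.min().item(), norm.max().item())  # ~0.0 and 1.0
```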
For an example of how to use `ray[tune]` to optimize hyperparameters, see examples/cell_segmentation_demo_unet_hyperparameters.py.
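For orientation, a minimal Ray Tune loop has the following shape; the objective below is a stand-in stub rather than the real training code from the demo, and the search-space key is illustrative:

```python
from ray import tune

def objective(config):
    # Stand-in for a real training run: pretend the validation loss
    # depends only on the learning rate. A function trainable may
    # return its final metrics as a dict.
    val_loss = (config["lr"] - 0.01) ** 2  # hypothetical objective
    return {"val_loss": val_loss}

tuner = tune.Tuner(
    objective,
    param_space={"lr": tune.loguniform(1e-4, 1e-1)},
    tune_config=tune.TuneConfig(metric="val_loss", mode="min", num_samples=10),
)
results = tuner.fit()
print(results.get_best_result().config)
```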