SLEAP-NN v0.0.1 - Initial Release
SLEAP-NN is a PyTorch-based deep learning framework for pose estimation, built on top of the SLEAP (Social LEAP Estimates Animal Poses) platform. This framework provides efficient training, inference, and evaluation tools for multi-animal pose estimation tasks.
Documentation: https://nn.sleap.ai/
Quick start
# Install with PyTorch CPU support
pip install sleap-nn[torch-cpu]
# Train a model
sleap-nn train --config-name config.yaml --config-dir configs/
# Run inference
sleap-nn track --model_paths model.ckpt --data_path video.mp4
# Evaluate predictions
sleap-nn eval --ground_truth_path gt.slp --predicted_path pred.slp
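The `track` command saves its predictions to a `.slp` file (see #135 below), which can be inspected with sleap-io, the I/O library SLEAP-NN is built against (see #196). The snippet below is a minimal, illustrative sketch — it assumes sleap-io is installed alongside SLEAP-NN and that `pred.slp` is the predictions file produced by the commands above:

```python
# Sketch: inspect predictions written by `sleap-nn track` using sleap-io.
# Assumes sleap-io is available and "pred.slp" is the output file from above.
import sleap_io as sio

labels = sio.load_slp("pred.slp")  # load predicted labels

print(f"Videos: {len(labels.videos)}")
print(f"Labeled frames: {len(labels)}")

# Look at the predicted instances in the first labeled frame.
for lf in labels:
    for inst in lf.instances:
        points = inst.numpy()  # (n_nodes, 2) array of x, y coordinates
        print(f"Frame {lf.frame_idx}: instance with {points.shape[0]} nodes")
    break
```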
What's Changed
- Core Data Loader Implementation by @davidasamy in #4
- Add centroid finder block by @davidasamy in #7
- Add DataBlocks for rotation and scaling by @gitttt-1234 in #8
- Refactor datapipes by @talmo in #9
- Instance Cropping by @davidasamy in #13
- Add more Kornia augmentations by @alckasoc in #12
- Confidence Map Generation by @davidasamy in #11
- Peak finding by @alckasoc in #14
- UNet Implementation by @alckasoc in #15
- Top-down Centered-instance Pipeline by @alckasoc in #16
- Adding ruff to ci.yml by @alckasoc in #21
- Implement base Model and Head classes by @alckasoc in #17
- Add option to Filter to user instances by @gitttt-1234 in #20
- Add Evaluation Module by @gitttt-1234 in #22
- Add metadata to dictionary by @gitttt-1234 in #24
- Added SingleInstanceConfmapsPipeline by @alckasoc in #23
- modify keys by @gitttt-1234 in #31
- Small fix to find_global_peaks_rough by @alckasoc in #28
- Add trainer by @gitttt-1234 in #29
- PAF Grouping by @alckasoc in #33
- Add predictor class by @gitttt-1234 in #36
- Edge Maps by @alckasoc in #38
- Add ConvNext Backbone by @gitttt-1234 in #40
- Add VideoReader by @gitttt-1234 in #45
- Refactor model pipeline by @gitttt-1234 in #51
- Add BottomUp model pipeline by @gitttt-1234 in #52
- Remove Part-names and Edge dependency in config by @gitttt-1234 in #54
- Refactor model config by @gitttt-1234 in #61
- Refactor Augmentation config by @gitttt-1234 in #67
- Add minimal pretrained checkpoints for tests and fix PAF grouping interpolation by @gqcpm in #73
- Fix augmentation in TopdownConfmaps pipeline by @gitttt-1234 in #86
- Implement tracker module by @gitttt-1234 in #87
- Resume training and automatically compute crop size for TopDownConfmaps pipeline by @gitttt-1234 in #88
- LitData Refactor PR1: Get individual functions for data pipelines by @gitttt-1234 in #90
- Add function to load trained weights for backbone model by @gitttt-1234 in #95
- Remove IterDataPipe from Inference pipeline by @gitttt-1234 in #96
- Move ld.optimize to a subprocess by @gitttt-1234 in #100
- Auto compute max height and width from labels by @gitttt-1234 in #101
- Fix sizematcher in Inference data pipeline by @gitttt-1234 in #102
- Convert Tensor images to PIL by @gitttt-1234 in #105
- Add threshold mode in config for learning rate scheduler by @gitttt-1234 in #106
- Add option to specify `.bin` file directory in config by @gitttt-1234 in #107
- Add StepLR scheduler by @gitttt-1234 in #109
- Add config to WandB by @gitttt-1234 in #113
- Add option to load trained weights for Head layers by @gitttt-1234 in #114
- Add option to load ckpts for backbone and head for running inference by @gitttt-1234 in #115
- Add option to reuse `.bin` files by @gitttt-1234 in #116
- Fix Normalization order in data pipelines by @gitttt-1234 in #118
- Add torch Dataset classes by @gitttt-1234 in #120
- Fix Pafs shape by @gitttt-1234 in #121
- Add caching to Torch Datasets pipeline by @gitttt-1234 in #123
- Remove `random_crop` augmentation by @gitttt-1234 in #124
- Generate np chunks for caching by @gitttt-1234 in #125
- Add `group` to wandb config by @gitttt-1234 in #126
- Fix crop size by @gitttt-1234 in #127
- Resize images before cropping in Centered-instance model by @gitttt-1234 in #129
- Check memory before caching by @gitttt-1234 in #130
- Replace `eval` with an explicit mapping dictionary by @gitttt-1234 in #131
- Add `CyclerDataLoader` to ensure minimum steps per epoch by @gitttt-1234 in #132
- Fix running inference on Bottom-up models with CUDA by @gitttt-1234 in #133
- Fix caching in datasets by @gitttt-1234 in #134
- Save `.slp` file after inference by @gitttt-1234 in #135
- Add option to reuse np chunks by @gitttt-1234 in #136
- Filter instances while generating indices by @gitttt-1234 in #138
- Fix config format while logging to wandb by @gitttt-1234 in #144
- Add multi-gpu support by @gitttt-1234 in #145
- Implement Omegaconfig PR1: basic functionality by @gqcpm in #97
- Move all params to config by @gitttt-1234 in #146
- Add output stride to backbone config by @gitttt-1234 in #147
- Change backbone config structure by @gitttt-1234 in #149
- Add an entry point train function by @gitttt-1234 in #150
- Add logger by @gqcpm in #148
- Fix preprocessing during inference by @gitttt-1234 in #156
- Add CLI for training by @gitttt-1234 in #155
- Specify custom anchor index in Inference pipeline by @gitttt-1234 in #157
- Fix lr scheduler config by @gitttt-1234 in #158
- Add max stride to Convnext and Swint backbones by @gitttt-1234 in #159
- Fix length in custom datasets by @gitttt-1234 in #160
- Add `scale` argument to custom datasets by @gitttt-1234 in #166
- Fix size matcher by @gitttt-1234 in #167
- Fix max instances in TopDown Inference by @gitttt-1234 in #168
- Move lightning modules by @gitttt-1234 in #169
- Save config with chunks by @gitttt-1234 in #174
- Add profiler and strategy parameters by @gitttt-1234 in #175
- Add docker img for remote dev by @gitttt-1234 in #176
- Save files only in rank: 0 by @gitttt-1234 in #177
- Minor changes to validate configs by @gitttt-1234 in #179
- Fix multi-gpu training by @gitttt-1234 in #184
- Cache only images by @gitttt-1234 in #186
- Add a new data pipeline strategy without caching by @gitttt-1234 in #187
- Minor fixes to lightning modules by @gitttt-1234 in #189
- Fix caching when imgs path already exist by @gitttt-1234 in #191
- Ensure caching of images to disk in rank:0 by @gitttt-1234 in #193
- Fix bug in caching images to disk by @gitttt-1234 in #194
- Close videos before creating data loaders by @gitttt-1234 in #195
- Update instance creation for sleap-io v0.3.0 compatibility by @gitttt-1234 in #196
- Fix up block computation for swint and convnext by @gitttt-1234 in #197
- Bump up to python 3.11 by @gitttt-1234 in #200
- Map legacy SLEAP `json` configs to SLEAP-NN `OmegaConf` objects by @gqcpm in #162
- Add option to get validation data from train labels by @gitttt-1234 in #201
- Fix anchor part in config by @gitttt-1234 in #203
- Minor fixes to config mapper by @gitttt-1234 in #204
- Save labels with centroid inference by @gitttt-1234 in #205
- Add custom callbacks to publish metrics during training by @gitttt-1234 in #207
- Add visualizer by @gitttt-1234 in #208
- Add CLI for inference by @gitttt-1234 in #209
- Add option to parse frame ranges for videos by @gitttt-1234 in #211
- Add control flags to run inference on select LabeledFrames by @gitttt-1234 in #212
- Add support to run inference on specific video in a .slp file by @gitttt-1234 in #213
- Minor fixes to tracking by @gitttt-1234 in #214
- Fix bug in evaluation by @gitttt-1234 in #215
- Save train, val, test predictions after training by @gitttt-1234 in #216
- Add option to auto-select device for inference by @gitttt-1234 in #217
- Add option to pass multiple .slp files for training by @papamanu in #218
- Add logs by @gitttt-1234 in #219
- Bug fixes to model architecture and trainer by @gitttt-1234 in #220
- Remove `nested` tensors to support `mps` for BottomUp models by @gitttt-1234 in #221
- Modify viz functions by @gitttt-1234 in #223
- Add `ensure_grayscale` parameter by @gitttt-1234 in #224
- Fix bugs with zmq config by @gitttt-1234 in #225
- Log part-wise losses by @gitttt-1234 in #226
- Fix trainer config mappings and add option to load config from json str by @gitttt-1234 in #227
- Refactor ModelTrainer class by @gitttt-1234 in #228
- Fix infinite data loader and update steps per epoch by @gitttt-1234 in #229
- Add online hard keypoint mining by @gitttt-1234 in #222
- Add more features to Tracker by @gitttt-1234 in #231
- Remove litdata and iterdatapipe pipelines by @gitttt-1234 in #232
- Add CLAUDE.md and update .gitignore for Claude Code integration by @talmo in #233
- Add length parameter to InfiniteDataLoader by @gitttt-1234 in #237
- Get sleap-nn pip package ready to publish by @eberrigan in #236
- Map sleap (json) skeleton to sleap-nn (yaml) format by @gitttt-1234 in #238
- Enable tracking on user-labeled instances by @gitttt-1234 in #239
- Add ID models by @gitttt-1234 in #234
- Add codespell workflow for spell checking by @talmo in #241
- Make torch dependencies optional by @eberrigan in #243
- Reorganize assets and revise checkpoints by @gitttt-1234 in #242
- Refactor architectures per SLEAP by @gitttt-1234 in #245
- Add `keep-viz` parameter by @gitttt-1234 in #246
- Format `config.md` by @gitttt-1234 in #249
- Fix minor bugs by @gitttt-1234 in #250
- Setup docs by @talmo in #251
- Remove broken Docker image by @gitttt-1234 in #254
- Import legacy SLEAP model weights by @talmo in #235
- Fix wandb logging by @gitttt-1234 in #255
- Fix preprocess config in inference by @gitttt-1234 in #257
- Update ckpts and cfgs by @gitttt-1234 in #259
- Minor bug fixes in training pipeline by @gitttt-1234 in #260
- Revert "Minor bug fixes in training pipeline" by @gitttt-1234 in #261
- Fix minor bugs by @gitttt-1234 in #262
- Update in channels for torch with keras weights by @gitttt-1234 in #263
- Move convnext/ swint pretrained weights by @gitttt-1234 in #264
- Refactor lightning module parameters by @gitttt-1234 in #265
- Fix skeletons structure in config by @gitttt-1234 in #266
- Ensure consistent types for augmentation parameters by @gitttt-1234 in #267
- Add CLI entry-point functions and shortcuts by @gitttt-1234 in #270
- Ensure only rank-0 handles writing files in ddp training by @gitttt-1234 in #271
- Fix bug in centered-instance dataset by @gitttt-1234 in #272
- Add eff_scale to dataset by @gitttt-1234 in #273
- Update docs by @gitttt-1234 in #256
- Add self-hosted runner to CI by @talmo in #277
- Minor changes to data pipeline and training guide notebook by @gitttt-1234 in #280
- Add support to load keras weights for model init by @gitttt-1234 in #285
- Self-hosted Runners Tests and Trainer Accelerator by @alicup29 in #281
- Check memory with source images by @eberrigan in #283
- Fix `@oneof` Validation and Add Support for None for train_labels_path by @7174Andy in #282
- Parallelizing dataset caching by @emdavis02 in #284
- Remove Permanent File Creations After Testing Locally by @7174Andy in #292
- Add file existence tracking to trainer and related tests by @tom21100227 in #291
- Fix transitive torch/torchvision installation & platform compatibility by @alicup29 in #268
- Add more documentation by @gitttt-1234 in #287
- Update build CI by @talmo in #295
- Add build ci option to release to testpypi by @gitttt-1234 in #296
- Add testpypi index to toml by @gitttt-1234 in #297
- Minor fixes to pyproject.toml by @gitttt-1234 in #298
- Add Hyphen to Checkpoint Paths when Duplication Found by @7174Andy in #299
- Add helpful CLI message to `sleap-nn-train` by @tom21100227 in #294
New Contributors
- @davidasamy made their first contribution in #4
- @gqcpm made their first contribution in #73
- @papamanu made their first contribution in #218
- @eberrigan made their first contribution in #236
- @alicup29 made their first contribution in #281
- @7174Andy made their first contribution in #282
- @emdavis02 made their first contribution in #284
- @tom21100227 made their first contribution in #291
Full Changelog: https://github.com/talmolab/sleap-nn/commits/v0.0.1