
Releases: talmolab/sleap-nn

v0.1.0a4

21 Jan 00:23
0ed5d38


v0.1.0a4 Pre-release

v0.1.0a4 Release Notes

Summary

This pre-release focuses on bug fixes, performance improvements, and CLI usability enhancements:

  • Simpler Train CLI: New --config flag and positional config support for sleap-nn train
  • 17-51x Faster Peak Refinement: Replaced kornia-based cropping with fast tensor indexing
  • ConvNext/SwinT Bug Fix: Fixed skip connection channel mismatch that broke training with these backbones
  • GUI Integration: New --gui flag for SLEAP frontend progress reporting

For the full list of major features, breaking changes, and improvements introduced in the v0.1.0 series, see the v0.1.0a0 release notes.


What's New in v0.1.0a4

Features

Simplified Train CLI (#429)

Training can now be started with a single config file path:

# NEW: Positional config path
sleap-nn train path/to/config.yaml

# NEW: --config flag
sleap-nn train --config path/to/config.yaml

# With Hydra overrides
sleap-nn train config.yaml trainer_config.max_epochs=100

# Legacy flags still work
sleap-nn train --config-dir /path/to/dir --config-name myconfig

The CLI now uses rich-click for styled help output with better formatting and readability.

GUI Progress Mode (#424)

New --gui flag enables JSON progress output for SLEAP GUI integration:

sleap-nn track --data_path video.mp4 --model_paths model/ --gui

Output format:

{"n_processed": 100, "n_total": 1410, "rate": 38.4, "eta": 34.1}
{"n_processed": 200, "n_total": 1410, "rate": 39.2, "eta": 30.8}

This enables real-time progress updates when running inference from the SLEAP GUI.
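
For frontends other than SLEAP, the records can be consumed as line-delimited JSON. A minimal sketch (the subprocess invocation mirrors the command above; the parsing logic is illustrative and not part of sleap-nn):

import json
import subprocess

# Launch inference with --gui and read one JSON progress record per line.
proc = subprocess.Popen(
    ["sleap-nn", "track", "--data_path", "video.mp4",
     "--model_paths", "model/", "--gui"],
    stdout=subprocess.PIPE, text=True,
)
for line in proc.stdout:
    line = line.strip()
    if not line.startswith("{"):
        continue  # skip any non-JSON log output
    record = json.loads(line)
    pct = 100 * record["n_processed"] / record["n_total"]
    print(f"{pct:.1f}% complete, ETA {record['eta']:.0f}s")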

Performance

17-51x Faster Peak Refinement (#426)

Replaced kornia's crop_and_resize with fast tensor indexing for peak refinement:

| Platform           | Before   | After   | Speedup |
|--------------------|----------|---------|---------|
| MPS (M-series Mac) | 21.45 ms | 0.42 ms | 51x     |
| CUDA (RTX A6000)   | 2.64 ms  | 0.15 ms | 17x     |

This also enables integral refinement on Mac - the MPS workaround that disabled it has been removed.
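
For intuition, peak refinement only needs a small fixed window around each integer peak, which plain advanced indexing can gather without an interpolation kernel. A rough sketch of the idea (illustrative only, not the actual sleap-nn implementation):

import torch

def crop_windows_around_peaks(cms: torch.Tensor, peaks: torch.Tensor, size: int = 5):
    """Gather a (size x size) window around each integer peak via tensor indexing.

    cms:   (n, height, width) confidence maps
    peaks: (n, 2) integer peak locations as (x, y)
    """
    half = size // 2
    offsets = torch.arange(-half, half + 1, device=cms.device)
    # Clamp so windows near the border stay inside the map.
    ys = (peaks[:, 1:2] + offsets).clamp(0, cms.shape[1] - 1)  # (n, size)
    xs = (peaks[:, 0:1] + offsets).clamp(0, cms.shape[2] - 1)  # (n, size)
    batch = torch.arange(cms.shape[0], device=cms.device)[:, None, None]
    # Advanced indexing returns (n, size, size) crops with no resampling kernel.
    return cms[batch, ys[:, :, None], xs[:, None, :]]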

Bug Fixes

ConvNext/SwinT Skip Connection Fix (#428)

Fixed RuntimeError: Given groups=1, weight of size [X, Y, 3, 3], expected input to have Y channels when training with ConvNext or SwinT backbones.

What was broken: Training with ConvNext/SwinT backbones crashed during validation due to channel mismatch in skip connections. The decoder assumed skip channels matched computed decoder filters, but ConvNext/SwinT encoder stages have different channel counts.

Impact: Users can now successfully train models with ConvNext and SwinT backbones. All 24 architecture tests pass.
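
The underlying fix pattern is to size each decoder convolution from the encoder's actual per-stage channel counts instead of the computed decoder filters. A simplified sketch of that pattern (illustrative, not the actual sleap-nn decoder code; the ConvNeXt-T counts 96/192/384/768 are the standard values for that backbone):

import torch.nn as nn

def make_decoder_convs(encoder_channels, decoder_filters):
    """Size each decoder conv from the encoder's real skip channels.

    encoder_channels: per-stage channels, shallow to deep (e.g., ConvNeXt-T: [96, 192, 384, 768])
    decoder_filters:  output channels for each decoder block, deep to shallow
    """
    convs = nn.ModuleList()
    in_ch = encoder_channels[-1]  # features coming up from the bottleneck
    for skip_ch, out_ch in zip(reversed(encoder_channels[:-1]), decoder_filters):
        # After upsampling and concatenating the skip tensor, the input has
        # in_ch + skip_ch channels, not a value derived from decoder filters alone.
        convs.append(nn.Conv2d(in_ch + skip_ch, out_ch, kernel_size=3, padding=1))
        in_ch = out_ch
    return convs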

Crop Device Mismatch Fix (#429)

Fixed RuntimeError: indices should be either on cpu or on the same device as the indexed tensor during top-down inference when bboxes tensor was on GPU but images were on CPU.
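
The general fix is to move index tensors onto the device of the tensor they index before cropping; a minimal sketch (names are illustrative):

import torch

def gather_crop_rows(images: torch.Tensor, row_idx: torch.Tensor, col_idx: torch.Tensor):
    # Advanced indexing requires index tensors on CPU or on the same device as
    # the indexed tensor, so align them with `images` before indexing.
    row_idx = row_idx.to(images.device)
    col_idx = col_idx.to(images.device)
    return images[..., row_idx[:, None], col_idx[None, :]]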

CSV Learning Rate Logging Fix (#423)

Fixed regression from v0.1.0a2 where learning_rate column in training_log.csv was always empty.

What was broken: PR #417 changed learning rate logging from lr-Adam to train/lr, but the CSV logger only checked for the old format.

Now: The CSV logger checks for train/lr (new format), lr-* (legacy), and learning_rate (direct) in that order. Also adds model-specific loss columns for better parity with wandb logging.
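
A condensed sketch of that lookup order (illustrative helper, not the exact sleap-nn code):

def extract_learning_rate(row: dict):
    """Return the learning rate from a logged metrics row, trying the new
    format first, then legacy names."""
    if "train/lr" in row:                      # new format (v0.1.0a3+)
        return row["train/lr"]
    for key, value in row.items():             # legacy "lr-<optimizer>" keys
        if key.startswith("lr-"):
            return value
    return row.get("learning_rate")            # direct fallback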

GUI Progress 99% Fix (#429)

Fixed inference progress ending at 99% instead of 100% in GUI mode. The throttled progress reporting was skipping the final update.
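
The fix follows a common pattern for throttled reporting: rate-limit intermediate updates but always emit the last one. A minimal sketch (illustrative, not the sleap-nn code):

import time

def track_progress(items, n_total, emit, min_interval=0.5):
    """Yield items while emitting throttled progress that always includes the
    final update, so the reported progress reaches 100%."""
    last_emit = 0.0
    for i, item in enumerate(items, start=1):
        yield item
        now = time.monotonic()
        if now - last_emit >= min_interval or i == n_total:  # force the final update
            emit({"n_processed": i, "n_total": n_total})
            last_emit = now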

Documentation

Prerelease Docs Alias (#425)

Pre-release documentation is now accessible at both:

  • Version-specific: https://sleap.ai/sleap-nn/v0.1.0a4/
  • Alias: https://sleap.ai/sleap-nn/prerelease/

Internal

Test Suite Optimization (#427)

Optimized the 10 slowest tests for faster CI runs:

| Test                    | Before | After  | Improvement |
|-------------------------|--------|--------|-------------|
| test_main_cli           | 54.44s | 21.76s | 60%         |
| test_bottomup_predictor | 6.71s  | 1.76s  | 74%         |
| test_predict_main       | 15.97s | 5.35s  | 67%         |

Total estimated savings: ~55% reduction for slowest tests.


Installation

This is an alpha pre-release. Pre-releases are excluded by default per PEP 440 - you must explicitly opt in.

Install with uv (Recommended)

# With --prerelease flag (requires uv 0.9.20+)
uv tool install sleap-nn[torch] --torch-backend auto --prerelease=allow

# Or pin to exact version
uv tool install "sleap-nn[torch]==0.1.0a4" --torch-backend auto

Run with uvx (One-off execution)

uvx --from "sleap-nn[torch]" --prerelease=allow --torch-backend auto sleap-nn system

Verify Installation

sleap-nn --version
# Expected output: 0.1.0a4

sleap-nn system
# Shows full system diagnostics including GPU info

Upgrading from v0.1.0a3

If you already have v0.1.0a3 installed with --prerelease=allow:

# Simple upgrade (retains original settings like --prerelease=allow)
uv tool upgrade sleap-nn

To force a complete reinstall:

uv tool install sleap-nn[torch] --torch-backend auto --prerelease=allow --force

Changelog

| PR   | Category      | Title                                                              |
|------|---------------|--------------------------------------------------------------------|
| #423 | Bug Fix       | Fix CSV logger not capturing learning_rate                         |
| #424 | Feature       | Add --gui flag for JSON progress output in inference               |
| #425 | Documentation | Add prerelease alias to docs deployment                            |
| #426 | Performance   | Replace kornia crop_and_resize with fast tensor indexing           |
| #427 | Internal      | Optimize slow tests for faster CI runs                             |
| #428 | Bug Fix       | Fix skip connection channel mismatch in ConvNext/SwinT decoders    |
| #429 | Feature       | Add --config flag for simpler train CLI + fix crop device mismatch |

Full Changelog: v0.1.0a3...v0.1.0a4

v0.1.0a3

19 Jan 05:24
df72b51


v0.1.0a3 Pre-release

Summary

This pre-release adds powerful new capabilities for high-performance inference and post-processing:

  • ONNX/TensorRT Export: Export trained models to optimized formats for 3-6x faster inference
  • Post-Inference Filtering: Remove overlapping/duplicate predictions using IOU or OKS similarity
  • Improved WandB Logging: Better metrics organization and run naming

For the full list of major features, breaking changes, and improvements introduced in the v0.1.0 series, see the v0.1.0a0 release notes.


What's New in v0.1.0a3

Features

ONNX/TensorRT Export Module (#418)

A complete model export system for high-performance inference:

# Export to ONNX
sleap-nn export /path/to/model -o exports/my_model --format onnx

# Export to both ONNX and TensorRT FP16
sleap-nn export /path/to/model -o exports/my_model --format both

# Run inference on exported model
sleap-nn predict exports/my_model video.mp4 -o predictions.slp

Performance Benchmarks (NVIDIA RTX A6000):

Batch size 1 (latency-optimized):

| Model               | Resolution | PyTorch | ONNX-GPU | TensorRT FP16 | Speedup |
|---------------------|------------|---------|----------|---------------|---------|
| single_instance     | 192×192    | 1.8 ms  | 1.3 ms   | 0.31 ms       | 5.9x    |
| centroid            | 1024×1024  | 2.5 ms  | 2.7 ms   | 0.77 ms       | 3.2x    |
| topdown             | 1024×1024  | 11.4 ms | 9.7 ms   | 2.31 ms       | 4.9x    |
| bottomup            | 1024×1280  | 12.3 ms | 9.6 ms   | 2.52 ms       | 4.9x    |
| multiclass_topdown  | 1024×1024  | 8.3 ms  | 9.1 ms   | 1.84 ms       | 4.5x    |
| multiclass_bottomup | 1024×1024  | 9.4 ms  | 9.4 ms   | 2.64 ms       | 3.6x    |

Batch size 8 (throughput-optimized):

| Model               | Resolution | PyTorch   | ONNX-GPU  | TensorRT FP16 | Speedup |
|---------------------|------------|-----------|-----------|---------------|---------|
| single_instance     | 192×192    | 3,111 FPS | 3,165 FPS | 11,039 FPS    | 3.5x    |
| centroid            | 1024×1024  | 453 FPS   | 474 FPS   | 1,829 FPS     | 4.0x    |
| topdown             | 1024×1024  | 94 FPS    | 122 FPS   | 525 FPS       | 5.6x    |
| bottomup            | 1024×1280  | 113 FPS   | 121 FPS   | 524 FPS       | 4.6x    |
| multiclass_topdown  | 1024×1024  | 127 FPS   | 145 FPS   | 735 FPS       | 5.8x    |
| multiclass_bottomup | 1024×1024  | 116 FPS   | 120 FPS   | 470 FPS       | 4.1x    |

Speedup is relative to PyTorch baseline.

Supported model types:

  • Single Instance, Centroid, Centered Instance
  • Top-Down (combined centroid + instance)
  • Bottom-Up (multi-instance with PAF grouping)
  • Multi-class Top-Down and Bottom-Up (with identity classification)

New CLI commands:

  • sleap-nn export - Export models to ONNX/TensorRT
  • sleap-nn predict - Run inference on exported models

New optional dependencies:

uv pip install "sleap-nn[export]"      # ONNX CPU inference
uv pip install "sleap-nn[export-gpu]"  # ONNX GPU inference
uv pip install "sleap-nn[tensorrt]"    # TensorRT support

See the Export Guide for full documentation.
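
The sleap-nn predict command handles exported models end to end. If you want to drive an exported ONNX file directly, raw onnxruntime usage looks roughly like this (the file name, input layout, and providers below are assumptions for illustration, not the exact export contract):

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "exports/my_model/model.onnx",  # assumed file name inside the export folder
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
input_name = session.get_inputs()[0].name
frame = np.zeros((1, 1, 192, 192), dtype=np.float32)  # NCHW, normalized grayscale
outputs = session.run(None, {input_name: frame})
print([out.shape for out in outputs])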

Post-Inference Filtering for Overlapping Instances (#420)

New capability to remove duplicate/overlapping pose predictions after model inference:

# Filter with IOU method (default)
sleap-nn track -i video.mp4 -m model/ --filter_overlapping

# Use OKS method with custom threshold
sleap-nn track -i video.mp4 -m model/ \
    --filter_overlapping \
    --filter_overlapping_method oks \
    --filter_overlapping_threshold 0.5

New CLI options for sleap-nn track:

| Option                         | Default | Description                                      |
|--------------------------------|---------|--------------------------------------------------|
| --filter_overlapping           | False   | Enable filtering using greedy NMS                |
| --filter_overlapping_method    | iou     | Similarity method: iou (bbox) or oks (keypoints) |
| --filter_overlapping_threshold | 0.8     | Similarity threshold (lower = more aggressive)   |

Programmatic API:

from sleap_nn.inference.postprocessing import filter_overlapping_instances

labels = filter_overlapping_instances(labels, threshold=0.5, method="oks")

Why use this? Previously, IOU-based filtering only existed in the tracking pipeline. This feature allows filtering overlapping predictions without requiring --tracking.
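
Conceptually, the filter is a greedy non-maximum suppression over instances: keep the highest-scoring instance, drop anything too similar to one already kept, and repeat. A simplified sketch of that algorithm (not the sleap-nn implementation; similarity stands in for the IOU or OKS computation):

import numpy as np

def greedy_nms(instances, scores, similarity, threshold=0.8):
    order = np.argsort(scores)[::-1]  # visit the highest-scoring instances first
    keep = []
    for idx in order:
        # Keep this instance only if it does not overlap a kept one above the threshold.
        if all(similarity(instances[idx], instances[k]) < threshold for k in keep):
            keep.append(idx)
    return [instances[i] for i in keep]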

Improvements

WandB Run Naming and Metrics Logging (#417)

  • Fixed run naming: WandB runs now correctly use auto-generated run names
  • Improved metrics organization: All metrics use / separator for automatic panel grouping in WandB UI:
    • train/loss, train/lr - Training metrics (epoch x-axis)
    • val/loss - Validation metrics (epoch x-axis)
    • eval/val/ - Epoch-end evaluation metrics
    • eval/test.X/ - Post-training test set metrics
  • New metrics logged:
    • train/lr - Learning rate (useful for monitoring LR schedulers)
    • PCK@5, PCK@10 - PCK at 5px and 10px thresholds
    • distance/p95, distance/p99 - Additional distance percentiles

Documentation

  • Exporting Guide (#419): Added comprehensive export documentation to How-to guides navigation

Installation

This is an alpha pre-release. Pre-releases are excluded by default per PEP 440 - you must explicitly opt in.

Install with uv (Recommended)

# With --prerelease flag (requires uv 0.9.20+)
uv tool install sleap-nn[torch] --torch-backend auto --prerelease=allow

# Or pin to exact version
uv tool install "sleap-nn[torch]==0.1.0a3" --torch-backend auto

Run with uvx (One-off execution)

uvx --from "sleap-nn[torch]" --prerelease=allow --torch-backend auto sleap-nn system

Verify Installation

sleap-nn --version
# Expected output: 0.1.0a3

sleap-nn system
# Shows full system diagnostics including GPU info

Upgrading from v0.1.0a2

If you already have v0.1.0a2 installed with --prerelease=allow:

# Simple upgrade (retains original settings like --prerelease=allow)
uv tool upgrade sleap-nn

To force a complete reinstall:

uv tool install sleap-nn[torch] --torch-backend auto --prerelease=allow --force

Changelog

| PR   | Category      | Title                                                   |
|------|---------------|---------------------------------------------------------|
| #417 | Improvement   | Fix wandb run naming and improve metrics logging        |
| #418 | Feature       | Add ONNX/TensorRT export module                         |
| #419 | Documentation | Add Exporting guide to How-to guides section            |
| #420 | Feature       | Add post-inference filtering for overlapping instances  |

Full Changelog: v0.1.0a2...v0.1.0a3

v0.1.0a2

16 Jan 06:15
24af3b0


v0.1.0a2 Pre-release

Summary

This pre-release adds real-time evaluation metrics during training and improves video matching robustness in the evaluation pipeline:

  • Epoch-End Evaluation: New metrics logged to WandB at the end of each validation epoch (mOKS, mAP, mAR, PCK, distance metrics)
  • Robust Video Matching: Improved evaluation video matching using sleap-io's Labels.match() API

For the full list of major features, breaking changes, and improvements introduced in the v0.1.0 series, see the v0.1.0a0 release notes.


What's New in v0.1.0a2

Features

  • Epoch-End Evaluation Metrics (#414): Real-time evaluation metrics are now computed at the end of each validation epoch and logged to WandB. This enables monitoring training quality without waiting for post-training evaluation.

    New metrics logged:

    | Metric                   | Description                                |
    |--------------------------|--------------------------------------------|
    | val_mOKS                 | Mean Object Keypoint Similarity [0-1]      |
    | val_oks_voc_mAP          | VOC-style mean Average Precision [0-1]     |
    | val_oks_voc_mAR          | VOC-style mean Average Recall [0-1]        |
    | val_avg_distance         | Mean Euclidean distance error (pixels)     |
    | val_p50_distance         | Median Euclidean distance error (pixels)   |
    | val_mPCK                 | Mean Percentage of Correct Keypoints [0-1] |
    | val_visibility_precision | Precision for visible keypoint detection   |
    | val_visibility_recall    | Recall for visible keypoint detection      |

    Enable in your training config:

    trainer_config:
      eval:
        enabled: true      # Enable epoch-end evaluation
        frequency: 1       # Evaluate every epoch (or higher for less frequent)
        oks_stddev: 0.025  # OKS standard deviation parameter

Improvements

  • Robust Video Matching in Evaluation (#415): The evaluation module now uses sleap-io's Labels.match() API for more robust video matching between ground truth and prediction labels. This fixes several common failure scenarios:
    • Embedded videos (.pkg.slp) with different internal paths
    • Cross-platform path differences (Windows vs Linux)
    • Renamed or moved video files

Bug Fixes

  • Embedded video handling (#414): get_instances() now correctly handles embedded videos that lack backend.filename attributes, preventing errors during evaluation.
  • Centroid model ground truth matching (#414): Centroid models now properly match centroids to ground truth instances for epoch-end evaluation.
  • Bottom-up training stability (#414): Added max_peaks_per_node=100 guardrail to prevent combinatorial explosion when noisy early-training confidence maps produce spurious peaks.

Dependencies

  • sleap-io: Minimum version bumped from >=0.6.0 to >=0.6.2 for Labels.match() API support

Installation

This is an alpha pre-release. Pre-releases are excluded by default per PEP 440 - you must explicitly opt in.

Install with uv (Recommended)

# With --prerelease flag (requires uv 0.9.20+)
uv tool install sleap-nn[torch] --torch-backend auto --prerelease=allow

# Or pin to exact version
uv tool install "sleap-nn[torch]==0.1.0a2" --torch-backend auto

Run with uvx (One-off execution)

uvx --from "sleap-nn[torch]" --prerelease=allow --torch-backend auto sleap-nn system

Verify Installation

sleap-nn --version
# Expected output: 0.1.0a2

sleap-nn system
# Shows full system diagnostics including GPU info

Upgrading from v0.1.0a1

If you already have v0.1.0a1 installed with --prerelease=allow:

# Simple upgrade (retains original settings like --prerelease=allow)
uv tool upgrade sleap-nn

To force a complete reinstall:

uv tool install sleap-nn[torch] --torch-backend auto --prerelease=allow --force

Changelog

| PR   | Category    | Title                                                                   |
|------|-------------|-------------------------------------------------------------------------|
| #414 | Feature     | Add epoch-end evaluation metrics to WandB logging                       |
| #415 | Improvement | Use sleap-io Labels.match() API for robust video matching in evaluation |

Full Changelog: v0.1.0a1...v0.1.0a2

v0.1.0a1

13 Jan 09:02
3310608


v0.1.0a1 Pre-release

Summary

This pre-release is a minor update to v0.1.0a0 with quality-of-life improvements for training workflows:

  • Progress Feedback: Rich progress bar during dataset caching eliminates the "freeze" after startup
  • Disk Space Management: Automatic cleanup of WandB local logs (saves GB of disk space per run)

For the full list of major features, breaking changes, and improvements introduced in the v0.1.0 series, see the v0.1.0a0 release notes.


What's New in v0.1.0a1

Features

  • WandB Local Log Cleanup (#412): Added delete_local_logs option to WandBConfig that automatically deletes the local wandb/ folder after training completes. By default, logs are automatically deleted when syncing online and kept when logging offline. This can save several GB of disk space per training run. Set trainer_config.wandb.delete_local_logs=false to keep local logs.

Improvements

  • Training Startup Progress Bar (#411): Added a rich progress bar during dataset caching to provide visual feedback during training startup. Previously, there was no indication while images were being cached to disk or memory after the "Input image shape" log message.
  • Simplified Log Format (#411): Cleaned up log output by removing module names and log level fields for more user-friendly output.

Installation

This is an alpha pre-release. Pre-releases are excluded by default per PEP 440 - you must explicitly opt in.

Install with uv (Recommended)

# With --prerelease flag (requires uv 0.9.20+)
uv tool install sleap-nn[torch] --torch-backend auto --prerelease=allow

# Or pin to exact version
uv tool install "sleap-nn[torch]==0.1.0a1" --torch-backend auto

Run with uvx (One-off execution)

uvx --from "sleap-nn[torch]" --prerelease=allow --torch-backend auto sleap-nn system

Verify Installation

sleap-nn --version
# Expected output: 0.1.0a1

sleap-nn system
# Shows full system diagnostics including GPU info

Upgrading from v0.1.0a0

If you already have v0.1.0a0 installed with --prerelease=allow:

# Simple upgrade (retains original settings like --prerelease=allow)
uv tool upgrade sleap-nn

To force a complete reinstall:

uv tool install sleap-nn[torch] --torch-backend auto --prerelease=allow --force

Changelog

| PR   | Category    | Title                                                  |
|------|-------------|--------------------------------------------------------|
| #411 | Improvement | Improve logging during training startup                |
| #412 | Feature     | Add option to clean up wandb local logs after training |

Full Changelog: v0.1.0a0...v0.1.0a1

v0.1.0a0

11 Jan 10:46
90266dc


v0.1.0a0 Pre-release

Summary

This pre-release introduces major improvements to sleap-nn including simplified installation, enhanced training controls, comprehensive inference provenance, and significant performance optimizations. It also includes several breaking changes that warrant testing before the stable v0.1.0 release.

Key highlights:

  • Simplified Installation: New --torch-backend auto flag for automatic GPU detection
  • CUDA 13.0 Support: Full support for latest CUDA version
  • GPU-accelerated Inference: Up to 50% faster inference via GPU normalization
  • Provenance Tracking: Full reproducibility metadata in output SLP files
  • Enhanced Training Controls: Independent augmentation probabilities, auto crop padding
  • System Diagnostics: New sleap-nn system command for troubleshooting

Installation

This is an alpha pre-release. Pre-releases are excluded by default per PEP 440 - you must explicitly opt in.

Install with uv (Recommended)

# With --prerelease flag (requires uv 0.9.20+)
uv tool install sleap-nn[torch] --torch-backend auto --prerelease=allow

# Or pin to exact version
uv tool install "sleap-nn[torch]==0.1.0a0" --torch-backend auto

Install with uvx (One-off execution)

uvx --from "sleap-nn[torch]" --prerelease=allow --torch-backend auto sleap-nn system

Install with pip

pip install --pre sleap-nn[torch] --index-url https://pypi.org/simple --extra-index-url https://download.pytorch.org/whl/cu128

Verify Installation

sleap-nn --version
# Expected output: 0.1.0a0

sleap-nn system
# Shows full system diagnostics including GPU info

Breaking Changes

1. Crop Size Semantics for Top-Down Models (PR #381)

Impact: High - affects model training and inference

The scaling behavior for top-down (centered-instance) models has changed:

| Aspect            | Old Behavior                       | New Behavior                              |
|-------------------|------------------------------------|-------------------------------------------|
| Order             | Resize full image first, then crop | Crop first, then resize                   |
| crop_size meaning | Region size in scaled coordinates  | Region size in original image coordinates |

Migration: Review your crop_size configuration values. Previously trained models may produce different results.
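
In code terms, the change swaps the order of the crop and resize operations so that crop_size is interpreted in original-image pixels. A minimal sketch of the new ordering (illustrative only; assumes a square crop fully inside an NCHW image tensor):

import torch.nn.functional as F

def crop_then_resize(image, center_xy, crop_size, scale):
    """New behavior: crop crop_size pixels from the original image around the
    instance center, then resize the crop by `scale`."""
    cx, cy = (int(round(v)) for v in center_xy)
    half = crop_size // 2
    crop = image[..., cy - half:cy + half, cx - half:cx + half]  # (N, C, s, s)
    out_size = int(round(crop_size * scale))
    return F.interpolate(crop, size=(out_size, out_size), mode="bilinear",
                         align_corners=False)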

2. Model Run Folder File Naming (PR #408)

Impact: Medium - affects scripts that read model outputs

File naming conventions standardized:

| Old Pattern              | New Pattern           |
|--------------------------|-----------------------|
| labels_train_gt_0.slp    | labels_gt.train.0.slp |
| labels_val_gt_0.slp      | labels_gt.val.0.slp   |
| pred_train_0.slp         | labels_pr.train.0.slp |
| pred_val_0.slp           | labels_pr.val.0.slp   |
| train_0_pred_metrics.npz | metrics.train.0.npz   |
| val_0_pred_metrics.npz   | metrics.val.0.npz     |

3. load_metrics() API Changes (PR #409)

Impact: Low - affects programmatic metrics loading

| Change         | Old        | New    |
|----------------|------------|--------|
| Parameter name | model_path | path   |
| Default split  | "val"      | "test" |
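
A migration example under the new signature (the import path is assumed for illustration; parameter names and defaults follow the table above):

from sleap_nn.evaluation import load_metrics  # import path assumed for illustration

# Old (pre-v0.1.0a0): load_metrics(model_path="models/my_run")  # default split was "val"
# New: the parameter is `path` and the default split is "test".
val_metrics = load_metrics(path="models/my_run", split="val")
test_metrics = load_metrics(path="models/my_run")  # now loads the "test" split by default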

4. Video Path Mapping CLI Syntax (PR #389)

Impact: Low - affects CLI users with path remapping

# Old syntax (no longer works)
sleap-nn train -c config --video-path-map "/old/path->/new/path"

# New syntax
sleap-nn train -c config --video-path-map /old/path /new/path

Performance Improvements

GPU-Accelerated Normalization (PR #406)

Image normalization now runs on the GPU, reducing the data transferred over PCIe by 4x.

| Image Size          | Before   | After    | Speedup |
|---------------------|----------|----------|---------|
| 1024x1280 grayscale | 55.2 FPS | 64.7 FPS | 17%     |
| 3307x3304 RGB       | 6.7 FPS  | 10.1 FPS | 50%     |
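
The gain comes from moving frames across PCIe as uint8 (1 byte per pixel) and only converting to float32 on the device. A minimal sketch of that pattern (illustrative, not the sleap-nn pipeline code):

import torch

def normalize_on_gpu(frames_uint8: torch.Tensor, device: str = "cuda"):
    # Transfer the compact uint8 frames first (4x less data than float32),
    # then convert and scale to [0, 1] on the GPU.
    frames = frames_uint8.to(device, non_blocking=True)
    return frames.to(torch.float32) / 255.0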

New Features

  • Simplified Installation (PR #405): uv tool install sleap-nn[torch] --torch-backend auto
  • CUDA 13.0 Support (PR #405): New --torch-backend cu130 option
  • System Diagnostics (PR #391): sleap-nn system command and --version flag
  • Provenance Metadata (PR #407): Full reproducibility tracking in output SLP files
  • Video Path Remapping (PR #387, #389): Remap paths at training time
  • Frame Filtering (PR #396, #397): --exclude_user_labeled and --only_predicted_frames
  • Enhanced Data Pipeline (PR #394): Auto crop padding, independent augmentation probabilities
  • Multiple Test Files (PR #383): Evaluate against multiple test datasets
  • Enhanced WandB (PR #393, #395, #401): Interactive visualizations, per-head loss logging
  • Centroid Confmaps (PR #386): Return centroid confidence maps in top-down inference

Bug Fixes

  • #382: Fixed max_instances handling in centroid-only inference
  • #385: Fixed crash on frames with empty instances
  • #395: Fixed WandB visualization issues
  • #397: Fixed --exclude_user_labeled being ignored with --video_index
  • #401: Fixed PAF visualization scaling
  • #402: Fixed WandB deprecation warning
  • #394: Fixed user_instances_only handling bugs

Improvements

  • #380: Use sleap-io built-in video matching methods
  • #390: Added CLI reference page and Colab notebooks to docs
  • #392: Run folders cleaned up when training canceled via GUI
  • #398: Comprehensive test coverage improvements
  • #400: WandB URL reported via ZMQ on train start
  • #403: Migrated dev deps to PEP 735 dependency-groups

Changelog

| PR   | Category    | Title                                                             |
|------|-------------|-------------------------------------------------------------------|
| #380 | Improvement | Use sleap-io built-in video matching methods                      |
| #381 | Breaking    | Fix crop size behavior for top-down models                        |
| #382 | Fix         | Fix max_instances handling in centroid-only inference             |
| #383 | Feature     | Support list of paths for test_file_path                          |
| #384 | Feature     | Add source image to FindInstancePeaksGroundTruth output           |
| #385 | Fix         | Fix running inference on frames with empty instances              |
| #386 | Feature     | Return centroid confmaps when running topdown inference           |
| #387 | Feature     | Add video path remapping options to train CLI                     |
| #389 | Breaking    | Fix train CLI path replacement syntax                             |
| #390 | Docs        | Add CLI reference page and Colab notebooks                        |
| #391 | Feature     | Add system diagnostics command and --version flag                 |
| #392 | Fix         | Clean up run folder when training is canceled via GUI             |
| #393 | Feature     | Improve wandb visualization with slider support                   |
| #394 | Feature     | Enhance data pipeline with auto crop padding                      |
| #395 | Fix         | Fix wandb visualization issues                                    |
| #396 | Feature     | Add --exclude_user_labeled and --only_predicted_frames flags      |
| #397 | Fix         | Fix --exclude_user_labeled flag being ignored with --video_index  |
| #398 | Tests       | Add comprehensive test coverage                                   |
| #400 | Feature     | Report WandB URL via ZMQ on train start                           |
| #401 | Feature     | Add per-head loss logging and fix PAF visualization               |
| #402 | Fix         | Fix wandb deprecation warning                                     |
| #403 | Improvement | Move dev dependencies to PEP 735 dependency-groups                |
| #405 | Feature     | Add CUDA 13 support and simplify installation                     |
| #406 | Performance | Optimize inference by deferring normalization to GPU              |
| #407 | Feature     | Add provenance metadata to inference output SLP files             |
| #408 | Breaking    | Standardize model run folder file naming                          |
| #409 | Breaking    | Improve load_metrics with format compatibility                    |

Full Changelog: v0.0.5...v0.1.0a0

SLEAP-NN v0.0.5

22 Nov 00:13
641ab15


Summary

This release includes important bug fixes, usability improvements, and configuration enhancements. Key highlights include automatic video-specific output naming for multi-video predictions, improved progress tracking, better handling of edge cases in configuration files, and enhanced security for API key storage.

Major changes

New Features

Progress Bar for Tracking (#366)

Added visual progress tracking during tracking operations, providing real-time feedback on tracking progress for better user experience.

Video-Specific Output Paths (#378)

When running inference with the video_index parameter on multi-video .slp files, output files now automatically include the video name to prevent overwrites. Previously, all predictions would save to the same path (e.g., labels.predictions.slp), requiring users to manually specify unique output paths. Now, predictions are saved with the format <labels_file>.<video_name>.predictions.slp, enabling seamless batch processing of multiple videos from the same project file.
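
The naming scheme can be reproduced with standard path handling; a minimal sketch (illustrative, not the exact sleap-nn code):

from pathlib import Path

def video_specific_output(labels_path: str, video_path: str) -> str:
    """Build <labels_file>.<video_name>.predictions.slp for a given video."""
    stem = Path(labels_path).with_suffix("")       # e.g., "labels"
    video_name = Path(video_path).stem             # e.g., "session1"
    return f"{stem}.{video_name}.predictions.slp"  # "labels.session1.predictions.slp"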

Bug Fixes

Resume Checkpoint Mapping (#370)

Fixed checkpoint mapping when resuming training from PyTorch model checkpoints, ensuring proper state restoration for torch models.

Metrics Format Compatibility (#371)

Updated metrics saving format to match SLEAP 1.4 specifications and eliminated code duplication in metrics handling, ensuring cross-compatibility between SLEAP-NN and SLEAP 1.4.

Configuration Parameter Handling (#377)

Improved handling of run_name and ckpt_dir configuration parameters when set to empty strings or the string literal "None" in YAML files. This prevents unexpected behavior and ensures consistent defaults are applied.

Security Improvements

API Key Protection (#372)

WandB API keys are now automatically masked when saving initial_config.yaml files, preventing accidental exposure of sensitive credentials in saved configurations.

Configuration & Training Improvements

Optimized Default Parameters (#374, #375)

Updated default trainer configuration parameters based on extensive training experiments, improving training stability and convergence behavior out of the box.

Documentation

Dependency Update Instructions (#376)

Added comprehensive instructions for updating dependencies across all installation methods (GPU, CPU, and Apple Silicon), making it easier for users to maintain up-to-date environments.

Changelog

  • Add progress bar to tracker by @gitttt-1234 in #366
  • Fix resume checkpoint mapping for torch models only by @gitttt-1234 in #370
  • Fix metrics saving format to match SLEAP 1.4 and eliminate code duplication by @gitttt-1234 in #371
  • Mask wandb API key in initial_config.yaml by @gitttt-1234 in #372
  • Update default trainer configuration parameters for improved training stability by @gitttt-1234 in #374
  • Update default configuration values for improved training by @gitttt-1234 in #375
  • Add dependency update instructions for all installation methods by @gitttt-1234 in #376
  • Handle empty and "None" string values for run_name and ckpt_dir config parameters by @gitttt-1234 in #377
  • Append video name to output path when video_index is specified by @gitttt-1234 in #378
  • Bump version to 0.0.5 by @gitttt-1234 in #379

Full Changelog: v0.0.4...v0.0.5

SLEAP-NN v0.0.4

30 Oct 03:30
484bbc2


Summary

This release includes a dependency version bump and a critical bug fix for empty instance handling. The minimum torchvision version has been updated to 0.20.0, and sleap-io minimum version has been set to 0.5.7 to ensure compatibility with the latest features and improvements.

Major changes

Dependency Version Updates (#365)

  • Minimum torchvision version: Set to 0.20.0 across all torch extras (torch, torch-cpu, torch-cuda118, torch-cuda128)
  • Minimum sleap-io version: Updated to 0.5.7 for improved compatibility

Bug Fixes

  • Fixed empty instance handling (#364): Improved handling of instances with only NaN keypoints in the instance cropping method and CenteredInstanceDataset class. Previously, these instances would trigger "NaN values encountered" warnings when computing bounding boxes. The fix ensures only non-empty instances are processed for crop size computation and removes redundant filtering logic.
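
The essence of the fix is to skip instances whose keypoints are all NaN before computing bounding boxes; a minimal sketch (illustrative, not the actual dataset code):

import numpy as np

def drop_empty_instances(instance_points):
    """Keep only instances with at least one finite keypoint, so bounding-box
    computation for crop sizing never sees an all-NaN instance."""
    return [pts for pts in instance_points
            if np.isfinite(np.asarray(pts, dtype=float)).any()]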

Changelog

  • Fix empty instance handling (#364)
  • Bump minimum torchvision version to 0.20.0 (#365)

Full Changelog: v0.0.3...v0.0.4

SLEAP-NN v0.0.3

24 Oct 21:49
d946ec3


Summary

This release delivers critical bug fixes for multiprocessing support, enhanced tracking capabilities, and significant improvements to the inference workflow. The v0.0.3 release resolves HDF5 pickling issues that prevented proper multiprocessing on macOS/Windows, fixes ID models, and introduces new track cleaning parameters for better tracking performance.

Major changes

Fixed Multiprocessing Bug with num_workers > 0 (#359)

Resolved HDF5 pickling issues that prevented proper multiprocessing on macOS/Windows systems. This fix enables users to utilize multiple workers for faster data loading during training and inference when caching is enabled.

Fixed ID Models (#345)

Fixed minor issues with TopDown and BottomUp ID models.

  • The ID model dataset classes were re-computing tracks from the labels file instead of reading them from the head config's classes parameter.
  • Fixed a shape mismatch issue with BottomUp ID models.

Added Track Cleaning Arguments (#349)

Added new parameters for better track management and cleanup:

  • tracking_clean_instance_count: Target number of instances to clean after tracking
  • tracking_clean_iou_threshold: IOU threshold for cleaning overlapping instances
  • tracking_pre_cull_to_target: Pre-cull instances to the target count before tracking
  • tracking_pre_cull_iou_threshold: IOU threshold for pre-culling

Updated Installation Documentation (#348, #351)

Added comprehensive uv add installation instructions for modern Python package management in place of the uv pip install method, and added a warning about Python 3.14 to prevent installation issues.

Inference Workflow Enhancements (#360, #361)

Enhanced bottom-up model inference with improved performance and stability. Also fixed logger encoding issues on Windows and improved handling of the integral refinement error on the MPS accelerator.

Changelog

SLEAP-NN v0.0.2

29 Sep 21:21
de28b41


Summary

This release focuses on several bug fixes and improvements across the training, inference, and CLI components of sleap-nn. It includes bug fixes for model backbones and loaders, enhancements to the configuration and CLI experience, improved robustness in multi-GPU training, and new options for device selection and tracking. Documentation and installation guides have also been updated, along with internal refactors to streamline the code consistency.

Major changes

  • Backbones & Models:

    • Fixed bugs in Swin Transformer and UNet backbone filter computations.
    • Corrected weight mapping for legacy TopDown ID models.
  • Inference & Tracking:

    • Removed unintended loading of pretrained weights during inference.
    • Fixed inference with suggestion frames and improved stalling handling.
    • Added option to run tracking on selected frames and video indices.
    • Added thread-safe video access to prevent backend crashes.
    • Added function to load metrics for better evaluation reporting.
  • Training Pipeline:

    • Fixed bugs in the training workflow with the infinite dataloader handling.
    • Improved seeding behavior for reproducible label splits in multi-GPU setups.
    • Fixed experiment run name generation across multi-GPU workers.
  • CLI & Config:

    • Introduced unified sleap-nn CLI with subcommands (train, track, eval) and more robust help injection.
    • Removed deprecated CLI commands and cleaned up legacy imports.
    • Added option to specify which devices to use, with auto-selection of GPUs based on available memory.
    • Updated sample configs and sleap-io skeleton function usage.
    • Minor parameter name and default updates for consistency with SLEAP.
  • Documentation & Installation:

    • Fixed broken documentation pages and improved menu structure.
    • Updated installation instructions with CUDA support for uv-based workflows.

What's Changed

Full Changelog: v0.0.1...v0.0.2

SLEAP-NN v0.0.1

21 Aug 01:30
914c5e4


SLEAP-NN v0.0.1 - Initial Release

SLEAP-NN is a PyTorch-based deep learning framework for pose estimation, built on top of the SLEAP (Social LEAP Estimates Animal Poses) platform. This framework provides efficient training, inference, and evaluation tools for multi-animal pose estimation tasks.

Documentation: https://nn.sleap.ai/

Quick start

# Install with PyTorch CPU support
pip install sleap-nn[torch-cpu]

# Train a model
sleap-nn train --config-name config.yaml --config-dir configs/

# Run inference
sleap-nn track --model_paths model.ckpt --data_path video.mp4

# Evaluate predictions
sleap-nn eval --ground_truth_path gt.slp --predicted_path pred.slp

What's Changed
