
Commit 4d1965b

Merge branch 'main' into divya/slumbr_base
2 parents 0a67e8f + 641ab15


45 files changed: +324 -180 lines

docs/config.md

Lines changed: 12 additions & 12 deletions
@@ -123,7 +123,7 @@ trainer_config:
     step_lr: null
     reduce_lr_on_plateau:
       threshold: 1.0e-06
-      threshold_mode: rel
+      threshold_mode: abs
       cooldown: 3
       patience: 5
       factor: 0.5
@@ -739,7 +739,7 @@ trainer_config:
 ### Optimizer Configuration
 - `optimizer_name`: (str) Optimizer to be used. One of ["Adam", "AdamW"]. **Default**: `"Adam"`
 - `optimizer`:
-    - `lr`: (float) Learning rate of type float. **Default**: `1e-3`
+    - `lr`: (float) Learning rate of type float. **Default**: `1e-4`
     - `amsgrad`: (bool) Enable AMSGrad with the optimizer. **Default**: `False`

 ### Learning Rate Schedulers
@@ -752,12 +752,12 @@ trainer_config:

 #### Reduce LR on Plateau
 - `lr_scheduler.reduce_lr_on_plateau`:
-    - `threshold`: (float) Threshold for measuring the new optimum, to only focus on significant changes. **Default**: `1e-4`
-    - `threshold_mode`: (str) One of "rel", "abs". In rel mode, dynamic_threshold = best * ( 1 + threshold ) in max mode or best * ( 1 - threshold ) in min mode. In abs mode, dynamic_threshold = best + threshold in max mode or best - threshold in min mode. **Default**: `"rel"`
-    - `cooldown`: (int) Number of epochs to wait before resuming normal operation after lr has been reduced. **Default**: `0`
-    - `patience`: (int) Number of epochs with no improvement after which learning rate will be reduced. For example, if patience = 2, then we will ignore the first 2 epochs with no improvement, and will only decrease the LR after the third epoch if the loss still hasn't improved then. **Default**: `10`
-    - `factor`: (float) Factor by which the learning rate will be reduced. new_lr = lr * factor. **Default**: `0.1`
-    - `min_lr`: (float or List[float]) A scalar or a list of scalars. A lower bound on the learning rate of all param groups or each group respectively. **Default**: `0.0`
+    - `threshold`: (float) Threshold for measuring the new optimum, to only focus on significant changes. **Default**: `1e-6`
+    - `threshold_mode`: (str) One of "rel", "abs". In rel mode, dynamic_threshold = best * ( 1 + threshold ) in max mode or best * ( 1 - threshold ) in min mode. In abs mode, dynamic_threshold = best + threshold in max mode or best - threshold in min mode. **Default**: `"abs"`
+    - `cooldown`: (int) Number of epochs to wait before resuming normal operation after lr has been reduced. **Default**: `3`
+    - `patience`: (int) Number of epochs with no improvement after which learning rate will be reduced. For example, if patience = 2, then we will ignore the first 2 epochs with no improvement, and will only decrease the LR after the third epoch if the loss still hasn't improved then. **Default**: `5`
+    - `factor`: (float) Factor by which the learning rate will be reduced. new_lr = lr * factor. **Default**: `0.5`
+    - `min_lr`: (float or List[float]) A scalar or a list of scalars. A lower bound on the learning rate of all param groups or each group respectively. **Default**: `1e-8`

 **Example Learning Rate Scheduler configurations:**

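The rel/abs distinction in the `threshold_mode` description above is easiest to see in code. Below is a minimal sketch of the two dynamic-threshold formulas quoted in the diff (illustrative helper functions, not sleap-nn's or PyTorch's actual implementation):

```python
def dynamic_threshold(best, threshold, threshold_mode, mode="min"):
    """Bar a new metric must clear to count as an improvement.

    Mirrors the formulas quoted in the docs above:
      rel: best * (1 +/- threshold); abs: best +/- threshold.
    """
    if threshold_mode == "rel":
        return best * (1 + threshold) if mode == "max" else best * (1 - threshold)
    # "abs" mode: a fixed offset from the best value seen so far
    return best + threshold if mode == "max" else best - threshold


def is_improvement(value, best, threshold, threshold_mode, mode="min"):
    """True if `value` beats `best` by more than the dynamic threshold."""
    bar = dynamic_threshold(best, threshold, threshold_mode, mode)
    return value > bar if mode == "max" else value < bar
```

One plausible motivation for switching the default to `abs`: with a small best loss such as `1e-3`, rel mode with `threshold=1e-6` only demands a drop of `1e-9` (best * threshold), which noise easily satisfies, whereas abs mode demands a fixed drop of `1e-6` regardless of the loss scale.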
@@ -786,7 +786,7 @@ trainer_config:
     step_lr: null
     reduce_lr_on_plateau:
       threshold: 1e-6
-      threshold_mode: "rel"
+      threshold_mode: "abs"
       cooldown: 3
       patience: 5
       factor: 0.5
@@ -795,9 +795,9 @@ trainer_config:

 ### Early Stopping
 - `early_stopping`:
-    - `stop_training_on_plateau`: (bool) True if early stopping should be enabled. **Default**: `False`
-    - `min_delta`: (float) Minimum change in the monitored quantity to qualify as an improvement, i.e. an absolute change of less than or equal to min_delta, will count as no improvement. **Default**: `0.0`
-    - `patience`: (int) Number of checks with no improvement after which training will be stopped. Under the default configuration, one check happens after every training epoch. **Default**: `1`
+    - `stop_training_on_plateau`: (bool) True if early stopping should be enabled. **Default**: `True`
+    - `min_delta`: (float) Minimum change in the monitored quantity to qualify as an improvement, i.e. an absolute change of less than or equal to min_delta, will count as no improvement. **Default**: `1e-8`
+    - `patience`: (int) Number of checks with no improvement after which training will be stopped. Under the default configuration, one check happens after every training epoch. **Default**: `10`

 ### Online Hard Keypoint Mining (OHKM)
 - `online_hard_keypoint_mining`:
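The `min_delta`/`patience` semantics in the Early Stopping section above can be sketched as a small plateau detector. This is a hypothetical helper using the new defaults (`min_delta=1e-8`, `patience=10`), not sleap-nn's actual early-stopping code:

```python
def stop_training_on_plateau(losses, min_delta=1e-8, patience=10):
    """Return True once `patience` consecutive checks show no improvement.

    Per the docs above, a check counts as an improvement only if the loss
    drops by MORE than min_delta below the best value seen so far; a change
    of less than or equal to min_delta counts as no improvement.
    """
    best = float("inf")
    checks_without_improvement = 0
    for loss in losses:
        if best - loss > min_delta:
            best = loss
            checks_without_improvement = 0
        else:
            checks_without_improvement += 1
            if checks_without_improvement >= patience:
                return True
    return False
```

For example, a loss that is flat for 10 checks after its best value trips the detector, while a steadily decreasing loss never does.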

docs/installation.md

Lines changed: 94 additions & 5 deletions
@@ -79,6 +79,18 @@ Python 3.11 (or) 3.12 (or) 3.13 (required for all installation methods)
 sleap-nn --help
 ```

+### Updating Dependencies
+
+To update sleap-nn and its dependencies (e.g., sleap-io) to their latest versions:
+
+```bash
+# Upgrade sleap-nn to the latest version
+uv tool upgrade sleap-nn
+```
+
+!!! note
+    When upgrading, uv respects any version constraints specified during installation. The upgrade will only update within those constraints. To change version constraints, reinstall with new specifications using `uv tool install`.
+
 ---

 ## Installation with uvx
@@ -123,6 +135,15 @@ sleap-nn --help
 !!! note "uvx Installation"
     Because `uvx` installs packages fresh on every run, it's ideal for quick tests or use in remote environments. For regular use, you could install with [`uv tool install`](#installation-as-a-system-wide-tool-with-uv) or setting up a development environment with [`uv sync`](#installation-from-source) to avoid repeated downloads.

+### Updating Dependencies
+
+With `uvx`, no separate update command is needed:
+
+!!! tip "Automatic Updates"
+    `uvx` automatically fetches and installs the latest version of sleap-nn and its dependencies (e.g., sleap-io) each time you run a command. This means you're always using the most recent version unless you specify version constraints like `uvx "sleap-nn[torch]==0.0.3" ...`.
+
+To ensure you're using the latest version, simply run your `uvx` command as usual - it will automatically download and use the newest available version.
+
 ---

 ## Installation with uv add
@@ -215,9 +236,33 @@ uv run sleap-nn --help
 ```
 This ensures the command runs in the correct environment.

-- **Another workaround (not recommended):**
+- **Another workaround (not recommended):**
     Check if you have any *empty* `pyproject.toml` or `uv.lock` files in `Users/<your-user-name>`. If you find empty files with these names, delete them and try again. (Empty files here can sometimes interfere with uv's environment resolution.)

+### Updating Dependencies
+
+To update sleap-nn and its dependencies to their latest versions:
+
+=== "Upgrade a Specific Package"
+    ```bash
+    # Upgrade sleap-nn and update the lock file
+    uv add "sleap-nn[torch]" --upgrade-package sleap-nn
+
+    # Upgrade a specific dependency like sleap-io
+    uv add sleap-io --upgrade-package sleap-io
+    ```
+
+=== "Upgrade All Dependencies"
+    ```bash
+    # Upgrade all packages to their latest compatible versions
+    uv sync --upgrade
+    ```
+
+!!! note
+    - `uv add --upgrade-package <package>` forces the specified package to update to its latest compatible version, even if a valid version is already installed.
+    - `uv sync --upgrade` refreshes the entire lockfile and updates all dependencies to their newest compatible versions while maintaining compatibility with your `pyproject.toml` constraints.
+    - By default, `uv add` only updates the locked version if necessary to satisfy new constraints. Use `--upgrade-package` to force an update.
+
 ---

 ## Installation with pip
@@ -270,6 +315,35 @@ sleap-nn --help
 python -c "import torch; print(f'PyTorch: {torch.__version__}'); print(f'CUDA available: {torch.cuda.is_available()}')"
 ```

+### Updating Dependencies
+
+To update sleap-nn and its dependencies to their latest versions:
+
+=== "Windows/Linux (CUDA)"
+    ```bash
+    # CUDA 12.8
+    pip install --upgrade sleap-nn[torch] --index-url https://pypi.org/simple --extra-index-url https://download.pytorch.org/whl/cu128
+
+    # CUDA 11.8
+    pip install --upgrade sleap-nn[torch] --index-url https://pypi.org/simple --extra-index-url https://download.pytorch.org/whl/cu118
+    ```
+
+=== "Windows/Linux (CPU)"
+    ```bash
+    pip install --upgrade sleap-nn[torch] --index-url https://pypi.org/simple --extra-index-url https://download.pytorch.org/whl/cpu
+    ```
+
+=== "macOS"
+    ```bash
+    pip install --upgrade "sleap-nn[torch]"
+    ```
+
+!!! tip "Upgrading Specific Dependencies"
+    To upgrade a specific dependency like sleap-io independently:
+    ```bash
+    pip install --upgrade sleap-io
+    ```
+
 ---

 ## Installation from source
@@ -315,12 +389,27 @@ cd sleap-nn
 uv sync --extra dev --extra torch-cpu
 ```

-!!! tip "Upgrading All Dependencies"
-    To ensure you have the latest versions of all dependencies, use the `--upgrade` flag with `uv sync`:
+#### 4. Updating Dependencies
+
+To update sleap-nn and its dependencies to their latest versions:
+
+=== "Windows/Linux (CUDA 11.8)"
 ```bash
-uv sync --extra dev --upgrade
+uv sync --extra dev --extra torch-cuda118 --upgrade
 ```
-This will upgrade all installed packages in your environment to the latest available versions compatible with your `pyproject.toml`.
+
+=== "Windows/Linux (CUDA 12.8)"
+```bash
+uv sync --extra dev --extra torch-cuda128 --upgrade
+```
+
+=== "macOS/CPU Only"
+```bash
+uv sync --extra dev --extra torch-cpu --upgrade
+```
+
+!!! tip "How --upgrade Works"
+    The `--upgrade` flag refreshes the lockfile and updates all dependencies to their newest compatible versions while maintaining compatibility with your `pyproject.toml` constraints. This ensures you have the latest versions of all dependency packages.


 ### Verify Installation

docs/sample_configs/config_bottomup_convnext.yaml

Lines changed: 1 addition & 1 deletion
@@ -122,7 +122,7 @@ trainer_config:
     step_lr: null
     reduce_lr_on_plateau:
       threshold: 1.0e-06
-      threshold_mode: rel
+      threshold_mode: abs
       cooldown: 3
       patience: 5
       factor: 0.5

docs/sample_configs/config_bottomup_unet_large_rf.yaml

Lines changed: 1 addition & 1 deletion
@@ -133,7 +133,7 @@ trainer_config:
     step_lr: null
     reduce_lr_on_plateau:
      threshold: 1.0e-08
-      threshold_mode: rel
+      threshold_mode: abs
       cooldown: 3
       patience: 8
       factor: 0.5

docs/sample_configs/config_bottomup_unet_medium_rf.yaml

Lines changed: 1 addition & 1 deletion
@@ -133,7 +133,7 @@ trainer_config:
     step_lr: null
     reduce_lr_on_plateau:
       threshold: 1.0e-08
-      threshold_mode: rel
+      threshold_mode: abs
       cooldown: 3
       patience: 8
       factor: 0.5

docs/sample_configs/config_centroid_swint.yaml

Lines changed: 1 addition & 1 deletion
@@ -126,7 +126,7 @@ trainer_config:
     step_lr: null
     reduce_lr_on_plateau:
       threshold: 1.0e-06
-      threshold_mode: rel
+      threshold_mode: abs
       cooldown: 3
       patience: 5
       factor: 0.5

docs/sample_configs/config_centroid_unet.yaml

Lines changed: 1 addition & 1 deletion
@@ -127,7 +127,7 @@ trainer_config:
     step_lr: null
     reduce_lr_on_plateau:
       threshold: 1.0e-08
-      threshold_mode: rel
+      threshold_mode: abs
       cooldown: 3
       patience: 5
       factor: 0.5

docs/sample_configs/config_multi_class_bottomup_unet.yaml

Lines changed: 1 addition & 1 deletion
@@ -122,7 +122,7 @@ trainer_config:
     step_lr: null
     reduce_lr_on_plateau:
       threshold: 1.0e-06
-      threshold_mode: rel
+      threshold_mode: abs
       cooldown: 3
       patience: 5
       factor: 0.5

docs/sample_configs/config_single_instance_unet_large_rf.yaml

Lines changed: 1 addition & 1 deletion
@@ -127,7 +127,7 @@ trainer_config:
     step_lr: null
     reduce_lr_on_plateau:
       threshold: 1.0e-05
-      threshold_mode: rel
+      threshold_mode: abs
       cooldown: 3
       patience: 5
       factor: 0.5

docs/sample_configs/config_single_instance_unet_medium_rf.yaml

Lines changed: 1 addition & 1 deletion
@@ -127,7 +127,7 @@ trainer_config:
     step_lr: null
     reduce_lr_on_plateau:
       threshold: 1.0e-08
-      threshold_mode: rel
+      threshold_mode: abs
       cooldown: 3
       patience: 5
       factor: 0.5
