**`docs/config.md`** (12 additions, 12 deletions)
````diff
@@ -123,7 +123,7 @@ trainer_config:
       step_lr: null
       reduce_lr_on_plateau:
         threshold: 1.0e-06
-        threshold_mode: rel
+        threshold_mode: abs
         cooldown: 3
         patience: 5
         factor: 0.5
````
````diff
@@ -739,7 +739,7 @@ trainer_config:
 ### Optimizer Configuration
 - `optimizer_name`: (str) Optimizer to be used. One of ["Adam", "AdamW"]. **Default**: `"Adam"`
 - `optimizer`:
-    - `lr`: (float) Learning rate. **Default**: `1e-3`
+    - `lr`: (float) Learning rate. **Default**: `1e-4`
     - `amsgrad`: (bool) Enable AMSGrad with the optimizer. **Default**: `False`

 ### Learning Rate Schedulers
````
````diff
@@ -752,12 +752,12 @@ trainer_config:

 #### Reduce LR on Plateau
 - `lr_scheduler.reduce_lr_on_plateau`:
-    - `threshold`: (float) Threshold for measuring the new optimum, to only focus on significant changes. **Default**: `1e-4`
-    - `threshold_mode`: (str) One of "rel", "abs". In rel mode, dynamic_threshold = best * (1 + threshold) in max mode or best * (1 - threshold) in min mode. In abs mode, dynamic_threshold = best + threshold in max mode or best - threshold in min mode. **Default**: `"rel"`
-    - `cooldown`: (int) Number of epochs to wait before resuming normal operation after lr has been reduced. **Default**: `0`
-    - `patience`: (int) Number of epochs with no improvement after which learning rate will be reduced. For example, if patience = 2, then we will ignore the first 2 epochs with no improvement, and will only decrease the LR after the third epoch if the loss still hasn't improved then. **Default**: `10`
-    - `factor`: (float) Factor by which the learning rate will be reduced. new_lr = lr * factor. **Default**: `0.1`
-    - `min_lr`: (float or List[float]) A scalar or a list of scalars. A lower bound on the learning rate of all param groups or each group respectively. **Default**: `0.0`
+    - `threshold`: (float) Threshold for measuring the new optimum, to only focus on significant changes. **Default**: `1e-6`
+    - `threshold_mode`: (str) One of "rel", "abs". In rel mode, dynamic_threshold = best * (1 + threshold) in max mode or best * (1 - threshold) in min mode. In abs mode, dynamic_threshold = best + threshold in max mode or best - threshold in min mode. **Default**: `"abs"`
+    - `cooldown`: (int) Number of epochs to wait before resuming normal operation after lr has been reduced. **Default**: `3`
+    - `patience`: (int) Number of epochs with no improvement after which learning rate will be reduced. For example, if patience = 2, then we will ignore the first 2 epochs with no improvement, and will only decrease the LR after the third epoch if the loss still hasn't improved then. **Default**: `5`
+    - `factor`: (float) Factor by which the learning rate will be reduced. new_lr = lr * factor. **Default**: `0.5`
+    - `min_lr`: (float or List[float]) A scalar or a list of scalars. A lower bound on the learning rate of all param groups or each group respectively. **Default**: `1e-8`
````
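The rel/abs comparison described in the `threshold_mode` entries above can be sketched in pure Python. This is a hypothetical helper (`is_improvement` is not part of sleap-nn or PyTorch), assuming min mode, i.e. a loss being minimized:

```python
def is_improvement(current, best, threshold=1e-6, threshold_mode="abs", mode="min"):
    """Return True if `current` beats `best` by more than `threshold`.

    Mirrors the rel/abs dynamic-threshold rule quoted in the docs above.
    """
    if threshold_mode == "rel":
        # rel: the margin scales with the best value seen so far
        dynamic = best * (1 - threshold) if mode == "min" else best * (1 + threshold)
    else:
        # abs: the margin is a fixed offset from the best value
        dynamic = best - threshold if mode == "min" else best + threshold
    return current < dynamic if mode == "min" else current > dynamic

# With the new defaults (abs, 1e-6), a loss drop of only 1e-7 does not
# count as an improvement, so the plateau counter keeps ticking:
print(is_improvement(0.9999999, 1.0))  # → False
print(is_improvement(0.99, 1.0))       # → True
```

This illustrates why the switch from `rel` to `abs` matters near zero: in rel mode the margin `best * threshold` shrinks as the loss approaches zero, while in abs mode it stays a fixed `1e-6`.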
````diff
-    - `stop_training_on_plateau`: (bool) True if early stopping should be enabled. **Default**: `False`
-    - `min_delta`: (float) Minimum change in the monitored quantity to qualify as an improvement, i.e. an absolute change of less than or equal to min_delta will count as no improvement. **Default**: `0.0`
-    - `patience`: (int) Number of checks with no improvement after which training will be stopped. Under the default configuration, one check happens after every training epoch. **Default**: `1`
+    - `stop_training_on_plateau`: (bool) True if early stopping should be enabled. **Default**: `True`
+    - `min_delta`: (float) Minimum change in the monitored quantity to qualify as an improvement, i.e. an absolute change of less than or equal to min_delta will count as no improvement. **Default**: `1e-8`
+    - `patience`: (int) Number of checks with no improvement after which training will be stopped. Under the default configuration, one check happens after every training epoch. **Default**: `10`
````
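The `min_delta`/`patience` interaction described above can be sketched as a minimal pure-Python stopper. This is a hypothetical illustration (the `EarlyStopping` class here is not sleap-nn's implementation), assuming the monitored quantity is a loss to minimize:

```python
class EarlyStopping:
    """Stop after `patience` consecutive checks whose improvement over the
    best value so far is <= `min_delta` (loss is being minimized)."""

    def __init__(self, min_delta=1e-8, patience=10):
        self.min_delta = min_delta
        self.patience = patience
        self.best = float("inf")
        self.wait = 0  # consecutive checks with no qualifying improvement

    def should_stop(self, loss):
        if self.best - loss > self.min_delta:
            # Strict improvement beyond min_delta: reset the counter
            self.best = loss
            self.wait = 0
        else:
            self.wait += 1
        return self.wait >= self.patience

stopper = EarlyStopping(min_delta=0.0, patience=2)
print([stopper.should_stop(loss) for loss in [1.0, 0.5, 0.5, 0.5]])
# → [False, False, False, True]
```

Note that with `min_delta=0.0` an exactly flat loss already counts as "no improvement", which is why the third plateaued check above triggers the stop.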
**`docs/installation.md`** (94 additions, 5 deletions)
````diff
@@ -79,6 +79,18 @@ Python 3.11 (or) 3.12 (or) 3.13 (required for all installation methods)
 sleap-nn --help
 ```

+### Updating Dependencies
+
+To update sleap-nn and its dependencies (e.g., sleap-io) to their latest versions:
+
+```bash
+# Upgrade sleap-nn to the latest version
+uv tool upgrade sleap-nn
+```
+
+!!! note
+    When upgrading, uv respects any version constraints specified during installation. The upgrade will only update within those constraints. To change version constraints, reinstall with new specifications using `uv tool install`.
+
 ---

 ## Installation with uvx
````
````diff
@@ -123,6 +135,15 @@ sleap-nn --help
 !!! note "uvx Installation"
     Because `uvx` installs packages fresh on every run, it's ideal for quick tests or use in remote environments. For regular use, install with [`uv tool install`](#installation-as-a-system-wide-tool-with-uv) or set up a development environment with [`uv sync`](#installation-from-source) to avoid repeated downloads.

+### Updating Dependencies
+
+With `uvx`, no separate update command is needed:
+
+!!! tip "Automatic Updates"
+    `uvx` automatically fetches and installs the latest version of sleap-nn and its dependencies (e.g., sleap-io) each time you run a command. This means you're always using the most recent version unless you specify version constraints like `uvx "sleap-nn[torch]==0.0.3" ...`.
+
+To ensure you're using the latest version, simply run your `uvx` command as usual; it will automatically download and use the newest available version.
+
 ---

 ## Installation with uv add
````
````diff
@@ -215,9 +236,33 @@ uv run sleap-nn --help
 ```
 This ensures the command runs in the correct environment.

-- **Another workaround (not recommended):**
+- **Another workaround (not recommended):**
     Check if you have any *empty* `pyproject.toml` or `uv.lock` files in `Users/<your-user-name>`. If you find empty files with these names, delete them and try again. (Empty files here can sometimes interfere with uv's environment resolution.)

+### Updating Dependencies
+
+To update sleap-nn and its dependencies to their latest versions:
+
+```bash
+# Upgrade all packages to their latest compatible versions
+uv sync --upgrade
+```
+
+!!! note
+    - `uv add --upgrade-package <package>` forces the specified package to update to its latest compatible version, even if a valid version is already installed.
+    - `uv sync --upgrade` refreshes the entire lockfile and updates all dependencies to their newest compatible versions while maintaining compatibility with your `pyproject.toml` constraints.
+    - By default, `uv add` only updates the locked version if necessary to satisfy new constraints. Use `--upgrade-package` to force an update.
````
````diff
+To upgrade a specific dependency like sleap-io independently:
+
+```bash
+pip install --upgrade sleap-io
+```
+
 ---

 ## Installation from source
````
````diff
@@ -315,12 +389,27 @@ cd sleap-nn
 uv sync --extra dev --extra torch-cpu
 ```

-!!! tip "Upgrading All Dependencies"
-    To ensure you have the latest versions of all dependencies, use the `--upgrade` flag with `uv sync`:
+#### 4. Updating Dependencies
+
+To update sleap-nn and its dependencies to their latest versions:
+
+=== "Windows/Linux (CUDA 11.8)"
     ```bash
-    uv sync --extra dev --upgrade
+    uv sync --extra dev --extra torch-cuda118 --upgrade
     ```
-    This will upgrade all installed packages in your environment to the latest available versions compatible with your `pyproject.toml`.
+
+=== "Windows/Linux (CUDA 12.8)"
+    ```bash
+    uv sync --extra dev --extra torch-cuda128 --upgrade
+    ```
+
+=== "macOS/CPU Only"
+    ```bash
+    uv sync --extra dev --extra torch-cpu --upgrade
+    ```
+
+!!! tip "How --upgrade Works"
+    The `--upgrade` flag refreshes the lockfile and updates all dependencies to their newest compatible versions while maintaining compatibility with your `pyproject.toml` constraints. This ensures you have the latest versions of all dependency packages.
````