
Commit 5e0eedb

talmo and claude committed
Fix inference progress ending at 99% instead of 100%
Add final progress emit after the inference loop in _predict_generator_gui() to ensure 100% is always shown. Previously, the throttled progress reporting (~4 Hz) could skip the final update if the last batch completed within 0.25 s of the previous report.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
1 parent 9a75e6a · commit 5e0eedb
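For context, here is a minimal sketch of the pattern the commit message describes: progress lines printed as JSON at a throttled rate, plus one unconditional emit after the loop so the consumer always sees n_processed == n_total. The payload keys, the ~0.25 s interval, and the final-emit fields mirror the diff below; the function name, parameters, and loop structure here are illustrative assumptions, not the actual sleap_nn implementation.

import json
import time


def report_progress(frames, total_frames, process_batch, report_interval=0.25):
    """Illustrative sketch (not sleap_nn's code): emit JSON progress at ~4 Hz,
    then one final line so the consumer always reaches 100%."""
    start_time = time.time()
    last_report = 0.0
    frames_processed = 0

    for frame in frames:
        process_batch(frame)
        frames_processed += 1

        now = time.time()
        if now - last_report > report_interval:  # throttle to roughly 4 Hz
            elapsed = now - start_time
            rate = frames_processed / elapsed if elapsed > 0 else 0
            print(json.dumps({
                "n_processed": frames_processed,
                "n_total": total_frames,
                "rate": round(rate, 1),
                "eta": round((total_frames - frames_processed) / rate, 1) if rate > 0 else 0,
            }), flush=True)
            last_report = now

    # Without this final emit, the last throttled report could stop short of
    # total_frames (the "99%" symptom) whenever the last batch finished within
    # report_interval of the previous report.
    elapsed = time.time() - start_time
    print(json.dumps({
        "n_processed": total_frames,
        "n_total": total_frames,
        "rate": round(frames_processed / elapsed, 1) if elapsed > 0 else 0,
        "eta": 0,
    }), flush=True)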

File tree

1 file changed (+10, -0 lines)


sleap_nn/inference/predictors.py

Lines changed: 10 additions & 0 deletions
@@ -567,6 +567,16 @@ def _predict_generator_gui(
                 print(json.dumps(progress_data), flush=True)
                 last_report = time()
 
+        # Final progress emit to ensure 100% is shown
+        elapsed = time() - start_time
+        progress_data = {
+            "n_processed": total_frames,
+            "n_total": total_frames,
+            "rate": round(frames_processed / elapsed, 1) if elapsed > 0 else 0,
+            "eta": 0,
+        }
+        print(json.dumps(progress_data), flush=True)
+
     def _predict_generator_rich(
         self, total_frames: int
     ) -> Iterator[Dict[str, np.ndarray]]:
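Design note: because the final record reports n_processed == n_total with eta 0, the GUI side never has to special-case the end of the stream; it can simply render whatever the last JSON line says. A minimal sketch of such a consumer is below; the subprocess command and module path are hypothetical placeholders, not sleap_nn's actual CLI or GUI code.

import json
import subprocess

# Hypothetical command; the real entry point and flags may differ.
cmd = ["python", "-m", "sleap_nn.predict", "--video", "clip.mp4"]

with subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True) as proc:
    for line in proc.stdout:
        try:
            progress = json.loads(line)
        except json.JSONDecodeError:
            continue  # ignore any non-progress output on stdout
        pct = 100 * progress["n_processed"] / max(progress["n_total"], 1)
        print(f"\r{pct:5.1f}%  ETA {progress['eta']}s", end="", flush=True)
print()  # the final emit guarantees this loop ends at 100.0%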
