Optimize tracker #110
base: main
Conversation
- remove persistent tracking (on by default); track by frame until the context window is reached or the first batch is done
- per-frame only until the context window is reached; context window is now in terms of frames; run model on batch x (batch + context); remove from GPU before passing to run_global_tracker; disable validation tracking (not meaningful anymore)
…atch_tracker and run_global_tracker, to support different tracking modes; implements batch tracking with a single fwd pass and full-context track linking; moves assoc matrix and instances off GPU for batch tracking; provides a flag in run_batch_tracker to compute softmax by frame or globally, depending on whether global track linking is desired (sketched below, after the commit list)
- save the inference configs to the results folder for a record of the configs used for tracking; minor bug fixes
- add iou weighting to batch tracker
Need to add a flag to choose between batch tracking and frame-by-frame tracking. Setting batch size = 1 is inefficient because it loads data one frame at a time, whereas currently we load in batches and run the forward pass frame by frame.
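For illustration only, a minimal sketch of what such a flag could look like in an inference config. The field and function names below are assumptions, not the repo's actual schema:

```python
# Hypothetical config fragment -- names are illustrative, not the actual schema.
from dataclasses import dataclass

@dataclass
class TrackerConfig:
    batch_tracking: bool = True  # one forward pass per batch when True
    context_window: int = 8      # context length, measured in frames
    batch_size: int = 8          # frames loaded per batch in either mode

def select_tracking_mode(cfg: TrackerConfig) -> str:
    """Choose the tracking loop from the flag instead of forcing batch_size = 1."""
    return "batch" if cfg.batch_tracking else "frame_by_frame"
```

With a flag like this, data loading stays batched in both modes and only the forward-pass granularity changes.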
…prev frame instances (#113)
…into optimize-tracker
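To make the softmax flag mentioned in the commit list concrete, here is a rough sketch. The function name, signature, and shapes are assumptions for illustration, not the actual run_batch_tracker API:

```python
import torch

def normalize_associations(
    assoc: torch.Tensor, frame_ids: torch.Tensor, by_frame: bool = True
) -> torch.Tensor:
    """Illustrative only: normalize association scores per frame or globally.

    assoc:     (n_query, n_ref) raw association scores.
    frame_ids: (n_ref,) frame index of each reference instance.
    """
    # Move everything off GPU before track linking, as in the commit message.
    assoc = assoc.detach().cpu()
    frame_ids = frame_ids.detach().cpu()
    if by_frame:
        # Softmax over each frame's reference instances separately, so every
        # query distributes probability 1 within each frame (local linking).
        probs = torch.zeros_like(assoc)
        for f in frame_ids.unique():
            cols = frame_ids == f
            probs[:, cols] = torch.softmax(assoc[:, cols], dim=1)
        return probs
    # Global softmax across all reference instances (for global track linking).
    return torch.softmax(assoc, dim=1)
```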
Currently, we do frame-by-frame tracking, which means a forward pass for each frame rather than each batch. This is necessary to provide the most relevant context for the model to attend to while linking each frame locally, i.e. linking one frame at a time to existing tracks. However, there are considerable computational gains to be had if we can switch to batch tracking, i.e. a single forward pass per batch.
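As a rough sketch of that per-frame loop (the model call and linking helper here are placeholders, not the repo's actual API):

```python
from collections import deque

def track_frame_by_frame(model, frames, context_size, link_to_tracks):
    """Illustrative only: one forward pass per frame, linking locally against a
    sliding context window of already-tracked frames."""
    context = deque(maxlen=context_size)  # context is measured in frames
    tracks = []
    for frame in frames:
        # Forward pass over just this frame plus its context window.
        assoc = model(list(context) + [frame])
        # Link the single new frame to existing tracks (local linking).
        tracks = link_to_tracks(tracks, [frame], assoc)
        context.append(frame)
    return tracks
```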
To fill up the context window, we will still do frame-by-frame tracking, and then switch to batch tracking. Now the forward pass will be done on batch x (batch + context) all at once, i.e. the associations are computed intra-batch as well as against the context window in a single pass. This means many more associations for the model to predict, which could affect performance. Since the context includes the current batch, this change also opens the door for global track linking via ILP/GNN in the future.
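A minimal sketch of the batched variant described above, assuming the context window has already been filled frame by frame; model and link_batch are placeholders for the actual forward pass and track-linking code:

```python
def track_in_batches(model, batches, context_size, link_batch):
    """Illustrative only: one forward pass per batch, attending over (batch + context)."""
    context, tracks = [], []
    for batch in batches:
        # Single forward pass: associations for the whole batch against
        # (batch + context), i.e. intra-batch and batch-to-context at once.
        assoc = model(batch + context)
        tracks = link_batch(tracks, batch, assoc)
        # Slide the context window, keeping the most recent `context_size` frames.
        context = (context + batch)[-context_size:]
    return tracks
```

Because the batch itself sits inside the attended window, the same pass could later feed a global linker (ILP/GNN) instead of purely local assignment.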