
ocsort_yolox_x_crowdhuman_mot17-private-half

Overview

Description: The ocsort_yolox_x_crowdhuman_mot17-private-half model is from OpenMMLab's MMTracking library. It is reported to obtain MOTA: 77.8 and IDF1: 78.4 on the video-multi-object-tracking task with the MOT17-half-eval dataset.

Multi-Object Tracking (MOT) has rapidly progressed with the development of object detection and re-identification. However, motion modeling, which facilitates object association by forecasting short-term trajectories with past observations, has been relatively under-explored in recent years. Current motion models in MOT typically assume that the object motion is linear in a small time window and needs continuous observations, so these methods are sensitive to occlusions and non-linear motion and require high frame-rate videos. In this work, we show that a simple motion model can obtain state-of-the-art tracking performance without other cues like appearance. We emphasize the role of “observation” when recovering tracks from being lost and reducing the error accumulated by linear motion models during the lost period. We thus name the proposed method as Observation-Centric SORT, OC-SORT for short. It remains simple, online, and real-time but improves robustness over occlusion and non-linear motion. It achieves 63.2 and 62.1 HOTA on MOT17 and MOT20, respectively, surpassing all published methods. It also sets new states of the art on KITTI Pedestrian Tracking and DanceTrack, where the object motion is highly non-linear.

> The above abstract is from the MMTracking website. Review the original-model-card to understand the data used to train the model, evaluation metrics, license, intended uses, limitations and bias before using the model.

### Inference samples

Inference type|Python sample (Notebook)|CLI with YAML
|--|--|--|
Real time|video-multi-object-tracking-online-endpoint.ipynb|video-multi-object-tracking-online-endpoint.sh

### Finetuning samples

Task|Use case|Dataset|Python sample (Notebook)|CLI with YAML
|---|--|--|--|--|
Video multi-object tracking|Video multi-object tracking|MOT17 tiny|mot17-tiny-video-multi-object-tracking.ipynb|mot17-tiny-video-multi-object-tracking.sh

### Sample inputs and outputs (for real-time inference)

#### Sample input

```json
{
  "input_data": {
    "columns": [
      "video"
    ],
    "data": ["video_link"]
  }
}
```

Note: "video_link" should be a publicly accessible URL.

#### Sample output

```json
[
  {
    "det_bboxes": [
      {
        "box": {
          "topX": 703.9149780273,
          "topY": -5.5951070786,
          "bottomX": 756.9875488281,
          "bottomY": 158.1963806152
        },
        "label": 0,
        "score": 0.9597821236
      },
      {
        "box": {
          "topX": 1487.9072265625,
          "topY": 67.9468841553,
          "bottomX": 1541.1591796875,
          "bottomY": 217.5476837158
        },
        "label": 0,
        "score": 0.9568068385
      }
    ],
    "track_bboxes": [
      {
        "box": {
          "instance_id": 0,
          "topX": 703.9149780273,
          "topY": -5.5951070786,
          "bottomX": 756.9875488281,
          "bottomY": 158.1963806152
        },
        "label": 0,
        "score": 0.9597821236
      },
      {
        "box": {
          "instance_id": 1,
          "topX": 1487.9072265625,
          "topY": 67.9468841553,
          "bottomX": 1541.1591796875,
          "bottomY": 217.5476837158
        },
        "label": 0,
        "score": 0.9568068385
      }
    ],
    "frame_id": 0,
    "video_url": "video_link"
  }
]
```
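For reference, the following is a minimal sketch of sending the sample input above to a deployed online endpoint with the azure-ai-ml SDK and reading back the tracking output. The subscription, workspace, endpoint, and deployment names, as well as the video URL, are hypothetical placeholders, not values from this model card; the linked notebook is the authoritative sample.

```python
# Minimal sketch: invoke an online endpoint hosting this model using the
# request/response schemas shown above. All <...> names and the video URL
# are hypothetical placeholders -- substitute your own.
import json
import tempfile

from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Request body follows the "Sample input" schema: a single "video" column
# with a publicly accessible video URL as the data row.
payload = {
    "input_data": {
        "columns": ["video"],
        "data": ["https://example.com/sample-video.mp4"],
    }
}

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(payload, f)
    request_file = f.name

response = ml_client.online_endpoints.invoke(
    endpoint_name="<endpoint-name>",
    deployment_name="<deployment-name>",
    request_file=request_file,
)

# The response follows the "Sample output" schema: per-frame detection and
# tracking boxes, where each tracked box carries a persistent instance_id.
for frame in json.loads(response):
    for tracked in frame["track_bboxes"]:
        box = tracked["box"]
        print(frame["frame_id"], box["instance_id"], tracked["score"])
```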

Version: 1

Tags

license : apache-2.0

model_specific_defaults : ordereddict([('apply_deepspeed', 'false'), ('apply_ort', 'false')])

task : multi-object-tracking

View in Studio: https://ml.azure.com/registries/azureml/models/ocsort_yolox_x_crowdhuman_mot17-private-half/version/1

License: apache-2.0

Properties

finetune-min-sku-spec: 4|1|28|176 (vCPUs|GPUs|RAM in GB|storage in GB)

finetune-recommended-sku: Standard_NC6s_v3

finetuning-tasks: video-multi-object-tracking

inference-min-sku-spec: 2|1|14|28 (vCPUs|GPUs|RAM in GB|storage in GB)

inference-recommended-sku: Standard_NC6s_v3, Standard_NC12s_v3, Standard_NC24s_v3, Standard_NC24rs_v3, Standard_NC16as_T4_v3, Standard_NC24ads_A100_v4, Standard_NC48ads_A100_v4, Standard_NC4as_T4_v3, Standard_NC64as_T4_v3, Standard_NC8as_T4_v3, Standard_NC96ads_A100_v4, Standard_ND40rs_v2, Standard_ND96amsr_A100_v4, Standard_ND96asr_v4

model_id: ocsort_yolox_x_crowdhuman_mot17-private-half
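The recommended inference SKUs above map directly to the `instance_type` of a managed online deployment, and the model_id resolves to a fully qualified azureml registry path. Below is a minimal deployment sketch with the azure-ai-ml SDK, assuming placeholder subscription, workspace, endpoint, and deployment names; it shows one way to deploy this model, not the only one.

```python
# Minimal sketch: deploy this registry model to a managed online endpoint on
# one of the recommended inference SKUs. All <...> names are hypothetical
# placeholders -- substitute your own.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineDeployment, ManagedOnlineEndpoint
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Fully qualified id of this model in the azureml registry (see model_id
# and "View in Studio" above).
model_id = (
    "azureml://registries/azureml/models/"
    "ocsort_yolox_x_crowdhuman_mot17-private-half/versions/1"
)

endpoint = ManagedOnlineEndpoint(name="<endpoint-name>", auth_mode="key")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

deployment = ManagedOnlineDeployment(
    name="<deployment-name>",
    endpoint_name="<endpoint-name>",
    model=model_id,
    instance_type="Standard_NC6s_v3",  # a recommended inference SKU above
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()
```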
