[Detector Support]: ONNX on Jetson Orin NX: 320x320 working, 640x640 problematic with no obvious/explicit error #18592
Unanswered
zeroflow asked this question in Detector Support
Replies: 2 comments · 2 replies
- One question: why do you use your main streams for detection? In the logs there are hints of issues with these detect streams.
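For reference, a minimal sketch of what this suggestion looks like in a Frigate config: assign the low-resolution sub-stream to the detect role and keep the main stream for recording. The camera name, stream URLs, and resolutions below are placeholders, not values taken from the original config.

```yaml
cameras:
  front:                              # placeholder camera name
    ffmpeg:
      inputs:
        - path: rtsp://camera-ip/main # high-resolution main stream
          roles:
            - record
        - path: rtsp://camera-ip/sub  # low-resolution sub-stream
          roles:
            - detect
    detect:
      width: 640                      # match the sub-stream resolution
      height: 360
```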
2 replies
- YOLOv7x is a smaller model than YOLO-NAS. It works fine on my NVIDIA GPU, so it must just be some overflow or other issue. I'm not aware of any deeper logs that would give more info; 320x320 is recommended in most cases anyway.
0 replies
Original question from zeroflow:
Describe the problem you are having
I'm running 0.16.0 Beta 3 on my Orin NX 16GB with JetPack 6.2 / L4T 36.4.3 (latest as of writing).
With some great help from NickM, the ONNX backend for the -jp6 image was fixed.
With ONNX support in place, I purchased Frigate+ and trained on my own images.
The 320x320 model works fine, both the base model and my fine-tuned model.
Since the Orin NX should have enough power, I tried the 640x640 model, but that did not work.
EDIT: TensorRT 640 models such as YOLOv7x-640 work fine (though obviously slower); ONNX models such as YOLO-NAS S 640 do not.
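For context, this is roughly the kind of model configuration involved; a minimal sketch assuming the generic ONNX detector setup for YOLO-NAS described in the Frigate docs. The model path and label map below are placeholders (a Frigate+ model is referenced differently, via its model ID), and only the width/height differ between the working 320x320 case and the failing 640x640 one.

```yaml
detectors:
  onnx:
    type: onnx

model:
  model_type: yolonas
  width: 640                    # 320x320 works; 640x640 is the failing case
  height: 640
  input_tensor: nchw
  input_pixel_format: bgr
  path: /config/model_cache/yolo_nas_s_640.onnx   # placeholder path
  labelmap_path: /labelmap/coco-80.txt            # placeholder label map
```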
Version
0.16.0-f141b58
Frigate config file
docker-compose file or Docker CLI command
Relevant Frigate log output
Install method
Docker Compose
Object Detector
Other
Screenshots of the Frigate UI's System metrics pages
Any other information that may be helpful
In jtop, I see the detector's memory use spike to roughly 5 GB and the iGPU usage stuck at 100%.
From that behavior, the model appears to be crashing.
This also seems to cause problems with recordings, since frigate.record.maintainer reports unprocessed recording segments.
My main question is: Is there some way to debug the model and see where the problem is coming from?
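One low-effort way to get more detail out of Frigate itself is to raise log verbosity for the relevant modules via the logger section. This is only a sketch: frigate.record.maintainer is the module already seen in the logs, while frigate.detectors is an assumed module name and may need adjusting.

```yaml
logger:
  default: info
  logs:
    frigate.detectors: debug          # assumed module name; adjust if needed
    frigate.record.maintainer: debug  # module already reporting unprocessed segments
```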