How to debug failed inference on GPU? #21276
I've serialized my model using the OpenVINO API (results attached), but when calling inference on an Alder Lake GPU I get the exception below. How can I debug it? On CPU the same code works fine.

The attached zip contains the serialized IR xml and bin.
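For reference, a minimal sketch of the failing call. The file name `model.xml` and the `1x3x224x224` float input are placeholders I made up; substitute the IR from the zip and the model's real input shape:

```python
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")  # placeholder name for the attached IR

try:
    # The same model compiles and runs fine with device_name="CPU".
    compiled = core.compile_model(model, device_name="GPU")
    request = compiled.create_infer_request()
    # Placeholder input shape; match it to the model's actual input.
    data = np.random.rand(1, 3, 224, 224).astype(np.float32)
    result = request.infer({0: data})
    print(list(result.values()))
except Exception as exc:
    # This is where the GPU run throws.
    print(f"Inference failed: {exc}")
```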
Answered by mattiasmar, Nov 25, 2023
Replies: 1 comment
I suspected the call to `torch.sort` to be the riskiest call in this model. After cancelling out that method, the inference call on GPU executed as well (it just no longer does what the model is expected to do). I will open a separate issue for that.
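For anyone who wants to reproduce the isolation trick: here is a rough sketch of how one could cancel out `torch.sort` before re-converting the model. `ToyModel`, the pass-through replacement, and the output file name are hypothetical stand-ins, and the patched sort is deliberately not functionally equivalent; it only serves to test whether the sort op is what breaks GPU inference (assumes the OpenVINO PyTorch conversion path, `ov.convert_model`, is available):

```python
import torch
import torch.nn as nn
import openvino as ov

class ToyModel(nn.Module):
    # Hypothetical stand-in for the real model; it exists only to show
    # the torch.sort bypass. The actual model is the one in the zip.
    def forward(self, x):
        values, indices = torch.sort(x, dim=-1)
        return values

_orig_sort = torch.sort

def _passthrough_sort(input, dim=-1, descending=False, **kwargs):
    # Pass-through: return the input unsorted plus sequential "indices"
    # of the right shape, so downstream ops still get a (values, indices)
    # pair. NOT a real sort; code that accesses the namedtuple fields
    # (.values/.indices) of torch.sort's result would need adjusting.
    shape = [1] * input.dim()
    shape[dim] = input.size(dim)
    idx = torch.arange(input.size(dim), device=input.device).view(shape).expand_as(input)
    return input, idx

# Monkey-patch torch.sort while converting, then restore the real sort.
torch.sort = _passthrough_sort
try:
    ov_model = ov.convert_model(ToyModel(), example_input=torch.rand(1, 8))
    ov.save_model(ov_model, "model_no_sort.xml")
finally:
    torch.sort = _orig_sort
```

If the re-converted IR then runs on GPU, the sort lowering is the likely culprit, which is what I observed here.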
0 replies
Answer selected by mattiasmar