
How to debug failed inference on GPU? #21276

Closed Answered by mattiasmar
mattiasmar asked this question in Q&A

I suspected the call to torch.sort was the riskiest call in this model. After cancelling out that call, inference on the GPU ran to completion too (though the model no longer does what is expected of it). I will open a separate issue for that.
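The debugging technique described above, temporarily replacing a suspect op with a shape-compatible no-op to see whether the rest of the pipeline runs, can be sketched as a small context manager. This is a generic illustration, not code from the discussion; `fake_torch` is a stand-in namespace used here so the sketch is self-contained, but the same helper would work on the real `torch` module.

```python
from contextlib import contextmanager
from types import SimpleNamespace

@contextmanager
def neutralized(module, name, replacement):
    """Temporarily replace module.<name> so a suspect op can be ruled out,
    then restore the original even if an exception is raised."""
    original = getattr(module, name)
    setattr(module, name, replacement)
    try:
        yield
    finally:
        setattr(module, name, original)

# Stand-in for torch: sort returns a (values, indices) pair.
fake_torch = SimpleNamespace(
    sort=lambda x: (sorted(x), sorted(range(len(x)), key=lambda i: x[i]))
)

# Identity stub that mimics the (values, indices) return shape but does
# no work -- outputs will be wrong, but the call itself cannot fail.
def identity_sort(x):
    return (x, list(range(len(x))))

with neutralized(fake_torch, "sort", identity_sort):
    values, indices = fake_torch.sort([3, 1, 2])
    # values come back unsorted: the op is "cancelled out" inside the block
```

If inference succeeds with the stub in place, the failure is localized to the neutralized op rather than to the surrounding model code; restoring the original on exit keeps the model usable for further bisection.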

Replies: 1 comment
