Replies: 1 comment
-
ONNX Runtime has ROCm support, which I began to implement at one point (on my AMD Mac), but I couldn't get it to work... If I had a test system with Linux on an AMD GPU I'd be able to build and test. Otherwise, what I can do is make a special build and rely on you all to test it out. That would probably be best done "online", e.g. in a live session where we can test multiple versions in a short time; otherwise it will take forever (waiting days between tests). The other alternative would be to obtain a test PC somehow.
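For context, ONNX Runtime selects among "execution providers" at session creation, with CPU as the universal fallback; `ROCMExecutionProvider` and `CPUExecutionProvider` are real provider names, but the helper below and its use are an illustrative sketch, not the plugin's actual code:

```python
def pick_providers(preferred, available):
    """Keep the preferred providers that are actually available,
    always ending with the CPU provider as a fallback."""
    chosen = [p for p in preferred if p in available]
    if "CPUExecutionProvider" not in chosen:
        chosen.append("CPUExecutionProvider")
    return chosen

# On a working ROCm build, onnxruntime.get_available_providers() would
# list "ROCMExecutionProvider", and a session could be opened with e.g.:
#   ort.InferenceSession(model_path, providers=pick_providers(
#       ["ROCMExecutionProvider"], ort.get_available_providers()))
```

This kind of fallback is what would let a single "special build" run on testers' machines whether or not their ROCm stack works, which matters for the remote-testing workflow described above.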
-
Is there any way we can have a more GPU-agnostic machine learning API to accelerate the plugin on hardware other than TensorRT? And if not, is there a way users with AMD GPUs (Navi10 or later) could get acceleration via ROCm support on GNU/Linux? Just curious.