GPU support for custom inference runtimes in MLServer #1894

Open

koolgax99 opened this issue Aug 28, 2024 · 0 comments
Comments

@koolgax99

I am trying to use a GPU in a custom inference runtime built with MLServer, but I am unable to load the model onto the GPU.
Can you please let me know whether this is possible?

Thank you
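
For context, a custom runtime controls its own model loading, so GPU placement would normally happen inside the runtime's `load()` method rather than in MLServer itself. Below is a minimal sketch of what I had in mind, assuming a TorchScript model and that CUDA is visible inside the serving container; the `model.pt` path and the `MyGPURuntime` class name are just placeholders:

```python
import torch
from mlserver import MLModel
from mlserver.codecs import NumpyCodec
from mlserver.types import InferenceRequest, InferenceResponse


class MyGPURuntime(MLModel):
    async def load(self) -> bool:
        # Use the GPU if CUDA is available in the container,
        # otherwise fall back to CPU.
        self._device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        # Placeholder path: load a TorchScript model directly onto the device.
        self._model = torch.jit.load("./model.pt", map_location=self._device)
        self._model.eval()
        return True

    async def predict(self, payload: InferenceRequest) -> InferenceResponse:
        # Decode the first input tensor, run inference on the chosen
        # device, and encode the result back into a V2 response.
        np_input = NumpyCodec.decode_input(payload.inputs[0])
        with torch.no_grad():
            tensor = torch.from_numpy(np_input).to(self._device)
            output = self._model(tensor).cpu().numpy()
        return InferenceResponse(
            model_name=self.name,
            outputs=[NumpyCodec.encode_output(name="output", payload=output)],
        )
```

The class would be wired up via `model-settings.json` with `implementation` pointing at its module path, and the serving image would need a CUDA-enabled PyTorch build plus a GPU attached to the pod/container for `torch.cuda.is_available()` to return true.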
