
Adding llama-3.1-nemotron-70b-instruct #4660

Open
David-Sola opened this issue Oct 31, 2024 · 2 comments
Labels
enhancement New feature or request

Comments

@David-Sola

Currently it is not possible to add this model from NVIDIA. It would be great to support it, since the API costs are quite low.

@David-Sola David-Sola added the enhancement New feature or request label Oct 31, 2024
@mamoodi
Collaborator

mamoodi commented Oct 31, 2024

Does it not work if you enable Advanced Options and set it as the model?
Here are the litellm docs: https://docs.litellm.ai/docs/providers/nvidia_nim

Unless it's not supported by litellm.
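For reference, a minimal sketch of what that litellm call could look like under the Advanced Options route. The `nvidia_nim/` prefix and the `NVIDIA_NIM_API_KEY` environment variable come from the linked docs; the exact model slug (`nvidia/llama-3.1-nemotron-70b-instruct`) is an assumption based on NVIDIA's API catalog naming.

```python
# Sketch: calling llama-3.1-nemotron-70b-instruct via litellm's NVIDIA NIM
# provider. Requires `pip install litellm` and NVIDIA_NIM_API_KEY set in the
# environment. The model slug below is an assumption, not confirmed here.
MODEL = "nvidia_nim/nvidia/llama-3.1-nemotron-70b-instruct"

def ask(prompt: str) -> str:
    from litellm import completion  # pip install litellm
    # litellm picks up the API key from the NVIDIA_NIM_API_KEY env var
    response = completion(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

If litellm recognizes the provider prefix, setting the same `nvidia_nim/...` string as the custom model in Advanced Options should route requests the same way.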

@neubig
Contributor

neubig commented Nov 1, 2024

Yes @David-Sola, you can access the model via any hosting API that supports it. For instance, OpenRouter should work.
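The OpenRouter route suggested above would look much the same through litellm, just with a different provider prefix and key. A hedged sketch; the OpenRouter model id (`nvidia/llama-3.1-nemotron-70b-instruct`) is an assumption.

```python
# Sketch: reaching the same model through OpenRouter via litellm. Requires
# `pip install litellm` and OPENROUTER_API_KEY set in the environment.
MODEL = "openrouter/nvidia/llama-3.1-nemotron-70b-instruct"

def ask(prompt: str) -> str:
    from litellm import completion  # pip install litellm
    # litellm picks up the API key from the OPENROUTER_API_KEY env var
    response = completion(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```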
