
Can't run inference examples #113

Open
1 of 2 tasks
heyjustinai opened this issue Nov 12, 2024 · 0 comments · May be fixed by #114

System Info

n/a

Information

  • The official example scripts
  • My own modified scripts

🐛 Describe the bug

I get the error below whenever I try to run the inference example.

response = client.inference.chat_completion(
    messages=[message],
    model="Llama3.1-8B-Instruct",
    stream=stream,
)

Error logs

raise ValueError( ValueError: Model Llama3.2-11B-Vision-Instruct not served by any of the providers: meta-reference-0, meta-reference-1. Make sure there is an Inference provider serving this model.
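The error says the requested model identifier is not served by any of the running providers (meta-reference-0, meta-reference-1). As a diagnostic, one way to see which identifiers the distribution actually serves, and pick a matching one for the chat_completion call, is sketched below; the base_url/port and the models.list() endpoint of the llama-stack-client Python SDK are assumptions here, not taken from this report.

# Minimal diagnostic sketch (assumption: llama-stack-client Python SDK with a
# models.list() endpoint; the base_url/port of the local distribution is also
# an assumption).
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5000")

# Print the models the running providers actually serve, then pass one of
# these identifiers as the `model` argument to chat_completion.
for served_model in client.models.list():
    print(served_model)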

Expected behavior

I should be able to run the inference example successfully.

@heyjustinai heyjustinai self-assigned this Nov 12, 2024
@heyjustinai heyjustinai linked a pull request Nov 12, 2024 that will close this issue