[Bug] Llama example inference using Vulkan gives build error #2977
Labels: bug, Confirmed bugs
Comments
Hi @asfarkTii, thanks for reporting. May I ask for the TVM commit hash and the MLC commit hash of your local code base? We tried but were not able to reproduce this issue.
I see. I'm running on Jetson, which is an ARM platform. Here's the hash of the commit:
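For reference, the TVM-Unity build information asked about above can be printed from Python. This is a minimal sketch, assuming the `tvm` package from TVM-Unity is importable and that `GIT_COMMIT_HASH` is among the keys reported by `tvm.support.libinfo()`; the MLC commit itself can be read with `git rev-parse HEAD` in the mlc-llm source checkout.

```python
# Sketch: print the TVM-Unity build information requested above.
# Assumes the TVM-Unity `tvm` package is importable; GIT_COMMIT_HASH and
# USE_VULKAN are expected keys in tvm.support.libinfo(), hedged with .get().
import tvm

info = tvm.support.libinfo()
print("TVM GIT_COMMIT_HASH:", info.get("GIT_COMMIT_HASH", "<not available>"))
print("TVM USE_VULKAN:", info.get("USE_VULKAN", "<not available>"))
```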
🐛 Bug
I'm trying to replicate the LLaMA example as described in the introduction documentation, but it fails with errors related to relax.build despite a properly configured pipeline. The Vulkan drivers are installed correctly, and mlc_llm detects the Vulkan device as well.
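As a side note, one way to confirm that the Vulkan device is also visible to TVM itself is the check below; this is a sketch under the assumption of a Vulkan-enabled TVM-Unity build, not a step from the original report.

```python
# Sketch: verify that TVM's runtime can see the Vulkan device that
# mlc_llm reports as detected. Assumes a Vulkan-enabled TVM-Unity build.
import tvm

dev = tvm.vulkan(0)  # equivalent to tvm.device("vulkan", 0)
print("Vulkan device present:", dev.exist)
if dev.exist:
    # device_name is reported by the TVM runtime's device attribute query
    print("Device name:", dev.device_name)
```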
I have used the code mentioned below:
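A minimal sketch of what such a script looks like, following the MLCEngine example from the MLC-LLM introduction documentation; the model ID and the `device="vulkan"` argument are assumptions for illustration, not the reporter's exact values.

```python
# Sketch of the MLC-LLM introduction example (assumed, not the reporter's
# exact script). Model ID and device are placeholders.
from mlc_llm import MLCEngine

model = "HF://mlc-ai/Llama-3-8B-Instruct-q4f16_1-MLC"  # assumed model
engine = MLCEngine(model, device="vulkan")  # Vulkan, as in this report

# Stream a chat completion through the OpenAI-style API.
for response in engine.chat.completions.create(
    messages=[{"role": "user", "content": "What is the meaning of life?"}],
    model=model,
    stream=True,
):
    for choice in response.choices:
        print(choice.delta.content, end="", flush=True)
print()

engine.terminate()
```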
Expected behavior
I would expect it to work straightforwardly. Let me know if you need more information, and I can provide it.
Environment
- How you installed MLC-LLM (conda, source): source
- How you installed TVM-Unity (pip, source): source
- TVM Unity Hash Tag (`python -c "import tvm; print('\n'.join(f'{k}: {v}' for k, v in tvm.support.libinfo().items()))"`, applicable if you compile models): In [71]:
Additional context