The latest LM Studio binaries using the llama.cpp Vulkan runtime are no longer loading LLMs as large as 50B/70B parameters (about 35~40 GB) on an AMD RX580 GPU, leading to the message: Previous models ran perfectly. Here are a few logs from when it was working properly and from when it stopped working. Recent Vulkan run, not working: vulkan-not-working.txt. Attached, the llama.cpp Vulkan runtime working perfectly: I already discussed this with the LM Studio devs, who suspect the issue is related to llama.cpp Vulkan. Could you please help? Any advice is appreciated.
Can you elaborate on how it is failing? I don't see any immediate issue in the logs you provided.
Yes, there is a stack overflow, but not in llama.cpp. I was pointed to this: lmstudio-ai/lmstudio-bug-tracker#285
"The issue is related to limitations with chromium's memory allocator. We are tracking the issue here lmstudio-ai/lmstudio-bug-tracker#285"
Hopefully a fix will be implemented soon.
Thanks for your support.