Ollama and num_gpu #36717
bchtrue started this conversation in LLMs and Zed Agent
Replies: 1 comment
Hello,

Can you please add support for the num_gpu Ollama parameter in Zed? I'm so disappointed! I tried Zed today with a local Ollama instance, and to load a model fully onto the GPU I need to send Ollama the parameter num_gpu 99, but that isn't possible in Zed. Without it, Ollama is slow with Zed, because Ollama cannot utilize all of my GPU memory and offloads part of the model to the CPU. (With a value of 0, it would also be possible to offload the model fully to the CPU if somebody wants that.)

For reference: ollama/ollama#6950 (comment)
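Until Zed exposes this setting, one workaround is to set num_gpu on the Ollama side rather than in the client. A sketch of two ways to do that, assuming a model named llama3 is already pulled (the model name and tag here are just examples):

```shell
# Option 1: bake num_gpu into a model variant via a Modelfile,
# then point Zed at the new model name (llama3-gpu).
cat > Modelfile <<'EOF'
FROM llama3
PARAMETER num_gpu 99
EOF
ollama create llama3-gpu -f Modelfile

# Option 2: verify the parameter works by passing it per-request
# through Ollama's HTTP API "options" field.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Hello",
  "options": { "num_gpu": 99 }
}'
```

The Modelfile route is the more practical one for Zed, since Zed only sends the model name and cannot attach per-request options.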