Hi all,
I've just installed home-llm via HACS and was able to create a Voice Assistant with it. However, it's very, very slow to respond.
My hardware: Intel N100.
Simple questions like "What's the status of the blinds?" take 220 seconds to get a response. When I run qwen2.5-coder:1.5b directly via Ollama, it's much, much faster.
Is there anything wrong with my setup?
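For comparison, here is a minimal timing sketch against Ollama's REST API, assuming Ollama is listening on its default port (11434) and the model is already pulled; the prompt string is just an example:

```python
# Minimal sketch: time one request to Ollama's /api/generate endpoint
# to compare against the latency seen through home-llm.
import json
import time
import urllib.request

payload = json.dumps({
    "model": "qwen2.5-coder:1.5b",
    "prompt": "What's the status of the blinds?",
    "stream": False,  # wait for the full response so the timing is end-to-end
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

start = time.monotonic()
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(f"took {time.monotonic() - start:.1f}s: {body['response'][:80]!r}")
```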
Replies: 1 comment

There have been various reports of the N100 CPU having weird issues with llama.cpp (the backend). Can you enable debug logs and provide those? It should print out which optimizations are enabled on that CPU, which I assume don't include AVX or other optimized SIMD instructions.
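As a quick way to check this yourself, here is a minimal sketch that lists the SIMD feature flags the kernel reports and, if the llama-cpp-python bindings are importable, prints the optimizations llama.cpp was compiled with; the `llama_cpp` import is an assumption, since home-llm may bundle its own build:

```python
# Minimal sketch (assumes Linux): list SIMD-related CPU feature flags
# from /proc/cpuinfo, then print llama.cpp's compile-time optimizations
# if the llama-cpp-python bindings happen to be installed.
import re

def cpu_simd_flags(path="/proc/cpuinfo"):
    """Return SIMD-related feature flags the kernel reports for this CPU."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return sorted(x for x in flags
                              if re.match(r"(sse|ssse|avx|fma|f16c)", x))
    return []

print("CPU SIMD flags:", ", ".join(cpu_simd_flags()) or "none found")

try:
    import llama_cpp  # assumption: the standalone llama-cpp-python package
    # llama_print_system_info() reports compile-time features (AVX, AVX2,
    # FMA, ...) -- the same information the debug logs should show.
    print(llama_cpp.llama_print_system_info().decode())
except ImportError:
    print("llama_cpp not importable; check the integration's debug logs instead")
```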