Ollama recognised but not the LLM #8333
-
Langflow is able to detect your Ollama instance, but if no models show up even though you can see them via curl, this is most likely due to the "tool mode enabled" setting in Langflow. When this setting is on, Langflow only displays models that support tool calling, which excludes most standard LLMs. Disabling "tool mode" in the Ollama model component should make all your downloaded models (like tinyllama:1.1b) appear.

If disabling tool mode does not resolve the issue, make sure you are running the latest versions of both Langflow and Ollama. There have been compatibility issues in the past where models would not appear unless Ollama was updated to the newest release [3].

If you are running Langflow in Docker and Ollama on your host machine, ensure you use http://host.docker.internal:11434 as the base URL rather than http://127.0.0.1:11434, since 127.0.0.1 inside the container refers to the container itself, not to your host.

If you have confirmed the above and models still do not appear, check that the base URL and port configured in the Ollama component exactly match the address that works with curl.

Let me know if you need help locating the "tool mode" toggle or if you want to check for other possible causes.

To reply, just mention my name @dosu.
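If it helps, here are the checks above as concrete commands. This is only a sketch: the container name langflow is a placeholder, it assumes curl is available inside the image, and host.docker.internal assumes Docker Desktop (on Linux it has to be added explicitly).

```bash
# Confirm the Ollama version and which models it exposes on the host
ollama --version
curl http://127.0.0.1:11434/api/tags

# If Langflow runs in Docker, check that the container can reach Ollama on the host.
# "langflow" is a placeholder container name; on Linux, start the container with
# --add-host=host.docker.internal:host-gateway so host.docker.internal resolves to the host.
docker exec -it langflow curl http://host.docker.internal:11434/api/tags

# Whichever URL returns your models is the one to use as the base URL in the
# Ollama component (without the /api/tags path).
```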
-
So Langflow is able to detect Ollama at 127.0.0.1:11434, but no models are being recognised at that port. When I curl http://127.0.0.1:11434/api/tags,
I get "{"models":[{"name":"tinyllama:1.1b","model":"tinyllama:1.1b","modified_at":"2025-06-03T18:29:22+09:00","size":637700138,"digest":"2644915ede352ea7bdfaff0bfac0be74c719d5d5202acb63a6fb095b52f394a4","details":{"parent_model":"","format":"gguf","family":"llama","families":["llama"],"parameter_size":"1B","quantization_level":"Q4_0"}}]}".
I have also confirmed that the LLM works; it's just not getting recognised by Langflow.
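For example, a direct call to Ollama's generate endpoint (a rough sketch, the prompt is arbitrary) shows the model responding outside of Langflow:

```bash
# Ask tinyllama:1.1b for a single non-streamed completion
curl http://127.0.0.1:11434/api/generate \
  -d '{"model": "tinyllama:1.1b", "prompt": "Say hello in one sentence.", "stream": false}'
```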