Replies: 1 comment
Hi @ako! I'm Dosu and I'm helping the langflow team. You can use Langflow with a Docker Model Runner that exposes an OpenAI-compatible API by setting the "OpenAI API Base" (openai_api_base) field in the OpenAI provider configuration to point to your Docker endpoint. Langflow will then send requests to your custom API instead of OpenAI's servers. However, Langflow does not fetch available models dynamically from the endpoint; the model dropdown uses a static list. You can manually type in your model name, even if it is not listed, and it will work as long as your API supports it (source, source). No code changes or custom components are needed for basic use with an OpenAI-compatible API. If you want Langflow to fetch model names dynamically from your endpoint, you would need to modify the OpenAI component code to add this feature (source). For most setups, just set the API base URL and manually enter your model name. If you run Langflow and your model runner in separate Docker containers, make sure the API base URL is reachable from the Langflow container (for example, by using the runner's service name on a shared Docker network, or host.docker.internal, rather than localhost).
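As a quick sanity check outside Langflow, here is a minimal Python sketch using the `openai` client that does the same thing: point `base_url` at the runner's OpenAI-compatible endpoint and pass the model name explicitly. The URL and model name below are assumptions for illustration; substitute whatever your Docker Model Runner actually exposes.

```python
from openai import OpenAI

# Assumed local endpoint for a Docker Model Runner exposing an
# OpenAI-compatible API; replace with your actual host, port, and path.
BASE_URL = "http://localhost:12434/engines/v1"

client = OpenAI(
    base_url=BASE_URL,     # same value you'd put in "OpenAI API Base" in Langflow
    api_key="not-needed",  # local runners typically ignore the key, but the client requires one
)

# List the models the endpoint advertises. This is the kind of call a
# modified OpenAI component would make to populate the dropdown dynamically.
for model in client.models.list():
    print(model.id)

# Chat completion with a manually specified model name (hypothetical name).
response = client.chat.completions.create(
    model="ai/llama3.2",
    messages=[{"role": "user", "content": "Hello from the model runner"}],
)
print(response.choices[0].message.content)
```

If this works from your machine but not from Langflow, the base URL is usually the culprit: from inside the Langflow container, `localhost` refers to that container itself, so you need the runner's service name or `host.docker.internal` instead.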
Hi, newbie here, so this may be a rather obvious question.
Does anybody have Langflow working with models running in Docker Model Runner? The runner's API is OpenAI-compatible, but when you select the OpenAI provider, it doesn't fetch the available models from Docker. And if I select a custom language model for an agent, there's no option to configure the LLM API.
Does this require a custom component, or changes to the OpenAI agent component code?
Thanks,
Andrej