Using Internally Hosted LLM and Embedding Model #1192

@sssaha1989

Description

Hello,
I am running into issues using the LLM and embedding models hosted internally by our organization. I have set up the following environment variables:

LLM_API_KEY="dummy-key"
LLM_MODEL="google/gemma-3-27b-it"
LLM_ENDPOINT="http://xx.com:8000/v1"
LLM_PROVIDER="custom"

EMBEDDING_MODEL="thenlper/gte-large"
EMBEDDING_ENDPOINT="http://xx.com:8003/v1"
EMBEDDING_DIMENSIONS=512
EMBEDDING_MAX_TOKENS=512
EMBEDDING_PROVIDER="custom"
EMBEDDING_API_KEY="dummy-key"
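
For reference, this is roughly how I am running it (a minimal sketch; the cognee calls just follow the quickstart example, and the sample text is a placeholder):

import os
import asyncio

# Same values as the environment variables above, set before cognee is imported
os.environ["LLM_PROVIDER"] = "custom"
os.environ["LLM_MODEL"] = "google/gemma-3-27b-it"
os.environ["LLM_ENDPOINT"] = "http://xx.com:8000/v1"
os.environ["LLM_API_KEY"] = "dummy-key"
os.environ["EMBEDDING_PROVIDER"] = "custom"
os.environ["EMBEDDING_MODEL"] = "thenlper/gte-large"
os.environ["EMBEDDING_ENDPOINT"] = "http://xx.com:8003/v1"
os.environ["EMBEDDING_API_KEY"] = "dummy-key"
os.environ["EMBEDDING_DIMENSIONS"] = "512"
os.environ["EMBEDDING_MAX_TOKENS"] = "512"

import cognee

async def main():
    # The BadRequestError below is raised while this pipeline runs
    await cognee.add("Some sample text from our internal documentation.")
    await cognee.cognify()

asyncio.run(main())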

However, it always fails with an error like this:

litellm.BadRequestError: LLM Provider NOT provided.

I also tried litellm directly, using its vLLM configuration, and that code worked (see the sketch below). But when I run the cognee code with the same parameters I use in my litellm code, it throws the error above.
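
This is the kind of direct litellm call that works against the same server (the hosted_vllm/ prefix is just what litellm's vLLM provider docs use; the model name and endpoint are the same ones as in my cognee config):

import litellm

# Direct call to the same vLLM-served model, bypassing cognee
response = litellm.completion(
    model="hosted_vllm/google/gemma-3-27b-it",  # provider prefix tells litellm which route to use
    api_base="http://xx.com:8000/v1",
    api_key="dummy-key",
    messages=[{"role": "user", "content": "Hello, are you reachable?"}],
)
print(response.choices[0].message.content)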

I also tried the endpoints without the /v1 suffix, but that gives the same error.

Is this even possible using Cognee?
