Langflow with different frontend and different backend and building from source with multiple workers #10071
Replies: 30 comments
-
@dosu can you please help here.
-
You can achieve a production-grade, scalable Langflow deployment with separate frontend and backend services, multi-user support, and high availability by following these best practices:
1. Separate Backend and Frontend Images
Use the official images: langflowai/langflow-backend and langflowai/langflow-frontend.
2. Example Production Docker Compose
Here’s a simplified, production-ready Compose setup (you can expand it with Traefik or another reverse proxy for advanced routing and HTTPS):
version: "3.8"
services:
db:
image: postgres:16
environment:
POSTGRES_USER: langflow
POSTGRES_PASSWORD: langflow
POSTGRES_DB: langflow
volumes:
- langflow-postgres:/var/lib/postgresql/data
ports:
- "5432:5432"
redis:
image: redis:6.2.5
ports:
- "6379:6379"
rabbitmq:
image: rabbitmq:3-management
environment:
RABBITMQ_DEFAULT_USER: langflow
RABBITMQ_DEFAULT_PASS: langflow
ports:
- "5672:5672"
- "15672:15672"
volumes:
- rabbitmq_data:/var/lib/rabbitmq
- rabbitmq_log:/var/log/rabbitmq
backend:
image: langflowai/langflow-backend:latest
depends_on:
- db
- rabbitmq
- redis
environment:
LANGFLOW_DATABASE_URL: postgresql://langflow:langflow@db:5432/langflow
LANGFLOW_SUPERUSER: admin
LANGFLOW_SUPERUSER_PASSWORD: adminpassword
LANGFLOW_AUTO_LOGIN: "false"
LANGFLOW_WORKERS: 1
LANGFLOW_EVENT_DELIVERY: direct
LANGFLOW_REDIS_HOST: redis
LANGFLOW_REDIS_PORT: 6379
LANGFLOW_CACHE_TYPE: redis
LANGFLOW_COMPONENTS_PATH: /app/custom_components
LANGFLOW_SECRET_KEY: "your_secret_key"
LANGFLOW_BACKEND_ONLY: "true"
ports:
- "7860:7860"
volumes:
- ./custom_components:/app/custom_components
celeryworker:
image: langflowai/langflow-backend:latest
depends_on:
- backend
- db
- rabbitmq
- redis
environment:
LANGFLOW_DATABASE_URL: postgresql://langflow:langflow@db:5432/langflow
LANGFLOW_SUPERUSER: admin
LANGFLOW_SUPERUSER_PASSWORD: adminpassword
LANGFLOW_AUTO_LOGIN: "false"
LANGFLOW_WORKERS: 1
LANGFLOW_EVENT_DELIVERY: direct
LANGFLOW_REDIS_HOST: redis
LANGFLOW_REDIS_PORT: 6379
LANGFLOW_CACHE_TYPE: redis
LANGFLOW_COMPONENTS_PATH: /app/custom_components
LANGFLOW_SECRET_KEY: "your_secret_key"
command: >
python -m celery -A langflow.worker.celery_app worker --loglevel=INFO --concurrency=1 -n lf-worker@%h -P eventlet
volumes:
- ./custom_components:/app/custom_components
deploy:
replicas: 4 # Adjust for desired concurrency
frontend:
image: langflowai/langflow-frontend:latest
environment:
BACKEND_URL: http://backend:7860
ports:
- "8080:80"
depends_on:
- backend
volumes:
langflow-postgres:
rabbitmq_data:
rabbitmq_log:
3. Key Points for Production
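One production detail worth calling out: depends_on in Compose only orders container startup; it does not wait for Postgres or RabbitMQ to actually accept connections, so the backend should retry on startup. A minimal retry helper (a hypothetical sketch, not part of Langflow) looks like:

```python
import time

def wait_for(check, attempts=30, delay=1.0):
    """Retry a readiness probe (e.g. a TCP connect to db:5432) until it passes."""
    for _ in range(attempts):
        if check():
            return True
        time.sleep(delay)
    return False
```

In practice you would pass a function that attempts a socket connection to the database and only start uvicorn once it returns True, or use Compose healthchecks with depends_on: condition: service_healthy.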
4. Celery and Concurrency
5. High Availability
If you need to build from source, you can adapt the official backend and frontend Dockerfiles found in the repo. If you have a specific Celery error, please share the error message for more targeted troubleshooting.
-
@dosu I want to build it from source. How can I achieve it for both frontend and backend? What will be the folder structure, and can you provide the Dockerfiles for both frontend and backend plus a complete docker compose with Celery? I am struggling to achieve this and to enable multi-user support. Which files need to be considered for building both, and how do I do the build? Also, do I have to change anything in code for the two to communicate?
-
To build Langflow from source with separate frontend and backend, multi-user support, and Celery workers, here’s a step-by-step guide based on the current repository structure and build system:
1. Folder Structure
Langflow uses a monorepo with the following relevant structure:
2. Building the Backend from Source
Dockerfile location:
A typical backend Dockerfile (simplified for clarity):
FROM python:3.10-slim
WORKDIR /app
RUN apt-get update && apt-get install -y git libpq-dev gcc
COPY pyproject.toml uv.lock README.md ./
COPY src/backend/base/pyproject.toml src/backend/base/uv.lock src/backend/base/README.md ./src/backend/base/
COPY src /app/src
COPY custom_components /app/custom_components
RUN pip install --upgrade pip && pip install uv
RUN uv sync --frozen --no-editable --extra postgresql
RUN pip install -e /app/src/backend/base
RUN uv pip install -e /app
EXPOSE 7860
ENV LANGFLOW_COMPONENTS_PATH=/app/custom_components
CMD ["uvicorn", "--host", "0.0.0.0", "--port", "7860", "--factory", "langflow.main:create_app"]
3. Building the Frontend from Source
Dockerfile location:
A typical frontend Dockerfile:
FROM node:20-alpine AS frontend_build
WORKDIR /app
COPY ./package.json ./package-lock.json ./tsconfig.json ./vite.config.ts ./index.html ./tailwind.config.js ./postcss.config.js ./prettier.config.js /app/
RUN npm install
COPY ./src /app/src
RUN npm run build
FROM nginx
COPY --from=frontend_build /app/build/ /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY start-nginx.sh /start-nginx.sh
RUN chmod +x /start-nginx.sh
ENV BACKEND_URL=$BACKEND_URL
CMD ["/start-nginx.sh"]
4. Docker Compose with Celery and Multi-User Support
Here’s a production-ready version:
version: "3.8"
services:
db:
image: postgres:16
environment:
POSTGRES_USER: langflow
POSTGRES_PASSWORD: langflow
POSTGRES_DB: langflow
volumes:
- langflow-postgres:/var/lib/postgresql/data
ports:
- "5432:5432"
redis:
image: redis:6.2.5
ports:
- "6379:6379"
rabbitmq:
image: rabbitmq:3-management
environment:
RABBITMQ_DEFAULT_USER: langflow
RABBITMQ_DEFAULT_PASS: langflow
ports:
- "5672:5672"
- "15672:15672"
volumes:
- rabbitmq_data:/var/lib/rabbitmq
- rabbitmq_log:/var/log/rabbitmq
backend:
build:
context: .
dockerfile: src/backend/Dockerfile
image: langflow-backend-custom:latest
depends_on:
- db
- rabbitmq
- redis
environment:
LANGFLOW_DATABASE_URL: postgresql://langflow:langflow@db:5432/langflow
LANGFLOW_SUPERUSER: admin
LANGFLOW_SUPERUSER_PASSWORD: adminpassword
LANGFLOW_AUTO_LOGIN: "false"
LANGFLOW_WORKERS: 1
LANGFLOW_EVENT_DELIVERY: direct
LANGFLOW_REDIS_HOST: redis
LANGFLOW_REDIS_PORT: 6379
LANGFLOW_CACHE_TYPE: redis
LANGFLOW_COMPONENTS_PATH: /app/custom_components
LANGFLOW_SECRET_KEY: "your_secret_key"
LANGFLOW_BACKEND_ONLY: "true"
ports:
- "7860:7860"
volumes:
- ./custom_components:/app/custom_components
celeryworker:
image: langflow-backend-custom:latest
depends_on:
- backend
- db
- rabbitmq
- redis
environment:
LANGFLOW_DATABASE_URL: postgresql://langflow:langflow@db:5432/langflow
LANGFLOW_SUPERUSER: admin
LANGFLOW_SUPERUSER_PASSWORD: adminpassword
LANGFLOW_AUTO_LOGIN: "false"
LANGFLOW_WORKERS: 1
LANGFLOW_EVENT_DELIVERY: direct
LANGFLOW_REDIS_HOST: redis
LANGFLOW_REDIS_PORT: 6379
LANGFLOW_CACHE_TYPE: redis
LANGFLOW_COMPONENTS_PATH: /app/custom_components
LANGFLOW_SECRET_KEY: "your_secret_key"
command: >
python -m celery -A langflow.worker.celery_app worker --loglevel=INFO --concurrency=1 -n lf-worker@%h -P eventlet
volumes:
- ./custom_components:/app/custom_components
deploy:
replicas: 4
frontend:
build:
context: .
dockerfile: src/frontend/Dockerfile
image: langflow-frontend-custom:latest
environment:
BACKEND_URL: http://backend:7860
ports:
- "8080:80"
depends_on:
- backend
volumes:
langflow-postgres:
rabbitmq_data:
rabbitmq_log:
5. Build Steps and Key Files
6. Multi-User & High Availability
If you need more details on any step or run into specific errors (especially with Celery), please share the error message for targeted troubleshooting.
-
@dosu Do we have to explicitly run make or something like that, or do we just do docker compose up? Also, what is the backend URL you added in the docker compose? If you can, please provide complete end-to-end detailed steps.
-
You do not need to run make or any other manual build step; docker compose up --build builds the images from the Dockerfiles and starts the whole stack in one go. The BACKEND_URL in the compose file (http://backend:7860) uses the Compose service name, which resolves inside the Docker network.
End-to-End Steps for Building and Deploying Langflow from Source (with Docker Compose)
version: "3.8"
services:
db:
image: postgres:16
environment:
POSTGRES_USER: langflow
POSTGRES_PASSWORD: langflow
POSTGRES_DB: langflow
volumes:
- langflow-postgres:/var/lib/postgresql/data
ports:
- "5432:5432"
redis:
image: redis:6.2.5
ports:
- "6379:6379"
rabbitmq:
image: rabbitmq:3-management
environment:
RABBITMQ_DEFAULT_USER: langflow
RABBITMQ_DEFAULT_PASS: langflow
ports:
- "5672:5672"
- "15672:15672"
volumes:
- rabbitmq_data:/var/lib/rabbitmq
- rabbitmq_log:/var/log/rabbitmq
backend:
build:
context: .
dockerfile: src/backend/Dockerfile
image: langflow-backend-custom:latest
depends_on:
- db
- rabbitmq
- redis
environment:
LANGFLOW_DATABASE_URL: postgresql://langflow:langflow@db:5432/langflow
LANGFLOW_SUPERUSER: admin
LANGFLOW_SUPERUSER_PASSWORD: adminpassword
LANGFLOW_AUTO_LOGIN: "false"
LANGFLOW_WORKERS: 1
LANGFLOW_EVENT_DELIVERY: direct
LANGFLOW_REDIS_HOST: redis
LANGFLOW_REDIS_PORT: 6379
LANGFLOW_CACHE_TYPE: redis
LANGFLOW_COMPONENTS_PATH: /app/custom_components
LANGFLOW_SECRET_KEY: "your_secret_key"
LANGFLOW_BACKEND_ONLY: "true"
ports:
- "7860:7860"
volumes:
- ./custom_components:/app/custom_components
celeryworker:
image: langflow-backend-custom:latest
depends_on:
- backend
- db
- rabbitmq
- redis
environment:
LANGFLOW_DATABASE_URL: postgresql://langflow:langflow@db:5432/langflow
LANGFLOW_SUPERUSER: admin
LANGFLOW_SUPERUSER_PASSWORD: adminpassword
LANGFLOW_AUTO_LOGIN: "false"
LANGFLOW_WORKERS: 1
LANGFLOW_EVENT_DELIVERY: direct
LANGFLOW_REDIS_HOST: redis
LANGFLOW_REDIS_PORT: 6379
LANGFLOW_CACHE_TYPE: redis
LANGFLOW_COMPONENTS_PATH: /app/custom_components
LANGFLOW_SECRET_KEY: "your_secret_key"
command: >
python -m celery -A langflow.worker.celery_app worker --loglevel=INFO --concurrency=1 -n lf-worker@%h -P eventlet
volumes:
- ./custom_components:/app/custom_components
deploy:
replicas: 4
frontend:
build:
context: .
dockerfile: src/frontend/Dockerfile
image: langflow-frontend-custom:latest
environment:
BACKEND_URL: http://backend:7860
ports:
- "8080:80"
depends_on:
- backend
volumes:
langflow-postgres:
rabbitmq_data:
rabbitmq_log:
If you want to develop locally (not in Docker), you would run the backend and frontend dev servers directly instead of using Compose. If you have any specific errors with Celery or multi-user support, please share the error message for more targeted help.
-
@dosu Don't we have to install the langflow package after building the base? Are these Dockerfiles correct? Please check and let me know if we need to install any other dependency, as I can see we are building the langflow base app, not the package. Please verify nothing is missing and give me the complete end-to-end Dockerfiles for both frontend and backend, the project structure, and whether we need to copy the main pyproject.toml or it will work as you have given it. And give me the final docker compose. Also, I am facing an error with eventlet; it says the module is not there, or something like that. Are we setting up Python and all packages globally? Please check everything once and give me the updated Dockerfiles, compose, and structure. Also, I have one question regarding LLM models.
-
@dosu please help here.
-
@dosu please help with the queries and answer.
-
Here’s a complete, production-grade guide for building Langflow from source with separate frontend and backend, multi-worker Celery, and all your requested clarifications:
1. Project Structure
Your repo should look like:
You must copy both the root pyproject.toml/uv.lock and the base package’s files under src/backend/base.
2. Backend Dockerfile (
-
@dosu You just copied my backend Dockerfile back to me. I don't want the builder concept and all. Can't you make it a simpler Dockerfile where we are building from the source? Why do we need to install the langflow package if we have the base, the way you showed earlier?
-
You can absolutely simplify your backend Dockerfile to avoid the builder/multi-stage concept and build everything from source in a single stage. Here’s how and why:
Why do you need to install the main langflow package in addition to the base? The base package provides the core library, while the main package provides the CLI entry point used to run the server.
-
@dosu You again added the frontend into the backend Dockerfile. We discussed building 2 different containers, one for frontend and one for backend, bound together via docker compose, to enable multi-user and concurrency via Celery and all. Now give me the final code for all Dockerfiles and the docker compose with all settings complete. Don't miss anything; it should be complete, and the Dockerfiles should be single-stage only.
-
Here is a complete, production-ready setup for building Langflow from source with separate, single-stage Dockerfiles for backend and frontend, and a Docker Compose file that enables multi-user support, Celery concurrency, and high availability. This setup ensures the backend Dockerfile does not include the frontend, and both containers are built and run independently.
1. Project Structure
2. Backend Dockerfile (
-
@dosu I need the content for start-nginx.sh or default.conf.template. Also let me know what other files need to be in place for the frontend build to succeed.
-
Here’s what you need for your frontend Docker setup:
start-nginx.sh
#!/bin/sh
# Replace the placeholder with the actual value
sed -i "s|__BACKEND_URL__|$BACKEND_URL|g" /etc/nginx/conf.d/default.conf
# Start nginx
exec nginx -g 'daemon off;'
default.conf.template
server {
gzip on;
gzip_comp_level 2;
gzip_min_length 1000;
gzip_types text/xml text/css;
gzip_http_version 1.1;
gzip_vary on;
gzip_disable "MSIE [4-6]\.";
listen 80;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html =404;
}
# Proxy API requests to backend
location /api/ {
proxy_pass __BACKEND_URL__;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
include /etc/nginx/extra-conf.d/*.conf;
}
If your Dockerfile expects a different filename (for example nginx.conf instead of default.conf.template), rename the file or adjust the COPY line accordingly.
Other files needed for a successful frontend build:
These files are referenced in the Dockerfile and are required for the build to succeed. The Nginx config and shell script are needed for the container to serve the built frontend and correctly proxy API requests to your backend. If you need the exact content of any of the config files above, let me know which one!
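The start-nginx.sh sed line above is just a textual placeholder substitution performed at container start. A tiny Python sketch of the same mechanism (hypothetical helper name, not part of Langflow):

```python
def render_nginx_conf(template: str, backend_url: str) -> str:
    # Same effect as: sed "s|__BACKEND_URL__|$BACKEND_URL|g" on the template.
    return template.replace("__BACKEND_URL__", backend_url)

conf = render_nginx_conf(
    "location /api/ { proxy_pass __BACKEND_URL__; }",
    "http://backend:7860",
)
print(conf)  # location /api/ { proxy_pass http://backend:7860; }
```

Because it is a plain string replacement, the BACKEND_URL value must not itself contain the placeholder delimiter characters used by the sed expression.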
-
@dosu Do we have to rename this file you mention, and where is this file used, i.e. where is the reference? I am not getting what you are trying to say. Why are we proxying /api requests to the backend? Does it need to be added exactly as shown in default.conf.template? Don't we need nginx.conf?
-
@dosu Also, the different-container setup is not working. Suppose we want to use one container and run both frontend and backend on the same port. Please share a complete single-stage Dockerfile for that kind of setup, plus the complete docker compose for achieving multi-user and concurrency with Celery. Let's have both frontend and backend on the same port with a single-stage Dockerfile. What will be the final project structure?
-
To run both the Langflow frontend and backend on the same port in a single container, you need a Dockerfile that builds both, serves the frontend static files, proxies API requests to the backend, and launches Celery for concurrency. Here’s how you can achieve this, along with the required Docker Compose and project structure.
1. Project Structure
Your repo should look like:
2. Single-Stage Dockerfile (Combined Frontend + Backend + Celery + Nginx)
FROM python:3.12-slim
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y \
git gcc libpq-dev npm nodejs nginx supervisor && \
apt-get clean && rm -rf /var/lib/apt/lists/*
# Copy backend and frontend source and config
COPY pyproject.toml uv.lock README.md ./
COPY src/backend/base/pyproject.toml src/backend/base/uv.lock src/backend/base/README.md ./src/backend/base/
COPY src /app/src
COPY custom_components /app/custom_components
# Install Python build tool and backend dependencies
RUN pip install --upgrade pip uv
RUN uv sync --frozen --no-editable --extra postgresql
# Build frontend
WORKDIR /app/src/frontend
RUN npm install && npm run build
# Copy frontend build to a static directory for serving
WORKDIR /app
RUN mkdir -p /app/frontend_build && cp -r /app/src/frontend/build/* /app/frontend_build/
# Install backend packages
RUN pip install -e /app/src/backend/base
RUN uv pip install -e /app
RUN pip install eventlet
# Copy Nginx config and startup script
COPY nginx/default.conf.template /etc/nginx/conf.d/default.conf.template
COPY start-all.sh /start-all.sh
RUN chmod +x /start-all.sh
# Supervisor config (to run multiple processes)
RUN echo "[supervisord]\nnodaemon=true\n" > /etc/supervisord.conf && \
echo "[program:backend]\ncommand=python -m langflow run --host 0.0.0.0 --port 7860\n" >> /etc/supervisord.conf && \
echo "[program:celery]\ncommand=python -m celery -A langflow.worker.celery_app worker --loglevel=INFO --concurrency=1 -n lf-worker@%%h -P eventlet\n" >> /etc/supervisord.conf && \
echo "[program:nginx]\ncommand=/start-all.sh\n" >> /etc/supervisord.conf
ENV LANGFLOW_HOST=0.0.0.0
ENV LANGFLOW_PORT=7860
ENV LANGFLOW_COMPONENTS_PATH=/app/custom_components
ENV BACKEND_URL=http://localhost:7860
ENV FRONTEND_PORT=80
EXPOSE 80
CMD ["supervisord", "-c", "/etc/supervisord.conf"]
3.
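One pitfall with the generated supervisord config above: supervisord parses its config with Python's ConfigParser, where % is the interpolation character, so a literal %h in the Celery node name must be escaped as %%h or supervisord fails to start. The escaping rule can be seen with ConfigParser directly:

```python
import configparser

cfg = configparser.ConfigParser()
# "%%" is the escaped form of a literal "%" under ConfigParser interpolation.
cfg.read_string(
    "[program:celery]\n"
    "command=celery -A langflow.worker.celery_app worker -n lf-worker@%%h\n"
)
print(cfg["program:celery"]["command"])
# -> celery -A langflow.worker.celery_app worker -n lf-worker@%h
```

Celery then expands %h itself to the container hostname when the worker starts.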
-
@dosu I need details on the API reference. Suppose I want to create a user via the API, not from the UI; how can I achieve that? Give me the complete details of the POST and GET calls for user registration.
-
To create and manage users via the Langflow API (not the UI), you can use the following endpoints. All endpoints are available under your backend service (for example, http://localhost:7860 if running locally).
1. Create a User (Registration)
Endpoint: POST /users/
Request Body (JSON):
{
"username": "your_username",
"password": "your_password",
"optins": {
"github_starred": false,
"dialog_dismissed": false,
"discord_clicked": false
}
}
The optins object is optional.
Example cURL:
curl -X POST http://localhost:7860/users/ \
-H "Content-Type: application/json" \
-d '{"username": "newuser", "password": "newpassword"}'
Response:
2. Get Current User Details
Endpoint: GET /users/whoami
Authentication: Bearer token required.
Example cURL:
curl -X GET http://localhost:7860/users/whoami \
  -H "Authorization: Bearer <your_access_token>"
Response:
3. Get All Users (Superuser Only)
Endpoint: GET /users/
Authentication: superuser Bearer token required.
Example cURL:
curl -X GET "http://localhost:7860/users/?skip=0&limit=10" \
  -H "Authorization: Bearer <your_access_token>"
4. Authentication (Login)
To get an access token for authenticated endpoints:
Endpoint: POST /login
Request Body (form data): username, password
Example cURL:
curl -X POST http://localhost:7860/login \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "username=newuser&password=newpassword"
Response: a JSON object containing access_token, refresh_token, and token_type.
5. Notes on Permissions
6. User Model Fields
When creating a user, you can provide username, password, and the optional optins object.
If you need details on updating, deleting, or resetting passwords via API, let me know!
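The request shapes above can be captured in a few stdlib-only helper functions (hypothetical names; they only build the payloads, and sending them with your HTTP client of choice is up to you):

```python
import json
from urllib.parse import urlencode

def registration_body(username, password, optins=None):
    # JSON body for POST /users/ ; the optins object is optional.
    body = {"username": username, "password": password}
    if optins is not None:
        body["optins"] = optins
    return json.dumps(body)

def login_body(username, password):
    # Form-encoded body for POST /login .
    return urlencode({"username": username, "password": password})

def bearer_header(access_token):
    # Header for authenticated endpoints such as GET /users/whoami .
    return {"Authorization": f"Bearer {access_token}"}
```

For example, bearer_header(token) produces exactly the -H "Authorization: Bearer ..." line used in the cURL examples above.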
-
@dosu How do I get the bearer access token?
-
@dosu can you reply here please?
-
Here’s how to achieve a single-container, single-stage Docker setup for Langflow that serves both frontend and backend on the same port without Nginx, supports Celery for concurrency, and enables multi-user functionality.
1. How to Get a Bearer Access Token
To authenticate with the Langflow API, send a POST request to the /login endpoint:
curl -X POST http://localhost:7860/login \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "username=your_username&password=your_password"
The response will look like:
{
"access_token": "your_jwt_token",
"refresh_token": "your_refresh_token",
"token_type": "bearer"
}Use the 2. Why Nginx Is Used and How to Remove ItNginx is typically used to serve static frontend files and proxy API requests to the backend. However, Langflow’s backend (FastAPI) can serve static files directly. To run both frontend and backend on the same port without Nginx, copy the frontend build output into the backend’s static directory ( 3. Single-Stage Dockerfile (No Nginx, Both Frontend & Backend)FROM python:3.12-slim
WORKDIR /app
RUN apt-get update && apt-get install -y git gcc libpq-dev npm && \
apt-get clean && rm -rf /var/lib/apt/lists/*
COPY pyproject.toml uv.lock README.md ./
COPY src/backend/base/pyproject.toml src/backend/base/uv.lock src/backend/base/README.md ./src/backend/base/
COPY src /app/src
COPY custom_components /app/custom_components
# Install Python dependencies
RUN pip install --upgrade pip uv
RUN uv sync --frozen --no-editable --extra postgresql
# Build frontend and copy to backend static dir
WORKDIR /app/src/frontend
RUN npm install && npm run build && \
mkdir -p /app/src/backend/base/langflow/frontend && \
cp -r build/* /app/src/backend/base/langflow/frontend/
# Install backend packages
WORKDIR /app
RUN pip install -e /app/src/backend/base
RUN uv pip install -e /app
RUN pip install eventlet
ENV LANGFLOW_HOST=0.0.0.0
ENV LANGFLOW_PORT=7860
ENV LANGFLOW_COMPONENTS_PATH=/app/custom_components
EXPOSE 7860
CMD ["python", "-m", "langflow", "run", "--host", "0.0.0.0", "--port", "7860"]
4. Docker Compose (Single Container, Multi-User, Celery)
version: "3.8"
services:
langflow:
build:
context: .
dockerfile: Dockerfile
image: langflow-all-in-one:latest
environment:
LANGFLOW_DATABASE_URL: postgresql://langflow:langflow@db:5432/langflow
LANGFLOW_SUPERUSER: admin
LANGFLOW_SUPERUSER_PASSWORD: adminpassword
LANGFLOW_AUTO_LOGIN: "false"
LANGFLOW_WORKERS: 1
LANGFLOW_EVENT_DELIVERY: direct
LANGFLOW_REDIS_HOST: redis
LANGFLOW_REDIS_PORT: 6379
LANGFLOW_CACHE_TYPE: redis
LANGFLOW_COMPONENTS_PATH: /app/custom_components
LANGFLOW_SECRET_KEY: "your_secret_key"
ports:
- "7860:7860"
depends_on:
- db
- redis
- rabbitmq
volumes:
- ./custom_components:/app/custom_components
celeryworker:
image: langflow-all-in-one:latest
depends_on:
- langflow
- db
- rabbitmq
- redis
environment:
LANGFLOW_DATABASE_URL: postgresql://langflow:langflow@db:5432/langflow
LANGFLOW_SUPERUSER: admin
LANGFLOW_SUPERUSER_PASSWORD: adminpassword
LANGFLOW_AUTO_LOGIN: "false"
LANGFLOW_WORKERS: 1
LANGFLOW_EVENT_DELIVERY: direct
LANGFLOW_REDIS_HOST: redis
LANGFLOW_REDIS_PORT: 6379
LANGFLOW_CACHE_TYPE: redis
LANGFLOW_COMPONENTS_PATH: /app/custom_components
LANGFLOW_SECRET_KEY: "your_secret_key"
command: >
python -m celery -A langflow.worker.celery_app worker --loglevel=INFO --concurrency=1 -n lf-worker@%h -P eventlet
volumes:
- ./custom_components:/app/custom_components
deploy:
replicas: 4
db:
image: postgres:16
environment:
POSTGRES_USER: langflow
POSTGRES_PASSWORD: langflow
POSTGRES_DB: langflow
volumes:
- langflow-postgres:/var/lib/postgresql/data
ports:
- "5432:5432"
redis:
image: redis:6.2.5
ports:
- "6379:6379"
rabbitmq:
image: rabbitmq:3-management
environment:
RABBITMQ_DEFAULT_USER: langflow
RABBITMQ_DEFAULT_PASS: langflow
ports:
- "5672:5672"
- "15672:15672"
volumes:
- rabbitmq_data:/var/lib/rabbitmq
- rabbitmq_log:/var/log/rabbitmq
volumes:
langflow-postgres:
rabbitmq_data:
rabbitmq_log:
5. Project Structure
Summary: If you need more details or run into any issues, let me know!
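With everything on one port, the serving rule the all-in-one container relies on is simple: API paths are handled by the FastAPI app, while every other path falls back to the built frontend's index.html. A hypothetical sketch of that rule (the prefixes are illustrative, not Langflow's exact routing table):

```python
def resolve(path: str) -> str:
    # Illustrative single-port routing: API traffic vs. SPA static files.
    api_prefixes = ("/api/", "/health", "/docs")
    if path.startswith(api_prefixes):
        return "fastapi"
    # Unknown paths fall back to index.html so client-side routing works.
    return "static:index.html"

print(resolve("/api/v1/login"))  # fastapi
print(resolve("/flows/123"))     # static:index.html
```

This index.html fallback is the same job the try_files directive performs in the Nginx variant.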
-
We are able to run with Celery, RabbitMQ and Redis with these settings (celeryworker service as above). My question is: how many users can run in parallel without flows getting stuck or other issues? Do any settings need to be added to the docker compose or the backend Dockerfile for this? Also sharing the backend Dockerfile:
# syntax=docker/dockerfile:1
# --------- STAGE 1: Builder ---------
FROM python:3.12.3-slim AS builder
WORKDIR /app
# Install build dependencies
RUN apt-get update && apt-get install -y build-essential git gcc libpq-dev npm
# Copy base requirements
COPY pyproject.toml uv.lock README.md ./
# Install Python dependencies with uv
RUN pip install --upgrade pip && pip install uv
# Copy the whole source code
COPY src /app/src
# Build the frontend and move to backend
WORKDIR /app/src/frontend
# Install langflow-base in editable mode
RUN pip install -e /app/src/backend/base
# Important: Install the main langflow package (adds CLI like
-
@dosu can you please reply and help here?
-
You can run multiple users in parallel with your current setup, but the true concurrency limit depends on several factors: the number of Celery workers, the backend worker count, database connection pool size, and the resources allocated to RabbitMQ, PostgreSQL, and Redis. Key points for your setup:
Summary: If you need specific Docker Compose examples for resource limits or want to tune RabbitMQ/Postgres/Redis further, let me know your expected user load and I can suggest concrete settings.
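As a rough back-of-envelope (an assumed formula; real throughput also depends on flow runtime, DB pool size, and broker/host resources): the number of flow tasks that can execute simultaneously is bounded by the worker replica count times the per-worker concurrency.

```python
def max_parallel_tasks(worker_replicas: int, concurrency_per_worker: int) -> int:
    # Upper bound on simultaneously executing Celery tasks across all workers.
    return worker_replicas * concurrency_per_worker

# The compose files above use deploy.replicas: 4 and --concurrency=1,
# so at most 4 flow tasks run at once; further requests queue in RabbitMQ.
print(max_parallel_tasks(4, 1))  # 4
```

Raising either --concurrency or replicas increases the bound, but each additional slot also consumes a database connection and worker memory, so scale those together.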
-
@dosu I am running with Celery, RabbitMQ, Redis and Postgres, but it is laggy and slow. What settings should I apply for Celery, RabbitMQ, Redis and Postgres to achieve fast, concurrent operation, as multiple users will be logging in? Can you give me the updated, complete docker compose with all the settings? Ensure it works well for multi-user without lag.
-
To improve performance and concurrency for multi-user Langflow deployments with Celery, RabbitMQ, Redis, and PostgreSQL, you should tune both your Docker Compose settings and environment variables. Here’s an updated, production-ready Docker Compose example and key settings to reduce lag and support more concurrent users:
Key Optimizations
Celery & Langflow Workers:
PostgreSQL:
RabbitMQ:
Redis:
Other:
Example Docker Compose
version: "3.8"
services:
db:
image: postgres:16
environment:
POSTGRES_USER: langflow
POSTGRES_PASSWORD: langflow
POSTGRES_DB: langflow
PGDATA: /var/lib/postgresql/data/pgdata
volumes:
- langflow-postgres:/var/lib/postgresql/data/pgdata
ports:
- "5432:5432"
healthcheck:
test: ["CMD-SHELL", "pg_isready -U langflow"]
interval: 10s
timeout: 5s
retries: 5
redis:
image: redis:6.2.5
command: ["redis-server", "--maxmemory", "512mb", "--maxmemory-policy", "allkeys-lru"]
ports:
- "6379:6379"
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 5s
retries: 5
rabbitmq:
image: rabbitmq:3-management
environment:
RABBITMQ_DEFAULT_USER: langflow
RABBITMQ_DEFAULT_PASS: langflow
RABBITMQ_VM_MEMORY_HIGH_WATERMARK: 0.7
ports:
- "5672:5672"
- "15672:15672"
volumes:
- rabbitmq_data:/var/lib/rabbitmq
- rabbitmq_log:/var/log/rabbitmq
healthcheck:
test: ["CMD", "rabbitmq-diagnostics", "ping"]
interval: 10s
timeout: 5s
retries: 5
langflow:
build:
context: .
dockerfile: Dockerfile
image: langflow-all-in-one:latest
environment:
LANGFLOW_DATABASE_URL: postgresql://langflow:langflow@db:5432/langflow
LANGFLOW_SUPERUSER: admin
LANGFLOW_SUPERUSER_PASSWORD: adminpassword
LANGFLOW_AUTO_LOGIN: "false"
LANGFLOW_WORKERS: 1
LANGFLOW_EVENT_DELIVERY: direct
LANGFLOW_REDIS_HOST: redis
LANGFLOW_REDIS_PORT: 6379
LANGFLOW_CACHE_TYPE: redis
LANGFLOW_REDIS_CACHE_EXPIRE: 3600
LANGFLOW_DB_CONNECTION_SETTINGS: '{"pool_size": 20, "max_overflow": 40, "pool_timeout": 30, "pool_pre_ping": true, "pool_recycle": 1800}'
LANGFLOW_COMPONENTS_PATH: /app/custom_components
LANGFLOW_SECRET_KEY: "your_secret_key"
LANGFLOW_DISABLE_TRACK_APIKEY_USAGE: "true"
ports:
- "7860:7860"
depends_on:
- db
- redis
- rabbitmq
volumes:
- ./custom_components:/app/custom_components
celeryworker:
image: langflow-all-in-one:latest
depends_on:
- langflow
- db
- rabbitmq
- redis
environment:
LANGFLOW_DATABASE_URL: postgresql://langflow:langflow@db:5432/langflow
LANGFLOW_SUPERUSER: admin
LANGFLOW_SUPERUSER_PASSWORD: adminpassword
LANGFLOW_AUTO_LOGIN: "false"
LANGFLOW_WORKERS: 1
LANGFLOW_EVENT_DELIVERY: direct
LANGFLOW_REDIS_HOST: redis
LANGFLOW_REDIS_PORT: 6379
LANGFLOW_CACHE_TYPE: redis
LANGFLOW_REDIS_CACHE_EXPIRE: 3600
LANGFLOW_DB_CONNECTION_SETTINGS: '{"pool_size": 20, "max_overflow": 40, "pool_timeout": 30, "pool_pre_ping": true, "pool_recycle": 1800}'
LANGFLOW_COMPONENTS_PATH: /app/custom_components
LANGFLOW_SECRET_KEY: "your_secret_key"
LANGFLOW_DISABLE_TRACK_APIKEY_USAGE: "true"
C_FORCE_ROOT: "true"
command: >
python -m celery -A langflow.worker.celery_app worker --loglevel=INFO --concurrency=1 -n lf-worker@%h -P eventlet
volumes:
- ./custom_components:/app/custom_components
deploy:
replicas: 4
volumes:
langflow-postgres:
rabbitmq_data:
rabbitmq_log:
Explanation of Key Settings
Monitor your containers’ CPU and memory usage and scale resources as needed. For even higher concurrency, consider scaling your database and broker containers vertically (more resources) or horizontally (replicas, if supported). If you have a specific expected user load, I can suggest more concrete resource values.
-
@yashugupta786 Were you able to complete the configuration using Dosu's instructions?
-
Bug Description
I have created a docker compose and a backend Dockerfile, with multiple Celery workers plus RabbitMQ, Postgres, and Redis.
Right now we are copying and building the source and running everything on one port, 7860. I am giving you my docker compose and backend Dockerfile, but this fails when running with Celery and I am not sure where I am going wrong. Can you help with a production-grade docker compose, with separate Dockerfiles for frontend and backend, and with how to achieve concurrency in a multi-user kind of environment without copying the frontend into the backend source? Can you help with a correct, working docker compose and backend Dockerfile that build from source, and which settings we need to enable for multi-user support, custom components, and multiple workers?
This is my docker compose:

```yaml
version: "3.8"

services:
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: langflow
      POSTGRES_PASSWORD: langflow
      POSTGRES_DB: langflow
    volumes:
      - langflow-postgres:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  redis:
    image: redis:6.2.5
    ports:
      - "6379:6379"

  rabbitmq:
    image: rabbitmq:3-management
    environment:
      RABBITMQ_DEFAULT_USER: langflow
      RABBITMQ_DEFAULT_PASS: langflow
    ports:
      - "5672:5672"
      - "15672:15672"
    volumes:
      - rabbitmq_data:/var/lib/rabbitmq
      - rabbitmq_log:/var/log/rabbitmq

  backend:
    build:
      context: .
      dockerfile: Dockerfile.backend
    image: langflow-backend-custom:latest
    depends_on:
      - db
      - rabbitmq
      - redis
    environment:
      LANGFLOW_DATABASE_URL: postgresql://langflow:langflow@db:5432/langflow
      LANGFLOW_SUPERUSER: admin
      LANGFLOW_SUPERUSER_PASSWORD: adminpassword
      LANGFLOW_AUTO_LOGIN: "false"
      LANGFLOW_WORKERS: 1
      LANGFLOW_EVENT_DELIVERY: direct
      LANGFLOW_REDIS_HOST: redis
      LANGFLOW_REDIS_PORT: 6379
      LANGFLOW_CACHE_TYPE: redis
      LANGFLOW_COMPONENTS_PATH: /app/custom_components
      LANGFLOW_SECRET_KEY: "1234565"
    ports:
      - "7860:7860"
    volumes:
      - ./custom_components:/app/custom_components

  celeryworker:
    image: langflow-backend-custom:latest
    depends_on:
      - backend
      - db
      - rabbitmq
      - redis
    environment:
      LANGFLOW_DATABASE_URL: postgresql://langflow:langflow@db:5432/langflow
      LANGFLOW_SUPERUSER: admin
      LANGFLOW_SUPERUSER_PASSWORD: adminpassword
      LANGFLOW_AUTO_LOGIN: "false"
      LANGFLOW_WORKERS: 1
      LANGFLOW_EVENT_DELIVERY: direct
      LANGFLOW_REDIS_HOST: redis
      LANGFLOW_REDIS_PORT: 6379
      LANGFLOW_CACHE_TYPE: redis
      LANGFLOW_COMPONENTS_PATH: /app/custom_components
      LANGFLOW_SECRET_KEY: "3543636"
    command: >
      python -m celery -A langflow.worker.celery_app worker --loglevel=INFO --concurrency=1 -n lf-worker@%h -P eventlet
    volumes:
      - ./custom_components:/app/custom_components
    deploy:
      replicas: 4  # Adjust for desired concurrency

volumes:
  langflow-postgres:
  rabbitmq_data:
  rabbitmq_log:
```
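For the separate-frontend part of the question, one possible shape is sketched below. This is an assumption, not a verified config: it presumes a `langflowai/langflow-frontend` image exists and accepts a `BACKEND_URL` environment variable for proxying API calls, and that the backend runs API-only via `LANGFLOW_BACKEND_ONLY`:

```yaml
services:
  frontend:
    image: langflowai/langflow-frontend:latest   # assumed image name
    depends_on:
      - backend
    environment:
      BACKEND_URL: http://backend:7860           # assumed variable; points at the backend service
    ports:
      - "8080:80"                                # UI served on its own port

  backend:
    # same as the backend service above, plus:
    environment:
      LANGFLOW_BACKEND_ONLY: "true"              # serve the API only, no bundled frontend
```

With this split, the backend Dockerfile would no longer need the `npm run build` stage at all.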
Docker backend:

```dockerfile
# syntax=docker/dockerfile:1

# --------- STAGE 1: Builder ---------
FROM python:3.12.3-slim AS builder
WORKDIR /app

# Install build dependencies
RUN apt-get update && apt-get install -y build-essential git gcc libpq-dev npm

# Copy base requirements
COPY pyproject.toml uv.lock README.md ./
COPY src/backend/base/pyproject.toml src/backend/base/uv.lock src/backend/base/README.md ./src/backend/base/

# Install Python dependencies with uv
RUN pip install --upgrade pip && pip install uv
RUN uv sync --frozen --no-editable --extra deploy --extra postgresql

# Copy the whole source code
COPY src /app/src
COPY custom_components /app/custom_components

# Build the frontend and move it into the backend package
WORKDIR /app/src/frontend
RUN npm ci
ENV NODE_OPTIONS=--max-old-space-size=4096
RUN npm run build && mkdir -p /app/src/backend/base/langflow/frontend && cp -r build/* /app/src/backend/base/langflow/frontend/

# Install langflow-base in editable mode
RUN pip install -e /app/src/backend/base

# ✅ Important: install the main langflow package (adds the CLI, e.g. `langflow run`)
WORKDIR /app
RUN uv pip install -e /app

# ✅ Install eventlet (for Celery concurrency)
RUN pip install eventlet

# --------- STAGE 2: Runtime Image ---------
FROM python:3.12.3-slim
WORKDIR /app

# Install runtime dependencies (Node.js is not strictly needed at runtime
# but is left in case you want to serve the static frontend yourself)
RUN apt-get update && apt-get install -y \
    curl \
    git \
    libpq5 \
    gnupg && \
    curl -fsSL https://deb.nodesource.com/setup_18.x | bash - && \
    apt-get install -y nodejs && \
    apt-get clean && rm -rf /var/lib/apt/lists/*

# Copy built app from builder stage
COPY --from=builder /app /app
RUN pip install eventlet

# Set environment variables
ENV PATH="/app/.venv/bin:$PATH"
ENV LANGFLOW_HOST=0.0.0.0
ENV LANGFLOW_PORT=7860
ENV LANGFLOW_COMPONENTS_PATH=/app/custom_components

EXPOSE 7860

# Run Langflow app
CMD ["python", "-m", "langflow", "run", "--host", "0.0.0.0", "--port", "7860"]
```
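On the Celery error itself: one plausible culprit (an assumption about the cause, not a confirmed fix) is the `-P eventlet` pool, whose monkey-patching has known compatibility problems on recent Python versions such as 3.12. A quick way to isolate it is to drop the flag and let Celery use its default prefork pool:

```yaml
  celeryworker:
    # No "-P eventlet": fall back to the default prefork pool to check
    # whether the eventlet pool is what is failing.
    command: >
      python -m celery -A langflow.worker.celery_app worker
      --loglevel=INFO --concurrency=1 -n lf-worker@%h
```

If the worker starts cleanly with prefork, the issue is the eventlet pool rather than the Langflow Celery app itself.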
The above is my backend Dockerfile, and it is not working: it gives an error with Celery. How can we have a separate frontend image, built and served on its own port, the backend on another port, and communication between the two, with multi-user support, multiple workers, and high availability?
@dosu
Reproduction
(Same Docker Compose file and backend Dockerfile as in the Bug Description above; starting the stack reproduces the Celery error.)
Expected behavior
A multi-user setup with separate frontend and backend, high availability, and multiple workers without overlap. Please provide complete Docker Compose and Docker files for the backend and frontend that can be run in production.
Who can help?
No response
Operating System
linux
Langflow Version
latest
Python Version
3.12
Screenshot
No response
Flow File
No response