⚠️ Note: This project is a maintained and extended fork of coleam00/local-ai-packaged, which no longer appears to be actively maintained.
This version includes up‑to‑date dependencies, bug fixes, and full automation scripts for setup and management.
Local AI Packaged provides a full self‑hosted AI environment using Docker Compose.
It bundles everything you need for local LLM workflows, including:
- 🧠 Ollama for local LLM inference
- 💬 Open WebUI for chatting with your models or n8n agents
- ⚙️ n8n for building automation workflows
- 🧱 Supabase as database, vector store, and auth system
- 🧩 Flowise, Langfuse, Qdrant, Neo4j, SearXNG, and optional Caddy
This version is designed for technical self‑hosters running the stack on a home server, NAS (e.g. Synology), or any Linux host.
Make sure you have:
- 🐳 Docker and Docker Compose installed
- 🐍 Python 3.8+
- 💾 Git
- 💡 At least 16 GB RAM recommended
- NVIDIA: Install NVIDIA Container Toolkit
- AMD: ROCm runtime configured
- CPU only: Works fine, just slower
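You can quickly confirm the command-line prerequisites are on your `PATH` with a short Python snippet (a convenience sketch; it only checks presence, not versions):

```python
# Minimal prerequisite check: verifies the required CLIs are on PATH.
import shutil

def check_prereqs(cmds=("docker", "git", "python3")):
    # shutil.which returns None when a command is not found on PATH
    return {cmd: shutil.which(cmd) is not None for cmd in cmds}

for cmd, found in check_prereqs().items():
    print(f"{'OK' if found else 'MISSING'}: {cmd}")
```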
```bash
git clone https://github.com/gamersalpha/local-ai-packaged.git
cd local-ai-packaged
```

The `generate_env.py` script creates a full environment configuration automatically:
- Generates secure random secrets
- Detects GPU type (NVIDIA / AMD / CPU)
- Sets correct local paths for volumes
Run:
```bash
python3 generate_env.py --yes --regen-sensitive --docker
```

Once complete, it will start the stack and show the service URLs:
```
🌐 Available services:
🧠 Ollama     : http://192.168.1.42:11434
⚙️ n8n        : http://192.168.1.42:5678
💬 Open WebUI : http://192.168.1.42:8080
🧱 Supabase   : http://192.168.1.42:54323
```
Use the provided Python launcher for easy orchestration:
```bash
python3 start_services.py [options]
```

| Option | Description |
|---|---|
| `--profile [cpu \| gpu-nvidia \| gpu-amd]` | Select the hardware profile |
| `--environment [private \| public]` | Select the deployment environment |
| `--no-supabase` | Skip Supabase include |
| `--no-caddy` | Skip Caddy reverse proxy |
| `--update` | Pull latest Docker images before start |
| `--dry-run` | Preview configuration only |
```bash
# CPU-only deployment
python3 start_services.py --profile cpu

# NVIDIA GPU
python3 start_services.py --profile gpu-nvidia

# Skip Supabase and Caddy
python3 start_services.py --no-supabase --no-caddy

# Pull new images before starting
python3 start_services.py --update
```

💡 What it does:
- Validates or creates your `.env`
- Clones Supabase's official Docker stack if missing
- Comments or uncomments the Caddy and Supabase includes dynamically
- Generates a new secret key for SearXNG
- Stops existing containers before redeploying
- Starts Supabase first, then the Local AI stack
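The start-up ordering above can be sketched roughly as follows. This is a hypothetical illustration of the sequence, not the script's actual code; `build_commands`, its parameters, and the Supabase compose path are assumptions:

```python
# Hypothetical sketch of the docker compose command sequence
# start_services.py runs; names and paths are illustrative assumptions.
def build_commands(profile="cpu", update=False, supabase=True):
    base = ["docker", "compose", "-p", "localai",
            "-f", "docker-compose.yml", "--profile", profile]
    cmds = [base + ["down"]]                 # stop existing containers first
    if update:
        cmds.append(base + ["pull"])         # optionally pull latest images
    if supabase:
        # Supabase's own compose stack comes up before the Local AI stack
        cmds.append(["docker", "compose", "-p", "localai",
                     "-f", "supabase/docker/docker-compose.yml", "up", "-d"])
    cmds.append(base + ["up", "-d"])         # then start the Local AI services
    return cmds

for cmd in build_commands(profile="gpu-nvidia", update=True):
    print(" ".join(cmd))
```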
Use the helper script `update_services.sh`:

```bash
./update_services.sh
```

This will:

- Stop all containers
- Pull the latest images
- Restart the stack via `start_services.py`
For GPU users, you can adjust:
```bash
docker compose -p localai -f docker-compose.yml --profile gpu-nvidia down
docker compose -p localai -f docker-compose.yml --profile gpu-nvidia pull
python3 start_services.py --profile gpu-nvidia --no-caddy
```

| Service | Description | Default URL |
|---|---|---|
| n8n | Workflow automation | http://192.168.1.42:5678 |
| Open WebUI | Chat interface for LLMs | http://192.168.1.42:8080 |
| Ollama | Local LLM API | http://192.168.1.42:11434 |
| Flowise | Low‑code AI builder | http://192.168.1.42:3001 |
| Supabase | DB, Auth & Vector store | http://192.168.1.42:54323 |
| Langfuse | LLM tracing dashboard | http://192.168.1.42:3000 |
| SearXNG | Web search engine for RAG | http://192.168.1.42:8080 |
| Neo4j | Graph database | http://192.168.1.42:7474 |
(Adjust IPs for your own LAN setup.)
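If your host IP differs, you can regenerate the table's URLs for your own LAN with a small helper (a convenience sketch; the port map simply mirrors the table above):

```python
# Build service URLs for your own host IP (ports taken from the table above).
SERVICE_PORTS = {
    "n8n": 5678,
    "Open WebUI": 8080,
    "Ollama": 11434,
    "Flowise": 3001,
    "Supabase": 54323,
    "Langfuse": 3000,
    "SearXNG": 8080,
    "Neo4j": 7474,
}

def service_urls(host):
    return {name: f"http://{host}:{port}" for name, port in SERVICE_PORTS.items()}

for name, url in service_urls("192.168.1.42").items():
    print(f"{name:10s} {url}")
```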
- Check that the `supabase/` folder was created automatically.
- Delete it and rerun:

  ```bash
  python3 start_services.py --no-caddy
  ```

- Ensure `.env` contains:

  ```
  POOLER_DB_POOL_SIZE=5
  ```
- Ensure the NVIDIA Container Toolkit or ROCm is installed correctly.
- Fall back to CPU mode if needed:

  ```bash
  python3 start_services.py --profile cpu
  ```
Edit `docker-compose.yml` to change exposed ports, then restart.
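For example, to move n8n from host port 5678 to 15678, the mapping might look like this (a sketch; service and key names may differ in your compose file, and only the host side of the mapping changes):

```yaml
services:
  n8n:
    ports:
      - "15678:5678"   # host:container
```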
```
.
├── docker-compose.yml
├── start_services.py
├── update_services.sh
├── generate_env.py
├── supabase/          # auto-cloned (ignored by Git)
├── n8n/
│   └── backup/
├── searxng/
├── shared/
└── neo4j/
```
📝 Note: The `supabase/` folder is automatically cloned by `start_services.py` from github.com/supabase/supabase.
It should not be committed to Git and is already listed in `.gitignore`.
Licensed under the Apache 2.0 License.
See LICENSE for details.
Built and maintained with ❤️ for the self‑hosting community.