Complete PostgreSQL-as-a-Service platform with REST API for database provisioning, SSL/TLS encryption, and per-database IP whitelisting. Now with Redis AI Cache for multi-site deployments!
- ✅ PostgreSQL 16 with SSL/TLS (Let's Encrypt)
- ✅ Multi-tenant database isolation
- ✅ REST API for database management
- ✅ Per-database IP whitelisting (optional)
- ✅ Automatic SSL certificate renewal
- ✅ Database statistics and monitoring
- ✅ Audit logging
- ✅ Daily automated backups
- ✅ PgBouncer connection pooling (transaction mode)
- ✅ Shadow databases for Prisma migrations
- ✅ Redis 7.x optimized for AI workloads
- ✅ Multi-site support with key prefixes
- ✅ Semantic caching for LLM responses (save 31% on costs)
- ✅ Vector search support (6 parallel workers)
- ✅ Embeddings cache for OpenAI/Anthropic
- ✅ 4GB memory limit, 10K connections
- ✅ Enhanced monitoring and analytics
- PostgreSQL Installation
- Redis Installation
- PgBouncer Connection Pooling
- Connection Strings (Neon-style)
- API Setup
- API Usage
- IP Whitelisting
- Management
- Security
- Ubuntu 24.04 server
- Root or sudo access
- Domain name pointing to your server (e.g., `db.yourdomain.com`)
- Ports 80, 5432, and 22 open in your firewall
# SSH into your server
ssh root@your-server-ip
# Download the installation script
curl -o postgres.sh https://gist.githubusercontent.com/drhema/bece21fefd4c988c44a5443d308ecd71/raw/postgres.sh
OR
curl -o postgres.sh https://raw.githubusercontent.com/drhema/postgres-as-service/refs/heads/main/postgres.sh
# Make it executable
chmod +x postgres.sh

# Run it
sudo ./postgres.sh

The script will prompt you for:
- Domain name: `db.yourdomain.com` (must point to your server's IP)
- Email: for Let's Encrypt SSL certificate notifications
- PostgreSQL password: Strong password (min 12 characters)
The script will automatically:
- Update system packages
- Install PostgreSQL 16
- Install dependencies (certbot, curl, etc.)
- Obtain SSL certificate from Let's Encrypt
- Configure PostgreSQL with SSL
- Create control database with schema
- Set up auto-renewal for SSL certificates
- Create management utilities
- Configure daily backups
# Check PostgreSQL status
sudo systemctl status postgresql
# Test connection
sudo -u postgres psql -c "SELECT version();"
# View management commands
pg-status

Save the installation summary, which includes:
- Server IP and domain
- Database credentials
- Connection strings
- SSL certificate locations
- Management commands
📖 See QUICK-START.md for the fastest way to get started!
One-command installation on your server:
curl -fsSL https://raw.githubusercontent.com/drhema/postgres-as-service/refs/heads/main/redis.sh | sudo bash

Or download and review first:
# Download the script
curl -O https://raw.githubusercontent.com/drhema/postgres-as-service/refs/heads/main/redis.sh
# Review it
cat redis.sh
# Make executable
chmod +x redis.sh
# Run it
sudo ./redis.sh

- Redis 7.x with AI optimizations
- 4GB memory limit (capped at 40% of RAM)
- 16 databases for site isolation
- 6 search workers for vector/AI queries
- Multi-site support via key prefixes
- Enhanced monitoring script
1. Save your connection URL:

   redis://:YOUR_PASSWORD@YOUR_SERVER_IP:6379

2. Configure firewall (CRITICAL!):

   sudo ufw allow from YOUR_APP_SERVER_IP to any port 6379
   sudo ufw deny 6379
   sudo ufw enable

3. Test connection:

   redis-cli -h YOUR_SERVER_IP -p 6379 -a YOUR_PASSWORD ping  # Should return: PONG
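The key-prefix convention used for multi-site isolation later in this guide (`site:<siteId>:...`) is easy to keep consistent with a tiny helper. This is an illustrative sketch; `siteKey` is a hypothetical name, not part of the installed tooling.

```javascript
// Hypothetical helper for the multi-site key convention used in this
// guide: every key is namespaced as "site:<siteId>:<...parts>".
function siteKey(siteId, ...parts) {
  if (!siteId) throw new Error('siteId is required');
  return ['site', siteId, ...parts].join(':');
}

// Key shapes matching the examples later in this README:
console.log(siteKey('shop1', 'products', '42')); // site:shop1:products:42
console.log(siteKey('shop1', 'chat', 'abc123')); // site:shop1:chat:abc123
```

Centralizing key construction this way avoids subtle typos that would silently split one site's cache into two namespaces.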
Comprehensive documentation included:
| Guide | Description |
|---|---|
| REDIS-INSTALLATION.md | Detailed installation & troubleshooting |
| REDIS-SETUP-SUMMARY.md | Quick reference guide |
| MULTI-SITE-REDIS-GUIDE.md | Multi-site implementation with code examples |
| AI-CACHE-IMPLEMENTATION-GUIDE.md | AI caching, RAG, vector search, semantic cache |
1. Semantic Caching (Save 31% LLM costs):
const cache = new SemanticCache({
redis_url: 'redis://:PASSWORD@HOST:6379',
ttl: 3600
});
// Check cache before calling OpenAI
const cached = await cache.check(prompt);

2. Multi-Site Product Cache:
// All sites share one Redis URL
await redis.setEx(
`site:${siteId}:products:${id}`,
3600,
JSON.stringify(product)
);

3. Vector Search for RAG:
# Retrieve relevant context for LLM
results = index.query(
VectorQuery(vector=query_embedding, num_results=3)
)

4. Chat Session History:
await redis.rPush(
`site:${siteId}:chat:${sessionId}`,
JSON.stringify(message)
);

With an 80% cache hit rate:
- 10K LLM requests/day @ $0.03 each
- Without cache: $300/day
- With cache: $60/day
- Savings: $87,600/year
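The savings figures above can be reproduced with a few lines of arithmetic. A minimal sketch (variable names are illustrative; integer cents and whole percentages keep the math exact):

```javascript
// Reproduce the cache-savings arithmetic: 10K requests/day at $0.03,
// with 80% of prompts answered from cache instead of the LLM.
const REQUESTS_PER_DAY = 10000;
const COST_CENTS_PER_REQUEST = 3; // $0.03 per LLM call, in cents
const CACHE_HIT_PCT = 80;         // 80% cache hit rate

const withoutCacheUsd = (REQUESTS_PER_DAY * COST_CENTS_PER_REQUEST) / 100;
const missedRequests = (REQUESTS_PER_DAY * (100 - CACHE_HIT_PCT)) / 100;
const withCacheUsd = (missedRequests * COST_CENTS_PER_REQUEST) / 100;
const annualSavingsUsd = (withoutCacheUsd - withCacheUsd) * 365;

console.log({ withoutCacheUsd, withCacheUsd, annualSavingsUsd });
// { withoutCacheUsd: 300, withCacheUsd: 60, annualSavingsUsd: 87600 }
```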
See AI-CACHE-IMPLEMENTATION-GUIDE.md for complete examples.
PgBouncer is a lightweight connection pooler for PostgreSQL that dramatically improves performance for applications with many concurrent connections. It sits between your application and PostgreSQL, multiplexing connections.
Benefits:
- ⚡ Handle 1000s of client connections with only 25-50 actual PostgreSQL connections
- 🚀 Transaction pooling mode - One PostgreSQL connection per transaction (not per session)
- 💰 Reduce server load - Lower memory usage and CPU overhead
- 🔄 Automatic connection reuse - No connection overhead for each query
- 🌐 Perfect for web apps and serverless functions (Lambda, Vercel, etc.)
PgBouncer is automatically installed when you run the installation script:
sudo ./postgres.sh
# OR
sudo ./postgres-dns.sh

The script automatically:
- Installs PgBouncer package
- Configures transaction pooling mode
- Sets up authentication
- Creates systemd service
- Opens port 6432 in firewall
PgBouncer is disabled by default. To enable it in your API:
1. Update your .env file:
# Enable PgBouncer connection pooling
DB_PGBOUNCER_ENABLED=true
DB_PGBOUNCER_PORT=6432

2. Restart your API:
npm run dev
# OR
pm2 restart postgres-api

With PgBouncer enabled (port 6432):
postgresql://username:password@db.yourdomain.com:6432/database?sslmode=require
Direct PostgreSQL (port 5432):
postgresql://username:password@db.yourdomain.com:5432/database?sslmode=require
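Since the two URLs differ only in port and the `pgbouncer=true` flag, an application can derive the pooled form from the direct one. A minimal sketch; `toPooledUrl` is a hypothetical helper doing plain string surgery, not an official API of this project:

```javascript
// Hypothetical helper: rewrite a direct (port 5432) connection string
// into its PgBouncer form (port 6432 + pgbouncer=true), matching the
// two URL shapes shown above.
function toPooledUrl(directUrl, pooledPort = 6432) {
  const rewritten = directUrl.replace(/:5432\//, `:${pooledPort}/`);
  // Append the pgbouncer flag, respecting whether a query string exists.
  return rewritten.includes('?')
    ? `${rewritten}&pgbouncer=true`
    : `${rewritten}?pgbouncer=true`;
}

const direct =
  'postgresql://username:password@db.yourdomain.com:5432/database?sslmode=require';
console.log(toPooledUrl(direct));
// postgresql://username:password@db.yourdomain.com:6432/database?sslmode=require&pgbouncer=true
```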
┌─────────────────┐
│ Your Application│ (100 concurrent connections)
└────────┬────────┘
│
▼
┌─────────────────┐
│ PgBouncer │ Port 6432 (Transaction pooling)
│ (Multiplexer) │ Max 1000 clients → 25 PostgreSQL connections
└────────┬────────┘
│
▼
┌─────────────────┐
│ PostgreSQL │ Port 5432 (Direct connection)
│ (Database) │ Only 25 connections needed instead of 100
└─────────────────┘
| Scenario | Without PgBouncer | With PgBouncer | Improvement |
|---|---|---|---|
| 100 concurrent API requests | 100 PostgreSQL connections | 25 connections | 75% reduction |
| Memory usage (per connection) | ~10MB × 100 = 1GB | ~10MB × 25 = 250MB | 75% less RAM |
| Connection overhead | High (new connection per request) | Low (reused connections) | 10x faster |
| Serverless functions | Connection timeout issues | Seamless operation | Reliable |
✅ Recommended for:
- Web applications with many concurrent users
- Serverless deployments (AWS Lambda, Vercel, Netlify)
- Microservices architecture
- API services with burst traffic
- Applications with short-lived connections
❌ Not needed for:
- Single long-running application
- Background workers with persistent connections
- Applications already using application-level pooling effectively
Our installation uses transaction mode (recommended for web apps):
| Mode | Connection Released | Use Case |
|---|---|---|
| Transaction | After each transaction | Web apps, APIs, serverless ✅ |
| Session | When client disconnects | Legacy apps requiring session state |
| Statement | After each statement | Rarely used |
# Check PgBouncer status
sudo systemctl status pgbouncer
# Restart PgBouncer
sudo systemctl restart pgbouncer
# View PgBouncer logs
sudo journalctl -u pgbouncer -f
# Connect to PgBouncer admin console
psql -h localhost -p 6432 -U postgres -d pgbouncer
# Inside pgbouncer console:
SHOW POOLS; # View connection pools
SHOW CLIENTS; # View client connections
SHOW SERVERS; # View server connections
SHOW STATS; # View statistics
RELOAD;           # Reload configuration

Configuration file location: /etc/pgbouncer/pgbouncer.ini
Key settings:
[pgbouncer]
pool_mode = transaction # Transaction pooling
max_client_conn = 1000 # Maximum client connections
default_pool_size = 25 # PostgreSQL connections per database
listen_addr = 0.0.0.0 # Listen on all interfaces
listen_port = 6432 # PgBouncer port
auth_type = scram-sha-256 # Authentication method
server_tls_sslmode = require    # Require SSL to PostgreSQL

Connection refused on port 6432:
# Check if PgBouncer is running
sudo systemctl status pgbouncer
# Check if port is listening
sudo ss -tunlp | grep 6432
# Check firewall
sudo ufw status | grep 6432

Authentication failed:
# Verify credentials in /etc/pgbouncer/userlist.txt
sudo cat /etc/pgbouncer/userlist.txt
# Restart PgBouncer after changes
sudo systemctl restart pgbouncer

Performance not improving:
# Ensure DB_PGBOUNCER_ENABLED=true in .env
# Check application is using port 6432
# Monitor pool usage: run SHOW POOLS; in the pgbouncer console

Check pool statistics:
psql -h localhost -p 6432 -U postgres -d pgbouncer -c "SHOW STATS;"

Monitor in real-time:
watch -n 2 'psql -h localhost -p 6432 -U postgres -d pgbouncer -c "SHOW POOLS;"'

Example output:
database | user | cl_active | cl_waiting | sv_active | sv_idle | sv_used
-----------------+----------+-----------+------------+-----------+---------+---------
tenant_abc123 | user_abc | 5 | 0 | 2 | 3 | 5
postgres_control| api_user | 2 | 0 | 1 | 1 | 2
- cl_active: active client connections
- sv_active: active PostgreSQL connections
- sv_idle: idle PostgreSQL connections ready for reuse
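These columns make it easy to spot a saturated pool programmatically. An illustrative sketch (the function and thresholds are assumptions, not part of PgBouncer): clients queued with no idle server connections usually means `default_pool_size` needs raising.

```javascript
// Illustrative check over one SHOW POOLS row: clients waiting
// (cl_waiting > 0) while no server connections sit idle usually
// means the pool is saturated.
function poolLooksSaturated(row) {
  return row.cl_waiting > 0 && row.sv_idle === 0;
}

console.log(poolLooksSaturated({ cl_active: 25, cl_waiting: 7, sv_idle: 0 })); // true
console.log(poolLooksSaturated({ cl_active: 5, cl_waiting: 0, sv_idle: 3 }));  // false
```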
Every database you create comes with 4 connection string variants - all pointing to the same database, just with different parameters. This is the same approach Neon uses.
Connection String Types:
1. Direct Connection (`connection_string`)
   - Port 5432 (direct PostgreSQL)
   - Use for: admin operations, migrations, pg_dump
2. Pooled Connection (`connection_string_pooled`)
   - Port 6432 + `?pgbouncer=true` parameter
   - Use for: web applications, API servers, serverless
3. Shadow Database (`shadow_database_url`)
   - Port 5432 + `?schema=public` parameter
   - Use for: Prisma shadow database (migrations)
4. Shadow + Pooled (`shadow_database_url_pooled`)
   - Port 6432 + `?schema=public&pgbouncer=true`
   - Use for: Prisma migrations with connection pooling
Key Concept: Unlike traditional approaches that create separate shadow databases, we use the same database with URL parameters - exactly like Neon!
When you create a database, you receive all 4 connection strings immediately:
POST /api/databases
X-API-Key: your-secret-api-key-here
{
"friendlyName": "My App Database",
"ownerEmail": "[email protected]",
"maxConnections": 20
}

Response:
{
"success": true,
"message": "Database created successfully",
"data": {
"id": "uuid-here",
"database_name": "tenant_abc123",
"username": "user_abc123",
"password": "aB3dEf7GhJk9",
"owner_email": "[email protected]",
"friendly_name": "My App Database",
"max_connections": 20,
"status": "active",
"created_at": "2024-01-01T00:00:00.000Z",
"connection_string": "postgresql://user_abc123:aB3dEf7GhJk9@db.yourdomain.com:5432/tenant_abc123?sslmode=require",
"connection_string_pooled": "postgresql://user_abc123:aB3dEf7GhJk9@db.yourdomain.com:6432/tenant_abc123?sslmode=require&pgbouncer=true",
"shadow_database_url": "postgresql://user_abc123:aB3dEf7GhJk9@db.yourdomain.com:5432/tenant_abc123?sslmode=require&schema=public",
"shadow_database_url_pooled": "postgresql://user_abc123:aB3dEf7GhJk9@db.yourdomain.com:6432/tenant_abc123?sslmode=require&schema=public&pgbouncer=true"
}
}

If you need to retrieve connection strings later (password masked):
GET /api/databases/:id/connection-strings
X-API-Key: your-secret-api-key-here

Response:
{
"success": true,
"data": {
"database_id": "uuid-here",
"database_name": "tenant_abc123",
"username": "user_abc123",
"connection_string": "postgresql://user_abc123:***@db.yourdomain.com:5432/tenant_abc123?sslmode=require",
"connection_string_pooled": "postgresql://user_abc123:***@db.yourdomain.com:6432/tenant_abc123?sslmode=require&pgbouncer=true",
"shadow_database_url": "postgresql://user_abc123:***@db.yourdomain.com:5432/tenant_abc123?sslmode=require&schema=public",
"shadow_database_url_pooled": "postgresql://user_abc123:***@db.yourdomain.com:6432/tenant_abc123?sslmode=require&schema=public&pgbouncer=true",
"note": "All URLs use the same database. Use ?pgbouncer=true for connection pooling and ?schema=public for Prisma shadow database."
}
}

Option 1: Direct Connection (for migrations)
# Main database
DATABASE_URL="postgresql://user_abc123:password@db.yourdomain.com:5432/tenant_abc123?sslmode=require"
# Shadow database (same database, different parameter)
SHADOW_DATABASE_URL="postgresql://user_abc123:password@db.yourdomain.com:5432/tenant_abc123?sslmode=require&schema=public"

Option 2: With PgBouncer (production)
# Main database (pooled)
DATABASE_URL="postgresql://user_abc123:password@db.yourdomain.com:6432/tenant_abc123?sslmode=require&pgbouncer=true"
# Shadow database (pooled)
SHADOW_DATABASE_URL="postgresql://user_abc123:password@db.yourdomain.com:6432/tenant_abc123?sslmode=require&schema=public&pgbouncer=true"

Your prisma/schema.prisma:
generator client {
provider = "prisma-client-js"
}
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
shadowDatabaseUrl = env("SHADOW_DATABASE_URL") // Same DB, just different URL parameter!
}
model User {
id Int @id @default(autoincrement())
email String @unique
name String?
createdAt DateTime @default(now())
}

Run migrations:
npx prisma migrate dev --name init
# Prisma will:
# 1. Test migrations on the same database (using schema=public parameter)
# 2. Apply to your main database
# 3. Generate Prisma Client

Step 1: Create a database
curl -X POST http://localhost:3000/api/databases \
-H "X-API-Key: your-secret-api-key-here" \
-H "Content-Type: application/json" \
-d '{
"friendlyName": "Customer Database",
"ownerEmail": "[email protected]",
"maxConnections": 20
}'

You'll receive all 4 connection strings in the response!
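The same call can be made from Node 18+ with the built-in `fetch`. A hedged sketch: the endpoint and header names come from the API description in this README, while `API_BASE`, `API_KEY`, and the two function names are placeholders of this example, not part of the shipped API.

```javascript
// Sketch of calling the database-creation endpoint from Node 18+.
// Split into a pure request builder (easy to test) and the network call.
function buildCreateDatabaseRequest(apiBase, apiKey, { friendlyName, ownerEmail, maxConnections = 20 }) {
  return {
    url: `${apiBase}/api/databases`,
    options: {
      method: 'POST',
      headers: {
        'X-API-Key': apiKey,               // API authentication header
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ friendlyName, ownerEmail, maxConnections }),
    },
  };
}

async function createDatabase(apiBase, apiKey, spec) {
  const { url, options } = buildCreateDatabaseRequest(apiBase, apiKey, spec);
  const res = await fetch(url, options);
  if (!res.ok) throw new Error(`API error: ${res.status}`);
  return res.json(); // response includes all 4 connection strings
}
```

Keeping the request construction pure makes it trivial to unit-test without a running server.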
Step 2: Configure your app
Create .env:
# For production (use pooled)
DATABASE_URL="postgresql://user_abc123:password@db.yourdomain.com:6432/tenant_abc123?sslmode=require&pgbouncer=true"
# For Prisma migrations (same DB, different parameter)
SHADOW_DATABASE_URL="postgresql://user_abc123:password@db.yourdomain.com:6432/tenant_abc123?sslmode=require&schema=public&pgbouncer=true"

Step 3: Run Prisma migrations
npx prisma migrate dev --name init

That's it! No separate shadow database to manage.
| Traditional (Complex) | Neon-style (Simple) |
|---|---|
| 2 databases (`tenant_abc123` + `tenant_abc123_shadow`) | 1 database (`tenant_abc123`) |
| 2 users (`user_abc123` + `user_abc123_shadow`) | 1 user (`user_abc123`) |
| 2 passwords to manage | 1 password |
| Shadow DB must be synced | No sync needed |
| Consumes 2x storage | Same storage |
| Separate CREATE/DELETE operations | Just use URL parameters |
PostgreSQL and Prisma don't actually need separate physical databases for shadow functionality. The ?schema=public parameter is enough for Prisma to understand it's working with a shadow context. Neon figured this out and we've implemented the same approach!
| Parameter | Purpose | Example |
|---|---|---|
| `?sslmode=require` | Enable SSL/TLS | Always included |
| `?pgbouncer=true` | Route through PgBouncer (port 6432) | For apps/APIs |
| `?schema=public` | Indicate shadow database context | For Prisma migrations |
| Combined | Pooling + shadow | `?sslmode=require&schema=public&pgbouncer=true` |
On your local machine or API server:
cd api
# Install Node.js dependencies
npm install

# Copy the example environment file
cp .env.example .env
# Edit .env with your credentials
nano .env

Update .env with your values:
# PostgreSQL Control Database Connection
DB_HOST=db.yourdomain.com
DB_PORT=5432
DB_USER=api_user
DB_PASSWORD=your-postgres-password
DB_NAME=postgres_control
DB_SSL=true
# API Configuration
PORT=3000
API_KEY=your-secret-api-key-here
# PostgreSQL Server (for creating new databases)
PG_ADMIN_USER=postgres
PG_ADMIN_PASSWORD=your-postgres-password

Development mode (with auto-reload):
npm run dev

Production mode:
# Build
npm run build
# Run
npm start

The API will start on http://localhost:3000
All API endpoints (except /health) require authentication via API key in the header:
-H "X-API-Key: your-secret-api-key-here"

GET /health

Response:
{
"status": "ok",
"timestamp": "2024-01-01T00:00:00.000Z",
"environment": "development"
}

POST /api/databases
Content-Type: application/json
X-API-Key: your-secret-api-key-here
{
"friendlyName": "Customer XYZ Database",
"ownerEmail": "[email protected]",
"maxConnections": 20
}

Response:
{
"success": true,
"message": "Database created successfully",
"data": {
"id": "uuid-here",
"database_name": "tenant_abc123",
"username": "user_abc123",
"password": "aB3dEf7GhJk9",
"connection_string": "postgresql://user_abc123:aB3dEf7GhJk9@db.yourdomain.com:5432/tenant_abc123?sslmode=require",
"owner_email": "[email protected]",
"friendly_name": "Customer XYZ Database",
"max_connections": 20,
"status": "active",
"created_at": "2024-01-01T00:00:00.000Z"
}
}

GET /api/databases
X-API-Key: your-secret-api-key-here

Response:
{
"success": true,
"count": 5,
"data": [...]
}

GET /api/databases/:id
X-API-Key: your-secret-api-key-here

GET /api/databases/:id/stats
X-API-Key: your-secret-api-key-here

Response:
{
"success": true,
"data": {
"database_id": "uuid",
"database_name": "tenant_abc123",
"size_bytes": 8388608,
"size_pretty": "8 MB",
"active_connections": 2,
"max_connections": 20
}
}

DELETE /api/databases/:id
X-API-Key: your-secret-api-key-here

Get all connection string variants for a database (direct, pooled, shadow):
GET /api/databases/:id/connection-strings
X-API-Key: your-secret-api-key-here

Response:
{
"success": true,
"data": {
"database_id": "uuid",
"database_name": "tenant_abc123",
"username": "user_abc123",
"connection_string": "postgresql://user_abc123:***@db.yourdomain.com:5432/tenant_abc123?sslmode=require",
"connection_string_pooled": "postgresql://user_abc123:***@db.yourdomain.com:6432/tenant_abc123?sslmode=require&pgbouncer=true",
"shadow_database_url": "postgresql://user_abc123:***@db.yourdomain.com:5432/tenant_abc123?sslmode=require&schema=public",
"shadow_database_url_pooled": "postgresql://user_abc123:***@db.yourdomain.com:6432/tenant_abc123?sslmode=require&schema=public&pgbouncer=true",
"note": "All URLs use the same database. Use ?pgbouncer=true for connection pooling and ?schema=public for Prisma shadow database."
}
}

By default, all databases allow connections from any IP address (with SSL required). You can restrict access to specific IPs per database.
POST /api/databases/:id/whitelist
Content-Type: application/json
X-API-Key: your-secret-api-key-here
{
"ipAddress": "203.0.113.5/32",
"description": "Office Network"
}

Supported formats:

- Single IP: 203.0.113.5/32
- CIDR range: 189.0.0.0/8
- Subnet: 192.168.1.0/24
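The way a CIDR entry decides which client IPs it matches can be sketched in a few lines. This is purely illustrative: the real enforcement happens in `pg_hba.conf` on the PostgreSQL server, not in your application, and the function names here are assumptions of this example.

```javascript
// Convert a dotted-quad IPv4 address to an unsigned 32-bit integer.
function ipToInt(ip) {
  return ip.split('.').reduce((acc, octet) => ((acc << 8) + Number(octet)) >>> 0, 0);
}

// True if `ip` falls inside the `cidr` block (e.g. "192.168.1.0/24").
function ipMatchesCidr(ip, cidr) {
  const [network, bitsStr] = cidr.split('/');
  const bits = Number(bitsStr);
  const mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(network) & mask);
}

console.log(ipMatchesCidr('203.0.113.5', '203.0.113.5/32'));  // true  (single IP)
console.log(ipMatchesCidr('192.168.1.77', '192.168.1.0/24')); // true  (subnet)
console.log(ipMatchesCidr('10.0.0.1', '192.168.1.0/24'));     // false
```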
GET /api/databases/:id/whitelist
X-API-Key: your-secret-api-key-here

DELETE /api/databases/:id/whitelist/:whitelistId
X-API-Key: your-secret-api-key-here

- By default, all IPs can connect (with SSL)
- When you add the first IP to a database's whitelist, only whitelisted IPs can connect
- The API automatically updates `pg_hba.conf` on the PostgreSQL server
- Changes take effect immediately (the PostgreSQL config is reloaded)
PostgreSQL Commands:
# View PostgreSQL status and connections
pg-status
# List all databases
pg-list-databases
# Backup control database
pg-backup-control
# View PostgreSQL logs
tail -f /var/log/postgresql/postgresql-16-main.log
# Restart PostgreSQL
sudo systemctl restart postgresql
# Reload PostgreSQL config (for pg_hba.conf changes)
sudo systemctl reload postgresql

PgBouncer Commands:
# Check PgBouncer status
sudo systemctl status pgbouncer
# Restart PgBouncer
sudo systemctl restart pgbouncer
# View PgBouncer logs
sudo journalctl -u pgbouncer -f
# Connect to PgBouncer admin console
psql -h localhost -p 6432 -U postgres -d pgbouncer
# View connection pools
psql -h localhost -p 6432 -U postgres -d pgbouncer -c "SHOW POOLS;"
# View statistics
psql -h localhost -p 6432 -U postgres -d pgbouncer -c "SHOW STATS;"

From terminal:
# Connect to a tenant database
PGPASSWORD='password' psql -h db.yourdomain.com -U user_abc123 -d tenant_abc123
# Connect to control database
PGPASSWORD='your-password' psql -h db.yourdomain.com -U api_user -d postgres_control

Connection string format:
postgresql://username:password@host:port/database?sslmode=require
Automated backups:
- Control database is backed up daily at 2 AM
- Backups stored in /var/backups/postgresql/
- Retention: 7 days
Manual backup:
# Backup control database
pg-backup-control
# Backup specific tenant database
sudo -u postgres pg_dump tenant_abc123 | gzip > backup.sql.gz
# Restore
gunzip -c backup.sql.gz | PGPASSWORD='password' psql -h db.yourdomain.com -U user_abc123 -d tenant_abc123

- All connections require SSL/TLS encryption
- Certificates from Let's Encrypt
- Auto-renewal configured
- TLSv1.2 minimum
- API: API key-based authentication
- PostgreSQL: SCRAM-SHA-256 password authentication
- Each tenant has isolated credentials
- Default: Open to all IPs (with SSL required)
- Optional: Per-database IP whitelisting
- Firewall: Manage via hardware/cloud firewall
- Generated passwords: 12-16 alphanumeric characters
- Stored hashed in control database (bcrypt)
- Only shown once during creation
- Use strong API keys: 32+ characters, random
- Enable IP whitelisting for sensitive databases
- Rotate credentials periodically
- Monitor logs regularly
- Keep backups secure and tested
- Update PostgreSQL regularly
curl -X POST http://localhost:3000/api/databases \
-H "X-API-Key: your-secret-api-key-here" \
-H "Content-Type: application/json" \
-d '{
"friendlyName": "Test Database",
"ownerEmail": "[email protected]",
"maxConnections": 10
}'

# Save the connection details from the response
PGPASSWORD='aB3dEf7GhJk9' psql -h db.yourdomain.com -U user_abc123 -d tenant_abc123
# Inside psql:
CREATE TABLE users (
id SERIAL PRIMARY KEY,
name VARCHAR(100),
email VARCHAR(100),
created_at TIMESTAMP DEFAULT NOW()
);
INSERT INTO users (name, email) VALUES ('John Doe', '[email protected]');
SELECT * FROM users;

curl -X POST http://localhost:3000/api/databases/uuid-here/whitelist \
-H "X-API-Key: your-secret-api-key-here" \
-H "Content-Type: application/json" \
-d '{
"ipAddress": "203.0.113.5/32",
"description": "My Office IP"
}'

# Check logs
sudo tail -100 /var/log/postgresql/postgresql-16-main.log
# Check service status
sudo systemctl status postgresql@16-main

# Check certificate
sudo certbot certificates
# Renew manually
sudo certbot renew
# Test renewal
sudo certbot renew --dry-run

1. Check PostgreSQL is listening on all interfaces:

   sudo ss -tunlp | grep 5432

   Should show 0.0.0.0:5432, NOT 127.0.0.1:5432. Fix if needed:

   # Edit postgresql.conf
   sudo sed -i "s/^#*listen_addresses *=.*/listen_addresses = '*'/" /etc/postgresql/16/main/postgresql.conf
   # Restart PostgreSQL
   sudo systemctl restart postgresql

2. Check firewall (ports 5432, 80, 22)

3. Verify pg_hba.conf:

   sudo cat /etc/postgresql/16/main/pg_hba.conf

4. Test SSL:

   openssl s_client -connect db.yourdomain.com:5432 -starttls postgres
- Verify .env credentials
- Test control database connection:
  PGPASSWORD='pass' psql -h db.yourdomain.com -U api_user -d postgres_control
- Check API logs for errors
1. PostgreSQL Server (Ubuntu 24.04)
   - PostgreSQL 16 with SSL
   - Control database for metadata
   - Tenant databases (isolated, support Prisma shadow via URL parameters)
   - PgBouncer connection pooler (port 6432)
2. REST API (Node.js/Express)
   - Database provisioning
   - Connection string generation (4 variants per database)
   - IP whitelist management
   - Statistics and monitoring
3. Control Database (postgres_control)
   - `databases` - tenant database metadata
   - `ip_whitelist` - per-database IP restrictions
   - `api_keys` - API authentication
   - `audit_logs` - operation tracking
   - `database_stats` - usage metrics
Database Creation Flow:
Client → API (with API Key)
↓
API → Control Database (metadata)
↓
API → PostgreSQL Admin (create database + user)
↓
Client ← Connection String
↓
Client → Tenant Database (with SSL)
Connection Flow with PgBouncer:
Client Application (100 connections)
↓
PgBouncer Port 6432 (Transaction pooling)
↓
PostgreSQL (Only 25 actual connections)
↓
Tenant Database
Prisma Migration Flow (Neon-style):
Developer → Prisma Migrate Dev
↓
Prisma → Same Database with ?schema=public (test migration)
↓
Prisma → Same Database (apply migration)
↓
Application ← Updated Schema
Note: Both URLs point to the same physical database!
- Same server as PostgreSQL (simple)
- Separate server (recommended for scale)
- Serverless (e.g., Vercel, AWS Lambda)
- Container (Docker, Kubernetes)
NODE_ENV=production
DB_HOST=db.yourdomain.com
DB_SSL=true
API_KEY=use-a-very-strong-random-key-here
# Enable PgBouncer for production (recommended)
DB_PGBOUNCER_ENABLED=true
DB_PGBOUNCER_PORT=6432

Using PM2:
npm install -g pm2
pm2 start dist/main.js --name postgres-api
pm2 startup
pm2 save

Using systemd:
Create /etc/systemd/system/postgres-api.service
MIT
For issues or questions:
- Check logs: /var/log/postgresql/
- Review configuration: /etc/postgresql/16/main/
- Test connections with psql
- Verify SSL with openssl s_client
Built with:
- PostgreSQL 16
- Node.js & Express
- TypeScript
- Let's Encrypt
- Ubuntu 24.04