- Format Conversion: Convert IFC to GLB, STEP, DAE, OBJ, XML, and other formats
- CSV Export/Import: Bidirectional data exchange with CSV, XLSX, and ODS formats
- Clash Detection: Advanced geometric clash detection with smart grouping
- IDS Validation: Validate IFC models against Information Delivery Specification
- Model Comparison: Diff analysis to track changes between IFC versions
- Quantity Takeoff: Automatic calculation and insertion of quantities (5D BIM)
- IFC Patching: Apply built-in and custom IfcPatch recipes to modify models
- JSON Conversion: Convert IFC files to JSON format for web applications
- ML Classification: CatBoost-based element classification
- Workflow Automation: Integrated n8n with custom community nodes
- 3D Viewer: Web-based IFC viewer using @thatopen/components
- Asynchronous Processing: Redis Queue (RQ) based job management
- PostgreSQL Storage: Persistent storage for processing results
- API Gateway: FastAPI-based REST API with comprehensive documentation
- Token-based File Sharing: Secure temporary download links with expiry
- Monitoring Dashboard: RQ Dashboard for queue monitoring
- Automatic Cleanup: Scheduled cleanup of old processing results
- ifcCsv - CSV/XLSX/ODS export and import
- ifcClash - Clash detection with smart grouping
- ifcTester - IDS validation
- ifcDiff - Model comparison and change tracking
- ifcConvert - Format conversion (GLB, STEP, etc.)
- ifc5D - Quantity takeoff calculations
- ifc2json - JSON conversion using https://github.com/bimaps/ifc2json
- ifcPatch - IfcPatch recipe execution (built-in + custom)
- IFC Classifier - ML-based element classification
- IFC Viewer - Web-based 3D viewer
- n8n Integration - Custom community nodes package
- API Key Authentication - Environment variable based security
- PostgreSQL Storage - Persistent result storage
- Worker Architecture - Containerized Python workers
- ifc4D - Time/scheduling integration
- Enhanced Error Handling - Better logging and error recovery
- Webhook Notifications - Job completion callbacks
- Result Caching - Performance optimization layer
- Quick introductory video (1 min)
- Use case examples video (15 min)
- PowerBI integration examples
- Example n8n workflow library
- Getting started with n8n guide
IFC Pipeline follows a microservice architecture with distributed workers for asynchronous processing.
- API Gateway (FastAPI) - Central orchestration point with REST endpoints
- Worker Services - Specialized Python workers for each operation:
  - `ifcconvert-worker` - Format conversion
  - `ifcclash-worker` - Clash detection
  - `ifccsv-worker` - CSV export/import
  - `ifctester-worker` - IDS validation
  - `ifcdiff-worker` - Model comparison
  - `ifc5d-worker` - Quantity calculations
  - `ifcpatch-worker` - IFC patching
  - `ifc2json-worker` - JSON conversion
- IFC Viewer - Web-based 3D viewer (Vite + @thatopen/components)
- n8n - Workflow automation platform with custom nodes
- Redis - Job queue and result backend
- PostgreSQL - Persistent storage for processing results
- Monitoring - RQ Dashboard and PgWeb for observability
- Queue-based Communication: All operations are asynchronous via Redis Queue
- Shared Volumes: `/uploads`, `/output`, and `/examples` for file management
- Token-based Access: Secure temporary download links with 30-minute expiry
- Horizontal Scalability: Workers can be replicated for load balancing
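Because every operation runs asynchronously through the queue, a client submits a job and then polls its status. A minimal client-side polling sketch, assuming the documented `/jobs/{job_id}/status` endpoint (the exact JSON field names and terminal state strings are illustrative, not taken from the gateway's source):

```python
# Minimal polling sketch for the async job pattern.
# Assumes the documented endpoints; the "status" field and the
# terminal state names below are illustrative assumptions.
import json
import time
import urllib.request

API = "http://localhost:8000"
HEADERS = {"X-API-Key": "your-api-key"}

TERMINAL_STATES = {"finished", "failed"}

def is_done(status: str) -> bool:
    """A job stops being polled once it reaches a terminal state."""
    return status in TERMINAL_STATES

def wait_for_job(job_id: str, interval: float = 2.0, timeout: float = 600.0) -> dict:
    """Poll GET /jobs/{job_id}/status until the job finishes or times out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        req = urllib.request.Request(f"{API}/jobs/{job_id}/status", headers=HEADERS)
        with urllib.request.urlopen(req) as resp:
            payload = json.load(resp)
        if is_done(payload.get("status", "")):
            return payload
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")
```

In practice a webhook callback (on the roadmap above) would replace polling for long-running jobs.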
- Git
- Docker & Docker Compose
Note for Mac Users (Apple Silicon):
- Docker Desktop for Mac with Rosetta 2 emulation enabled
- The project uses `platform: linux/amd64` for Python services to ensure compatibility with ifcopenshell wheels
- First-time builds may take longer due to emulation
- Install prerequisites (on Ubuntu/Debian):

  ```bash
  sudo apt install git docker-compose
  ```

  On macOS:

  ```bash
  # Install Docker Desktop from https://www.docker.com/products/docker-desktop
  # Ensure Rosetta 2 is enabled in Docker Desktop settings:
  # Settings > General > "Use Rosetta for x86_64/amd64 emulation on Apple Silicon"
  ```
- Clone the repository:

  ```bash
  git clone https://github.com/jonatanjacobsson/ifcpipeline.git
  cd ifcpipeline
  ```

- Set up environment variables:

  ```bash
  cp .env.example .env
  # Edit .env with your settings
  nano .env  # or use your preferred editor
  ```
Important environment variables to update:
  - `IFC_PIPELINE_API_KEY`: Set a secure random string (e.g., generate a UUID)
  - `POSTGRES_PASSWORD`: Set a secure database password
  - For local development, keep the default localhost URLs
  - For production, update URLs to your actual domain names
- Build and start all services:

  ```bash
  docker compose up --build -d
  ```

  Note: The first build may take 10-30 minutes depending on your system (longer on Apple Silicon due to emulation).
- Verify the installation:

  ```bash
  # Check that all services are running
  docker compose ps

  # Check health status
  curl http://localhost:8000/health
  ```

  Expected on first run: queue services will show `"waiting (no jobs yet)"` - this is normal!
- Access the services:

  - API Gateway: http://localhost:8000
  - API Documentation: http://localhost:8000/docs
  - n8n Workflows: http://localhost:5678
  - IFC Viewer: http://localhost:8001
  - RQ Dashboard: http://localhost:9181
  - PgWeb (Database): http://localhost:8081
- Authorize API access:

  - Open http://localhost:8000/docs (Swagger UI)
  - Click the "Authorize" button (lock icon in the top right)
  - Enter your `IFC_PIPELINE_API_KEY` from the `.env` file
  - Click "Authorize"
  - You can now test all endpoints!
- Test with a sample file:

  ```bash
  # Sample IFC files are available in the shared/examples directory
  # Upload a sample file via Swagger UI or curl:
  curl -X POST http://localhost:8000/upload/ifc \
    -H "X-API-Key: your-api-key-here" \
    -F "file=@shared/examples/Building-Architecture.ifc"

  # The file will be uploaded and you can use it in subsequent operations
  ```
- Install n8n community nodes (optional):

  - Open n8n at http://localhost:5678
  - Go to Settings > Community Nodes
  - Search for and install `n8n-nodes-ifcpipeline`
  - See the Community Nodes Guide for details
Check system health:

```bash
curl http://localhost:8000/health
```

Expected output:

```json
{
  "status": "healthy",
  "services": {
    "api-gateway": "healthy",
    "redis": "healthy",
    "ifcconvert_queue": "waiting (no jobs yet)",
    "ifcclash_queue": "waiting (no jobs yet)",
    ...
  }
}
```

View running services:

```bash
docker compose ps
```

View logs for a specific service:

```bash
docker compose logs api-gateway -f
```

To update IFC Pipeline to the latest version:
- Stop all running services:

  ```bash
  docker compose down
  ```

- Pull the latest changes:

  ```bash
  git pull
  ```

- Rebuild and restart services:

  ```bash
  docker compose up --build -d
  ```
Create a `.env` file in the project root with the following variables:

```bash
IFC_PIPELINE_API_KEY=your-secret-api-key
IFC_PIPELINE_ALLOWED_IP_RANGES=127.0.0.1/32,172.18.0.0/16
IFC_PIPELINE_EXTERNAL_URL=http://localhost:8000  # Use localhost for local dev
IFC_PIPELINE_PREVIEW_EXTERNAL_URL=http://localhost:8001

N8N_WEBHOOK_URL=http://localhost:5678  # Use localhost for local dev
N8N_COMMUNITY_PACKAGES_ENABLED=true

POSTGRES_USER=ifcpipeline
POSTGRES_PASSWORD=your-secure-password
POSTGRES_DB=ifcpipeline

REDIS_URL=redis://redis:6379/0
```

💡 Tip: IP ranges in CIDR format can bypass API key authentication for trusted networks.
The API Gateway exposes comprehensive REST endpoints for IFC operations:
- `POST /ifcconvert` - Convert IFC to other formats (GLB, STEP, OBJ, etc.)
- `POST /ifccsv` - Export IFC data to CSV/XLSX/ODS
- `POST /ifccsv/import` - Import CSV/XLSX/ODS data back to IFC
- `POST /ifcclash` - Detect clashes between IFC models
- `POST /ifctester` - Validate IFC against an IDS specification
- `POST /ifcdiff` - Compare two IFC files and generate a diff
- `POST /ifc2json` - Convert IFC to JSON format
- `POST /calculate-qtos` - Calculate quantities (5D)
- `POST /patch/execute` - Apply IfcPatch recipes
- `POST /patch/recipes/list` - List available patch recipes

- `POST /classify` - Classify a single IFC element (ML-based)
- `POST /classify/batch` - Classify multiple elements

- `POST /upload/{file_type}` - Upload IFC, IDS, or CSV files
- `POST /download-from-url` - Download a file from an external URL
- `POST /create_download_link` - Create a temporary download token
- `GET /download/{token}` - Download a file using a token
- `GET /list_directories` - List available files and directories

- `GET /jobs/{job_id}/status` - Check job status and results
- `GET /health` - System health check

- `GET /{token}` - Serve the IFC viewer with file access
Visit the auto-generated Swagger UI for interactive API testing:
```bash
# 1. Upload IFC file
curl -X POST http://localhost:8000/upload/ifc \
  -H "X-API-Key: your-api-key" \
  -F "file=@model.ifc"

# 2. Start conversion job
curl -X POST http://localhost:8000/ifcconvert \
  -H "X-API-Key: your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "input_filename": "model.ifc",
    "output_filename": "model.glb"
  }'
# Returns: {"job_id": "abc-123"}

# 3. Check job status
curl http://localhost:8000/jobs/abc-123/status \
  -H "X-API-Key: your-api-key"

# 4. Download result when complete
curl -X POST http://localhost:8000/create_download_link \
  -H "X-API-Key: your-api-key" \
  -H "Content-Type: application/json" \
  -d '{"file_path": "/output/glb/model.glb"}'
```

n8n provides a visual interface for creating automated IFC processing workflows.
The n8n-nodes-ifcpipeline community package provides:
- IfcPipeline - File operations, uploads, downloads, viewer links
- IfcConversion - Format conversion with configuration
- IfcCsv - CSV/XLSX/ODS export and import
- IfcClash - Clash detection with smart grouping
- IfcTester - IDS validation
- IfcDiff - Model comparison
- IfcToJson - JSON conversion
- IfcQuantityTakeoff - Quantity calculations
- IfcPatch - Apply recipes (dynamic recipe loading)
```
Webhook (New IFC File URL)
    ↓
Download File from URL
    ↓
Validate against IDS
    ↓
Run Clash Detection
    ↓
Export Results to CSV
    ↓
Send Email with Results
```
- Access n8n at http://localhost:5678
- Create your account
- Install the `n8n-nodes-ifcpipeline` community package
- Configure credentials (API Key + URL)
- Start building workflows!
⚠️ Note: Be aware of n8n's Sustainable Use License
Stores persistent results from workers:
- Clash detection results
- Validation reports
- Model comparison data
- Conversion metadata
Access PgWeb: http://localhost:8081
All workers access shared filesystem:
- `/uploads` - Input files
- `/output` - Processing results (organized by worker type)
- `/examples` - Sample files for testing
The cleanup service runs daily to remove:
- Files older than 7 days in `/output/clash` and `/output/diff`
- Empty directories
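The cleanup behavior described above can be sketched as follows. This is not the actual cleanup service's code, just the policy it implements (delete files past the age cutoff, then prune empty directories):

```python
# Sketch of the daily cleanup policy: remove files older than 7 days
# under a directory, then prune directories left empty.
# The real cleanup service may be implemented differently.
import os
import time

MAX_AGE_SECONDS = 7 * 24 * 3600

def cleanup(root: str, max_age: float = MAX_AGE_SECONDS) -> list:
    """Remove stale files under root and return the paths removed."""
    removed = []
    cutoff = time.time() - max_age
    # topdown=False visits leaves first, so emptied subdirs can be pruned.
    for dirpath, _dirnames, filenames in os.walk(root, topdown=False):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:
                os.remove(path)
                removed.append(path)
        if dirpath != root and not os.listdir(dirpath):
            os.rmdir(dirpath)
    return removed

# Would be invoked daily against the shared volumes, e.g.:
# cleanup("/output/clash"); cleanup("/output/diff")
```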
Check system status:
```bash
curl http://localhost:8000/health
```

Returns health status of:
- API Gateway
- Redis
- All worker queues
- Active workers
Monitor job queues at http://localhost:9181:
- Queue depths
- Worker status
- Failed jobs
- Job history and results
Cause: Missing or incorrect API key
Solution:
- Verify your `IFC_PIPELINE_API_KEY` in the `.env` file
- In Swagger UI (http://localhost:8000/docs), click "Authorize" and enter the API key
- For curl requests, add the header `-H "X-API-Key: your-api-key-here"`
Cause: This was the old behavior - queues now show "waiting (no jobs yet)" on first run
Solution:
- Update to latest version with improved health check messages
- This is normal on first startup before any jobs are queued
- Once you submit a job, the queue will initialize and show "healthy"
Cause: ifcopenshell wheels not available for your platform or Python version mismatch
Solution:
- Ensure you're using Python 3.11 (now standardized across all Dockerfiles)
- Verify `platform: linux/amd64` is set in docker-compose.yml for all Python services
- On Apple Silicon Macs: enable Rosetta 2 in Docker Desktop settings
- Use the pinned version `ifcopenshell==0.8.0` (now in all requirements.txt files)
- Rebuild images: `docker compose build --no-cache`
Cause: Worker crashed, out of memory, or not running
Solution:
```bash
# Check worker logs
docker compose logs ifcconvert-worker -f

# Restart specific worker
docker compose restart ifcconvert-worker

# Check worker status in RQ Dashboard
# Visit http://localhost:9181
```

Cause: Large IFC files or insufficient Docker resources
Solution:
```bash
# Check resource usage
docker stats
```

Increase memory limits in docker-compose.yml for heavy workers (ifcclash, ifcdiff, ifcpatch):

```yaml
deploy:
  resources:
    limits:
      memory: 16G  # Increase from 12G
```

Cause: Redis service not running or connection refused
Solution:
```bash
# Check Redis status
docker compose logs redis

# Restart Redis
docker compose restart redis

# Verify Redis is accessible
docker compose exec redis redis-cli ping
# Should return: PONG
```

Cause: Platform emulation (amd64 on arm64)
Solution:
- First build is slow (10-30 minutes) - this is expected
- Subsequent builds use Docker cache and are faster
- Consider using prebuilt images if available
- Ensure Rosetta 2 is enabled in Docker Desktop
Cause: Example files not present or incorrect path
Solution:
```bash
# Verify sample files exist
ls -la shared/examples/

# Sample files should include:
# - Building-Architecture.ifc
# - Building-Hvac.ifc
# - Building-Landscaping.ifc
# - Building-Structural.ifc

# Access them via the /examples volume mount in containers
```

Cause: Outdated dependencies or mixed Python/package versions
Solution:
- All Dockerfiles now use Python 3.11-slim
- All requirements.txt files pin ifcopenshell==0.8.0
- FastAPI/Pydantic versions are aligned across services
- Rebuild with `docker compose build --no-cache`
| Error Message | Component | Quick Fix |
|---|---|---|
| `403 Forbidden` | API Gateway | Add API key in Swagger UI or request headers |
| `queue key not found` | RQ Workers | Normal on first run - submit a job to initialize |
| `ifcopenshell not found` | Worker | Use Python 3.11 + `platform: linux/amd64` |
| `network not found` | Docker | Create the network or remove it from docker-compose.yml |
| `Redis connection refused` | Redis | Check Redis logs and restart the service |
| `Out of memory` | Worker | Increase memory limits in docker-compose.yml |
| `Worker not responding` | RQ Worker | Check logs and restart the worker |
| `Build timeout` | Docker | Increase Docker build resources |
Critical: Redis versions prior to 7.2.11 have a critical vulnerability allowing remote code execution.
Status: ✅ Fixed in this repository
- `docker-compose.yml` now uses `redis:7.2.11-alpine`
- Redis port 6379 is bound to localhost only (not exposed publicly)
- Redis is only accessible within the Docker network
Important:
- Keep Redis internal-only (do not expose port 6379 to public internet)
- Regularly update Redis to latest patch versions
- Monitor DigitalOcean or security advisories for Redis updates
Current configuration (safe):

```yaml
redis:
  image: "redis:7.2.11-alpine"  # Patched version
  ports:
    - "6379:6379"  # Localhost only - safe for development
```

For production:

```yaml
redis:
  image: "redis:7.2.11-alpine"
  # Remove ports entirely - internal access only
  networks:
    - default
```

View logs for all services:

```bash
docker compose logs -f
```

View logs for a specific service:

```bash
docker compose logs api-gateway -f
```

Filter logs by time:

```bash
docker compose logs --since 10m api-gateway
```

```
ifc-pipeline/
├── api-gateway/              # FastAPI application
├── shared/                   # Shared Python library
├── *-worker/                 # Worker services (ifcconvert, ifcclash, etc.)
├── ifc-viewer/               # Vite-based 3D viewer
├── ifc-classifier-service/   # ML classification service
├── n8n-data/                 # n8n persistent data
├── postgres/                 # Database utilities
└── docker-compose.yml        # Service orchestration
```
Add custom recipes to `ifcpatch-worker/custom_recipes/`:

- Create a Python file following the IfcPatch recipe structure
- Restart the ifcpatch-worker
- The recipe is auto-discovered and available in the API
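A skeleton of what such a recipe might look like. The `Patcher` class with a `patch()` method follows the general IfcPatch convention, but the exact constructor signature can vary between ifcopenshell versions, and the recipe name and behavior here (`SetProjectName`) are purely illustrative:

```python
# Hypothetical custom recipe, e.g. ifcpatch-worker/custom_recipes/SetProjectName.py.
# Follows the general IfcPatch convention (a Patcher class exposing patch());
# check your ifcopenshell version for the exact expected __init__ signature.

class Patcher:
    def __init__(self, file, logger, name="Renamed Project"):
        self.file = file    # the opened ifcopenshell model
        self.logger = logger
        self.name = name    # recipe argument supplied via the API

    def patch(self):
        # Rename every IfcProject in the model (there is normally exactly one).
        for project in self.file.by_type("IfcProject"):
            project.Name = self.name
        self.logger.info("Set project name to %s", self.name)
```

After dropping the file into `custom_recipes/` and restarting the worker, the recipe would appear in `POST /patch/recipes/list` and be runnable via `POST /patch/execute`.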
Heavy workers (configured with higher resources):
- ifcclash-worker: 4 CPU, 12GB RAM
- ifcdiff-worker: 4 CPU, 12GB RAM (2 replicas)
- n8n: 4 CPU, 6GB RAM
Light workers:
- ifccsv-worker: 0.5 CPU, 1GB RAM
- ifctester-worker: 0.3 CPU, 1GB RAM
Increase replicas for heavy workloads:
```yaml
# In docker-compose.yml
ifcdiff-worker:
  deploy:
    replicas: 4  # Increase from 2 to 4
```

- API Key Authentication: Required for all API endpoints
- IP Whitelisting: CIDR ranges can bypass API key requirement
- Token Expiry: Download tokens expire after 30 minutes
- Network Isolation: Workers communicate on internal Docker network only
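The IP-whitelisting check can be pictured with a small sketch. This is not the gateway's actual code, just the CIDR-matching logic using the stdlib `ipaddress` module and the example ranges from the configuration section:

```python
# Sketch of how IFC_PIPELINE_ALLOWED_IP_RANGES could be evaluated.
# The actual api-gateway implementation may differ; this shows only
# the CIDR-membership logic.
import ipaddress

ALLOWED = [ipaddress.ip_network(r) for r in "127.0.0.1/32,172.18.0.0/16".split(",")]

def ip_is_whitelisted(client_ip: str) -> bool:
    """Return True if client_ip falls inside any allowed CIDR range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED)

print(ip_is_whitelisted("172.18.0.5"))   # True  (inside 172.18.0.0/16)
print(ip_is_whitelisted("203.0.113.9"))  # False (still needs the API key)
```

Requests from whitelisted addresses skip the API-key check; everyone else must present `X-API-Key`.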
We welcome contributions! Please:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
See WORKER_CREATION_GUIDE.md for adding new workers.
This project wouldn't be possible without:
- IfcOpenShell - Open-source IFC toolkit
- n8n - Workflow automation platform
- @thatopen/components - BIM viewer framework
- BuildingSMART - IFC standards development
This project is licensed under the MIT License.
Questions or Issues? Open an issue on GitHub