An optimized object detection client for Frigate that leverages Apple Silicon's Neural Engine for high-performance inference using ONNX Runtime. Provides seamless integration with Frigate's ZMQ detector plugin.
- ZMQ IPC Communication: Implements the REQ/REP protocol over IPC endpoints (a minimal sketch follows this list)
- ONNX Runtime Integration: Runs inference using ONNX models with optimized execution providers
- Apple Silicon Optimized: Defaults to CoreML execution provider for optimal performance on Apple Silicon
- Error Handling: Robust error handling with fallback to zero results
- Flexible Configuration: Configurable endpoints, model paths, and execution providers
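To make the REQ/REP flow concrete, here is a minimal sketch of the reply side using pyzmq. The endpoint path and the raw-bytes payload are illustrative assumptions; the real message encoding is defined by Frigate's ZMQ detector plugin.

```python
import numpy as np
import zmq

# Minimal REP loop over an IPC endpoint (endpoint path and payload format
# are illustrative; the real encoding is defined by Frigate's plugin).
context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("ipc:///tmp/cache/zmq_detector")  # hypothetical endpoint

while True:
    request = socket.recv()                           # wait for a frame from Frigate
    frame = np.frombuffer(request, dtype=np.uint8)    # view the bytes without copying
    # ... run inference on `frame` here ...
    detections = np.zeros((20, 6), dtype=np.float32)  # placeholder "no detections" reply
    socket.send(detections.tobytes())
```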
- Download the latest `FrigateDetector.app.zip` from the Releases page.
- Unzip it and open `FrigateDetector.app` (first run: right-click → Open to bypass Gatekeeper).
- A Terminal window will appear and automatically:
  - create a local `venv/`
  - install dependencies
  - start the detector with `--model AUTO`

Alternatively, install and run from a terminal with the Makefile:

```bash
make install
make run
```
The detector will automatically use the configured model and start communicating with Frigate.
- Model Loading: Uses whatever model Frigate configures via its automatic model loading
- Apple Silicon Optimization: Uses the CoreML execution provider for maximum performance (see the sketch after this list)
- Frigate Integration: Drop-in replacement for Frigate's built-in detectors
- Multiple Model Support: YOLOv9, RF-DETR, D-FINE, and custom ONNX models
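The provider ordering behind the CoreML bullet above can be expressed directly with the standard ONNX Runtime API; the model path below is a placeholder:

```python
import onnxruntime as ort

# Prefer CoreML (Neural Engine/GPU) and fall back to CPU; ONNX Runtime
# skips any provider that is not available in the installed build.
session = ort.InferenceSession(
    "models/example.onnx",  # placeholder path
    providers=["CoreMLExecutionProvider", "CPUExecutionProvider"],
)
print(session.get_providers())  # the providers actually in effect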
The following models are supported by this detector, with measured inference times per Apple Silicon chip:
| Apple Silicon Chip | YOLOv9 | RF-DETR | D-FINE |
|---|---|---|---|
| M1 | | | |
| M2 | | | |
| M3 | 320-t: 8 ms | 320-Nano: 80 ms | 640-s: 120 ms |
| M4 | | | |
The detector uses the model that Frigate configures:
- Frigate automatically loads and configures the model via ZMQ
- The detector receives model information from Frigate's automatic model loading
- No manual model selection required - works with Frigate's existing model management
For implementation details, see the detector README.
- The Makefile automatically manages `venv/` and uses `venv/bin/python3` and `venv/bin/pip3` directly
- If you prefer to activate manually (optional): `source venv/bin/activate`
- Recreate the environment: `make reinstall` (removes `venv/` and reinstalls)
- Verify the installation:

```bash
venv/bin/python3 -c "import onnxruntime; print('ONNX Runtime version:', onnxruntime.__version__)"
```
```bash
# Run with a specific model
make run MODEL=/path/to/your/model.onnx

# Override the ZMQ endpoint
make run MODEL=/path/to/your/model.onnx ENDPOINT="tcp://*:5555"

# Choose execution providers explicitly (order = priority)
make run MODEL=/path/to/your/model.onnx PROVIDERS="CoreMLExecutionProvider CPUExecutionProvider"

# Enable verbose logging
make run MODEL=/path/to/your/model.onnx VERBOSE=1
```
```python
from detector.zmq_onnx_client import ZmqOnnxClient

# Create client instance
client = ZmqOnnxClient(
    endpoint="tcp://*:5555",
    model_path="/path/to/your/model.onnx",
    providers=["CoreMLExecutionProvider", "CPUExecutionProvider"],
)

# Start the server
client.start_server()
```
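The constructor arguments above mirror the `make run` variables (MODEL, ENDPOINT, PROVIDERS). On the reply side, Frigate detectors conventionally return a fixed-size (20, 6) float32 array; the row layout shown here is an assumption based on Frigate's built-in detectors, not something this README specifies:

```python
import numpy as np

# Assumed Frigate-style result: up to 20 rows of
# [class_id, score, y_min, x_min, y_max, x_max], coordinates normalized to 0-1.
detections = np.zeros((20, 6), dtype=np.float32)
detections[0] = [0, 0.92, 0.10, 0.20, 0.60, 0.55]  # one hypothetical detection
```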
The client includes comprehensive error handling:
- ZMQ Errors: Automatic socket reset and error response
- ONNX Errors: Fallback to zero results with error logging (see the sketch after this list)
- Decoding Errors: Graceful handling of malformed requests
- Resource Cleanup: Proper cleanup on shutdown
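A minimal sketch of the zero-result fallback described above, assuming a standard ONNX Runtime session; the input tensor name is a placeholder, since real models expose their own input names:

```python
import logging

import numpy as np

log = logging.getLogger("detector")

def safe_detect(session, tensor, input_name="images"):
    """Run inference; on any ONNX error, log it and return an empty,
    correctly shaped result so Frigate keeps running."""
    try:
        return session.run(None, {input_name: tensor})[0]
    except Exception as exc:
        log.error("Inference failed, returning zero results: %s", exc)
        return np.zeros((20, 6), dtype=np.float32)
```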
- CoreML Optimization: Leverages Apple's Neural Engine when available
- Memory Management: Efficient tensor handling with minimal copying (see the sketch after this list)
- Async Processing: Non-blocking ZMQ communication
- Batch Processing: Ready for future batch inference support
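One way to picture the minimal-copying claim: pyzmq can hand back the message buffer directly, and NumPy can view it in place. The tensor shape and dtype below are assumptions for illustration:

```python
import numpy as np
import zmq

def receive_tensor(socket: zmq.Socket, shape=(1, 3, 320, 320)) -> np.ndarray:
    # copy=False returns a zmq.Frame whose buffer NumPy can view in place;
    # frombuffer and reshape both produce views, so pixel data is never copied.
    frame = socket.recv(copy=False)
    return np.frombuffer(frame.buffer, dtype=np.float32).reshape(shape)
```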
- Permission Denied: Ensure the IPC endpoint directory (`/tmp/cache/`) has the proper permissions
- Model Loading Failed: Verify ONNX model files are in the `models/` directory
- ZMQ Bind Failed: Ensure the endpoint is not already in use by another process
- Package Not Found: Run `make reinstall` to recreate the virtual environment
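If CoreML does not seem to be in use, a quick first check is whether the installed ONNX Runtime build offers it at all:

```python
import onnxruntime as ort

# Lists the execution providers compiled into this build;
# "CoreMLExecutionProvider" should appear on a CoreML-capable wheel.
print(ort.get_available_providers())
```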
Enable verbose logging for detailed operation information:
```bash
make run VERBOSE=1
```
This detector works seamlessly with Frigate's ZMQ detector plugin:
- Start the detector: `make run`
- Configure Frigate: Add the ZMQ detector configuration (see Quick Start above)
- Done: Frigate automatically loads the model and the detector handles all inference requests (see the smoke-test sketch below)
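For a quick end-to-end check, a hypothetical smoke-test client can stand in for Frigate: it sends one dummy frame over REQ and prints the reply size. The payload encoding here is a stand-in; the real format is whatever Frigate's ZMQ detector plugin sends.

```python
import numpy as np
import zmq

context = zmq.Context()
socket = context.socket(zmq.REQ)
socket.connect("tcp://127.0.0.1:5555")  # matches ENDPOINT="tcp://*:5555" above

dummy = np.zeros((1, 3, 320, 320), dtype=np.float32)  # assumed input shape
socket.send(dummy.tobytes())
reply = socket.recv()
print("received", len(reply), "bytes")
```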
For detailed implementation information, see the detector documentation.
This project is provided as-is for integration with Frigate and ONNX Runtime inference.