Problem Description
I have two specialized ONNX models that I need to use together:
- COCO-trained model: Detects persons accurately
- WiderFace-trained model (or any custom model that natively detects faces): Detects faces accurately but doesn't detect persons
For face recognition to work properly in Frigate 0.16, the system needs to:
- First detect a person (required by Frigate's face recognition workflow)
- Then detect faces within the person detection area
Currently, while Frigate supports multiple detectors, each detector can only use a single model. This limits the ability to combine specialized models within a single detection pipeline.
Proposed Solution
Add support for multiple ONNX models in a pipeline configuration, such as:
detectors:
  primary:
    type: onnx
    model:
      path: /models/coco_person_detector.onnx
  secondary:
    type: onnx
    model:
      path: /models/widerface_detector.onnx
    depends_on: primary
    roi_from: person  # Only run face detection within person detections
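Here `depends_on` and `roi_from` are proposed keys, not part of the current configuration schema: `depends_on` would declare that the secondary detector runs only on frames already processed by the primary one, and `roi_from` would restrict it to regions of the named label returned by that detector.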
Alternative Workarounds Considered
- Model merging: Too complex and requires retraining
- External processing: Adds latency and complexity
- Custom detector plugin: Requires significant development effort
Use Cases
This would benefit users who have:
- Specialized models for different object types
- Models optimized for specific scenarios (indoor/outdoor, day/night)
- Legacy models that work well for specific use cases but lack comprehensive object detection
Expected Behavior
The detection pipeline would (a rough sketch follows the list):
- Run the primary detector (person detection)
- For each person detection, run the secondary detector(s) within that ROI
- Combine the results into a unified detection output
- Pass the combined results to the face recognition system
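As a rough illustration of these steps (not Frigate's internal detector API), the sketch below chains two ONNX models with onnxruntime. The model paths come from the example config above; the input size, the output layout (an (N, 5) array of [x1, y1, x2, y2, score] boxes), and the helper names are assumptions that would need to match the actual model exports.

```python
"""Sketch of the proposed cascade using onnxruntime, outside Frigate.

Assumptions (not Frigate internals): both models take a 640x640 RGB float32
NCHW input and return an (N, 5) array of [x1, y1, x2, y2, score] boxes in
input pixels. Real exports differ; adjust pre/post-processing accordingly.
"""
import cv2
import numpy as np
import onnxruntime as ort

PERSON_MODEL = "/models/coco_person_detector.onnx"
FACE_MODEL = "/models/widerface_detector.onnx"
INPUT_SIZE = 640

person_sess = ort.InferenceSession(PERSON_MODEL)
face_sess = ort.InferenceSession(FACE_MODEL)


def detect(sess, image_bgr, threshold=0.5):
    """Run one model on an image and return boxes scaled to that image."""
    h, w = image_bgr.shape[:2]
    blob = cv2.resize(image_bgr, (INPUT_SIZE, INPUT_SIZE))
    blob = cv2.cvtColor(blob, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    blob = np.transpose(blob, (2, 0, 1))[None]  # NCHW
    (boxes,) = sess.run(None, {sess.get_inputs()[0].name: blob})
    boxes = boxes[boxes[:, 4] >= threshold].copy()
    # Scale from model input space back to the source image.
    boxes[:, [0, 2]] *= w / INPUT_SIZE
    boxes[:, [1, 3]] *= h / INPUT_SIZE
    return boxes


def cascade(frame_bgr):
    """Primary person detection, then face detection inside each person ROI."""
    persons = detect(person_sess, frame_bgr)
    faces = []
    for x1, y1, x2, y2, _score in persons:
        x1, y1, x2, y2 = map(int, (x1, y1, x2, y2))
        roi = frame_bgr[y1:y2, x1:x2]
        if roi.size == 0:
            continue
        for fx1, fy1, fx2, fy2, fscore in detect(face_sess, roi):
            # Map ROI-relative face boxes back to full-frame coordinates.
            faces.append([fx1 + x1, fy1 + y1, fx2 + x1, fy2 + y1, fscore])
    # Unified output: person boxes plus face boxes, all in frame coordinates.
    return persons, np.array(faces)
```

Doing the equivalent inside Frigate would let the secondary detector reuse the already-decoded frame and hand the combined boxes to the existing face recognition step.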
This feature would significantly improve detection accuracy by allowing users to leverage specialized models for different detection tasks.