Which Frigate+ model to use with NPU #19273
Describe the problem you are having

I've been trying to follow the (slightly outdated) instructions here to get my Intel Core Ultra 5 245K to work as an NPU detector, after adjusting things a bit (e.g. 0.16 is based on Debian 12, so I could simply …), using the YOLO-NAS model it generates instead of the default.

Version

0.16.0-d96efdb

Frigate config file

mqtt:
  host: xxx
  port: 1883
  topic_prefix: frigate
  client_id: frigate
  user: xxx
  password: xxx
detectors:
  onnx_0:
    type: openvino
    device: NPU
model:
  path: plus://xxx
detect:
  width: 1280
  height: 720
  fps: 5
  enabled: true
database:
  path: /db/frigate.db
objects:
  filters:
    person:
      # min_area: 15000
      # max_ratio: 0.85
      min_score: 0.7
      threshold: 0.8
    face:
      min_score: 0.4
      threshold: 0.7
    car:
      min_score: 0.75
      threshold: 0.9
    cat:
      min_score: 0.8
      threshold: 0.87
    dog:
      min_score: 0.8
      threshold: 0.87
    other_animal:
      min_score: 0.8
      threshold: 0.9
record:
  enabled: true
  retain:
    days: 1
    mode: all
  alerts:
    pre_capture: 10
    post_capture: 5
    retain:
      days: 30
      mode: active_objects
  detections:
    pre_capture: 10
    post_capture: 5
    retain:
      days: 10
      mode: active_objects
snapshots:
  enabled: true
  quality: 85
  retain:
    default: 30
lpr:
  enabled: true
  known_plates:
    Golf:
      - xxx
    BMW:
      - xxx
semantic_search:
  enabled: true
  reindex: false
  model_size: small
genai:
  enabled: false
  provider: openai
  api_key: '{FRIGATE_OPENAI_API_KEY}'
  model: gpt-4o
# Optional: birdseye configuration
# NOTE: Can be overridden at the camera level (enabled, mode)
birdseye:
  # Optional: Enable birdseye view (default: shown below)
  enabled: true
  # Optional: Restream birdseye via RTSP (default: shown below)
  # NOTE: Enabling this will set birdseye to run 24/7 which may increase CPU usage somewhat.
  restream: false
  # Optional: Width of the output resolution (default: shown below)
  width: 1280
  # Optional: Height of the output resolution (default: shown below)
  height: 720
  # Optional: Encoding quality of the mpeg1 feed (default: shown below)
  # 1 is the highest quality, and 31 is the lowest. Lower quality feeds utilize less CPU resources.
  quality: 8
  # Optional: Mode of the view. Available options are: objects, motion, and continuous
  # objects - cameras are included if they have had a tracked object within the last 30 seconds
  # motion - cameras are included if motion was detected in the last 30 seconds
  # continuous - all cameras are included always
  mode: motion
timestamp_style:
  position: tl
  format: '%d.%m.%Y %H:%M:%S'
go2rtc:
  streams:
    parking:
      - rtsp://{FRIGATE_CAMERA_1}/Streaming/Channels/1 # <- stream which supports video & aac audio. This is only supported for rtsp streams, http must use ffmpeg
    parking_sub:
      - rtsp://{FRIGATE_CAMERA_1}/Streaming/Channels/102 # <- stream which supports video & aac audio. This is only supported for rtsp streams, http must use ffmpeg
    entrance:
      - rtsp://{FRIGATE_CAMERA_2}/Streaming/Channels/1 # <- stream which supports video & aac audio. This is only supported for rtsp streams, http must use ffmpeg
    entrance_sub:
      - rtsp://{FRIGATE_CAMERA_2}/Streaming/Channels/102 # <- stream which supports video & aac audio. This is only supported for rtsp streams, http must use ffmpeg
    backyard:
      - rtsp://{FRIGATE_CAMERA_3}/Streaming/Channels/1 # <- stream which supports video & aac audio. This is only supported for rtsp streams, http must use ffmpeg
    backyard_sub:
      - rtsp://{FRIGATE_CAMERA_3}/Streaming/Channels/102 # <- stream which supports video & aac audio. This is only supported for rtsp streams, http must use ffmpeg
    karlas_room:
      - ffmpeg:rtsp://{FRIGATE_CAMERA_5}/Streaming/Channels/1#video=copy#audio=copy#audio=aac # <- stream which supports video & aac audio. This is only supported for rtsp streams, http must use ffmpeg
    karlas_room_sub:
      - ffmpeg:rtsp://{FRIGATE_CAMERA_5}/Streaming/Channels/102#video=copy#audio=copy#audio=aac # <- stream which supports video & aac audio. This is only supported for rtsp streams, http must use ffmpeg
    frontyard:
      - rtsp://{FRIGATE_CAMERA_6}/Streaming/Channels/1 # <- stream which supports video & aac audio. This is only supported for rtsp streams, http must use ffmpeg
    frontyard_sub:
      - rtsp://{FRIGATE_CAMERA_6}/Streaming/Channels/102 # <- stream which supports video & aac audio. This is only supported for rtsp streams, http must use ffmpeg
  webrtc:
    candidates:
      - 172.25.0.2:8555
      - 10.16.9.2:8555
      - stun:8555
cameras:
  parking: # <------ Name the camera
    ffmpeg:
      inputs:
        - path: rtsp://127.0.0.1:8554/parking_sub
          input_args: preset-rtsp-restream
          roles:
            - detect
        - path: rtsp://127.0.0.1:8554/parking
          input_args: preset-rtsp-restream
          roles:
            - record
    detect:
      width: 1280
      height: 720
      annotation_offset: -700
    record:
      enabled: true
    snapshots:
      enabled: true
    objects:
      track:
        - person
        - face
        - car
        - bicycle
        - license_plate
    zones:
      gate:
        inertia: 5
        coordinates: 1048,720,567,720,761,516,1076,674
        objects:
          - person
          - face
      tudors_parking_spot:
        inertia: 5
        coordinates: 994,134,1241,195,1177,673,545,567
        objects:
          - person
          - face
          - car
          - license_plate
          - bicycle
      katis_parking_spot:
        inertia: 5
        coordinates: 589,165,855,237,516,544,281,359
        objects:
          - person
          - face
          - car
          - license_plate
          - bicycle
    genai:
      use_snapshot: true
      # prompt: "Analyze the {label} in these images from the {camera} security camera at the front door. Focus on the actions and potential intent of the {label}."
      object_prompts:
        person: Examine the person in these images taken from a security camera mounted
          2.5 meters above a couple of parking spaces. In the lower-right corner of
          the image is the gate to a frontyard. Regarding the person in the image,
          what are they doing, and how might their actions suggest their purpose (e.g.,
          delivering something, approaching, leaving)? Summarize their outfit.
        car: Examine the cars in these images taken from a security camera mounted
          2.5 meters above a couple of parking spaces. Describe their color. If possible,
          name their make and model.
      objects:
        - person
        - car
      # required_zones:
      #   - gate
  entrance: # <------ Name the camera
    ffmpeg:
      inputs:
        - path: rtsp://127.0.0.1:8554/entrance_sub
          input_args: preset-rtsp-restream
          roles:
            - detect
        - path: rtsp://127.0.0.1:8554/entrance
          input_args: preset-rtsp-restream
          roles:
            - record
    detect:
      width: 1280
      height: 720
      annotation_offset: -800
    lpr:
      enabled: false
    record:
      enabled: true
    snapshots:
      enabled: true
    objects:
      track:
        - person
        - face
        - bicycle
        - cat
        - dog
        - other_animal
        - robot_lawnmower
    zones:
      front_entrance:
        inertia: 5
        coordinates: 288,720,719,720,495,521,244,660
        objects:
          - person
          - face
          - bicycle
          - cat
          - dog
          - other_animal
      gate:
        inertia: 5
        coordinates: 198,201,250,267,107,319,85,244
        objects:
          - person
          - face
          - bicycle
          - cat
          - dog
          - other_animal
      lawn:
        inertia: 10
        coordinates: 807,102,1280,293,1280,592,1210,551,1128,720,720,720,517,541,200,194,86,240,56,148,245,75,469,21
        objects:
          - person
          - face
          - bicycle
          - cat
          - dog
          - other_animal
          - robot_lawnmower
  backyard: # <------ Name the camera
    ffmpeg:
      inputs:
        - path: rtsp://127.0.0.1:8554/backyard_sub
          input_args: preset-rtsp-restream
          roles:
            - detect
        - path: rtsp://127.0.0.1:8554/backyard
          input_args: preset-rtsp-restream
          roles:
            - record
    detect:
      width: 1280
      height: 720
      annotation_offset: -1000
    lpr:
      enabled: false
    record:
      enabled: true
    snapshots:
      enabled: true
    objects:
      track:
        - person
        - face
        - cat
        - dog
        - other_animal
        - robot_lawnmower
  karlas_room: # <------ Name the camera
    ffmpeg:
      inputs:
        - path: rtsp://127.0.0.1:8554/karlas_room_sub
          input_args: preset-rtsp-restream
          roles:
            - audio
            - detect
        - path: rtsp://127.0.0.1:8554/karlas_room
          input_args: preset-rtsp-restream
          roles:
            - record
      output_args:
        record: preset-record-generic-audio-aac
    onvif:
      # host: camera-karla-eth.lan
      host: karlas-room.camera
      port: 80
      user: '{FRIGATE_KARLA_ONVIF_USERNAME}'
      password: '{FRIGATE_KARLA_ONVIF_PASSWORD}'
    audio:
      enabled: true
      listen:
        - speech
        - crying
        - yell
        - scream
        - whispering
        - snoring
    detect:
      width: 640
      height: 360
    lpr:
      enabled: false
    record:
      enabled: true
    snapshots:
      enabled: true
    objects:
      track:
        - person
        - face
      filters:
        person:
          min_area: 4000
          max_ratio: 0.9
          threshold: 0.75
  frontyard: # <------ Name the camera
    ffmpeg:
      inputs:
        - path: rtsp://127.0.0.1:8554/frontyard_sub
          input_args: preset-rtsp-restream
          roles:
            - detect
        - path: rtsp://127.0.0.1:8554/frontyard
          input_args: preset-rtsp-restream
          roles:
            - record
    detect:
      width: 1280
      height: 720
      annotation_offset: -950
    lpr:
      enabled: false
    record:
      enabled: true
    snapshots:
      enabled: true
    motion:
      mask:
        - 411,0,406,73,230,103,236,151,0,179,0,0
    objects:
      track:
        - person
        - face
        - car
        - bicycle
        - cat
        - dog
        - other_animal
        - robot_lawnmower
      filters:
        person:
          min_area: 4000
          max_ratio: 0.9
          threshold: 0.75
    zones:
      gate:
        inertia: 5
        coordinates: 710,474,806,536,926,477,836,428
        objects:
          - person
          - face
          - bicycle
          - cat
          - dog
          - other_animal
      patio:
        inertia: 5
        coordinates: 396,224,508,284,299,319,252,246
        objects:
          - person
          - face
          - cat
          - dog
          - other_animal
      front_entrance:
        inertia: 5
        coordinates: 538,364,600,348,498,286,434,296
        objects:
          - person
          - face
          - bicycle
          - cat
          - dog
          - other_animal
      lawn:
        inertia: 10
        coordinates: 228,232,308,309,428,297,809,534,902,490,1238,649,1192,720,530,720,148,720,102,540,98,254
        objects:
          - person
          - face
          - bicycle
          - cat
          - dog
          - other_animal
          - robot_lawnmower
version: 0.16-0

docker-compose file or Docker CLI command

version: "3.9"
services:
  frigate:
    container_name: frigate
    image: ghcr.io/blakeblackshear/frigate:0.16.0-beta4
    restart: unless-stopped
    shm_size: "224mb"
    devices:
      - /dev/accel:/dev/accel
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /mnt/dev/container-apps/frigate-new/:/config/
      - /mnt/recordings/cameras/frigate-new:/media/frigate
      - /home/frigate/db:/db
      - type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
        target: /tmp/cache
        tmpfs:
          size: 1500000000
    ports:
      - "8971:8971"
      - "5000:5000"
      - "8554:8554" # RTSP feeds
      - "8555:8555/tcp" # WebRTC over tcp
      - "8555:8555/udp" # WebRTC over udp
    environment:
      FRIGATE_RTSP_PASSWORD: "xxx"
      FRIGATE_OPENAI_API_KEY: "xxx"
      FRIGATE_CAMERA_1: "xxx:xxx@xxx:554"
      FRIGATE_CAMERA_2: "xxx:xxx@xxx:554"
      FRIGATE_CAMERA_3: "xxx:xxx@xxx:554"
      FRIGATE_CAMERA_5: "xxx:xxx@xxx:554"
      FRIGATE_CAMERA_6: "xxx:xxx@xxx:554"
      PLUS_API_KEY: "xxx"
      FRIGATE_KARLA_ONVIF_USERNAME: "xxx"
      FRIGATE_KARLA_ONVIF_PASSWORD: "xxx"

Relevant Frigate log output

2025-07-24 15:36:52.893400156 INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
2025-07-24 15:36:52.900536136 [2025-07-24 15:36:52] frigate.audio_manager INFO : Audio processor started (pid: 644)
2025-07-24 15:36:52.981111843 Process detector:onnx_0:
2025-07-24 15:36:52.981114632 Traceback (most recent call last):
2025-07-24 15:36:52.981115515 File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
2025-07-24 15:36:52.981119360 self.run()
2025-07-24 15:36:52.981120337 File "/opt/frigate/frigate/util/process.py", line 41, in run_wrapper
2025-07-24 15:36:52.981120962 return run(*args, **kwargs)
2025-07-24 15:36:52.981129279 ^^^^^^^^^^^^^^^^^^^^
2025-07-24 15:36:52.981129945 File "/usr/lib/python3.11/multiprocessing/process.py", line 108, in run
2025-07-24 15:36:52.981130712 self._target(*self._args, **self._kwargs)
2025-07-24 15:36:52.981131331 File "/opt/frigate/frigate/object_detection/base.py", line 112, in run_detector
2025-07-24 15:36:52.981132084 object_detector = LocalObjectDetector(detector_config=detector_config)
2025-07-24 15:36:52.981132675 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-07-24 15:36:52.981145206 File "/opt/frigate/frigate/object_detection/base.py", line 57, in __init__
2025-07-24 15:36:52.981145846 self.detect_api = create_detector(detector_config)
2025-07-24 15:36:52.981152930 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-07-24 15:36:52.981153627 File "/opt/frigate/frigate/detectors/__init__.py", line 18, in create_detector
2025-07-24 15:36:52.981154168 return api(detector_config)
2025-07-24 15:36:52.981154596 ^^^^^^^^^^^^^^^^^^^^
2025-07-24 15:36:52.981155175 File "/opt/frigate/frigate/detectors/plugins/openvino.py", line 50, in __init__
2025-07-24 15:36:52.981155723 self.interpreter = self.ov_core.compile_model(
2025-07-24 15:36:52.981156189 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-07-24 15:36:52.981156854 File "/usr/local/lib/python3.11/dist-packages/openvino/runtime/ie_api.py", line 543, in compile_model
2025-07-24 15:36:52.981163247 super().compile_model(model, device_name, {} if config is None else config),
2025-07-24 15:36:52.981163982 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-07-24 15:36:52.981164556 RuntimeError: Exception from src/inference/src/cpp/core.cpp:121:
2025-07-24 15:36:52.981165166 Exception from src/inference/src/dev/plugin.cpp:58:
2025-07-24 15:36:52.981165715 Exception from src/plugins/intel_npu/src/plugin/src/plugin.cpp:697:
2025-07-24 15:36:52.981166322 Exception from src/plugins/intel_npu/src/plugin/src/compiled_model.cpp:62:
2025-07-24 15:36:52.981166825 Exception from src/core/src/partial_shape.cpp:266:
to_shape was called on a dynamic shape.

Install method

Proxmox via Docker

Object Detector

OpenVino

Screenshots of the Frigate UI's System metrics pages

N/A

Any other information that may be helpful

No response
Replies: 4 comments 20 replies
For Frigate+ models on Intel NPUs via OpenVINO, the recommended model type is yolonas. The documentation specifies that for Intel hardware (including platforms supporting OpenVINO) you should use the openvino detector type with the yolonas model type for Frigate+. Your detector config should look like this:

detectors:
  ov:
    type: openvino
    device: NPU

model:
  model_type: yolonas
  width: 320 # or 640, must match the model export
  height: 320 # or 640, must match the model export
  input_tensor: nchw
  input_pixel_format: bgr
  path: /config/yolo_nas_s.onnx # or the downloaded Frigate+ yolonas model path
  labelmap_path: /labelmap/coco-80.txt

Make sure you are using the yolonas variant from Frigate+ and that the model is exported in a format compatible with OpenVINO. The mobiledet model type is not recommended or supported for OpenVINO/Intel NPUs.

Have another question? Just tag @inkeep.
There is no official NPU support. I have only seen a couple of reports that YOLO-NAS was working with the NPU, and the performance was considerably worse than with the GPU. In general, using the GPU is recommended, as the NPU is generally not powerful enough to run a larger model like YOLO-NAS.
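For comparison, a minimal OpenVINO detector config targeting the integrated GPU instead of the NPU might look like the sketch below (following the Frigate detector docs; the plus:// model ID is a placeholder for your own Frigate+ model):

```yaml
detectors:
  ov:
    type: openvino
    device: GPU # iGPU is generally the better fit for a larger model like YOLO-NAS

model:
  path: plus://xxx # placeholder: your Frigate+ model ID
```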
I successfully used the Intel NPU (Intel Core Ultra 7 258V) on Frigate with YOLO-NAS. The NPU won't run the flattened export; it's the same problem as running YOLO-NAS on the Rockchip NPU. I modified openvino.py to support this model.
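The "to_shape was called on a dynamic shape" error in the original post's log comes from compiling a model that still has dynamic dimensions for the NPU, which only accepts fully static shapes. In the OpenVINO Python API the usual remedy is to call Model.reshape() before compile_model(). The dimension substitution itself can be illustrated without OpenVINO installed; make_static below is a hypothetical helper, not Frigate's actual code:

```python
def make_static(shape, overrides):
    """Replace dynamic dimensions (marked as -1) using {index: value} overrides.

    Mirrors what ov.Model.reshape() achieves before compiling for the NPU,
    which rejects any remaining dynamic dimension.
    """
    static = [overrides.get(i, dim) if dim == -1 else dim
              for i, dim in enumerate(shape)]
    if any(dim <= 0 for dim in static):
        raise ValueError(f"shape is still dynamic: {static}")
    return static

# A YOLO-NAS ONNX export with a dynamic batch dimension:
print(make_static([-1, 3, 320, 320], {0: 1}))  # [1, 3, 320, 320]

# With OpenVINO itself, the equivalent would roughly be (untested sketch):
#   core = openvino.Core()
#   model = core.read_model("yolo_nas_s.onnx")
#   model.reshape([1, 3, 320, 320])  # make all dimensions static
#   compiled = core.compile_model(model, "NPU")
```

If any dimension is left dynamic after the overrides, the compile step would still fail, which is why the helper raises instead of returning a partially static shape.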
I am in the process of overhauling my server and am wondering if the NPU on a Core Ultra 5 245K would be worth it for 8 cameras?
Looks like with the later versions of OpenVINO, both YOLO-NAS and YOLOv9 via Frigate+ are supported. Official support for the NPU has been added for 0.17: #20536