Replies: 9 comments 16 replies
-
This would significantly increase the required space at /dev/shm. I would be open to making the rate limit of the events topic configurable, but I'm not convinced this approach is even practical. There is a reason I am managing all of the frames in an uncompressed shared memory store. I don't think you will be able to implement this successfully. My suggestion would be to do the best you can with the existing APIs and MQTT topics, because I think you are likely to end up with different but similar challenges anyway.
-
I have tried using the current APIs. It doesn't work. By the time Frigate produces the event, it goes through MQTT, and I get around to issuing an HTTP request to fetch the current frame, the bird has moved significantly and the bounding box is off and/or the bird pose is bad. My back of the envelope calculation is as follows:
That doesn't sound excessive to me. Even if there were 10 cameras it wouldn't cause issues as far as I know. The only alternative I can come up with is to shove all these frames into MQTT, and that seems worse to me. In the end, the primary purpose here is to enable experimentation with new stuff. If it works and efficiency is required, then it will probably have to be integrated more tightly into frigate.
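For reference, here is a rough sketch of the "existing APIs" flow we're talking about, to show where the race comes from. The broker/host names are placeholders, and the frigate/events payload fields and the /api/<camera>/latest.jpg snapshot path are assumptions from memory that may differ between frigate versions:

```python
import json

import paho.mqtt.client as mqtt
import requests

FRIGATE_URL = "http://frigate.local:5000"   # placeholder host

def on_message(client, userdata, msg):
    event = json.loads(msg.payload).get("after", {})
    camera = event.get("camera")
    if not camera:
        return
    # By the time this request completes, the object has often moved, so the
    # bounding box published with the event no longer lines up with the frame.
    frame_jpg = requests.get(f"{FRIGATE_URL}/api/{camera}/latest.jpg").content
    # ... crop event["box"] out of frame_jpg and feed it to a classifier ...

client = mqtt.Client()                      # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect("mqtt.local")                # placeholder broker
client.subscribe("frigate/events")
client.loop_forever()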
-
I have a different proposal (I haven't looked at the code yet to confirm feasibility/difficulty). With each event, publish an array of metadata that allows reprocessing or visualizing the clip corresponding to the event. The metadata would contain an entry for each frame processed by frigate, and each entry would consist of: the exact frame timestamp, object type, detection score, object bounding box, plus anything else relevant that I'm forgetting. This would allow an external classifier to fetch the clip, extract the frames that frigate analyzed, and for each one crop out the detection bbox. This would also allow some other features for debugging, such as producing an annotated version of the clip that visualizes what frigate's analysis came up with.
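Something like the following, purely as an illustration of the shape I have in mind; the field names and values are made up, not an existing frigate payload:

```python
# Hypothetical per-event metadata array as proposed above.
event_metadata = {
    "id": "1632172000.123456-abc123",      # event id (made-up value)
    "camera": "backyard",
    "frames": [
        {
            "frame_time": 1632172000.123,  # exact timestamp of the analyzed frame
            "label": "bird",
            "score": 0.78,
            "box": [412, 230, 468, 291],   # detection bbox in frame coordinates
        },
        # ... one entry per frame frigate analyzed during the event ...
    ],
}

# A downstream tool could then fetch the event clip, seek to each frame_time,
# crop each box, and re-classify or render an annotated debug version of the clip.
```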
-
I still like the idea of "chaining" object classification. It could potentially help to find and compare models with somehow "better" detection capabilities. https://hub.tensorflow.google.cn/s?module-type=image-object-detection
-
There is some related discussion here.
-
I would be interested in being able to chain frigate classification to another system for detection with a different model.
-
Has anybody already seen this birdfeeder project on google-coral?
-
@tve I followed the idea of the bird classifier and it also needs better object crops. The doods2 developer is going to build the bird object classifier into doods2, but good (cropped) images of the objects are still needed... after that is done, one could think of lots of other classification ideas...
-
I'd like to "chain" an object classification model off of frigate object detection in order to be able to classify detected objects more finely. I'm most interested in bird species, but it could also be persons or specific pets. More background in #1426
I've been going back and forth about how to try this out and my conclusion is that I'd like to start by building something external to frigate that can consume events published by frigate via MQTT. If the whole thing is successful, and others are interested, and it fits into frigate's mission, and the stars align, then maybe it can be integrated later. But initially building something independent seems a lot better.
What I'm missing is two things: access to better camera frames for further processing, and more intermediate events. The first issue is that when an event is published, the second stage really needs the exact same camera frame so it can apply the object bbox and do image classification. Right now this does not seem possible: it's a race to fetch /api/{camera_name} and hope things haven't moved too much, which is very unsatisfactory. Maybe I'm missing something.
The second issue is that frigate only publishes an event update if the object tracker thinks it has a better view of the object, but that doesn't necessarily match with what a downstream classifier might conclude. At least that's my experience so far. So I'd like to add a setting that causes frigate to output an event update every N seconds even if the current view is no better than the previous view.
The concrete proposal I have is: in the detect configuration section, add a force_update section to specify an interval in seconds at which frigate issues an event even if the view is no better.
Thoughts? Concerns? Alternatives? I'd love to get some feedback before I start hacking and produce a PR...
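To make the force_update idea more concrete, here is a rough, hypothetical sketch of the throttling logic I have in mind; this is not actual frigate code, and the names (FORCE_UPDATE_SECONDS, should_publish) are made up for illustration:

```python
import time

FORCE_UPDATE_SECONDS = 5        # the proposed detect.force_update interval

_last_update = {}               # tracked object id -> time of last published update

def should_publish(obj_id, has_better_view, now=None):
    """Publish when the tracker has a better view, or the interval elapsed."""
    now = time.time() if now is None else now
    overdue = now - _last_update.get(obj_id, 0.0) >= FORCE_UPDATE_SECONDS
    if has_better_view or overdue:
        _last_update[obj_id] = now
        return True
    return False
```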