How to contribute to the code of other edge computing AI accelerators? #20283
Replies: 4 comments 2 replies
-
Support for object detection hardware can be added to Frigate through the community supported boards project. See the official documentation here: https://docs.frigate.video/development/contributing-boards. You can find examples of other community-supported boards in the codebase. Here is a PR for a recently added one: #19680. You can follow the conventions in that PR and in the official documentation to submit a pull request.
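To give a rough sense of what such a detector plugin looks like: existing Frigate detector plugins subclass a detection API base class, register a detector key, and implement a `detect_raw` method that returns a fixed-size array of detections. The sketch below follows that pattern, but the base class is stubbed out so the example is self-contained, and `DETECTOR_KEY = "llm8850"` is a hypothetical name; check the contributing-boards docs and a recent detector PR for the exact interface.

```python
import numpy as np

# Stub standing in for Frigate's real detector base class; an actual
# plugin would subclass the class Frigate provides instead.
class DetectionApi:
    def __init__(self, detector_config):
        self.detector_config = detector_config

# Hypothetical key a new accelerator plugin might register under.
DETECTOR_KEY = "llm8850"

class Llm8850Detector(DetectionApi):
    """Sketch of a detector plugin for a hypothetical edge accelerator."""

    def __init__(self, detector_config):
        super().__init__(detector_config)
        # A real plugin would open the device and load the compiled model here.

    def detect_raw(self, tensor_input):
        # Frigate detectors return a (20, 6) float array of detections:
        # [class_id, score, y_min, x_min, y_max, x_max], padded with zero rows.
        detections = np.zeros((20, 6), np.float32)
        # A real plugin would run inference on tensor_input; here we fill
        # one fake detection row purely to illustrate the output shape.
        detections[0] = [0, 0.87, 0.1, 0.1, 0.5, 0.5]
        return detections

detector = Llm8850Detector(detector_config=None)
out = detector.detect_raw(np.zeros((1, 320, 320, 3), np.uint8))
print(out.shape)  # (20, 6)
```

The fixed-size, zero-padded output array is what lets Frigate consume results from very different backends through one interface.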
-
To contribute support for new edge AI accelerators or detectors to Frigate, you should follow the standard process for contributing to the Frigate main code base:
For large or architectural changes, it is recommended to open a discussion on GitHub before submitting significant modifications. You can find more details and guidelines in the official contributing guide: Contributing To The Main Code Base.
-
@BUG1989 I spotted the hardware specification for this on the CNX Software blog and have three questions, with some follow-up questions/links:
For some edge AI use cases, requiring active cooling is undesirable, so it would be a good idea to also sell a lower-temperature variant with just a larger passive heatsink.
That is, can it also be used to accelerate all types of video decode and video encode tasks for Frigate NVR?
Home Assistant OS in turn can run Frigate as an add-on. See these Home Assistant specific requests and discussions asking about support for this new M5Stack LLM-8850 AI accelerator adapter. The AI accelerator could also be useful for many other things in HA OS, such as STT (Whisper), TTS (Piper), object recognition, and local LLMs. PS: Home Assistant Operating System supports 64-bit ARM and 64-bit x86 platforms, but the most popular today is probably the Raspberry Pi 5.
-
FYI, also see the "Radxa AICore AX-M1", an M.2 2280 M-key form-factor AI acceleration module (M72T) based on the same Axera AX8850 SoC. Radxa has not yet posted anything about pricing or when it will be available. Radxa officially lists their ROCK 5A, 5B, 5B+, and ROCK 5 ITX boards as tested. RapidAnalysis posted a demo video of a preview unit testing the DeepSeek-R1-Qwen-7B and SmolLM2-360M-Instruct large language models. Radxa has a support listing covering the following model categories:
Large Language Models:
Vision Large Models:
Speech Models:
Text-to-image Generation Model:
Vision Model:
-
Describe what you are trying to accomplish and why in non-technical terms
We are a company specializing in edge AI chips. Our AI chips can efficiently run excellent algorithm models such as YOLO, CLIP, Whisper, LLM, and VLM, and consume only a few watts of power.
We have already completed a preliminary adaptation to Frigate of our M.2 compute card LLM8850 (similar to the Hailo-8L; M.2 2242 form factor; up to 24 TOPS @ INT8), based on the AX650N/AX8850 edge AI chips, reducing the CPU usage of the object detector.
Our next step is to also support JinaCLIP. Our chips have already been adapted to various CLIP models, such as CN_CLIP-L14-336, CLIP-L14-336, and MobileCLIP2-S4, which are already in mass production on other commercial intelligent NVRs. We hope to bring this functionality to the Frigate platform as well, so that Frigate users have more options.
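For context on what CLIP-style models add to an NVR: an image encoder and a text encoder map their inputs into a shared embedding space, and matching a text query against stored snapshots reduces to cosine similarity between L2-normalized vectors. A minimal sketch with made-up embeddings (no real CLIP model is loaded here; the 512-dimensional vectors are stand-ins for encoder outputs):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(np.dot(a, b))

# Stand-ins for embeddings a CLIP image/text encoder would produce.
rng = np.random.default_rng(0)
image_embedding = rng.standard_normal(512)
# A text query describing the image should land near it in the space;
# we fake that here by perturbing the image embedding slightly.
text_embedding_match = image_embedding + 0.1 * rng.standard_normal(512)
text_embedding_other = rng.standard_normal(512)

sim_match = cosine_similarity(image_embedding, text_embedding_match)
sim_other = cosine_similarity(image_embedding, text_embedding_other)
print(sim_match > sim_other)  # the matching query scores higher
```

Offloading the encoder inference to an M.2 accelerator changes only where the embeddings are computed; the similarity search itself stays the same.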
We can provide adaptation services for x86 and aarch64 platforms (such as the Raspberry Pi 5), and once completed, the source code will be made open source.
Describe the solution you'd like
If we want to add code to the Frigate mainline to support our chips, are there any rules we should follow? We hope to receive guidance from Frigate's engineers so that our co-processors can reach their full potential and enhance Frigate.
Describe alternatives you've considered
None
Additional context