If you are reading this repository, you probably want to try this multi-threaded inference demo.
- The code here is not optimal: there are some frame-ordering issues that need to be solved by other means, and I did not solve them here. See the new-version repository linked under the next heading; the new repository fixes this problem.
- The design and ideas behind this repository's code are explained in detail on my Bilibili channel. If you want to understand the program, search for "kaylordut" on Bilibili.
- For project cooperation, email [email protected]. Please state your purpose, a brief description of your requirements, and your budget. I usually reply to emails. Please do not ask for my WeChat right away; a feasible project or a good technical exchange is a good start.
An inference framework compatible with TensorRT, OnnxRuntime, NNRT and RKNN
If you want more YOLOv8/YOLOv11 demos and a Depth Anything demo, visit my other repository
The project is a multi-threaded inference demo of Yolov8 running on the RK3588 platform, adapted for reading both video files and camera feeds. The demo uses the Yolov8n model for file inference, with an inference frame rate of up to 100 frames per second.
If you want to test yolov8n with ROS2 on your own kit, click the link
You can find the model files in 'src/yolov8/model', and some large files here:
Link: https://pan.baidu.com/s/1zfSVzR1G7mb-EQvs6A6ZYw?pwd=gmcs Password: gmcs
Google Drive: https://drive.google.com/drive/folders/1FYluJpdaL-680pipgIQ1zsqqRvNbruEp?usp=sharing
go to my blog --> blog.kaylordut.com
go to my other repository --> yolov10
Download the .pt model and export it:
# End-to-End ONNX
yolo export model=yolov10n/s/m/b/l/x.pt format=onnx opset=13 simplify
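The `yolo` CLI used above comes from the YOLOv10 codebase; a minimal sketch of the full export workflow is below. The install step is an assumption (check the YOLOv10 repository's README for the exact install instructions), and `yolov10n.pt` stands in for whichever weight file you downloaded.

```shell
# Sketch only: the package providing the `yolo` CLI for YOLOv10 may need to be
# installed from the YOLOv10 fork rather than from PyPI - check its README.
pip install ultralytics

# Export one variant (nano here) to end-to-end ONNX; repeat for s/m/b/l/x.
yolo export model=yolov10n.pt format=onnx opset=13 simplify
```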
TIPS: (Yolov10)
- rknn-toolkit2 (release 1.6.0) does not support some attention-related operators, so it runs the attention steps on the CPU, which increases inference time.
- rknn-toolkit2 (beta 2.0.0b12) supports the attention operators on the RK3588, so I built a docker image; you can pull it from kaylor/rknn_onnx2rknn:beta
Please refer to the spreadsheet '8vs10.xlsx' for details.
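To convert the exported ONNX model with the beta toolkit, the docker image mentioned in the tips can be used. Only the image name comes from this README; the mount point and what to run inside the container are assumptions, so check the image's own documentation.

```shell
# Pull the conversion image (image name taken from the tip above).
docker pull kaylor/rknn_onnx2rknn:beta

# Run it with the current directory (containing your .onnx file) mounted.
# The /workspace mount point and the in-container conversion command are
# assumptions - inspect the image to find its actual entrypoint.
docker run -it --rm -v "$(pwd)":/workspace kaylor/rknn_onnx2rknn:beta
```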
| V8l-2.0.0 | V8l-1.6.0 | V10l-2.0.0 | V10l-1.6.0 | V8n-2.0.0 | V8n-1.6.0 | V10n-2.0.0 | V10n-1.6.0 |
|---|---|---|---|---|---|---|---|
| 133.07572815534 | 133.834951456311 | 122.992233009709 | 204.471844660194 | 17.8990291262136 | 18.3300970873786 | 21.3009708737864 | 49.9883495145631 |
https://space.bilibili.com/327258623?spm_id_from=333.999.0.0
QQ group1: 957577822 (full)
QQ group2: 546943464
Set up a cross-compilation environment based on the following link.
cat << 'EOF' | sudo tee /etc/apt/sources.list.d/kaylordut.list
deb [signed-by=/etc/apt/keyrings/kaylor-keyring.gpg] http://apt.kaylordut.cn/kaylordut/ kaylordut main
EOF
sudo mkdir /etc/apt/keyrings -pv
sudo wget -O /etc/apt/keyrings/kaylor-keyring.gpg http://apt.kaylordut.cn/kaylor-keyring.gpg
sudo apt update
sudo apt install kaylordut-dev libbytetrack
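After adding the repository, you can sanity-check that apt resolves both packages before building. This verification step is my addition, not part of the original instructions.

```shell
# Show which repository and version each package would be installed from.
apt policy kaylordut-dev libbytetrack
```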
If your OS is not Ubuntu 22.04, you can find the kaylordut-dev and libbytetrack sources on my GitHub.
- Compile
git clone https://github.com/kaylorchen/rk3588-yolo-demo.git
cd rk3588-yolo-demo/src/yolov8
mkdir build
cd build
cmake -DCMAKE_TOOLCHAIN_FILE=/path/to/toolchain-aarch64.cmake -DCMAKE_EXPORT_COMPILE_COMMANDS=ON ..
make
'/path/to/toolchain-aarch64.cmake' is the absolute path of your toolchain .cmake file
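If you do not already have a toolchain file, a minimal aarch64 cross-compile toolchain file typically looks like the sketch below. The compiler names assume a stock Ubuntu cross toolchain (package gcc-aarch64-linux-gnu / g++-aarch64-linux-gnu); adjust them to your actual toolchain.

```cmake
# toolchain-aarch64.cmake - minimal sketch for cross-compiling to the RK3588 (aarch64)
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR aarch64)

# Cross compilers (assumed to be on PATH; adjust for your toolchain).
set(CMAKE_C_COMPILER aarch64-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER aarch64-linux-gnu-g++)

# Look up libraries and headers for the target, but programs on the host.
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
```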
- Run
Usage: ./videofile_demo [--model_path|-m model_path] [--input_filename|-i input_filename] [--threads|-t thread_count] [--framerate|-f framerate] [--label_path|-l label_path]
Usage: ./camera_demo [--model_path|-m model_path] [--camera_index|-i index] [--width|-w width] [--height|-h height] [--threads|-t thread_count] [--fps|-f framerate] [--label_path|-l label_path]
Usage: ./imagefile_demo [--model_path|-m model_path] [--input_filename|-i input_filename] [--label_path|-l label_path]
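For example, assuming you have copied the built binaries, a model, and a label file onto the board (all file names below are placeholders, not files shipped at these exact paths):

```shell
# Video file inference: 8 worker threads, 30 fps target.
./videofile_demo -m yolov8n.rknn -i test.mp4 -t 8 -f 30 -l labels.txt

# Camera inference from camera index 0 at 1280x720.
./camera_demo -m yolov8n.rknn -i 0 -w 1280 -h 720 -t 8 -f 30 -l labels.txt

# Single-image inference.
./imagefile_demo -m yolov8n.rknn -i test.jpg -l labels.txt
```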
You can run the commands above on your RK3588.