examples/advanced: add TFLM Fashion-MNIST INT8 demo (tflm_fmnist_demo) #21664
base: master
Conversation
Murdock results: ❌ FAILED 9e22496 examples/advanced: add TFLM Fashion-MNIST INT8 demo (tflm_fmnist_demo)
Build failures (1)
I did not go through the code very thoroughly yet, but here are some style-related comments. Please also check the output of the static tests in the "Files" tab of the GitHub PR section (https://github.com/RIOT-OS/RIOT/pull/21664/files) or run `make static-test` in your local repository.
There is a lot of trailing whitespace at the end of code lines.
```mk
USEPKG += tflite-micro
USEMODULE += xtimer

# ✅ List ALL C++ sources here
```
Suggested change:
```diff
-# ✅ List ALL C++ sources here
+# List ALL C++ sources here
```
It's generally good practice not to use emojis in source files, even though modern editors make it possible. Did ChatGPT generate this?
I think so, yes; I had a bug and it gave me a new version of the Makefile. By the way, my work in RIOT was part of a mandatory internship I did for Mr. Hahm at my university, which I finished two days ago. Since I am starting my thesis now, I do not have a lot of time; I will probably only have time to update all of the mentioned aspects after completing it.
In the meantime you can gladly go through my code in more depth and point out other flaws if you want.
Thank you very much for your review thus far. I do not know how high the demand is, but I saw that this TFLite Micro library was available in RIOT, so I thought it would be good to make an advanced example out of it.
```mk
FEATURES_REQUIRED += cpp
USEPKG += tflite-micro
USEMODULE += xtimer
```
`xtimer` has been deprecated for a long time; it would be good to use `ztimer` instead.
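As a sketch of what that replacement could look like in the demo's timing code (assuming millisecond resolution is enough; `ztimer_now()` and `ZTIMER_MSEC` are part of RIOT's ztimer API, and the Makefile would need `USEMODULE += ztimer_msec` instead of `xtimer`):

```c
/* Sketch: timing with ztimer instead of the deprecated xtimer.
 * Needs `#include "ztimer.h"` and `USEMODULE += ztimer_msec`. */
uint32_t t0 = ztimer_now(ZTIMER_MSEC);
/* ... interpreter.Invoke() ... */
uint32_t elapsed_ms = ztimer_now(ZTIMER_MSEC) - t0;
```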
```cpp
for (int i = 0; i < SAMPLE_COUNT; ++i) {
    quantize_into_input(&interpreter, g_images[i]);

    uint32_t t0 = xtimer_now_usec();
```
Do you really need microsecond precision? On an embedded system, that can be quite power-consuming.
@@ -0,0 +1,123 @@
```cpp
// main.cpp — RIOT + TFLite Micro, runtime quantization from uint8 images (48x48)
```
Please add a header to your source files according to our Coding Convention: https://github.com/RIOT-OS/RIOT/blob/master/CODING_CONVENTIONS.md#documentation
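For reference, RIOT source files start with a license header plus a doxygen block; a sketch along these lines (year, group, and author fields are placeholders to fill in, and the exact license text should follow the repository's convention):

```c
/*
 * Copyright (C) <year> <author>
 *
 * This file is subject to the terms and conditions of the GNU Lesser
 * General Public License v2.1. See the file LICENSE in the top level
 * directory for more details.
 */

/**
 * @ingroup     examples
 * @{
 *
 * @file
 * @brief       TFLite Micro Fashion-MNIST INT8 inference demo
 *
 * @author      <name> <<email>>
 *
 * @}
 */
```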
```cpp
#include "xtimer.h"
#include "samples.h" // generated by the Python helper: g_images (uint8), g_labels

// TFLite Micro
```
RIOT uses C-Style Comments instead of C++ Style comments, please adapt the source files according to our Coding Convention: https://github.com/RIOT-OS/RIOT/blob/master/CODING_CONVENTIONS.md#comments
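Concretely, the quoted line would become:

```c
/* TFLite Micro */
```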
```cpp
// Adjust at build time: make TENSOR_ARENA_SIZE=262144
#ifndef TENSOR_ARENA_SIZE
#define TENSOR_ARENA_SIZE (200 * 1024)
```
Suggested change:
```diff
-#define TENSOR_ARENA_SIZE (200 * 1024)
+# define TENSOR_ARENA_SIZE (200 * 1024)
```
Preprocessor directives inside of `#if`s should be indented; please refer to our Coding Convention: https://github.com/RIOT-OS/RIOT/blob/master/CODING_CONVENTIONS.md#indentation-of-preprocessor-directives
Contribution description
This PR adds a new advanced example, `examples/advanced/tflm_fmnist_demo`, a minimal TensorFlow Lite Micro (TFLM) INT8 inference demo for RIOT.

Highlights:
- Embedded INT8 model in `model_data.cpp` (`g_model`, `g_model_len`), distilled from a larger teacher model (see provenance below).
- 48×48 grayscale Fashion-MNIST sample images are provided. At runtime the app quantizes these images into the model's input tensor and prints the predicted class.
- Self-contained example showing how to use:
  - `tflite-micro` via `USEPKG`
  - `MicroInterpreter`, tensor arena, and minimal op resolver (Conv2D, DepthwiseConv2D, AvgPool, FullyConnected, Reshape, Mean, Quantize/Dequantize)
- Includes helper scripts and ten PNGs to regenerate the small test set, but the firmware itself only consumes `samples.h` (the PNGs are for developer convenience).

Provenance of the student model: https://github.com/KiyoLelou10/FashionMNISTDistill
A larger model was trained and distilled into this compact INT8 student. That repo also contains people images; with the same pattern shown here, one could train/distill a tiny model to count people in a photo (e.g., 0–5+) and run it on RIOT.
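To make the quantization step above concrete: TFLite INT8 inputs use an affine mapping, q = round(x / scale) + zero_point. A hedged Python sketch follows; the real `scale`/`zero_point` come from the model's input tensor, and the values below are only a typical assumption for a model trained on [0, 1]-normalized pixels.

```python
# Sketch of the runtime uint8 -> int8 quantization the demo performs.
# scale/zero_point below are illustrative; the firmware reads the real
# ones from the TFLite input tensor's quantization parameters.

def quantize_pixel(x, scale, zero_point):
    """Affine-quantize a float in [0, 1] to int8: q = round(x/scale) + zp."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))  # clamp to the int8 range

def quantize_image(pixels_u8, scale, zero_point):
    # Normalize raw uint8 pixels to [0, 1] before quantizing
    return [quantize_pixel(p / 255.0, scale, zero_point) for p in pixels_u8]

# Typical parameters for an input trained on [0, 1]-normalized data
scale, zero_point = 1.0 / 255.0, -128
print(quantize_image([0, 128, 255], scale, zero_point))  # -> [-128, 0, 127]
```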
Directory (new):
Testing procedure
nRF52840 DK
```sh
cd examples/advanced/tflm_fmnist_demo
PORT=/dev/ttyACM0 BOARD=nrf52840dk make all term flash WERROR=0
```
Notes:
The first build/flash takes a while because TFLM packages are compiled (flatbuffers, gemmlowp, ruy, kernels, micro, etc.). This is expected; you’ll see lines like:
Warnings from TFLM are noisy; `WERROR=0` keeps them non-fatal.
If `AllocateTensors()` fails on another board, try a larger arena:

Expected serial output (example):
(With reference kernels on nRF52840, ~3.1 s/inference is normal. Enable CMSIS-NN for faster inference.)
Native (Linux host)
Regenerate the sample header (optional)
The firmware uses `samples.h`. PNGs in `imgs/` are provided so reviewers can rebuild:

(Or create new PNGs with `save_fmnist_10_pngs.py` and regenerate `samples.h`.)
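For illustration, a minimal sketch of what such a header generator could look like. This is hypothetical code: only the `g_images`/`g_labels` names and the 48×48 shape are taken from the PR description, and PNG decoding is stubbed out with raw bytes (the real helper presumably loads pixels with an image library).

```python
# Hypothetical sketch: emit a samples.h-style C header from raw 48x48
# grayscale images. PNG decoding is omitted; a real helper would load
# pixel data from imgs/ with an image library such as Pillow.

W = H = 48

def emit_samples_header(images, labels):
    """images: list of bytes objects, each W*H uint8 pixels."""
    assert all(len(img) == W * H for img in images)
    lines = [
        "#ifndef SAMPLES_H",
        "#define SAMPLES_H",
        "",
        "#include <stdint.h>",
        "",
        "#define SAMPLE_COUNT %d" % len(images),
        "",
        "static const uint8_t g_images[SAMPLE_COUNT][%d] = {" % (W * H),
    ]
    for img in images:
        lines.append("    { %s }," % ", ".join(str(b) for b in img))
    lines.append("};")
    lines.append("")
    lines.append("static const uint8_t g_labels[SAMPLE_COUNT] = { %s };"
                 % ", ".join(str(lab) for lab in labels))
    lines.append("")
    lines.append("#endif /* SAMPLES_H */")
    return "\n".join(lines)

# Two dummy mid-gray images with labels 0 and 9
header = emit_samples_header([bytes([128] * (W * H))] * 2, [0, 9])
print(header.splitlines()[5])  # -> #define SAMPLE_COUNT 2
```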
Issues/PRs references
None. This is a self-contained new example.