---
sidebar_position: 2
---
# Introduction to TensorFlow in Raspberry Pi Environment

Now that you are familiar with the basics of DNNs, CNNs, and object detection, let’s move on to the TensorFlow machine learning library!

So, what is [TensorFlow](https://www.tensorflow.org/), and why is it so popular in the ML domain?

TensorFlow is an open-source platform created by Google for machine learning (ML) and artificial intelligence (AI) applications. It is designed to help developers and researchers build and train powerful ML models quickly and efficiently. By offering a flexible set of tools, libraries, and community resources, TensorFlow has become a go-to platform for everything from simple ML models to complex deep learning architectures.
## What is a Tensor?

A **tensor** is a mathematical object that stores data in multiple dimensions. Think of it as a container for numbers, similar to a list or a table.

For example, a single number (like 5) is a 0-dimensional tensor, a list of numbers (like [1, 2, 3]) is a 1-dimensional tensor, and a grid of numbers (like a table) is a 2-dimensional tensor. Tensors can go beyond these dimensions, forming cubes or even more complex shapes.

Tensors are essential in machine learning because they can hold vast amounts of data, such as images or text, in ways that make it easy for computers to process and analyze. This flexibility makes tensors a key part of tools like TensorFlow, where they are used to train AI models.

[Reference](https://dev.to/mmithrakumar/scalars-vectors-matrices-and-tensors-with-tensorflow-2-0-1f66)
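These ranks can be checked directly in TensorFlow (a minimal sketch; the values below are arbitrary examples):

```python
import tensorflow as tf

# A single number is a 0-dimensional tensor (a scalar)
scalar = tf.constant(5)
# A list of numbers is a 1-dimensional tensor (a vector)
vector = tf.constant([1, 2, 3])
# A grid of numbers is a 2-dimensional tensor (a matrix)
matrix = tf.constant([[1, 2], [3, 4]])

# tf.rank returns the number of dimensions of each tensor
print(tf.rank(scalar).numpy(), tf.rank(vector).numpy(), tf.rank(matrix).numpy())
# → 0 1 2
```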
## How TensorFlow Works

At its core, TensorFlow works with tensors (multi-dimensional arrays) and uses them to perform operations on data. It organizes computations into graphs, where nodes represent operations (like adding or multiplying) and edges represent the data flowing between them. This makes TensorFlow highly efficient at handling large amounts of data, which is key in ML tasks.

[Reference](https://www.analyticsvidhya.com/blog/2016/10/an-introduction-to-implementing-neural-networks-using-tensorflow/)
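As a small illustration of operations acting on tensors (the function and variable names here are our own):

```python
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])

# Operations are nodes; tensors flow along the edges between them
added = tf.add(a, b)        # element-wise addition
product = tf.matmul(a, b)   # matrix multiplication

# @tf.function traces the Python function into a TensorFlow graph
@tf.function
def scale_and_sum(x):
    return tf.reduce_sum(x * 2.0)

print(added.numpy()[0, 0])       # 6.0
print(scale_and_sum(a).numpy())  # 20.0
```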
## Key Highlights of TensorFlow

1. **Powerful and Versatile**: Supports a wide range of tasks, from image recognition to speech processing, on devices ranging from small boards to large servers.

2. **Easy-to-Build Models with Keras**: The integrated Keras API simplifies neural network building for beginners and advanced users alike.

3. **Flexible Deployment**: Models can run on CPUs, GPUs, mobile devices, IoT hardware, and browsers.

4. **Supports Advanced AI Research**: Offers low-level tools for deep customization, popular in both academia and industry.
## What is the Relationship Between Keras and TensorFlow?

Keras is a high-level API that runs on top of TensorFlow, making it easier to build, train, and test deep learning models. Here’s how they relate:

**Keras as Part of TensorFlow**: Originally, Keras was an independent library that could work with multiple backends (including TensorFlow, Theano, and CNTK). Now it is officially integrated into TensorFlow as `tf.keras`, so users can access it directly within TensorFlow.

**Simplifying TensorFlow**: Keras provides a simple interface to TensorFlow’s powerful features, making it easier for beginners to build models without needing to dive into complex, lower-level TensorFlow code.

**Streamlined Workflow**: Keras allows for quick prototyping and testing of neural networks, while TensorFlow handles the more intensive computations and optimization behind the scenes.
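For example, a small network can be defined through `tf.keras` in a few lines (a sketch only; the layer sizes here are arbitrary):

```python
import tensorflow as tf

# Keras is accessed directly through TensorFlow as tf.keras
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(3, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
```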
## Building a Machine Learning Pipeline with TensorFlow

**Data Collection:**

Use TensorFlow to gather and preprocess data efficiently from various sources (e.g., images).

**Data Preprocessing:**

Leverage TensorFlow's tools for data cleaning, normalization, and augmentation to enhance model performance.

**Model Development:**

Utilize TensorFlow/Keras to build and train deep learning models with layers suitable for tasks like classification or detection. Experiment easily with architectures and hyperparameters to optimize model performance.

**Training and Evaluation:**

Use built-in functions for training models on large datasets with GPU acceleration, and employ TensorFlow’s evaluation metrics to assess model accuracy and performance.

**Model Saving and Exporting:**

Use TensorFlow’s capabilities to save trained models in various formats (e.g., SavedModel) for easy deployment.

**Deployment on Raspberry Pi:**

Convert models to TensorFlow Lite format for efficient inference on the Raspberry Pi, then use TensorFlow Lite to run predictions with low latency and minimal resource usage.
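The final deployment step can be sketched with TensorFlow's converter API (a minimal example; the tiny stand-in model below takes the place of your actually trained network):

```python
import tensorflow as tf

# Stand-in for a trained model; in practice, use your trained network
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation='softmax', input_shape=(32,)),
])

# Convert the Keras model to the TensorFlow Lite format
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Write the .tflite file, ready to copy to the Raspberry Pi
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
```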
## Let's Create

Now let's talk about building a model, as well as training and validation. With the building blocks of CNNs in mind, let's create one using a dataset provided by TensorFlow. We will use Google Colab to build the model. You can explore datasets from TensorFlow at the TensorFlow Datasets overview. The dataset we will use is the CIFAR-10 dataset.

Here’s a simple explanation of what each line does in our model definition:

1. **models.Sequential()**: Initializes a sequential model, which allows you to build a linear stack of layers.

2. **model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))**: Adds a convolutional layer with 32 filters, a 3x3 kernel, ReLU activation, and an input shape for 32x32 RGB images.

3. **model.add(layers.MaxPooling2D((2, 2)))**: Adds a max pooling layer that reduces the spatial dimensions by taking the maximum value from each 2x2 region.

4. **model.add(layers.Dense(64, activation='relu'))**: Adds a fully connected (dense) layer with 64 units and ReLU activation to learn complex patterns.
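Putting those four lines together (we also add a `Flatten` layer and an output layer so the sketch forms a complete model; the Colab notebook contains the full version):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential()
# Convolutional layer: 32 filters, 3x3 kernel, ReLU, 32x32 RGB input
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
# Max pooling: keep the maximum of each 2x2 region
model.add(layers.MaxPooling2D((2, 2)))
# Flatten the feature maps into a vector for the dense layers
model.add(layers.Flatten())
# Fully connected layer with 64 units
model.add(layers.Dense(64, activation='relu'))
# Output layer: one unit per CIFAR-10 class
model.add(layers.Dense(10))

model.summary()
```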
## LiteRT (TensorFlow Lite)

LiteRT (short for Lite Runtime), formerly known as TensorFlow Lite (TFLite), is Google’s high-performance runtime specifically designed for on-device AI. It enables developers to deploy machine learning models on resource-constrained devices like smartphones, IoT devices, and single-board computers such as the Raspberry Pi. LiteRT provides a library of ready-to-run models covering a wide range of AI tasks. Additionally, it supports the conversion of models built in TensorFlow, PyTorch, and JAX to the LiteRT format through AI Edge conversion and optimization tools. For devices with limited resources, such as the Raspberry Pi, quantizing models is essential. Quantization reduces model size and memory usage by lowering the precision of model weights, which not only speeds up inference but also reduces power consumption, making LiteRT ideal for edge AI applications.

[Reference 1](https://www.kaggle.com/code/ashusma/understanding-tf-lite-and-model-optimization)
[Reference 2](https://ai.google.dev/edge/litert/models/model_analyzer)
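As an example of how quantization is enabled during conversion (a minimal sketch; the tiny model below is only a placeholder for your trained one):

```python
import tensorflow as tf

# Placeholder model; in practice you would quantize your trained model
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, input_shape=(32,)),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Enable post-training dynamic-range quantization:
# weights are stored at reduced precision, shrinking the model
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_model = converter.convert()

print(f"Quantized model size: {len(quantized_model)} bytes")
```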
We have created a Colab tutorial to train a model on the CIFAR-10 dataset. You can run each cell one by one to get hands-on experience.

<a target="_blank" href="https://colab.research.google.com/github/KasunThushara/Tutorial-of-AI-Kit-with-Raspberry-Pi-From-Zero-to-Hero/blob/main/notebook/Chapter1/TensorFlow_CNN.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>

In this lesson, we will set up a Raspberry Pi to run image classification using a pre-trained EfficientNet model and a standard dataset. This guide will walk you through the environment setup, model preparation, and running a live image classification script.
## Prepare Your Raspberry Pi

First, let's create a folder for your TensorFlow course and set up a virtual environment.

```bash
mkdir my_tf_course
cd my_tf_course
python -m venv --system-site-packages env
source env/bin/activate
```
## Install TensorFlow and OpenCV

```bash
pip3 install opencv-contrib-python tensorflow
```
## Download the EfficientNet Model and Labels

Download the [EfficientNet pre-trained model](../../models/Chapter2/2.tflite) and the [imagenet-classes.txt](../../models/Chapter2/imagenet-classes.txt) file (which contains the labels). Copy these files to a folder on your Desktop named `tf_files`.
## Create the Python Script

Save the following script as `tflesson1.py` inside the `tf_files` folder.

```python
import os
import cv2
import numpy as np
import tensorflow as tf

# Define paths as variables
MODEL_PATH = os.path.expanduser("/home/pi/Desktop/tf_files/2.tflite")  # Adjust as needed
LABELS_PATH = os.path.expanduser("/home/pi/Desktop/tf_files/imagenet-classes.txt")  # Adjust as needed

# Load the TFLite model
interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()

# Get input and output details for the model
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Load labels (one label per line)
with open(LABELS_PATH, 'r') as f:
    labels = [line.strip() for line in f.readlines()]

# Resize the frame to the model's 224x224 input and add a batch dimension
def preprocess_image(image):
    image = cv2.resize(image, (224, 224))
    image = np.expand_dims(image, axis=0).astype(np.uint8)
    return image

# Run inference and return the indices and scores of the top 3 predictions
def get_top_3_predictions(interpreter, image):
    interpreter.set_tensor(input_details[0]['index'], image)
    interpreter.invoke()

    output = interpreter.get_tensor(output_details[0]['index'])
    output = np.squeeze(output)
    top_3_indices = output.argsort()[-3:][::-1]
    return top_3_indices, output[top_3_indices]

# Start webcam capture
cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break

    image = preprocess_image(frame)
    top_3_indices, top_3_probs = get_top_3_predictions(interpreter, image)

    # Display the predictions with class names
    for i, (idx, prob) in enumerate(zip(top_3_indices, top_3_probs)):
        label = labels[idx] if idx < len(labels) else "Unknown"
        cv2.putText(frame, f"Top {i+1}: {label} ({prob:.2f})", (10, 30 + i * 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 2)

    cv2.imshow('Webcam Feed - Top 3 Predictions', frame)
    # Press 'q' to quit
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```
## Run the Script

Navigate to the folder where your Python file (`tflesson1.py`) is saved.

```bash
cd /home/pi/Desktop/tf_files
```

Run the Python script to start the webcam feed with predictions.

```bash
python tflesson1.py
```