
Commit 0b18120

Author: Fabrizio Romanelli

Adding RNN to the library

1 parent 66978d3 · commit 0b18120

File tree

3 files changed: 231 additions, 92 deletions


README.md

Lines changed: 89 additions & 3 deletions
@@ -10,7 +10,7 @@ The current version is 0.0.2.
 
 ## Changelog
 
-11.10.2023 - Version 0.0.3: Integration of a fully functional Convolutional Neural Network class.
+11.10.2023 - Version 0.0.3: Integration of fully functional Convolutional Neural Network and Recurrent Neural Network classes.
 
 10.10.2023 - Version 0.0.2: Integration of a fully functional multivariate Deep Neural Network class.
 
 ## Overview

@@ -38,7 +38,7 @@ This will install the library with full support for tensorflow-gpu.
 ## Quick Start
 
 ### Multivariate Deep Neural Network (`multivariateDNN`)
-Here's a simple example of how to use the deepforge to create a simple Deep Neural Network via the `multivariateDNN` class:
+Here's a simple example of how to deepforge a Deep Neural Network via the `multivariateDNN` class:
 
 ```python
 import numpy as np

@@ -236,7 +236,7 @@ ________________________________________________________________________________
 For more detailed usage and examples, please refer to the documentation.
 
 ### Convolutional Neural Network (`CNN`)
-Here's a simple example of how to use the deepforge to create a simple Convolutional Neural Network via the `CNN` class and fitting the model with the MNIST dataset:
+Here's a simple example of how to deepforge a Convolutional Neural Network via the `CNN` class and fit the model on the MNIST dataset:
 
 ```python
 # Convolutional Neural Network example with MNIST dataset training and validation

@@ -336,6 +336,92 @@ Test accuracy: 0.991100013256073
 [DF] Model saved!
 ```
 
+### Recurrent Neural Network (`RNN`)
+Here's a simple example of how to deepforge a Recurrent Neural Network via the `RNN` class:
+
+```python
+# Simple Recurrent Neural Network example
+
+# Make an instance of an RNN
+rnn = df.RNN(name="Simple RNN", inputN=1)
+
+# Set inputs, inner layers and out layers
+rnn.setInputs([{'shape': (1,2), 'name': 'Input layer'}])
+rnn.setRecurrentLayers([[{'units': 500}]])
+rnn.setOutLayers([{'units': 1, 'activation': 'linear'}])
+
+# Configure the model
+rnn.setModelConfiguration(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
+
+# Build the model and print the summary
+rnn.build()
+rnn.summary()
+
+################################################
+################################################
+################################################
+
+# Another RNN example with 3 stacked LSTM layers
+rnn2 = df.RNN(name="Stacked RNN")
+
+# Set inputs, inner layers and out layers
+rnn2.setInputs([{'shape': (6,2), 'name': 'Input layer'}])
+rnn2.setRecurrentLayers([[{'units': 500, 'return_sequences': True},{'units': 500, 'return_sequences': True},{'units': 500}]])
+rnn2.setOutLayers([{'units': 1, 'activation': 'linear'}])
+
+# Configure the model
+rnn2.setModelConfiguration(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
+
+# Build the model and print the summary
+rnn2.build()
+rnn2.summary()
+```
+
+The output of the previous code is:
+
+```
+[DF] Building model...
+[DF] Model built!
+Model: "SimpleRNN"
+_________________________________________________________________
+ Layer (type)                Output Shape              Param #
+=================================================================
+ Input layer (InputLayer)    [(None, 1, 2)]            0
+
+ cu_dnnlstm_4 (CuDNNLSTM)    (None, 500)               1008000
+
+ dense_2 (Dense)             (None, 1)                 501
+
+=================================================================
+Total params: 1,008,501
+Trainable params: 1,008,501
+Non-trainable params: 0
+_________________________________________________________________
+[DF] Building model...
+[DF] Model built!
+
+
+Model: "StackedRNN"
+_________________________________________________________________
+ Layer (type)                Output Shape              Param #
+=================================================================
+ Input layer (InputLayer)    [(None, 6, 2)]            0
+
+ cu_dnnlstm_5 (CuDNNLSTM)    (None, 6, 500)            1008000
+
+ cu_dnnlstm_6 (CuDNNLSTM)    (None, 6, 500)            2004000
+
+ cu_dnnlstm_7 (CuDNNLSTM)    (None, 500)               2004000
+
+ dense_3 (Dense)             (None, 1)                 501
+
+=================================================================
+Total params: 5,016,501
+Trainable params: 5,016,501
+Non-trainable params: 0
+_________________________________________________________________
+```
+
 ## Documentation
 
 Check out the full documentation for [Keras](https://keras.io/api/) and [Tensorflow](https://www.tensorflow.org/api_docs) for in-depth information on how to use the library.
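
For orientation, the "Simple RNN" summary added above corresponds roughly to the following plain Keras model. This is only a sketch under assumptions, not deepforge's implementation: it uses `tf.keras.layers.LSTM` (which selects the cuDNN kernel automatically on GPU in TF 2.x) instead of the `CuDNNLSTM` layer shown in the summary, so the recurrent layer's parameter count comes out as 1,006,000 rather than 1,008,000 (CuDNNLSTM keeps separate input and recurrent bias vectors).

```python
# Rough plain-Keras equivalent of the "Simple RNN" example above (a sketch, not
# deepforge's internals). Assumes one recurrent layer and a linear Dense output,
# as the printed summary suggests.
from tensorflow import keras
from tensorflow.keras import layers

# (timesteps=1, features=2), mirroring rnn.setInputs([{'shape': (1, 2), ...}])
inputs = keras.Input(shape=(1, 2), name="Input_layer")

# 500 recurrent units; tf.keras.layers.LSTM uses the cuDNN kernel when run on GPU.
x = layers.LSTM(500)(inputs)

# One linear output unit, mirroring rnn.setOutLayers([{'units': 1, 'activation': 'linear'}])
outputs = layers.Dense(1, activation="linear")(x)

model = keras.Model(inputs, outputs, name="SimpleRNN")

# Same compile settings as the README example (note: categorical_crossentropy with a
# single linear output is unusual; a regression setup would typically use 'mse').
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```

The stacked variant follows the same pattern: the first two recurrent layers set `return_sequences=True` so each one emits the full `(6, 500)` sequence to the next layer, which is what produces the 5,016,501-parameter summary above.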

deepforge.ipynb

Lines changed: 49 additions & 80 deletions
@@ -2,33 +2,9 @@
 "cells": [
 {
 "cell_type": "code",
-"execution_count": 1,
+"execution_count": null,
 "metadata": {},
-"outputs": [
-{
-"name": "stderr",
-"output_type": "stream",
-"text": [
-"2023-10-11 12:21:28.203222: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX_VNNI FMA\n",
-"To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\n",
-"2023-10-11 12:21:29.894158: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:967] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node\n",
-"Your kernel may have been built without NUMA support.\n",
-"2023-10-11 12:21:29.897596: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:967] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node\n",
-"Your kernel may have been built without NUMA support.\n",
-"2023-10-11 12:21:29.897644: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:967] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node\n",
-"Your kernel may have been built without NUMA support.\n",
-"2023-10-11 12:21:30.420836: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:967] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node\n",
-"Your kernel may have been built without NUMA support.\n",
-"2023-10-11 12:21:30.420967: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:967] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node\n",
-"Your kernel may have been built without NUMA support.\n",
-"2023-10-11 12:21:30.420975: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1700] Could not identify NUMA node of platform GPU id 0, defaulting to 0. Your kernel may not have been built with NUMA support.\n",
-"2023-10-11 12:21:30.421000: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:967] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node\n",
-"Your kernel may have been built without NUMA support.\n",
-"2023-10-11 12:21:30.421025: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1613] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 1585 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3050 Ti Laptop GPU, pci bus id: 0000:01:00.0, compute capability: 8.6\n"
-]
-}
-],
+"outputs": [],
 "source": [
 "# Jupyter notebook for testing deepforge library\n",
 "# author: Fabrizio Romanelli\n",

@@ -48,7 +24,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"# Simple DNN example\n",
+"# Simple Deep Neural Network example\n",
 "import numpy as np\n",
 "\n",
 "# Make an instance of a multivariate DNN\n",

@@ -95,7 +71,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"# DNN with 2 input layers and custom loss function example\n",
+"# Deep Neural Network with 2 input layers and custom loss function example\n",
 "import numpy as np\n",
 "from keras.losses import MeanSquaredError\n",
 "\n",

@@ -155,59 +131,9 @@
 },
 {
 "cell_type": "code",
-"execution_count": 4,
+"execution_count": null,
 "metadata": {},
-"outputs": [
-{
-"name": "stdout",
-"output_type": "stream",
-"text": [
-"[DF] Building model...\n",
-"[DF] Model built!\n",
-"Model: \"SimpleCNN\"\n",
-"_________________________________________________________________\n",
-" Layer (type)                Output Shape              Param #   \n",
-"=================================================================\n",
-" Input Layer (InputLayer)    [(None, 28, 28, 1)]      0         \n",
-"                                                                 \n",
-" conv2d_6 (Conv2D)           (None, 26, 26, 32)       320       \n",
-"                                                                 \n",
-" max_pooling2d_4 (MaxPooling  (None, 13, 13, 32)      0         \n",
-" 2D)                                                             \n",
-"                                                                 \n",
-" conv2d_7 (Conv2D)           (None, 11, 11, 64)       18496     \n",
-"                                                                 \n",
-" max_pooling2d_5 (MaxPooling  (None, 5, 5, 64)        0         \n",
-" 2D)                                                             \n",
-"                                                                 \n",
-" conv2d_8 (Conv2D)           (None, 3, 3, 64)         36928     \n",
-"                                                                 \n",
-" flatten_2 (Flatten)         (None, 576)               0         \n",
-"                                                                 \n",
-" dense_4 (Dense)             (None, 64)                36928     \n",
-"                                                                 \n",
-" dense_5 (Dense)             (None, 10)                650       \n",
-"                                                                 \n",
-"=================================================================\n",
-"Total params: 93,322\n",
-"Trainable params: 93,322\n",
-"Non-trainable params: 0\n",
-"_________________________________________________________________\n",
-"Epoch 1/5\n",
-"938/938 [==============================] - 6s 5ms/step - loss: 0.1780 - accuracy: 0.9445\n",
-"Epoch 2/5\n",
-"938/938 [==============================] - 6s 6ms/step - loss: 0.0501 - accuracy: 0.9843\n",
-"Epoch 3/5\n",
-"938/938 [==============================] - 5s 6ms/step - loss: 0.0365 - accuracy: 0.9886\n",
-"Epoch 4/5\n",
-"938/938 [==============================] - 6s 6ms/step - loss: 0.0273 - accuracy: 0.9916\n",
-"Epoch 5/5\n",
-"938/938 [==============================] - 6s 6ms/step - loss: 0.0226 - accuracy: 0.9930\n",
-"313/313 [==============================] - 1s 4ms/step - loss: 0.0281 - accuracy: 0.9911\n",
-"Test accuracy: 0.991100013256073\n"
-]
-}
-],
+"outputs": [],
 "source": [
 "# Convolutional Neural Network example with MNIST dataset training and validation\n",
 "\n",

@@ -251,6 +177,49 @@
 "test_loss, test_acc = cnnModel.evaluate(test_images, test_labels)\n",
 "print(\"Test accuracy:\", test_acc)"
 ]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"# Recurrent Neural Network examples\n",
+"\n",
+"# Make an instance of an RNN\n",
+"rnn = df.RNN(name=\"Simple RNN\", inputN=1)\n",
+"\n",
+"# Set inputs, inner layers and out layers\n",
+"rnn.setInputs([{'shape': (1,2), 'name': 'Input layer'}])\n",
+"rnn.setRecurrentLayers([[{'units': 500}]])\n",
+"rnn.setOutLayers([{'units': 1, 'activation': 'linear'}])\n",
+"\n",
+"# Configure the model\n",
+"rnn.setModelConfiguration(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])\n",
+"\n",
+"# Build the model and print the summary\n",
+"rnn.build()\n",
+"rnn.summary()\n",
+"\n",
+"################################################\n",
+"################################################\n",
+"################################################\n",
+"\n",
+"# Another RNN example with 3 stacked LSTM layers\n",
+"rnn2 = df.RNN(name=\"Stacked RNN\")\n",
+"\n",
+"# Set inputs, inner layers and out layers\n",
+"rnn2.setInputs([{'shape': (6,2), 'name': 'Input layer'}])\n",
+"rnn2.setRecurrentLayers([[{'units': 500, 'return_sequences': True},{'units': 500, 'return_sequences': True},{'units': 500}]])\n",
+"rnn2.setOutLayers([{'units': 1, 'activation': 'linear'}])\n",
+"\n",
+"# Configure the model\n",
+"rnn2.setModelConfiguration(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])\n",
+"\n",
+"# Build the model and print the summary\n",
+"rnn2.build()\n",
+"rnn2.summary()"
+]
 }
 ],
 "metadata": {
