@@ -8,6 +8,11 @@ A python object oriented library to model deep neural networks based on Keras/Te

-The current version is 0.0.2.
+The current version is 0.0.3.

+## Changelog
+
+- 11.10.2023 - Version 0.0.3: Integration of fully functional Convolutional Neural Network and Recurrent Neural Network classes.
+- 10.10.2023 - Version 0.0.2: Integration of a fully functional multivariate Deep Neural Network class.
+
## Overview

deepforge is a Python object-oriented library built on top of Keras and TensorFlow for simplifying the creation and training of deep neural networks. It provides a user-friendly interface for designing, configuring, and training neural network models, making it easier for developers and researchers to work with deep learning. Next versions of the library should include support for other deep learning frameworks (such as PyTorch, Theano, Caffe, etc.).
@@ -32,7 +37,8 @@ This will install the library with full support for tensorflow-gpu.

## Quick Start

-Here's a simple example of how to use the deepforge to create a simple Deep Neural Network via the `multivariateDNN` class:
+### Multivariate Deep Neural Network (`multivariateDNN`)
+Here's a simple example of how to use deepforge to create a simple Deep Neural Network via the `multivariateDNN` class:

```python
import numpy as np
@@ -229,6 +235,193 @@ ________________________________________________________________________________

For more detailed usage and examples, please refer to the documentation.

+### Convolutional Neural Network (`CNN`)
+Here's a simple example of how to use deepforge to create a simple Convolutional Neural Network via the `CNN` class and fit the model on the MNIST dataset:
+
+```python
+# Convolutional Neural Network example with MNIST dataset training and validation
+import numpy as np
+# Import the deepforge library
+import deepforge as df
+from keras.datasets import mnist
+from keras.utils import to_categorical
+
+# Initialize the environment
+df.initialize(CPU=20, GPU=1, VERBOSE='2', NPARRAYS=True)
+
+# Make an instance of a CNN
+cnn = df.CNN(name="Simple CNN", inputN=1)
+
+# Set inputs, inner layers and output layers
+cnn.setInputs([{'shape': (28, 28, 1), 'name': 'Input Layer'}])
+cnn.setConvLayers([[{'filters': 32, 'kernel_size': (3, 3), 'activation': 'relu'}, {'filters': 64, 'kernel_size': (3, 3), 'activation': 'relu'}, {'filters': 64, 'kernel_size': (3, 3), 'activation': 'relu'}]])
+cnn.setPoolLayers([[{'pool_size': (2, 2)}, {'pool_size': (2, 2)}]])
+cnn.setOutLayers([{'units': 64, 'activation': 'relu'}, {'units': 10, 'activation': 'softmax'}])
+
+# Configure the model
+cnn.setModelConfiguration(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
+
+# Build the model and print the summary
+cnn.build()
+cnn.summary()
+
+# Load the MNIST dataset and preprocess it
+(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
+train_images = train_images.reshape((60000, 28, 28, 1))
+test_images = test_images.reshape((10000, 28, 28, 1))
+train_images = train_images.astype('float32') / 255
+test_images = test_images.astype('float32') / 255
+train_labels = to_categorical(train_labels)
+test_labels = to_categorical(test_labels)
+
+# Fit the model
+cnn.fit(x=train_images, y=train_labels, epochs=5, batch_size=64)
+
+# Save the model
+cnn.save('CNN', tflite=False)
+
+# Get the Keras model and run a test to evaluate accuracy
+cnnModel = cnn.getModel()
+
+test_loss, test_acc = cnnModel.evaluate(test_images, test_labels)
+print("Test accuracy:", test_acc)
+```
+
+The output of the previous code snippet is reported here:
+```
+[DF] Building model...
+[DF] Model built!
+Model: "SimpleCNN"
+_________________________________________________________________
+ Layer (type)                Output Shape              Param #
+=================================================================
+ Input Layer (InputLayer)    [(None, 28, 28, 1)]       0
+
+ conv2d_6 (Conv2D)           (None, 26, 26, 32)        320
+
+ max_pooling2d_4 (MaxPooling (None, 13, 13, 32)        0
+ 2D)
+
+ conv2d_7 (Conv2D)           (None, 11, 11, 64)        18496
+
+ max_pooling2d_5 (MaxPooling (None, 5, 5, 64)          0
+ 2D)
+
+ conv2d_8 (Conv2D)           (None, 3, 3, 64)          36928
+
+ flatten_2 (Flatten)         (None, 576)               0
+
+ dense_4 (Dense)             (None, 64)                36928
+
+ dense_5 (Dense)             (None, 10)                650
+
+=================================================================
+Total params: 93,322
+Trainable params: 93,322
+Non-trainable params: 0
+_________________________________________________________________
+Epoch 1/5
+938/938 [==============================] - 6s 5ms/step - loss: 0.1780 - accuracy: 0.9445
+Epoch 2/5
+938/938 [==============================] - 6s 6ms/step - loss: 0.0501 - accuracy: 0.9843
+Epoch 3/5
+938/938 [==============================] - 5s 6ms/step - loss: 0.0365 - accuracy: 0.9886
+Epoch 4/5
+938/938 [==============================] - 6s 6ms/step - loss: 0.0273 - accuracy: 0.9916
+Epoch 5/5
+938/938 [==============================] - 6s 6ms/step - loss: 0.0226 - accuracy: 0.9930
+313/313 [==============================] - 1s 4ms/step - loss: 0.0281 - accuracy: 0.9911
+Test accuracy: 0.991100013256073
+[DF] Saving model...
+[DF] Model saved!
+```
+
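+If you want to reuse the trained network for inference, the Keras model returned by `getModel()` can be used directly. The following is a minimal sketch (not part of the original example) that assumes the `cnn` object and the preprocessed `test_images` from the snippet above are still in scope:
+
+```python
+# Minimal inference sketch: assumes `cnn` and `test_images` from the example above
+import numpy as np
+
+# Reuse the underlying Keras model built by deepforge
+kerasModel = cnn.getModel()
+
+# Predict class probabilities for the first 5 test images
+probabilities = kerasModel.predict(test_images[:5])
+
+# The last Dense layer uses softmax, so argmax recovers the predicted digit
+predictions = np.argmax(probabilities, axis=1)
+print("Predicted digits:", predictions)
+```
+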
+### Recurrent Neural Network (`RNN`)
+Here's a simple example of how to use deepforge to create a simple Recurrent Neural Network via the `RNN` class:
+
+```python
+# Simple Recurrent Neural Network example
+# Import the deepforge library (the environment is assumed to be initialized as in the CNN example above)
+import deepforge as df
+
+# Make an instance of an RNN
+rnn = df.RNN(name="Simple RNN", inputN=1)
+
+# Set inputs, inner layers and output layers
+rnn.setInputs([{'shape': (1, 2), 'name': 'Input layer'}])
+rnn.setRecurrentLayers([[{'units': 500}]])
+rnn.setOutLayers([{'units': 1, 'activation': 'linear'}])
+
+# Configure the model
+rnn.setModelConfiguration(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
+
+# Build the model and print the summary
+rnn.build()
+rnn.summary()
+
+################################################
+################################################
+################################################
+
+# Another RNN example with 3 stacked LSTM layers
+rnn2 = df.RNN(name="Stacked RNN")
+
+# Set inputs, inner layers and output layers
+rnn2.setInputs([{'shape': (6, 2), 'name': 'Input layer'}])
+rnn2.setRecurrentLayers([[{'units': 500, 'return_sequences': True}, {'units': 500, 'return_sequences': True}, {'units': 500}]])
+rnn2.setOutLayers([{'units': 1, 'activation': 'linear'}])
+
+# Configure the model
+rnn2.setModelConfiguration(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
+
+# Build the model and print the summary
+rnn2.build()
+rnn2.summary()
+```
+
+The output of the previous code is:
+
+```
+[DF] Building model...
+[DF] Model built!
+Model: "SimpleRNN"
+_________________________________________________________________
+ Layer (type)                Output Shape              Param #
+=================================================================
+ Input layer (InputLayer)    [(None, 1, 2)]            0
+
+ cu_dnnlstm_4 (CuDNNLSTM)    (None, 500)               1008000
+
+ dense_2 (Dense)             (None, 1)                 501
+
+=================================================================
+Total params: 1,008,501
+Trainable params: 1,008,501
+Non-trainable params: 0
+_________________________________________________________________
+[DF] Building model...
+[DF] Model built!
+
+
+Model: "StackedRNN"
+_________________________________________________________________
+ Layer (type)                Output Shape              Param #
+=================================================================
+ Input layer (InputLayer)    [(None, 6, 2)]            0
+
+ cu_dnnlstm_5 (CuDNNLSTM)    (None, 6, 500)            1008000
+
+ cu_dnnlstm_6 (CuDNNLSTM)    (None, 6, 500)            2004000
+
+ cu_dnnlstm_7 (CuDNNLSTM)    (None, 500)               2004000
+
+ dense_3 (Dense)             (None, 1)                 501
+
+=================================================================
+Total params: 5,016,501
+Trainable params: 5,016,501
+Non-trainable params: 0
+_________________________________________________________________
+```
+
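+As a rough guide to the input shape the recurrent models expect, here is a minimal sketch (not part of the original example) that feeds a dummy batch of sequences, each with 6 timesteps and 2 features, through the stacked network built above. It assumes the `RNN` class exposes the same `getModel()` accessor shown for the `CNN` class:
+
+```python
+# Dummy forward-pass sketch: assumes `rnn2` from the example above has been built
+# and that RNN exposes getModel() like CNN does (an assumption, not shown above)
+import numpy as np
+
+# A batch of 4 sequences, each with 6 timesteps and 2 features,
+# matching the 'shape': (6, 2) passed to setInputs above
+X = np.random.rand(4, 6, 2).astype('float32')
+
+kerasModel = rnn2.getModel()
+predictions = kerasModel.predict(X)
+print(predictions.shape)  # (4, 1): one linear output per input sequence
+```
+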
## Documentation

Check out the full documentation for [Keras](https://keras.io/api/) and [TensorFlow](https://www.tensorflow.org/api_docs) for in-depth information on how to use the library.