Deep learning library written from scratch in NumPy. Why? Because it's fun! 🤷🏻‍♂️

```python
import numpy as np
from sklearn import datasets
from dnpy.layers import *
from dnpy.net import *
from dnpy.optimizers import *
from dnpy.regularizers import *
from dnpy import metrics, losses
from dnpy import utils
# Get data
iris = datasets.load_iris()
X = iris.data  # use all four iris features
Y = iris.target
# Pre-processing
# Standardize
X = (X - np.mean(X, axis=0))/np.std(X, axis=0)
# Classes to categorical
num_classes = 3
Y = utils.to_categorical(Y, num_classes=num_classes)
# Shuffle dataset
idxs = np.arange(len(X))
np.random.shuffle(idxs)
X, Y = X[idxs], Y[idxs]
# Select train/test
c = 0.8
tr_size = int(len(X) * c)
x_train, y_train = X[:tr_size], Y[:tr_size]
x_test, y_test = X[tr_size:], Y[tr_size:]
# Params *********************************
batch_size = int(len(x_train)/5)
epochs = 500
# Define architecture
l_in = Input(shape=x_train[0].shape)
l = Dense(l_in, 20, kernel_regularizer=L2(lmda=0.01), bias_regularizer=L1(lmda=0.01))
l = Relu(l)
l = Dense(l, 15)
l = BatchNorm(l)
l = Dropout(l, 0.1)
l = Relu(l)
l = Dense(l, num_classes)
l_out = Softmax(l)
# Build network
mymodel = Net()
mymodel.build(
l_in=[l_in],
l_out=[l_out],
optimizer=Adam(lr=10e-2),
losses=[losses.CrossEntropy()],
metrics=[[metrics.CategoricalAccuracy()]],
debug=False
)
# Print model
mymodel.summary()
# Train
mymodel.fit([x_train], [y_train],
x_test=[x_test], y_test=[y_test],
batch_size=batch_size, epochs=epochs,
evaluate_epoch=True,
print_rate=10)
# Save model
mymodel.save("./trained/trained_iris.pkl", save_grads=True)
# Evaluate
print("\n----------------------")
print("Evaluation:")
lo, me = mymodel.evaluate([x_test], [y_test], batch_size=batch_size)
str_eval = mymodel._format_eval(lo, me)
print(f"- Losses[{', '.join(str_eval[0])}]")
print(f"- Metrics[{'; '.join(str_eval[1])}]")
- Layers:
- Core:
- Input (GC)
- Dense (GC)
- Reshape / Flatten (GC)
- Activations:
- Relu (GC)
- LeakyRelu (GC)
- Sigmoid (GC)
- Tanh (GC)
- Softmax (GC)
- LogSoftmax (GC)
- PRelu (GCX)
- Embedding (GCX)
- Regularization:
- BatchNorm (GCX)
- Dropout (GC-)
- GaussianNoise (GC-)
- Operators:
- Element-wise (Operator as argument): (GCX)
- Add, Subtract, Multiply, Divide, Power, Maximum, Minimum,...
- Others: Average, Concatenate
- Convolutions:
- Conv2D (GCX)
- TransposedConv2D => Pending...
- DepthwiseConv2D
- PointwiseConv2D
- LocallyConnected2D => Pending...
- SpatialSeparableConv* => Pending...
- Pooling:
- MaxPool2D (GCX)
- AvgPool2D (GCX)
- GlobalMaxPool2D
- GlobalAvgPool2D
- Recurrent:
- RNN (GCX)
- LSTM => Pending...
- GRU => Pending...
- Others:
- Encoder-decoder => Pending...
- Attention => Wrapper for something like Softmax(QK)*V (see the NumPy sketch after this list)
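
The planned Attention layer above is described as a wrapper around Softmax(QK)*V. Below is a minimal NumPy sketch of that computation (scaled dot-product attention); it is purely illustrative and independent of the dnpy layer API.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    x = x - np.max(x, axis=axis, keepdims=True)
    e = np.exp(x)
    return e / np.sum(e, axis=axis, keepdims=True)

def attention(Q, K, V):
    # Q: (n_q, d), K: (n_k, d), V: (n_k, d_v)
    scores = Q @ K.T / np.sqrt(Q.shape[-1])  # scaled dot products, (n_q, n_k)
    weights = softmax(scores, axis=-1)       # each query's weights sum to 1
    return weights @ V                       # weighted sum of values, (n_q, d_v)
```
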
- Optimizers:
- SGD
- Momentum
- Nesterov => Review
- Bias correction
- RMSprop
- Adam
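
For reference, Adam combines RMSprop-style second moments with momentum, plus the bias correction listed above. A minimal NumPy sketch of a single Adam update step, illustrative only and not the dnpy optimizer class:

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # Exponential moving averages of the gradient and its square
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias correction compensates for the zero initialization of m and v
    m_hat = m / (1 - beta1 ** t)   # t is the 1-based step counter
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v
```
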
- Losses:
- MSE
- RMSE
- MAE
- CrossEntropy
- BinaryCrossEntropy
- NLL
- Hinge
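
As a reference for the cross-entropy entries above, a small NumPy sketch of categorical cross-entropy over one-hot targets (illustrative; the dnpy loss classes may differ in details such as clipping or reduction):

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    # y_true: one-hot targets (batch, classes); y_pred: probabilities, e.g. Softmax output
    y_pred = np.clip(y_pred, eps, 1.0)
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=-1))
```
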
- Metrics:
- MSE
- RMSE
- MAE
- CategoricalAccuracy
- BinaryAccuracy
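
CategoricalAccuracy above boils down to comparing argmaxes; a one-line NumPy sketch (illustrative, not the dnpy metric class):

```python
import numpy as np

def categorical_accuracy(y_true, y_pred):
    # Both are (batch, classes); y_true is one-hot, y_pred are scores or probabilities
    return np.mean(np.argmax(y_true, axis=-1) == np.argmax(y_pred, axis=-1))
```
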
- Initializers:
- Constant
- Ones
- Zeros
- RandomNormal
- RandomUniform
- GlorotNormal
- GlorotUniform
- HeNormal
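
The Glorot and He entries above follow the usual fan-in/fan-out scaling rules; a hedged NumPy sketch for a dense weight matrix (illustrative, not the dnpy initializer classes):

```python
import numpy as np

def glorot_uniform(fan_in, fan_out):
    # Uniform in [-limit, limit] with limit = sqrt(6 / (fan_in + fan_out))
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return np.random.uniform(-limit, limit, size=(fan_in, fan_out))

def he_normal(fan_in, fan_out):
    # Normal with std = sqrt(2 / fan_in), suited to ReLU activations
    return np.random.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))
```
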
- Regularizers:
- L1
- L2
- L1L2
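
These regularizers add a penalty to the loss and a matching term to the weight gradient, as used via `kernel_regularizer=L2(lmda=0.01)` in the example above. A NumPy sketch of the penalties and their gradients (illustrative, not the dnpy classes; conventions vary, e.g. some libraries use 0.5·λ·||w||² for L2):

```python
import numpy as np

def l1_penalty(w, lmda):
    return lmda * np.sum(np.abs(w)), lmda * np.sign(w)  # (penalty, d penalty / dw)

def l2_penalty(w, lmda):
    return lmda * np.sum(w ** 2), 2.0 * lmda * w         # (penalty, d penalty / dw)

def l1l2_penalty(w, lmda1, lmda2):
    p1, g1 = l1_penalty(w, lmda1)
    p2, g2 = l2_penalty(w, lmda2)
    return p1 + p2, g1 + g2
```
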
- Miscellaneous:
- Multi-input
- Multi-loss support
- Set modes
- Freeze layers
- Smart derivatives
- Topological sort
- Gradient checking (see the sketch after this list)
- get/set params/grads
- Truncated BPTT
- Load/Save model
- Learning rate decay => Pending...
- Callbacks => Pending...
- EarlyStopping => Pending...
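
The Gradient checking entry above (and presumably the "(GC)" tags in the layer list) refers to validating analytic gradients against finite differences. A generic NumPy sketch of that check, independent of dnpy's own checker:

```python
import numpy as np

def numerical_grad(f, x, eps=1e-5):
    # Central finite differences of a scalar function f at x
    # (entries of x are perturbed in place and then restored)
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'])
    while not it.finished:
        idx = it.multi_index
        old = x[idx]
        x[idx] = old + eps
        f_plus = f(x)
        x[idx] = old - eps
        f_minus = f(x)
        x[idx] = old
        grad[idx] = (f_plus - f_minus) / (2 * eps)
        it.iternext()
    return grad

# Example: f(x) = sum(x**2) has analytic gradient 2*x
x = np.random.randn(3, 4)
numeric = numerical_grad(lambda z: np.sum(z ** 2), x)
analytic = 2 * x
rel_err = np.linalg.norm(numeric - analytic) / (np.linalg.norm(numeric) + np.linalg.norm(analytic))
print(f"relative error: {rel_err:.2e}")  # should be tiny (around 1e-9 or less)
```
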