Deeppy is a flexible deep learning framework built on PyTorch, designed to simplify training workflows while supporting powerful research capabilities. It embraces a modular approach by decoupling data, algorithms, and neural networks, making it easy to swap components, experiment with new ideas, and customize pipelines end-to-end.
- 🔧 Modular by Design – Swap networks, algorithms, and data pipelines easily
- 💡 Research-Oriented – Designed for flexibility and prototyping
- 📉 Integrated Plotting & Logging – Visualize your training instantly
- 🔍 XAI Tooling (Coming Soon) – Make black-box models more interpretable
- ⚡ Built on PyTorch – GPU, `torch.compile`, and AMP support out of the box
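Since the framework builds directly on PyTorch, the stock AMP and `torch.compile` machinery applies unchanged. As a point of reference, here is a minimal plain-PyTorch mixed-precision training step (this is standard PyTorch, not Deeppy's own API; model and data are toy placeholders):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

use_cuda = torch.cuda.is_available()
device = "cuda" if use_cuda else "cpu"
model = model.to(device)
# model = torch.compile(model)  # optional: JIT-compile the forward pass

# GradScaler guards fp16 gradients against underflow; it is a no-op without CUDA
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)

x = torch.randn(16, 8, device=device)
y = torch.randn(16, 1, device=device)

optimizer.zero_grad()
# autocast runs eligible ops in reduced precision
with torch.autocast(device_type=device,
                    dtype=torch.float16 if use_cuda else torch.bfloat16):
    loss = criterion(model(x), y)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```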
## 🔍 Autoencoders
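The autoencoder tutorials live in the `tutorials/` folder. For orientation, this is the basic shape of an autoencoder in plain PyTorch (illustrative only; the dimensions and class are placeholders, not Deeppy's own API):

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Compress inputs to a low-dimensional latent code and reconstruct them."""
    def __init__(self, in_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, in_dim),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

ae = AutoEncoder()
x = torch.randn(4, 784)
recon = ae(x)                          # reconstruction, same shape as the input
loss = nn.functional.mse_loss(recon, x)  # reconstruction error to minimize
```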
## 🔬 Basic Model
## 🧠 Train Your Own GPT Model
```python
import tiktoken
import torch.nn as nn

with open("assets/shakespeare.txt", "r", encoding="utf-8") as f:
    text = f.read()

encoding = tiktoken.encoding_for_model("gpt2")
data = GPTText(text=text, tokenizer=encoding, context_size=context_size)

GPT_params = {
    "optimizer_params": Optimizer_params,
    "vocab_size": vocab_size,
    "embed_dim": embed_dim,
    "num_heads": num_heads,
    "num_layers": num_layers,
    "context_size": context_size,
    "device": device,
    "criterion": nn.CrossEntropyLoss(ignore_index=-1),
}
model = GPT(GPT_params)
```

📊 Total Parameters: ~28.9M
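The exact parameter count depends on the hyperparameters above. With assumed example values (illustrative only, not necessarily the tutorial's actual settings), a standard GPT-style estimate lands in the same tens-of-millions ballpark, with most parameters in the embedding table:

```python
def gpt_param_estimate(vocab_size, embed_dim, num_layers, context_size):
    """Rough parameter count for a GPT-style decoder (biases/LayerNorms ignored)."""
    token_emb = vocab_size * embed_dim       # token embedding table
    pos_emb = context_size * embed_dim       # learned positional embeddings
    attn = 4 * embed_dim * embed_dim         # Q, K, V and output projections
    mlp = 8 * embed_dim * embed_dim          # two linear layers with 4x expansion
    return token_emb + pos_emb + num_layers * (attn + mlp)

# Illustrative hyperparameters, assuming tiktoken's ~50k GPT-2 vocabulary
n = gpt_param_estimate(vocab_size=50257, embed_dim=384,
                       num_layers=6, context_size=256)
# n is roughly 30M here
```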
```python
lf = LearnFrame(model, data)

for i in range(epoch):
    lf.optimize()

lf.plot(show_result=True, log=True)
model.generate("KING RICHARD III: \n On this very beautiful day, let us")
```

```text
KING RICHARD III:
On this very beautiful day, let us us hear
The way of the king.

DUKE OF YORK::
I will not be avoided'd with my heart.

DUKEKE VINCENTIO:
I thank you, good father.

LLUCIO:
I thank you, good my lord; I'll to your your daughter.

KING EDWARD IV:
Now, by the jealous queen
```
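Under the hood, generation of this kind is autoregressive sampling: the model repeatedly predicts the next token and feeds the growing sequence back in. A plain-PyTorch sketch of that loop, using a stand-in model (hypothetical names, not Deeppy's `generate` implementation):

```python
import torch

@torch.no_grad()
def sample(model, idx, max_new_tokens, context_size):
    """Append tokens one at a time.

    `model` maps a (batch, time) index tensor to (batch, time, vocab) logits.
    """
    for _ in range(max_new_tokens):
        idx_cond = idx[:, -context_size:]      # crop to the context window
        logits = model(idx_cond)[:, -1, :]     # logits for the last position
        probs = torch.softmax(logits, dim=-1)
        next_id = torch.multinomial(probs, num_samples=1)  # sample one token
        idx = torch.cat([idx, next_id], dim=1)
    return idx

# Stand-in "model": an untrained embedding followed by a linear head
vocab = 100
toy = torch.nn.Sequential(torch.nn.Embedding(vocab, 16),
                          torch.nn.Linear(16, vocab))
out = sample(toy, torch.zeros(1, 4, dtype=torch.long),
             max_new_tokens=8, context_size=4)  # -> shape (1, 12)
```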
## 🎮 Train an RL Agent
```python
import gym
import torch.nn as nn

import deeppy as dp

env = gym.make("LunarLander-v2")
data = dp.EnvData(env, buffer_size=100000)

policy_network = {
    "layers": [obs, 128, 128, act],
    "blocks": [nn.Linear, nn.ReLU],
    "out_act": nn.Softmax,
    "weight_init": "uniform",
}

model = dp.SAC(sac_params)  # Soft Actor-Critic; sac_params defined elsewhere
lf = dp.LearningFrame(model, data)
```
```python
for epoch in range(EPOCH):
    # Take one step in the environment using the model
    lf.collect()
    # Train SAC for one step
    lf.optimize()

# Automatic plotting
lf.plot()
lf.get_anim()

lf.save(file_name)
lf.load(file_name)
```

Looking to dive deeper? We've included hands-on examples covering everything from GPT training to reinforcement learning agents like LunarLander.
📂 Find them all in the `tutorials/` folder.
```shell
pip install -r requirements.txt
```

Documentation is in progress. Meanwhile, refer to the `tutorials/` folder.
Implemented RL algorithms:

- Dueling DQN
- PPO
- MBPO (Model-Based Policy Optimization)
- SafeMBPO
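For instance, Dueling DQN splits the Q-function into a state-value stream and an advantage stream that are recombined per action. A minimal plain-PyTorch sketch of that head (illustrative, not Deeppy's implementation):

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    def __init__(self, obs_dim, num_actions, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)                # V(s): one scalar per state
        self.advantage = nn.Linear(hidden, num_actions)  # A(s, a): one per action

    def forward(self, obs):
        h = self.trunk(obs)
        v = self.value(h)
        a = self.advantage(h)
        # Subtract the mean advantage so V and A are identifiable
        return v + a - a.mean(dim=-1, keepdim=True)

net = DuelingQNet(obs_dim=8, num_actions=4)  # LunarLander: 8 obs dims, 4 actions
q = net(torch.randn(2, 8))                   # one Q-value per action, per state
```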



