
Neural Network Logic Gates


Faast af boiiii

🎯 Overview

This project is a revisit of something I first built back in my first year using Python. The idea was simple: train a small neural network to learn the behavior of basic logic gates. This C++ version is cleaner, faster, and more structured, but still stays true to that original spirit of learning by building from scratch.

It’s a compact, from‑scratch C++11 implementation that covers gates like OR, AND, XOR, NOT, NAND, and NOR. The goal is to make the code human‑friendly and easy to follow, while still showing the essential concepts of forward/backward propagation, activation functions, and training with gradient descent.

✨ Features

  • 🧠 Complete Logic Gate Support - OR, AND, XOR, NOT, NAND, NOR
  • ⚙️ JSON Configuration - Easy parameter tuning without recompilation
  • 📊 ASCII Visualization - Decision boundary display in terminal
  • 📈 Real-time Monitoring - Loss tracking during training
  • 💻 Tested on Windows

🚀 Quick Start

Prerequisites

  • C++11 compatible compiler (g++, clang++, MSVC)
  • Standard library only (no external dependencies)

Installation & Usage

# Clone the repository
git clone https://github.com/eshwanthkartitr/Gate_neural_network_cpp.git
cd Gate_neural_network_cpp

# Compile
g++ -std=c++11 -Wall -O2 -o logic_gates logic_gates_main.cpp

# Run
./logic_gates

Alternatively, use the provided Makefile:

make logic_gates
./logic_gates

📁 Project Structure

Gate_neural_network_cpp/
├── logic_gates_main.cpp     # Main program with JSON parser
├── gates_config.json        # Gate configurations
├── NN.cpp                   # Neural network class
├── layer.cpp                # Layer implementations
├── activation.cpp           # Activation functions
├── losses.cpp               # Loss functions (BCE, MSE)
├── utils.cpp                # Utility functions
├── main.cpp                 # Original XOR example
├── Makefile                 # Build configuration
└── README.md                # This file

⚙️ Configuration

Customize training parameters by editing gates_config.json:

{
  "gates": [
    {
      "name": "XOR",
      "inputs": [[0,0], [0,1], [1,0], [1,1]],
      "outputs": [[0], [1], [1], [0]],
      "epochs": 4000,
      "learning_rate": 0.1
    },
    {
      "name": "AND",
      "inputs": [[0,0], [0,1], [1,0], [1,1]],
      "outputs": [[0], [0], [0], [1]],
      "epochs": 2000,
      "learning_rate": 0.1
    }
  ]
}

Configuration Parameters

| Parameter | Description | Example |
|-----------|-------------|---------|
| name | Gate identifier | "XOR", "AND", "OR" |
| inputs | Training input vectors | [[0,0], [0,1], [1,0], [1,1]] |
| outputs | Expected output vectors | [[0], [1], [1], [0]] |
| epochs | Number of training iterations | 4000 |
| learning_rate | Gradient descent step size | 0.1 |
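
If you want to mirror these fields on the C++ side, they map naturally onto a small aggregate. The struct below is only an illustrative sketch: the name GateConfig and its members are hypothetical, and the actual parser in logic_gates_main.cpp may store things differently.

#include <string>
#include <vector>

// Hypothetical mirror of one entry in gates_config.json (member names match the JSON keys).
struct GateConfig {
    std::string name;                          // gate identifier, e.g. "XOR"
    std::vector<std::vector<double>> inputs;   // training input vectors
    std::vector<std::vector<double>> outputs;  // expected output vectors
    int epochs = 2000;                         // number of training iterations
    double learning_rate = 0.1;                // gradient descent step size
};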

🏗️ Network Architecture

For 2-Input Gates (OR, AND, XOR, NAND, NOR)

Input(2) → Linear(2→4) → ReLU → Linear(4→1) → Sigmoid → Output(1)

For 1-Input Gates (NOT)

Input(1) → Linear(1→3) → ReLU → Linear(3→1) → Sigmoid → Output(1)
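
In code, these topologies correspond roughly to the layer stack below (shown for the 2-input case; the NOT network would simply use Linear(1, 3) and Linear(3, 1)). This is a sketch inferred from the add() calls shown in the Customization section; the exact class names and constructors live in NN.cpp, layer.cpp, and activation.cpp, so treat the NN type name here as an assumption.

// Sketch of the default 2-input topology (see createNetwork() in logic_gates_main.cpp).
NN network;                      // NN class name assumed from NN.cpp
network.add(new Linear(2, 4));   // 2 inputs -> 4 hidden units
network.add(new Relu());         // hidden non-linearity
network.add(new Linear(4, 1));   // 4 hidden units -> 1 output
network.add(new Sigmoid());      // squash the output into (0, 1)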

📊 Sample Output

Loaded 6 gates from gates_config.json
Training All Logic Gates from JSON...

=== Training XOR Gate ===
Epochs: 4000, Learning Rate: 0.1
Epoch 4000/4000 - Loss: 0.0001
XOR Gate training completed!

=== Testing XOR Gate ===
Input -> Output (Probability) -> Predicted -> Expected
------------------------------------------------
0,0 -> 0.0234 -> 0 -> 0 [✓ OK]
0,1 -> 0.9876 -> 1 -> 1 [✓ OK]
1,0 -> 0.9823 -> 1 -> 1 [✓ OK]
1,1 -> 0.0187 -> 0 -> 0 [✓ OK]
Accuracy: 4/4 (100%)

=== Hyperplane Visualization for XOR Gate ===
Decision boundary (0=blue, 1=red):
   0.0 0.2 0.4 0.6 0.8 1.0
1.0 0 0 0 1 1 1
0.8 0 0 1 1 1 1
0.6 0 1 1 1 1 1
0.4 1 1 1 1 1 0
0.2 1 1 1 1 0 0
0.0 1 1 1 0 0 0

Successfully trained all 6 logic gates! 🎉

🎛️ Customization

Adding Custom Gates

Create new gate definitions in gates_config.json:

{
  "name": "CUSTOM_3INPUT_GATE",
  "inputs": [
    [0,0,0], [0,0,1], [0,1,0], [0,1,1],
    [1,0,0], [1,0,1], [1,1,0], [1,1,1]
  ],
  "outputs": [[0], [1], [1], [0], [1], [0], [0], [1]],
  "epochs": 5000,
  "learning_rate": 0.05
}

Modifying Network Architecture

Edit the createNetwork() function in logic_gates_main.cpp (for a custom gate with a different number of inputs, such as the 3-input example above, also change the first Linear layer's input size to match):

// Example: Deeper network
network.add(new Linear(2, 8));    // More neurons
network.add(new Relu());
network.add(new Linear(8, 4));    // Additional hidden layer
network.add(new Relu());
network.add(new Linear(4, 1));
network.add(new Sigmoid());

🔧 Build Options

Using Makefile

make logic_gates        # Release build
make debug             # Debug build with symbols
make clean             # Clean build files

Manual Compilation

# Release build
g++ -std=c++11 -Wall -O2 -o logic_gates logic_gates_main.cpp

# Debug build
g++ -std=c++11 -Wall -g -DDEBUG -o logic_gates_debug logic_gates_main.cpp

# With additional optimizations
g++ -std=c++17 -Wall -O3 -march=native -o logic_gates logic_gates_main.cpp

🧮 Mathematical Foundation

Forward Propagation

For a 2-layer network:

h₁ = W₁x + b₁          # Linear transformation
a₁ = ReLU(h₁)          # Non-linear activation
h₂ = W₂a₁ + b₂         # Second linear transformation
ŷ = σ(h₂)              # Sigmoid output
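
As a concrete, standalone illustration (independent of the project's own layer classes), the same forward pass for the 2→4→1 network can be written with plain loops:

#include <cmath>
#include <vector>

// Standalone forward pass for a 2 -> 4 -> 1 network (illustrative only).
// The hidden activations a1 are returned via an out-parameter so a backward pass can reuse them.
double forward(const std::vector<double>& x,
               const std::vector<std::vector<double>>& W1, const std::vector<double>& b1,
               const std::vector<double>& W2, double b2,
               std::vector<double>& a1) {
    a1.assign(W1.size(), 0.0);
    for (size_t i = 0; i < W1.size(); ++i) {            // h1 = W1·x + b1, a1 = ReLU(h1)
        double h = b1[i];
        for (size_t j = 0; j < x.size(); ++j) h += W1[i][j] * x[j];
        a1[i] = h > 0.0 ? h : 0.0;
    }
    double h2 = b2;                                      // h2 = W2·a1 + b2
    for (size_t j = 0; j < a1.size(); ++j) h2 += W2[j] * a1[j];
    return 1.0 / (1.0 + std::exp(-h2));                  // ŷ = σ(h2)
}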

Backward Propagation

Gradients computed using the chain rule:

∂L/∂W₂ = ∂L/∂ŷ × ∂ŷ/∂h₂ × ∂h₂/∂W₂
∂L/∂W₁ = ∂L/∂ŷ × ∂ŷ/∂h₂ × ∂h₂/∂a₁ × ∂a₁/∂h₁ × ∂h₁/∂W₁
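
With a sigmoid output trained on binary cross-entropy, the first two factors collapse to (ŷ - y), which makes the per-sample gradients short to write out. Continuing the standalone sketch above (so W2 is a plain vector and the output is a scalar):

#include <vector>

// Per-sample gradients for the 2 -> 4 -> 1 sketch above, assuming a sigmoid output
// with BCE loss, so dL/dh2 simplifies to (y_hat - y).
void backward(const std::vector<double>& x, double y, double y_hat,
              const std::vector<double>& a1,            // hidden ReLU activations from forward()
              const std::vector<double>& W2,            // output-layer weights
              std::vector<std::vector<double>>& dW1, std::vector<double>& db1,
              std::vector<double>& dW2, double& db2) {
    const double delta2 = y_hat - y;                               // dL/dŷ · dŷ/dh2
    dW2.assign(a1.size(), 0.0);
    for (size_t j = 0; j < a1.size(); ++j) dW2[j] = delta2 * a1[j];  // dh2/dW2[j] = a1[j]
    db2 = delta2;                                                    // dh2/db2 = 1

    dW1.assign(a1.size(), std::vector<double>(x.size(), 0.0));
    db1.assign(a1.size(), 0.0);
    for (size_t j = 0; j < a1.size(); ++j) {
        // Chain through the hidden layer: dh2/da1[j] = W2[j], ReLU'(h1[j]) = (a1[j] > 0 ? 1 : 0).
        const double delta1 = delta2 * W2[j] * (a1[j] > 0.0 ? 1.0 : 0.0);
        db1[j] = delta1;
        for (size_t k = 0; k < x.size(); ++k) dW1[j][k] = delta1 * x[k];  // dh1[j]/dW1[j][k] = x[k]
    }
}

A plain gradient descent step then subtracts learning_rate times each of these gradients from the corresponding parameter.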

Activation Functions

| Function | Formula | Derivative |
|----------|---------|------------|
| Sigmoid | σ(x) = 1/(1+e^(-x)) | σ'(x) = σ(x)(1-σ(x)) |
| ReLU | ReLU(x) = max(0, x) | ReLU'(x) = x > 0 ? 1 : 0 |
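
Written out in C++ these are one-liners; the snippet below is an illustrative version rather than the exact code in activation.cpp:

#include <algorithm>
#include <cmath>

double sigmoid(double x)       { return 1.0 / (1.0 + std::exp(-x)); }
double sigmoid_prime(double x) { const double s = sigmoid(x); return s * (1.0 - s); }
double relu(double x)          { return std::max(0.0, x); }
double relu_prime(double x)    { return x > 0.0 ? 1.0 : 0.0; }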

Loss Functions

| Function | Formula | Use Case |
|----------|---------|----------|
| Binary Cross-Entropy | BCE = -1/N Σ[y×ln(ŷ) + (1-y)×ln(1-ŷ)] | Binary classification |
| Mean Squared Error | MSE = 1/N Σ(y - ŷ)² | Regression tasks |
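
A straightforward way to compute both losses over a batch (illustrative; the real losses.cpp may differ, for example in how it clamps ŷ away from 0 and 1 before taking logs):

#include <algorithm>
#include <cmath>
#include <vector>

// Binary cross-entropy averaged over a batch; predictions are clamped to avoid log(0).
double bce(const std::vector<double>& y, const std::vector<double>& y_hat) {
    const double eps = 1e-12;
    double sum = 0.0;
    for (size_t i = 0; i < y.size(); ++i) {
        const double p = std::min(std::max(y_hat[i], eps), 1.0 - eps);
        sum += y[i] * std::log(p) + (1.0 - y[i]) * std::log(1.0 - p);
    }
    return -sum / static_cast<double>(y.size());
}

// Mean squared error averaged over a batch.
double mse(const std::vector<double>& y, const std::vector<double>& y_hat) {
    double sum = 0.0;
    for (size_t i = 0; i < y.size(); ++i) {
        const double d = y[i] - y_hat[i];
        sum += d * d;
    }
    return sum / static_cast<double>(y.size());
}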

📊 Logic Gates Truth Tables

| Gate | 0,0 | 0,1 | 1,0 | 1,1 | Complexity |
|------|-----|-----|-----|-----|------------|
| OR   | 0 | 1 | 1 | 1 | Linear ⭐ |
| AND  | 0 | 0 | 0 | 1 | Linear ⭐ |
| XOR  | 0 | 1 | 1 | 0 | Non-linear ⭐⭐⭐ |
| NAND | 1 | 1 | 1 | 0 | Linear ⭐ |
| NOR  | 1 | 0 | 0 | 0 | Linear ⭐ |
| NOT (1-input) | 1→0 | 0→1 | - | - | Linear ⭐ |

⚡ Performance Benchmarks

| Gate | Training Time | Epochs | Final Loss | Memory Usage |
|------|---------------|--------|------------|--------------|
| OR/AND/NAND/NOR | ~1-2s | 2000 | <0.001 | ~2MB |
| XOR | ~3-4s | 4000 | <0.001 | ~2MB |
| NOT | ~0.5s | 1500 | <0.001 | ~1MB |

System: Intel i7, 16GB RAM, compiled with -O2

💡 Tips & Best Practices

Training Optimization

  • 📈 Increase epochs for better XOR convergence (try 5000-8000)
  • ⚙️ Adjust learning rate if training is unstable (try 0.05-0.2)
  • 🏗️ Add more layers for complex custom gates
  • 📊 Monitor loss - should decrease and stabilize near 0

Troubleshooting

| Issue | Symptom | Solution |
|-------|---------|----------|
| Slow convergence | Loss decreasing slowly | Increase learning rate |
| Unstable training | Loss oscillating | Decrease learning rate |
| Poor XOR performance | XOR accuracy <90% | Increase epochs or add neurons |
| Compilation errors | Missing C++11 features | Update compiler or add -std=c++11 |

🤝 Contributing

Contributions are welcome! Here's how you can help:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

Development Areas

  • 🧠 New activation functions (Tanh, Leaky ReLU, Swish)
  • 📊 Additional layer types (Dropout, Batch Normalization)
  • 🎨 Enhanced visualization (matplotlib integration)
  • 📈 Performance optimizations (SIMD, threading)
  • 🧪 Unit tests and benchmarks

📚 Learning Resources

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

  • Inspired by classic neural network tutorials
  • Built for educational purposes and learning
  • Special thanks to the open-source community

⭐ Star this repository if it helped you learn neural networks! ⭐

Built with ❤️ for learning neural networks from scratch
