This project implements a fully connected neural network from scratch using NumPy, without deep learning frameworks such as TensorFlow or PyTorch.
The goal is to understand the mathematics and mechanics behind neural networks, including forward propagation, backpropagation, and gradient descent. Minimal sketches of each component appear after the feature list below.
- Manual weight and bias initialization
- Forward propagation
- Backpropagation using the chain rule
- Gradient descent optimization
- ReLU activation function
- Softmax output layer
- One-hot encoded labels
- Prediction and accuracy evaluation
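
The sketches below illustrate the components listed above. All names, shapes, and hyperparameters are illustrative assumptions, not this project's actual code. First, manual initialization and the forward pass for a two-layer network, assuming an input dimension of 784, a hidden layer of 64 units, and 10 output classes:

```python
import numpy as np

# Illustrative sizes (assumptions, not this project's actual dimensions)
n_in, n_hidden, n_out = 784, 64, 10

rng = np.random.default_rng(0)

# Manual weight and bias initialization: small random weights, zero biases
W1 = rng.standard_normal((n_in, n_hidden)) * 0.01
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, n_out)) * 0.01
b2 = np.zeros(n_out)

def relu(z):
    # ReLU activation: max(0, z) applied elementwise
    return np.maximum(0, z)

def softmax(z):
    # Numerically stable softmax over the class axis
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def forward(X):
    # Forward propagation: input -> ReLU hidden layer -> softmax output
    z1 = X @ W1 + b1
    a1 = relu(z1)
    z2 = a1 @ W2 + b2
    a2 = softmax(z2)  # class probabilities, one row per example
    return z1, a1, z2, a2
```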
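One-hot encoding of labels might look like this, assuming integer class labels `0..9`:

```python
def one_hot(y, n_classes=10):
    # Turn integer labels into one-hot rows, e.g. 3 -> [0, 0, 0, 1, 0, ...]
    Y = np.zeros((y.size, n_classes))
    Y[np.arange(y.size), y] = 1.0
    return Y
```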
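Backpropagation applies the chain rule layer by layer. Assuming a cross-entropy loss (a natural pairing with the softmax output, though the loss is an assumption here), the output-layer error simplifies to `a2 - Y`:

```python
def backward(X, Y, z1, a1, a2):
    # Backpropagation via the chain rule; with softmax + cross-entropy,
    # the output-layer error simplifies to (a2 - Y)
    m = X.shape[0]
    dz2 = (a2 - Y) / m
    dW2 = a1.T @ dz2
    db2 = dz2.sum(axis=0)
    da1 = dz2 @ W2.T
    dz1 = da1 * (z1 > 0)  # ReLU derivative: 1 where z1 > 0, else 0
    dW1 = X.T @ dz1
    db1 = dz1.sum(axis=0)
    return dW1, db1, dW2, db2
```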
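Gradient descent ties these pieces together in a training loop. This sketch assumes full-batch updates and hypothetical `X_train` / `y_train` arrays:

```python
lr, epochs = 0.1, 100  # illustrative hyperparameters

for epoch in range(epochs):
    z1, a1, z2, a2 = forward(X_train)
    Y = one_hot(y_train)
    dW1, db1, dW2, db2 = backward(X_train, Y, z1, a1, a2)
    # Gradient descent: step each parameter against its gradient
    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2
    b2 -= lr * db2
```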
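Finally, prediction and accuracy evaluation: the predicted class is the index of the largest softmax probability, and accuracy is the fraction of matches against the integer labels.

```python
def predict(X):
    # Predicted class = index of the largest softmax probability
    return forward(X)[-1].argmax(axis=1)

def accuracy(X, y):
    # Fraction of examples whose predicted class matches the label
    return (predict(X) == y).mean()
```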