An interactive, animated visualizer that demonstrates how a Convolutional Neural Network (CNN) learns to match a target output by adjusting its filter weights using gradient descent — step by step.
- Input:
  - Binary pattern (1s and 0s)
  - Represents a tiny grayscale image or signal
- Filter:
  - Starts with initialized weights
  - Moves across 9 positions (3×3 output) using a sliding window
  - Performs a convolution on each patch (see the sketch after this list)
- Target output:
  - The CNN’s goal is to learn filter weights that produce this exact output from the input
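To make the sweep concrete, here is a minimal TypeScript sketch of the convolution step, assuming a 5×5 binary input, a single 3×3 filter, and a stride of 1 (which is what 9 positions and a 3×3 output imply); the `Matrix` alias and the `convolve3x3` function are illustrative names, not the project's actual code:

```ts
// Minimal sketch: slide a 3x3 filter over the input with stride 1.
// For a 5x5 input this visits 9 positions and produces a 3x3 output.
type Matrix = number[][];

function convolve3x3(input: Matrix, filter: Matrix): Matrix {
  const output: Matrix = [];
  for (let row = 0; row <= input.length - 3; row++) {
    const outRow: number[] = [];
    for (let col = 0; col <= input[0].length - 3; col++) {
      // Dot product between the filter and the 3x3 patch at (row, col).
      let sum = 0;
      for (let i = 0; i < 3; i++) {
        for (let j = 0; j < 3; j++) {
          sum += filter[i][j] * input[row + i][col + j];
        }
      }
      outRow.push(sum);
    }
    output.push(outRow);
  }
  return output; // 3x3 grid of raw convolution values
}
```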
At each position, the filter:
- Computes a dot product with the 3×3 input patch (forward pass)
- Compares the result with the target value
- Calculates the error (prediction - target)
- Uses the error to update the gradient for each filter weight (backward pass), as sketched below
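The per-position step might look like the following sketch. It assumes a per-position loss of 0.5 · (prediction − target)², so the gradient for each weight is simply error × input value, and it leaves out the ReLU mentioned in the features list below; `stepAtPosition`, `patch`, and `gradAccum` are hypothetical names:

```ts
// Sketch of the forward + backward computation at a single window position.
// `patch` is the 3x3 slice of the input under the filter; `gradAccum`
// collects gradients across all 9 positions. Assumes d(loss)/d(prediction)
// = error, i.e. a 0.5 * error^2 per-position loss; the visualizer's exact
// scaling may differ.
function stepAtPosition(
  patch: number[][],
  filter: number[][],
  target: number,
  gradAccum: number[][],
): number {
  // Forward pass: dot product of filter and patch.
  let prediction = 0;
  for (let i = 0; i < 3; i++) {
    for (let j = 0; j < 3; j++) {
      prediction += filter[i][j] * patch[i][j];
    }
  }

  // Error relative to the target value at this output position.
  const error = prediction - target;

  // Backward pass: accumulate d(loss)/d(w_ij) = error * patch[i][j].
  for (let i = 0; i < 3; i++) {
    for (let j = 0; j < 3; j++) {
      gradAccum[i][j] += error * patch[i][j];
    }
  }

  return prediction;
}
```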
- After each full sweep over the input:
  - Compute Mean Squared Error over all 9 outputs
  - Track the loss over time and plot it live (a sketch follows this list)
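The per-sweep loss bookkeeping could be as simple as the sketch below, where `predictions` and `targets` are the 9 output values flattened into arrays and `lossHistory` feeds the live plot; these names are illustrative, not the project's actual code:

```ts
// Sketch of the loss bookkeeping after one full sweep: mean squared error
// over the 9 output values, appended to a history array for live plotting.
const lossHistory: number[] = [];

function recordMSE(predictions: number[], targets: number[]): number {
  const mse =
    predictions.reduce((sum, p, k) => sum + (p - targets[k]) ** 2, 0) /
    predictions.length;
  lossHistory.push(mse); // the visualizer plots this history live
  return mse;
}
```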
- After each full forward + backward pass:
  - Average the accumulated gradients
  - Update each filter weight using w = w - η ⋅ avg_gradient, where η is the learning rate (see the sketch below)
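A small sketch of that update rule, assuming gradients were accumulated over the 9 positions as in the earlier per-position sketch; the learning-rate value and the `updateFilter` name are illustrative:

```ts
// Sketch of the weight update after a full forward + backward pass:
// average the accumulated gradients over the 9 positions, then apply
// w = w - η * avg_gradient.
const learningRate = 0.01; // η, illustrative value

function updateFilter(
  filter: number[][],
  gradAccum: number[][],
  numPositions = 9,
): void {
  for (let i = 0; i < 3; i++) {
    for (let j = 0; j < 3; j++) {
      const avgGradient = gradAccum[i][j] / numPositions;
      filter[i][j] -= learningRate * avgGradient;
      gradAccum[i][j] = 0; // reset accumulator for the next sweep
    }
  }
}
```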
- ReLU activation applied to the output
- Live MSE loss graph updates
- Reset button for easy re-runs
- Fully responsive visual layout using Tailwind CSS + Framer Motion
