A minimal implementation of a single-layer Perceptron written from scratch in pure JavaScript.
This implementation demonstrates how a basic linear binary classifier works using randomly initialized weights and bias, trained via the Perceptron learning rule.
This project implements:
- Single neuron (Perceptron)
- Binary classification model (2 classes: 0 and 1)
- Step activation function
- Supervised learning using labeled training samples
The model initializes weights and bias randomly, then iteratively updates them to minimize classification errors. At construction it is given:
- Random weights
- A random bias
- A defined number of input features
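A constructor along these lines would match the description above; this is a minimal sketch, and the property names (`weights`, `bias`) and initialization range are assumptions, not the repository's actual code.

```javascript
class Perceptron {
  constructor(numInputs) {
    // One random weight per input feature, drawn from [-1, 1)
    this.weights = Array.from({ length: numInputs }, () => Math.random() * 2 - 1);
    // Random bias, also drawn from [-1, 1)
    this.bias = Math.random() * 2 - 1;
  }
}

const p = new Perceptron(2);
console.log(p.weights.length); // 2
```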
The output is computed as: output = step( w · x + b )
Where:
- `w` = weights
- `x` = input features
- `b` = bias
- `step()` = activation function
Binary step function: `if (sum > 0) return 1; else return 0;`
This makes the model suitable only for binary classification problems.
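The forward pass described above can be sketched as two small functions; `step` and `forward` are illustrative names for this sketch, not the project's actual API.

```javascript
// Binary step activation: 1 if the weighted sum is positive, else 0
function step(sum) {
  return sum > 0 ? 1 : 0;
}

// Forward pass: output = step(w · x + b)
function forward(weights, bias, inputs) {
  const sum = inputs.reduce((acc, x, i) => acc + weights[i] * x, bias);
  return step(sum);
}

// Points on either side of the decision boundary x1 + x2 - 1 = 0
console.log(forward([1, 1], -1, [1, 1])); // 1
console.log(forward([1, 1], -1, [0, 0])); // 0
```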
The model is trained using the Perceptron Learning Rule:
- `w = w + learning_rate * error * input`
- `b = b + learning_rate * error`

where `error = target - output`.
Training continues for a fixed number of iterations or until no weight updates occur.
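One pass of that rule over the training set could be sketched as follows; the function and variable names here are assumptions, chosen to mirror the update equations above.

```javascript
// One training epoch of the Perceptron learning rule over all samples.
// Returns the number of updates made; zero updates means convergence.
function trainEpoch(weights, bias, samples, targets, learningRate) {
  let updates = 0;
  for (let s = 0; s < samples.length; s++) {
    // Forward pass: weighted sum plus bias, then binary step
    const sum = samples[s].reduce((acc, x, i) => acc + weights[i] * x, bias);
    const output = sum > 0 ? 1 : 0;
    const error = targets[s] - output; // error = target - output
    if (error !== 0) {
      for (let i = 0; i < weights.length; i++) {
        weights[i] += learningRate * error * samples[s][i]; // w += lr * error * x
      }
      bias += learningRate * error; // b += lr * error
      updates++;
    }
  }
  return { weights, bias, updates };
}
```

Calling `trainEpoch` repeatedly until `updates` is zero reproduces the stopping condition described above.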
- Works only for linearly separable data
- Supports binary classification only
- Not suitable for regression problems
- Uses non-differentiable step activation
```javascript
const p = new Perceptron(2);
p.train(trainingInputs, targets, 0.01, 1000);
const result = p.calculateOutput([x1, x2]);
```

This implementation is designed for learning and understanding:
- Linear classifiers
- Decision boundaries
- Weight updates
- Fundamental neural network concepts
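As a concrete end-to-end illustration of that API, here is a self-contained sketch trained on the linearly separable AND function; the class internals are assumptions consistent with the description above, not the repository's actual code.

```javascript
// Minimal self-contained Perceptron sketch (internals are assumed)
class Perceptron {
  constructor(numInputs) {
    this.weights = Array.from({ length: numInputs }, () => Math.random() * 2 - 1);
    this.bias = Math.random() * 2 - 1;
  }

  // output = step(w · x + b), with a binary step activation
  calculateOutput(inputs) {
    const sum = inputs.reduce((acc, x, i) => acc + this.weights[i] * x, this.bias);
    return sum > 0 ? 1 : 0;
  }

  // Perceptron learning rule, repeated until convergence or maxIterations
  train(trainingInputs, targets, learningRate, maxIterations) {
    for (let iter = 0; iter < maxIterations; iter++) {
      let updates = 0;
      for (let s = 0; s < trainingInputs.length; s++) {
        const error = targets[s] - this.calculateOutput(trainingInputs[s]);
        if (error !== 0) {
          for (let i = 0; i < this.weights.length; i++) {
            this.weights[i] += learningRate * error * trainingInputs[s][i];
          }
          this.bias += learningRate * error;
          updates++;
        }
      }
      if (updates === 0) break; // no misclassifications left: converged
    }
  }
}

// AND is linearly separable, so training converges
const inputs = [[0, 0], [0, 1], [1, 0], [1, 1]];
const targets = [0, 0, 0, 1];
const p = new Perceptron(2);
p.train(inputs, targets, 0.1, 1000);
console.log(inputs.map((x) => p.calculateOutput(x))); // [0, 0, 0, 1]
```

By contrast, swapping in XOR targets (`[0, 1, 1, 0]`) would never converge, which demonstrates the linear-separability limitation listed above.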