Welcome to my comprehensive repository dedicated to unraveling the mysteries of Artificial Intelligence (AI), Machine Learning (ML), Deep Learning (DL), and Reinforcement Learning (RL). 🚀
This repository is a learning-first collection of from-scratch implementations of core AI, Machine Learning, Deep Learning, Reinforcement Learning, and Bayesian algorithms.
It is designed for readers who:
- Already know what these algorithms are
- Want to understand how they work internally
- Prefer readable, step-by-step implementations over optimized or production-grade code
This is not a production library or benchmarking suite.
Instead, the focus is on algorithmic intuition, mathematical flow, and code transparency.
This repository may not be ideal if you are looking for:
- Highly optimized or GPU-accelerated implementations
- Drop-in replacements for scikit-learn, PyTorch, or TensorFlow
- State-of-the-art performance benchmarks
- Large-scale dataset pipelines
The goal here is understanding, not performance.
If you're new to the repository, a recommended learning path is:
- Supervised Learning
  - Linear & Logistic Regression
  - k-Nearest Neighbors
  - Decision Trees
- Unsupervised Learning
  - K-Means
  - PCA
  - Gaussian Mixture Models
- Neural Networks
  - Single-Layer Perceptron
  - Multi-Layer Perceptron
  - CNNs and RNNs
- Reinforcement Learning
  - Q-Learning
  - Deep Q-Networks
  - Policy-based methods
- Bayesian Learning
  - Bayesian Inference
  - Bayesian Neural Networks
Each folder is self-contained and can be explored independently.
│
├── LICENSE
├── README.md                  <- The top-level README for developers/collaborators using this project.
├── neural_network             <- Folder for Neural Network implementations
│   ├── AutoEncoder
│   ├── BoltzmannMachine
│   ├── GenerativeAdversarialNetwork
│   ├── HopfieldNetwork
│   ├── LongShortTermMemoryLSTM
│   ├── MultiLayerPerceptronClassification
│   ├── MultiLayerPerceptronRegression
│   ├── RadialBasisFunctionNetworks
│   ├── SelfAttentionMechanism
│   ├── SimpleCNN
│   ├── SimpleEncoderDecoder
│   ├── SimpleRNN
│   ├── SingleLayerPerceptronClassification
│   ├── SingleLayerPerceptronRegression
│   ├── TitanicSurvialBySingleLayerPerceptron
│   └── Transformer
│
├── reinforcement_learning     <- Folder for Reinforcement Learning implementations
│   ├── Deep Deterministic Policy Gradients
│   ├── Deep Q Network
│   ├── Soft Actor Critic
│   ├── Proximal Policy Optimization
│   └── QLearning
│
├── supervised                 <- Folder for Supervised Learning implementations
│   ├── DecisionTrees
│   ├── KnearestNeighbour
│   ├── LassoRegression
│   ├── LinearRegression
│   ├── LogisticRegression
│   ├── Naive Bayes
│   ├── RandomForest
│   ├── RidgeRegression
│   └── SupportVectorMachines
│
├── unsupervised               <- Folder for Unsupervised Learning implementations
│   ├── AprioriAlgorithm
│   ├── Density-Based Spatial Clustering of Applications with Noise
│   ├── Expectation-Maximization
│   ├── Gaussian Mixture Model
│   ├── HierarchicalClustering
│   ├── IndependentComponentAnalysis
│   ├── K-MedoidsClustering
│   ├── KMeansPlusPlus
│   ├── PrincipalComponentAnalysis
│   ├── SelfOrganizing Map
│   ├── kmeanclustering
│   └── tSNE
│
└── Bayesian Learning          <- Folder for Bayesian Learning implementations
    ├── BayesianInference
    ├── BayesianNetwork
    ├── Gibbs Sampling
    ├── Metropolis-Hastings Algorithm
    ├── Bayesian Neural Networks
    ├── BayesianLinearRegression
    └── Variational Inference
Across all implementations, the following principles are applied:
- Prefer explicit loops over vectorized one-liners when it improves clarity
- Separate model logic, loss computation, and parameter updates
- Avoid high-level ML libraries to expose core mechanics
- Keep implementations concise and inspectable
Many design choices intentionally trade performance for readability.
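As a hypothetical sketch of what these principles look like in practice (the function names below are invented for illustration, not taken from the repository), a from-scratch linear regression might keep the three concerns in three separate functions, with explicit loops instead of vectorized math:

```python
def predict(w, b, x):
    """Model logic: a single-feature linear model y = w*x + b."""
    return w * x + b

def mse_loss(w, b, xs, ys):
    """Loss computation: mean squared error over the dataset."""
    total = 0.0
    for x, y in zip(xs, ys):
        total += (predict(w, b, x) - y) ** 2
    return total / len(xs)

def gradient_step(w, b, xs, ys, lr=0.05):
    """Parameter update: one explicit gradient-descent step on the MSE."""
    dw, db = 0.0, 0.0
    for x, y in zip(xs, ys):
        err = predict(w, b, x) - y
        dw += 2 * err * x / len(xs)
        db += 2 * err / len(xs)
    return w - lr * dw, b - lr * db

# Fit y = 2x + 1 from a handful of points.
xs, ys = [0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0]
w, b = 0.0, 0.0
for _ in range(2000):
    w, b = gradient_step(w, b, xs, ys)
# w and b converge to roughly 2 and 1.
```

Keeping prediction, loss, and update in separate functions means each piece can be read, printed, and modified on its own, which is exactly the kind of inspection this repository encourages.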
Are you eager to grasp the core concepts of these cutting-edge technologies? Look no further! In this repository, we've meticulously crafted implementations of fundamental algorithms from scratch, accompanied by detailed explanations and documentation. Our mission is to empower learners by providing hands-on experience in building these algorithms, fostering a deeper understanding of the underlying principles.
To get the most value from this repository:
- Read the code line-by-line
- Add print statements or visualizations
- Modify hyperparameters and observe behavior
- Re-implement the same algorithm in a different style
- Compare similar algorithms across folders
This repository is meant to be actively explored, not passively read.
Understanding AI, ML, DL, and RL can be a daunting task, especially for beginners. Yet, I believe that building these algorithms from the ground up offers unparalleled insights. By diving into the code, you'll gain a profound understanding of the inner workings, demystifying the complex algorithms that power the technology around us.
- Educational Focus: Every algorithm is meticulously implemented with educational purposes in mind.
- Comprehensive Documentation: Each implementation is accompanied by thorough explanations, ensuring you not only run the code but understand it.
- Progressive Complexity: Starting from simpler concepts, we gradually delve into more advanced algorithms, allowing you to build your knowledge progressively.
- Neural Networks: Dive into the realm of neural networks, from basic perceptrons to advanced architectures like LSTMs and Transformers.
- Reinforcement Learning: Understand the dynamics of reinforcement learning through implementations of DDPG, DQN, SAC, PPO, and Q-learning.
- Supervised Learning: Explore classical supervised learning algorithms, including decision trees, regression models, and support vector machines.
- Unsupervised Learning: Delve into the mysteries of unsupervised learning with implementations like k-means, PCA, and GMM.
- Bayesian Learning: Explore probabilistic reasoning through Bayesian inference, MCMC sampling (Gibbs, Metropolis-Hastings), and Bayesian neural networks.
Whether you're a student, a curious enthusiast, or a seasoned developer looking to solidify your understanding, this repository is designed for you. Our step-by-step implementations and detailed documentation cater to learners at all levels.
Ready to embark on this exciting journey? Let's code, learn, and demystify the world of AI together! 🌐✨
The implementations in this repository are grounded in the following learning paradigms:
Supervised learning is a type of machine learning where the algorithm is trained on a labeled dataset. In a labeled dataset, each input data point is associated with the corresponding correct output, allowing the algorithm to learn the mapping between inputs and outputs. The goal is for the algorithm to make accurate predictions on new, unseen data.
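To make the input-to-output mapping concrete, here is a minimal supervised learner (a sketch written for this README, not repository code): a 1-nearest-neighbour classifier that labels a new point by copying the label of the closest labelled training point.

```python
def nearest_neighbour_predict(train_x, train_y, query):
    """Return the label of the training point closest to `query`."""
    best_label, best_dist = None, float("inf")
    for x, y in zip(train_x, train_y):
        dist = sum((a - b) ** 2 for a, b in zip(x, query))  # squared Euclidean
        if dist < best_dist:
            best_label, best_dist = y, dist
    return best_label

# Labelled dataset: two well-separated clusters with labels 0 and 1.
train_x = [(0.0, 0.0), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9)]
train_y = [0, 0, 1, 1]

print(nearest_neighbour_predict(train_x, train_y, (0.1, 0.3)))  # → 0
print(nearest_neighbour_predict(train_x, train_y, (4.8, 5.2)))  # → 1
```

There is no explicit training step here: the "learning" is simply memorizing labelled examples, which makes the supervised setup easy to see.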
Unsupervised learning involves training algorithms on unlabeled datasets. Unlike supervised learning, there are no predefined output labels. Instead, the algorithm discovers patterns, structures, or relationships within the data on its own. Common tasks in unsupervised learning include clustering and dimensionality reduction.
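A toy sketch of pattern discovery without labels (written for illustration, not taken from the repository): k-means clustering on 1-D data, where the algorithm is never told which group each point belongs to yet still finds the two clusters.

```python
def kmeans_1d(points, centers, iters=10):
    """Lloyd's algorithm: assign each point to its nearest centre,
    then move each centre to the mean of its assigned points."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            idx = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

data = [1.0, 1.2, 0.8, 9.0, 9.3, 8.7]   # two obvious groups, no labels given
final_centers = kmeans_1d(data, centers=[0.0, 5.0])
print(final_centers)                     # roughly [1.0, 9.0]
```

The centres end up near 1 and 9, recovering the hidden group structure purely from distances between points.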
Neural networks are a class of machine learning models inspired by the structure and function of the human brain. They consist of interconnected nodes, or neurons, organized into layers. Neural networks can learn complex patterns and representations through training on labeled data. Deep learning, a subset of neural networks, involves architectures with multiple layers (deep neural networks).
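The smallest member of this family, a single-layer perceptron, can be sketched in a few lines (an illustrative example for this README, not repository code): one "neuron" with two weights and a bias, trained on the labelled AND-gate dataset using the classic perceptron update rule.

```python
def step(z):
    """Threshold activation: fire (1) if the weighted sum is non-negative."""
    return 1 if z >= 0 else 0

def train_perceptron(samples, epochs=20):
    w1, w2, b = 0, 0, 0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = step(w1 * x1 + w2 * x2 + b)
            err = target - pred          # -1, 0, or +1
            w1 += err * x1               # perceptron rule, learning rate 1
            w2 += err * x2
            b += err
    return w1, w2, b

# Labelled AND-gate data: output is 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(data)
print([step(w1 * x1 + w2 * x2 + b) for (x1, x2), _ in data])  # → [0, 0, 0, 1]
```

Stacking many such units into layers, and replacing the step function with differentiable activations so that gradients can flow, is precisely what turns this toy into the MLPs, CNNs, and Transformers in the `neural_network` folder.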
Reinforcement learning is a type of machine learning where an agent learns to make decisions by interacting with an environment. The agent receives feedback in the form of rewards or penalties based on the actions it takes. The goal is for the agent to learn a policy that maximizes cumulative reward over time. Reinforcement learning is commonly used in applications such as game playing, robotics, and autonomous systems.
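The agent-environment loop above can be sketched with tabular Q-learning on a tiny invented environment (this corridor world is made up for illustration and is not one of the repository's scripts): a 4-state corridor where the agent starts at state 0 and receives a reward of 1 for reaching state 3.

```python
import random

N_STATES, ACTIONS = 4, [-1, +1]          # actions: move left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

random.seed(0)
for _ in range(500):                     # episodes of interaction
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: bootstrap from the best action in the next state.
        best_next = max(q[(s_next, a2)] for a2 in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s_next

# The learned greedy policy moves right in every non-terminal state.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)  # → [1, 1, 1]
```

Note how reward information propagates backwards through the Q-table over episodes: the state next to the goal learns first, and earlier states learn from its bootstrapped values.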
Bayesian learning is a statistical framework that combines prior knowledge with new evidence to update and refine our beliefs about uncertain quantities. Unlike traditional machine learning approaches that focus solely on point estimates, Bayesian learning provides a probabilistic framework for reasoning about uncertainty.
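A small worked example of this prior-to-posterior updating (constructed for this README, not repository code): estimating a coin's probability of heads with a Beta prior, which is conjugate to the Bernoulli likelihood, so the update is just counting.

```python
# Beta(a, b) prior over p(heads); Beta(1, 1) is the uniform, "know nothing" prior.
a, b = 1, 1

# Evidence: we observe 8 heads and 2 tails.
heads, tails = 8, 2

# Conjugacy makes the posterior another Beta: add successes and failures
# to the prior pseudo-counts.
a_post, b_post = a + heads, b + tails    # posterior is Beta(9, 3)

posterior_mean = a_post / (a_post + b_post)
print(posterior_mean)  # → 0.75: pulled from the data's 0.8 toward the prior's 0.5
```

The posterior is a full distribution, not just the 0.75 point estimate, so it also quantifies how uncertain we remain after only ten flips; the sampling and variational methods in the `Bayesian Learning` folder exist for models where this update has no such closed form.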
Each algorithm is provided as a standalone Python script. You can run these scripts to see the algorithms in action. Additionally, the code is extensively documented to help you understand the implementation details.