This repository contains my practice work based on Andrej Karpathy’s “Makemore” series.
The course focuses on building character-level language models from scratch using Python and PyTorch.
Throughout the series, Karpathy walks through the process of creating neural networks capable of generating text, starting with simple bigram models and progressing to fully trained multilayer neural networks.
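To give a flavor of the starting point, here is a minimal sketch of the counting-based bigram model the series opens with. The tiny word list and the `sample_name` helper are illustrative stand-ins, not the dataset or code from the videos:

```python
import random

# Count how often each character follows another, using "." as a
# start/end-of-word marker, then sample new names from those counts.
words = ["emma", "olivia", "ava"]  # illustrative toy data

counts = {}
for w in words:
    chars = ["."] + list(w) + ["."]
    for c1, c2 in zip(chars, chars[1:]):
        counts[(c1, c2)] = counts.get((c1, c2), 0) + 1

def sample_name(rng):
    """Sample one name by repeatedly drawing the next character
    in proportion to its bigram count."""
    out, prev = [], "."
    while True:
        choices = [(c2, n) for (c1, c2), n in counts.items() if c1 == prev]
        chars, weights = zip(*choices)
        prev = rng.choices(chars, weights=weights)[0]
        if prev == ".":
            return "".join(out)
        out.append(prev)

print(sample_name(random.Random(0)))
```

The later lessons replace the count table with learned parameters, but the sampling loop stays conceptually the same.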
Follow along with the full course on YouTube:
➡️ Neural Networks: Zero to Hero (Playlist)
- Step-by-step implementations following each lesson in the Makemore / Zero-to-Hero series
- Experiments and notes exploring key concepts such as:
  - tokenization
  - embeddings
  - backpropagation
  - optimization and training
- Custom extensions and tweaks to better understand how generative models work under the hood
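The last two concepts above (backpropagation and optimization) boil down to a loop of forward pass, loss, gradient, and update. A hand-rolled sketch with a manually derived gradient, using toy data and a learning rate chosen only for illustration:

```python
import numpy as np

# Fit a linear model y ≈ X @ w by gradient descent on mean squared error.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # toy inputs
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)   # noisy linear targets

w = np.zeros(3)
lr = 0.1
for step in range(200):
    pred = X @ w                          # forward pass
    loss = np.mean((pred - y) ** 2)       # mean squared error
    grad = 2 * X.T @ (pred - y) / len(y)  # gradient of the loss w.r.t. w
    w -= lr * grad                        # gradient descent update

print(w)  # should land near true_w
```

In the actual lessons, PyTorch's autograd computes the gradients, but writing one out by hand like this is the low-level understanding the series is after.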
The aim of this work is to deepen my understanding of neural networks and gain hands-on experience with language modeling — learning not just how to use machine learning libraries, but how they actually function at a low level.
- Python
- PyTorch
- NumPy
- Matplotlib (for visualization)
Special thanks to Andrej Karpathy for the excellent Neural Networks: Zero to Hero series and for making deep learning education open and accessible to everyone.