A Faster Pytorch Implementation of Multi-Head Self-Attention
-
A news article recommendation system (project for the Practical Seminar in Machine Learning, PSI:ML 9).
gpt∀ - A modular, from-scratch implementation of the GPT architecture in PyTorch, covering attention, transformer blocks, and core LLM components.
Notebook used for the Kaggle competition on the NASA C-MAPSS dataset for remaining-useful-life (RUL) prediction.
My PyTorch solution to the Fall 2020 UC Berkeley CS198 ViT homework. (P.S. This is my first experience with ViTs, let alone transformers, so please leave feedback!)
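The repositories listed above all implement some flavor of multi-head self-attention in PyTorch. For orientation, here is a minimal sketch of the standard mechanism; the class name, shapes, and fused QKV projection are illustrative assumptions, not code drawn from any of the listed repos:

```python
# Illustrative sketch of standard multi-head self-attention in PyTorch.
# Not taken from any repository listed above; names and shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiHeadSelfAttention(nn.Module):
    def __init__(self, embed_dim: int, num_heads: int):
        super().__init__()
        assert embed_dim % num_heads == 0, "embed_dim must divide evenly among heads"
        self.num_heads = num_heads
        self.head_dim = embed_dim // num_heads
        # One fused linear projection produces queries, keys, and values.
        self.qkv = nn.Linear(embed_dim, 3 * embed_dim)
        self.out = nn.Linear(embed_dim, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, embed_dim)
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)

        # Split embed_dim across heads: (batch, num_heads, seq_len, head_dim).
        def split(z: torch.Tensor) -> torch.Tensor:
            return z.view(b, t, self.num_heads, self.head_dim).transpose(1, 2)

        q, k, v = split(q), split(k), split(v)

        # Scaled dot-product attention, computed independently per head.
        scores = q @ k.transpose(-2, -1) / (self.head_dim ** 0.5)  # (b, h, t, t)
        weights = F.softmax(scores, dim=-1)
        attended = weights @ v  # (b, h, t, head_dim)

        # Re-merge the heads and project back to embed_dim.
        merged = attended.transpose(1, 2).contiguous().view(b, t, d)
        return self.out(merged)


# Quick shape check.
mha = MultiHeadSelfAttention(embed_dim=64, num_heads=8)
x = torch.randn(2, 10, 64)
print(mha(x).shape)  # torch.Size([2, 10, 64])
```

Faster variants (such as the one this topic's headline repo advertises) typically keep this interface but fuse or batch the per-head work; recent PyTorch versions also expose `torch.nn.functional.scaled_dot_product_attention` for the inner attention step.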