Distribution-transparent machine learning experiments on Apache Spark
Universal probing and interpretability tool for MLX language models on Apple Silicon
An unofficial implementation of the Point Transformer paper, extended with a classifier
Controlled depth ablation of a BERT bi-encoder across training budgets and seeds on three BEIR tasks (nfcorpus, scifact, fiqa). At 20K steps, performance from layer 3 through layer 12 is flat within seed noise; training to 80K steps degrades zero-shot transfer at every depth (−45% NDCG@10 on fiqa for the 12-layer model).
Reverse engineering the circuit responsible for the "greater than" capability in a language model