A Mechanistic Interpretability Toolkit for Cross-Layer Transcoder Training and Attribution-Graph Visualization
Updated Mar 5, 2026 - Python
AI Safety research platform for studying personality drift in AI systems using mechanistic interpretability and clinical assessment tools. Complete simulation framework with neural circuit analysis, statistical drift detection, and intervention protocols.
Universal probing and interpretability tool for MLX language models on Apple Silicon
WPE/TME: Text-native languages for encoding semantic structure and temporal relationships. Geometric calculus with formal semantics for AI reasoning.
Combines Conformal Geometric Algebra (CGA) with efficient sequence modeling by introducing a recurrent rotor mechanism and a novel bit-masked hardware kernel that resolves the computational bottleneck of Clifford products.
OKI TRACE: Local LLM observability. See step-by-step, layer-by-layer what your AI thinks. Logit Lens & Attention for HuggingFace models.
📦 Redwood Research's transformer interpretability tools, conveniently packaged in a Docker container for simple and reproducible deployments.
I Asked It to Forget, but It Didn't — A Case of Miscommunication Between AI and Humans
A NeuroAI project using a Bernoulli-inspired fluid-flow analogy to explore how information moves through neural networks. Signal strength in the network is treated as the "pressure" from Bernoulli's equation, the speed of information propagation as the "flow speed of the fluid", and the activation level as the "opening and closing of valves".
Open-source AI cognition layer — circuit-level topology engine producing verifiable FIRE events, bus validation receipts, and falsifiable cognition records in real time. AGPL-3.0.
Framework for evaluating and steering generative image systems using geometry-first metrics, structural stress testing, and constraint-based analysis. Designed to expose compositional collapse, spatial priors, and model failure modes without accessing training data or model internals.
🌐 Explore WPE and TME, text-native languages designed for structural and temporal reasoning, enhancing clarity in semantic calculus.
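Several of the tools listed above (e.g. OKI TRACE) apply the logit-lens technique: projecting each layer's intermediate hidden state through the model's unembedding to see what token the model "currently" predicts. A minimal NumPy sketch of the idea, using toy hidden states and a randomly initialized unembedding matrix rather than any of these repositories' actual APIs (all names here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, vocab = 8, 5
# Hypothetical per-layer residual-stream states for one token position;
# in a real model these would come from forward hooks on each block.
hidden_states = [rng.normal(size=d_model) for _ in range(4)]
W_U = rng.normal(size=(d_model, vocab))  # toy unembedding matrix

def layer_norm(x, eps=1e-5):
    # Final layer norm applied before unembedding, as in GPT-style models.
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def logit_lens(states, W_U):
    """Project each intermediate state through the unembedding
    and return the top token id predicted at each layer."""
    preds = []
    for h in states:
        logits = layer_norm(h) @ W_U
        preds.append(int(np.argmax(logits)))
    return preds

print(logit_lens(hidden_states, W_U))  # one token id per layer
```

With a real HuggingFace model, the same loop would run over the hidden states returned by `output_hidden_states=True`, using the model's own final norm and unembedding weights.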