Staff AI Product Engineer & Independent AI Researcher. 20 years shipping production systems across fintech, identity, and distributed infrastructure — now applying that discipline to LLM evaluation, agent orchestration, and scalable oversight.
making-minds.ai · Research · Substack
Scalable AI Oversight — Measuring where verification breaks down and building ensemble fixes. Verification Swarm · CMED · HDCS · Model Organisms · argos-swarm · cmed-toolkit
Multi-Agent Coordination — Efficient, safe communication protocols for agent swarms. Slipstream (82% token reduction) · Covert Channel Prevention · slipcore · Ollama · pip install slipcore
Cognitive Architectures — Persistent memory, coherence, and safe self-extension for long-lived agents. Coherence-Seeking Architectures · Continuity Core · Self-Directed Knowledge Acquisition · Synthesis · CoDA-GQA-L (9.5x KV-cache compression)
AI Safety & Alignment — Structural failure modes: sycophancy, hallucination, introspection. Epistemic Dissonance · Scaffolded Introspection
Applied AI — Concrete Intelligence: a public-domain book on AI deployment for heavy industry.
ORCID · Google Scholar · ResearchGate · Hugging Face · LinkedIn · GitHub
Seeking Staff+, EM, Director, or Research roles in agentic applications, AI safety, eval infrastructure, agent reliability, and scalable oversight. anthony@making-minds.ai