Founder of Cubic Consulting — helping enterprises walk the first mile of AI adoption.
I believe the hardest part of AI transformation isn't the technology. It's the decisions people make before the technology arrives — and the organizational courage to commit when the data runs out.
I build open-source Agent Skills that turn philosophy into operational protocols for AI systems: how to define goals, structure memory, correct drift, audit bias, switch reasoning modes, and decide when humans must stay in the loop.
These projects are not generic prompt packs. Each one takes a serious idea from philosophy, psychology, or control theory and turns it into a concrete protocol for agents and human-AI workflows.
🦅 Leap of Faith
Decision guidance under uncertainty. Built on Kierkegaard and Polanyi, for moments when rational analysis runs out but a real commitment still has to be made.
⚡ High Agency
From stuck to started. A skill for activating motion, ownership, and initiative when people or teams know what matters but still cannot move.
🎯 Goal Clarifier
Define telos before design. Turns vague requests into executable briefs by forcing explicit goals, constraints, and success criteria.
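A minimal sketch of what "executable brief" means here, assuming a simple schema of my own for illustration (the skill defines its own fields):

```python
# Illustrative only: these field names are assumptions, not the skill's actual schema.
from dataclasses import dataclass, field

@dataclass
class Brief:
    goal: str                                                   # the telos: the outcome the work serves
    constraints: list[str] = field(default_factory=list)        # hard limits such as budget, latency, policy
    success_criteria: list[str] = field(default_factory=list)   # how we will know it worked

def is_executable(brief: Brief) -> bool:
    """A request only becomes a brief once goal, constraints, and success criteria are all explicit."""
    return bool(brief.goal.strip()) and bool(brief.constraints) and bool(brief.success_criteria)
```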
🎛️ Feedback Controller
Closed-loop correction for execution drift. Measures deviation, localizes error sources, and chooses the right corrective action instead of blindly retrying.
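A toy version of that loop, with made-up thresholds and action names; the actual protocol localizes error sources more carefully than a single magnitude check:

```python
# Toy sketch: thresholds and action names are assumptions for illustration.
def correct(target: float, observed: float, tolerance: float = 0.05) -> str:
    """Measure the deviation first, then choose a corrective action; never retry blindly."""
    deviation = abs(observed - target)
    if deviation <= tolerance:
        return "accept"            # within spec: stop, do not keep polishing
    if deviation > 10 * tolerance:
        return "revisit_plan"      # the error is structural: fix the approach, not the output
    return "adjust_and_retry"      # small drift: apply a targeted correction, then re-measure
```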
🗂️ Memory Taxonomist
Structured memory design. Separates facts, preferences, procedures, unresolved questions, and exceptions so retrieval stays useful.
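A sketch of that taxonomy as a type; the class and field names are illustrative assumptions, and only the five categories come from the skill:

```python
# Illustrative sketch: names are assumptions; only the five categories come from the skill.
from dataclasses import dataclass
from enum import Enum

class MemoryKind(Enum):
    FACT = "fact"                    # stable, verifiable statements
    PREFERENCE = "preference"        # how this user or team wants things done
    PROCEDURE = "procedure"          # repeatable steps that have worked before
    OPEN_QUESTION = "open_question"  # unresolved items that still need an answer
    EXCEPTION = "exception"          # cases where the usual rule did not apply

@dataclass
class MemoryItem:
    kind: MemoryKind
    content: str

def recall(store: list[MemoryItem], kind: MemoryKind) -> list[MemoryItem]:
    """Retrieval stays useful because each kind can be queried on its own."""
    return [m for m in store if m.kind == kind]
```

`recall(store, MemoryKind.PROCEDURE)` then returns only repeatable steps, untangled from facts and open questions.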
🔁 Loop Stability Check
Workflow stability for agents. Detects dead retries, oscillation, drift, and feedback starvation before loops waste more time or amplify errors.
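A toy detector for two of those failure modes, dead retries and oscillation; the window size and return values are assumptions, not the skill's actual heuristics:

```python
# Toy sketch: window size and signal names are assumptions for illustration.
def loop_health(history: list[str], window: int = 4) -> str:
    """Inspect the last few attempts before allowing the loop to run again."""
    recent = history[-window:]
    if len(recent) >= 2 and len(set(recent)) == 1:
        return "dead_retry"    # identical attempts repeated verbatim: retrying will not help
    if len(recent) >= 4 and recent[-1] == recent[-3] and recent[-2] == recent[-4] and recent[-1] != recent[-2]:
        return "oscillation"   # A-B-A-B pattern: the loop is fighting its own corrections
    return "ok"
```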
🧠 Bias Audit
Decision-framing audit. Surfaces anchoring, loss aversion, false binaries, and loaded wording before they quietly decide the answer.
🌓 Dual-Mode Reasoner
Risk-aware reasoning depth. Keeps low-risk tasks fast, but switches into deliberate mode when stakes, irreversibility, or ambiguity demand it.
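A sketch of the switching rule; the risk signals and thresholds here are illustrative assumptions, and the skill defines its own criteria:

```python
# Illustrative sketch: signals and thresholds are assumptions, not the skill's criteria.
def reasoning_mode(stakes: int, irreversible: bool, ambiguity: int) -> str:
    """stakes and ambiguity on a 1-5 scale; any strong signal forces deliberate mode."""
    if irreversible or stakes >= 4 or ambiguity >= 4:
        return "deliberate"   # slow down: surface options, check assumptions, invite review
    return "fast"             # low risk: answer directly and keep the loop cheap
```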
- Goal before capability — A strong agent with a weak telos is just a fast mistake.
- Feedback before confidence — Output quality comes from closed-loop correction, not one-shot eloquence.
- Classification before memory — If everything is remembered the same way, nothing useful is retrievable.
- Calibration before autonomy — Reasoning depth, human oversight, and retry behavior should match risk.
- Prompt Engineering — Crafting the right instructions for AI to follow
- Context Engineering — Designing the knowledge and memory that AI agents carry
- Harness Engineering — Building the constraints, feedback loops, and guardrails where agents do their best work
- First-Mile Problems — The messy, human, organizational challenges of bringing AI into the real world
When data runs out, wisdom begins.
