CLI proxy that reduces LLM token consumption by 60-90% on common dev commands. Single Rust binary, zero dependencies
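Most tools in this family share one mechanism: wrap the dev command, drop the lines an LLM does not need, and append a short summary of what was omitted. A minimal Python sketch of that idea (illustrative only, not this repo's code; the keyword heuristic is invented for the example):

    import subprocess
    import sys

    # Keep only lines likely to matter to an LLM (hypothetical heuristic).
    KEEP = ("error", "warning", "failed", "panicked")

    def run_filtered(cmd):
        proc = subprocess.run(cmd, capture_output=True, text=True)
        lines = (proc.stdout + proc.stderr).splitlines()
        kept = [ln for ln in lines if any(k in ln.lower() for k in KEEP)]
        dropped = len(lines) - len(kept)
        # Replace the omitted bulk with a one-line summary the model can trust.
        return "\n".join(kept + [f"[exit {proc.returncode}; {dropped} lines omitted]"])

    if __name__ == "__main__":
        print(run_filtered(sys.argv[1:]))

Run as `python filter.py cargo build`: hundreds of lines of progress output collapse to the handful of diagnostics plus an exit-status summary.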
The Context Optimization Layer for LLM Applications
Reduce AI coding costs by up to 99%: MCP server + shell hook for Cursor, Claude Code, Copilot, Windsurf, Gemini CLI, and 24 tools in all. Single Rust binary, zero telemetry.
Sharper context. Fewer tokens. Open-source middleware for Claude Code.
Working memory for Claude Code - persistent context and multi-instance coordination
Find the ghost tokens. Fix them. Survive compaction. Avoid context quality decay.
Stop Claude Code from burning through your quota in 20 minutes. Auto-rotates oversized sessions and preserves context.
An MCP server that executes Python code in isolated rootless containers, with optional MCP server proxying. An implementation of Anthropic's and Cloudflare's ideas for reducing the context bloat of MCP tool definitions.
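The idea from those posts, in brief: instead of loading dozens of MCP tool schemas into the model's context, expose a single code-execution tool and let the agent call everything else from code. A stripped-down sketch using the official Python MCP SDK (the container isolation this repo provides is elided; a bare exec stands in for it):

    import contextlib
    import io
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("code-exec")

    @mcp.tool()
    def run_python(code: str) -> str:
        """Execute Python and return captured stdout. Because the agent writes
        code that calls downstream services itself, their tool definitions
        never need to occupy the model's context."""
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf):
            exec(code, {})  # the real server runs this inside a rootless container
        return buf.getvalue()

    if __name__ == "__main__":
        mcp.run()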
Production-ready modular Claude Code framework with 30+ commands, token optimization, and MCP server integration. Achieves 2-10x productivity gains through systematic command organization and hierarchical configuration.
Up to 71.5x fewer tokens per session on Claude Code with Obsidian + Graphify. Persistent memory, codebase knowledge graphs, and chat import pipeline. 🇧🇷 PT-BR included.
Generate a compact codebase index for AI assistants — saves 50K+ tokens per conversation
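One way such an index can work, sketched below: one line per file listing its top-level symbols, so the assistant can navigate a codebase without reading whole files into context. The output format and function names here are hypothetical, not this repo's actual scheme:

    import ast
    import pathlib

    def index_repo(root="."):
        lines = []
        for path in sorted(pathlib.Path(root).rglob("*.py")):
            try:
                tree = ast.parse(path.read_text(encoding="utf-8"))
            except (SyntaxError, UnicodeDecodeError):
                continue
            # Collect top-level functions and classes only; bodies stay on disk.
            symbols = [node.name for node in tree.body
                       if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))]
            if symbols:
                lines.append(f"{path}: {', '.join(symbols)}")
        return "\n".join(lines)

    print(index_repo())

A few hundred bytes per file stand in for thousands of tokens of source in the conversation prefix.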
Your agents are guessing at APIs. Give them the actual agent-native spec: 1,500+ ready-to-use API skills. Compiles any API spec (OpenAPI, GraphQL, AsyncAPI, Protobuf, Postman) into a lean, agent-native format, 10× smaller.
CLI proxy that reduces LLM token usage by 60-90%. Declarative YAML filters for Claude Code, Cursor, Copilot, Gemini. rtk alternative in Go.
Entroly-Daemon: a self-evolving daemon. Compresses 2M-token repos into a razor-sharp principal engineer's context. 95% fewer tokens. Built for Cursor, Claude Code, Opus, Codex, GPT & Copilot.
Config-driven CLI tool that compresses command output before it reaches the LLM's context
Independent research on Claude Code internals, Claude Agent SDK, and related tooling.
TOON encoding for Laravel. Encode data for AI/LLMs with ~50% fewer tokens than JSON.
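TOON's savings come mostly from tabular arrays of uniform objects. JSON repeats every key per element:

    {"users": [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Bob"}]}

The TOON equivalent declares the keys once as a header, then emits rows (shape per the public TOON spec; the Laravel package's exact output may differ):

    users[2]{id,name}:
      1,Ada
      2,Bob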
Automatic prompt caching for Claude Code. Cuts token costs by up to 90% on repeated file reads, bug fix sessions, and long coding conversations - zero config.
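Under the hood this leans on Anthropic's documented prompt-caching API: mark a stable prefix with cache_control, and repeated requests reuse it at a fraction of the input price. The raw API call looks roughly like this (the zero-config wiring into Claude Code is this repo's addition; the file path and prompt are invented for the example):

    import anthropic

    client = anthropic.Anthropic()
    big_file = open("src/main.rs").read()  # hypothetical large file read repeatedly

    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        system=[{
            "type": "text",
            "text": "You are debugging this file:\n" + big_file,
            "cache_control": {"type": "ephemeral"},  # cache the prefix up to here
        }],
        messages=[{"role": "user", "content": "Why does this panic on empty input?"}],
    )
    print(response.content[0].text)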
A smart context filter that removes noise, improves responses, and reduces token usage by up to 90%
The context spine for AI coding agents. 8 providers, 88% proven token savings, 5 IDE integrations (Claude Code, Continue.dev, Cursor, Zed, Aider). Hook-based Read interception, HTTP API, tree-sitter AST, auto-tuning. Local SQLite, zero cloud.
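The "hook-based Read interception" mentioned here rides on Claude Code's hooks feature: a settings entry can run a command before a matching tool call. A minimal shape (the command name is hypothetical; the hook script receives the pending Read call as JSON on stdin and can allow or deny it):

    {
      "hooks": {
        "PreToolUse": [
          {
            "matcher": "Read",
            "hooks": [
              {"type": "command", "command": "context-spine-intercept"}
            ]
          }
        ]
      }
    }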