Releases: clanker-lover/codedocent
v1.0.0 — Architecture Mode
What's New
Architecture Mode — Visualize your codebase as a zoomable dependency graph
Run `codedocent /path --arch` or choose option 4 in the wizard.
- Level 0: See all modules and how they connect
- Level 1: Drill into a module, see files and their dependencies
- Level 2: Click through to the existing CodeDocent file view
- Export MD button generates context for AI thinking nodes
Enhanced AI Summaries
- AI now knows what each file imports and what imports it
- New ROLE section explains the file's job in the system
- New KEY CONCEPTS section lists main functions/classes
- Better explanations focused on data flow, not syntax
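The ROLE and KEY CONCEPTS section names come from the notes above; the surrounding layout and every file/function name below are invented purely to illustrate what a summary with these sections might look like:

```text
ROLE
  Entry point for the CLI: parses arguments and hands the scan off to
  the analyzer. Imported by: __main__. Imports: scanner, ollama_utils.

KEY CONCEPTS
  main()       - argument parsing and dispatch
  run_wizard() - interactive setup flow
```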
Install/Upgrade
pip install --upgrade codedocent
v0.5.0 — Cloud AI Support
What's New
Cloud AI is now a first-class option. Use OpenAI, OpenRouter, Groq, or any OpenAI-compatible endpoint to get code explanations — no local setup required.
Highlights
- Three ways to get AI explanations: cloud provider, local Ollama, or no AI at all
- Setup wizard — run `codedocent` with no args and it walks you through choosing a backend, provider, and model
- Env-var-only API keys — keys are never passed as CLI arguments; read from `OPENAI_API_KEY`, `OPENROUTER_API_KEY`, `GROQ_API_KEY`, or `CODEDOCENT_API_KEY`
- Custom endpoints — point at any OpenAI-compatible API with `--cloud custom --endpoint <url>`
- 8 security fixes — SSRF protection, secret masking, response size limits, HTTPS enforcement, input validation, and more
Install / upgrade
pip install --upgrade codedocent

Quick start
codedocent # setup wizard
codedocent /path/to/code --cloud openai # cloud AI
codedocent /path/to/code # local Ollama (default)
codedocent /path/to/code --no-ai # structure only

v0.4.0 — Replace Code expansion + quality scoring overhaul
What's New
- Replace Code on all blocks — now works on files, not just functions/methods/classes. Directory replacement still blocked.
- Quality scoring overhaul — removed line-count scoring entirely. Radon complexity + parameter count remain. Thresholds adjusted: A/B/C = clean, D = high complexity, E/F = severe complexity.
- Security hardening — 1MB payload limit (byte-enforced), template file write protection, two rounds of hostile code review by multiple AI models.
- Updated warning labels — D grade shows 'High complexity', E/F shows 'Severe complexity'.
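The grade-to-label mapping and the byte-enforced payload limit described above can be sketched as follows; the function names and signatures are illustrative, not codedocent's actual API:

```python
MAX_PAYLOAD_BYTES = 1_048_576  # 1 MB, enforced on encoded bytes, not characters

def payload_too_large(text: str) -> bool:
    # Byte-level check: multi-byte UTF-8 characters count at their full size,
    # so a character-count check alone would under-enforce the limit.
    return len(text.encode("utf-8")) > MAX_PAYLOAD_BYTES

def complexity_label(grade: str) -> str:
    # v0.4.0 thresholds: A/B/C = clean, D = high, E/F = severe complexity.
    grade = grade.upper()
    if grade in ("A", "B", "C"):
        return "Clean"
    if grade == "D":
        return "High complexity"
    if grade in ("E", "F"):
        return "Severe complexity"
    raise ValueError(f"unknown grade: {grade}")
```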
Full changelog in commit history.
v0.3.0 — Security & Robustness Hardening
42 fixes from a 5-model AI audit (ChatGPT, Gemini, Grok, DeepSeek, Kimi). Covers CSRF protection, symlink containment, atomic file writes, input validation, encoding safety, timeout handling, and more. No new features — this release is entirely about making the existing functionality bulletproof.
v0.2.1
Imports now collapse behind a toggle instead of displaying as a long list. Cleaner visual output.
v0.2.0
What's New
Setup wizard
Run codedocent with no arguments to launch an interactive wizard that walks you through folder selection, Ollama detection, model picking, and mode choice. No flags to memorize.
GUI launcher
A tkinter window with folder picker, model dropdown, and mode selector. Launch via codedocent --gui or the new codedocent-gui entry point.
Always-visible code buttons
Code action buttons (Show Code, Export Code, Copy for AI, Replace Code) now appear immediately on every block — no longer gated behind AI analysis completing.
New /api/source/ endpoint
A lightweight GET /api/source/{node_id} endpoint returns source code without triggering AI analysis, so clicking "Show Code" is instant.
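Only the `GET /api/source/{node_id}` path comes from the release notes; the host/port (`localhost:8000` here) and the plain-text response shape are assumptions in this client sketch:

```python
import urllib.request

def source_url(node_id: str, base: str = "http://localhost:8000") -> str:
    # Build the URL for the lightweight source endpoint.
    return f"{base}/api/source/{node_id}"

def fetch_source(node_id: str, base: str = "http://localhost:8000") -> str:
    # GET the raw source for a node; per the notes, this endpoint does not
    # trigger AI analysis, so the response should be near-instant.
    with urllib.request.urlopen(source_url(node_id, base), timeout=5) as resp:
        return resp.read().decode("utf-8")
```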
Shared Ollama utilities
Extracted check_ollama() and fetch_ollama_models() into codedocent/ollama_utils.py, shared by both CLI and GUI.
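The extracted helpers themselves aren't shown in the notes. As a rough sketch of what a shared availability check could look like (the real `check_ollama()`/`fetch_ollama_models()` may differ): Ollama serves `GET /api/tags` on port 11434 by default, returning a JSON body of the form `{"models": [{"name": ...}, ...]}`.

```python
import urllib.request
from urllib.error import URLError

OLLAMA_URL = "http://localhost:11434"

def check_ollama(base: str = OLLAMA_URL) -> bool:
    # True if an Ollama server answers on its default endpoint.
    try:
        with urllib.request.urlopen(f"{base}/api/tags", timeout=2):
            return True
    except (URLError, OSError):
        return False

def parse_model_names(tags_payload: dict) -> list[str]:
    # Pull installed model names out of a decoded /api/tags response.
    return [m["name"] for m in tags_payload.get("models", [])]
```

Centralizing this avoids the CLI wizard and the tkinter GUI each maintaining their own copy of the same probe.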
Install or upgrade:
pip install codedocent==0.2.0

Quality:
- 93 tests passing
- pylint 10/10, flake8/mypy/bandit all clean