Releases: melagiri/code-insights
v4.10.2 — Doctor command docs
Improved
- README: document `doctor` command — Added a dedicated Diagnostics section to `cli/README.md` covering the command's flags (`--fix`, `--verbose`, `--json`) with usage guidance. Also added `doctor` to the quick-start individual commands list.
Full Changelog: v4.10.1...v4.10.2
v4.10.1 — Doctor Command
Added
- `code-insights doctor` command — A Flutter/Homebrew-style diagnostic command that checks your installation across 8 areas: environment, database, config, session sources, AI analysis, hooks, sync state, and dashboard. ~30 individual checks with actionable fix hints. Supports `--fix` (applies safe idempotent fixes automatically), `--verbose` (shows probed paths for skipped items), and `--json` (machine-readable output for sharing in bug reports). First-run mode shows a step-by-step setup guide when nothing is configured yet.
Improved
- Hook utility extraction — Shared hook logic (`HOOKS_FILE`, `CLI_ENTRY`, `hookAlreadyInstalled()`) extracted from `install-hook.ts` into `utils/hooks-utils.ts`, making it reusable across commands.
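A minimal sketch of what such a shared module might look like. The file path, entry string, and detection heuristic below are assumptions for illustration; the real module may differ:

```typescript
// Hypothetical utils/hooks-utils.ts: shared constants and helpers used by
// install-hook and any other command that needs to inspect hook state.
import * as os from "os";
import * as path from "path";

// Assumed location of the hooks settings file.
export const HOOKS_FILE = path.join(os.homedir(), ".claude", "settings.json");

// Assumed string that identifies our hook entry inside that file.
export const CLI_ENTRY = "code-insights hook";

// Pure helper: detect whether the hook entry is already present in the
// file's content, so installation stays idempotent.
export function hookAlreadyInstalled(hooksFileContent: string): boolean {
  return hooksFileContent.includes(CLI_ENTRY);
}
```

Keeping the helper pure (it takes file content rather than reading the file itself) makes it trivial for multiple commands to reuse and test.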
Full Changelog: v4.10.0...v4.10.1
v4.10.0 — Source Tool Filtering
Added
- Source tool filter across all dashboard pages — A new source tool selector (with color-coded dots for Claude Code, Cursor, Codex CLI, Copilot CLI, and Copilot) is now available on the Sessions, Insights, Analytics, and Knowledge Journal pages. Filter any view to a specific AI coding tool to see only its sessions and insights.
Fixed
- Source filter empty state — The "no results" empty state on the Sessions page now correctly activates when only the source filter is set (previously it would not detect source as an active filter).
- Insights and Journal session map limit — Session-to-source lookups on the Insights and Journal pages now fetch up to 500 sessions, consistent with Analytics. The previous server default of 50 would silently miss sessions for users with larger history.
Full Changelog: v4.9.7...v4.10.0
v4.9.7 — Cursor Message Parsing (Complete Fix)
Fixed
- Cursor raw JSON in messages (complete fix) — The v4.9.6 fix correctly parsed Lexical JSON for new sessions, but existing sessions already stored in the database kept their raw JSON content because `INSERT OR IGNORE` never overwrites existing rows. `--force` sync now uses `INSERT OR REPLACE` for messages, so re-parsing overwrites stale content. Running `code-insights sync --force` will fix all affected sessions.
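The behavior change can be illustrated with a small sketch. The table and column names here are assumptions, not the actual schema:

```typescript
// Sketch: choose the message-insert statement based on force mode.
// INSERT OR IGNORE keeps existing rows (fast incremental sync);
// INSERT OR REPLACE overwrites them, so re-parsed content wins on --force.
function messageInsertSql(force: boolean): string {
  const verb = force ? "INSERT OR REPLACE" : "INSERT OR IGNORE";
  // Hypothetical schema: messages(id, session_id, role, content).
  return `${verb} INTO messages (id, session_id, role, content) VALUES (?, ?, ?, ?)`;
}
```

In SQLite, `OR REPLACE` deletes the conflicting row and inserts the new one, which is exactly what lets the re-parsed text replace stale raw JSON already in the database.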
Full Changelog: v4.9.6...v4.9.7
v4.9.6 — Cursor Message Parsing Fix
Fixed
- Cursor raw JSON in messages — User messages in Cursor sessions were displaying raw Lexical editor JSON (`{"root":{"children":[...`) instead of the actual message text. Newer Cursor versions store the Lexical editor state in the `text` bubble field rather than `richText`. The parser now detects and unwraps Lexical JSON from either field.
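Detecting and unwrapping Lexical state can be sketched like this. It is a simplified walk under the assumption that text lives in `text` fields of the node tree; the real parser likely handles more node types:

```typescript
// Sketch: if a bubble field holds serialized Lexical editor state, recover
// the plain text by walking the node tree and concatenating text nodes.
interface LexicalNode {
  type?: string;
  text?: string;
  children?: LexicalNode[];
}

function unwrapLexical(raw: string): string {
  if (!raw.startsWith('{"root"')) return raw; // not Lexical state; use as-is
  try {
    const state = JSON.parse(raw) as { root?: LexicalNode };
    const parts: string[] = [];
    const walk = (node: LexicalNode): void => {
      if (typeof node.text === "string") parts.push(node.text);
      (node.children ?? []).forEach(walk);
    };
    if (state.root) walk(state.root);
    return parts.join("").trim() || raw;
  } catch {
    return raw; // malformed JSON: fall back to the raw content
  }
}
```

Falling back to the raw string on any parse failure keeps the function safe to run on every message, whichever field the content came from.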
Full Changelog: v4.9.5...v4.9.6
v4.9.5 — Cursor Session Parsing Accuracy
Fixed
- Cursor timestamps — All Cursor sessions previously showed epoch timestamps because `bubble.createdAt` does not exist in Cursor's storage format. Timestamps are now extracted from `timingInfo.clientRpcSendTime` on assistant bubbles (the actual Unix-ms wall clock). Sessions missing timing data fall back to `composerData.createdAt`/`lastUpdatedAt`, then epoch.
- Cursor cost tracking — Cursor sessions were showing $0 in the cost dashboard. Session cost is now populated from `composerData.usageData.default.costInCents`.
- Cursor token counts — Token usage is now aggregated from `tokenCount.inputTokens`/`tokenCount.outputTokens` across all assistant bubbles per session. Previously always `null`.
- Cursor git branch — `gitBranch` is now extracted from the `gitStatusRaw` field present on user bubbles (`/^On branch (.+)/m`). Previously hardcoded `null`.
- `messageCount` consistency — All non-Claude-Code providers (Cursor, Codex, Copilot CLI, VS Code Copilot Chat) now compute `messageCount` as `userMessageCount + assistantMessageCount`, consistent with the Claude Code provider. System messages are excluded from the semantic count.
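The branch extraction, for example, might look roughly like this. Only the `/^On branch (.+)/m` pattern comes from the release note; the function shape is an assumption:

```typescript
// Sketch: recover the branch name from a `git status`-style dump stored on
// user bubbles. Returns null when no "On branch" line is present, matching
// the old hardcoded-null behavior for sessions without git context.
function extractGitBranch(gitStatusRaw: string | undefined): string | null {
  if (!gitStatusRaw) return null;
  const match = gitStatusRaw.match(/^On branch (.+)/m);
  return match ? match[1].trim() : null;
}
```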
Full Changelog: v4.9.4...v4.9.5
v4.9.4 — Sync Output Polish
Improved
- Cleaner sync output — Removed redundant discovery messages ("[claude-code] Discovered 598 JSONL files" and "total session files discovered"). The spinner is sufficient feedback during discovery.
- Session counts in "up to date" status — Providers with nothing to sync now show `✔ Up to date (170 sessions)` instead of just the provider name.
- Condensed telemetry notice — Replaced the 7-line telemetry disclosure banner with a single dim line: `Telemetry enabled · Disable: code-insights telemetry disable`.
- Suppressed internal housekeeping messages — The "Usage stats reconciled" message no longer appears after sync.
Full Changelog: v4.9.3...v4.9.4
v4.9.3 — Sync Performance & CLI Polish
Fixed
- Empty files re-parsed on every sync — When `provider.parse()` returned `null` (empty or unsupported files), the file was not tracked in sync state. This caused those files to be re-discovered, re-stat'd, and re-parsed on every `code-insights sync` run. Now tracked with an `'__empty__'` sentinel in sync state, eliminating ~117 wasted file operations per sync for typical setups.
- Telemetry banner shown on `--version` — Running `code-insights --version` displayed the verbose telemetry disclosure notice. Now skipped for `--version`, `-V`, `--help`, and `-h` flags.
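The sentinel idea can be sketched as follows. The sync-state shape (a path-to-hash map) is an assumption; only the `'__empty__'` value comes from the release note:

```typescript
// Sketch: record files whose parse yielded nothing under a sentinel value so
// the next sync can skip them without re-reading or re-parsing them.
const EMPTY_SENTINEL = "__empty__";

type SyncState = Map<string, string>; // file path -> content hash or sentinel

function shouldParse(state: SyncState, filePath: string, hash: string): boolean {
  const recorded = state.get(filePath);
  // Skip files synced at this hash, and skip known-empty files entirely.
  return recorded !== hash && recorded !== EMPTY_SENTINEL;
}

function recordResult(
  state: SyncState,
  filePath: string,
  hash: string,
  parsed: unknown
): void {
  state.set(filePath, parsed === null ? EMPTY_SENTINEL : hash);
}
```

Previously a `null` parse left no entry in sync state at all, so the same empty files looked "new" on every run; the sentinel closes that gap.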
Improved
- Cleaner sync output — Replaced verbose per-provider file counts ("Found 379 files / 69 need syncing / 310 already synced / 69 empty") with concise status lines: "up to date" or "Synced 1 new, 1 updated (583 messages)".
Full Changelog: v4.9.2...v4.9.3
v4.9.2 — llama.cpp Token Budget Fix
Fixed
- llama.cpp token budget overflow — Session analysis against local `llama-server` failed with `exceed_context_size_error` (HTTP 400) because the token budget didn't account for ~3K tokens of prompt overhead and 4K tokens reserved for output. Reduced the effective conversation budget from 24K to 12K tokens and changed the token estimation heuristic from `chars/4` to `chars/3` (more conservative for code-heavy content). Sessions that still exceed the context window now get a clear error message with the exact token counts and a suggested `-c` flag value.
- `<json>` tag wrapping in llama.cpp responses — Small local models (e.g., Gemma 4) sometimes wrap valid JSON in `<json>...</json>` tags despite `response_format: json_object` being set. This caused the JSON validation retry to fail on both attempts, wasting inference time. Added provider-level tag stripping before JSON validation.
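Both fixes are simple to sketch. The regex and the exact heuristic below are illustrative, not the project's implementation:

```typescript
// Sketch: strip a <json>...</json> wrapper that some small models emit
// before handing the payload to JSON validation.
function stripJsonTags(raw: string): string {
  const match = raw.trim().match(/^<json>\s*([\s\S]*?)\s*<\/json>$/);
  return match ? match[1] : raw.trim();
}

// Sketch: conservative token estimate for code-heavy content (chars/3
// instead of chars/4), used when packing a conversation into the budget.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 3);
}
```

Stripping happens at the provider level, so the downstream JSON validator sees the same clean payload regardless of whether the model honored `response_format`.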
Improved
- llama.cpp inference timeout — Increased default timeout from 2 minutes to 10 minutes. Local CPU inference at ~6-10 tok/s can take 3-5 minutes for a full session analysis; the old timeout was too aggressive.
- llama.cpp output budget — Reduced `max_tokens` from 8192 to 4096 to halve inference time on CPU and better fit within typical context windows.
Full Changelog: v4.9.1...v4.9.2
v4.9.1 — llama.cpp Test Connection Fix
Fixed
- llama.cpp dashboard test connection — The "Test Connection" button on the Settings page returned a 422 error for llama.cpp providers. The test prompt requested plain text `"ok"`, but the llamacpp provider enforces JSON mode on all responses — small local models replied with literal `ok`, failing JSON validation. Changed the test prompt to request JSON output, fixing compatibility with both llama.cpp and Gemini (which also uses JSON mode).
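A JSON-mode-friendly connectivity test can be sketched as follows. The prompt wording and expected shape are assumptions, not the actual test prompt:

```typescript
// Sketch: a test prompt that works under forced JSON mode. Instead of asking
// for the literal text "ok", ask for a tiny JSON object and validate that.
const TEST_PROMPT =
  'Reply with exactly this JSON object and nothing else: {"status":"ok"}';

function validateTestResponse(body: string): boolean {
  try {
    const parsed = JSON.parse(body);
    return typeof parsed === "object" && parsed !== null;
  } catch {
    return false; // plain-text replies like "ok" fail JSON validation
  }
}
```

Validating only that the reply parses as a JSON object (rather than matching exact text) keeps the test tolerant of providers that always emit JSON, such as llama.cpp's JSON mode and Gemini.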
Full Changelog: v4.9.0...v4.9.1