PR anomalyco#14743 — fix(cache): improve Anthropic prompt cache hit rate
- Split system prompt into stable (global) + dynamic (project) blocks
- Remove cwd from bash tool schema (the per-repo path was busting the cache)
- Freeze date under OPENCODE_EXPERIMENTAL_CACHE_STABILIZATION flag
- Add optional 1h TTL on first system block (OPENCODE_EXPERIMENTAL_CACHE_1H_TTL)
- Add OPENCODE_CACHE_AUDIT logging for per-call cache accounting
- Track global vs project skill scope for stable cache prefix
- Add splitSystemPrompt provider option to opt out
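The stable/dynamic split above can be sketched against the Anthropic Messages API's system-block shape. `buildSystemBlocks` and its arguments are hypothetical stand-ins for opencode's internals, but `cache_control: { type: "ephemeral" }` (with the optional `"1h"` TTL) is the real Anthropic cache-breakpoint field:

```typescript
type SystemBlock = {
  type: "text";
  text: string;
  cache_control?: { type: "ephemeral"; ttl?: "5m" | "1h" };
};

// Hypothetical helper: put the cache breakpoint after the stable (global)
// prefix so it is shared across projects; project-specific text goes in a
// later block and no longer invalidates the cached prefix.
function buildSystemBlocks(
  globalPrompt: string,
  projectPrompt: string,
  useLongTtl: boolean,
): SystemBlock[] {
  const stable: SystemBlock = {
    type: "text",
    text: globalPrompt,
    cache_control: useLongTtl
      ? { type: "ephemeral", ttl: "1h" } // OPENCODE_EXPERIMENTAL_CACHE_1H_TTL
      : { type: "ephemeral" },
  };
  const dynamic: SystemBlock = { type: "text", text: projectPrompt };
  return [stable, dynamic];
}
```

Anything after the last `cache_control` marker (here, the project block) is re-sent uncached, which is the point: only the dynamic tail varies per repo.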
PR anomalyco#14973 — fix(core): prevent agent loop stopping after tool calls
- Check lastAssistantMsg.parts for tool type before exiting loop
- Fixes OpenAI-compatible providers (Gemini, LiteLLM) that return
  finish_reason 'stop' instead of 'tool_calls' when tools were called
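The loop check can be sketched as follows; `shouldContinueLoop` and the message shapes are hypothetical stand-ins for opencode's internals, illustrating why the parts are inspected rather than trusting the finish reason alone:

```typescript
type Part = { type: string };
type AssistantMsg = { parts: Part[]; finishReason: string };

// Some OpenAI-compatible providers report finish_reason "stop" even when
// the assistant message contains tool calls, so check the parts directly
// instead of exiting the agent loop on "stop".
function shouldContinueLoop(msg: AssistantMsg): boolean {
  const calledTools = msg.parts.some((p) => p.type === "tool");
  return calledTools || msg.finishReason === "tool_calls";
}
```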
ci: add FORCE_JAVASCRIPT_ACTIONS_TO_NODE24 to upstream-sync workflow
build: relax bun version check to minor-level for local builds
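A minor-level version check like the one described might look like this; `bunVersionOk` is a hypothetical helper, not the actual build script:

```typescript
// Accept a local bun whose major.minor matches the pinned version,
// ignoring the patch level, so routine patch upgrades don't break
// local builds.
function bunVersionOk(actual: string, pinned: string): boolean {
  const [aMajor, aMinor] = actual.split(".");
  const [pMajor, pMinor] = pinned.split(".");
  return aMajor === pMajor && aMinor === pMinor;
}
```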