CodeAI Hub is a Visual Studio Code extension + standalone Project Manager (CEF) that unifies multiple AI providers behind a single, type-safe orchestration layer.
- SolidWorks-WorkFlow docs index: doc/SolidWorks-WorkFlow/Docs_Index.md
- System SSOT: doc/SolidWorks-WorkFlow/System/SystemArchitecture.md
- Session input lock SSOT: doc/SolidWorks-WorkFlow/Contracts/SessionInputLock_SSOT_StateMachine.md
- Bug registry: doc/BugRegistry.md
- VS Code extension webview localization now works from the first render. `HomeViewProvider` loads the cached bootstrap snapshot via `LocalizationRuntimeService.loadRuntimeBootstrapSnapshot(...)` and injects it into `window.__CODEAI_LOCALIZATION_BOOTSTRAP__` before React mounts. Previously the extension always passed `localizationBootstrap: null`, so the first paint used the English fallback regardless of the user's settings.
- Extension shell title retagged under UI Labels. The key `extension_shell.role.title` moved from `ui_helper_text.json` to `ui_labels.json` (runtime category `ui_interface`), because a short section title is UI Labels per UserFacing_Text_Localization_Boundary §3.1. Body/Hint stay in `ui_helper_text.json` under `user_guidance`. With the user's `UI Labels: Default Language (English)` selection the title stays English, while body/hint follow the `UI Helper Text` language.
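The first-render fix above can be sketched as a small pure helper (names and shapes are hypothetical; the real code lives in `HomeViewProvider` and `LocalizationRuntimeService`): serialize the cached snapshot and inject it into the webview HTML ahead of the React bundle, so `window.__CODEAI_LOCALIZATION_BOOTSTRAP__` is already populated at first paint.

```typescript
// Hypothetical shape of the cached snapshot — illustration only.
interface LocalizationBootstrapSnapshot {
  language: string;
  bundles: Record<string, Record<string, string>>;
}

// Inject the snapshot as an inline <script> BEFORE the first bundle script,
// so the first paint already sees the user's language, not the English fallback.
function injectLocalizationBootstrap(
  html: string,
  snapshot: LocalizationBootstrapSnapshot | null,
): string {
  // `null` reproduces the old behaviour: React mounts with the English fallback.
  if (snapshot === null) return html;
  // JSON.stringify only emits "<" inside string literals, so this escape is safe.
  const payload = JSON.stringify(snapshot).replace(/</g, "\\u003c");
  const bootTag = `<script>window.__CODEAI_LOCALIZATION_BOOTSTRAP__=${payload};</script>`;
  // Place it right before the first <script> so it executes first.
  return html.replace(/<script\b/, `${bootTag}<script`);
}
```

A minimal sketch under the stated assumptions, not the actual extension code; the real implementation also has to survive a missing or stale cache.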
- VS Code extension compat notice rewritten as steady-state copy about the extension's role; retired the fallback-only `settings.only.compat_*` keys.
- CI quality-gate Lint unblocked by pinning all 7 non-host Biome platform binaries in root `optionalDependencies`; the Knip step is now advisory on CI; Invariant §34 is recorded in SystemArchitecture.
- The PM footer no longer duplicates workspace identity; Open Settings is promoted to a primary action with accent color and three visual phases.
- The detached Digital Models popup no longer closes the whole standalone PM, and no longer inherits the autosaved frame of the main window.
- The red NSWindow close button on macOS now routes through the safe `[NSApp terminate:]` path, which eliminated the crash dialog without another exception-side mitigation. The `reportException:` swizzle from 1.2.51 is kept only as a safety net until a future CEF/Chromium upgrade.
- Method swizzle of `-[NSApplication reportException:]` via a `+load` category. User retest showed the crash dialog still appears — on macOS 26 the exception reaches `_crashOnException:` not only through `reportException:`. The swizzle stays in 1.2.52 as a belts-and-suspenders safety net on top of the primary short-circuit fix.
- Attempt via `NSSetUncaughtExceptionHandler()` before `CefExecuteProcess`. User retest confirmed the crash is not intercepted. The dead code was removed in 1.2.51.
- Full rollback of the CEF bootstrap refactor from 1.2.46 + 1.2.48. The `[NSApplication sharedApplication]` bootstrap from the 1.2.45 baseline is restored, and Cmd+V/SuperWhisper work again. Crash-on-quit returned as a known issue — now mitigated in 1.2.50.
- Narrow fix: an attempt to restore clipboard shortcuts by removing the Edit menu and using the standard `applicationShouldTerminate:` path. It missed the root cause; fully reverted in 1.2.49.
- Switched to a CEF-compatible custom `NSApplication <CefAppProtocol>` shell to suppress a rare shutdown crash. It broke clipboard shortcuts; fully reverted in 1.2.49.
- Claude/Codex reopened dialogs now show limits before the first new turn. PM seeds `SessionIdBar` from the provider-scoped cache, and an explicit `dialog_opened` lifecycle in Core first replays the last-known snapshot, then performs a cheap refresh without an eager `resumeSession`.
- The usage widget is now more honest on cold-open. While a fresh payload is still in flight, `SessionIdBar` shows an explicit pending state instead of silent emptiness; as soon as the provider returns `resetsAt`, the 5-hour and weekly windows immediately show reset info in parentheses.
- Hotfix for 1.2.42: Codex no longer gets stuck in `Provider codexCli unavailable`, and the usage_limits widget populates after the first turn. PATH augmentation for the Codex spawn plus a post-rebind usage-limits refresh in the stale-binding retry branch.
- The first message in a reopened Claude/Codex dialog after a Core restart no longer disappears silently. Provider adapters throw a typed stale-binding error, and Core dispatch performs a one-shot restart of the binding.
- Hotfix for release 1.2.40: the Diagram Modules Artifacts panel now actually fits horizontally under auto-fit. Reverted to natural grid sizing after removing `width: max-content` + `minWidth: 100%`.
- The Development Tree sidebar now reliably shows the correct structure for `diagram_modules` artifacts. The Core-side parser `development-tree-snapshot.ts` was rewritten: all `/g` regexes are consumed via `.matchAll()` (a lastIndex-free iterator) or factory functions, and `MODULE_ROW_RE` is tightened to strictly 2 columns — 4-column Simple Relations rows physically cannot match.
- The Diagram Modules Artifacts panel gained auto-fit zoom. `DiagramEditorFacade` now applies `effectiveZoom = autoFitScale × userZoom` via a `ResizeObserver` on the container; manual Cmd/Ctrl+scroll overlays on top of the auto-fit base, and Cmd/Ctrl+0 resets only the user overlay.
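The zoom composition above can be sketched as a few pure functions (helper names are hypothetical; the real logic sits inside `DiagramEditorFacade`):

```typescript
// Hypothetical sketch of effectiveZoom = autoFitScale × userZoom.
interface ZoomState {
  autoFitScale: number; // recomputed by a ResizeObserver on the container
  userZoom: number;     // manual Cmd/Ctrl+scroll overlay, 1 = neutral
}

// Auto-fit picks the largest scale at which the whole diagram still fits,
// never upscaling past 1:1.
function computeAutoFitScale(
  container: { width: number; height: number },
  content: { width: number; height: number },
): number {
  return Math.min(container.width / content.width, container.height / content.height, 1);
}

function effectiveZoom(state: ZoomState): number {
  return state.autoFitScale * state.userZoom;
}

// Cmd/Ctrl+0 resets only the user overlay; the auto-fit base is untouched.
function resetUserZoom(state: ZoomState): ZoomState {
  return { ...state, userZoom: 1 };
}
```

The design point is that the two factors stay independent: resizing the container only moves `autoFitScale`, while user gestures only move `userZoom`.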
- A reopened workflow dialog no longer sticks in `Agents is working, please wait...` after a cold-start Core. `RemoteBridgeDialogCommandRouter.handleDialogList`, via the new `materializeContinuityEntries` helper, creates a stub runtime session for every continuity entry with `turnState: "idle"`, so PM reconciliation automatically switches the UI to unlocked. The Stop button on a reopened workflow dialog correctly invalidates the binding after that same materialize.
- Legacy Codex SDK-based provider module removed. `packages/Codex_Module/` (and its transitive `@openai/codex-sdk@0.53.0` dependency) has been deleted from the repository. The new `codex app-server` line in `packages/Codex_AppServer_Module/` has been the sole active runtime since release 1.2.22; the legacy package was orphaned and carried only historical weight. The external contract is unchanged — provider id `codexCli`, provider slot `~/.codeai-hub/providers/codex`, and installer artifact name `codex-module-<version>.tar.bz2` all stay stable.
- Canonical SSOT docs updated to reflect the single active Codex implementation. `Modules/Codex.md`, `System/SystemArchitecture.md`, `Contracts/Formal_Module_Cluster_Facade_Architecture.md`, `Contracts/ProviderFailure_Recovery_And_ProviderSwitch.md`, and `Contracts/EffectiveModelIdentity_And_Settings_SSOT.md` no longer reference the legacy package; historical material in `CHANGELOG.md`, `doc/TODO/Archive/`, `doc/SolidWorks-WorkFlow/Plans/Archive/`, `doc/Sessions/`, and `doc/BugRegistry.md` is intentionally preserved as an audit trail.
- Diagram Modules now renders clusters, standalone modules, and the Development Tree correctly. Both the Project Manager diagram canvas and the left-sidebar Development Tree now read module tables in the canonical 2-column shape (``| `module-id` | Responsibility |``), matching the agent template and `diagram-modules-field-reference.md`. Previously the parsers still required a third backtick-wrapped column (the removed `ModuleKind` slot), so cluster bodies rendered as `Modules: 0` and standalone modules disappeared, even when the staged artifact was valid.
- Simple Relations rows no longer leak as phantom standalone modules in the Development Tree. The Core development-tree snapshot now clamps the `## Standalone Modules` body at the next `##` header, so `From`/`To` entries from `## Simple Relations` are no longer surfaced as fake standalone module nodes in the PM sidebar.
- The parser contract is now a documented invariant. `System/SystemArchitecture.md` §6.4 records the 2-column module table contract and the standalone-section clamping rule as SSOT for both readers of `product-parts/<part-id>.md` (the browser staged-part parser and Core `development-tree-snapshot.ts`).
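The two parser rules above (strict 2-column rows, section clamping at the next `##` header) can be sketched in isolation; the regex and helper names here are illustrative, not the actual `development-tree-snapshot.ts` internals:

```typescript
// Strict 2-column module row: | `module-id` | Responsibility |
// A 4-column Simple Relations row cannot match this shape.
const MODULE_ROW_RE = /^\|\s*`([^`]+)`\s*\|\s*([^|]+?)\s*\|\s*$/;

function parseModuleRow(line: string): { id: string; responsibility: string } | null {
  const m = MODULE_ROW_RE.exec(line);
  return m ? { id: m[1], responsibility: m[2] } : null;
}

// Clamp a section body at the next `##` header so a following section
// (e.g. `## Simple Relations`) cannot leak into `## Standalone Modules`.
function sectionBody(markdown: string, header: string): string {
  const start = markdown.indexOf(header);
  if (start === -1) return "";
  const afterHeader = start + header.length;
  const next = markdown.indexOf("\n## ", afterHeader);
  return markdown.slice(afterHeader, next === -1 ? undefined : next);
}
```

Because `[^|]` cannot cross a pipe and the row regex is anchored at both ends, extra columns make the match fail outright rather than silently truncate.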
- UI and Reasoning translation engines are now split. The Settings localization card surfaces two dedicated selectors — `UI Translation Engine` (drives interface bundle materialization and browser bootstrap) and `Reasoning Translation Engine` (drives live translation of visible Thinking / Reasoning bubbles). The reasoning engine defaults to `Google GTX Free` for the most stable live translation, while provider-backed engines remain available with an explicit warning that higher parallel activity may cause fallback to source English.
- Reasoning is now a fifth user-facing localization category. Visible Thinking / Reasoning bubbles no longer share the `Messages for the User` target language; a dedicated `Reasoning` card in Settings exposes an independent language selector, while hidden reasoning stays outside the translation pipeline as before.
- Reasoning engine or reasoning language changes are runtime-only. They never block Settings save, never block Project Manager / new session sends, and never rebuild browser bootstrap bundles; only `UI Translation Engine` and the four UI-owned category languages still enter the strict localization sync path.
- Legacy settings migrate automatically. Existing installations see the previous translation engine preserved under `UI Translation Engine`, the reasoning engine seeded to `Google GTX Free`, and the new reasoning language seeded from the current `Messages for the User` language, so Day 1 after upgrade feels unchanged.
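The automatic migration reduces to a pure function over the persisted settings shape. Field names and engine ids below are hypothetical placeholders; the real shape is owned by the Settings SSOT:

```typescript
// Hypothetical persisted shapes — illustration only.
interface LegacyLocalizationSettings {
  translationEngine: string;          // pre-split single engine selection
  messagesForTheUserLanguage: string; // existing category language
}

interface SplitLocalizationSettings {
  uiTranslationEngine: string;
  reasoningTranslationEngine: string;
  reasoningLanguage: string;
  messagesForTheUserLanguage: string;
}

// Preserve the old engine as the UI engine, seed the reasoning engine to the
// stable default, and seed the reasoning language from Messages for the User,
// so Day 1 after upgrade behaves exactly like the day before.
function migrateLocalizationSettings(
  legacy: LegacyLocalizationSettings,
): SplitLocalizationSettings {
  return {
    uiTranslationEngine: legacy.translationEngine,
    reasoningTranslationEngine: "google-gtx-free", // hypothetical engine id
    reasoningLanguage: legacy.messagesForTheUserLanguage,
    messagesForTheUserLanguage: legacy.messagesForTheUserLanguage,
  };
}
```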
- The main `Thinking` text is now slightly more readable. The body text inside both internal thinking paths now uses `rgba(173, 178, 186, 0.7)` instead of the dimmer `0.6`, so reasoning content reads more clearly without changing the accepted muted card chrome.
- The rest of the `Thinking` card contract stays stable. Fill, stroke, shadow, provider-colored header, and timestamp treatment remain aligned with the 1.2.34 visual baseline.
- Provider `Thinking` headers now keep their accent while remaining muted. Assistant-tagged reasoning cards such as `Codex · Thinking`, `Claude · Thinking`, and `Gemini · Thinking` no longer collapse to neutral gray; the header stays on the provider hue at a softer 60% alpha, so provider identity remains readable without overpowering the final answer.
- Muted thinking bubbles are slightly stronger and more legible. The shared thinking surface now uses 45% alpha for fill and border instead of the weaker 40%, keeping the subdued secondary feel while giving the visible card edge and background better separation.
- Visible `Thinking` cards now use the muted visual contract on the real user-facing render path. Session UI now applies the softened alpha treatment not only to dedicated `thinking` role bubbles but also to assistant messages tagged as thinking, so cards such as `Codex · Thinking` actually render with the intended quieter fill, border, and typography.
- The real `Thinking` card path is regression-covered in shared Session UI logic. The dialog message class builder now has a dedicated test for `assistant + tag="thinking"`, so the visible reasoning card path cannot silently fall back to ordinary assistant styling again.
- Session dialog message cards now use a lighter `1px` stroke. The shared bubble contract for user, assistant, and thinking cards no longer uses the heavier `2px` border, so the dialog surface reads cleaner without changing message structure or provider routing.
- Thinking cards are visually quieter across all providers. Claude, Codex, and Gemini reasoning bubbles now use softer background/border alpha plus dimmer header/body typography, making `Thinking` content feel secondary to the final assistant answer while remaining readable.
- Late translation growth of the last dialog bubble now keeps the view pinned to the bottom. When the last visible thinking or assistant bubble first appears in English and then expands in place after a Russian `localizedContent` overlay arrives, Session UI now treats that display-text growth as a real autoscroll anchor change and re-scrolls to the newest bottom edge automatically.
- The fix is regression-covered at the scroll-anchor layer. Session UI now has a dedicated test proving that a change in `localizedContent` alone, without any change to native `content`, still invalidates the last-bubble scroll anchor.
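The anchor rule can be sketched as: the anchor key for the last bubble must be derived from the *displayed* text (localized overlay when present, native text otherwise), so a `localizedContent`-only change invalidates it. Names here are hypothetical, not the actual Session UI types:

```typescript
// Hypothetical bubble shape — illustration only.
interface Bubble {
  id: string;
  content: string;            // native (usually English) text
  localizedContent?: string;  // translated overlay, may arrive late
}

// The text the user actually sees for this bubble.
function displayText(b: Bubble): string {
  return b.localizedContent ?? b.content;
}

// Anchor key for the last bubble: when this changes, the bottom edge has moved
// and Session UI must re-scroll to the newest bottom edge.
function lastBubbleAnchorKey(b: Bubble): string {
  return `${b.id}:${displayText(b).length}`;
}
```

Keying on display-text length (rather than native `content`) is exactly what makes a late overlay arrival count as an anchor change.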
- Codex reasoning now renders from completed summary blocks instead of live readable fragments. The app-server line no longer materializes `thinking` bubbles from `summaryTextDelta`/`textDelta`; user-facing reasoning waits for `item/completed` and emits one block per completed summary section, preserving heading/body boundaries such as `**Crafting concise questions**`.
- Standalone bold reasoning headings now keep the correct vertical rhythm in Session UI. Bold-only paragraph headings keep the gap before the heading while the extra gap after the heading is suppressed, so section titles read as the start of the following paragraph rather than as isolated floating lines.
- Codex reasoning contract and regression coverage are now aligned. The app-server module includes dedicated regression tests for completed-summary emission, fallback behavior without `item.summary[]`, and raw-text fallback when structured reasoning fields are absent.
- Mixed-language translation overlays now preserve word boundaries automatically. Shared translation normalization inserts the missing space on `latin <-> cyrillic` boundaries in ordinary prose, so overlays no longer collapse into fragments like `parallelдля`, `вродеpwd`, or `lsилиsed`, while protected code spans stay untouched.
- Session messages now keep section-like bold titles on their own paragraph. The same shared formatter repairs glued patterns such as `...data.**Clarifying ...**` before assistant/thinking content is persisted and before translated overlays are projected, so both ordinary replies and reasoning bubbles keep readable section structure.
- Nested markdown lists no longer inflate into empty vertical gaps in dialog UI. Session markdown rendering now collapses structural whitespace on the `li` layer instead of preserving markdown indentation/newline artefacts as visible empty blocks.
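The boundary rule can be sketched as a regex pass that skips backtick code spans. This is a simplified sketch; the real shared normalizer handles more protected-span cases:

```typescript
// Insert a space at latin<->cyrillic boundaries in prose, leaving `code spans` untouched.
const LATIN_TO_CYRILLIC = /([A-Za-z])([\u0400-\u04FF])/g;
const CYRILLIC_TO_LATIN = /([\u0400-\u04FF])([A-Za-z])/g;

function restoreWordBoundaries(text: string): string {
  // Splitting on a capturing group keeps the delimiters: even indices are
  // ordinary prose, odd indices are protected `code` spans.
  return text
    .split(/(`[^`]*`)/)
    .map((part, i) =>
      i % 2 === 1
        ? part // protected code span — leave as-is
        : part
            .replace(LATIN_TO_CYRILLIC, "$1 $2")
            .replace(CYRILLIC_TO_LATIN, "$1 $2"),
    )
    .join("");
}
```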
- Claude pre-tool progress text no longer leaks into the dialog as a normal assistant bubble. In localized Claude workflow turns, a pre-tool fragment such as `I've read the Final_Description.md... Let me create the directory...` could appear between two `Claude · Thinking` bubbles as an ordinary assistant/live message. This was wrong in two ways: the fragment was progress/thinking-like text before a `tool_use`, not a real final answer, and because it materialized as assistant/live it skipped the thinking translation path and stayed in English. The 1.2.17 fix hardens the Claude messaging path so localized pre-tool text no longer escapes through the assistant/live branch when the message resolves to `tool_use`; instead it follows the thinking contract, while ordinary `end_turn` assistant text stays on the normal assistant path.
- Claude no longer gets stuck in a false `Agent is resuming...` after a completed turn. A Claude `Description` turn could finish normally, persist the full final reply into native/SDK/unified logs, and still leave the Session UI blocked in `Agent is resuming your session... Please wait.` The immediate bug was the Unix post-turn `/context` probe runner in `packages/Claude_Module/src/sdk/claude-context-usage-probe.ts`: on macOS/Linux it executed `node <executablePath> ...`, but the installed `claude` command can resolve to a native bundle (`claude.exe` inside the package), so the probe crashed with `ERR_UNKNOWN_FILE_EXTENSION`. The release fixes the runner selection to execute native Claude binaries directly on Unix, and also hardens Core continuity arbitration: if a provider explicitly reports that post-turn token usage is unavailable, Core resolves the turn to `no_rollover` instead of leaving the session in endless `context_check_pending`.
- Model label flicker eliminated completely. 1.2.13 fixed the Core-side broadcast path (raw SDK `model_info` → effective id), but there was still a second flicker path: the client-side initial render goes through `resolveModelReasoning` in `src/client/ui/src/session/model-info-builder.ts`, which for Gemini and Codex returned the raw level string from settings (`"high"`/`"medium"`) instead of the prefixed form (`"thinking high"`/`"reasoning medium"`). At first render the label briefly appeared as `Gemini 3.1 Pro Preview (high)`, then Core's `session:model:update` replaced it with the effective `(thinking high)` — the user saw a one-frame flicker, most visible on temp-session start. Both fallback branches now wrap the level in the appropriate provider prefix, matching the form `parseEffectiveModelId` produces. The Claude branch is unchanged (a separate convention with `"thinking off"` / raw effort). The UI label is now stable from the very first render.
- Gemini post-tool stalled-turn watchdog bumped 120s → 240s. A retest of 1.2.13 on Gemini 3.1 Pro Preview + `thinkingLevel=high` surfaced a false-positive watchdog kill in the post-tool leg: after the initial turn completed with `read_file` tool calls and Core fed the results back, Gemini went into its silent deep-reasoning phase and the post-tool watchdog cut the stream at exactly 120s with `Provider turn failed: Gemini stream stalled after 120s without progress.` The 1.2.11 rationale — "follow-up legs already account for nested reasoning, 120s is enough" — turned out to be wrong for high-thinking + large prompts. In `packages/Gemini_Module/src/session/gemini-session-lifecycle.ts`, `DEFAULT_POST_TOOL_STALLED_TURN_WATCHDOG_MS` is now `240_000`, symmetric with the initial leg (`DEFAULT_STALLED_TURN_WATCHDOG_MS = 240_000`). Per-session overrides (`stalledTurnWatchdogMs`/`postToolStalledTurnWatchdogMs`) are preserved. An adaptive per-thinking-level watchdog remains deferred.
- Stable model label in the Session UI status panel. Previously the bottom panel of the session view flickered between `Gemini 3.1 Pro Preview (thinking high)` and `Gemini 3.1 Pro Preview (high)` within the same active turn — every time the Gemini SDK emitted its `model_info` event the label briefly collapsed to the short form, then the next applied-turn-config broadcast restored the long form. Two distinct code paths (`session-provider-event-router.ts` `broadcastRuntimeModelUpdate` vs `session-request-handler-message-dispatch.ts`) were pushing `session:model:update` events with different modelId shapes — the SDK path forwarded the raw base id (`gemini-3.1-pro-preview`), while the dispatch path carried the full effective id (`gemini-3.1-pro-preview thinking:high`). The UI renderer in `model-info-builder.ts` formed different labels from those two shapes, hence the flicker. Core now enriches the SDK path through the same `AppliedTurnConfig.resolveEffectiveModelId` helper the dispatch path uses, so both broadcasts carry identical effective ids and the label stays stable.
- Core no longer crashes when Gemini cli-core self-aborts on loop detection. The 1.2.11 retest on Gemini 3.1 Pro Preview + `thinkingLevel=high` surfaced an `AbortError` uncaughtException from `@google/gemini-cli-core/dist/src/core/client.js:539` `GeminiClient.processTurn` — cli-core internally calls `controller.abort()` when its own loop detection fires, and the resulting node-fetch promise rejection sits in a background async context that our `runTurn` try/catch does not own. The native `gemini` CLI survives this because its outer `submitQuery` wrapper explicitly ignores `error.name === "AbortError"`; we now do the equivalent at the daemon level. `packages/core/src/index.ts` gains a `process.on("uncaughtException", ...)` handler that selectively swallows AbortError only when the stack trace includes `@google/gemini-cli-core`. All other uncaughtExceptions remain fatal — crash-safety for real bugs is preserved.
- Gemini mis-routed thinking content is now rerouted to the thinking overlay. On `thinkingLevel=high` with large prompts, Gemini 3.1 Pro sometimes streams its internal meta-prompt (`sthought\n`, `CRITICAL INSTRUCTION 1:`, `Related tools:`, `Plan:`, `Drafting the content...`) through `Content` events instead of `Thought` events. Our normalizer was faithfully writing these as ordinary assistant bubbles, so the user saw a 10,000+ character English meta-prompt glued onto the dialog. `packages/Gemini_Module/src/messaging/gemini-assistant-event-normalizer.ts` now detects the misrouted-thinking prefixes in finalised assistant segments and reroutes the whole segment through the existing `thought-translator-service` overlay path (the same mechanism used for the inline `[Thought: true]` splitter in 1.2.9). The detector runs after the 1.2.9 marker splitter and the pre-tool Cyrillic heuristic, so it does not conflict with existing reroute rules. The underlying provider-side quirk is a Google bug; this is our UI-correctness patch until they fix it.
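The selective-swallow rule factors into a pure classifier plus a registration, sketched below under the assumption that the classifier is a separate function (the real handler lives in `packages/core/src/index.ts`):

```typescript
// Swallow ONLY the cli-core self-abort; every other uncaughtException stays fatal.
function isGeminiCliCoreAbort(error: unknown): boolean {
  if (!(error instanceof Error)) return false;
  if (error.name !== "AbortError") return false;
  // The loop-detection abort surfaces from inside the cli-core package.
  return (error.stack ?? "").includes("@google/gemini-cli-core");
}

// Registration sketch:
// process.on("uncaughtException", (err) => {
//   if (isGeminiCliCoreAbort(err)) return; // known cli-core loop-detection abort
//   console.error(err);
//   process.exit(1); // crash-safety for real bugs preserved
// });
```

Keeping the classifier pure makes the dangerous part (a global `uncaughtException` hook) trivially unit-testable.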
- Gemini initial-leg stalled-turn watchdog bumped from 60s to 240s. Surfaced during the 1.2.10 retest: Gemini 3.1 Pro Preview with `thinkingLevel=high` on the Description step timed out after exactly 60 seconds with `Gemini stream stalled after 60s without progress.` Root cause in `packages/Gemini_Module/src/session/gemini-session-lifecycle.ts`: the `DEFAULT_STALLED_TURN_WATCHDOG_MS` constant was hard-coded at `60_000` regardless of model or thinking level. On large system-instruction prompts (Description Agent + questionnaire) at `high` effort, the Gemini SDK stays silent on the stream channel through the whole deep-reasoning phase, which exceeds 60s. Our watchdog interpreted the silence as a hung stream and killed the turn. Bumping the initial-leg timeout to 240s gives `thinkingLevel=high` enough headroom while still protecting against genuinely hung streams. The post-tool watchdog (`DEFAULT_POST_TOOL_STALLED_TURN_WATCHDOG_MS = 120_000`) stays unchanged — follow-up legs already account for nested reasoning. No other behaviour changes; this is a single-constant bump.
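The watchdog wiring at this release reduces to two defaults plus per-session overrides. The constants and override field names come from the notes above; the resolution helper itself is hypothetical:

```typescript
const DEFAULT_STALLED_TURN_WATCHDOG_MS = 240_000;           // initial leg (was 60_000)
const DEFAULT_POST_TOOL_STALLED_TURN_WATCHDOG_MS = 120_000; // post-tool leg at this release

// Per-session overrides, as named in the release notes.
interface WatchdogOverrides {
  stalledTurnWatchdogMs?: number;
  postToolStalledTurnWatchdogMs?: number;
}

// Hypothetical helper: pick the effective timeout for a given turn leg.
function resolveWatchdogMs(leg: "initial" | "postTool", overrides: WatchdogOverrides): number {
  if (leg === "initial") {
    return overrides.stalledTurnWatchdogMs ?? DEFAULT_STALLED_TURN_WATCHDOG_MS;
  }
  return overrides.postToolStalledTurnWatchdogMs ?? DEFAULT_POST_TOOL_STALLED_TURN_WATCHDOG_MS;
}
```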
- Audit cleanup release. The post-Session043 codebase audit (dead code + broken doc links + duplication analysis) produced a concrete debt list — 1.2.10 closes the actionable items. No runtime behaviour changes; no retest required. Four directions:
- A. Docs + config verification. The audit flagged three potential issues — all investigated. The `Docs_Index.md:80-82` bundled-template paths were already correct (`destinationRelativePath` inside `packages/core/src/templates/bundled-templates.ts`); the section was extended to also document the per-workspace instance layout (`.codeai-hub/codeai-hub/description/`) so future sessions don't confuse the two. The `knip.json` exclusion for the diagram-DSL parser chain was found to be intentional (the chain is used only through `diagram-editor-facade.test.tsx`, so knip would otherwise flag the whole subtree as unused) — left as-is. The TODO in `packages/agents/spec-creator/dist/contract/contract-builder.d.ts` lives inside a published third-party package; no source under our control.
- B. Localization cleanup. 99 unused keys were identified across the four approved source dicts (`ui_labels.json`, `ui_helper_text.json`, `messages_for_the_user.json`, `artifacts_for_the_user.json`) — residue of removed components (`SwitchRecoveryBanner`), rewritten questionnaire fields, and never-wired placeholders. After a dry-run grep-partial pass to rule out dynamic ``t(`${prefix}.${suffix}`)`` usage, the confirmed-dead subset was removed from source. The translator runtime ignores them from now on; materialised `~/.codeai-hub/localization//*.json` caches shrink accordingly on the next language selection.
- C. Duplication refactor (scope-bounded). The top 20 of 233 jscpd clones were classified. 17/20 are legitimate parallel provider scaffolding (Claude/Codex/Gemini mirrors) or client↔core boundary mirrors — extracting them would violate module isolation. Three real extracts: `useBootstrapSettings` → `src/client/shared/hooks/use-bootstrap-settings.ts` (eliminates the client↔PM settings bootstrap clone); a `createWorkspaceFileHandler` factory in `packages/core/src/remote-bridge/handlers/workspace-file-service.ts` (eliminates the within-file read/write handler clone); `idea-collector-schema-utils.ts` now imports from the `@codeai-hub/agents-shared` schema-utils instead of duplicating the strictifier + normalizer. `check:dup` goes from 3.68% to ~3.2%; the threshold stays at 3%.
- D. Process formalization. New `doc/SolidWorks-WorkFlow/Checklists/PeriodicAudit.md` documents the recurring audit cadence (every 3-5 releases), the parallel audit-pass workflow, the clone-classification rubric, and uses 1.2.10 as the reference precedent. SystemArchitecture gains an explicit "acceptable parallel-scaffolding duplication" invariant — future audits must not flag provider-module mirrors or client↔core boundary copies as debt.
- Gemini inline `[Thought: true]` marker now splits into a thinking bubble + final assistant reply: on post-tool follow-up turns, Gemini CLI Core sometimes streams a thought-like English summary, the literal token `[Thought: true]`, and the final target-language reply inside a single `content` event stream — without any accompanying `ptype: "thought"` events. `packages/Gemini_Module/src/messaging/gemini-assistant-event-normalizer.ts` `handleFinishedEvent` now regex-splits the assembled segment on `/\[Thought:\s*(true|false)\]/`, routes the pre-marker text through the existing `thought-translator-service` (the same overlay path as native Gemini thoughts, so the translation arrives as a thinking bubble with proper `localizedContent`), and keeps the post-marker text as the ordinary assistant bubble. The literal `[Thought: true]`/`[Thought: false]` token never surfaces in dialog, and the user no longer sees an English thought summary glued onto a Russian final answer.
- Gemini pre-tool non-target-language progress text reroutes to the thinking overlay: at session start Gemini often emits a brief English `content` event (e.g. `I will read the questionnaire and the template...`) right before the first `tool_call_request` of the turn — again without `ptype: "thought"` events. When the user has Messages-for-the-User set to a Cyrillic-family target (ru / uk / bg / sr / mk / be / ky / kk / mn / tg / ab) and that pre-tool text contains zero Cyrillic characters (U+0400..U+052F), the normalizer now snapshots it as the pre-tool segment, reroutes it through `thought-translator-service` as a thinking bubble, and excludes it from the final assistant bubble. Target `en` disables the heuristic entirely — we cannot reliably detect "not English" from raw characters. In-target-language pre-tool text is prepended to the assistant bubble unchanged (current behaviour preserved).
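The marker split and the Cyrillic-family heuristic above can be sketched together. This is a simplified illustration; the real logic (and its ordering relative to other reroute rules) lives in `gemini-assistant-event-normalizer.ts`:

```typescript
const THOUGHT_MARKER_RE = /\[Thought:\s*(?:true|false)\]/;

// Split an assembled segment on the inline marker: pre-marker text goes to the
// thinking overlay, post-marker text stays as the ordinary assistant bubble.
function splitOnThoughtMarker(segment: string): { thinking: string | null; assistant: string } {
  const m = THOUGHT_MARKER_RE.exec(segment);
  if (!m) return { thinking: null, assistant: segment };
  return {
    thinking: segment.slice(0, m.index).trim(),
    assistant: segment.slice(m.index + m[0].length).trim(),
  };
}

// Pre-tool heuristic: for Cyrillic-family targets, pre-tool text with zero
// Cyrillic characters (U+0400..U+052F) is treated as misrouted English progress.
const CYRILLIC_FAMILY_TARGETS = ["ru", "uk", "bg", "sr", "mk", "be", "ky", "kk", "mn", "tg", "ab"];

function isMisroutedPreToolText(text: string, targetLanguage: string): boolean {
  if (!CYRILLIC_FAMILY_TARGETS.includes(targetLanguage)) return false; // e.g. `en` disables it
  return !/[\u0400-\u052F]/.test(text);
}
```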
- Gemini post-stop resume now actually loads the prior chat: 1.2.7 shipped `argv.resume` on the Gemini CLI Core path, but that flag alone is a no-op — the official `gemini` binary main reads it, looks up the chat file, and then calls `client.resumeChat(history, resumedSessionData)` to hydrate the in-memory chat and reuse the existing chat file. Our embed path skipped that step, so the rebind started fresh and wrote a new empty chat file anyway. `gemini-session-bootstrapper.ts` now performs the full resume pipeline itself: it scans `config.storage.getProjectTempDir()/chats` for `session-*-<uuid-first-8>.json`, picks the file whose full `sessionId` matches (preferring the one with the most messages when pre-1.2.8 state left two files with the same UUID), calls `config.setSessionId(loaded.sessionId)`, converts `messages` via the `@google/gemini-cli-core` `convertSessionToClientHistory`, then `await client.resumeChat(history, { conversation, filePath })`. The Description Agent system instruction and prior dialog are available to the next turn, and subsequent provider writes append to the same chat file instead of orphaning it.
- Stale-seed send recovery: Project Manager dialog bootstrap can seed a fresh Core session with an already-dead `providerSessionId` and `providerSessionStatus: "ready"` (a mirror of the 1.2.5 Claude case). The user's next message bypasses `hasStopInvalidatedBinding` and hits the provider directly, which throws `Gemini session <id> not found. Available: [] Aliases: []`. The provider adapter now translates this specific failure into a Core-visible `SessionStaleBindingError`; `SessionRequestHandlerProviderSend.dispatch` catches it, rewrites the binding to `pending`, remembers the pre-stop `providerSessionId`, re-runs `ensureSessionReadyForSend` (which triggers the normal post-stop resume path), and retries the send once. Only one retry per turn; a second stale failure flows through as an ordinary provider error.
- Legacy `SwitchRecoveryBanner` removed: the "Retry in place / Retry with current provider / Switch to …" toolbar that surfaced on `failureClass=session_binding_recoverable` is a leftover from the pre-1.2.5 recovery flow. With the 1.2.7 post-stop resume and the 1.2.8 stale-seed guard, recoverable failures are handled silently by Core. The component, its companion hook `use-dialog-switch-offer`, the associated type file, CSS, and localization keys have been fully removed from the code base.
- Gemini `Stop` no longer wipes provider chat history, and `Continue` resumes the prior dialog: `packages/Gemini_Module/src/session/gemini-session-lifecycle.ts` `closeSession` previously called `session.client.resetChat()`, which, on the Gemini CLI Core side, materialized a new empty `GeminiChat` against the same `Config.sessionId` and wrote a new empty chat file under `~/.gemini/tmp/<projectSlug>/chats/`, leaving the prior chat-file history orphaned. The abort path now stops after `abortController.abort()` and `sessionStore.removeSession()`, so the pre-stop chat file stays intact on disk.
- Core-side post-stop Gemini rebind resumes by provider session id: Core's stop-action now remembers the live `providerSessionId` before invalidating the binding, and `SessionRequestHandlerStopRebind.performRebind` threads that id back into `resolveProviderSessionId` on the next send, but only for providers declared as `requiresPostStopResume`. `GeminiProviderAdapter.resumeSession` forwards it as `argv.resume`, so Gemini CLI Core loads the prior chat file with the full Description Agent system instructions and prior dialog. Claude/Codex paths are unchanged because their post-stop continuity is already owned provider-natively.
- Invariant 24 extended: "Provider Stop actually aborts the active turn" now also requires that Stop does not discard provider-native chat history. For providers with `requiresPostStopResume`, Core must persist the pre-stop provider session id and resume against it on rebind, otherwise the rebound session starts with an empty context and forgets the original workflow instruction.
- Codex `Stop` now actually aborts the active turn: the Codex SDK patch `streamCodexExec` (in `packages/Codex_Module/src/sdk/codex-sdk-patches.ts`) spawns the underlying `codex exec` as a child process and then blocks inside `for await (const line of rl)` on that process's stdout. Previously `adapter.closeSession` just resolved the outer message generator with `null`, while the child kept running and the readline cursor waited for the next stdout line — Stop was effectively a no-op until Codex naturally emitted `turn_completed` (2+ minutes in the 1.2.3 retest). The patch now registers the spawned `ChildProcess` in a module-scoped Map keyed by `threadId` and exports `killActiveCodexProcess(threadId)`, which sends `SIGTERM`. `CodexSessionManager.closeSession` calls that exported hook before awaiting the lifecycle and processing loop, so Stop closes the Codex subprocess within ~100 ms — matching the Claude behaviour 1.2.5 already shipped.
- PM Stop-button debounce: `InputPanel` tracks a new `stopInFlight` state that flips to true the moment the user clicks Stop and resets to false when `agentBusy` flips to false (Core has sent the `idle` snapshot back). While in flight, the handler short-circuits before calling `stopSession`, so a user who spam-clicks Stop cannot stack nine parallel `session:stop` messages the way the 1.2.3 trace showed. `InputPlayStopButton` gains a `stopPending` prop that disables the button and switches the label to `Stopping current turn…`.
- Core `handleStop` re-entry guard: `session-request-handler-stop-action.ts` early-returns when `hasStopInvalidatedBinding(sessionId)` is already true. This is belt-and-suspenders for callers that bypass the PM debounce (programmatic sources, races) and prevents re-invalidating an already-pending binding.
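The Stop mechanics above can be sketched as a module-scoped registry plus a kill hook. This is a simplified, hypothetical shape of the `codex-sdk-patches.ts` change, not the shipped code:

```typescript
import type { ChildProcess } from "node:child_process";

// Module-scoped registry of live `codex exec` children, keyed by threadId.
const activeCodexProcesses = new Map<string, ChildProcess>();

function registerCodexProcess(threadId: string, child: ChildProcess): void {
  activeCodexProcesses.set(threadId, child);
  // Self-clean when the child exits on its own (normal turn_completed path).
  child.once("exit", () => activeCodexProcesses.delete(threadId));
}

// Called by closeSession BEFORE awaiting the lifecycle/processing loop,
// so Stop tears the subprocess down instead of waiting for turn_completed.
function killActiveCodexProcess(threadId: string): boolean {
  const child = activeCodexProcesses.get(threadId);
  if (!child) return false;
  child.kill("SIGTERM");
  activeCodexProcesses.delete(threadId);
  return true;
}
```

Keeping the Map module-scoped matters: the readline loop that owns the generator never sees the Stop request, so the kill hook has to reach the `ChildProcess` from outside that loop.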
- Stop → Continue input lock — fixed: after `Stop` on a live Claude/Codex turn, Core invalidates the provider binding (`providerSessionId → null`, `status → pending`) and, on the next user message, creates a new session backed by the same provider-native session id. The PM dialog controller previously missed this swap: `onSessionBinding` only updated the snapshot-level binding, not `SessionRecord.binding`, so the `onSessionCreated` adoption check saw the old `status: "ready"` and refused to adopt. The input panel then kept reading `connectionState` from the now-dead session and stayed unlocked even though Core was streaming the reply onto the new one. `useProjectManagerDialogSessionController` now mirrors `onSessionBinding` into both the snapshot and the `SessionRecord`, remembers the pre-stop `providerSessionId` in a ref the moment it flips to null, and adopts the newly created session on `onSessionCreated` when its `providerSessionId` matches. Placeholder cleanup and ref reset both cover the new path.
- 1.2.3 / 1.2.4 diagnostic instrumentation removed: all `stopdiag_` (Core) and `pmdiag_` (PM) trace logs are gone. `pm:diag:log` is back to writing into `~/.codeai-hub/logs/core/core.log` via the shared Core logger — the temporary split to `~/.codeai-hub/logs/project-manager/project-manager.log` is not needed now that the fix has landed. The `CODEAI_PROJECT_MANAGER_LOG_FILE` env override was removed alongside.
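The adoption logic above reduces to a small state machine: capture the provider-native id at the moment of invalidation, then adopt the first created session that carries the same id. A minimal sketch, with types and field names assumed from the changelog rather than taken from the real API:

```typescript
// Assumed shape of a PM-side session record.
interface SessionRecord {
  sessionId: string;
  providerSessionId: string | null;
  status: "ready" | "pending";
}

// Captured the moment Core invalidates the binding (providerSessionId → null).
let preStopProviderSessionId: string | null = null;

export function onSessionBinding(record: SessionRecord, providerSessionId: string | null): void {
  if (record.providerSessionId !== null && providerSessionId === null) {
    preStopProviderSessionId = record.providerSessionId; // remember before the swap
  }
  record.providerSessionId = providerSessionId;
  record.status = providerSessionId === null ? "pending" : "ready";
}

export function shouldAdopt(created: SessionRecord): boolean {
  // Adopt only when the new session carries the same provider-native id
  // the stopped session had before invalidation.
  const match = created.providerSessionId !== null
    && created.providerSessionId === preStopProviderSessionId;
  if (match) {
    preStopProviderSessionId = null; // ref reset on successful adoption
  }
  return match;
}
```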
- PM-side `pmdiag_` trace release. Logged `api_stop_session`, `api_send_session_message`, `workspace_snapshot_apply` (per-session summary), `active_session_changed` (with caller stack), `dialog_active_session_changed`. Routed to a dedicated `project-manager.log` via a local appender. The trace confirmed the session-id swap the 1.2.5 fix addresses. Removed in 1.2.5.
- Core-only `stopdiag_` trace logs on stop-action / stop-rebind / message-dispatch / `emitTurnStateEvent` (with caller stack) / provider-event-router in `~/.codeai-hub/logs/core/core.log`. Baseline verification for the Stop → Continue input lock regression; proved Core emits `running` correctly and narrowed the root cause to the PM side. Removed in 1.2.5.
- Claude `x-High` reasoning effort stops reverting to `medium` on Project Manager boot: Core had its own hardcoded thinking-effort whitelist next to the extension-side normalizer, and `xhigh` had been added to the UI registry and the shared defaults resolver in 1.1.998 but NOT to that Core-only handler. On every `settings:load` from PM, Core silently rewrote `xhigh` back to `medium` and persisted it to disk. `xhigh` is now in the Core whitelist together with its legacy `maxTokens = 20 000` anchor. Diagnostic logging from 1.2.0 / 1.2.1 is removed.
- New SSOT invariant: SystemArchitecture §3 now has Invariant 27 documenting the four-way parity requirement between the UI model registry, the extension-side normalizer, the shared Core defaults resolver, and the Core remote-bridge handler when adding any new effort/reasoning/thinking level. Matching bullets added to Modules/Claude.md, Codex.md, Gemini.md so future provider work catches the cross-boundary rule.
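The parity failure above is the classic drift of duplicated whitelists. One way to make the invariant structural is a single shared constant that every boundary imports; this is a hypothetical sketch, not the real CodeAI Hub module:

```typescript
// Single-source effort whitelist; level names follow the changelog,
// but the module itself is illustrative.
export const THINKING_EFFORT_LEVELS = ["low", "medium", "high", "xhigh", "max"] as const;
export type ThinkingEffort = (typeof THINKING_EFFORT_LEVELS)[number];

// Every boundary (UI registry, normalizer, defaults resolver, remote-bridge
// handler) calls this one function instead of keeping a private copy.
export function normalizeThinkingEffort(
  raw: string,
  fallback: ThinkingEffort = "medium",
): ThinkingEffort {
  return (THINKING_EFFORT_LEVELS as readonly string[]).includes(raw)
    ? (raw as ThinkingEffort)
    : fallback;
}
```

With this shape, adding a new level is a one-line change, and a handler that forgot to import the shared list simply cannot exist.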
- Temporary build. Added polling `fs.watchFile` on `~/.codeai-hub/settings/settings.json` so any external writer became observable regardless of which process wrote the file. Removed in 1.2.2 once the root cause was identified.
- Temporary build. Added persist/load/save trace through `~/.codeai-hub/logs/extension/extension.log`, including a stack trace for `persistSettingsSnapshot`. Removed in 1.2.2.
- Claude live assistant text now collapses into one growing dialog card: consecutive live text fragments from the same turn merge visually into a single assistant bubble instead of rendering one card per sentence. The provider still emits each live fragment as a stable append-only message so translation overlays keep attaching `localizedContent` per fragment, but the UI layer now runs a merge pass symmetric to the existing thinking merge.
- Claude assistant text now prints live, no more multi-minute silence on `Write`/`Edit`: visible assistant text is surfaced as Claude streams it, sentence-by-sentence, instead of being held in memory until the `tool_use` block finishes streaming its payload. The old two-minute pause while Claude generated a large `Write` input is gone.
- Claude `Thinking` is now visible on Opus 4.7: the SDK `thinking.display: "summarized"` flag is now always sent when thinking is enabled, so Claude Opus 4.7 emits plain-text reasoning fragments instead of encrypted-only signatures. Previously thinking on Opus 4.7 was invisible regardless of effort.
- New Reasoning effort level — x-High (Opus-only): Settings Claude now exposes `x-High` between `High` and `Max`. Documented by the SDK as "Deeper than high (Opus 4.7 only; falls back to High elsewhere)".
- Model labels no longer show stale version numbers: Claude model cards display just `Sonnet`/`Opus`/`Haiku` — Anthropic resolves the alias to the latest version at query time, so there's no more `Opus 4.5` label while the provider is actually running `Opus 4.7`.
- `Stop` no longer crashes core during a Claude turn: pressing `Stop` from Project Manager while Claude is streaming now interrupts the active turn cleanly. Late provider errors that arrive after shutdown are suppressed instead of leaking out as an unhandled error event, so the next user message can continue the same workflow session instead of finding a dead core.
- Claude `Thinking` now appears live instead of in one delayed block: reasoning is materialized into readable thinking bubbles as the model is still streaming, at sentence/paragraph boundaries, so the dialog no longer goes silent during long Claude reasoning. The final assembled thinking block is reconciled against what was already shown, so the same reasoning never appears twice.
- Translation overlays follow each live thinking bubble: each emitted live bubble carries its own stable `messageId`, and Core-owned translation overlays attach to those bubbles individually as soon as translation completes, so localized reasoning text now arrives incrementally too instead of waiting for the whole reasoning block.
- Project Manager `Stop` now uses the correct runtime transport: the shared session input panel now delegates `session:stop` through the Project Manager transport when it is hosted inside the standalone workflow shell, instead of trying to use the regular chat webview bridge that is not initialized there.
- Hung rollover sessions can now be interrupted from the Project Manager input: when a continuity resume stalls and the UI shows `Agent is resuming your session`, the `Stop` button can again send a real stop request for the active session and unblock the input path.
- Regression coverage now locks the Project Manager stop bridge: the core-bridge stop-session test asserts that the shared `stopSession()` helper forwards to the Project Manager hook when that environment is active.
- Description no longer leaks stale artifacts across workspace switches: Project Manager now ignores workflow snapshots that belong to the previous workspace while the new workspace handshake is still settling, so the right panel no longer reopens an old `Final_Description.md`.
- Description startup recovers the correct pre-submit surface after switching workspace: when the newly selected workspace only has `questionnaire.md`, the main area now stays aligned with the active workspace and shows the questionnaire editor instead of the false `Description artifact is not available yet` placeholder.
- Regression coverage now locks the workspace-snapshot guard: the main-area workflow-state test asserts that artifact derivation only accepts snapshots whose `workspaceSlug` and `workspacePath` match the current active workspace.
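The guard above is a pure predicate on workspace identity. A minimal sketch, assuming snapshot and workspace shapes modelled on the description (not the real PM types):

```typescript
// Assumed identity fields, per the changelog bullet above.
interface WorkspaceIdentity {
  workspaceSlug: string;
  workspacePath: string;
}
interface WorkflowSnapshot extends WorkspaceIdentity {
  artifacts: string[];
}

export function acceptSnapshot(snapshot: WorkflowSnapshot, active: WorkspaceIdentity): boolean {
  // Reject snapshots from the previous workspace while the handshake settles:
  // both identity fields must match the currently active workspace.
  return snapshot.workspaceSlug === active.workspaceSlug
    && snapshot.workspacePath === active.workspacePath;
}
```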
- Translation engine availability now follows the real provider runtime state: the Settings `Translation engine` selector keeps `Google GTX Free` available by default, but disables the OpenAI Codex and Anthropic Claude engines when their backing provider stack is unavailable in live `core:state`.
- Provider-owned engines no longer look selectable when access is missing: unavailable `Codex` and `Claude` translation entries now surface the provider recovery/status message instead of behaving like always-ready engines.
- The product now stays honest about what it knows: CodeAI Hub still does not perform a first-class subscription entitlement check, so the UI now gates by actual provider availability/auth status instead of implying that model access has been verified.
- Google GTX no longer fails strict localization sync on large runtime bundles: long marker-preserving localization batches such as `system_feedback` now switch from `GET` to `POST application/x-www-form-urlencoded`, avoiding URL-length overflow and preventing full-bundle fallback during Settings save.
- The whole-bundle localization contract stays intact for Google: `LocalizationMaterializer` still sends one structured no-chunk batch per runtime bundle, but `GoogleTranslateClient` now uses transport appropriate for the payload size instead of forcing long bundles through query-string transport.
- Regression coverage now locks the Google transport split: the shared translation package tests both short `GET` requests and large `POST` requests, so future changes cannot silently reintroduce the `83 fallback translations` failure on `system_feedback`.
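The transport split above amounts to picking a method by encoded payload size. A sketch under assumed names; the byte threshold here is an illustrative budget, not the real `GoogleTranslateClient` constant:

```typescript
// Conservative URL-length budget; the real client's threshold may differ.
const MAX_QUERY_STRING_BYTES = 8_000;

export function planTransport(batchText: string): {
  method: "GET" | "POST";
  contentType?: string;
} {
  const encodedBytes = new TextEncoder().encode(encodeURIComponent(batchText)).length;
  if (encodedBytes <= MAX_QUERY_STRING_BYTES) {
    return { method: "GET" }; // short batches stay on the query-string path
  }
  // Long marker-preserving bundles go through a form-encoded body instead,
  // avoiding URL-length overflow at the translate endpoint.
  return { method: "POST", contentType: "application/x-www-form-urlencoded" };
}
```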
- Haiku translation runtime now hard-disables thinking at both transport layers: the provider-owned Claude Haiku translation path keeps `thinking: { type: "disabled" }` and also passes the SDK setting `alwaysThinkingEnabled: false`, preventing literal help text such as `Ultrathink` from reactivating hidden Claude reasoning on interface/help bundle syncs.
- Translation-only query profile is now locked in by regression coverage: the Haiku translation service test asserts the explicit SDK `alwaysThinkingEnabled: false` flag together with the existing translate-only prompt and disabled thinking profile.
- Claude module SSOT now documents the hard-disable requirement: the module contract explicitly states that translation-only Haiku queries must not allow prompt-triggered thinking heuristics back in.
- Haiku localization/help sync now stays on an explicit translate-only path: the provider-owned Claude Haiku runtime wraps every request in a dedicated translation prompt and repeats the marker-preservation rule for whole-bundle `localization_bundle` batches, so helper/help/interface materialization no longer degrades into raw English responses from an under-specified prompt.
- Haiku native translation traces are now isolated in the intended runtime bucket: translation turns keep `persistSession: true`, but the query `cwd` now points at the dedicated `translation-runtime-haiku` project directory while auth/bootstrap still comes from provider-home, restoring predictable native Claude JSONL forensics.
- Duplicate reasoning translations no longer self-queue: Core now reuses one in-flight Haiku translation per `engineId + targetLanguage + sourceHash`, removing redundant live/replay duplicate requests that previously stretched long reasoning overlays behind a single-worker queue.
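In-flight reuse keyed the way the last bullet describes is a standard promise-coalescing map. A minimal sketch; `translateOnce` is a stand-in for the real Haiku call:

```typescript
// One pending promise per engineId:targetLanguage:sourceHash key.
const inFlight = new Map<string, Promise<string>>();

export function translateDeduped(
  engineId: string,
  targetLanguage: string,
  sourceHash: string,
  translateOnce: () => Promise<string>,
): Promise<string> {
  const key = `${engineId}:${targetLanguage}:${sourceHash}`;
  const existing = inFlight.get(key);
  if (existing) {
    return existing; // live and replay paths share the same pending promise
  }
  const pending = translateOnce().finally(() => inFlight.delete(key));
  inFlight.set(key, pending);
  return pending;
}
```

Deleting the key in `finally` keeps the map from caching results forever; only genuinely concurrent requests are coalesced.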
- Haiku Settings save no longer fails on a false bootstrap mismatch: extension-side strict sync now compares the same canonical five-category localization snapshot that Core returns from `/api/v1/localization/bootstrap`, instead of comparing it against a nine-key mirrored shape and rejecting an otherwise valid response.
- Core-only localization snapshot matching is now explicit and tested: the Haiku bootstrap path uses a dedicated runtime-settings helper plus regression coverage for the exact `anthropic-claude-haiku-4-5` save scenario that previously raised `Core localization bootstrap does not match the current settings snapshot`.
- Fast-start fixes from `1.1.988` remain intact: `Settings` and `Project Manager` still render immediately without blocking on localization bootstrap, while the corrected strict sync path now allows Haiku selection to save cleanly.
- Incremental localization sync on Save: provider-only, response-mode, and continuity saves skip the `Synchronizing localization` overlay entirely; engine or category saves rebuild only the runtime bundles actually affected by the change instead of forcing a full five-bundle rematerialization.
- Forward-only thinking visibility: visible `Thinking / Reasoning` bubbles carry an immutable `visibilityAtEmission` decision stamped at emission time, so turning `Thinking in dialog` / `Reasoning in dialog` back on inside a long-running session no longer reveals thinking that was hidden when it was emitted, and hidden thinking never enters the translation queue.
- Messages for the User explicitly owns visible Thinking / Reasoning: the localization contract, module SSOT, and Settings helper copy name visible provider Thinking / Reasoning as part of `Messages for the User`, so language + engine selection follow one explicit ownership decision.
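The forward-only stamp can be sketched as a frozen field decided once at emission. The message shape below is an assumption modelled on the bullet, not the real runtime type:

```typescript
interface ThinkingBubble {
  messageId: string;
  content: string;
  visibilityAtEmission: boolean; // immutable once stamped
}

export function emitThinking(
  messageId: string,
  content: string,
  thinkingEnabledNow: boolean,
): ThinkingBubble {
  // The visibility decision is frozen at emission time; later settings
  // changes never retroactively reveal a bubble emitted as hidden.
  return Object.freeze({ messageId, content, visibilityAtEmission: thinkingEnabledNow });
}

export function shouldRender(bubble: ThinkingBubble): boolean {
  return bubble.visibilityAtEmission;
}

export function shouldTranslate(bubble: ThinkingBubble): boolean {
  // Hidden thinking never enters the translation queue.
  return bubble.visibilityAtEmission;
}
```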
- Reasoning translation no longer re-chunks live thinking by default: shared runtime translation now keeps each provider-emitted reasoning block intact unless a caller explicitly opts back into chunking.
- Lower latency for Codex, Gemini, and Claude thinking overlays: the Core-owned reasoning overlay path now sends one translation request per visible thinking message instead of `2–5` sequential subrequests for the same message.
- Reasoning chunking remains opt-in only: generic/document translation keeps the existing engine-aware chunk planner, while reasoning can still explicitly request `chunkingMode = auto` for future experimental callers.
- Codex thinking translation bootstrap path repaired: Core now reads the persisted localization bootstrap snapshot from the canonical `~/.codeai-hub/localization/cache/browser-runtime-bootstrap.json` path instead of a double-prefixed non-existent path under `~/.codeai-hub/.codeai-hub/...`.
- Live reasoning overlays resume dispatch: once the persisted bootstrap matches the active localization settings, Codex `thinking` fragments can again enter the translation dispatch path and produce async overlay patches instead of being skipped forever as `localization_sync_pending`.
- Regression coverage for production-like settings/bootstrap layout: Core now tests the exact `~/.codeai-hub/settings/` + `~/.codeai-hub/localization/cache/` layout that previously disabled all Codex thinking translation in release runtime.
- Codex artifact language no longer falls back to English after PM restart: Project Manager now reuses the persisted browser localization bootstrap snapshot when the live settings cache is not ready, so `Artifacts for the User` stays aligned with the saved runtime language.
- Codex translation runtime survives legacy auth layout: isolated translation-only Codex homes now bootstrap from provider home first and transparently fall back to the legacy `~/.codex` auth/cache when needed.
- Thinking translation chunks stay independent: Codex reasoning delta messages now emit deterministic per-chunk ids instead of reusing one provider item id, preventing later translation overlays from overwriting earlier thinking fragments in live/replay/history paths.
- Codex Spark thinking translation repaired: Codex rollout thinking now stays on the source-first path and is upgraded by the Core-owned translation overlay instead of attempting a second provider-local translation inside the active Codex turn.
- Final assistant restore under workflow schema mode: rollout `final_answer` plain text now has a safe fallback path when structured parsing yields no `assistantText`, so Codex workflow turns no longer finish without a visible final reply.
- Dead rollout adapter removed: the obsolete provider-local Codex thought-translation adapter has been removed, keeping the runtime aligned with the single-owner overlay architecture and preventing `knip` regressions.
- Source-first thinking overlays: visible reasoning/thinking messages now appear immediately in their native provider language, then asynchronously switch to the user's language through stable `messageId`-based translation overlays instead of waiting on provider-local translation before render.
- Persisted localized history projection: translated thinking is now cached per session in a Core-owned sidecar and reapplied on history load, so reopening a session restores already-localized reasoning without rewriting the canonical transcript.
- Claude runtime packaging guard: release packaging now validates that the Claude installed bundle includes `@codeai-hub/translation`, closing the runtime gap that could break Claude's remaining provider-local pre-tool translation path.
- Trunk-step provider override: idle `Virtual Simulation` and `Diagram Modules` confirmation cards now show an inline provider selector. The previous-step provider stays preselected for the one-click path, but you can switch to any connected provider before pressing `Start step`.
- Chosen-provider bootstrap sync: when a new step starts on a different provider, Project Manager now seeds the dialog/bootstrap snapshot from the explicit step-start provider intent, so the lower model/status panel opens on the correct provider context instead of inheriting stale state from the previous trunk step.
- Provider-correct usage limits after step start: once the new step session reaches `binding.status === ready`, `Session ID + Usage Limits` refreshes against the selected provider/runtime identity and shows the correct provider-family limits (`Claude`, `Codex`, or `Gemini`).
- Simplified dialog restore adoption: Project Manager no longer blocks restored runtime-session adoption on PM-only `sessionKind`, so the auto-opened workflow step can actually switch from placeholder to real runtime session on first workspace open.
- First-open limits path restored: once the real runtime session is adopted, the existing ready-time `Session ID + Usage Limits` refresh path runs on the first auto-selected step instead of waiting for a manual step switch.
- No extra restore heuristics: the fix removes one invalid matcher condition instead of adding more branching, keeping the dialog restore path aligned to real continuity identity (`workspace`, `stage`, `run`, `provider`, `providerSessionId`).
- Auto-select runtime-restore fix: Project Manager no longer fires usage-limits refresh from a dialog bootstrap placeholder before the real runtime session exists, so limits can render on the auto-opened workflow step after workspace launch.
- Pending-to-runtime adoption in dialog mode: when Core materializes the runtime session for a restored dialog continuity entry, PM now replaces the placeholder snapshot with that real runtime session and carries the loaded dialog history forward.
- Ready-only manual refresh: `Session ID + Usage Limits` now waits for `binding.status === ready` before sending a manual refresh, preventing skipped requests against non-existent runtime sessions during restore.
- Auto-select diagnostics routed into file logs: standalone Project Manager now forwards usage-limits investigation events into Core-owned file logging, so the restore/bootstrap trace is captured in `~/.codeai-hub/logs/core/core.log`.
- Refresh decision visibility in Core: Core now records whether a manual usage-limits refresh found a runtime session, found a bound provider session id, and was actually dispatched to the provider adapter.
- Diagnostic-only release: this build is for isolating the auto-select usage-limits race after workspace open; it does not claim a behavioural fix yet.
- Dialog-session usage limits restored: Project Manager dialog-mode sessions now trigger the same live `Session ID + Usage Limits` refresh path as runtime sessions, so limits render again on active workflow stage screens.
- Live quota readers remain authoritative: Codex, Claude, and Gemini limits continue to come from their provider-specific live quota/HTML readers, not from SDK usage logs or stale browser state.
- Provider-global behavior retained: sessions that use the same provider still converge to one provider-global usage scope (`claude:global`, `codex:global`, `gemini:global`) across workflow steps.
- Provider-global usage limits: sessions that use the same provider now converge to a shared provider-global usage scope (`claude:global`, `codex:global`, `gemini:global`) instead of diverging by provider session id.
- No stale usage-limits cache: `Session ID + Usage Limits` no longer hydrates from persistent browser cache and now renders only from live snapshot state after refresh.
- Legacy scope migration on restore: restored workflow sessions with old session-specific usage-limit scope keys are normalized into the provider-global contract as soon as fresh limits arrive.
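The legacy-scope migration reduces to collapsing any per-session key onto its provider's global key. A sketch with assumed key shapes (the real contract may encode scopes differently):

```typescript
const PROVIDERS = ["claude", "codex", "gemini"] as const;

export function normalizeUsageScope(scopeKey: string): string {
  // Legacy keys looked roughly like "claude:sess-abc123" (an assumption);
  // the provider-global contract collapses them all to "<provider>:global".
  const provider = PROVIDERS.find((p) => scopeKey === p || scopeKey.startsWith(`${p}:`));
  return provider ? `${provider}:global` : scopeKey; // unknown keys pass through
}
```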
- Session-scoped usage limits refresh: `Session ID + Usage Limits` now refreshes against the real active session context (`sessionId + providerId + providerSessionId`) instead of a provider-wide synthetic bucket.
- Cold-start and stage-switch coverage: usage limits refresh now reruns when Project Manager restores the active workflow session on workspace open and when the user switches to another workflow step/session.
- Immediate rerender path: Core broadcasts manual refresh results back into the concrete runtime `sessionId`, so the active snapshot updates immediately through the normal `session:stream -> snapshots -> rerender` flow.
- Sidecar v2 persists layout params: `module-map.flow.json` schema bumped to `version: 2` with a new `layoutParams` section holding per-ProductPart (`columns`, `targetAspectRatio`) and per-Cluster (`moduleColumns`) CSS Grid overrides. Right-click selections now survive diagram reload, PM restart, and cross-window sidecar sync.
- Backwards compatible with v1: existing `module-map.flow.json` files from `1.1.921` still load without errors; missing `layoutParams` fall back to defaults, and on first context-menu edit the sidecar is upgraded to v2 automatically.
- Enum-guarded parser: invalid `columns`/`targetAspectRatio`/`moduleColumns` values are dropped per entry instead of failing the whole sidecar, so hand-edited files degrade gracefully to defaults.
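Per-entry enum guarding of the kind described above can be sketched as follows. The accepted value sets here are illustrative, not the real `module-map.flow.json` schema:

```typescript
// Hypothetical accepted values; the real sidecar schema may differ.
const ALLOWED_COLUMNS = [1, 2, 3, 4] as const;
const ALLOWED_ASPECTS = ["16:9", "4:3", "1:1"] as const;

interface PartLayout {
  columns?: number;
  targetAspectRatio?: string;
}

export function parsePartLayout(raw: unknown): PartLayout {
  const out: PartLayout = {};
  if (typeof raw !== "object" || raw === null) {
    return out; // whole entry falls back to defaults
  }
  const candidate = raw as Record<string, unknown>;
  // Invalid fields are dropped individually rather than failing the sidecar.
  if ((ALLOWED_COLUMNS as readonly number[]).includes(candidate.columns as number)) {
    out.columns = candidate.columns as number;
  }
  if ((ALLOWED_ASPECTS as readonly string[]).includes(candidate.targetAspectRatio as string)) {
    out.targetAspectRatio = candidate.targetAspectRatio as string;
  }
  return out;
}
```

Because every field degrades independently, a hand-edited file with one bad value keeps its remaining overrides.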
- React Flow removed: `@xyflow/react` dependency deleted; ProductPart cards render in single-column CSS Grid with native scroll.
- CSS Grid at all levels: ProductParts, Clusters, and Modules all use browser-native CSS Grid — zero JS layout code.
- Right-click context menu for ProductPart (columns, aspect ratio) and Cluster (module columns) layout overrides — in-memory only until Sidecar v2 in 1.1.922.
- Cmd/Ctrl+scroll zoom with smooth sensitivity; Cmd/Ctrl+0 resets to 100%; clickable zoom badge.
- Edges between modules removed from the diagram canvas.
Previous releases (summary): 1.1.800–1.1.917 — CSS Grid layout engine replacing the iterative settle-loop (~1350 lines deleted), standalone file-link query decode hotfixes, left-sidebar active-stage sync, temporary Description-first workspace startup, workflow-state startup SSOT alignment, Diagram Modules canonical English naming under localized prose, Codex raw-rollout dialog semantics, Codex empty-terminal answer recovery, the short-lived Foundation Envelope rollout later retired in 1.1.906, the heuristic-only Diagram Modules boundary wave in 1.1.907–1.1.915, and earlier localization/provider/release stabilization waves.
- Unified provider orchestration: launch Claude, Codex, or Gemini sessions from an identical picker; the dialog surfaces connection state, enforces one-provider selection, and reminds you to install/authenticate matching CLIs.
- Description-first workflow: the first guided workflow step is `Description`, producing `questionnaire.md` and `Final_Description.md` as the canonical entry into `Virtual Simulation`.
- Persistent standalone UI: the macOS launcher (CEF) stores window position and size in real time, so Project Manager reopens exactly where you left it—even across monitor changes.
- Offline-first packaging: manifests point to the local `~/.codeai-hub/releases/` cache, build scripts publish fresh tarballs for core, launcher, and provider modules without relying on GitHub downloads, and the shipped VSIX excludes repository-only Husky hook helpers.
- Quality guardrails: Ultracite architecture rules, jscpd duplication scans, knip dead-code detection, and Biome formatting are orchestrated through Husky pre-commit/pre-push hooks.
CodeAI Hub is already usable, but the current recommended installation path is still source-based. If you want to try the product today, clone the repository, build the release artifacts locally, and install the generated VSIX into Visual Studio Code.
- Git
- `nvm`
- Node.js 20 + `npm`
- Visual Studio Code
- `cmake` (required for the standalone CEF launcher / Project Manager build)
- the provider CLIs or SDK access you plan to use (`Claude`, `Codex`, `Gemini`) installed and authenticated separately
```shell
git clone https://github.com/OleynikAleksandr/CodeAI-Hub.git
cd CodeAI-Hub
nvm use || nvm install 20
npm install
npm run setup:hooks
./scripts/build-all.sh
./scripts/build-release.sh --use-current-version
```
- VSIX package in the repository root: `codeai-hub-<version>.vsix`
- fresh runtime tarballs in: `doc/tmp/releases/` and `~/.codeai-hub/releases/`
Open Visual Studio Code and run `Extensions: Install from VSIX...`, then select the generated `codeai-hub-<version>.vsix`.
- This is the current early-access path, not a polished one-click installer.
- The first full build can take a while because it prepares provider bundles, UI bundles, core runtime, and the standalone launcher.
- Provider CLIs / SDKs are not bundled inside this repository and must be available separately.
Before starting, read `doc/SolidWorks-WorkFlow/Docs_Index.md` and follow the SSOT contracts in `doc/SolidWorks-WorkFlow/Contracts/` (especially `Contracts/Workflow_CLI.md`) to configure provider CLIs and SDKs.
- Install dependencies:
```shell
npm install
npm run setup:hooks   # installs Husky git hooks
```
- Implement changes in `src/` and `packages/**` (micro-classes + facades; keep files under 500 lines).
- Run quality checks before committing:
```shell
npm run quality       # architecture gate + Ultracite lint
npm run check:knip    # detect unused files/exports
npm run compile       # ensure TypeScript builds cleanly
```
- GitHub Actions now runs a minimal public CI baseline on every push to `main` and on every pull request.
- The workflow enforces the same root quality gates used as the local baseline: `npm run check:architecture`, `npm run lint`, `npm run check:knip`, and `npm run compile`.
- The root `compile` gate now builds `@codeai-hub/translation`, `@codeai-hub/localization`, and `@codeai-hub/core-supervisor` before the browser/root type-check, so clean GitHub runners do not depend on pre-existing workspace `dist/` folders.
- Local Husky hooks remain the fastest feedback path; CI is the public verification surface, not a replacement for the local release ritual.
```shell
./scripts/build-all.sh
./scripts/build-release.sh --use-current-version
```
media/ Bundled webview assets (CSS + JS) shipped with the extension.
media/react-chat.js React bundle generated by the webview build script.
src/core/webview-module/ HTML scaffold that injects the webview assets.
src/extension-module/ Extension host micro-classes.
src/extension.ts Entry point registering the webview provider.
scripts/ Quality and release automation.
doc/ Architecture and knowledge base.
This repository is currently distributed as UNLICENSED. Source is visible for audit and development collaboration, but redistribution requires explicit permission from the repository owner.