Merge upstream 0.6.97 into local overlay #12
Merged
cbusillo merged 258 commits into local/cbusillo-overlay on May 1, 2026
Conversation
## Why Config loading had become split across crates: `codex-config` owned the config types and merge logic, while `codex-core` still owned the loader that assembled the layer stack. This change consolidates that responsibility in `codex-config`, so the crate that defines config behavior also owns how configs are discovered and loaded. To make that move possible without reintroducing the old dependency cycle, the shell-environment policy types and helpers that `codex-exec-server` needs now live in `codex-protocol` instead of flowing through `codex-config`. This also makes the migrated loader tests more deterministic on machines that already have managed or system Codex config installed by letting tests override the system config and requirements paths instead of reading the host's `/etc/codex`. ## What Changed - moved the config loader implementation from `codex-core` into `codex-config::loader` and deleted the old `core::config_loader` module instead of leaving a compatibility shim - moved shell-environment policy types and helpers into `codex-protocol`, then updated `codex-exec-server` and other downstream crates to import them from their new home - updated downstream callers to use loader/config APIs from `codex-config` - added test-only loader overrides for system config and requirements paths so loader-focused tests do not depend on host-managed config state - cleaned up now-unused dependency entries and platform-specific cfgs that were surfaced by post-push CI ## Testing - `cargo test -p codex-config` - `cargo test -p codex-core config_loader_tests::` - `cargo test -p codex-protocol -p codex-exec-server -p codex-cloud-requirements -p codex-rmcp-client --lib` - `cargo test --lib -p codex-app-server-client -p codex-exec` - `cargo test --no-run --lib -p codex-app-server` - `cargo test -p codex-linux-sandbox --lib` - `cargo shear` - `just bazel-lock-check` ## Notes - I did not chase unrelated full-suite failures outside the migrated loader surface. - `cargo test -p codex-core --lib` still hits unrelated proxy-sensitive failures on this machine, and Windows CI still shows unrelated long-running or timing-out test noise outside the loader migration itself.
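A minimal sketch of the test-only path override idea described above, assuming hypothetical names (`LoaderPaths`, `system_config_override`) rather than the real `codex-config` loader types:

```rust
use std::path::PathBuf;

/// Hypothetical illustration of test-only path overrides; the real loader in
/// `codex-config` has its own types. This only shows the fallback pattern.
#[derive(Default)]
struct LoaderPaths {
    /// When set (normally only from tests), read system config from here
    /// instead of the host's managed location.
    system_config_override: Option<PathBuf>,
    requirements_override: Option<PathBuf>,
}

impl LoaderPaths {
    fn system_config_dir(&self) -> PathBuf {
        self.system_config_override
            .clone()
            .unwrap_or_else(|| PathBuf::from("/etc/codex"))
    }

    fn requirements_file(&self) -> PathBuf {
        self.requirements_override
            .clone()
            .unwrap_or_else(|| self.system_config_dir().join("requirements.toml"))
    }
}

fn main() {
    // In a test, point the loader at a temp dir so host-managed config never leaks in.
    let paths = LoaderPaths {
        system_config_override: Some(PathBuf::from("/tmp/codex-test-etc")),
        requirements_override: None,
    };
    assert_eq!(
        paths.requirements_file(),
        PathBuf::from("/tmp/codex-test-etc/requirements.toml")
    );
}
```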
…ai#19393) ## Why Runtime decisions should not infer permissions from the lossy legacy sandbox projection once `PermissionProfile` is available. In particular, `Disabled` and `External` need to remain distinct, and managed profiles with split filesystem or deny-read rules should not be collapsed before approval, network, safety, or analytics code makes decisions. ## What Changed - Changes managed network proxy setup and network approval logic to use `PermissionProfile` when deciding whether a managed sandbox is active. - Migrates patch safety, Guardian/user-shell approval paths, Landlock helper setup, analytics sandbox classification, and selected turn/session code to profile-backed permissions. - Validates command-level profile overrides against the constrained `PermissionProfile` rather than a strict `SandboxPolicy` round trip. - Preserves configured deny-read restrictions when command profiles are narrowed. - Adds coverage for profile-backed trust, network proxy/approval behavior, patch safety, analytics classification, and command-profile narrowing. ## Verification - `cargo test -p codex-core direct_write_roots` - `cargo test -p codex-core runtime_roots_to_legacy_projection` - `cargo test -p codex-app-server requested_permissions_trust_project_uses_permission_profile_intent` --- [//]: # (BEGIN SAPLING FOOTER) Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/openai/codex/pull/19393). * openai#19395 * openai#19394 * __->__ openai#19393
Summary: - Update config tests to reference config requirement types from codex_config after the loader split. Tests: - just fmt - cargo build -p codex-core --tests - cargo clippy -p codex-core --tests -- -D warnings
## Summary Increase `core-all-test`'s Bazel shard count from `8` to `16`. ## Why [openai#19609](openai#19609) restored `bazel.yml` to a 30-minute timeout and increased `app-server-all-test`'s shard count because the bigger timeout risk was not just a cold Windows build. The more common problem was a long `rust_test()` shard failing and getting retried multiple times. Recent `main` runs show that `//codex-rs/core:core-all-test` still has the same shape of problem on Windows: - [Run 24943931330](https://github.com/openai/codex/actions/runs/24943931330) reported `//codex-rs/core:core-all-test` as flaky after first-attempt failures in shard `5/8` and shard `8/8`. - Those retries were driven by `suite::cli_stream::responses_mode_stream_cli_supports_openai_base_url_config_override` and `suite::pending_input::steered_user_input_waits_when_tool_output_triggers_compact_before_next_request`. - The failed shard attempts in that run took `272.61s` and `259.27s` before retrying, which is exactly the sort of wall-clock cost that burns through the 30-minute budget. - [Run 24966332583](https://github.com/openai/codex/actions/runs/24966332583) also retried `//codex-rs/tui:tui-unit-tests` after `app::tests::update_memory_settings_updates_current_thread_memory_mode` failed once on Windows. - [Run 24965527138](https://github.com/openai/codex/actions/runs/24965527138) and its linked [BuildBuddy invocation](https://app.buildbuddy.io/invocation/ac1a8265-06fa-4da5-9552-4715b7965bce) show the other half of the problem: when Windows cache reuse is weak, the `bazel test //...` step can already consume `24m11s` on its own, leaving very little headroom for flaky retries. Increasing `core-all-test` to `16` shards does not fix the flaky tests, but it does reduce the wall-clock cost when a single shard has to be retried. That matches the mitigation we already applied to `app-server-all-test` in `openai#19609`. ## What Changed - Update `codex-rs/core/BUILD.bazel` so `core-all-test` uses `16` shards instead of `8`. - Leave `core-unit-tests` unchanged. ## Follow-up Work This change is meant to buy back CI headroom while we fix the flaky tests themselves in subsequent commits. The recent Windows retries that look worth addressing directly include: - `suite::cli_stream::responses_mode_stream_cli_supports_openai_base_url_config_override` - `suite::pending_input::steered_user_input_waits_when_tool_output_triggers_compact_before_next_request` - `app::tests::update_memory_settings_updates_current_thread_memory_mode` ## Verification - Compared `core-all-test`'s current sharding against the `app-server-all-test` precedent in [openai#19609](openai#19609). - Inspected recent `main` Bazel workflow logs and the linked BuildBuddy invocation to confirm that Windows retries on long shards are still consuming a meaningful fraction of the 30-minute timeout budget. - Did not run local tests for this change because it only adjusts Bazel sharding metadata.
## Why The MCP connection manager module had grown to mix orchestration, RMCP client startup, elicitation handling, Codex Apps cache and naming behavior, tool qualification and filtering, and runtime data. The previous stacked PRs split these responsibilities incrementally; this PR collapses that work into one self-contained refactor on latest main. ## What changed - Move McpConnectionManager into connection_manager.rs. - Move RMCP client lifecycle, startup, and uncached tool listing into rmcp_client.rs. - Move elicitation request tracking and policy handling into elicitation.rs. - Move Codex Apps cache, key, filtering, and naming helpers into codex_apps.rs. - Rename the tool-name helper module to tools.rs and move ToolInfo, tool filtering, schema masking, and qualification there. - Move runtime and sandbox shared types into runtime.rs. - Preserve latest main PermissionProfile-based MCP elicitation auto-approval behavior. ## Verification - just fmt - cargo check -p codex-mcp - cargo check -p codex-mcp --tests - cargo check -p codex-core --------- Co-authored-by: Codex <noreply@openai.com>
This field is unused. Delete it.
## Why Several execution paths still converted profile-backed permissions into `SandboxPolicy` and then rebuilt runtime permissions from that legacy shape. Those round trips are unnecessary after the preceding PRs and can lose split filesystem semantics. Core approval and escalation should carry the resolved profile directly. ## What Changed - Removes `sandbox_policy` from `ResolvedPermissionProfile`; the resolved permission object now carries the canonical `PermissionProfile` directly. - Updates exec-policy fallback, shell/unified-exec interception, escalation reruns, and related tests to pass profiles instead of legacy policies. - Removes legacy additional-permission merge helpers that built an effective `SandboxPolicy` before rebuilding runtime permissions. - Keeps legacy projections only at compatibility boundaries that still require `SandboxPolicy`, not in core permission computation. ## Verification - `cargo test -p codex-core direct_write_roots` - `cargo test -p codex-core runtime_roots_to_legacy_projection` - `cargo test -p codex-app-server requested_permissions_trust_project_uses_permission_profile_intent` --- [//]: # (BEGIN SAPLING FOOTER) Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/openai/codex/pull/19394). * openai#19737 * openai#19736 * openai#19735 * openai#19734 * openai#19395 * __->__ openai#19394
# Why Requirements support host-specific `remote_sandbox_config.hostname_patterns`, but config loading previously resolved and passed the system hostname through every config-loading path even when no requirements layer used `remote_sandbox_config`. On machines where hostname lookup is slow, startup and app-server config reads paid for a feature that was not active. We only need the hostname when a requirements layer actually declares `remote_sandbox_config`, so this moves hostname resolution to the single requirements merge point and keeps all other config callers unaware of hostname matching. # What - Removed the eager `host_name` plumbing from `load_config_layers_state`, `load_requirements_toml`, `ConfigBuilder`, app-server `ConfigManager`, network proxy loading, and related call sites. - Resolve the hostname inside `merge_requirements_with_remote_sandbox_config` only when the incoming requirements contain `remote_sandbox_config`.
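A rough sketch of the lazy-hostname pattern, with stand-in types (`Requirements`, `RemoteSandboxConfig`, `resolve_hostname`) that are assumptions, not the actual codex-config API:

```rust
/// Hypothetical shapes; the real types live in the config/requirements crates.
struct RemoteSandboxConfig {
    hostname_patterns: Vec<String>,
}

struct Requirements {
    remote_sandbox_config: Option<RemoteSandboxConfig>,
}

/// Stand-in for however the loader actually reads the system hostname.
/// The point is only that it is called lazily.
fn resolve_hostname() -> String {
    std::env::var("HOSTNAME").unwrap_or_else(|_| "localhost".to_string())
}

fn merge_requirements(reqs: &Requirements) {
    // Only pay for hostname lookup when a requirements layer actually uses it.
    if let Some(remote) = &reqs.remote_sandbox_config {
        let host = resolve_hostname();
        // Real matching is pattern-based; plain equality keeps the sketch short.
        let matches = remote.hostname_patterns.iter().any(|pattern| pattern == &host);
        println!("remote sandbox applies to {host}: {matches}");
    }
    // ...merge the rest of the requirements without ever touching the hostname.
}

fn main() {
    // No remote_sandbox_config, so no hostname lookup happens at all.
    merge_requirements(&Requirements { remote_sandbox_config: None });
}
```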
## Why The remaining migration work still needs `SandboxPolicy` at a few compatibility boundaries, but those projections should come from one canonical path. Keeping ad hoc legacy projections scattered through app-server, CLI, and config code makes it easy for behavior to drift as `PermissionProfile` gains fidelity that the legacy enum cannot represent. ## What Changed - Adds `Permissions::legacy_sandbox_policy(cwd)` and `Config::legacy_sandbox_policy()` as the compatibility projection from the canonical `PermissionProfile`. - Adds `Permissions::can_set_legacy_sandbox_policy()` so legacy inputs are checked after they are converted into profile semantics. - Updates app-server command handling, Windows sandbox setup, session configuration, and sandbox summaries to use the centralized projection helper. - Leaves `SandboxPolicy` in place only for boundary inputs/outputs that still speak the legacy abstraction. ## Verification - `cargo check -p codex-config -p codex-core -p codex-sandboxing -p codex-app-server -p codex-cli -p codex-tui` - `cargo test -p codex-tui permissions_selection_history_snapshot_full_access_to_default -- --nocapture` - `cargo test -p codex-tui permissions_selection_sends_approvals_reviewer_in_override_turn_context -- --nocapture` - `bazel test //codex-rs/tui:tui-unit-tests-bin --test_arg=permissions_selection_history_snapshot_full_access_to_default --test_output=errors` - `bazel test //codex-rs/tui:tui-unit-tests-bin --test_arg=permissions_selection_sends_approvals_reviewer_in_override_turn_context --test_output=errors` --- [//]: # (BEGIN SAPLING FOOTER) Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/openai/codex/pull/19734). * openai#19737 * openai#19736 * openai#19735 * __->__ openai#19734
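To illustrate why the projection is a compatibility boundary rather than a source of truth, here is a deliberately simplified sketch; the enum shapes and helper name are stand-ins, and the real `PermissionProfile`/`SandboxPolicy` carry more state:

```rust
use std::path::PathBuf;

/// Illustrative stand-ins only; not the actual codex-rs types.
enum PermissionProfile {
    ReadOnly,
    WriteRoots { writable: Vec<PathBuf>, deny_read: Vec<PathBuf> },
    FullAccess,
}

#[derive(Debug)]
enum SandboxPolicy {
    ReadOnly,
    WorkspaceWrite { writable_roots: Vec<PathBuf> },
    DangerFullAccess,
}

impl PermissionProfile {
    /// One canonical, admittedly lossy projection for boundaries that still
    /// speak the legacy enum (deny-read rules have no legacy equivalent).
    fn legacy_sandbox_policy(&self, cwd: &PathBuf) -> SandboxPolicy {
        match self {
            PermissionProfile::ReadOnly => SandboxPolicy::ReadOnly,
            PermissionProfile::FullAccess => SandboxPolicy::DangerFullAccess,
            PermissionProfile::WriteRoots { writable, .. } => SandboxPolicy::WorkspaceWrite {
                writable_roots: if writable.is_empty() { vec![cwd.clone()] } else { writable.clone() },
            },
        }
    }
}

fn main() {
    let profile = PermissionProfile::WriteRoots {
        writable: vec![],
        deny_read: vec![PathBuf::from("/secrets")], // silently dropped by the projection
    };
    println!("{:?}", profile.legacy_sandbox_policy(&PathBuf::from("/work")));
}
```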
## Why Auto-review can deny an action that the user later decides they want to retry. Today there is no TUI surface for selecting a recent denial and sending explicit approval context back into the session, so users have to restate intent manually and the retry can be reviewed without the original denied action context. This adds a narrow TUI-driven path for approving a recent denied action while still keeping the retry inside the normal auto-review flow. ## What Changed - Added `/auto-review-denials` to open a picker of recent denied auto-review actions. - Added a small in-memory TUI store for the 10 most recent denied auto-review events. - Selecting a denial sends the structured denied event back through the existing core/app-server op path. - Core now injects a developer message containing the approved action JSON rather than the full assessment event. - Auto-review transcript collection now preserves this specific approval developer message so follow-up review sessions can see the user approval context. - Added TUI snapshot/unit coverage for the picker and approval dispatch path. - Added core coverage for retaining the approval developer message in the auto-review transcript. ## Verification - `cargo test -p codex-core collect_guardian_transcript_entries_keeps_manual_approval_developer_message` - `cargo test -p codex-tui auto_review_denials` - `cargo test -p codex-tui approving_recent_denial_emits_structured_core_op_once` ## Notes This intentionally keeps retries going through auto-review. The approval signal is context for the exact previously denied action, not a blanket bypass for similar future actions.
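A minimal sketch of the bounded in-memory store idea (capacity 10, newest first); the type and field names are hypothetical, not the actual TUI code:

```rust
use std::collections::VecDeque;

/// Hypothetical record of one denied auto-review action.
struct DeniedAction {
    summary: String,
    event_json: String,
}

struct DeniedActionStore {
    items: VecDeque<DeniedAction>,
    capacity: usize,
}

impl DeniedActionStore {
    fn new() -> Self {
        Self { items: VecDeque::new(), capacity: 10 }
    }

    /// Newest first; anything past the capacity falls off the back.
    fn push(&mut self, action: DeniedAction) {
        self.items.push_front(action);
        self.items.truncate(self.capacity);
    }

    fn recent(&self) -> impl Iterator<Item = &DeniedAction> {
        self.items.iter()
    }
}

fn main() {
    let mut store = DeniedActionStore::new();
    for i in 0..12 {
        store.push(DeniedAction {
            summary: format!("denied action {i}"),
            event_json: "{}".to_string(),
        });
    }
    // Only the 10 most recent denials remain available to the picker.
    assert_eq!(store.recent().count(), 10);
}
```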
## Why After config and requirements store canonical profiles, exec requests should not cache a derived `SandboxPolicy`. The cached legacy value can drift from the richer profile state, and most execution paths already have the filesystem and network runtime policies they need. ## What Changed - Removes `sandbox_policy` from `codex_sandboxing::SandboxExecRequest` and `codex_core::sandboxing::ExecRequest`. - Adds an on-demand `ExecRequest::compatibility_sandbox_policy()` helper for the Windows and legacy call sites that still need a `SandboxPolicy` projection. - Updates Windows filesystem override setup and unified exec policy serialization to derive that compatibility policy at the boundary. - Updates Unix escalation reruns and direct shell requests to reconstruct exec requests from `PermissionProfile` plus runtime filesystem/network policy, without carrying a cached legacy policy. - Adjusts sandboxing manager tests to assert the effective profile rather than the removed legacy field. ## Verification - `cargo check -p codex-config -p codex-core -p codex-sandboxing -p codex-app-server -p codex-cli -p codex-tui` - `cargo test -p codex-sandboxing manager` - `cargo test -p codex-core exec_server_params_use_env_policy_overlay_contract` - `cargo test -p codex-core unix_escalation` - `cargo test -p codex-core exec::tests` - `cargo test -p codex-core sandboxing::tests`
Problem: Maintainers need a shared way to run Codex GitHub issue digests without copying large prompts or relying on manual GitHub page summaries. Solution: Add a reusable codex-issue-digest skill with a deterministic GitHub collector, owner/all-label windows, reaction-aware activity metrics, scaled attention markers, and focused tests.
## Why

`features.multi_agent_v2.max_concurrent_threads_per_session` is meant to be the MultiAgentV2-specific session thread cap: it counts the root thread and all open subagent threads. The previous implementation kept this surface tied to `agents.max_threads`, which made it a global subagent-only cap and allowed the legacy setting to coexist with MultiAgentV2.

## What Changed

- Added `max_concurrent_threads_per_session` to `[features.multi_agent_v2]` with default `4`.
- Removed the `[agents] max_concurrent_threads_per_session` alias to `agents.max_threads`.
- When MultiAgentV2 is enabled, reject `agents.max_threads` and derive the existing internal subagent slot limit as `max_concurrent_threads_per_session - 1`.
- Regenerated `core/config.schema.json` and added coverage for the new config semantics.

## Result

```
➜ codex git:(jif/clean-multi-agent-v2-config) codex -c features.multi_agent_v2.enabled=true -c features.multi_agent_v2.max_concurrent_threads_per_session=3
╭────────────────────────────────────────────────────╮
│ >_ OpenAI Codex (v0.0.0)                           │
│                                                    │
│ model: gpt-5.5 xhigh fast   /model to change       │
│ directory: ~/code/codex                            │
╰────────────────────────────────────────────────────╯

Tip: Update Required - This version will no longer be supported starting May 8th. Please upgrade to the latest version (https://github.com/openai/codex/releases/latest) using your preferred package manager.

› Can you try to spawn 4 agents

• I’ll try to start four lightweight agents at once and report exactly what the runtime accepts.

• Spawned Russell [no-apps] (gpt-5.5 xhigh)
  └ Spawn probe 1: reply briefly that you started, then wait for further instructions. Do not do any repo work.

• Spawned Descartes [no-apps] (gpt-5.5 xhigh)
  └ Spawn probe 2: reply briefly that you started, then wait for further instructions. Do not do any repo work.

• Agent spawn failed
  └ Spawn probe 3: reply briefly that you started, then wait for further instructions. Do not do any repo work.

• Agent spawn failed
  └ Spawn probe 4: reply briefly that you started, then wait for further instructions. Do not do any repo work.

• The runtime accepted the first two and rejected the next two with agent thread limit reached. I’m checking whether the two accepted probes have returned cleanly, then I’ll close them if needed.
```

---------

Co-authored-by: Codex <noreply@openai.com>
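A small sketch of the derivation described above, assuming made-up config structs; the real validation and error types live in the codex config code:

```rust
/// Illustrative only; not the actual codex-core config types.
struct MultiAgentV2 {
    enabled: bool,
    /// Counts the root thread plus all open subagent threads.
    max_concurrent_threads_per_session: u32,
}

fn subagent_slot_limit(
    multi_agent_v2: &MultiAgentV2,
    legacy_agents_max_threads: Option<u32>,
) -> Result<u32, String> {
    if !multi_agent_v2.enabled {
        // Outside MultiAgentV2 the legacy setting keeps its old meaning.
        return Ok(legacy_agents_max_threads.unwrap_or(4));
    }
    if legacy_agents_max_threads.is_some() {
        return Err("`agents.max_threads` cannot be combined with MultiAgentV2".to_string());
    }
    // The root thread takes one slot, so subagents get one fewer.
    Ok(multi_agent_v2.max_concurrent_threads_per_session.saturating_sub(1))
}

fn main() {
    let cfg = MultiAgentV2 { enabled: true, max_concurrent_threads_per_session: 3 };
    // Matches the transcript above: two spawns succeed, later ones are rejected.
    assert_eq!(subagent_slot_limit(&cfg, None), Ok(2));
}
```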
…#18982)

## Why

This PR makes the `morpheus` agent (memory phase 2) use a git diff to start its consolidation. The workflow is the following:

1. The agent acquires a lock.
2. If `.codex/memories` does not exist or is not a git root, initialize everything (and make a first empty commit).
3. Update `raw_memories.md` and `rollout_summaries/` as before. Basically we select at most N phase 1 memories based on a given policy.
4. We use git (`gix`) to get a diff between the current state of `.codex/memories` and the last commit.
5. Dump the diff in `phase2_workspace_diff.md`.
6. Spawn `morpheus` and point it at `phase2_workspace_diff.md`.
7. Wait for `morpheus` to be done.
8. Re-create a new `.git` and make one single commit on it. We do this because we don't want to preserve history through `.git`, and this is cheap anyway.
9. Release the lock.

On top of this, we keep the existing retry policies. The goals of this new workflow are:

* Better support for memory extensions such as `chronicle`
* Allow the user to manually edit memories and have those edits considered by the phase 2 agent

As a follow-up we will need to add support for edits the user makes while `morpheus` is running.

## What Changed

- Added memory workspace helpers that prepare the git baseline, compute the diff, write `phase2_workspace_diff.md`, and reset the baseline after successful consolidation.
- Updated Phase 2 to sync current inputs into `raw_memories.md` and `rollout_summaries/`, prune old extension resources, skip clean workspaces, and run the consolidation subagent only when the workspace has changes.
- Tightened Phase 2 job ownership around long-running consolidation with heartbeats and an ownership check before resetting the baseline.
- Simplified the prompt and state APIs so DB watermarks are bookkeeping, while workspace dirtiness decides whether consolidation work exists.
- Updated the memory pipeline README and tests for workspace diffs, extension-resource cleanup, pollution-driven forgetting, selection ranking, and baseline persistence.

## Verification

- Added/updated coverage in `core/src/memories/tests.rs`, `core/src/memories/workspace_tests.rs`, `state/src/runtime/memories.rs`, and `core/tests/suite/memories.rs`.

---------

Co-authored-by: Codex <noreply@openai.com>
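For step 8, a rough illustration of the baseline reset; the real implementation uses `gix` in-process, while this sketch shells out to the `git` CLI purely to show the intent of dropping history and keeping a single commit:

```rust
use std::path::Path;
use std::process::Command;

/// Illustration only: throw away the old history and leave one fresh baseline commit.
/// The actual workflow uses `gix` and lives in the memory workspace helpers.
fn reset_baseline(workspace: &Path) -> std::io::Result<()> {
    // Drop the previous history entirely; we never want to preserve it.
    std::fs::remove_dir_all(workspace.join(".git")).ok();

    for args in [
        vec!["init"],
        vec!["add", "--all"],
        vec![
            "-c", "user.name=codex",
            "-c", "user.email=codex@example.com", // placeholder identity for the sketch
            "commit", "--allow-empty", "-m", "memory baseline",
        ],
    ] {
        let status = Command::new("git").args(&args).current_dir(workspace).status()?;
        if !status.success() {
            return Err(std::io::Error::other(format!("git {args:?} failed")));
        }
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    // Point this at an existing memory workspace directory.
    reset_baseline(Path::new(".codex/memories"))
}
```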
## Why The Phase 2 memories job row is only the global lock for the git-backed memory workspace. Manual memory edits do not enqueue new Stage 1 work, so a Phase 2 row with `retry_remaining = 0` could be skipped before the worker ever claimed the lock and generated `phase2_workspace_diff.md`. That left workspace-only changes unconsolidated after repeated failures, even when retry backoff had elapsed and the filesystem had real diffable work. ## What Changed - Allow `try_claim_global_phase2_job` to claim the Phase 2 lock after the retry budget is exhausted, while still respecting active `retry_at` backoff and fresh running leases. - Treat `SkippedRetryUnavailable` for Phase 2 as backoff-only, and update the outcome docs to match. - Clamp Phase 2 retry bookkeeping at zero when failed attempts are recorded. ## Verification - Added `phase2_global_lock_can_be_claimed_after_retry_budget_is_exhausted` to cover the exhausted-budget lock claim path. - Ran `cargo test -p codex-state`.
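A simplified sketch of the claim rule, with hypothetical row fields; the real logic lives in `codex-state` and also handles lease renewal and attempt bookkeeping:

```rust
use std::time::Instant;

/// Hypothetical row state for the global Phase 2 job.
struct Phase2Job {
    retry_remaining: u32,
    retry_at: Option<Instant>,
    lease_expires_at: Option<Instant>,
}

/// The lock can be claimed even with an exhausted retry budget, as long as
/// backoff has elapsed and no other worker holds a fresh lease.
fn can_claim(job: &Phase2Job, now: Instant) -> bool {
    if let Some(retry_at) = job.retry_at {
        if now < retry_at {
            return false; // still backing off
        }
    }
    if let Some(lease) = job.lease_expires_at {
        if now < lease {
            return false; // another worker is actively running
        }
    }
    // Note: retry_remaining is intentionally NOT checked for the Phase 2 global lock.
    let _ = job.retry_remaining;
    true
}

fn main() {
    let now = Instant::now();
    let exhausted = Phase2Job {
        retry_remaining: 0,
        retry_at: Some(now),    // backoff already elapsed
        lease_expires_at: None, // no fresh running lease
    };
    assert!(can_claim(&exhausted, now));
}
```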
## Why Phase 2 can now claim the global consolidation lock on startup even when the git-backed memory workspace is already clean. The clean-workspace path still finalized through the normal Phase 2 success path, which clears and re-marks `selected_for_phase2` rows. That made no-op startups perform avoidable writes to `stage1_outputs`, creating unnecessary DB I/O and contention when no memory files changed. ## What Changed - Added a preserving-selection Phase 2 finalizer in `codex-state` that only marks the global job row as succeeded. - Kept the existing `mark_global_phase2_job_succeeded` behavior for real consolidation runs, where the selected Phase 2 snapshot must be rewritten. - Switched the `succeeded_no_workspace_changes` branch in `core/src/memories/phase2.rs` to use the preserving-selection finalizer. - Added a regression test that installs a SQLite trigger on `stage1_outputs` and verifies the clean finalizer performs zero updates there. ## Testing - `cargo test -p codex-state` - `cargo test -p codex-core memories::tests::phase2`
Extract memories into 2 different crates
## Why Fixes openai#19508. In a fresh TUI session, pressing `Esc` twice entered the rewind transcript overlay even though there was no user message to rewind to. That produced an empty header-only transcript view and exposed a rewind flow that could not select a valid target. ## What changed The backtrack flow now checks whether a user-message rewind target exists before opening the transcript preview. If no target exists, Codex stays in the main TUI and shows `No previous message to edit.` instead of opening an empty overlay. The same guard applies when starting rewind preview from the transcript overlay, and the first `Esc` no longer advertises the “edit previous message” hint when there is no previous message available. Snapshot coverage was added for the unavailable rewind info message, along with a small target-detection test.
## Why `!` shell commands are currently surfaced as "Bash mode", which is misleading for users running shells such as PowerShell or zsh. Those commands also bypass the persistent prompt history path, so they cannot be recalled after starting a new session. Fixes openai#19613. ## What changed - Rename the TUI footer label and related test wording from "Bash mode" to "Shell mode". - Persist accepted `!` shell commands to prompt history with the leading `!`, so recall restores the composer into shell mode across sessions. - Add coverage for immediate and queued shell-command submissions emitting the prompt-history update. ## Verification - `cargo test -p codex-tui bang_shell` - `cargo test -p codex-tui shell_command_uses_shell_accent_style` - `cargo test -p codex-tui footer_mode_snapshots` - `cargo insta pending-snapshots --manifest-path tui/Cargo.toml` Manually verified fix after confirming presence of bug prior to fix.
## Why Fixes openai#19632. When a delegated agent requests approval for an in-progress file change, the parent TUI handles that request from an inactive thread. The app server already sent the `FileChange` item with the proposed diff, but the inactive-thread approval path was not recovering and rendering it the same way as the active-thread path. The result was an inconsistent approval prompt: main-thread edits show a normal patch preview history item before the approval modal, while delegated edits did not show that preview in the transcript flow. ## What Changed - Recover buffered or historical `FileChange` item changes when building inactive-thread file-change approval requests. - Reuse the app-server file-change conversion helper for both live transcript rendering and inactive-thread approvals. - Render recovered delegated patches as a normal patch preview history cell before the approval modal. - Keep apply-patch approval modals focused on the decision prompt and optional metadata; they do not render a synthetic command line or embed the diff body. ## Manual Repro And Verification I manually reproduced the issue using a file under `~/Desktop` so the write would require approval. Before the fix: 1. Ask the main thread: `Use apply_patch, not shell redirection or Python, to create ~/Desktop/bug1.txt with three short lines.` 2. Observe the expected TUI shape: the transcript shows a normal patch preview such as `• Added ~/Desktop/bug1.txt (+N -0)` above the approval modal, and the modal contains only the approval prompt/options without a synthetic command line. 3. Ask for the delegated path: `Spawn a worker. Have it use apply_patch, not shell redirection or Python, to create ~/Desktop/bug1.txt with four short lines.` 4. Observe the delegated approval is inconsistent: the parent view does not render the proposed patch as the normal transcript preview before the modal, so the diff context is missing from the stream or appears inside the modal instead of in the history flow. After the fix: 1. Repeat the delegated worker prompt with `apply_patch`. 2. Confirm the parent view renders the same normal patch preview history cell (`• Added ~/Desktop/bug1.txt (+N -0)` plus the diff) immediately before the approval modal. 3. Confirm the approval modal remains focused on the decision prompt. For delegated approvals it may show the worker thread label, but it should not show a `$ apply_patch` command line or embed the diff body in the modal.
## Why The plugin, app, and skills handlers had a lot of repeated `send_error`/`return` branches that made the success path hard to scan. This slice keeps behavior the same while moving fallible steps into local response-producing helpers, so the request boundary can send one result. ## What Changed - Converted plugin list/install/uninstall handlers in `codex-rs/app-server/src/codex_message_processor/plugins.rs` to return `Result<*Response, JSONRPCErrorError>` from helper methods and call `send_result` once. - Added local error-mapping helpers for plugin install/uninstall and marketplace failures. - Applied the same mechanical shape to app list, skills list/config, and marketplace add/remove/upgrade handlers in `codex-rs/app-server/src/codex_message_processor.rs`. ## Verification - `cargo check -p codex-app-server` - `cargo test -p codex-app-server --test all v2::app_list -- --test-threads=1` - `cargo test -p codex-app-server --test all v2::plugin_ -- --test-threads=1` - `cargo test -p codex-app-server --test all v2::skills_list -- --test-threads=1`
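A toy example of the handler shape this refactor moves toward, with made-up response/error types standing in for the app-server's JSON-RPC types:

```rust
/// Shape only; the real request/response and error types are the app-server's JSON-RPC types.
#[derive(Debug)]
struct PluginListResponse {
    plugins: Vec<String>,
}

#[derive(Debug)]
struct JsonRpcError {
    code: i64,
    message: String,
}

/// All fallible steps live in one helper that returns a Result...
fn plugin_list_response(installed: &[&str]) -> Result<PluginListResponse, JsonRpcError> {
    if installed.is_empty() {
        return Err(JsonRpcError { code: -32000, message: "no plugins installed".to_string() });
    }
    Ok(PluginListResponse { plugins: installed.iter().map(|s| s.to_string()).collect() })
}

/// ...so the request boundary sends exactly one result, instead of sprinkling
/// `send_error`/`return` branches through the handler body.
fn handle_plugin_list(installed: &[&str]) {
    match plugin_list_response(installed) {
        Ok(response) => println!("send_result: {response:?}"),
        Err(error) => println!("send_error: {error:?}"),
    }
}

fn main() {
    handle_plugin_list(&["babysit-pr"]);
    handle_plugin_list(&[]);
}
```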
## Summary

Auth loading used to expose synchronous construction helpers in several places even though some auth sources now need async work. This PR makes the auth-loading surface async and updates the callers to await it.

This is intentionally only plumbing. It does not change how AgentIdentity tokens are decoded, how task runtime ids are allocated, or how JWT signatures are verified.

## Stack

1. **This PR:** [refactor: make auth loading async](openai#19762)
2. [refactor: load AgentIdentity runtime eagerly](openai#19763)
3. [feat: verify AgentIdentity JWTs with JWKS](openai#19764)

## Important call sites

| Area | Change |
| --- | --- |
| `codex-login` auth loading | `CodexAuth` and `AuthManager` construction paths now await auth loading. |
| app-server startup | Auth manager construction is awaited during initialization. |
| CLI/TUI/exec/MCP/chatgpt callers | Existing auth-loading calls now await the same behavior. |
| cloud requirements storage loader | The loader becomes async so it can share the same auth construction path. |
| auth tests | Tests that load auth now run in async contexts. |

## Testing

Tests: targeted Rust auth test compilation, formatter, scoped Clippy fix, and Bazel lock check.
…9854) ## Why The `build-test` workflow stages a representative `codex` npm tarball by asking `scripts/stage_npm_packages.py` to look up a past `rust-release` run for a hardcoded release version. That started failing in CI because the representative version in `.github/workflows/ci.yml` was stale: - the workflow was still using `0.115.0` - `stage_npm_packages.py` resolves native artifacts by looking for a `rust-release` run on the `rust-v<version>` branch - that lookup no longer found a matching run for `rust-v0.115.0`, so the smoke test failed before it could stage the package This PR makes that smoke test depend on a known-good recent release run instead of an older branch lookup that is no longer reliable. ## What Changed - Updated the representative release version in `.github/workflows/ci.yml` from `0.115.0` to `0.125.0`. - Added an explicit `WORKFLOW_URL` pointing at a recent successful `rust-release` run: `https://github.com/openai/codex/actions/runs/24901475298`. - Passed that URL to `scripts/stage_npm_packages.py` via `--workflow-url` so the job can reuse the expected native artifacts directly instead of relying on `gh run list --branch rust-v<version>` to discover them. That keeps the npm staging smoke test representative while making it less sensitive to older release branch history disappearing from the GitHub Actions lookup path. ## Verification - Inspected the failing CI log from `build-test` and confirmed the failure came from `scripts/stage_npm_packages.py` being unable to resolve `rust-v0.115.0`. - Confirmed that `https://github.com/openai/codex/actions/runs/24901475298` is a successful `rust-release` run for `rust-v0.125.0`.
## Why All Bazel CI jobs are currently blocked in the `setup-bazelisk` step while trying to download Bazelisk. [`bazelbuild/setup-bazelisk`](https://github.com/bazelbuild/setup-bazelisk) is archived, and its README now recommends migrating to [`bazel-contrib/setup-bazel`](https://github.com/bazel-contrib/setup-bazel), so leaving our workflows on the archived action leaves CI exposed to exactly this sort of outage. Because `v8-canary` now consumes the shared local `setup-bazel-ci` action, that workflow also needs to trigger when the action changes. Without that follow-up, Bazel bootstrap regressions specific to the V8 canary path could be skipped by the workflow path filters. ## What Changed - Switched `.github/actions/setup-bazel-ci/action.yml` from `bazelbuild/setup-bazelisk` to `bazel-contrib/setup-bazel`, pinned to `0.19.0`. - Left `bazelisk-version` unset so GitHub-hosted runners can use their preinstalled Bazelisk instead of downloading `1.x` at job start. - Updated `.github/workflows/rusty-v8-release.yml` and `.github/workflows/v8-canary.yml` to use the shared `setup-bazel-ci` action instead of referencing `setup-bazelisk` directly. - Added `.github/actions/setup-bazel-ci/**` to the `pull_request` and `push` path filters in `.github/workflows/v8-canary.yml` so changes to the shared Bazel setup action still run the canary workflow. - Kept the existing repository-cache and Windows-specific Bazel setup logic intact. This keeps Bazel version selection anchored by `.bazelversion` while removing the failing dependency on the archived setup action. ## Verification - Searched `.github/` to confirm there are no remaining `setup-bazelisk` references. - Parsed the updated workflow and action YAML locally with Ruby's `YAML.load_file`.
## Why Account login/logout and command exec handlers were doing local error sends in the middle of each handler. That made these request flows branch heavily even though most of the logic is validate, perform the operation, and return the response. ## What Changed - Converted ChatGPT/API-key login, login cancel, logout, rate-limit, and add-credit handlers in `codex-rs/app-server/src/codex_message_processor.rs` to compute `Result` values and send them once at the request boundary. - Applied the same shape to command exec start/write/resize/terminate handlers. - Kept side-effect notifications in the same places after successful request handling. ## Verification - `cargo check -p codex-app-server` - `cargo test -p codex-app-server --test all v2::account -- --test-threads=1` - `cargo test -p codex-app-server --test all v2::command_exec -- --test-threads=1`
Large rollouts are a problem. This updates the TUI to behave the same as the Codex App, which is also turning this off.
## Why When an MCP or app tool is configured with approval mode `approve` (always allow), users expect that decision to be authoritative. In guardian auto-review mode, ARC could still return `ask-user`, which then routed the approval question into guardian with the ARC reason as context. That meant a tool explicitly configured as always allowed still went through both safety monitors before running. This change keeps the existing ARC behavior for non-auto-review sessions, but avoids the ARC-to-guardian sequence when `approvals_reviewer = auto_review` and the tool approval mode is `approve`. ## What changed - Short-circuit MCP tool approval handling when `approval_mode == approve` and `approvals_reviewer == auto_review`. - Updated the MCP approval regression test so the auto-review case asserts neither ARC nor guardian is called. - Preserved existing tests that verify ARC can still block always-allow MCP tools outside guardian auto-review mode. ## Verification - `cargo test -p codex-core --lib mcp_tool_call`
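The short-circuit condition, sketched with stand-in enums rather than the real codex-core config types:

```rust
/// Stand-ins for the real config enums in codex-core.
#[derive(PartialEq)]
enum ApprovalMode { Approve, Ask, Deny }

#[derive(PartialEq)]
enum ApprovalsReviewer { AutoReview, User }

enum Decision {
    RunWithoutAsking,
    ConsultArcThenMaybeGuardian,
}

fn mcp_tool_decision(mode: &ApprovalMode, reviewer: &ApprovalsReviewer) -> Decision {
    // A tool explicitly configured as always-allowed skips the ARC -> guardian
    // sequence when guardian auto-review is the approvals reviewer.
    if *mode == ApprovalMode::Approve && *reviewer == ApprovalsReviewer::AutoReview {
        return Decision::RunWithoutAsking;
    }
    // Everything else keeps the existing behavior, where ARC may still ask or block.
    Decision::ConsultArcThenMaybeGuardian
}

fn main() {
    match mcp_tool_decision(&ApprovalMode::Approve, &ApprovalsReviewer::AutoReview) {
        Decision::RunWithoutAsking => println!("short-circuit: run the tool"),
        Decision::ConsultArcThenMaybeGuardian => println!("fall through to safety monitors"),
    }
}
```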
## Why Codex now has configurable TUI keymaps, but the composer still behaves like a plain text field. Users who prefer modal editing need a way to keep Vim muscle memory while drafting prompts, and the keymap picker needs to expose Vim-specific actions if those bindings are configurable instead of hardcoded. ## What Changed - Adds composer Vim mode with insert/normal state, common normal-mode movement and editing commands, `d`/`y` operator-pending flows, and mode-aware footer and cursor indicators. - Adds `/vim`, an optional global `toggle_vim_mode` binding, and `tui.vim_mode_default` so Vim mode can be toggled per session or enabled as the default composer state. - Extends runtime and config keymaps with `vim_normal` and `vim_operator` contexts, exposes those contexts in `/keymap`, refreshes the config schema, and validates Vim bindings separately. - Integrates Vim normal mode with existing composer behavior: `/` opens slash command entry, `!` enters shell mode, `j`/`k` navigate history at history boundaries, successful submissions reset back to normal mode, and paste burst handling remains insert-mode only. - Teaches the TUI render path to apply and restore cursor style so Vim insert mode can use a bar cursor without leaving the terminal in that state after exit. ## Validation - `cargo test -p codex-tui keymap -- --nocapture` on the keymap/Vim coverage - `cargo insta pending-snapshots` ## Docs This introduces user-facing `/vim`, `tui.vim_mode_default`, and Vim keymap contexts under `tui.keymap`, so the public CLI configuration and slash-command docs should be updated before the feature ships.
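A toy sketch of the insert/normal/operator-pending state machine; the real composer integrates with the keymap contexts and editing commands listed above, and these names are illustrative only:

```rust
/// Toy composer mode machine; the actual codex-tui keymap contexts are far richer.
#[derive(Debug, PartialEq)]
enum VimMode {
    Insert,
    Normal,
    OperatorPending(char), // e.g. 'd' or 'y' waiting for a motion
}

fn handle_key(mode: VimMode, key: char) -> VimMode {
    match (mode, key) {
        (VimMode::Insert, '\x1b') => VimMode::Normal, // Esc leaves insert mode
        (VimMode::Normal, 'i') => VimMode::Insert,
        (VimMode::Normal, 'd' | 'y') => VimMode::OperatorPending(key),
        (VimMode::OperatorPending(op), motion) => {
            // A real implementation would edit the buffer here.
            println!("apply operator {op} over motion {motion}");
            VimMode::Normal
        }
        (mode, _) => mode, // movement keys etc. keep the current mode
    }
}

fn main() {
    let mode = handle_key(VimMode::Normal, 'd'); // enter operator-pending
    let mode = handle_key(mode, 'w');            // complete the `dw` flow
    assert_eq!(mode, VimMode::Normal);
}
```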
## Summary - emit `codex_plugin_installed` after a remote plugin install succeeds - keep local installs unchanged, but let remote installs override the analytics `plugin_id` with the backend remote plugin id (`plugins~Plugin_...`) - preserve the local/display identity in `plugin_name` and `marketplace_name`, plus capability metadata from the installed bundle - add regression coverage for local install analytics, remote install analytics, and analytics id override serialization ## Testing - `just fmt` - `cargo test -p codex-analytics` - `cargo test -p codex-app-server`
openai#20499) …ntal We have some bugs to work out and it is not quite ready to consume as a public API.
# Why The hooks feature flag should use the concise canonical name `hooks`, while existing configs that still use `codex_hooks` continue to work during the rename. # What - change the canonical `Feature::CodexHooks` key from `codex_hooks` to `hooks` - register `codex_hooks` through the existing legacy-alias path - update the config schema and canonical config fixtures to prefer `hooks` - add regression coverage that both `hooks` and `codex_hooks` resolve to `Feature::CodexHooks` # Verification - `cargo test -p codex-features` - `cargo test -p codex-core config::schema_tests` - `cargo test -p codex-core pre_tool_use_blocks_shell_when_defined_in_config_toml` - `cargo test -p codex-app-server hooks_list_uses_each_cwds_effective_feature_enablement`
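The canonical-plus-alias lookup, sketched with a plain map; the actual `codex-features` registry has its own registration path:

```rust
use std::collections::HashMap;

/// Illustration of canonical-plus-alias key resolution.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Feature {
    CodexHooks,
}

fn feature_registry() -> HashMap<&'static str, Feature> {
    let mut keys = HashMap::new();
    keys.insert("hooks", Feature::CodexHooks); // canonical name
    keys.insert("codex_hooks", Feature::CodexHooks); // legacy alias kept during the rename
    keys
}

fn main() {
    let registry = feature_registry();
    // Existing configs that still say `codex_hooks` resolve to the same feature.
    assert_eq!(registry.get("hooks"), registry.get("codex_hooks"));
}
```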
) ## Why On Windows, Codex runs shell commands through a top-level `powershell.exe -NoProfile -Command ...` wrapper. `execpolicy` was matching that wrapper instead of the inner command, so prefix rules like `["git", "push"]` did not fire for PowerShell-wrapped commands even though the same normalization already happens for `bash -lc` on Unix. This change makes the Windows shell wrapper transparent to rule matching while preserving the existing Windows unmatched-command safelist and dangerous-command heuristics. ## What changed - add `parse_powershell_command_plain_commands()` in `shell-command/src/powershell.rs` to unwrap the top-level PowerShell `-Command` body with `extract_powershell_command()` and parse it with the existing PowerShell AST parser - update `core/src/exec_policy.rs` so `commands_for_exec_policy()` treats top-level PowerShell wrappers like `bash -lc` and evaluates rules against the parsed inner commands - carry a small `ExecPolicyCommandOrigin` through unmatched-command evaluation and expose `is_safe_powershell_words()` / `is_dangerous_powershell_words()` so Windows safelist and dangerous-command checks still work after unwrap - add Windows-focused tests for wrapped PowerShell prompt/allow matches, wrapper parsing, and unmatched safe/dangerous inner commands, and re-enable the end-to-end `execpolicy_blocks_shell_invocation` test on Windows ## Testing - `cargo test -p codex-shell-command`
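A rough sketch of making the wrapper transparent; the real change parses the `-Command` body with the existing PowerShell AST parser, whereas this illustration uses naive whitespace splitting and hypothetical argument handling:

```rust
/// Illustration only: unwrap a top-level PowerShell `-Command` wrapper so rule
/// matching can see the inner command. Real parsing lives in codex-shell-command.
fn unwrap_powershell_command(argv: &[String]) -> Option<Vec<String>> {
    let mut iter = argv.iter();
    let program = iter.next()?;
    if !program.eq_ignore_ascii_case("powershell.exe") && !program.eq_ignore_ascii_case("pwsh") {
        return None;
    }
    // Skip flags such as -NoProfile until we reach -Command, then take its body.
    let mut rest = iter.skip_while(|arg| !arg.eq_ignore_ascii_case("-Command"));
    rest.next()?; // consume "-Command" itself
    let body = rest.next()?;
    // Naive tokenization; the real implementation parses the body with the PowerShell AST parser.
    Some(body.split_whitespace().map(|word| word.to_string()).collect())
}

fn main() {
    let argv: Vec<String> = vec!["powershell.exe", "-NoProfile", "-Command", "git push origin main"]
        .into_iter()
        .map(String::from)
        .collect();
    // Prefix rules like ["git", "push"] can now match the inner command.
    assert_eq!(
        unwrap_powershell_command(&argv),
        Some(vec![
            "git".to_string(),
            "push".to_string(),
            "origin".to_string(),
            "main".to_string(),
        ])
    );
}
```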
## Summary

Fixes a regression introduced in openai#10941 so that heredocs do not permit file redirects to be approved by rules, and adds scenario tests to cover this behavior.

Previously, heredoc command parsing would allow redirects and environment variables:

```bash
# commands_for_exec_policy() would parse this via parse_shell_lc_single_command_prefix
PATH=/tmp/bad:$PATH cat <<'EOF' > /tmp/bad/hello.txt
hello
EOF
```

This conflicts with the Codex Rules documentation; heredoc parsing logic should abide by the same strictness of parsing.

## Tests

- [x] Updated unit tests accordingly
- [x] Added scenario tests for these cases

---------

Co-authored-by: Codex <noreply@openai.com>
…#20341) ## Why Remote-control protocol v3 makes segmentation an explicit wire-level feature. The app-server transport needs to support that protocol directly so large messages can be chunked, acknowledged, replayed, and reassembled consistently. ## What changed - Bump the remote-control websocket protocol version from `2` to `3`. - Add explicit client/server chunk envelope variants plus chunk-aware acknowledgements. - Split oversized outbound server messages into bounded transport chunks. - Reassemble ordered inbound client chunks with bounded memory usage and stream/client invalidation handling. - Track inbound chunk cursors and outbound ack cursors as `(seq_id, segment_id)` so duplicate chunks and partial replays behave correctly. - Add focused coverage for chunk splitting, reassembly, duplicate suppression, and stream replacement behavior. ## Validation - Added targeted unit coverage for segmented message handling in `remote_control`. - Local validation is currently blocked before compilation because `packageproxy` does not serve the locked `rustls-webpki 0.103.13` dependency required by the workspace.
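A simplified sketch of the outbound chunk-splitting half, using made-up envelope fields; the real protocol v3 envelopes, acks, and reassembly carry more state:

```rust
/// Simplified chunk envelope; the real variants carry more fields.
#[derive(Debug)]
struct ServerChunk {
    seq_id: u64,
    segment_id: u32,
    last: bool,
    payload: Vec<u8>,
}

/// Split one oversized outbound message into bounded transport chunks.
fn split_into_chunks(seq_id: u64, message: &[u8], max_chunk_bytes: usize) -> Vec<ServerChunk> {
    let segments: Vec<&[u8]> = message.chunks(max_chunk_bytes.max(1)).collect();
    let last_index = segments.len().saturating_sub(1);
    segments
        .into_iter()
        .enumerate()
        .map(|(i, payload)| ServerChunk {
            seq_id,
            segment_id: i as u32,
            last: i == last_index,
            payload: payload.to_vec(),
        })
        .collect()
}

fn main() {
    let chunks = split_into_chunks(7, &[0u8; 10], 4);
    // Receivers reassemble in (seq_id, segment_id) order and ack the same cursor,
    // which is what makes duplicate chunks and partial replays detectable.
    assert_eq!(chunks.len(), 3);
    assert!(chunks.last().unwrap().last);
}
```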
## Why Several analytics event families need the same per-thread attribution state: the app-server client/runtime associated with a thread and, for lifecycle-oriented events, the thread metadata captured during initialization. Keeping connection ids and lifecycle metadata in separate maps made each consumer rebuild the same thread context and made subagent attribution harder to resolve consistently. ## What changed - Replaces the separate thread connection and metadata maps with one reducer-owned `threads` map. - Routes guardian, compaction, turn-steer, and turn analytics through shared thread-state lookups while preserving turn-origin attribution for turn events and request-origin attribution for steer events. - Lets newly observed spawned subagent threads inherit their parent thread connection so later thread-scoped analytics can resolve through the same state model. - Adds regression coverage for standalone `SubAgentThreadStarted` publication plus the `SubAgentSource::ThreadSpawn` parent fallback through a thread-scoped consumer that depends on inherited connection state. ## Verification - `cargo test -p codex-analytics` --- [//]: # (BEGIN SAPLING FOOTER) Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/openai/codex/pull/20300). * openai#18748 * openai#18747 * openai#17090 * openai#17089 * openai#20239 * openai#20515 * openai#20514 * __->__ openai#20300
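A toy version of the inheritance rule for spawned subagent threads, with hypothetical types standing in for the reducer state:

```rust
use std::collections::HashMap;

/// Toy version of the reducer-owned per-thread state; the real analytics
/// reducer tracks lifecycle metadata as well.
#[derive(Clone, Debug, PartialEq)]
struct ThreadState {
    connection_id: String,
}

fn observe_spawned_subagent(
    threads: &mut HashMap<String, ThreadState>,
    parent_thread: &str,
    child_thread: &str,
) {
    // A newly observed subagent thread inherits its parent's connection so later
    // thread-scoped analytics can resolve attribution through the same map.
    if let Some(parent_state) = threads.get(parent_thread).cloned() {
        threads.entry(child_thread.to_string()).or_insert(parent_state);
    }
}

fn main() {
    let mut threads = HashMap::new();
    threads.insert("root".to_string(), ThreadState { connection_id: "conn-1".to_string() });
    observe_spawned_subagent(&mut threads, "root", "subagent-1");
    assert_eq!(threads["subagent-1"].connection_id, "conn-1");
}
```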
…0484) ## Summary - Surface failed GitHub Actions jobs in the PR babysitter watcher so Codex can fetch job logs as soon as a job fails, instead of waiting for the overall workflow run to complete. - Update babysit-pr skill instructions, GitHub API notes, and heuristics to prefer direct job log archives before falling back to `gh run view --log-failed`. - Add guardrails requiring explicit user confirmation before posting replies to human-authored review comments. - Add guardrails preventing Codex from patching unrelated flaky tests, CI infrastructure, runner issues, dependency outages, or other failures not caused by the PR branch. ## Validation - `python3 -m pytest .codex/skills/babysit-pr/scripts/test_gh_pr_watch.py`
## Summary Remote plugin-service returns plugin availability separately from a user's installed/enabled state. This adds `PluginAvailabilityStatus` to the app-server protocol, propagates remote catalog `status` into `PluginSummary`, and rejects install attempts for remote plugins marked `DISABLED_BY_ADMIN` before downloading or caching the bundle. This is the `openai/codex` half of the change. The companion `openai/openai` webview PR is openai/openai#873269. ## Validation - `cargo run -p codex-app-server-protocol --bin write_schema_fixtures` - `cargo test -p codex-app-server --test all plugin_list_marks_remote_plugin_disabled_by_admin` - `cargo test -p codex-app-server --test all plugin_list_includes_remote_marketplaces_when_remote_plugin_enabled` - `cargo test -p codex-app-server --test all plugin_install_rejects_remote_plugin_disabled_by_admin_before_download` - `cargo test -p codex-app-server-protocol schema_fixtures`
## Why Several legacy `EventMsg` variants were still emitted or mapped even though clients either ignored them or had moved to item/lifecycle events. `Op::Undo` had also degraded to an unavailable shim, so this removes that dead task path instead of preserving a command that cannot do useful work. `McpStartupComplete`, `WebSearchBegin`, and `ImageGenerationBegin` are intentionally kept because useful consumers still depend on them: MCP startup completion drives readiness behavior, and the begin events let app-server/core consumers surface in-progress web-search and image-generation items before the final payload arrives. ## What Changed - Removed weak legacy event variants and payloads from `codex-protocol`, including legacy agent deltas, background events, and undo lifecycle events. - Kept/restored `EventMsg::McpStartupComplete`, `EventMsg::WebSearchBegin`, and `EventMsg::ImageGenerationBegin` with serializer and emission coverage. - Updated core, rollout, MCP server, app-server thread history, review/delegate filtering, and tests to rely on the useful replacement events that remain. - Removed `Op::Undo`, `UndoTask`, the undo test module, and stale TUI slash-command comments. - Stopped agent job/background progress and compaction retry notices from emitting `BackgroundEvent` payloads. ## Verification - `cargo check -p codex-protocol -p codex-app-server-protocol -p codex-core -p codex-rollout -p codex-rollout-trace -p codex-mcp-server` - `cargo test -p codex-protocol -p codex-app-server-protocol -p codex-rollout -p codex-rollout-trace -p codex-mcp-server` - `cargo test -p codex-core --test all suite::items` - `just fix -p codex-protocol -p codex-app-server-protocol -p codex-core -p codex-rollout -p codex-rollout-trace -p codex-mcp-server` - Earlier coverage on this PR also included `codex-mcp`, `codex-tui`, core library tests, MCP/plugin/delegate/review/agent job tests, and MCP startup TUI tests.
- Build one app-server process ThreadStore from startup config and share it with ThreadManager and CodexMessageProcessor. - Remove per-thread/fork store reconstruction so effective thread config cannot switch the persistence backend. - Add params to ThreadStore create/resume for specifying thread metadata, since otherwise the metadata from store creation would be used (incorrectly).
## Why Goal mode shows elapsed time in compact hour/minute form. That is easy to scan for shorter runs, but once a goal runs past 24 hours, large hour counts become harder to read at a glance. ## What changed Updated `codex-rs/tui/src/goal_display.rs` so unbudgeted goal elapsed time keeps the existing compact format below one day, then switches to a day-aware format once the elapsed time reaches 24 hours: - `23h 59m` - `1d 0h 0m` - `2d 23h 42m` The formatter now covers the 24-hour boundary in unit tests, and the TUI status-line snapshot for a completed elapsed goal now exercises the multi-day display. ## Verification - `cargo test -p codex-tui` Here's my longest-running test task: <img width="186" height="23" alt="image" src="https://github.com/user-attachments/assets/cedfcdab-7f6e-44e6-8495-8a39f63973fb" />
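A small formatter that reproduces the examples above; this is an illustration written against those strings, not necessarily the exact code in `goal_display.rs`:

```rust
use std::time::Duration;

/// Compact below one day, day-aware at or beyond 24 hours.
fn format_elapsed(elapsed: Duration) -> String {
    let total_minutes = elapsed.as_secs() / 60;
    let minutes = total_minutes % 60;
    let total_hours = total_minutes / 60;
    if total_hours < 24 {
        format!("{total_hours}h {minutes}m")
    } else {
        let days = total_hours / 24;
        let hours = total_hours % 24;
        format!("{days}d {hours}h {minutes}m")
    }
}

fn main() {
    assert_eq!(format_elapsed(Duration::from_secs((23 * 60 + 59) * 60)), "23h 59m");
    assert_eq!(format_elapsed(Duration::from_secs(24 * 60 * 60)), "1d 0h 0m");
    assert_eq!(format_elapsed(Duration::from_secs(((2 * 24 + 23) * 60 + 42) * 60)), "2d 23h 42m");
}
```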
## Why Users have shared that the TUI can feel too visually flat because themes mostly show up in code syntax highlighting. The configurable statusline is a natural place to make the active theme more visible, while still letting users keep the existing monotone statusline if they prefer it. ## What Changed - Added a statusline styling helper that builds the rendered statusline from `(StatusLineItem, text)` segments, preserving item identity while keeping the plain text output unchanged. - Derived foreground accent colors from the active syntax theme by looking up TextMate scopes through the existing syntax highlighter, with conservative ANSI fallbacks when a scope does not provide a foreground. - Tuned theme-derived colors to keep the accents visible without making the statusline feel overly bright. - Added `[tui].status_line_use_colors`, defaulting to `true`, plus a separated `/statusline` toggle so users can enable or disable theme-derived statusline colors from the setup UI. - Updated the live statusline and `/statusline` preview to use the same styled builder, while keeping terminal-title preview text plain. - Kept statusline separators and active-agent add-ons subdued while removing blanket dimming from the whole passive statusline. ## Verification - `cargo test -p codex-tui status_line` - `cargo test -p codex-tui theme_picker` - `cargo test -p codex-tui foreground_style_for_scopes` - `cargo test -p codex-tui` - `cargo test -p codex-config` - `cargo test -p codex-core status_line_use_colors` - `cargo insta pending-snapshots --manifest-path tui/Cargo.toml` ## Visual <img width="369" height="23" alt="Screenshot 2026-04-30 at 6 16 08 PM" src="https://github.com/user-attachments/assets/11d03efb-8e4f-4450-8f4d-00a9659ef4cd" /> <img width="385" height="23" alt="Screenshot 2026-04-30 at 6 16 02 PM" src="https://github.com/user-attachments/assets/a3d89f36-bdc1-42e8-8e84-61350e3999e2" />
## Summary - Refresh the remote installed-plugin cache after login/logout instead of keying it by account or eagerly clearing it. - Reuse the existing single-flight remote installed refresh loop so newer queued auth refreshes replace older pending requests and the API result eventually overwrites or clears the cache. - Keep derived plugin/skills cache and MCP refresh side effects behind the existing effective-plugin-changed task when the refreshed installed state changes. - Leave `clear_plugin_related_caches` scoped to derived plugin/skills caches so share mutations do not drop remote installed plugins. ## Tests - `cargo fmt --all --manifest-path codex-rs/Cargo.toml` (passes; stable rustfmt warns that `imports_granularity = Item` is nightly-only) - `cargo test -p codex-core-plugins remote_installed_cache` - `cargo test -p codex-app-server skills_list_loads_remote_installed_plugin_skills_from_cache`
## Summary Adds an app-server `plugin/skill/read` method for remote plugin skill markdown. The new method calls the plugin-service skill detail endpoint and returns `skill_md_contents`, so clients can preview skills for remote plugins before the bundle is installed locally. ## Why Uninstalled remote plugin skills do not have local `SKILL.md` files. Without an on-demand remote read, the desktop plugin details UI cannot render the skill details modal for those skills. ## Validation - `just write-app-server-schema` - `just fmt` - `cargo test -p codex-app-server-protocol` - `cargo test -p codex-app-server --test all -- suite::v2::plugin_read::plugin_skill_read_reads_remote_skill_contents_when_remote_plugin_enabled --exact` - `just fix -p codex-app-server-protocol -p codex-core-plugins -p codex-app-server`
When a local plugin is shared, Codex now records the local plugin path by remote plugin id under CODEX_HOME/.tmp. plugin/share/list includes the remote share URL and the matching local plugin path when available, and plugin/share/delete clears the local mapping after deleting the remote share. Also add sharedURL to plugin/share/list.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: d5e00d750d