
feat(memory): add participant context foundation #521

Open
vsumner wants to merge 3 commits into main from codex/participant-context-foundation

Conversation

@vsumner
Collaborator

@vsumner vsumner commented Apr 1, 2026

Why

Working memory had two related gaps.

First, Layer 4 participant context was useful enough to ship, but the initial implementation put too much responsibility into prompt-time rendering. That made the current config-backed version harder to evolve into the full participant-awareness pipeline.

Second, decision extraction was too eager. Routine acknowledgements could be recorded as durable Decision memories, which adds noise and can distort persistence behavior.

Scope

This PR is the foundation slice only. It improves participant-aware prompt context and tightens working-memory decision capture, but it does not add the full humans store or cortex-generated participant summary pipeline yet.

What Changed

  • Added a dedicated participant-context path with per-channel active participant tracking, instead of rebuilding participant state from transcript scans during every render.
  • Introduced a standalone ParticipantContextConfig surface, wired through config loading and runtime hot reload, so future participant-awareness work has a stable integration seam.
  • Threaded participant context through prompt assembly and prompt inspection, and restored the memory bulletin fallback so cold-start turns still see long-term memory when richer synthesis is missing.
  • Tightened reply-text decision extraction so only explicit decision/change language emits Decision events, reducing working-memory noise from routine acknowledgements.
  • Scoped participant activity keys consistently across write and read paths so recent activity stays platform-aware and does not mix users across adapters.
  • Updated the working-memory and participant-awareness design docs to match the current implementation truthfully.
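The tightened extraction described above can be sketched as follows. This is an illustrative stand-in, not the actual implementation: the function name and marker list are assumptions, and the real code covers more phrasing.

```rust
/// Hypothetical sketch: only emit a decision summary when a sentence
/// itself contains explicit decision language, so that routine
/// acknowledgements like "Got it." never become Decision events.
fn extract_decision_summary(reply: &str) -> Option<String> {
    let trimmed = reply.trim();
    if trimmed.is_empty() {
        return None;
    }
    // Illustrative marker list; the real set is broader.
    let explicit_markers = ["we'll ", "decided to ", "let's go with ", "switching to "];
    trimmed
        .split_terminator(['.', '!', '?', '\n'])
        .map(str::trim)
        .filter(|sentence| !sentence.is_empty())
        // Pick the first sentence that carries a marker, not just the
        // first sentence of the reply.
        .find(|sentence| {
            let lower = sentence.to_ascii_lowercase();
            explicit_markers.iter().any(|marker| lower.contains(marker))
        })
        .map(|sentence| format!("{sentence}."))
}

fn main() {
    // A leading acknowledgement is skipped in favor of the decision.
    assert_eq!(
        extract_decision_summary("Got it. We'll switch to X."),
        Some("We'll switch to X.".to_string())
    );
    // No explicit decision language: nothing is recorded.
    assert!(extract_decision_summary("Thanks, sounds good").is_none());
}
```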

Example Usage

[defaults.participant_context]
enabled = true
min_participants = 2
token_budget = 280
max_participants = 3
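For illustration, here is a minimal sketch of a config type these keys could map onto. The field names mirror the TOML keys above; the Default values simply echo the example and are not necessarily the shipped defaults.

```rust
/// Illustrative config surface for [defaults.participant_context].
/// Field names mirror the example TOML; defaults are assumptions.
#[derive(Debug, Clone, PartialEq)]
struct ParticipantContextConfig {
    enabled: bool,
    min_participants: usize,
    token_budget: usize,
    max_participants: usize,
}

impl Default for ParticipantContextConfig {
    fn default() -> Self {
        Self {
            enabled: true,
            min_participants: 2,
            token_budget: 280,
            max_participants: 3,
        }
    }
}

fn main() {
    // In the loader, partial TOML overrides would merge over these
    // defaults rather than zeroing unset fields.
    let config = ParticipantContextConfig::default();
    assert!(config.enabled);
    assert_eq!(config.token_budget, 280);
}
```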

Testing

  • just preflight
  • just gate-pr
  • cargo test extracts_decision_summary_from_reply_text --lib
  • cargo test renderable_participants_respects_minimum_and_ordering --lib
  • cargo test upgrades_fallback_participant_entry_without_duplicates --lib
  • cargo test test_participant_context_defaults_and_overrides_resolution --lib
  • cargo test participant_memory_key_uses_platform_scope_for_unknown_sender --lib
  • cargo test test_render_participant_context_uses_humans_and_recent_activity --lib

Notes

  • Participant context currently renders from tracked active participants, config-backed HumanDef metadata, and recent activity. The DB-backed humans store and summary generation loop are intentionally deferred.
  • This PR touches async/stateful channel paths. The participant map is maintained in live channel state, upgraded from fallback keys to canonical human IDs when possible, and cloned before prompt rendering so the render path does not hold state locks across awaits.
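The lock-handling note above can be illustrated with a synchronous stand-in. The real code uses an async-aware lock and richer value types; the names here are illustrative.

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

/// Sketch of the clone-before-render pattern: snapshot the participant
/// map and release the guard before any long-running (in the real
/// code, async) rendering work touches it.
fn snapshot_participants(
    map: &Arc<RwLock<HashMap<String, String>>>,
) -> HashMap<String, String> {
    // The read guard is dropped at the end of this expression, so the
    // render path never holds the lock across an await point.
    map.read().expect("lock poisoned").clone()
}

fn main() {
    let participants = Arc::new(RwLock::new(HashMap::from([(
        "discord:42".to_string(),
        "Ada".to_string(),
    )])));
    let snapshot = snapshot_participants(&participants);
    assert_eq!(snapshot.get("discord:42").map(String::as_str), Some("Ada"));
}
```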

Docs

Note

This commit establishes the foundation for participant-aware prompt context in Spacebot's working memory system. It decouples participant state tracking from prompt-time rendering by introducing dedicated configuration-backed participant context paths, enabling more flexible evolution toward full multi-user awareness. Decision extraction logic is tightened to reduce noise from routine acknowledgements. The changes maintain platform-aware activity scoping across write and read paths, with comprehensive test coverage for participant tracking, decision capture, and context rendering. The participant store and cortex-generated synthesis pipelines remain intentionally deferred for future work.
Written by Tembo for commit ac7ae1d. This will update automatically on new commits.

@coderabbitai
Contributor

coderabbitai bot commented Apr 1, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds in-memory per-channel active participant tracking and a participant-context configuration (TOML + runtime), renders participant context at prompt time, expands the prompt-engine inputs (knowledge_synthesis + participant_context), extracts decision summaries from agent replies to emit Decision events, and changes the channel history return type along with its tests.

Changes

Cohort / File(s) Summary
Configuration & Runtime
src/config/types.rs, src/config/toml_schema.rs, src/config/load.rs, src/config/runtime.rs, src/config.rs
Adds ParticipantContextConfig type and TOML schema, resolves defaults in loader, exposes hot-reloadable participant_context in RuntimeConfig, and adds a unit test for defaults/overrides.
Participant Tracking Module
src/conversation/participants.rs, src/conversation.rs
New public participants module: ActiveParticipant struct, canonical identity key (participant_memory_key), track_active_participant, participant_display_name, renderable_participants (ordering/truncation/dedupe), and unit tests; re-exports added.
Channel Logic & History
src/agent/channel.rs, src/agent/channel_history.rs
Adds active_participants map to ChannelState, participant tracking calls, decision-extraction helpers, emits Decision events after replies, refactors memory rendering, and changes apply_history_after_turn to return AppliedHistory with tests/adjusted callsites.
Prompt Engine & Memory Renderers
src/prompts/engine.rs, src/memory/working.rs
render_channel_prompt_with_links gains knowledge_synthesis and participant_context args and template context; adds render_participant_context (with token-budget trimming) and participant_recent_activity helper plus tests.
API & Prompt Inspection
src/api/channels.rs
inspect_prompt now loads knowledge_synthesis and renders participant_context from channel_state.active_participants and config, then forwards both into prompt rendering; minor reordering of adapter extraction and error handling.
Templates & Documentation
prompts/en/channel.md.j2, docs/design-docs/participant-awareness.md, docs/design-docs/working-memory.md
Reorders Jinja blocks so working_memory, channel_activity_map, and participant_context render earlier; updates docs to describe per-channel active participant map and config-first, prompt-time rendering semantics.
Tests & Helpers
tests/context_dump.rs
Test helpers updated to initialize new ChannelState.active_participants (empty Arc<RwLock<HashMap<...>>>) where channel state literals are constructed.
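As a rough sketch of the canonical identity key described in the participant tracking cohort above: prefer a configured human ID, otherwise fall back to a platform-scoped key so adapters never mix users. The signature and example values are assumptions based on the summary, not the actual API.

```rust
/// Hypothetical sketch of participant_memory_key: a canonical human ID
/// wins; otherwise the key is scoped as "platform:sender_id".
fn participant_memory_key(
    canonical_human_id: Option<&str>,
    platform: &str,
    sender_id: &str,
) -> String {
    match canonical_human_id {
        Some(id) => id.to_string(),
        // Platform-scoped fallback keeps the same sender_id on two
        // adapters from colliding into one participant.
        None => format!("{platform}:{sender_id}"),
    }
}

fn main() {
    assert_eq!(participant_memory_key(Some("vsumner"), "discord", "42"), "vsumner");
    assert_eq!(participant_memory_key(None, "discord", "42"), "discord:42");
}
```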

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs

Suggested reviewers

  • jamiepine
🚥 Pre-merge checks: ✅ 3 passed

  • Title check: ✅ Passed. The title 'feat(memory): add participant context foundation' clearly and directly summarizes the main change: introducing a participant context foundation for working memory with per-channel active participant tracking and tightened decision extraction.
  • Description check: ✅ Passed. The description thoroughly relates to the changeset, explaining the motivation (working-memory gaps), scope (foundation slice only), detailed changes (participant tracking, config surface, decision extraction), example configuration, comprehensive testing, and updated documentation, all matching the actual code modifications.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@vsumner vsumner marked this pull request as ready for review April 1, 2026 15:49
@vsumner vsumner force-pushed the codex/participant-context-foundation branch from 682f7cf to ac7ae1d Compare April 1, 2026 15:49
return None;
}

let mut summary = trimmed
Contributor
Small edge case: has_explicit_marker is computed across the full reply, but the summary always takes the first sentence. For replies like "Got it. We'll switch to X.", this will emit a decision event with "Got it."

Consider selecting the first sentence that contains an explicit marker/change-comparison, falling back to the first sentence only if none match.

Suggested change:

    let mut summary = trimmed
        .split_terminator(['.', '!', '?', '\n'])
        .map(str::trim)
        .filter(|sentence| !sentence.is_empty())
        .find(|sentence| {
            let sentence_lower = sentence.to_ascii_lowercase();
            explicit_markers.iter().any(|marker| sentence_lower.contains(marker))
                || (sentence_lower.contains(" instead of ")
                    && [
                        "use ",
                        "switch",
                        "adopt ",
                        "choose ",
                        "pick ",
                        "go with ",
                        "proceed with ",
                    ]
                    .iter()
                    .any(|marker| sentence_lower.contains(marker)))
        })
        .unwrap_or(trimmed)
        .to_string();

}

let human = find_human_for_sender(humans, platform, &message.sender_id);
let display_name = human
Contributor

display_name can come from user-controlled metadata (sender_display_name / formatted_author) and gets rendered into the system prompt as markdown. Normalizing it to a single line (and probably capping length) would avoid newline/formatting surprises.

Suggested change:

    let display_name = human
        .and_then(|entry| entry.display_name.as_deref())
        .map(str::trim)
        .filter(|name| !name.is_empty())
        .map(ToOwned::to_owned)
        .unwrap_or_else(|| participant_display_name(message));
    let display_name = display_name.split_whitespace().collect::<Vec<_>>().join(" ");

&participant_config,
)
.await
.unwrap_or_default();
Contributor

For inspect_prompt, swallowing errors with unwrap_or_default() makes missing sections hard to debug. Logging a warn (and still returning empty) would help.

Suggested change:

    .await
    .unwrap_or_else(|error| {
        tracing::warn!(%error, channel_id = %query.channel_id, "failed to render participant context for prompt inspection");
        String::new()
    });

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/agent/channel.rs (1)

1590-1602: ⚠️ Potential issue | 🟠 Major

Coalesced replies never reach decision memory.

Lines 1590-1591 discard reply_text, and this path never emits a WorkingMemoryEventType::Decision. Any explicit decision made in a batched/coalesced turn is therefore lost, even though the single-message path records it. At minimum, thread the extracted reply through here and record a channel-scoped decision when sender attribution is ambiguous.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/agent/channel.rs` around lines 1590 - 1602, The batched/coalesced path
currently ignores the extracted reply_text returned from run_agent_turn, so
decisions are never emitted to decision memory; modify the call sequence around
run_agent_turn and handle_agent_result to capture and pass the reply_text (or
equivalent returned value) into handle_agent_result (or a small wrapper) and
ensure that when reply attribution is ambiguous you explicitly emit a
WorkingMemoryEventType::Decision scoped to the channel; update the tuple
unpacking from run_agent_turn to include reply_text, thread that variable into
handle_agent_result or a new record_decision helper, and call the memory/event
emitter to record a channel-scoped Decision when necessary.
🧹 Nitpick comments (2)
src/config.rs (1)

1141-1159: Test name promises fallback behavior, but only full override is exercised.

This currently proves explicit override parsing, not default fallback merge. Consider adding a second case with only one or two fields set (e.g., only enabled) and assert remaining fields come from defaults.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/config.rs` around lines 1141 - 1159, The test
test_participant_context_defaults_and_overrides_resolution currently only
verifies a full override; add a second sub-case (or a new test) that constructs
a TomlConfig/TOML string with a partial override (e.g., set only
defaults.participant_context.enabled = true or only min_participants) then call
Config::from_toml and assert that the unset fields on
config.defaults.participant_context fall back to the library defaults
(token_budget, min_participants, max_participants) rather than being left unset
or zero; reference the existing test function name and use TomlConfig and
Config::from_toml to locate where to add the new assertions.
src/memory/working.rs (1)

771-813: Budget complete participant blocks before appending.

Lines 793-810 trim after the whole section has already been written, so one long profile or Recent: line can leave a half-rendered participant block or even just the header. Build each participant entry into a temporary string and only append it when the full block fits the remaining budget.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/memory/working.rs` around lines 771 - 813, The loop currently writes each
participant directly into output so a single long profile or "Recent:" line can
leave a partial participant block when trimming later; change the logic to build
each participant's entire block into a temporary String (e.g., per_participant
or entry_buf) — include display_name, role, profile_summary, the await call to
participant_recent_activity(...), and its "Recent:" line — then call
estimate_tokens on that temporary block plus the current output tokens and only
append it to output if it fits within config.token_budget; keep the final
fallback trimming logic but it should rarely trigger once you append only full
participant blocks.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/design-docs/working-memory.md`:
- Around line 353-356: Update the participant-context sections to consistently
describe the current config-backed implementation: replace or remove references
to the older cached-summary pipeline (mentions of a `humans` table,
`HumanStore`, and “cortex-generated” participant summaries) and align all text
with the active implementation that uses a per-session active participant map
keyed by canonical human ID or `platform:sender_id`, matches humans to
configured `HumanDef` entries, and renders `display_name`, `role`, and
`description`/`bio` inline; ensure lines around and after the existing paragraph
(including the text currently referencing the older pipeline) are rewritten to
reflect this single architecture and note that the design can later switch to
user-scoped memories or the participant-awareness.md approach without changing
prompt shape.

In `@src/agent/channel.rs`:
- Around line 1952-1961: The code sets user_id by calling
participant_memory_key(...) for all messages, which incorrectly attributes
synthetic system retrigger turns to a fake participant; update the logic around
the user_id assignment (the block that uses self.deps.humans.load() and
participant_memory_key) to skip creating a participant key when the current turn
is a synthetic retrigger—i.e., only compute Some(participant_memory_key(...)) if
!is_retrigger (or message.source != "system") and the existing empty-string
checks pass, otherwise set user_id to None so retrigger/system events remain
channel-scoped.

In `@src/api/channels.rs`:
- Around line 608-615: The current call to render_participant_context uses
.await.unwrap_or_default(), which hides rendering errors; change it to capture
the Result from render_participant_context (do not call unwrap_or_default), then
on Err log the error with context (include query.channel_id,
tracked_participants or participant_config) using the crate's logging/tracing
facility, and only then fall back to a default participant_context if you must;
alternatively propagate the error out of the handler instead of swallowing it.
Refer to render_participant_context, participant_context,
channel_state.deps.working_memory, tracked_participants, query.channel_id, and
participant_config to locate and fix the code.

In `@src/conversation/participants.rs`:
- Around line 113-130: The participant_display_name function currently prefers
metadata["sender_display_name"] over InboundMessage.formatted_author contrary to
the contract; update participant_display_name to first check
message.formatted_author (trim, non-empty, owned) and only if absent/empty fall
back to message.metadata.get("sender_display_name") (trim, non-empty, owned),
finally defaulting to message.sender_id.clone() so formatted_author is primary
and metadata is fallback.

---

Outside diff comments:
In `@src/agent/channel.rs`:
- Around line 1590-1602: The batched/coalesced path currently ignores the
extracted reply_text returned from run_agent_turn, so decisions are never
emitted to decision memory; modify the call sequence around run_agent_turn and
handle_agent_result to capture and pass the reply_text (or equivalent returned
value) into handle_agent_result (or a small wrapper) and ensure that when reply
attribution is ambiguous you explicitly emit a WorkingMemoryEventType::Decision
scoped to the channel; update the tuple unpacking from run_agent_turn to include
reply_text, thread that variable into handle_agent_result or a new
record_decision helper, and call the memory/event emitter to record a
channel-scoped Decision when necessary.

---

Nitpick comments:
In `@src/config.rs`:
- Around line 1141-1159: The test
test_participant_context_defaults_and_overrides_resolution currently only
verifies a full override; add a second sub-case (or a new test) that constructs
a TomlConfig/TOML string with a partial override (e.g., set only
defaults.participant_context.enabled = true or only min_participants) then call
Config::from_toml and assert that the unset fields on
config.defaults.participant_context fall back to the library defaults
(token_budget, min_participants, max_participants) rather than being left unset
or zero; reference the existing test function name and use TomlConfig and
Config::from_toml to locate where to add the new assertions.

In `@src/memory/working.rs`:
- Around line 771-813: The loop currently writes each participant directly into
output so a single long profile or "Recent:" line can leave a partial
participant block when trimming later; change the logic to build each
participant's entire block into a temporary String (e.g., per_participant or
entry_buf) — include display_name, role, profile_summary, the await call to
participant_recent_activity(...), and its "Recent:" line — then call
estimate_tokens on that temporary block plus the current output tokens and only
append it to output if it fits within config.token_budget; keep the final
fallback trimming logic but it should rarely trigger once you append only full
participant blocks.
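The whole-block budgeting suggested for src/memory/working.rs above can be sketched like this. estimate_tokens is a crude stand-in (roughly four characters per token) and the entry format is invented; the point is only that an entry is appended whole or not at all.

```rust
/// Crude token estimate used only for this sketch (~4 chars/token).
fn estimate_tokens(text: &str) -> usize {
    (text.len() + 3) / 4
}

/// Append each pre-rendered participant block only if the complete
/// block fits the remaining budget, so no entry is ever truncated
/// mid-line.
fn render_within_budget(entries: &[String], token_budget: usize) -> String {
    let mut output = String::new();
    for entry in entries {
        let candidate_tokens = estimate_tokens(&output) + estimate_tokens(entry);
        if candidate_tokens > token_budget {
            break; // skip whole blocks instead of trimming mid-entry
        }
        output.push_str(entry);
    }
    output
}

fn main() {
    let entries = vec![
        "**Ada** (maintainer)\n".to_string(),
        "**Bob** (guest)\n".to_string(),
    ];
    // A budget of 6 tokens fits the first block but not the second.
    let rendered = render_within_budget(&entries, 6);
    assert!(rendered.contains("Ada"));
    assert!(!rendered.contains("Bob"));
}
```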
ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: f45a2b26-6b44-4e7a-a07a-67ab3a4f016d

📥 Commits

Reviewing files that changed from the base of the PR and between 89fac4c and ac7ae1d.

📒 Files selected for processing (16)
  • docs/design-docs/participant-awareness.md
  • docs/design-docs/working-memory.md
  • prompts/en/channel.md.j2
  • src/agent/channel.rs
  • src/agent/channel_history.rs
  • src/api/channels.rs
  • src/config.rs
  • src/config/load.rs
  • src/config/runtime.rs
  • src/config/toml_schema.rs
  • src/config/types.rs
  • src/conversation.rs
  • src/conversation/participants.rs
  • src/memory/working.rs
  • src/prompts/engine.rs
  • tests/context_dump.rs

jamiepine
jamiepine previously approved these changes Apr 2, 2026
@jamiepine jamiepine force-pushed the codex/participant-context-foundation branch from ac7ae1d to f333167 Compare April 2, 2026 07:11
@jamiepine jamiepine force-pushed the codex/participant-context-foundation branch from f333167 to 130a76e Compare April 2, 2026 07:20
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/agent/channel.rs (1)

1590-1602: ⚠️ Potential issue | 🟠 Major

Coalesced turns currently skip decision recording.

run_agent_turn() now returns reply_text specifically for decision extraction, but this branch throws it away and only dispatches the visible reply. Explicit decisions made during coalesced turns never reach working memory, so recent participant/context rendering diverges from the single-message path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/agent/channel.rs` around lines 1590 - 1602, run_agent_turn is returning a
reply_text used for decision extraction but the coalesced-turn branch currently
drops it by destructuring as (_, _, _, _, _); update the destructuring to
capture that reply_text (e.g., let (result, skip_flag, replied_flag, reply_text,
_) = self.run_agent_turn(...).await?;) and forward that captured reply_text to
whatever path records decisions (either by passing it into handle_agent_result
or invoking the decision-recording function used on the single-message path) so
explicit decisions from coalesced turns are written to working memory; reference
run_agent_turn, handle_agent_result, combined_text, system_prompt,
conversation_id, and attachment_parts to locate the change.
♻️ Duplicate comments (3)
src/api/channels.rs (1)

608-615: ⚠️ Potential issue | 🟡 Minor

Don't hide participant-context render failures in prompt inspection.

Line 615 still turns renderer errors into an empty section, so this debugging endpoint can't distinguish “no participant context” from “participant-context rendering broke.” Please log the error or surface it instead of defaulting silently.

As per coding guidelines, "Don't silently discard errors. No let _ = on Results. Handle them, log them, or propagate them. The only exception is .ok() on channel sends where the receiver may be dropped."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/api/channels.rs` around lines 608 - 615, The call to
render_participant_context in channels.rs is swallowing renderer errors via
.await.unwrap_or_default(), making prompt inspection unable to tell if there was
no context or a render failure; replace the unwrap_or_default usage around
crate::memory::working::render_participant_context so you explicitly handle the
Result (e.g., match or ?), and on Err log the error with context (include
query.channel_id and participant_config/tracked_participants) or propagate the
error into the inspection response instead of returning an empty
participant_context; update any downstream usage of participant_context
accordingly so failures are visible in the debugging endpoint.
src/agent/channel.rs (2)

1952-1961: ⚠️ Potential issue | 🟠 Major

Do not attribute synthetic retrigger replies to a fake participant.

On retrigger turns message.source and message.sender_id are both "system", so this still records a user-scoped Decision under a synthetic system/system key. Leave these events channel-scoped by guarding participant_memory_key(...) with !is_retrigger (or message.source != "system").

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/agent/channel.rs` around lines 1952 - 1961, The current logic always
creates a participant-scoped key even for synthetic retrigger replies (when
message.source and message.sender_id are "system"); change the branch that sets
user_id so that you only call participant_memory_key(...) when the turn is not a
retrigger—e.g., guard the Some(participant_memory_key(...)) with a condition
like !is_retrigger or message.source != "system" (and still treat empty
source/sender_id as None); update the code paths that reference user_id
accordingly to ensure retrigger events remain channel-scoped.

121-144: ⚠️ Potential issue | 🟠 Major

Apply the decision detector to individual sentences.

The marker scan runs across the full reply, but the summary still comes from the first sentence. That mis-summarizes "Got it. We'll switch to X." as "Got it", and the current "instead of" + generic-verb branch will also accept question/hypothetical sentences like "Should we use X instead of Y?". Split first, then select the first sentence that itself contains explicit, declarative decision language.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/agent/channel.rs` around lines 121 - 144, The marker detection currently
tests the whole reply but always picks the first sentence for summary; change
the flow in channel::... so you first split trimmed into sentences (using the
existing split_terminator call), iterate sentences, and for each sentence
compute lowercase `lower` and test the existing checks (`explicit_markers`
membership and the `instead of` + verb-list test now named `has_explicit_marker`
/ `has_change_comparison`), then select the first sentence that returns true and
set `summary` to that sentence (trimmed and to_string()); if no sentence
matches, return None as before. Use the same symbols `trimmed`,
`explicit_markers`, `has_explicit_marker`, `has_change_comparison`, and
`summary` so changes are localized.
🧹 Nitpick comments (1)
src/memory/working.rs (1)

1579-1582: Pin min_participants in the test setup.

This test only exercises the renderer while ParticipantContextConfig::default().min_participants <= 1. Setting min_participants: 1 here keeps the test tied to the behavior under test instead of a default that can change.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/memory/working.rs` around lines 1579 - 1582, The test's
ParticipantContextConfig should explicitly set min_participants to 1 so the
behavior under test is stable; update the config construction (the variable
named config of type crate::config::ParticipantContextConfig) to include
min_participants: 1 alongside max_participants: 3 (i.e., set min_participants: 1
in the ParticipantContextConfig initializer) rather than relying on
ParticipantContextConfig::default() for that field.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/config/load.rs`:
- Around line 1609-1626: In the participant_context mapping in
src/config/load.rs, add validation after constructing ParticipantContextConfig
(the block creating ParticipantContextConfig inside
toml.defaults.participant_context.map(...)) to reject configurations where
max_participants < min_participants; e.g., after building the struct check if
max_participants < min_participants and return an error (or propagate a
ConfigLoad/Validation error) so the loader fails fast instead of accepting
impossible bounds; ensure the check references
ParticipantContextConfig.min_participants and .max_participants and plugs into
the function's existing Result/validation error path.

In `@src/memory/working.rs`:
- Around line 771-779: ActiveParticipant.display_name (and its fallback
participant_display_name) can contain newlines/control characters from user
input; before writing participant.display_name in the loop in working.rs,
sanitize it by stripping control characters and newlines and truncating to a
safe max length (e.g. 64 chars). Implement this by normalizing the name string
(replace any \r/\n with a space, remove non-printable/control chars) and then
truncating to the cap, either inline before the write! calls or via a small
helper function like sanitize_display_name(name: &str) used where
participant.display_name (and optionally participant.profile_summary) are
written. Ensure the sanitized variable is used in write!(output, "**{}**", ...)
and any subsequent writes so no raw user-controlled newlines or control
characters reach the system context.

---

Outside diff comments:
In `@src/agent/channel.rs`:
- Around line 1590-1602: run_agent_turn is returning a reply_text used for
decision extraction but the coalesced-turn branch currently drops it by
destructuring as (_, _, _, _, _); update the destructuring to capture that
reply_text (e.g., let (result, skip_flag, replied_flag, reply_text, _) =
self.run_agent_turn(...).await?;) and forward that captured reply_text to
whatever path records decisions (either by passing it into handle_agent_result
or invoking the decision-recording function used on the single-message path) so
explicit decisions from coalesced turns are written to working memory; reference
run_agent_turn, handle_agent_result, combined_text, system_prompt,
conversation_id, and attachment_parts to locate the change.

---

Duplicate comments:
In `@src/agent/channel.rs`:
- Around line 1952-1961: The current logic always creates a participant-scoped
key even for synthetic retrigger replies (when message.source and
message.sender_id are "system"); change the branch that sets user_id so that you
only call participant_memory_key(...) when the turn is not a retrigger—e.g.,
guard the Some(participant_memory_key(...)) with a condition like !is_retrigger
or message.source != "system" (and still treat empty source/sender_id as None);
update the code paths that reference user_id accordingly to ensure retrigger
events remain channel-scoped.
- Around line 121-144: The marker detection currently tests the whole reply but
always picks the first sentence for summary; change the flow in channel::... so
you first split trimmed into sentences (using the existing split_terminator
call), iterate sentences, and for each sentence compute lowercase `lower` and
test the existing checks (`explicit_markers` membership and the `instead of` +
verb-list test now named `has_explicit_marker` / `has_change_comparison`), then
select the first sentence that returns true and set `summary` to that sentence
(trimmed and to_string()); if no sentence matches, return None as before. Use
the same symbols `trimmed`, `explicit_markers`, `has_explicit_marker`,
`has_change_comparison`, and `summary` so changes are localized.
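The retrigger guard from the first comment above can be sketched as a small helper. This is an illustrative sketch only: `decision_user_id` is a hypothetical name, and the real code's `participant_memory_key(...)` (which first tries a `HumanDef` lookup) is collapsed here to its platform-scoped fallback.

```rust
// Hypothetical helper mirroring the guarded branch. The real code calls
// participant_memory_key(...), which first tries a HumanDef lookup; this
// sketch keeps only the platform-scoped fallback for illustration.
fn decision_user_id(is_retrigger: bool, source: &str, sender_id: &str) -> Option<String> {
    // Retrigger/system turns stay channel-scoped: no participant attribution.
    if is_retrigger || source == "system" || source.is_empty() || sender_id.is_empty() {
        return None;
    }
    Some(format!("{source}:{sender_id}"))
}
```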

In `@src/api/channels.rs`:
- Around line 608-615: The call to render_participant_context in channels.rs is
swallowing renderer errors via .await.unwrap_or_default(), making prompt
inspection unable to tell if there was no context or a render failure; replace
the unwrap_or_default usage around
crate::memory::working::render_participant_context so you explicitly handle the
Result (e.g., match or ?), and on Err log the error with context (include
query.channel_id and participant_config/tracked_participants) or propagate the
error into the inspection response instead of returning an empty
participant_context; update any downstream usage of participant_context
accordingly so failures are visible in the debugging endpoint.
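The error-handling shape being asked for might look like the following sketch. The stub function, its signature, and the `eprintln!` call (standing in for `tracing::warn!`) are all assumptions for illustration; the real `render_participant_context` is async and lives in `crate::memory::working`.

```rust
// Stand-in for crate::memory::working::render_participant_context; the real
// function is async and returns a renderer Result.
fn render_participant_context_stub(channel_id: &str) -> Result<String, String> {
    if channel_id.is_empty() {
        Err("channel_id missing".to_string())
    } else {
        Ok(format!("## Participants ({channel_id})"))
    }
}

// Inspection path: surface render failures instead of collapsing them into an
// empty string. eprintln! stands in for tracing::warn! in this sketch.
fn participant_context_for_inspection(channel_id: &str) -> Option<String> {
    match render_participant_context_stub(channel_id) {
        Ok(ctx) if !ctx.is_empty() => Some(ctx),
        Ok(_) => None, // genuinely no context to show
        Err(err) => {
            eprintln!("participant context render failed (channel_id={channel_id}): {err}");
            None
        }
    }
}
```

The key distinction is that `None` now means "no context" while failures are logged before falling back, so the debugging endpoint no longer conflates the two.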

---

Nitpick comments:
In `@src/memory/working.rs`:
- Around line 1579-1582: The test's ParticipantContextConfig should explicitly
set min_participants to 1 so the behavior under test is stable; update the
config construction (the variable named config of type
crate::config::ParticipantContextConfig) to include min_participants: 1
alongside max_participants: 3 (i.e., set min_participants: 1 in the
ParticipantContextConfig initializer) rather than relying on
ParticipantContextConfig::default() for that field.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 0f5368df-1c8e-448d-8d04-616541f833ea

📥 Commits

Reviewing files that changed from the base of the PR and between ac7ae1d and f333167.

📒 Files selected for processing (16)
  • docs/design-docs/participant-awareness.md
  • docs/design-docs/working-memory.md
  • prompts/en/channel.md.j2
  • src/agent/channel.rs
  • src/agent/channel_history.rs
  • src/api/channels.rs
  • src/config.rs
  • src/config/load.rs
  • src/config/runtime.rs
  • src/config/toml_schema.rs
  • src/config/types.rs
  • src/conversation.rs
  • src/conversation/participants.rs
  • src/memory/working.rs
  • src/prompts/engine.rs
  • tests/context_dump.rs
✅ Files skipped from review due to trivial changes (2)
  • src/config.rs
  • docs/design-docs/participant-awareness.md
🚧 Files skipped from review as they are similar to previous changes (7)
  • prompts/en/channel.md.j2
  • tests/context_dump.rs
  • src/conversation.rs
  • src/prompts/engine.rs
  • docs/design-docs/working-memory.md
  • src/config/types.rs
  • src/conversation/participants.rs

Comment on lines +1609 to +1626
participant_context: toml
.defaults
.participant_context
.map(|participant_context| ParticipantContextConfig {
enabled: participant_context
.enabled
.unwrap_or(base_defaults.participant_context.enabled),
min_participants: participant_context
.min_participants
.unwrap_or(base_defaults.participant_context.min_participants),
token_budget: participant_context
.token_budget
.unwrap_or(base_defaults.participant_context.token_budget),
max_participants: participant_context
.max_participants
.unwrap_or(base_defaults.participant_context.max_participants),
})
.unwrap_or(base_defaults.participant_context),


⚠️ Potential issue | 🟡 Minor

Validate impossible participant-context bounds during load.

This block accepts combinations like min_participants = 4 with max_participants = 3. Once that reaches runtime, the section can never satisfy both constraints and just looks like participant context silently stopped working. Please reject at least the max_participants < min_participants case here so bad config fails fast.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/config/load.rs` around lines 1609 - 1626, In the participant_context
mapping in src/config/load.rs, add validation after constructing
ParticipantContextConfig (the block creating ParticipantContextConfig inside
toml.defaults.participant_context.map(...)) to reject configurations where
max_participants < min_participants; e.g., after building the struct check if
max_participants < min_participants and return an error (or propagate a
ConfigLoad/Validation error) so the loader fails fast instead of accepting
impossible bounds; ensure the check references
ParticipantContextConfig.min_participants and .max_participants and plugs into
the function's existing Result/validation error path.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/agent/channel.rs (1)

1590-1602: ⚠️ Potential issue | 🟠 Major

Coalesced turns currently bypass decision logging.

This batch path ignores the new reply_text return and never calls extract_decision_summary_from_reply, so any decision made in a coalesced reply is absent from working memory. Reuse the same post-turn extraction helper here that the single-message path uses.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/agent/channel.rs` around lines 1590 - 1602, The batch/coalesced path
calls run_agent_turn and then handle_agent_result but ignores the new reply_text
return and therefore skips decision extraction; modify the coalesced branch that
calls run_agent_turn (the tuple currently bound as (result, skip_flag,
replied_flag, _, _)) to capture the reply_text return, then call
extract_decision_summary_from_reply(reply_text, &conversation_id, ...) exactly
as the single-message path does before/after handle_agent_result so the decision
is recorded into working memory; adjust the tuple binding and ensure the
extracted decision summary is passed/stored in the same way the single-message
flow uses.
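The change the prompt describes, capturing `reply_text` from the 5-tuple and forwarding it to the decision-recording path, can be simulated as below. The stub functions and the simple `"decision:"` check are illustrative assumptions; the real flow uses `run_agent_turn`, `extract_decision_summary_from_reply`, and a working-memory write.

```rust
// Simulated 5-tuple return from run_agent_turn; field order follows the review
// comment (result, skip_flag, replied_flag, retrigger_preserved, reply_text).
fn run_agent_turn_stub() -> (Result<String, String>, bool, bool, bool, Option<String>) {
    (
        Ok("ok".to_string()),
        false,
        true,
        false,
        Some("Decision: switch to X".to_string()),
    )
}

// Stand-in for extract_decision_summary_from_reply plus the working-memory write.
fn record_decision(decisions: &mut Vec<String>, reply_text: Option<&str>) {
    if let Some(text) = reply_text {
        if text.to_ascii_lowercase().contains("decision:") {
            decisions.push(text.to_string());
        }
    }
}

fn handle_coalesced_turn(decisions: &mut Vec<String>) {
    // Capture reply_text instead of discarding it with `_`, then forward it to
    // the same decision-recording path the single-message branch uses.
    let (_result, _skip, _replied, _preserved, reply_text) = run_agent_turn_stub();
    record_decision(decisions, reply_text.as_deref());
}
```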
♻️ Duplicate comments (5)
src/config/load.rs (1)

1614-1631: ⚠️ Potential issue | 🟡 Minor

Validate participant-context bounds during config load.

Lines 1614-1631 still accept impossible configs (for example, min_participants = 4 with max_participants = 3). Please reject this during load so misconfiguration fails fast instead of silently disabling the section at runtime.

Suggested fix
-            participant_context: toml
-                .defaults
-                .participant_context
-                .map(|participant_context| ParticipantContextConfig {
-                    enabled: participant_context
-                        .enabled
-                        .unwrap_or(base_defaults.participant_context.enabled),
-                    min_participants: participant_context
-                        .min_participants
-                        .unwrap_or(base_defaults.participant_context.min_participants),
-                    token_budget: participant_context
-                        .token_budget
-                        .unwrap_or(base_defaults.participant_context.token_budget),
-                    max_participants: participant_context
-                        .max_participants
-                        .unwrap_or(base_defaults.participant_context.max_participants),
-                })
-                .unwrap_or(base_defaults.participant_context),
+            participant_context: {
+                let participant_context = toml
+                    .defaults
+                    .participant_context
+                    .map(|participant_context| ParticipantContextConfig {
+                        enabled: participant_context
+                            .enabled
+                            .unwrap_or(base_defaults.participant_context.enabled),
+                        min_participants: participant_context
+                            .min_participants
+                            .unwrap_or(base_defaults.participant_context.min_participants),
+                        token_budget: participant_context
+                            .token_budget
+                            .unwrap_or(base_defaults.participant_context.token_budget),
+                        max_participants: participant_context
+                            .max_participants
+                            .unwrap_or(base_defaults.participant_context.max_participants),
+                    })
+                    .unwrap_or(base_defaults.participant_context);
+                if participant_context.max_participants < participant_context.min_participants {
+                    return Err(ConfigError::Invalid(
+                        "defaults.participant_context.max_participants must be >= min_participants"
+                            .to_string(),
+                    )
+                    .into());
+                }
+                participant_context
+            },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/config/load.rs` around lines 1614 - 1631, The ParticipantContextConfig
being built from toml.defaults.participant_context can accept invalid bounds
(e.g., min_participants > max_participants); after constructing the
ParticipantContextConfig (or in ParticipantContextConfig::new/validate) check
that min_participants <= max_participants and that token_budget (and other
numeric fields if applicable) are non-negative, and if the check fails return a
config load error (or Err) with a clear message including the offending field
names and values; update the load path that currently uses the
toml.defaults...map(...).unwrap_or(...) to propagate that validation error
instead of silently accepting or disabling the section (refer to
ParticipantContextConfig, min_participants, max_participants, token_budget, and
base_defaults.participant_context).
src/conversation/participants.rs (1)

113-129: ⚠️ Potential issue | 🟡 Minor

Prefer formatted_author before metadata fallback.

The InboundMessage contract treats formatted_author as the primary rendered name. Reversing that order here can downgrade the display name now that channel logging and participant prompts both call this helper.

Suggested fix
 pub fn participant_display_name(message: &InboundMessage) -> String {
     message
-        .metadata
-        .get("sender_display_name")
-        .and_then(|value| value.as_str())
-        .map(str::trim)
-        .filter(|name| !name.is_empty())
-        .map(ToOwned::to_owned)
-        .or_else(|| {
-            message
-                .formatted_author
-                .as_deref()
-                .map(str::trim)
-                .filter(|name| !name.is_empty())
-                .map(ToOwned::to_owned)
-        })
+        .formatted_author
+        .as_deref()
+        .map(str::trim)
+        .filter(|name| !name.is_empty())
+        .map(ToOwned::to_owned)
+        .or_else(|| {
+            message
+                .metadata
+                .get("sender_display_name")
+                .and_then(|value| value.as_str())
+                .map(str::trim)
+                .filter(|name| !name.is_empty())
+                .map(ToOwned::to_owned)
+        })
         .unwrap_or_else(|| message.sender_id.clone())
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/conversation/participants.rs` around lines 113 - 129, The
participant_display_name function currently prefers
metadata["sender_display_name"] over InboundMessage.formatted_author; change the
lookup to prefer formatted_author first and fall back to
metadata["sender_display_name"], keeping the same trimming, empty-string
filtering, and final fallback to message.sender_id; update the chain in
participant_display_name to call message.formatted_author (as_deref -> trim ->
filter -> to_owned) before attempting metadata.get("sender_display_name") so
formatted_author is the primary rendered name.
src/agent/channel.rs (2)

1957-1970: ⚠️ Potential issue | 🟠 Major

Log decisions only after a real user reply is delivered.

Two supported paths are wrong here: plaintext fallback replies never set replied_flag, so their decisions are dropped, and retrigger/system turns still compute a system:system participant key when extraction does fire. Move extraction behind the actual delivery path and guard attribution with !is_retrigger.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/agent/channel.rs` around lines 1957 - 1970, The decision extraction and
attribution run too early — move the call to extract_decision_summary_from_reply
and any use of reply_text so it only happens after an actual reply delivery path
(the same place where plaintext fallback and delivered replies are handled), and
ensure participant attribution (participant_memory_key call using
message.source/message.sender_id and humans) is only performed when
!is_retrigger (i.e., avoid computing system:system keys for retrigger/system
turns); in short, defer extract_decision_summary_from_reply behind the delivery
branch and wrap the participant_memory_key logic with a guard that rejects
retrigger/system turns.

139-144: ⚠️ Potential issue | 🟠 Major

Pick the sentence that actually contains the decision.

The marker scan happens on the full reply, but this still keeps the first sentence unconditionally. Replies like "Sounds good. We'll switch to X." will emit "Sounds good", which puts acknowledgement noise back into working memory.

Suggested fix
-    let mut summary = trimmed
-        .split_terminator(['.', '!', '?', '\n'])
-        .find(|sentence| !sentence.trim().is_empty())
-        .unwrap_or(trimmed)
-        .trim()
-        .to_string();
+    let mut summary = trimmed
+        .split_terminator(['.', '!', '?', '\n'])
+        .map(str::trim)
+        .filter(|sentence| !sentence.is_empty())
+        .find(|sentence| {
+            let sentence_lower = sentence.to_ascii_lowercase();
+            explicit_markers
+                .iter()
+                .any(|marker| sentence_lower.contains(marker))
+                || (sentence_lower.contains(" instead of ")
+                    && [
+                        "use ",
+                        "switch",
+                        "adopt ",
+                        "choose ",
+                        "pick ",
+                        "go with ",
+                        "proceed with ",
+                    ]
+                    .iter()
+                    .any(|marker| sentence_lower.contains(marker)))
+        })
+        .unwrap_or(trimmed)
+        .to_string();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/agent/channel.rs` around lines 139 - 144, The current code
unconditionally picks the first sentence from trimmed via the split_terminator
chain (variable summary), which captures acknowledgements like "Sounds good."
instead of the actual decision; change this to iterate all sentences from
trimmed (using the same split_terminator([...]) iterator) and choose the
sentence that likely contains the decision by preferring sentences that include
decision/intent markers (e.g., contains words like "will", "we'll", "we will",
"going to", "shall", "plan to", "switch to", "use", verbs in future/imperative
forms or contains a clear object), falling back to the last non-empty sentence
if no marker is found; implement this replacement where summary is set so
summary becomes the chosen sentence.trim().to_string() rather than always the
first sentence.
src/memory/working.rs (1)

771-779: ⚠️ Potential issue | 🟠 Major

Sanitize participant display names before rendering this system block.

participant.display_name can still come from user-controlled author metadata. Writing it raw into ## Participants lets newlines/control characters escape the intended markdown structure and inject higher-authority prompt text. Normalize it to a single printable line and cap the rendered length before write!.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/memory/working.rs` around lines 771 - 779, participant.display_name is
user-controlled and is written raw into the markdown block; sanitize it first by
collapsing whitespace/newlines and stripping control/non-printable chars, then
truncate to a safe max length (e.g., 200 chars) before calling write!. Implement
this inside the loop (or add a small helper like sanitize_display_name) and use
the sanitized value instead of participant.display_name when writing the
"**{}**" header (and apply same treatment to participant.profile_summary if
needed).
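One possible shape for the helper the prompt suggests is sketched below. The name `sanitize_display_name` and the 200-character cap come from the review comment; everything else is an assumption, not the actual implementation.

```rust
// Hypothetical sanitizer for participant display names before they enter the
// "## Participants" prompt block: collapse all whitespace (including newlines)
// to single spaces, drop remaining control characters, and cap the length.
fn sanitize_display_name(raw: &str) -> String {
    const MAX_LEN: usize = 200;
    raw.chars()
        .map(|c| if c.is_whitespace() { ' ' } else { c })
        .filter(|c| !c.is_control())
        .collect::<String>()
        .split_whitespace()
        .collect::<Vec<_>>()
        .join(" ")
        .chars()
        .take(MAX_LEN)
        .collect()
}
```

Collapsing to a single line is the important property: once the name cannot contain a newline, it cannot start a new markdown block and escape the `**{}**` header it is written into.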
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/agent/channel.rs`:
- Around line 2224-2225: The code currently wraps possibly-empty strings in
Some(...) for memory_bulletin_text and knowledge_synthesis_text, which prevents
render_channel_prompt_with_links() from falling back when the synthesis is
empty; change those assignments so they produce None for empty strings (e.g.,
set knowledge_synthesis_text to None when
rc.knowledge_synthesis.load().to_string() is empty, and similarly for
memory_bulletin_text) so render_channel_prompt_with_links() can perform its
fallback behavior.

In `@src/conversation/participants.rs`:
- Around line 38-42: The participant key function conflates the base platform
used for HumanDef lookup with the adapter-scoped fallback key; change
participant_memory_key to keep using the base platform for
find_human_for_sender(humans, base_platform, sender_id) but build the fallback
string from the adapter when present
(adapter.as_deref().unwrap_or(base_platform)) instead of the lookup platform.
Add an adapter: Option<&str> parameter (or equivalent) to participant_memory_key
and any other affected functions called in the 54-67 region, use the
adapter-derived string only for the format! fallback
(format!("{adapter_or_base}:{sender_id}")) while leaving the lookup call
unchanged.

---

Outside diff comments:
In `@src/agent/channel.rs`:
- Around line 1590-1602: The batch/coalesced path calls run_agent_turn and then
handle_agent_result but ignores the new reply_text return and therefore skips
decision extraction; modify the coalesced branch that calls run_agent_turn (the
tuple currently bound as (result, skip_flag, replied_flag, _, _)) to capture the
reply_text return, then call extract_decision_summary_from_reply(reply_text,
&conversation_id, ...) exactly as the single-message path does before/after
handle_agent_result so the decision is recorded into working memory; adjust the
tuple binding and ensure the extracted decision summary is passed/stored in the
same way the single-message flow uses.

---

Duplicate comments:
In `@src/agent/channel.rs`:
- Around line 1957-1970: The decision extraction and attribution run too early —
move the call to extract_decision_summary_from_reply and any use of reply_text
so it only happens after an actual reply delivery path (the same place where
plaintext fallback and delivered replies are handled), and ensure participant
attribution (participant_memory_key call using message.source/message.sender_id
and humans) is only performed when !is_retrigger (i.e., avoid computing
system:system keys for retrigger/system turns); in short, defer
extract_decision_summary_from_reply behind the delivery branch and wrap the
participant_memory_key logic with a guard that rejects retrigger/system turns.
- Around line 139-144: The current code unconditionally picks the first sentence
from trimmed via the split_terminator chain (variable summary), which captures
acknowledgements like "Sounds good." instead of the actual decision; change this
to iterate all sentences from trimmed (using the same split_terminator([...])
iterator) and choose the sentence that likely contains the decision by
preferring sentences that include decision/intent markers (e.g., contains words
like "will", "we'll", "we will", "going to", "shall", "plan to", "switch to",
"use", verbs in future/imperative forms or contains a clear object), falling
back to the last non-empty sentence if no marker is found; implement this
replacement where summary is set so summary becomes the chosen
sentence.trim().to_string() rather than always the first sentence.

In `@src/config/load.rs`:
- Around line 1614-1631: The ParticipantContextConfig being built from
toml.defaults.participant_context can accept invalid bounds (e.g.,
min_participants > max_participants); after constructing the
ParticipantContextConfig (or in ParticipantContextConfig::new/validate) check
that min_participants <= max_participants and that token_budget (and other
numeric fields if applicable) are non-negative, and if the check fails return a
config load error (or Err) with a clear message including the offending field
names and values; update the load path that currently uses the
toml.defaults...map(...).unwrap_or(...) to propagate that validation error
instead of silently accepting or disabling the section (refer to
ParticipantContextConfig, min_participants, max_participants, token_budget, and
base_defaults.participant_context).

In `@src/conversation/participants.rs`:
- Around line 113-129: The participant_display_name function currently prefers
metadata["sender_display_name"] over InboundMessage.formatted_author; change the
lookup to prefer formatted_author first and fall back to
metadata["sender_display_name"], keeping the same trimming, empty-string
filtering, and final fallback to message.sender_id; update the chain in
participant_display_name to call message.formatted_author (as_deref -> trim ->
filter -> to_owned) before attempting metadata.get("sender_display_name") so
formatted_author is the primary rendered name.

In `@src/memory/working.rs`:
- Around line 771-779: participant.display_name is user-controlled and is
written raw into the markdown block; sanitize it first by collapsing
whitespace/newlines and stripping control/non-printable chars, then truncate to
a safe max length (e.g., 200 chars) before calling write!. Implement this inside
the loop (or add a small helper like sanitize_display_name) and use the
sanitized value instead of participant.display_name when writing the "**{}**"
header (and apply same treatment to participant.profile_summary if needed).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 4ce377f0-8150-4a15-997b-2e61edb09c9a

📥 Commits

Reviewing files that changed from the base of the PR and between f333167 and 130a76e.

📒 Files selected for processing (16)
  • docs/design-docs/participant-awareness.md
  • docs/design-docs/working-memory.md
  • prompts/en/channel.md.j2
  • src/agent/channel.rs
  • src/agent/channel_history.rs
  • src/api/channels.rs
  • src/config.rs
  • src/config/load.rs
  • src/config/runtime.rs
  • src/config/toml_schema.rs
  • src/config/types.rs
  • src/conversation.rs
  • src/conversation/participants.rs
  • src/memory/working.rs
  • src/prompts/engine.rs
  • tests/context_dump.rs
✅ Files skipped from review due to trivial changes (3)
  • docs/design-docs/participant-awareness.md
  • src/api/channels.rs
  • src/config.rs
🚧 Files skipped from review as they are similar to previous changes (6)
  • prompts/en/channel.md.j2
  • src/config/runtime.rs
  • src/conversation.rs
  • src/config/toml_schema.rs
  • docs/design-docs/working-memory.md
  • src/prompts/engine.rs

Comment on lines +38 to +42
pub fn participant_memory_key(humans: &[HumanDef], platform: &str, sender_id: &str) -> String {
find_human_for_sender(humans, platform, sender_id)
.map(|entry| entry.id.clone())
.unwrap_or_else(|| format!("{platform}:{sender_id}"))
}

⚠️ Potential issue | 🟠 Major

Separate lookup platform from participant key scope.

message.source is only the base platform, so using a single platform string for both HumanDef lookup and fallback key generation collapses named adapters onto the same participant key when sender IDs overlap. The decision-event write path in src/agent/channel.rs uses the same scheme, so recent activity will mix across adapters. Keep lookup on the base platform, but derive the stored fallback key from message.adapter.as_deref().unwrap_or(message.source.as_str()).

Also applies to: 54-67

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/conversation/participants.rs` around lines 38 - 42, The participant key
function conflates the base platform used for HumanDef lookup with the
adapter-scoped fallback key; change participant_memory_key to keep using the
base platform for find_human_for_sender(humans, base_platform, sender_id) but
build the fallback string from the adapter when present
(adapter.as_deref().unwrap_or(base_platform)) instead of the lookup platform.
Add an adapter: Option<&str> parameter (or equivalent) to participant_memory_key
and any other affected functions called in the 54-67 region, use the
adapter-derived string only for the format! fallback
(format!("{adapter_or_base}:{sender_id}")) while leaving the lookup call
unchanged.
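The proposed split between lookup platform and fallback scope might look like this sketch. Here `known_human` stands in for `find_human_for_sender(humans, base_platform, sender_id).map(|h| h.id)`; the parameter shape is an assumption, not the actual signature.

```rust
// Illustrative sketch of the proposed split: HumanDef lookup stays keyed on
// the base platform, while the fallback key is scoped to the adapter.
fn participant_memory_key(
    known_human: Option<&str>, // stand-in for the find_human_for_sender result
    base_platform: &str,
    adapter: Option<&str>,
    sender_id: &str,
) -> String {
    match known_human {
        // A configured human keeps its stable id regardless of adapter.
        Some(id) => id.to_string(),
        // Otherwise scope the key to the adapter when one is present, so
        // overlapping sender IDs on named adapters stay distinct.
        None => format!("{}:{sender_id}", adapter.unwrap_or(base_platform)),
    }
}
```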

@jamiepine
Member

Review notes — participant context foundation

Nice work on this, the architecture is clean and the test coverage is solid. A few things worth addressing before merge:

Should fix:

  1. Decision summary picks the wrong sentence (channel.rs:139) — has_explicit_marker scans the full reply but the summary always grabs the first sentence. So "Got it. We'll switch to X." emits "Got it" as the Decision, which is exactly the acknowledgement noise this PR is trying to filter out. The summary should select the first sentence that actually contains a marker, falling back to the first sentence only if none match.

  2. System retriggered turns get attributed to a fake participant (channel.rs:1971) — participant_memory_key() runs unconditionally, so synthetic retrigger messages (where source == "system") get written under system/system. Should guard with !is_retrigger or message.source != "system" and set user_id = None for those turns.

  3. Coalesced replies silently drop decisions (channel.rs:1590) — The batched/coalesced path discards reply_text and never emits a Decision event. Any explicit decision made in a multi-message turn is lost. Should thread reply_text through and emit a channel-scoped decision even when sender attribution is ambiguous.

Nice to have:

  1. Display name fallback order (participants.rs:113) — Currently prioritizes metadata["sender_display_name"] over formatted_author, but the InboundMessage contract has formatted_author as primary. Swap the order.

  2. Sanitize display names (participants.rs:60) — These come from user-controlled metadata and get rendered into the system prompt. Worth normalizing to a single line and capping length.

  3. Log participant-context render failures (api/channels.rs:615) — unwrap_or_default() makes errors invisible on a debugging endpoint. A tracing::warn before the fallback would help.

Overall this is in good shape — #1-3 are the ones I'd want addressed before merging.

@vsumner
Collaborator Author

vsumner commented Apr 3, 2026

Addressed @jamiepine review feedback in f801efc.

Fixed:

  1. Decision extraction now selects the sentence that actually contains the decision marker instead of always taking the first sentence.
  2. Retrigger/system turns no longer get a fake participant attribution.
  3. Coalesced reply paths now emit Decision events instead of dropping them.
  4. Participant display-name fallback now prefers formatted_author over metadata.
  5. Participant display names are sanitized before entering prompt context.
  6. Prompt inspection now logs participant-context render failures instead of silently swallowing them.

Verification:

  • cargo test extracts_decision_summary_from_reply_text --lib
  • cargo test decision_user_id_skips_retrigger_messages --lib
  • cargo test participant_display_name_ --lib
  • just gate-pr

@coderabbitai coderabbitai bot left a comment

🧹 Nitpick comments (3)
src/agent/channel.rs (3)

116-139: Consider extracting the duplicated explicit markers array.

The explicit_markers array is defined in extract_decision_summary_from_reply (lines 116-139), and the same marker list is consulted in sentence_contains_decision_marker (lines 91-93). Consider extracting it to a module-level constant to avoid drift and reduce duplication.

♻️ Suggested refactor
+const DECISION_MARKERS: &[&str] = &[
+    "we decided to ",
+    "i decided to ",
+    "decision:",
+    "the decision is ",
+    "approved: ",
+    "approved to ",
+    "moving forward with ",
+    "move forward with ",
+    "going with ",
+    "switching to ",
+    "we will use ",
+    "i will use ",
+    "we'll use ",
+    "i'll use ",
+    "we will switch to ",
+    "i will switch to ",
+    "we'll switch to ",
+    "i'll switch to ",
+    "we will proceed with ",
+    "i will proceed with ",
+    "we'll proceed with ",
+    "i'll proceed with ",
+];
+
+const CHANGE_COMPARISON_VERBS: &[&str] = &[
+    "use ", "switch", "adopt ", "choose ", "pick ", "go with ", "proceed with ",
+];
+
 fn sentence_contains_decision_marker(sentence: &str, explicit_markers: &[&str]) -> bool {

Then use DECISION_MARKERS and CHANGE_COMPARISON_VERBS in both functions.

Also applies to: 89-106

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/agent/channel.rs` around lines 116 - 139, The explicit_markers array is
duplicated between extract_decision_summary_from_reply and
sentence_contains_decision_marker; extract that array (and any related arrays
like change/comparison verbs) into a module-level constant (e.g.,
DECISION_MARKERS and CHANGE_COMPARISON_VERBS) and replace the in-function
explicit_markers usages with references to those constants so both
extract_decision_summary_from_reply and sentence_contains_decision_marker use
the shared definitions to avoid drift and duplication.

1747-1764: Inconsistent handling of empty strings in prompt parameters.

participant_context passes through empty_to_none (line 1764), but memory_bulletin_text and knowledge_synthesis_text (lines 1749-1750) are passed directly from render_memory_layers as Some(...). This inconsistency means empty bulletin/synthesis strings may suppress fallback behavior that None would enable.

Consider applying the same empty_to_none treatment to bulletin and synthesis, or fix at the source in render_memory_layers as noted earlier.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/agent/channel.rs` around lines 1747 - 1764,
render_channel_prompt_with_links is being passed memory_bulletin_text and
knowledge_synthesis_text directly while participant_context uses empty_to_none,
causing empty strings to behave differently; update the call site to wrap
memory_bulletin_text and knowledge_synthesis_text with empty_to_none (same
helper used for participant_context and working_memory) so empty strings become
None and allow fallback behavior, or alternatively adjust render_memory_layers
to return None for empty results and keep the call unchanged—locate
render_channel_prompt_with_links invocation and modify either the arguments
(memory_bulletin_text, knowledge_synthesis_text) to use empty_to_none or the
render_memory_layers function to emit Option::None for empty strings.

2540-2547: Return tuple is growing large; consider a struct for clarity.

The run_agent_turn function now returns a 5-tuple. While this works, a named struct would improve readability and make it easier to add future fields without breaking call sites.

♻️ Optional refactor
struct AgentTurnResult {
    result: std::result::Result<String, rig::completion::PromptError>,
    skip_flag: crate::tools::SkipFlag,
    replied_flag: crate::tools::RepliedFlag,
    retrigger_reply_preserved: bool,
    reply_text: Option<String>,
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/agent/channel.rs` around lines 2540-2547: the return value of
run_agent_turn is a growing 5-tuple which harms readability; define a named
struct (e.g., AgentTurnResult) with fields matching the tuple (result:
Result<String, rig::completion::PromptError>, skip_flag: crate::tools::SkipFlag,
replied_flag: crate::tools::RepliedFlag, retrigger_reply_preserved: bool,
reply_text: Option<String>) and change run_agent_turn's signature to return
Result<AgentTurnResult, ...> (or wrap in the existing Result), then update all
call sites and any pattern matches that destructure the previous tuple to use
the struct fields (AgentTurnResult::result, ::skip_flag, ::replied_flag,
::retrigger_reply_preserved, ::reply_text) so future additions are non-breaking
and code is clearer.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 10a50999-36a7-4e53-a510-71722b8340ae

📥 Commits

Reviewing files that changed from the base of the PR and between 130a76e and f801efc.

📒 Files selected for processing (3)
  • src/agent/channel.rs
  • src/api/channels.rs
  • src/conversation/participants.rs
🚧 Files skipped from review as they are similar to previous changes (2)
  • src/api/channels.rs
  • src/conversation/participants.rs

@vsumner vsumner force-pushed the codex/participant-context-foundation branch from f801efc to c375803 on April 3, 2026 18:03
@vsumner vsumner enabled auto-merge April 3, 2026 18:03

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
src/memory/working.rs (1)

749-766: Redundant empty check after min_participants guard.

Line 764-766 checks if participants.is_empty() after line 759 already returns early when participants.len() < config.min_participants. If min_participants >= 1 (which is the default), the empty check is unreachable. Consider removing it unless you intend to support min_participants = 0:

     if !config.enabled || participants.len() < config.min_participants {
         return Ok(String::new());
     }

     let now = Utc::now();
-    if participants.is_empty() {
-        return Ok(String::new());
-    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/memory/working.rs` around lines 749-766: the check for
participants.is_empty() at the end of render_participant_context is redundant
because the earlier guard returns when participants.len() <
config.min_participants; either remove the unreachable if-block (delete the `if
participants.is_empty() { return Ok(String::new()); }` lines) or explicitly
support min_participants == 0 by adjusting the earlier condition (e.g., change
the len comparison) — locate this in the render_participant_context function and
apply the chosen fix.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 39b47854-f570-409f-b4c2-d8a84286cf60

📥 Commits

Reviewing files that changed from the base of the PR and between f801efc and c375803.

📒 Files selected for processing (16)
  • docs/design-docs/participant-awareness.md
  • docs/design-docs/working-memory.md
  • prompts/en/channel.md.j2
  • src/agent/channel.rs
  • src/agent/channel_history.rs
  • src/api/channels.rs
  • src/config.rs
  • src/config/load.rs
  • src/config/runtime.rs
  • src/config/toml_schema.rs
  • src/config/types.rs
  • src/conversation.rs
  • src/conversation/participants.rs
  • src/memory/working.rs
  • src/prompts/engine.rs
  • tests/context_dump.rs
✅ Files skipped from review due to trivial changes (4)
  • docs/design-docs/participant-awareness.md
  • src/config/toml_schema.rs
  • src/conversation.rs
  • src/config/types.rs
🚧 Files skipped from review as they are similar to previous changes (6)
  • src/config/runtime.rs
  • prompts/en/channel.md.j2
  • src/config.rs
  • src/config/load.rs
  • src/prompts/engine.rs
  • src/api/channels.rs

@vsumner

vsumner commented Apr 3, 2026

Follow-up review cleanup is pushed in 73e227b.

Included:

  • shared decision marker/comparison verb constants to avoid drift
  • replaced the run_agent_turn return tuple with a named AgentTurnResult
  • normalized the coalesce prompt path so optional synthesis/bulletin text is handled consistently
  • removed the redundant participants.is_empty() guard in render_participant_context

Verification:

  • cargo test extracts_decision_summary_from_reply_text --lib
  • cargo test decision_user_id_skips_retrigger_messages --lib
  • cargo test test_render_participant_context_uses_humans_and_recent_activity --lib
  • just gate-pr

