feat(memory): add participant context foundation #521
Conversation
Note: Reviews paused. This branch is under active development, so CodeRabbit has automatically paused this review to avoid overwhelming you with comments from an influx of new commits.
Walkthrough

Adds in-memory per-channel active participant tracking, a participant-context configuration (TOML + runtime), prompt-time rendering of participant context, expanded prompt-engine inputs (knowledge_synthesis + participant_context), extraction of decision summaries from agent replies to emit Decision events, and a changed channel history return type with updated tests.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 3 passed
Force-pushed from 682f7cf to ac7ae1d.
src/agent/channel.rs (outdated)
```rust
    return None;
}

let mut summary = trimmed
```
Small edge case: `has_explicit_marker` is computed across the full reply, but the summary always takes the first sentence. For replies like `Got it. We'll switch to X.`, this will emit a decision event with `Got it.`
Consider selecting the first sentence that contains an explicit marker/change-comparison, falling back to the first sentence only if none match.
```rust
let mut summary = trimmed
    .split_terminator(['.', '!', '?', '\n'])
    .map(str::trim)
    .filter(|sentence| !sentence.is_empty())
    .find(|sentence| {
        let sentence_lower = sentence.to_ascii_lowercase();
        explicit_markers.iter().any(|marker| sentence_lower.contains(marker))
            || (sentence_lower.contains(" instead of ")
                && [
                    "use ",
                    "switch",
                    "adopt ",
                    "choose ",
                    "pick ",
                    "go with ",
                    "proceed with ",
                ]
                .iter()
                .any(|marker| sentence_lower.contains(marker)))
    })
    .unwrap_or(trimmed)
    .to_string();
```
```rust
}

let human = find_human_for_sender(humans, platform, &message.sender_id);
let display_name = human
```
`display_name` can come from user-controlled metadata (`sender_display_name` / `formatted_author`) and gets rendered into the system prompt as markdown. Normalizing it to a single line (and probably capping length) would avoid newline/formatting surprises.
```rust
let display_name = human
    .and_then(|entry| entry.display_name.as_deref())
    .map(str::trim)
    .filter(|name| !name.is_empty())
    .map(ToOwned::to_owned)
    .unwrap_or_else(|| participant_display_name(message));
let display_name = display_name.split_whitespace().collect::<Vec<_>>().join(" ");
```
src/api/channels.rs (outdated)
```rust
    &participant_config,
)
.await
.unwrap_or_default();
```
For `inspect_prompt`, swallowing errors with `unwrap_or_default()` makes missing sections hard to debug. Logging a warn (and still returning empty) would help.
```rust
.await
.unwrap_or_else(|error| {
    tracing::warn!(%error, channel_id = %query.channel_id, "failed to render participant context for prompt inspection");
    String::new()
});
```
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/agent/channel.rs (1)
1590-1602: ⚠️ Potential issue | 🟠 Major — Coalesced replies never reach decision memory.
Lines 1590-1591 discard `reply_text`, and this path never emits a `WorkingMemoryEventType::Decision`. Any explicit decision made in a batched/coalesced turn is therefore lost, even though the single-message path records it. At minimum, thread the extracted reply through here and record a channel-scoped decision when sender attribution is ambiguous.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/agent/channel.rs` around lines 1590 - 1602, The batched/coalesced path currently ignores the extracted reply_text returned from run_agent_turn, so decisions are never emitted to decision memory; modify the call sequence around run_agent_turn and handle_agent_result to capture and pass the reply_text (or equivalent returned value) into handle_agent_result (or a small wrapper) and ensure that when reply attribution is ambiguous you explicitly emit a WorkingMemoryEventType::Decision scoped to the channel; update the tuple unpacking from run_agent_turn to include reply_text, thread that variable into handle_agent_result or a new record_decision helper, and call the memory/event emitter to record a channel-scoped Decision when necessary.
🧹 Nitpick comments (2)
src/config.rs (1)
1141-1159: Test name promises fallback behavior, but only full override is exercised. This currently proves explicit override parsing, not default fallback merge. Consider adding a second case with only one or two fields set (e.g., only `enabled`) and assert the remaining fields come from defaults.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/config.rs` around lines 1141 - 1159, The test test_participant_context_defaults_and_overrides_resolution currently only verifies a full override; add a second sub-case (or a new test) that constructs a TomlConfig/TOML string with a partial override (e.g., set only defaults.participant_context.enabled = true or only min_participants) then call Config::from_toml and assert that the unset fields on config.defaults.participant_context fall back to the library defaults (token_budget, min_participants, max_participants) rather than being left unset or zero; reference the existing test function name and use TomlConfig and Config::from_toml to locate where to add the new assertions.

src/memory/working.rs (1)
771-813: Budget complete participant blocks before appending. Lines 793-810 trim after the whole section has already been written, so one long profile or `Recent:` line can leave a half-rendered participant block or even just the header. Build each participant entry into a temporary string and only append it when the full block fits the remaining budget.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/memory/working.rs` around lines 771 - 813, The loop currently writes each participant directly into output so a single long profile or "Recent:" line can leave a partial participant block when trimming later; change the logic to build each participant's entire block into a temporary String (e.g., per_participant or entry_buf) — include display_name, role, profile_summary, the await call to participant_recent_activity(...), and its "Recent:" line — then call estimate_tokens on that temporary block plus the current output tokens and only append it to output if it fits within config.token_budget; keep the final fallback trimming logic but it should rarely trigger once you append only full participant blocks.
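The buffering pattern this comment asks for can be sketched as below. Everything here is an illustrative stand-in for the crate's real API: the `Participant` struct, `estimate_tokens`, and the ~4-characters-per-token heuristic are assumptions, not the actual `working.rs` types.

```rust
// Hypothetical participant entry; the real renderer uses richer types.
struct Participant {
    display_name: String,
    profile_summary: String,
    recent: String,
}

// Crude token estimate (~4 characters per token) standing in for the
// crate's real estimator.
fn estimate_tokens(text: &str) -> usize {
    text.len() / 4
}

fn render_participants(participants: &[Participant], token_budget: usize) -> String {
    let mut output = String::new();
    for participant in participants {
        // Build the whole entry first so a partial block is never emitted.
        let mut entry = String::new();
        entry.push_str(&format!("**{}**\n", participant.display_name));
        entry.push_str(&format!("{}\n", participant.profile_summary));
        entry.push_str(&format!("Recent: {}\n", participant.recent));
        // Append only if the complete block still fits the remaining budget.
        if estimate_tokens(&output) + estimate_tokens(&entry) <= token_budget {
            output.push_str(&entry);
        }
    }
    output
}
```

An oversized profile is simply skipped as a unit, so the final trimming fallback mentioned in the comment should rarely trigger.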
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/design-docs/working-memory.md`:
- Around line 353-356: Update the participant-context sections to consistently
describe the current config-backed implementation: replace or remove references
to the older cached-summary pipeline (mentions of a `humans` table,
`HumanStore`, and “cortex-generated” participant summaries) and align all text
with the active implementation that uses a per-session active participant map
keyed by canonical human ID or `platform:sender_id`, matches humans to
configured `HumanDef` entries, and renders `display_name`, `role`, and
`description`/`bio` inline; ensure lines around and after the existing paragraph
(including the text currently referencing the older pipeline) are rewritten to
reflect this single architecture and note that the design can later switch to
user-scoped memories or the participant-awareness.md approach without changing
prompt shape.
In `@src/agent/channel.rs`:
- Around line 1952-1961: The code sets user_id by calling
participant_memory_key(...) for all messages, which incorrectly attributes
synthetic system retrigger turns to a fake participant; update the logic around
the user_id assignment (the block that uses self.deps.humans.load() and
participant_memory_key) to skip creating a participant key when the current turn
is a synthetic retrigger—i.e., only compute Some(participant_memory_key(...)) if
!is_retrigger (or message.source != "system") and the existing empty-string
checks pass, otherwise set user_id to None so retrigger/system events remain
channel-scoped.
In `@src/api/channels.rs`:
- Around line 608-615: The current call to render_participant_context uses
.await.unwrap_or_default(), which hides rendering errors; change it to capture
the Result from render_participant_context (do not call unwrap_or_default), then
on Err log the error with context (include query.channel_id,
tracked_participants or participant_config) using the crate's logging/tracing
facility, and only then fall back to a default participant_context if you must;
alternatively propagate the error out of the handler instead of swallowing it.
Refer to render_participant_context, participant_context,
channel_state.deps.working_memory, tracked_participants, query.channel_id, and
participant_config to locate and fix the code.
In `@src/conversation/participants.rs`:
- Around line 113-130: The participant_display_name function currently prefers
metadata["sender_display_name"] over InboundMessage.formatted_author contrary to
the contract; update participant_display_name to first check
message.formatted_author (trim, non-empty, owned) and only if absent/empty fall
back to message.metadata.get("sender_display_name") (trim, non-empty, owned),
finally defaulting to message.sender_id.clone() so formatted_author is primary
and metadata is fallback.
---
Outside diff comments:
In `@src/agent/channel.rs`:
- Around line 1590-1602: The batched/coalesced path currently ignores the
extracted reply_text returned from run_agent_turn, so decisions are never
emitted to decision memory; modify the call sequence around run_agent_turn and
handle_agent_result to capture and pass the reply_text (or equivalent returned
value) into handle_agent_result (or a small wrapper) and ensure that when reply
attribution is ambiguous you explicitly emit a WorkingMemoryEventType::Decision
scoped to the channel; update the tuple unpacking from run_agent_turn to include
reply_text, thread that variable into handle_agent_result or a new
record_decision helper, and call the memory/event emitter to record a
channel-scoped Decision when necessary.
---
Nitpick comments:
In `@src/config.rs`:
- Around line 1141-1159: The test
test_participant_context_defaults_and_overrides_resolution currently only
verifies a full override; add a second sub-case (or a new test) that constructs
a TomlConfig/TOML string with a partial override (e.g., set only
defaults.participant_context.enabled = true or only min_participants) then call
Config::from_toml and assert that the unset fields on
config.defaults.participant_context fall back to the library defaults
(token_budget, min_participants, max_participants) rather than being left unset
or zero; reference the existing test function name and use TomlConfig and
Config::from_toml to locate where to add the new assertions.
In `@src/memory/working.rs`:
- Around line 771-813: The loop currently writes each participant directly into
output so a single long profile or "Recent:" line can leave a partial
participant block when trimming later; change the logic to build each
participant's entire block into a temporary String (e.g., per_participant or
entry_buf) — include display_name, role, profile_summary, the await call to
participant_recent_activity(...), and its "Recent:" line — then call
estimate_tokens on that temporary block plus the current output tokens and only
append it to output if it fits within config.token_budget; keep the final
fallback trimming logic but it should rarely trigger once you append only full
participant blocks.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: f45a2b26-6b44-4e7a-a07a-67ab3a4f016d
📒 Files selected for processing (16)
- docs/design-docs/participant-awareness.md
- docs/design-docs/working-memory.md
- prompts/en/channel.md.j2
- src/agent/channel.rs
- src/agent/channel_history.rs
- src/api/channels.rs
- src/config.rs
- src/config/load.rs
- src/config/runtime.rs
- src/config/toml_schema.rs
- src/config/types.rs
- src/conversation.rs
- src/conversation/participants.rs
- src/memory/working.rs
- src/prompts/engine.rs
- tests/context_dump.rs
Force-pushed from ac7ae1d to f333167, then from f333167 to 130a76e.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/agent/channel.rs (1)
1590-1602: ⚠️ Potential issue | 🟠 Major — Coalesced turns currently skip decision recording.
`run_agent_turn()` now returns `reply_text` specifically for decision extraction, but this branch throws it away and only dispatches the visible reply. Explicit decisions made during coalesced turns never reach working memory, so recent participant/context rendering diverges from the single-message path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/agent/channel.rs` around lines 1590 - 1602, run_agent_turn is returning a reply_text used for decision extraction but the coalesced-turn branch currently drops it by destructuring as (_, _, _, _, _); update the destructuring to capture that reply_text (e.g., let (result, skip_flag, replied_flag, reply_text, _) = self.run_agent_turn(...).await?;) and forward that captured reply_text to whatever path records decisions (either by passing it into handle_agent_result or invoking the decision-recording function used on the single-message path) so explicit decisions from coalesced turns are written to working memory; reference run_agent_turn, handle_agent_result, combined_text, system_prompt, conversation_id, and attachment_parts to locate the change.
♻️ Duplicate comments (3)
src/api/channels.rs (1)
608-615: ⚠️ Potential issue | 🟡 Minor — Don't hide participant-context render failures in prompt inspection.
Line 615 still turns renderer errors into an empty section, so this debugging endpoint can't distinguish “no participant context” from “participant-context rendering broke.” Please log the error or surface it instead of defaulting silently.
As per coding guidelines: "Don't silently discard errors. No `let _ =` on Results. Handle them, log them, or propagate them. The only exception is `.ok()` on channel sends where the receiver may be dropped."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/api/channels.rs` around lines 608 - 615, The call to render_participant_context in channels.rs is swallowing renderer errors via .await.unwrap_or_default(), making prompt inspection unable to tell if there was no context or a render failure; replace the unwrap_or_default usage around crate::memory::working::render_participant_context so you explicitly handle the Result (e.g., match or ?), and on Err log the error with context (include query.channel_id and participant_config/tracked_participants) or propagate the error into the inspection response instead of returning an empty participant_context; update any downstream usage of participant_context accordingly so failures are visible in the debugging endpoint.

src/agent/channel.rs (2)
1952-1961: ⚠️ Potential issue | 🟠 Major — Do not attribute synthetic retrigger replies to a fake participant.
On retrigger turns, `message.source` and `message.sender_id` are both `"system"`, so this still records a user-scoped `Decision` under a synthetic `system/system` key. Leave these events channel-scoped by guarding `participant_memory_key(...)` with `!is_retrigger` (or `message.source != "system"`).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/agent/channel.rs` around lines 1952 - 1961, The current logic always creates a participant-scoped key even for synthetic retrigger replies (when message.source and message.sender_id are "system"); change the branch that sets user_id so that you only call participant_memory_key(...) when the turn is not a retrigger—e.g., guard the Some(participant_memory_key(...)) with a condition like !is_retrigger or message.source != "system" (and still treat empty source/sender_id as None); update the code paths that reference user_id accordingly to ensure retrigger events remain channel-scoped.
121-144: ⚠️ Potential issue | 🟠 Major — Apply the decision detector to individual sentences.
The marker scan runs across the full reply, but the summary still comes from the first sentence. That mis-summarizes `Got it. We'll switch to X.` as `Got it`, and the current `instead of` + generic-verb branch will also accept question/hypothetical sentences like `Should we use X instead of Y?`. Split first, then select the first sentence that itself contains explicit, declarative decision language.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/agent/channel.rs` around lines 121 - 144, The marker detection currently tests the whole reply but always picks the first sentence for summary; change the flow in channel::... so you first split trimmed into sentences (using the existing split_terminator call), iterate sentences, and for each sentence compute lowercase `lower` and test the existing checks (`explicit_markers` membership and the `instead of` + verb-list test now named `has_explicit_marker` / `has_change_comparison`), then select the first sentence that returns true and set `summary` to that sentence (trimmed and to_string()); if no sentence matches, return None as before. Use the same symbols `trimmed`, `explicit_markers`, `has_explicit_marker`, `has_change_comparison`, and `summary` so changes are localized.
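A standalone sketch of the sentence-first selection this comment describes. The marker lists are illustrative placeholders, not the crate's real `explicit_markers`, and filtering out interrogative sentences is deliberately left out here because `split_terminator` drops the terminating punctuation:

```rust
// Hypothetical sentence-first decision extractor: split into sentences,
// then pick the first sentence that itself carries decision language.
fn extract_decision_summary(reply: &str) -> Option<String> {
    let trimmed = reply.trim();
    if trimmed.is_empty() {
        return None;
    }
    // Illustrative marker lists; the real function's lists may differ.
    let explicit_markers = ["we'll switch to", "decision:", "we decided to"];
    let change_verbs = ["use ", "switch", "adopt ", "choose ", "go with "];
    trimmed
        .split_terminator(['.', '!', '?', '\n'])
        .map(str::trim)
        .filter(|sentence| !sentence.is_empty())
        .find(|sentence| {
            let lower = sentence.to_ascii_lowercase();
            let has_explicit_marker =
                explicit_markers.iter().any(|marker| lower.contains(marker));
            let has_change_comparison = lower.contains(" instead of ")
                && change_verbs.iter().any(|verb| lower.contains(verb));
            has_explicit_marker || has_change_comparison
        })
        .map(ToOwned::to_owned)
}
```

With this shape, `Got it. We'll switch to X.` summarizes to the second sentence rather than the acknowledgment.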
🧹 Nitpick comments (1)
src/memory/working.rs (1)
1579-1582: Pin `min_participants` in the test setup. This test only exercises the renderer while `ParticipantContextConfig::default().min_participants <= 1`. Setting `min_participants: 1` here keeps the test tied to the behavior under test instead of a default that can change.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/memory/working.rs` around lines 1579 - 1582, The test's ParticipantContextConfig should explicitly set min_participants to 1 so the behavior under test is stable; update the config construction (the variable named config of type crate::config::ParticipantContextConfig) to include min_participants: 1 alongside max_participants: 3 (i.e., set min_participants: 1 in the ParticipantContextConfig initializer) rather than relying on ParticipantContextConfig::default() for that field.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/config/load.rs`:
- Around line 1609-1626: In the participant_context mapping in
src/config/load.rs, add validation after constructing ParticipantContextConfig
(the block creating ParticipantContextConfig inside
toml.defaults.participant_context.map(...)) to reject configurations where
max_participants < min_participants; e.g., after building the struct check if
max_participants < min_participants and return an error (or propagate a
ConfigLoad/Validation error) so the loader fails fast instead of accepting
impossible bounds; ensure the check references
ParticipantContextConfig.min_participants and .max_participants and plugs into
the function's existing Result/validation error path.
In `@src/memory/working.rs`:
- Around line 771-779: ActiveParticipant.display_name (and its fallback
participant_display_name) can contain newlines/control characters from user
input; before writing participant.display_name in the loop in working.rs,
sanitize it by stripping control characters and newlines and truncating to a
safe max length (e.g. 64 chars). Implement this by normalizing the name string
(replace any \r/\n with a space, remove non-printable/control chars) and then
truncating to the cap, either inline before the write! calls or via a small
helper function like sanitize_display_name(name: &str) used where
participant.display_name (and optionally participant.profile_summary) are
written. Ensure the sanitized variable is used in write!(output, "**{}**", ...)
and any subsequent writes so no raw user-controlled newlines or control
characters reach the system context.
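One possible `sanitize_display_name` along the lines sketched in the prompt; the 64-character cap and the exact normalization rules are assumptions, not the crate's settled behavior:

```rust
// Strip control characters, collapse newlines into single spaces, and cap
// length so user-controlled names cannot inject extra prompt lines.
fn sanitize_display_name(name: &str) -> String {
    let cleaned: String = name
        .chars()
        .map(|c| if c == '\r' || c == '\n' { ' ' } else { c })
        .filter(|c| !c.is_control())
        .collect();
    // Collapse whitespace runs left behind by the replaced newlines.
    let collapsed = cleaned.split_whitespace().collect::<Vec<_>>().join(" ");
    // Arbitrary illustrative cap; pick whatever fits the prompt layout.
    collapsed.chars().take(64).collect()
}
```

The same helper could also be applied to `participant.profile_summary` before it is written into the section.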
---
Outside diff comments:
In `@src/agent/channel.rs`:
- Around line 1590-1602: run_agent_turn is returning a reply_text used for
decision extraction but the coalesced-turn branch currently drops it by
destructuring as (_, _, _, _, _); update the destructuring to capture that
reply_text (e.g., let (result, skip_flag, replied_flag, reply_text, _) =
self.run_agent_turn(...).await?;) and forward that captured reply_text to
whatever path records decisions (either by passing it into handle_agent_result
or invoking the decision-recording function used on the single-message path) so
explicit decisions from coalesced turns are written to working memory; reference
run_agent_turn, handle_agent_result, combined_text, system_prompt,
conversation_id, and attachment_parts to locate the change.
---
Duplicate comments:
In `@src/agent/channel.rs`:
- Around line 1952-1961: The current logic always creates a participant-scoped
key even for synthetic retrigger replies (when message.source and
message.sender_id are "system"); change the branch that sets user_id so that you
only call participant_memory_key(...) when the turn is not a retrigger—e.g.,
guard the Some(participant_memory_key(...)) with a condition like !is_retrigger
or message.source != "system" (and still treat empty source/sender_id as None);
update the code paths that reference user_id accordingly to ensure retrigger
events remain channel-scoped.
- Around line 121-144: The marker detection currently tests the whole reply but
always picks the first sentence for summary; change the flow in channel::... so
you first split trimmed into sentences (using the existing split_terminator
call), iterate sentences, and for each sentence compute lowercase `lower` and
test the existing checks (`explicit_markers` membership and the `instead of` +
verb-list test now named `has_explicit_marker` / `has_change_comparison`), then
select the first sentence that returns true and set `summary` to that sentence
(trimmed and to_string()); if no sentence matches, return None as before. Use
the same symbols `trimmed`, `explicit_markers`, `has_explicit_marker`,
`has_change_comparison`, and `summary` so changes are localized.
In `@src/api/channels.rs`:
- Around line 608-615: The call to render_participant_context in channels.rs is
swallowing renderer errors via .await.unwrap_or_default(), making prompt
inspection unable to tell if there was no context or a render failure; replace
the unwrap_or_default usage around
crate::memory::working::render_participant_context so you explicitly handle the
Result (e.g., match or ?), and on Err log the error with context (include
query.channel_id and participant_config/tracked_participants) or propagate the
error into the inspection response instead of returning an empty
participant_context; update any downstream usage of participant_context
accordingly so failures are visible in the debugging endpoint.
---
Nitpick comments:
In `@src/memory/working.rs`:
- Around line 1579-1582: The test's ParticipantContextConfig should explicitly
set min_participants to 1 so the behavior under test is stable; update the
config construction (the variable named config of type
crate::config::ParticipantContextConfig) to include min_participants: 1
alongside max_participants: 3 (i.e., set min_participants: 1 in the
ParticipantContextConfig initializer) rather than relying on
ParticipantContextConfig::default() for that field.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 0f5368df-1c8e-448d-8d04-616541f833ea
📒 Files selected for processing (16)
- docs/design-docs/participant-awareness.md
- docs/design-docs/working-memory.md
- prompts/en/channel.md.j2
- src/agent/channel.rs
- src/agent/channel_history.rs
- src/api/channels.rs
- src/config.rs
- src/config/load.rs
- src/config/runtime.rs
- src/config/toml_schema.rs
- src/config/types.rs
- src/conversation.rs
- src/conversation/participants.rs
- src/memory/working.rs
- src/prompts/engine.rs
- tests/context_dump.rs
✅ Files skipped from review due to trivial changes (2)
- src/config.rs
- docs/design-docs/participant-awareness.md
🚧 Files skipped from review as they are similar to previous changes (7)
- prompts/en/channel.md.j2
- tests/context_dump.rs
- src/conversation.rs
- src/prompts/engine.rs
- docs/design-docs/working-memory.md
- src/config/types.rs
- src/conversation/participants.rs
```rust
participant_context: toml
    .defaults
    .participant_context
    .map(|participant_context| ParticipantContextConfig {
        enabled: participant_context
            .enabled
            .unwrap_or(base_defaults.participant_context.enabled),
        min_participants: participant_context
            .min_participants
            .unwrap_or(base_defaults.participant_context.min_participants),
        token_budget: participant_context
            .token_budget
            .unwrap_or(base_defaults.participant_context.token_budget),
        max_participants: participant_context
            .max_participants
            .unwrap_or(base_defaults.participant_context.max_participants),
    })
    .unwrap_or(base_defaults.participant_context),
```
Validate impossible participant-context bounds during load.
This block accepts combinations like `min_participants = 4` with `max_participants = 3`. Once that reaches runtime, the section can never satisfy both constraints and just looks like participant context silently stopped working. Please reject at least `max_participants < min_participants` here so bad config fails fast.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/config/load.rs` around lines 1609 - 1626, In the participant_context
mapping in src/config/load.rs, add validation after constructing
ParticipantContextConfig (the block creating ParticipantContextConfig inside
toml.defaults.participant_context.map(...)) to reject configurations where
max_participants < min_participants; e.g., after building the struct check if
max_participants < min_participants and return an error (or propagate a
ConfigLoad/Validation error) so the loader fails fast instead of accepting
impossible bounds; ensure the check references
ParticipantContextConfig.min_participants and .max_participants and plugs into
the function's existing Result/validation error path.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/agent/channel.rs (1)
1590-1602: ⚠️ Potential issue | 🟠 Major — Coalesced turns currently bypass decision logging.
This batch path ignores the new `reply_text` return and never calls `extract_decision_summary_from_reply`, so any decision made in a coalesced reply is absent from working memory. Reuse the same post-turn extraction helper here that the single-message path uses.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/agent/channel.rs` around lines 1590 - 1602, The batch/coalesced path calls run_agent_turn and then handle_agent_result but ignores the new reply_text return and therefore skips decision extraction; modify the coalesced branch that calls run_agent_turn (the tuple currently bound as (result, skip_flag, replied_flag, _, _)) to capture the reply_text return, then call extract_decision_summary_from_reply(reply_text, &conversation_id, ...) exactly as the single-message path does before/after handle_agent_result so the decision is recorded into working memory; adjust the tuple binding and ensure the extracted decision summary is passed/stored in the same way the single-message flow uses.
♻️ Duplicate comments (5)
src/config/load.rs (1)
1614-1631: ⚠️ Potential issue | 🟡 Minor — Validate participant-context bounds during config load.
Lines 1614-1631 still accept impossible configs (for example, `min_participants = 4` with `max_participants = 3`). Please reject this during load so misconfiguration fails fast instead of silently disabling the section at runtime.

Suggested fix
```diff
-participant_context: toml
-    .defaults
-    .participant_context
-    .map(|participant_context| ParticipantContextConfig {
-        enabled: participant_context
-            .enabled
-            .unwrap_or(base_defaults.participant_context.enabled),
-        min_participants: participant_context
-            .min_participants
-            .unwrap_or(base_defaults.participant_context.min_participants),
-        token_budget: participant_context
-            .token_budget
-            .unwrap_or(base_defaults.participant_context.token_budget),
-        max_participants: participant_context
-            .max_participants
-            .unwrap_or(base_defaults.participant_context.max_participants),
-    })
-    .unwrap_or(base_defaults.participant_context),
+participant_context: {
+    let participant_context = toml
+        .defaults
+        .participant_context
+        .map(|participant_context| ParticipantContextConfig {
+            enabled: participant_context
+                .enabled
+                .unwrap_or(base_defaults.participant_context.enabled),
+            min_participants: participant_context
+                .min_participants
+                .unwrap_or(base_defaults.participant_context.min_participants),
+            token_budget: participant_context
+                .token_budget
+                .unwrap_or(base_defaults.participant_context.token_budget),
+            max_participants: participant_context
+                .max_participants
+                .unwrap_or(base_defaults.participant_context.max_participants),
+        })
+        .unwrap_or(base_defaults.participant_context);
+    if participant_context.max_participants < participant_context.min_participants {
+        return Err(ConfigError::Invalid(
+            "defaults.participant_context.max_participants must be >= min_participants"
+                .to_string(),
+        )
+        .into());
+    }
+    participant_context
+},
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/config/load.rs` around lines 1614 - 1631, The ParticipantContextConfig being built from toml.defaults.participant_context can accept invalid bounds (e.g., min_participants > max_participants); after constructing the ParticipantContextConfig (or in ParticipantContextConfig::new/validate) check that min_participants <= max_participants and that token_budget (and other numeric fields if applicable) are non-negative, and if the check fails return a config load error (or Err) with a clear message including the offending field names and values; update the load path that currently uses the toml.defaults...map(...).unwrap_or(...) to propagate that validation error instead of silently accepting or disabling the section (refer to ParticipantContextConfig, min_participants, max_participants, token_budget, and base_defaults.participant_context).

src/conversation/participants.rs (1)
113-129: ⚠️ Potential issue | 🟡 Minor

Prefer `formatted_author` before metadata fallback.

The `InboundMessage` contract treats `formatted_author` as the primary rendered name. Reversing that order here can downgrade the display name now that channel logging and participant prompts both call this helper.

Suggested fix
```diff
 pub fn participant_display_name(message: &InboundMessage) -> String {
     message
-        .metadata
-        .get("sender_display_name")
-        .and_then(|value| value.as_str())
-        .map(str::trim)
-        .filter(|name| !name.is_empty())
-        .map(ToOwned::to_owned)
-        .or_else(|| {
-            message
-                .formatted_author
-                .as_deref()
-                .map(str::trim)
-                .filter(|name| !name.is_empty())
-                .map(ToOwned::to_owned)
-        })
+        .formatted_author
+        .as_deref()
+        .map(str::trim)
+        .filter(|name| !name.is_empty())
+        .map(ToOwned::to_owned)
+        .or_else(|| {
+            message
+                .metadata
+                .get("sender_display_name")
+                .and_then(|value| value.as_str())
+                .map(str::trim)
+                .filter(|name| !name.is_empty())
+                .map(ToOwned::to_owned)
+        })
         .unwrap_or_else(|| message.sender_id.clone())
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/conversation/participants.rs` around lines 113 - 129, The participant_display_name function currently prefers metadata["sender_display_name"] over InboundMessage.formatted_author; change the lookup to prefer formatted_author first and fall back to metadata["sender_display_name"], keeping the same trimming, empty-string filtering, and final fallback to message.sender_id; update the chain in participant_display_name to call message.formatted_author (as_deref -> trim -> filter -> to_owned) before attempting metadata.get("sender_display_name") so formatted_author is the primary rendered name.

src/agent/channel.rs (2)
1957-1970: ⚠️ Potential issue | 🟠 Major

Log decisions only after a real user reply is delivered.

Two supported paths are wrong here: plaintext fallback replies never set `replied_flag`, so their decisions are dropped, and retrigger/system turns still compute a `system:system` participant key when extraction does fire. Move extraction behind the actual delivery path and guard attribution with `!is_retrigger`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/agent/channel.rs` around lines 1957 - 1970, The decision extraction and attribution run too early — move the call to extract_decision_summary_from_reply and any use of reply_text so it only happens after an actual reply delivery path (the same place where plaintext fallback and delivered replies are handled), and ensure participant attribution (participant_memory_key call using message.source/message.sender_id and humans) is only performed when !is_retrigger (i.e., avoid computing system:system keys for retrigger/system turns); in short, defer extract_decision_summary_from_reply behind the delivery branch and wrap the participant_memory_key logic with a guard that rejects retrigger/system turns.
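The two guards the comment asks for can be condensed into a single predicate. This is a minimal sketch; `should_record_decision` is a hypothetical helper, and the two booleans stand in for the `replied_flag` and `is_retrigger` state that channel.rs tracks.

```rust
// Hypothetical predicate capturing both guards from the review:
// only record a decision once a reply was actually delivered, and
// never attribute decisions on retrigger/system turns.
fn should_record_decision(reply_delivered: bool, is_retrigger: bool) -> bool {
    reply_delivered && !is_retrigger
}

fn main() {
    assert!(should_record_decision(true, false));
    assert!(!should_record_decision(true, true)); // retrigger/system turn
    assert!(!should_record_decision(false, false)); // reply never delivered
    println!("decision gating ok");
}
```

Calling extraction only when this returns true avoids both the dropped plaintext-fallback decisions and the `system:system` attribution described above.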
139-144: ⚠️ Potential issue | 🟠 Major

Pick the sentence that actually contains the decision.

The marker scan happens on the full reply, but this still keeps the first sentence unconditionally. Replies like `Sounds good. We'll switch to X.` will emit `Sounds good`, which puts acknowledgement noise back into working memory.

Suggested fix

```diff
-    let mut summary = trimmed
-        .split_terminator(['.', '!', '?', '\n'])
-        .find(|sentence| !sentence.trim().is_empty())
-        .unwrap_or(trimmed)
-        .trim()
-        .to_string();
+    let mut summary = trimmed
+        .split_terminator(['.', '!', '?', '\n'])
+        .map(str::trim)
+        .filter(|sentence| !sentence.is_empty())
+        .find(|sentence| {
+            let sentence_lower = sentence.to_ascii_lowercase();
+            explicit_markers
+                .iter()
+                .any(|marker| sentence_lower.contains(marker))
+                || (sentence_lower.contains(" instead of ")
+                    && [
+                        "use ",
+                        "switch",
+                        "adopt ",
+                        "choose ",
+                        "pick ",
+                        "go with ",
+                        "proceed with ",
+                    ]
+                    .iter()
+                    .any(|marker| sentence_lower.contains(marker)))
+        })
+        .unwrap_or(trimmed)
+        .to_string();
```

🤖 Prompt for AI Agents
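The marker-aware selection can be exercised in isolation. This is a sketch, not the PR's implementation: `pick_decision_sentence` is a hypothetical name and the marker list is illustrative, but the sentence splitting mirrors the `split_terminator(['.', '!', '?', '\n'])` chain used above.

```rust
// Prefer the first sentence containing an explicit decision marker;
// fall back to the first non-empty sentence when no marker matches.
fn pick_decision_sentence<'a>(reply: &'a str, markers: &[&str]) -> Option<&'a str> {
    let sentences: Vec<&str> = reply
        .split_terminator(['.', '!', '?', '\n'])
        .map(str::trim)
        .filter(|s| !s.is_empty())
        .collect();
    sentences
        .iter()
        .copied()
        .find(|s| {
            let lower = s.to_ascii_lowercase();
            markers.iter().any(|m| lower.contains(m))
        })
        .or_else(|| sentences.first().copied())
}

fn main() {
    let markers = ["we'll switch to ", "decision:"];
    // The acknowledgement is skipped in favor of the marked sentence.
    let picked = pick_decision_sentence("Sounds good. We'll switch to X.", &markers);
    assert_eq!(picked, Some("We'll switch to X"));
}
```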
Verify each finding against the current code and only fix it if needed. In `@src/agent/channel.rs` around lines 139 - 144, The current code unconditionally picks the first sentence from trimmed via the split_terminator chain (variable summary), which captures acknowledgements like "Sounds good." instead of the actual decision; change this to iterate all sentences from trimmed (using the same split_terminator([...]) iterator) and choose the sentence that likely contains the decision by preferring sentences that include decision/intent markers (e.g., contains words like "will", "we'll", "we will", "going to", "shall", "plan to", "switch to", "use", verbs in future/imperative forms or contains a clear object), falling back to the last non-empty sentence if no marker is found; implement this replacement where summary is set so summary becomes the chosen sentence.trim().to_string() rather than always the first sentence.

src/memory/working.rs (1)
771-779: ⚠️ Potential issue | 🟠 Major

Sanitize participant display names before rendering this system block.

`participant.display_name` can still come from user-controlled author metadata. Writing it raw into `## Participants` lets newlines/control characters escape the intended markdown structure and inject higher-authority prompt text. Normalize it to a single printable line and cap the rendered length before `write!`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/memory/working.rs` around lines 771 - 779, participant.display_name is user-controlled and is written raw into the markdown block; sanitize it first by collapsing whitespace/newlines and stripping control/non-printable chars, then truncate to a safe max length (e.g., 200 chars) before calling write!. Implement this inside the loop (or add a small helper like sanitize_display_name) and use the sanitized value instead of participant.display_name when writing the "**{}**" header (and apply same treatment to participant.profile_summary if needed).
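A minimal sketch of such a sanitizer. The name `sanitize_display_name` and the 200-character cap are assumptions, not taken from the codebase; the point is collapsing whitespace/newlines and dropping control characters before the name reaches the rendered block.

```rust
// Collapse all whitespace (including newlines/tabs) to single spaces,
// strip remaining control characters, and cap the length.
fn sanitize_display_name(raw: &str) -> String {
    raw.split_whitespace() // handles '\n', '\t', and repeated spaces
        .collect::<Vec<_>>()
        .join(" ")
        .chars()
        .filter(|c| !c.is_control()) // drop any leftover control chars
        .take(200) // assumed cap on rendered name length
        .collect()
}

fn main() {
    // A newline can no longer start a fake markdown heading.
    assert_eq!(sanitize_display_name("Alice\n## Admin"), "Alice ## Admin");
    println!("sanitized: {}", sanitize_display_name(" Eve\tOnline "));
}
```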
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/agent/channel.rs`:
- Around line 2224-2225: The code currently wraps possibly-empty strings in
Some(...) for memory_bulletin_text and knowledge_synthesis_text, which prevents
render_channel_prompt_with_links() from falling back when the synthesis is
empty; change those assignments so they produce None for empty strings (e.g.,
set knowledge_synthesis_text to None when
rc.knowledge_synthesis.load().to_string() is empty, and similarly for
memory_bulletin_text) so render_channel_prompt_with_links() can perform its
fallback behavior.
In `@src/conversation/participants.rs`:
- Around line 38-42: The participant key function conflates the base platform
used for HumanDef lookup with the adapter-scoped fallback key; change
participant_memory_key to keep using the base platform for
find_human_for_sender(humans, base_platform, sender_id) but build the fallback
string from the adapter when present
(adapter.as_deref().unwrap_or(base_platform)) instead of the lookup platform.
Add an adapter: Option<&str> parameter (or equivalent) to participant_memory_key
and any other affected functions called in the 54-67 region, use the
adapter-derived string only for the format! fallback
(format!("{adapter_or_base}:{sender_id}")) while leaving the lookup call
unchanged.
---
Outside diff comments:
In `@src/agent/channel.rs`:
- Around line 1590-1602: The batch/coalesced path calls run_agent_turn and then
handle_agent_result but ignores the new reply_text return and therefore skips
decision extraction; modify the coalesced branch that calls run_agent_turn (the
tuple currently bound as (result, skip_flag, replied_flag, _, _)) to capture the
reply_text return, then call extract_decision_summary_from_reply(reply_text,
&conversation_id, ...) exactly as the single-message path does before/after
handle_agent_result so the decision is recorded into working memory; adjust the
tuple binding and ensure the extracted decision summary is passed/stored in the
same way the single-message flow uses.
---
Duplicate comments:
In `@src/agent/channel.rs`:
- Around line 1957-1970: The decision extraction and attribution run too early —
move the call to extract_decision_summary_from_reply and any use of reply_text
so it only happens after an actual reply delivery path (the same place where
plaintext fallback and delivered replies are handled), and ensure participant
attribution (participant_memory_key call using message.source/message.sender_id
and humans) is only performed when !is_retrigger (i.e., avoid computing
system:system keys for retrigger/system turns); in short, defer
extract_decision_summary_from_reply behind the delivery branch and wrap the
participant_memory_key logic with a guard that rejects retrigger/system turns.
- Around line 139-144: The current code unconditionally picks the first sentence
from trimmed via the split_terminator chain (variable summary), which captures
acknowledgements like "Sounds good." instead of the actual decision; change this
to iterate all sentences from trimmed (using the same split_terminator([...])
iterator) and choose the sentence that likely contains the decision by
preferring sentences that include decision/intent markers (e.g., contains words
like "will", "we'll", "we will", "going to", "shall", "plan to", "switch to",
"use", verbs in future/imperative forms or contains a clear object), falling
back to the last non-empty sentence if no marker is found; implement this
replacement where summary is set so summary becomes the chosen
sentence.trim().to_string() rather than always the first sentence.
In `@src/config/load.rs`:
- Around line 1614-1631: The ParticipantContextConfig being built from
toml.defaults.participant_context can accept invalid bounds (e.g.,
min_participants > max_participants); after constructing the
ParticipantContextConfig (or in ParticipantContextConfig::new/validate) check
that min_participants <= max_participants and that token_budget (and other
numeric fields if applicable) are non-negative, and if the check fails return a
config load error (or Err) with a clear message including the offending field
names and values; update the load path that currently uses the
toml.defaults...map(...).unwrap_or(...) to propagate that validation error
instead of silently accepting or disabling the section (refer to
ParticipantContextConfig, min_participants, max_participants, token_budget, and
base_defaults.participant_context).
In `@src/conversation/participants.rs`:
- Around line 113-129: The participant_display_name function currently prefers
metadata["sender_display_name"] over InboundMessage.formatted_author; change the
lookup to prefer formatted_author first and fall back to
metadata["sender_display_name"], keeping the same trimming, empty-string
filtering, and final fallback to message.sender_id; update the chain in
participant_display_name to call message.formatted_author (as_deref -> trim ->
filter -> to_owned) before attempting metadata.get("sender_display_name") so
formatted_author is the primary rendered name.
In `@src/memory/working.rs`:
- Around line 771-779: participant.display_name is user-controlled and is
written raw into the markdown block; sanitize it first by collapsing
whitespace/newlines and stripping control/non-printable chars, then truncate to
a safe max length (e.g., 200 chars) before calling write!. Implement this inside
the loop (or add a small helper like sanitize_display_name) and use the
sanitized value instead of participant.display_name when writing the "**{}**"
header (and apply same treatment to participant.profile_summary if needed).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 4ce377f0-8150-4a15-997b-2e61edb09c9a
📒 Files selected for processing (16)
- docs/design-docs/participant-awareness.md
- docs/design-docs/working-memory.md
- prompts/en/channel.md.j2
- src/agent/channel.rs
- src/agent/channel_history.rs
- src/api/channels.rs
- src/config.rs
- src/config/load.rs
- src/config/runtime.rs
- src/config/toml_schema.rs
- src/config/types.rs
- src/conversation.rs
- src/conversation/participants.rs
- src/memory/working.rs
- src/prompts/engine.rs
- tests/context_dump.rs
✅ Files skipped from review due to trivial changes (3)
- docs/design-docs/participant-awareness.md
- src/api/channels.rs
- src/config.rs
🚧 Files skipped from review as they are similar to previous changes (6)
- prompts/en/channel.md.j2
- src/config/runtime.rs
- src/conversation.rs
- src/config/toml_schema.rs
- docs/design-docs/working-memory.md
- src/prompts/engine.rs
```rust
pub fn participant_memory_key(humans: &[HumanDef], platform: &str, sender_id: &str) -> String {
    find_human_for_sender(humans, platform, sender_id)
        .map(|entry| entry.id.clone())
        .unwrap_or_else(|| format!("{platform}:{sender_id}"))
}
```
Separate lookup platform from participant key scope.
message.source is only the base platform, so using a single platform string for both HumanDef lookup and fallback key generation collapses named adapters onto the same participant key when sender IDs overlap. The decision-event write path in src/agent/channel.rs uses the same scheme, so recent activity will mix across adapters. Keep lookup on the base platform, but derive the stored fallback key from message.adapter.as_deref().unwrap_or(message.source.as_str()).
Also applies to: 54-67
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/conversation/participants.rs` around lines 38 - 42, The participant key
function conflates the base platform used for HumanDef lookup with the
adapter-scoped fallback key; change participant_memory_key to keep using the
base platform for find_human_for_sender(humans, base_platform, sender_id) but
build the fallback string from the adapter when present
(adapter.as_deref().unwrap_or(base_platform)) instead of the lookup platform.
Add an adapter: Option<&str> parameter (or equivalent) to participant_memory_key
and any other affected functions called in the 54-67 region, use the
adapter-derived string only for the format! fallback
(format!("{adapter_or_base}:{sender_id}")) while leaving the lookup call
unchanged.
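The proposed split can be sketched on its own. `fallback_participant_key` is a hypothetical helper; the `HumanDef` lookup by base platform is omitted here, and only the adapter-scoped fallback shape is illustrated.

```rust
// Keep HumanDef lookup keyed by the base platform elsewhere; when no
// human matches, scope the stored fallback key by the adapter if present.
fn fallback_participant_key(base_platform: &str, adapter: Option<&str>, sender_id: &str) -> String {
    let scope = adapter.unwrap_or(base_platform);
    format!("{scope}:{sender_id}")
}

fn main() {
    // Without a named adapter, the base platform scopes the key.
    assert_eq!(fallback_participant_key("discord", None, "42"), "discord:42");
    // With a named adapter, overlapping sender IDs no longer collide.
    assert_eq!(
        fallback_participant_key("discord", Some("discord-work"), "42"),
        "discord-work:42"
    );
}
```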
Review notes — participant context foundation

Nice work on this; the architecture is clean and the test coverage is solid. A few things worth addressing before merge:

Should fix:
Nice to have:
Overall this is in good shape — #1-3 are the ones I'd want addressed before merging.
Addressed @jamiepine review feedback in Fixed:
Verification:
🧹 Nitpick comments (3)
src/agent/channel.rs (3)
116-139: Consider extracting the duplicated explicit markers array.

The `explicit_markers` array is defined identically in both `extract_decision_summary_from_reply` (lines 116-139) and used in `sentence_contains_decision_marker` (lines 91-93). Consider extracting it to a module-level constant to avoid drift and reduce duplication.

♻️ Suggested refactor

```diff
+const DECISION_MARKERS: &[&str] = &[
+    "we decided to ",
+    "i decided to ",
+    "decision:",
+    "the decision is ",
+    "approved: ",
+    "approved to ",
+    "moving forward with ",
+    "move forward with ",
+    "going with ",
+    "switching to ",
+    "we will use ",
+    "i will use ",
+    "we'll use ",
+    "i'll use ",
+    "we will switch to ",
+    "i will switch to ",
+    "we'll switch to ",
+    "i'll switch to ",
+    "we will proceed with ",
+    "i will proceed with ",
+    "we'll proceed with ",
+    "i'll proceed with ",
+];
+
+const CHANGE_COMPARISON_VERBS: &[&str] = &[
+    "use ", "switch", "adopt ", "choose ", "pick ", "go with ", "proceed with ",
+];
+
 fn sentence_contains_decision_marker(sentence: &str, explicit_markers: &[&str]) -> bool {
```

Then use `DECISION_MARKERS` and `CHANGE_COMPARISON_VERBS` in both functions.

Also applies to: 89-106
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/agent/channel.rs` around lines 116 - 139, The explicit_markers array is duplicated between extract_decision_summary_from_reply and sentence_contains_decision_marker; extract that array (and any related arrays like change/comparison verbs) into a module-level constant (e.g., DECISION_MARKERS and CHANGE_COMPARISON_VERBS) and replace the in-function explicit_markers usages with references to those constants so both extract_decision_summary_from_reply and sentence_contains_decision_marker use the shared definitions to avoid drift and duplication.
1747-1764: Inconsistent handling of empty strings in prompt parameters.

`participant_context` passes through `empty_to_none` (line 1764), but `memory_bulletin_text` and `knowledge_synthesis_text` (lines 1749-1750) are passed directly from `render_memory_layers` as `Some(...)`. This inconsistency means empty bulletin/synthesis strings may suppress fallback behavior that `None` would enable.

Consider applying the same `empty_to_none` treatment to bulletin and synthesis, or fix at the source in `render_memory_layers` as noted earlier.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/agent/channel.rs` around lines 1747 - 1764, render_channel_prompt_with_links is being passed memory_bulletin_text and knowledge_synthesis_text directly while participant_context uses empty_to_none, causing empty strings to behave differently; update the call site to wrap memory_bulletin_text and knowledge_synthesis_text with empty_to_none (same helper used for participant_context and working_memory) so empty strings become None and allow fallback behavior, or alternatively adjust render_memory_layers to return None for empty results and keep the call unchanged—locate render_channel_prompt_with_links invocation and modify either the arguments (memory_bulletin_text, knowledge_synthesis_text) to use empty_to_none or the render_memory_layers function to emit Option::None for empty strings.
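For reference, a minimal sketch of the normalization in question, assuming `empty_to_none` has roughly this shape (the actual helper in channel.rs may differ):

```rust
// Map empty or whitespace-only strings to None so downstream template
// rendering can fall back instead of printing an empty section.
fn empty_to_none(value: String) -> Option<String> {
    if value.trim().is_empty() {
        None
    } else {
        Some(value)
    }
}

fn main() {
    assert_eq!(empty_to_none(String::new()), None);
    assert_eq!(empty_to_none("   ".to_string()), None);
    assert_eq!(empty_to_none("synthesis".to_string()), Some("synthesis".to_string()));
}
```

Routing bulletin and synthesis through the same helper keeps all prompt inputs on one empty-handling convention.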
2540-2547: Return tuple is growing large; consider a struct for clarity.

The `run_agent_turn` function now returns a 5-tuple. While this works, a named struct would improve readability and make it easier to add future fields without breaking call sites.

♻️ Optional refactor

```rust
struct AgentTurnResult {
    result: std::result::Result<String, rig::completion::PromptError>,
    skip_flag: crate::tools::SkipFlag,
    replied_flag: crate::tools::RepliedFlag,
    retrigger_reply_preserved: bool,
    reply_text: Option<String>,
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/agent/channel.rs` around lines 2540 - 2547, The return value of run_agent_turn is a growing 5-tuple which harms readability; define a named struct (e.g., AgentTurnResult) with fields matching the tuple (result: Result<String, rig::completion::PromptError>, skip_flag: crate::tools::SkipFlag, replied_flag: crate::tools::RepliedFlag, retrigger_reply_preserved: bool, reply_text: Option<String>) and change run_agent_turn's signature to return Result<AgentTurnResult, ...> (or wrap in the existing Result), then update all call sites and any pattern matches that destructure the previous tuple to use the struct fields (AgentTurnResult::result, ::skip_flag, ::replied_flag, ::retrigger_reply_preserved, ::reply_text) so future additions are non-breaking and code is clearer.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@src/agent/channel.rs`:
- Around line 116-139: The explicit_markers array is duplicated between
extract_decision_summary_from_reply and sentence_contains_decision_marker;
extract that array (and any related arrays like change/comparison verbs) into a
module-level constant (e.g., DECISION_MARKERS and CHANGE_COMPARISON_VERBS) and
replace the in-function explicit_markers usages with references to those
constants so both extract_decision_summary_from_reply and
sentence_contains_decision_marker use the shared definitions to avoid drift and
duplication.
- Around line 1747-1764: render_channel_prompt_with_links is being passed
memory_bulletin_text and knowledge_synthesis_text directly while
participant_context uses empty_to_none, causing empty strings to behave
differently; update the call site to wrap memory_bulletin_text and
knowledge_synthesis_text with empty_to_none (same helper used for
participant_context and working_memory) so empty strings become None and allow
fallback behavior, or alternatively adjust render_memory_layers to return None
for empty results and keep the call unchanged—locate
render_channel_prompt_with_links invocation and modify either the arguments
(memory_bulletin_text, knowledge_synthesis_text) to use empty_to_none or the
render_memory_layers function to emit Option::None for empty strings.
- Around line 2540-2547: The return value of run_agent_turn is a growing 5-tuple
which harms readability; define a named struct (e.g., AgentTurnResult) with
fields matching the tuple (result: Result<String, rig::completion::PromptError>,
skip_flag: crate::tools::SkipFlag, replied_flag: crate::tools::RepliedFlag,
retrigger_reply_preserved: bool, reply_text: Option<String>) and change
run_agent_turn's signature to return Result<AgentTurnResult, ...> (or wrap in
the existing Result), then update all call sites and any pattern matches that
destructure the previous tuple to use the struct fields
(AgentTurnResult::result, ::skip_flag, ::replied_flag,
::retrigger_reply_preserved, ::reply_text) so future additions are non-breaking
and code is clearer.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 10a50999-36a7-4e53-a510-71722b8340ae
📒 Files selected for processing (3)
- src/agent/channel.rs
- src/api/channels.rs
- src/conversation/participants.rs
🚧 Files skipped from review as they are similar to previous changes (2)
- src/api/channels.rs
- src/conversation/participants.rs
f801efc to c375803 (compare)
🧹 Nitpick comments (1)
src/memory/working.rs (1)
749-766: Redundant empty check after `min_participants` guard.

Lines 764-766 check `if participants.is_empty()` after line 759 already returns early when `participants.len() < config.min_participants`. If `min_participants >= 1` (which is the default), the empty check is unreachable. Consider removing it unless you intend to support `min_participants = 0`:

```diff
     if !config.enabled || participants.len() < config.min_participants {
         return Ok(String::new());
     }
     let now = Utc::now();
-    if participants.is_empty() {
-        return Ok(String::new());
-    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/memory/working.rs` around lines 749 - 766, The check for participants.is_empty() at the end of render_participant_context is redundant because the earlier guard returns when participants.len() < config.min_participants; either remove the unreachable if-block (delete the `if participants.is_empty() { return Ok(String::new()); }` lines) or explicitly support min_participants == 0 by adjusting the earlier condition (e.g., change the len comparison) — locate this in the render_participant_context function and apply the chosen fix.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@src/memory/working.rs`:
- Around line 749-766: The check for participants.is_empty() at the end of
render_participant_context is redundant because the earlier guard returns when
participants.len() < config.min_participants; either remove the unreachable
if-block (delete the `if participants.is_empty() { return Ok(String::new()); }`
lines) or explicitly support min_participants == 0 by adjusting the earlier
condition (e.g., change the len comparison) — locate this in the
render_participant_context function and apply the chosen fix.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 39b47854-f570-409f-b4c2-d8a84286cf60
📒 Files selected for processing (16)
- docs/design-docs/participant-awareness.md
- docs/design-docs/working-memory.md
- prompts/en/channel.md.j2
- src/agent/channel.rs
- src/agent/channel_history.rs
- src/api/channels.rs
- src/config.rs
- src/config/load.rs
- src/config/runtime.rs
- src/config/toml_schema.rs
- src/config/types.rs
- src/conversation.rs
- src/conversation/participants.rs
- src/memory/working.rs
- src/prompts/engine.rs
- tests/context_dump.rs
✅ Files skipped from review due to trivial changes (4)
- docs/design-docs/participant-awareness.md
- src/config/toml_schema.rs
- src/conversation.rs
- src/config/types.rs
🚧 Files skipped from review as they are similar to previous changes (6)
- src/config/runtime.rs
- prompts/en/channel.md.j2
- src/config.rs
- src/config/load.rs
- src/prompts/engine.rs
- src/api/channels.rs
Follow-up review cleanup is pushed in Included:
Verification:
Why
Working memory had two related gaps.
First, Layer 4 participant context was useful enough to ship, but the initial implementation put too much responsibility into prompt-time rendering. That made the current config-backed version harder to evolve into the full participant-awareness pipeline.
Second, decision extraction was too eager. Routine acknowledgements could be recorded as durable `Decision` memories, which adds noise and can distort persistence behavior.

Scope
This PR is the foundation slice only. It improves participant-aware prompt context and tightens working-memory decision capture, but it does not add the full humans store or cortex-generated participant summary pipeline yet.
What Changed
- `ParticipantContextConfig` surface, wired through config loading and runtime hot reload, so future participant-awareness work has a stable integration seam.
- `Decision` events, reducing working-memory noise from routine acknowledgements.

Example Usage
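A sketch of what the TOML surface could look like. The keys mirror the `defaults.participant_context` fields resolved in config loading (`enabled`, `min_participants`, `max_participants`, `token_budget`); the values shown are illustrative, not shipped defaults.

```toml
[defaults.participant_context]
enabled = true
min_participants = 2   # suppress the block for 1:1 conversations
max_participants = 8   # should be >= min_participants
token_budget = 400     # cap on rendered participant context
```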
Testing
- `just preflight`
- `just gate-pr`
- `cargo test extracts_decision_summary_from_reply_text --lib`
- `cargo test renderable_participants_respects_minimum_and_ordering --lib`
- `cargo test upgrades_fallback_participant_entry_without_duplicates --lib`
- `cargo test test_participant_context_defaults_and_overrides_resolution --lib`
- `cargo test participant_memory_key_uses_platform_scope_for_unknown_sender --lib`
- `cargo test test_render_participant_context_uses_humans_and_recent_activity --lib`

Notes
`HumanDef` metadata, and recent activity. The DB-backed humans store and summary generation loop are intentionally deferred.

Docs
Note
This commit establishes the foundation for participant-aware prompt context in Spacebot's working memory system. It decouples participant state tracking from prompt-time rendering by introducing dedicated configuration-backed participant context paths, enabling more flexible evolution toward full multi-user awareness. Decision extraction logic is tightened to reduce noise from routine acknowledgements. The changes maintain platform-aware activity scoping across write and read paths, with comprehensive test coverage for participant tracking, decision capture, and context rendering. The participant store and cortex-generated synthesis pipelines remain intentionally deferred for future work.
Written by Tembo for commit ac7ae1d. This will update automatically on new commits.