Merged
Commits
40 commits
3623b4c
refactor: Change scenario prompts in agents/scenarios.py
legend5teve Mar 5, 2026
df2f056
chore: update the suggested routes and destination for Lytton
legend5teve Mar 6, 2026
477b0de
fix: update replay.py to fix key error
legend5teve Mar 6, 2026
0f4ac33
feat: optimize the visualization module for plotting statistic result…
legend5teve Mar 9, 2026
747d1ae
Merge branch 'main' into feat/visualization-module-plots
legend5teve Mar 9, 2026
4f3172f
feat: implement timeline analysis for evacuation in scripts/plot_agen…
legend5teve Mar 11, 2026
aaca505
Merge branch 'main' into feat/visualization-module-plots
legend5teve Mar 11, 2026
1c7ff71
chore: add test cases to cover newly added features; update doc strin…
legend5teve Mar 11, 2026
44bbd68
chore: update plotting scales according to actual KPI scales
legend5teve Mar 11, 2026
a1f935f
feat: log run parameters for plotting modules
legend5teve Mar 11, 2026
077be18
Merge branch 'main' into feat/visualization-module-plots
legend5teve Mar 11, 2026
bc3874e
feat: implement signal conflict modeling, distance-based noise scalin…
legend5teve Mar 16, 2026
d73e226
Merge branch 'main' into feat/visualization-module-plots
legend5teve Mar 16, 2026
ef0602e
feat: extend utility scoring to all scenarios and fix hazard exposure…
legend5teve Mar 16, 2026
73ab269
feat: replace vehicle-count loop with time-based sim end and tune fir…
legend5teve Mar 17, 2026
b357efe
feat: add LLM input-hash caching and parallel predeparture LLM dispatch
legend5teve Mar 18, 2026
edd71e9
feat: add early termination when all agents have evacuated
legend5teve Mar 19, 2026
e0c0695
feat: add prioritized prompt framework and fix arrival-based termination
legend5teve Mar 20, 2026
36014f4
Merge remote-tracking branch 'origin/main' into feat/visualization-mo…
legend5teve Mar 20, 2026
e01c017
feat: add edge-trace replay, departure destination choice, and SUMO n…
legend5teve Mar 22, 2026
7892b89
Merge remote-tracking branch 'origin/main' into feat/visualization-mo…
legend5teve Mar 22, 2026
92db674
fix: record departure destination in metrics and round-robin vehicle …
legend5teve Mar 22, 2026
62eea81
fix: add --sumo-seed CLI arg and make RQ scripts POSIX-compatible
legend5teve Mar 22, 2026
ad162c7
feat: export per-agent profile parameters to JSON at simulation end
legend5teve Mar 23, 2026
01145a9
feat: scale fire proximity thresholds to match wildfire simulation di…
legend5teve Mar 24, 2026
5fbf2a9
feat: add visual fire observation penalty for no_notice agents
legend5teve Mar 24, 2026
f3858cd
fix: use travel time instead of edge count for no_notice exposure sca…
legend5teve Mar 24, 2026
814ebb9
feat: add anti-hallucination factual grounding guard to all LLM polic…
legend5teve Mar 24, 2026
a65814b
feat: add proximity-based fire perception for no_notice agents
legend5teve Mar 24, 2026
dcf53ce
refactor: extract messaging module, add scenario history filtering, a…
legend5teve Mar 30, 2026
ea5e61c
feat: overhaul departure/routing prompts with decision rules, theta_t…
legend5teve Mar 30, 2026
f17ecac
chore: update spawn configuration and SUMO network geometry
legend5teve Mar 30, 2026
eaad12f
feat: externalize map config and add compact spawn format
legend5teve Apr 3, 2026
fcd3cbe
Merge branch 'main' into feat/visualization-module-plots
legend5teve Apr 3, 2026
1185b03
feat: add Halifax map config, building-based spawn generator, and see…
legend5teve Apr 6, 2026
13d2c87
feat: add LLM token usage tracking and total_agents to run metrics
legend5teve Apr 8, 2026
4576e4e
feat: add institutional information delay model and expand run params
legend5teve Apr 13, 2026
3a061e1
feat: emit run_params companion files in experiment sweeps
legend5teve Apr 13, 2026
789227b
feat: multi-run KPI dashboard and robust metrics-glob loading
legend5teve Apr 13, 2026
17faaa0
data: add Halifax spawn group config
legend5teve Apr 13, 2026
3 changes: 3 additions & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -13,3 +13,6 @@ venv/
Wildfire_Proposal.pdf
.idea/
.venv/
Research_Proposal_0317.pptx
sumo/halifax.net.xml
sumo/Halifax_buildings.xml
31 changes: 27 additions & 4 deletions agentevac/agents/agent_state.py
Expand Up @@ -47,11 +47,14 @@ class AgentRuntimeState:
derived fields (entropy, entropy_norm, uncertainty_bucket).
psychology: Scalar summaries derived from the belief (perceived_risk, confidence).
signal_history: Bounded list of recent environment signals (noisy margin observations).
Used by the delay model to replay stale observations.
social_history: Bounded list of recent social signals derived from inbox messages.
decision_history: Bounded list of past decision records (predeparture + routing).
Passed to the LLM as ``agent_self_history`` so agents can avoid repeated mistakes.
observation_history: Bounded list of system-generated local neighborhood observations.
institutional_history: Bounded list of institutional snapshots (forecast + annotated
menu) pushed each decision round. Used by
``information_model.apply_institutional_delay`` to serve stale official
information when ``INFO_DELAY_S > 0``.
has_departed: True once the vehicle has been added to the SUMO simulation.
"""

@@ -65,6 +68,7 @@ class AgentRuntimeState:
social_history: List[Dict[str, Any]] = field(default_factory=list)
decision_history: List[Dict[str, Any]] = field(default_factory=list)
observation_history: List[Dict[str, Any]] = field(default_factory=list)
institutional_history: List[Dict[str, Any]] = field(default_factory=list)
has_departed: bool = True
last_input_hash: Optional[int] = None
last_llm_choice_idx: Optional[int] = None
@@ -237,9 +241,7 @@ def append_signal_history(
) -> None:
"""Append an environment signal record to the agent's signal history.

The history is bounded to ``max_items`` entries (default 16). The delay model in
``information_model.apply_signal_delay`` uses this history to retrieve stale
observations when ``INFO_DELAY_S > 0``.
The history is bounded to ``max_items`` entries (default 16).

Args:
state: The agent whose history to update.
@@ -307,6 +309,26 @@ def append_observation_history(
_append_bounded(state.observation_history, observation, max_items)


def append_institutional_history(
state: AgentRuntimeState,
snapshot: Dict[str, Any],
*,
max_items: int = 16,
) -> None:
"""Append an institutional snapshot to the agent's institutional history.

Each snapshot contains the forecast and annotated destination/route menu
produced for that decision round. Used by ``apply_institutional_delay``
to serve stale official information when ``INFO_DELAY_S > 0``.

Args:
state: The agent whose history to update.
snapshot: Dict with ``forecast`` and ``annotated_menu`` keys.
max_items: Maximum number of snapshots to retain.
"""
_append_bounded(state.institutional_history, snapshot, max_items)
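The `_append_bounded` helper itself is outside this diff; only its documented behavior is visible (histories are bounded to `max_items` entries, oldest-first). A minimal sketch consistent with that behavior — the helper body here is an assumption, not the repository's actual implementation — might look like:

```python
from typing import Any, Dict, List

# Hypothetical sketch of _append_bounded; the real helper is not shown in
# this diff, only its documented contract (keep the max_items newest entries).
def _append_bounded(
    items: List[Dict[str, Any]], item: Dict[str, Any], max_items: int
) -> None:
    items.append(item)
    if len(items) > max_items:
        # Drop the oldest entries so the list stays oldest-first and bounded.
        del items[: len(items) - max_items]

history: List[Dict[str, Any]] = []
for i in range(20):
    _append_bounded(history, {"round": i}, max_items=16)
# history now holds the 16 newest snapshots, rounds 4..19
```

Keeping the list oldest-first matters because `apply_institutional_delay` indexes stale snapshots from the tail with `history[-delay]`.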


def snapshot_agent_state(state: AgentRuntimeState) -> Dict[str, Any]:
"""Serialize an ``AgentRuntimeState`` to a plain dict for logging or replay.

@@ -331,5 +353,6 @@ def snapshot_agent_state(state: AgentRuntimeState) -> Dict[str, Any]:
"social_history": [dict(item) for item in state.social_history],
"decision_history": [dict(item) for item in state.decision_history],
"observation_history": [dict(item) for item in state.observation_history],
"institutional_history": [dict(item) for item in state.institutional_history],
"has_departed": bool(state.has_departed),
}
60 changes: 55 additions & 5 deletions agentevac/agents/information_model.py
@@ -1,14 +1,24 @@
"""Information sensing and social signal processing for evacuation agents.
"""Information sensing, institutional delay, and social signal processing for evacuation agents.

This module handles the two information streams available to each agent each decision round:
This module handles the information streams available to each agent each decision round:

**Environmental signals** (``sample_environment_signal``):
The agent observes the closest fire margin on its current edge and the minimum margin
across the head of its planned route. Gaussian noise (``sigma_info`` metres, std-dev)
is injected to model imperfect sensing — e.g., smoke obscuring visibility or GPS
inaccuracy. An optional delay (``INFO_DELAY_S`` seconds, converted to ``delay_rounds``)
is applied by replaying a stale record from the agent's signal history, simulating
delayed emergency broadcasts or slow rumour propagation.
inaccuracy. Environmental signals are always **real-time** — personal observation
has no institutional delay.

**Institutional delay** (``apply_institutional_delay``):
Official information channels (fire forecasts, route guidance, advisory labels,
expected-utility scores) are subject to ``INFO_DELAY_S`` seconds of institutional
delay. When ``delay_rounds > 0``, the agent receives a stale snapshot from
``delay_rounds`` decision periods ago. When the agent's history is too short
(i.e., fewer rounds have elapsed than the delay), **no institutional information
is available** — the agent operates as if in a ``no_notice`` regime on that channel.
This models the real-world lag in emergency management information production and
dissemination. In ``no_notice`` mode the institutional channel is already invisible,
so the delay has no additional effect.

**Social signals** (``build_social_signal``):
Peer messages from the agent's inbox are parsed with a simple keyword-vote approach.
Expand Down Expand Up @@ -150,6 +160,46 @@ def apply_signal_delay(
return out


def apply_institutional_delay(
history: List[Dict[str, Any]],
delay_rounds: int,
) -> Optional[Dict[str, Any]]:
"""Resolve the institutional snapshot the agent should see this round.

Institutional snapshots contain official forecast and annotated menu data
pushed each decision round. When ``delay_rounds > 0``, the agent receives
the snapshot from ``delay_rounds`` periods ago.

Unlike ``apply_signal_delay``, this function returns ``None`` when the
history is too short — the agent has not yet accumulated enough rounds for
the first delayed report to arrive. The caller should treat ``None`` as
"no institutional information available" and present a ``no_notice``-grade
view to the agent.

Args:
history: The agent's bounded ``institutional_history`` list (oldest-first).
delay_rounds: Number of decision periods of institutional lag.
When 0, returns ``None`` — caller should use the current (real-time)
snapshot directly.

Returns:
A stale institutional snapshot dict when available, or ``None`` when
either ``delay_rounds == 0`` (use current) or history is too short
(information not yet available).
"""
delay = max(0, int(delay_rounds))
if delay <= 0:
# No delay — caller uses the current snapshot directly.
return None
if delay <= len(history):
snapshot = dict(history[-delay])
snapshot["is_delayed"] = True
snapshot["delay_rounds_applied"] = delay
return snapshot
# History too short — institutional information has not arrived yet.
return None
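As a usage sketch of the three cases the docstring describes (the function body is reproduced from the diff above so the example is self-contained; the three-round history is a made-up example):

```python
from typing import Any, Dict, List, Optional

def apply_institutional_delay(
    history: List[Dict[str, Any]],
    delay_rounds: int,
) -> Optional[Dict[str, Any]]:
    # Same logic as the function above, copied here to run standalone.
    delay = max(0, int(delay_rounds))
    if delay <= 0:
        return None  # no delay: caller uses the current snapshot directly
    if delay <= len(history):
        snapshot = dict(history[-delay])
        snapshot["is_delayed"] = True
        snapshot["delay_rounds_applied"] = delay
        return snapshot
    return None  # history too short: information has not arrived yet

history = [{"forecast": f"round-{i}"} for i in range(1, 4)]  # oldest-first
assert apply_institutional_delay(history, 0) is None   # use current snapshot
stale = apply_institutional_delay(history, 2)
assert stale["forecast"] == "round-2" and stale["is_delayed"] is True
assert apply_institutional_delay(history, 5) is None   # not yet available
```

Note that `None` is overloaded: for `delay_rounds == 0` it means "use the real-time snapshot", while for a too-short history it means "no institutional information" — callers must branch on `delay_rounds` before interpreting the return value.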


def sample_environment_signal(
agent_id: str,
sim_t_s: float,