fix: workflow incorrectly marked as completed while nodes are still executing #13
Conversation
Code Review by Qodo
```python
# Start execution and enqueue node
print(f"Starting execution for node: {downstream_node_id}")
self._state_manager.start_execution(downstream_node_id)
```
1. print() in propagate_skip_from_edge 📘 Rule violation ✧ Quality
• The backend workflow traversal code introduces a `print()` statement, which bypasses the standard logging pipeline and makes observability inconsistent in production.
• This violates the requirement to use a module-level logger and prevents consistent log levels, structured output, and the richer context typically required for debugging workflow execution issues.
Agent prompt
## Issue description
A `print()` statement was introduced in backend production code. This violates the requirement to use a module-level logger and results in unstructured output.
## Issue Context
The message is emitted when starting node execution. It should go through the standard logging pipeline (with level, formatting, and optional context).
## Fix Focus Areas
- api/core/workflow/graph_engine/graph_traversal/skip_propagator.py[1-13]
- api/core/workflow/graph_engine/graph_traversal/skip_propagator.py[59-65]
ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
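A minimal sketch of the suggested remediation, assuming a standard `logging` setup. The wrapper function name `start_downstream` is illustrative, not the real `SkipPropagator` method:

```python
import logging

# Module-level logger, per the repository's logging convention
logger = logging.getLogger(__name__)


def start_downstream(downstream_node_id: str) -> None:
    # Replaces the print() call with a leveled, structured log record;
    # lazy %s formatting avoids building the string when the level is off
    logger.debug("Starting execution for node: %s", downstream_node_id)
```

Routing the message through the module-level logger keeps level filtering and formatting consistent with the rest of the backend.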
```python
def get_incoming_edges_side_effect(node_id):
    if node_id == "node_2":
        return [edge1]
    elif node_id == "node_4":
        return [edge3]
    return []

mock_graph.get_incoming_edges.side_effect = get_incoming_edges_side_effect

# Setup get_outgoing_edges to return different values based on node
def get_outgoing_edges_side_effect(node_id):
    if node_id == "node_2":
        return [edge3]
    elif node_id == "node_4":
        return []  # No outgoing edges, stops recursion
    return []
```
2. *_side_effect missing type hints 📘 Rule violation ✓ Correctness
• The test file defines nested helper functions without parameter and return type annotations, violating the requirement that all Python function definitions include explicit type hints.
• Missing annotations reduce static type safety and can let basedpyright/typing checks miss incorrect mock wiring or unexpected return types.
Agent prompt
## Issue description
Two nested helper functions in the new pytest module are missing parameter and return type annotations.
## Issue Context
The repository requires modern Python 3.12+ type syntax for all function defs (including nested helpers).
## Fix Focus Areas
- api/tests/unit_tests/core/workflow/graph_engine/graph_traversal/test_skip_propagator.py[203-222]
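A hedged sketch of what the annotated helpers could look like. Plain strings stand in for the real `Edge` mocks here, so the return type is `list[str]` rather than the test's actual edge type:

```python
# Stand-ins for the Edge objects the real test builds with mocks
edge1, edge3 = "edge_1", "edge_3"


def get_incoming_edges_side_effect(node_id: str) -> list[str]:
    # Annotated version of the nested mock side-effect helper
    if node_id == "node_2":
        return [edge1]
    elif node_id == "node_4":
        return [edge3]
    return []


def get_outgoing_edges_side_effect(node_id: str) -> list[str]:
    if node_id == "node_2":
        return [edge3]
    return []  # No outgoing edges, stops recursion
```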
```diff
     # Mark all outgoing edges as skipped and propagate
     outgoing_edges = self._graph.get_outgoing_edges(node_id)
     for edge in outgoing_edges:
-        self._state_manager.mark_edge_skipped(edge.id)
         # Recursively propagate skip
         self.propagate_skip_from_edge(edge.id)
+        self._state_manager.mark_edge_skipped(edge.id)
```
3. Skip propagation order regressed 🐞 Bug ✓ Correctness
• `_propagate_skip_to_node` now calls `propagate_skip_from_edge(edge.id)` before marking that edge as SKIPPED, so downstream nodes still see that incoming edge as UNKNOWN during analysis.
• Because propagation explicitly stops when any incoming edge is UNKNOWN, this can halt skip propagation prematurely and leave downstream nodes/paths in the wrong state (e.g., not skipped when they should be).
• The same file’s `skip_branch_paths` uses the opposite (correct) order (mark skipped, then propagate), highlighting the inconsistency.
Agent prompt
## Issue description
`_propagate_skip_to_node()` propagates skip to downstream nodes *before* marking the outgoing edge as `SKIPPED`. Since propagation halts when any incoming edge is `UNKNOWN`, downstream skip propagation can stop prematurely.
## Issue Context
`Edge.state` defaults to `UNKNOWN`, and `GraphStateManager.analyze_edge_states()` treats any UNKNOWN incoming edge as a reason to stop propagation.
## Fix Focus Areas
- api/core/workflow/graph_engine/graph_traversal/skip_propagator.py[81-87]
- api/tests/unit_tests/core/workflow/graph_engine/graph_traversal/test_skip_propagator.py[115-151]
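The fix amounts to restoring the mark-then-propagate order. A self-contained toy illustrating why the ordering is observable (the class and method names mirror the snippet above, but this is not the real `SkipPropagator`):

```python
class TinyPropagator:
    """Records the order of state-manager calls during skip propagation."""

    def __init__(self) -> None:
        self.calls: list[tuple[str, str]] = []

    def mark_edge_skipped(self, edge_id: str) -> None:
        self.calls.append(("mark_skipped", edge_id))

    def propagate_skip_from_edge(self, edge_id: str) -> None:
        # In the real engine this re-analyzes downstream incoming edges,
        # so any edge still UNKNOWN at this point halts propagation.
        self.calls.append(("propagate", edge_id))

    def propagate_skip_to_node(self, outgoing_edge_ids: list[str]) -> None:
        for edge_id in outgoing_edge_ids:
            # Mark the edge SKIPPED *before* recursing, so downstream
            # analysis never observes it as UNKNOWN
            self.mark_edge_skipped(edge_id)
            self.propagate_skip_from_edge(edge_id)
```

An ordering assertion like `calls == [("mark_skipped", ...), ("propagate", ...)]` is exactly the kind of check the regressed code would fail.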
```python
self._state_manager.start_execution(downstream_node_id)
self._state_manager.enqueue_node(downstream_node_id)
```
4. Start_execution order inconsistent 🐞 Bug ⛯ Reliability
• `propagate_skip_from_edge` calls `start_execution(node)` before `enqueue_node(node)`, whereas other parts of the engine consistently do `enqueue_node` first, then `start_execution`.
• This ordering creates a transient state where a node is tracked as executing before it is marked TAKEN/queued, making behavior harder to reason about and diverging from established conventions.
Agent prompt
## Issue description
`SkipPropagator.propagate_skip_from_edge()` calls `start_execution()` before `enqueue_node()`, which is inconsistent with the rest of the engine and makes state transitions harder to reason about.
## Issue Context
Elsewhere (GraphEngine start, event handlers), the order is `enqueue_node()` then `start_execution()`.
## Fix Focus Areas
- api/core/workflow/graph_engine/graph_traversal/skip_propagator.py[59-65]
- api/core/workflow/graph_engine/graph_engine.py[331-340]
- api/core/workflow/graph_engine/event_management/event_handlers.py[194-198]
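A toy sketch of the conventional ordering described above. `TinyStateManager` and `dispatch` are illustrative stand-ins, not engine code:

```python
class TinyStateManager:
    """Logs state transitions so call order can be asserted."""

    def __init__(self) -> None:
        self.log: list[str] = []

    def enqueue_node(self, node_id: str) -> None:
        # Queue the node (marks it TAKEN in the real engine)
        self.log.append(f"enqueue:{node_id}")

    def start_execution(self, node_id: str) -> None:
        # Track the node as executing
        self.log.append(f"start:{node_id}")


def dispatch(sm: TinyStateManager, node_id: str) -> None:
    # Conventional order: queue first, then track as executing,
    # so there is no window where a node executes without being queued
    sm.enqueue_node(node_id)
    sm.start_execution(node_id)
```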
```python
# Setup get_incoming_edges to return empty list to stop recursion
mock_graph.get_incoming_edges.return_value = []
```
5. Tests miss regression case 🐞 Bug ✓ Correctness
• The added unit tests don’t exercise the real state-machine interaction that caused the ordering regression, because they mock state analysis and often force recursion to stop by returning empty incoming edges.
• This can let the incorrect skip-propagation ordering slip through even with test coverage present.
Agent prompt
## Issue description
Current unit tests don’t validate the ordering-dependent behavior of skip propagation because they short-circuit recursion and/or mock state analysis.
## Issue Context
The correctness bug depends on real `Edge.state` values (UNKNOWN vs SKIPPED) and `analyze_edge_states` behavior.
## Fix Focus Areas
- api/tests/unit_tests/core/workflow/graph_engine/graph_traversal/test_skip_propagator.py[115-151]
- api/core/workflow/graph_engine/graph_traversal/skip_propagator.py[81-87]
- api/core/workflow/graph_engine/graph_state_manager.py[129-146]
- api/core/workflow/graph/edge.py[7-15]
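One way to make a test ordering-sensitive is to drive it through real edge-state values instead of mocked analysis. A minimal illustration with stand-in types (`EdgeState` and `analyze_edge_states` here are simplified models of the real `Edge.state` / `GraphStateManager.analyze_edge_states`, not the actual API):

```python
from enum import Enum


class EdgeState(Enum):
    UNKNOWN = "unknown"
    SKIPPED = "skipped"


def analyze_edge_states(states: list[EdgeState]) -> str:
    # Mirrors the documented rule: any UNKNOWN incoming edge
    # is a reason to stop skip propagation
    if any(s is EdgeState.UNKNOWN for s in states):
        return "stop"
    return "skip"


def test_propagation_requires_skipped_not_unknown() -> None:
    incoming = [EdgeState.UNKNOWN]
    # Buggy order (propagate before marking): analysis still sees
    # UNKNOWN and halts prematurely
    assert analyze_edge_states(incoming) == "stop"
    # Correct order (mark SKIPPED before propagating): analysis proceeds
    incoming[0] = EdgeState.SKIPPED
    assert analyze_edge_states(incoming) == "skip"
```

Because the test asserts on real state values rather than a mocked `analyze_edge_states`, reversing the mark/propagate order would make it fail.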
Benchmark PR from agentic-review-benchmarks#13