
fix: workflow incorrectly marked as completed while nodes are still executing #13

Open
tomerqodo wants to merge 4 commits into qodo_full_base_fix_workflow_incorrectly_marked_as_completed_while_nodes_are_still_executing_pr13 from qodo_full_head_fix_workflow_incorrectly_marked_as_completed_while_nodes_are_still_executing_pr13

Conversation

@tomerqodo

Benchmark PR from agentic-review-benchmarks#13

@qodo-code-review

Code Review by Qodo

🐞 Bugs (3) 📘 Rule violations (2) 📎 Requirement gaps (0)



Action required

1. print() in propagate_skip_from_edge 📘 Rule violation ✧ Quality
Description
• The backend workflow traversal code introduces a print() statement, which bypasses the standard
  logging pipeline and makes observability inconsistent in production.
• This violates the requirement to use a module-level logger and also prevents consistent log
  levels/structure and richer context typically required for debugging workflow execution issues.
Code

api/core/workflow/graph_engine/graph_traversal/skip_propagator.py[R61-63]

+            # Start execution and enqueue node
+            print(f"Starting execution for node: {downstream_node_id}")
+            self._state_manager.start_execution(downstream_node_id)
Evidence
PR Compliance ID 9 requires using a module-level logger and forbids print() in production
backend code. The added line uses print() in skip_propagator.py, and PR Compliance ID 27 further
requires logging with appropriate context/levels rather than ad-hoc output.

AGENTS.md
api/core/workflow/graph_engine/graph_traversal/skip_propagator.py[61-63]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
A `print()` statement was introduced in backend production code. This violates the requirement to use a module-level logger and results in unstructured output.

## Issue Context
The message is emitted when starting node execution. It should go through the standard logging pipeline (with level, formatting, and optional context).

## Fix Focus Areas
- api/core/workflow/graph_engine/graph_traversal/skip_propagator.py[1-13]
- api/core/workflow/graph_engine/graph_traversal/skip_propagator.py[59-65]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
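As a sketch of the remediation, the message can be routed through a module-level logger instead of print(). The class below is an illustrative subset (the real SkipPropagator has more dependencies), and the helper name `_start_downstream` is hypothetical; only the calls visible in the diff are assumed:

```python
import logging
from typing import Any

# Module-level logger, per the repo's logging convention (PR Compliance ID 9)
logger = logging.getLogger(__name__)


class SkipPropagator:
    """Illustrative subset of the real class; only the diffed calls are assumed."""

    def __init__(self, state_manager: Any) -> None:
        self._state_manager = state_manager

    def _start_downstream(self, downstream_node_id: str) -> None:
        # Leveled, lazily-formatted log call instead of an ad-hoc print()
        logger.debug("Starting execution for node: %s", downstream_node_id)
        self._state_manager.start_execution(downstream_node_id)
```

Lazy %-style formatting avoids building the message string when the level is disabled, and debug (or info) is the appropriate level for per-node trace output.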


2. *_side_effect missing type hints 📘 Rule violation ✓ Correctness
Description
• The test file defines nested helper functions without parameter and return type annotations, which
  violates the requirement that all Python function definitions include explicit type hints.
• Missing annotations reduce static type safety and can cause basedpyright/typing checks to miss
  incorrect mock wiring or unexpected return types.
Code

api/tests/unit_tests/core/workflow/graph_engine/graph_traversal/test_skip_propagator.py[R204-219]

+        def get_incoming_edges_side_effect(node_id):
+            if node_id == "node_2":
+                return [edge1]
+            elif node_id == "node_4":
+                return [edge3]
+            return []
+
+        mock_graph.get_incoming_edges.side_effect = get_incoming_edges_side_effect
+
+        # Setup get_outgoing_edges to return different values based on node
+        def get_outgoing_edges_side_effect(node_id):
+            if node_id == "node_2":
+                return [edge3]
+            elif node_id == "node_4":
+                return []  # No outgoing edges, stops recursion
+            return []
Evidence
PR Compliance ID 7 requires type annotations for every Python function parameter and return value
using modern syntax. The new helper functions get_incoming_edges_side_effect and
get_outgoing_edges_side_effect omit both parameter and return type annotations.

AGENTS.md
api/tests/unit_tests/core/workflow/graph_engine/graph_traversal/test_skip_propagator.py[204-219]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Two nested helper functions in the new pytest module are missing parameter and return type annotations.

## Issue Context
The repository requires modern Python 3.12+ type syntax for all function defs (including nested helpers).

## Fix Focus Areas
- api/tests/unit_tests/core/workflow/graph_engine/graph_traversal/test_skip_propagator.py[203-222]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
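A minimal sketch of the annotated helpers, assuming the real Edge type is importable in the test module (a stub stands in for it here):

```python
class Edge:
    """Stand-in for the codebase's Edge (api/core/workflow/graph/edge.py);
    the actual test would import the real type instead."""

    def __init__(self, edge_id: str) -> None:
        self.id = edge_id


edge1 = Edge("edge_1")
edge3 = Edge("edge_3")


def get_incoming_edges_side_effect(node_id: str) -> list[Edge]:
    if node_id == "node_2":
        return [edge1]
    elif node_id == "node_4":
        return [edge3]
    return []


def get_outgoing_edges_side_effect(node_id: str) -> list[Edge]:
    if node_id == "node_2":
        return [edge3]
    return []  # node_4 and unknown nodes: no outgoing edges, stops recursion
```

Builtin generics (`list[Edge]`) satisfy the modern-syntax requirement without importing `typing.List`.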


3. Skip propagation order regressed 🐞 Bug ✓ Correctness
Description
• _propagate_skip_to_node now calls propagate_skip_from_edge(edge.id) before marking that edge
  as SKIPPED, so downstream nodes will still see that incoming edge as UNKNOWN during analysis.
• Because propagation explicitly stops when any incoming edge is UNKNOWN, this can halt skip
  propagation prematurely and leave downstream nodes/paths in the wrong state (e.g., not skipped when
  they should be).
• The same file’s skip_branch_paths uses the opposite (correct) order (mark skipped, then
  propagate), highlighting the inconsistency.
Code

api/core/workflow/graph_engine/graph_traversal/skip_propagator.py[R81-86]

        # Mark all outgoing edges as skipped and propagate
        outgoing_edges = self._graph.get_outgoing_edges(node_id)
        for edge in outgoing_edges:
-            self._state_manager.mark_edge_skipped(edge.id)
            # Recursively propagate skip
            self.propagate_skip_from_edge(edge.id)
+            self._state_manager.mark_edge_skipped(edge.id)
Evidence
Edge state defaults to UNKNOWN, and analyze_edge_states sets has_unknown=True when any incoming
edge is UNKNOWN; propagate_skip_from_edge returns early on has_unknown. Therefore, propagating
before marking the edge SKIPPED can cause has_unknown to remain true and stop recursion.

api/core/workflow/graph_engine/graph_traversal/skip_propagator.py[55-65]
api/core/workflow/graph_engine/graph_traversal/skip_propagator.py[81-87]
api/core/workflow/graph_engine/graph_state_manager.py[129-146]
api/core/workflow/graph/edge.py[7-15]
api/core/workflow/graph_engine/graph_traversal/skip_propagator.py[88-97]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`_propagate_skip_to_node()` propagates skip to downstream nodes *before* marking the outgoing edge as `SKIPPED`. Since propagation halts when any incoming edge is `UNKNOWN`, downstream skip propagation can stop prematurely.

## Issue Context
`Edge.state` defaults to `UNKNOWN`, and `GraphStateManager.analyze_edge_states()` treats any UNKNOWN incoming edge as a reason to stop propagation.

## Fix Focus Areas
- api/core/workflow/graph_engine/graph_traversal/skip_propagator.py[81-87]
- api/tests/unit_tests/core/workflow/graph_engine/graph_traversal/test_skip_propagator.py[115-151]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
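The effect of the ordering can be seen in a toy model of the propagation rule (all names below are hypothetical stand-ins; the real logic lives in SkipPropagator and GraphStateManager.analyze_edge_states): a node is skipped only once none of its incoming edges is UNKNOWN.

```python
from enum import Enum


class EdgeState(Enum):
    UNKNOWN = "unknown"
    SKIPPED = "skipped"


# Tiny chain n1 -> n2 -> n3; each edge maps to (source, target)
EDGES = {"e1": ("n1", "n2"), "e2": ("n2", "n3")}


def run(mark_before_propagating: bool) -> set[str]:
    """Return the set of nodes skipped under the chosen ordering."""
    state = {e: EdgeState.UNKNOWN for e in EDGES}
    skipped: set[str] = set()

    def incoming(node: str) -> list[str]:
        return [e for e, (_, tgt) in EDGES.items() if tgt == node]

    def outgoing(node: str) -> list[str]:
        return [e for e, (src, _) in EDGES.items() if src == node]

    def propagate_from_edge(edge: str) -> None:
        target = EDGES[edge][1]
        if any(state[e] is EdgeState.UNKNOWN for e in incoming(target)):
            return  # has_unknown: edge states unresolved, stop propagation
        skipped.add(target)
        skip_node(target)

    def skip_node(node: str) -> None:
        for edge in outgoing(node):
            if mark_before_propagating:  # correct order
                state[edge] = EdgeState.SKIPPED
                propagate_from_edge(edge)
            else:  # regressed order: edge still UNKNOWN during analysis
                propagate_from_edge(edge)
                state[edge] = EdgeState.SKIPPED

    skip_node("n1")
    return skipped


# Mark-before-propagate reaches n2 and n3; the regressed order stalls at
# the very first has_unknown check and skips nothing downstream.
```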



Remediation recommended

4. start_execution order inconsistent 🐞 Bug ⛯ Reliability
Description
• propagate_skip_from_edge calls start_execution(node) before enqueue_node(node), whereas
  other parts of the engine consistently do enqueue_node first, then start_execution.
• This ordering creates a transient state where a node is tracked as executing before it is marked
  TAKEN / queued, making behavior harder to reason about and diverging from established conventions.
Code

api/core/workflow/graph_engine/graph_traversal/skip_propagator.py[R63-64]

+            self._state_manager.start_execution(downstream_node_id)
            self._state_manager.enqueue_node(downstream_node_id)
Evidence
The engine’s common pattern is to enqueue first then mark execution tracking. The skip propagator
now does the reverse, despite enqueue_node being the operation that sets NodeState.TAKEN and
adds to the ready queue.

api/core/workflow/graph_engine/graph_traversal/skip_propagator.py[59-65]
api/core/workflow/graph_engine/graph_engine.py[331-340]
api/core/workflow/graph_engine/event_management/event_handlers.py[194-200]
api/core/workflow/graph_engine/graph_state_manager.py[42-55]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`SkipPropagator.propagate_skip_from_edge()` calls `start_execution()` before `enqueue_node()`, which is inconsistent with the rest of the engine and makes state transitions harder to reason about.

## Issue Context
Elsewhere (GraphEngine start, event handlers), the order is `enqueue_node()` then `start_execution()`.

## Fix Focus Areas
- api/core/workflow/graph_engine/graph_traversal/skip_propagator.py[59-65]
- api/core/workflow/graph_engine/graph_engine.py[331-340]
- api/core/workflow/graph_engine/event_management/event_handlers.py[194-198]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
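One way to lock in the conventional ordering is to pin it with a mock, so any future reordering fails a test. This is a hedged sketch: `start_downstream` is a hypothetical free function standing in for the loop body, and the state-manager method names come from the PR diff, not a verified API:

```python
from unittest.mock import MagicMock, call


def start_downstream(state_manager: MagicMock, node_id: str) -> None:
    # Conventional engine order: queue first, then track execution
    state_manager.enqueue_node(node_id)      # sets NodeState.TAKEN, adds to ready queue
    state_manager.start_execution(node_id)   # then begin execution tracking


state_manager = MagicMock()
start_downstream(state_manager, "node_9")

# MagicMock records child calls in order, so the convention is assertable:
assert state_manager.mock_calls == [
    call.enqueue_node("node_9"),
    call.start_execution("node_9"),
]
```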



Advisory comments

5. Tests miss regression case 🐞 Bug ✓ Correctness
Description
• The added unit tests don’t exercise the real state-machine interaction that caused the ordering
  regression, because they mock state analysis and often force recursion to stop by returning empty
  incoming edges.
• This can let the incorrect skip-propagation ordering slip through even with test coverage present.
Code

api/tests/unit_tests/core/workflow/graph_engine/graph_traversal/test_skip_propagator.py[R134-136]

+        # Setup get_incoming_edges to return empty list to stop recursion
+        mock_graph.get_incoming_edges.return_value = []
+
Evidence
In the key test for _propagate_skip_to_node, get_incoming_edges is forced to return [], which
avoids exercising analyze_edge_states with real Edge.state values (UNKNOWN vs SKIPPED). That
means ordering of mark_edge_skipped vs propagation is not validated.

api/tests/unit_tests/core/workflow/graph_engine/graph_traversal/test_skip_propagator.py[134-136]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Current unit tests don’t validate the ordering-dependent behavior of skip propagation because they short-circuit recursion and/or mock state analysis.

## Issue Context
The correctness bug depends on real `Edge.state` values (UNKNOWN vs SKIPPED) and `analyze_edge_states` behavior.

## Fix Focus Areas
- api/tests/unit_tests/core/workflow/graph_engine/graph_traversal/test_skip_propagator.py[115-151]
- api/core/workflow/graph_engine/graph_traversal/skip_propagator.py[81-87]
- api/core/workflow/graph_engine/graph_state_manager.py[129-146]
- api/core/workflow/graph/edge.py[7-15]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
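A sketch of an order-sensitive regression check (hypothetical harness; a real test would construct SkipPropagator with actual Edge objects rather than this simulation). The idea is to record, at the moment propagation recurses into an edge, whether mark_edge_skipped had already been called for it:

```python
from unittest.mock import MagicMock

marked: list[str] = []
seen_marked_at_propagation: dict[str, bool] = {}

state_manager = MagicMock()
state_manager.mark_edge_skipped.side_effect = marked.append


def fake_propagate(edge_id: str) -> None:
    # Correct ordering requires the edge to be SKIPPED before recursion
    seen_marked_at_propagation[edge_id] = edge_id in marked


# Simulate the fixed loop body of _propagate_skip_to_node:
for edge_id in ["e1", "e2"]:
    state_manager.mark_edge_skipped(edge_id)
    fake_propagate(edge_id)

assert all(seen_marked_at_propagation.values())
```

Unlike the current tests, a check of this shape fails if the loop body ever reverts to propagate-then-mark.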



