refactor(workflow): add Jinja2 renderer abstraction for template transform #154

Open
tomerqodo wants to merge 6 commits into qodo_claude_vs_qodo_base_refactorworkflow_add_jinja2_renderer_abstraction_for_template_transform_pr3 from qodo_claude_vs_qodo_head_refactorworkflow_add_jinja2_renderer_abstraction_for_template_transform_pr3

Conversation

@tomerqodo

Benchmark PR from agentic-review-benchmarks#3

laipz8200 and others added 6 commits January 25, 2026 12:04
…de and threaded it through DifyNodeFactory so TemplateTransform nodes receive the dependency by default, keeping behavior unchanged unless an override is provided. Changes are in `api/core/workflow/nodes/template_transform/template_transform_node.py` and `api/core/workflow/nodes/node_factory.py`.

**Commits**
- chore(workflow): identify TemplateTransform dependency on CodeExecutor
- feat(workflow): add CodeExecutor constructor injection to TemplateTransformNode (defaulting to current behavior)
- feat(workflow): inject CodeExecutor from DifyNodeFactory when creating TemplateTransform nodes

**Tests**
- Not run (not requested)

Next step: run `make lint` and `make type-check` if you want to validate the backend checks.
…Transform to use it, keeping CodeExecutor as the default adapter while preserving current behavior. Updates are in `api/core/workflow/nodes/template_transform/template_renderer.py`, `api/core/workflow/nodes/template_transform/template_transform_node.py`, `api/core/workflow/nodes/node_factory.py`, and `api/tests/unit_tests/core/workflow/nodes/template_transform/template_transform_node_spec.py`.

Commit-style summary:
- feat(template-transform): add Jinja2 template renderer abstraction with CodeExecutor adapter
- refactor(template-transform): use renderer in node/factory and update unit test patches

Tests not run (not requested).
…ode creation to return TemplateTransformNode directly for template-transform nodes in `api/core/workflow/nodes/node_factory.py`.

Commit-style summary:
- refactor(template-transform): derive TemplateRenderError from ValueError
- refactor(node-factory): instantiate TemplateTransformNode directly with injected renderer

Tests not run (not requested).
…ts/core/workflow/nodes/template_transform/template_transform_node_spec.py`)

chore(type-check): ran `make type-check` (basedpyright clean, 0 errors)

No errors reported.
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@qodo-code-review

Review Summary by Qodo

Add Jinja2 renderer abstraction for template transform nodes

✨ Enhancement 🧪 Tests


Walkthroughs

Description
• Introduce Jinja2 template renderer abstraction protocol with CodeExecutor adapter
• Inject template renderer dependency into TemplateTransformNode via DifyNodeFactory
• Refactor TemplateTransformNode to use renderer abstraction instead of direct CodeExecutor calls
• Add 30+ comprehensive unit tests covering edge cases and Jinja2 features
Diagram
flowchart LR
  A["DifyNodeFactory"] -->|"injects template_renderer"| B["TemplateTransformNode"]
  C["Jinja2TemplateRenderer<br/>Protocol"] -->|"implemented by"| D["CodeExecutorJinja2TemplateRenderer"]
  D -->|"wraps"| E["CodeExecutor"]
  B -->|"uses"| D
  F["TemplateRenderError<br/>ValueError"] -->|"raised by"| D

File Changes

1. api/core/workflow/nodes/template_transform/template_renderer.py ✨ Enhancement +40/-0

New Jinja2 renderer abstraction with CodeExecutor adapter

• Created new file with Jinja2TemplateRenderer protocol defining render_template interface
• Implemented CodeExecutorJinja2TemplateRenderer adapter wrapping
 CodeExecutor.execute_workflow_code_template
• Added TemplateRenderError exception inheriting from ValueError for template rendering failures
• Adapter handles CodeExecutionError conversion and validates rendered output is string type

api/core/workflow/nodes/template_transform/template_renderer.py
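The protocol-plus-adapter shape described above can be sketched as follows. The names `Jinja2TemplateRenderer` and `TemplateRenderError` come from the PR; the `render_template` signature and the `InMemoryJinja2TemplateRenderer` stand-in are illustrative assumptions (the real adapter delegates to `CodeExecutor.execute_workflow_code_template`):

```python
from typing import Any, Protocol


class TemplateRenderError(ValueError):
    """Raised when template rendering fails or violates the str contract."""


class Jinja2TemplateRenderer(Protocol):
    """Render interface TemplateTransformNode depends on."""

    def render_template(self, template: str, inputs: dict[str, Any]) -> str: ...


class InMemoryJinja2TemplateRenderer:
    """Hypothetical stand-in implementation for illustration only;
    it performs naive substitution instead of real Jinja2 rendering."""

    def render_template(self, template: str, inputs: dict[str, Any]) -> str:
        rendered = template
        for key, value in inputs.items():
            # Replace "{{ key }}" placeholders with the stringified value.
            rendered = rendered.replace("{{ %s }}" % key, str(value))
        return rendered
```

Any object with a matching `render_template` method satisfies the protocol, which is what lets the factory swap in the CodeExecutor-backed adapter or a test double.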


2. api/core/workflow/nodes/template_transform/template_transform_node.py Refactor +37/-14

Refactor node to use renderer abstraction

• Added constructor injection of Jinja2TemplateRenderer with default
 CodeExecutorJinja2TemplateRenderer
• Refactored _run method to use renderer.render_template instead of direct CodeExecutor calls
• Simplified error handling to catch TemplateRenderError instead of CodeExecutionError
• Moved output length validation after successful rendering for cleaner logic flow

api/core/workflow/nodes/template_transform/template_transform_node.py


3. api/core/workflow/nodes/node_factory.py ✨ Enhancement +16/-0

Inject template renderer into factory

• Added imports for Jinja2TemplateRenderer, CodeExecutorJinja2TemplateRenderer, and
 TemplateTransformNode
• Added template_renderer parameter to DifyNodeFactory constructor with default
 CodeExecutorJinja2TemplateRenderer
• Added explicit node creation branch for NodeType.TEMPLATE_TRANSFORM to inject renderer dependency
• Maintains backward compatibility by defaulting to CodeExecutor-based renderer

api/core/workflow/nodes/node_factory.py


4. api/tests/unit_tests/core/workflow/nodes/template_transform/template_transform_node_spec.py 🧪 Tests +756/-19

Update tests and add comprehensive coverage

• Updated all existing test patches from CodeExecutor.execute_workflow_code_template to
 CodeExecutorJinja2TemplateRenderer.render_template
• Changed mock return values from dict with "result" key to direct string returns
• Updated exception mocking from CodeExecutionError to TemplateRenderError
• Added 30+ new comprehensive test cases covering boolean values, nested dicts, filters, loops,
 special characters, unicode, edge cases

api/tests/unit_tests/core/workflow/nodes/template_transform/template_transform_node_spec.py




qodo-code-review Bot commented Mar 10, 2026

Code Review by Qodo

🐞 Bugs (2) 📘 Rule violations (3) 📎 Requirement gaps (0)



Action required

1. New tests lack type hints 📘 Rule violation ✓ Correctness
Description
New pytest test functions were added without parameter and return type annotations. This violates
the requirement that all Python function definitions include modern Python 3.12+ type annotations.
Code

api/tests/unit_tests/core/workflow/nodes/template_transform/template_transform_node_spec.py[R437-438]

+    def test_run_with_boolean_values(self, mock_execute, mock_graph, mock_graph_runtime_state, graph_init_params):
+        """Test _run with boolean variable values."""
Evidence
PR Compliance ID 1 requires type annotations for every function parameter and return type; the
newly-added test function definition includes untyped parameters and no return type annotation.

AGENTS.md
api/tests/unit_tests/core/workflow/nodes/template_transform/template_transform_node_spec.py[437-438]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Newly added pytest test functions are missing parameter and return type annotations, violating the project requirement for modern Python 3.12+ typing.

## Issue Context
The PR adds many new test functions in `template_transform_node_spec.py`; these `def` statements introduce untyped parameters (e.g., mocks/fixtures) and omit `-> None`.

## Fix Focus Areas
- api/tests/unit_tests/core/workflow/nodes/template_transform/template_transform_node_spec.py[434-1151]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


2. Test file exceeds 800 lines 📘 Rule violation ⛯ Reliability
Description
A Python file under api/ now exceeds 800 lines due to the large test additions. This violates the
file size limit requirement and harms readability and reviewability.
Code

api/tests/unit_tests/core/workflow/nodes/template_transform/template_transform_node_spec.py[R1000-1006]

+        assert result.status == WorkflowNodeExecutionStatus.SUCCEEDED
+        assert result.outputs["output"] == "Population: 8000000000"
+
+    @patch(
+        "core.workflow.nodes.template_transform.template_transform_node.CodeExecutorJinja2TemplateRenderer.render_template"
+    )
+    def test_run_with_mixed_types_in_list(self, mock_execute, mock_graph, mock_graph_runtime_state, graph_init_params):
Evidence
PR Compliance ID 4 caps api/ Python files at <800 lines; the updated test file has new lines
numbered well above 800 (e.g., 1000+), indicating it exceeds the limit after this PR.

AGENTS.md
api/tests/unit_tests/core/workflow/nodes/template_transform/template_transform_node_spec.py[1000-1006]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
A Python file under `api/` exceeds the 800-line limit after adding many new tests.

## Issue Context
The test module grew to at least ~1151 lines (based on new line numbers in the diff), violating the repository constraint for Python files under `api/`.

## Fix Focus Areas
- api/tests/unit_tests/core/workflow/nodes/template_transform/template_transform_node_spec.py[430-1151]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


3. Broken TemplateTransform tests 🐞 Bug ✓ Correctness
Description
New unit tests instantiate TemplateTransformNode with graph=... (unsupported) and pass config as
raw node-data without required id/data, which will raise TypeError/ValueError before assertions
execute. This blocks CI and prevents validating the new renderer abstraction.
Code

api/tests/unit_tests/core/workflow/nodes/template_transform/template_transform_node_spec.py[R451-457]

+        node = TemplateTransformNode(
+            id="test_node",
+            config=node_data,
+            graph_init_params=graph_init_params,
+            graph=mock_graph,
+            graph_runtime_state=mock_graph_runtime_state,
+        )
Evidence
The added tests pass an unexpected graph keyword argument and a config dict that lacks the
id/data wrapper required by the Node base class; TemplateTransformNode’s new explicit __init__
does not accept graph and forwards config directly to Node.__init__, which requires
config['id'] and config['data'].

api/tests/unit_tests/core/workflow/nodes/template_transform/template_transform_node_spec.py[434-457]
api/core/workflow/nodes/template_transform/template_transform_node.py[26-41]
api/core/workflow/nodes/base/node.py[223-235]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
New unit tests for `TemplateTransformNode` pass an unsupported `graph` keyword argument and pass `config` as raw node data (missing `id`/`data` wrapper). This causes `TypeError`/`ValueError` during construction, preventing the test suite from running.

### Issue Context
`TemplateTransformNode.__init__` forwards `config` to `Node.__init__`, which requires `config['id']` and `config['data']`. The newly added tests should mirror production construction (via `DifyNodeFactory`), which uses a node-config wrapper.

### Fix Focus Areas
- api/tests/unit_tests/core/workflow/nodes/template_transform/template_transform_node_spec.py[430-470]
- api/tests/unit_tests/core/workflow/nodes/template_transform/template_transform_node_spec.py[451-457]

### What to change
- Remove `graph=mock_graph` from all `TemplateTransformNode(...)` calls in this spec.
- Replace `config=node_data` with `config={"id": "test_node", "data": {**node_data, "type": "template-transform"}}` (and include `version` if required by other tests).
- Consider adding a small helper fixture like `make_node_config(node_id, data)` to avoid repeating the wrapper logic across the many added tests.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
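The helper suggested in the last bullet could be as small as the sketch below. `make_node_config` is the hypothetical name proposed by the review, and the `"type": "template-transform"` default mirrors the wrapper shape described above:

```python
from typing import Any


def make_node_config(node_id: str, data: dict[str, Any]) -> dict[str, Any]:
    """Hypothetical test helper wrapping raw node data into the
    id/data structure that Node.__init__ expects."""
    return {"id": node_id, "data": {**data, "type": "template-transform"}}
```

Each test would then call `TemplateTransformNode(id="test_node", config=make_node_config("test_node", node_data), ...)` instead of repeating the wrapper dict inline.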


4. Renderer returns None 🐞 Bug ✓ Correctness
Description
CodeExecutorJinja2TemplateRenderer.render_template is annotated to return str but can return
None when the executor result lacks a result key or sets it to null. TemplateTransformNode then
calls len(rendered) and returns rendered in outputs, which will crash or violate the node output
contract if None occurs.
Code

api/core/workflow/nodes/template_transform/template_renderer.py[R37-40]

+        rendered = result.get("result")
+        if rendered is not None and not isinstance(rendered, str):
+            raise TemplateRenderError("Template render result must be a string.")
+        return rendered
Evidence
The adapter uses result.get('result') without rejecting None, and returns it despite a -> str
signature. TemplateTransformNode assumes a string by calling len(rendered); the factory explicitly
supports injecting a custom executor/renderer, so malformed or nonconforming results can propagate
to this crash point.

api/core/workflow/nodes/template_transform/template_renderer.py[29-40]
api/core/workflow/nodes/template_transform/template_transform_node.py[67-80]
api/core/workflow/nodes/node_factory.py[37-64]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`CodeExecutorJinja2TemplateRenderer.render_template()` is typed as returning `str` but can return `None` via `result.get('result')`. `TemplateTransformNode._run()` calls `len(rendered)` and emits `rendered` in outputs, which will raise `TypeError` or produce invalid outputs if `None` is returned.

### Issue Context
The node factory supports injecting custom executors/renderers. The renderer adapter should strictly enforce its contract and never return `None`.

### Fix Focus Areas
- api/core/workflow/nodes/template_transform/template_renderer.py[29-40]
- api/core/workflow/nodes/template_transform/template_transform_node.py[67-80]

### What to change
- In `render_template`, treat missing/None results as an error:
 - `rendered = result.get("result")`
 - `if not isinstance(rendered, str): raise TemplateRenderError(...)`
 - `return rendered`
 (This covers both `None` and non-`str` values.)
- (Optional) Add a defensive guard in `TemplateTransformNode._run()` to fail gracefully if a renderer violates the protocol contract.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
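The strict guard suggested above can be isolated as a small sketch. A single `isinstance` check rejects both `None` and non-string values in one branch; `extract_rendered` is a hypothetical name for illustration, since the real logic lives inside `CodeExecutorJinja2TemplateRenderer.render_template`:

```python
from typing import Any


class TemplateRenderError(ValueError):
    """Raised when the rendered result violates the str contract."""


def extract_rendered(result: dict[str, Any]) -> str:
    """Enforce the renderer contract: the executor result must carry
    a string under the 'result' key, never None or another type."""
    rendered = result.get("result")
    if not isinstance(rendered, str):
        raise TemplateRenderError("Template render result must be a string.")
    return rendered
```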



Remediation recommended

5. Tests not AAA structured 📘 Rule violation ✓ Correctness
Description
New pytest tests were added without clear Arrange/Act/Assert phase separation. This reduces test
readability and violates the required AAA structure.
Code

api/tests/unit_tests/core/workflow/nodes/template_transform/template_transform_node_spec.py[R437-463]

+    def test_run_with_boolean_values(self, mock_execute, mock_graph, mock_graph_runtime_state, graph_init_params):
+        """Test _run with boolean variable values."""
+        node_data = {
+            "title": "Boolean Template",
+            "variables": [{"variable": "is_active", "value_selector": ["sys", "active_status"]}],
+            "template": "{% if is_active %}Active{% else %}Inactive{% endif %}",
+        }
+
+        mock_status = MagicMock()
+        mock_status.to_object.return_value = True
+
+        mock_graph_runtime_state.variable_pool.get.return_value = mock_status
+        mock_execute.return_value = "Active"
+
+        node = TemplateTransformNode(
+            id="test_node",
+            config=node_data,
+            graph_init_params=graph_init_params,
+            graph=mock_graph,
+            graph_runtime_state=mock_graph_runtime_state,
+        )
+
+        result = node._run()
+
+        assert result.status == WorkflowNodeExecutionStatus.SUCCEEDED
+        assert result.outputs["output"] == "Active"
+
Evidence
PR Compliance ID 7 requires tests to be clearly organized into Arrange, Act, and Assert sections;
the added test mixes setup, execution, and assertions without explicit AAA separation.

AGENTS.md
api/tests/unit_tests/core/workflow/nodes/template_transform/template_transform_node_spec.py[437-463]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
New pytest tests do not clearly separate Arrange/Act/Assert phases.

## Issue Context
Multiple added tests follow a single flow without explicit AAA sectioning; the compliance rule requires clear phase separation for maintainability.

## Fix Focus Areas
- api/tests/unit_tests/core/workflow/nodes/template_transform/template_transform_node_spec.py[434-1151]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
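The AAA layout this rule asks for can be illustrated with a minimal stand-in rather than the real `TemplateTransformNode` (which needs mocks and fixtures); `render_status` is a trivial hypothetical function used only to show the phase separation:

```python
def render_status(is_active: bool) -> str:
    """Trivial stand-in for the template rendering under test."""
    return "Active" if is_active else "Inactive"


def test_render_status_with_boolean_values() -> None:
    # Arrange: set up the input state for the scenario.
    is_active = True

    # Act: perform the single operation under test.
    result = render_status(is_active)

    # Assert: verify the outcome.
    assert result == "Active"
```

The same three comment markers, applied to the quoted `test_run_with_boolean_values`, would separate the node-data/mock setup, the `node._run()` call, and the status/output assertions.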


