refactor(workflow): add Jinja2 renderer abstraction for template transform #154
Conversation
…de and threaded it through DifyNodeFactory so TemplateTransform nodes receive the dependency by default, keeping behavior unchanged unless an override is provided. Changes are in `api/core/workflow/nodes/template_transform/template_transform_node.py` and `api/core/workflow/nodes/node_factory.py`.

**Commits**
- chore(workflow): identify TemplateTransform dependency on CodeExecutor
- feat(workflow): add CodeExecutor constructor injection to TemplateTransformNode (defaulting to current behavior)
- feat(workflow): inject CodeExecutor from DifyNodeFactory when creating TemplateTransform nodes

**Tests**
- Not run (not requested)

Next step: run `make lint` and `make type-check` if you want to validate the backend checks.
…Transform to use it, keeping CodeExecutor as the default adapter while preserving current behavior. Updates are in `api/core/workflow/nodes/template_transform/template_renderer.py`, `api/core/workflow/nodes/template_transform/template_transform_node.py`, `api/core/workflow/nodes/node_factory.py`, and `api/tests/unit_tests/core/workflow/nodes/template_transform/template_transform_node_spec.py`.

Commit-style summary:
- feat(template-transform): add Jinja2 template renderer abstraction with CodeExecutor adapter
- refactor(template-transform): use renderer in node/factory and update unit test patches

Tests not run (not requested).
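The renderer abstraction and adapter described in this comment might look roughly like the sketch below. This is illustrative only: beyond the class names mentioned in the thread (`Jinja2TemplateRenderer`, `CodeExecutorJinja2TemplateRenderer`, `TemplateRenderError`), the method signature and the `execute_fn` callback are assumptions standing in for the real CodeExecutor call.

```python
from collections.abc import Callable
from typing import Any, Protocol


class TemplateRenderError(ValueError):
    """Raised when a template cannot be rendered to a string."""


class Jinja2TemplateRenderer(Protocol):
    """Protocol for anything that renders a Jinja2 template to a string."""

    def render_template(self, template: str, inputs: dict[str, Any]) -> str: ...


class CodeExecutorJinja2TemplateRenderer:
    """Adapter that delegates rendering to a CodeExecutor-style backend."""

    def __init__(self, execute_fn: Callable[[str, dict[str, Any]], dict[str, Any]]) -> None:
        # execute_fn stands in for the real CodeExecutor call; it is assumed
        # to return a payload dict such as {"result": "<rendered text>"}.
        self._execute_fn = execute_fn

    def render_template(self, template: str, inputs: dict[str, Any]) -> str:
        result = self._execute_fn(template, inputs)
        rendered = result.get("result")
        # Enforce the str return contract instead of leaking None upward.
        if not isinstance(rendered, str):
            raise TemplateRenderError("Template render result must be a string.")
        return rendered
```

Because the node depends only on the protocol, tests and the factory can swap in any object with a matching `render_template` method.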
…ode creation to return TemplateTransformNode directly for template-transform nodes in `api/core/workflow/nodes/node_factory.py`.

Commit-style summary:
- refactor(template-transform): derive TemplateRenderError from ValueError
- refactor(node-factory): instantiate TemplateTransformNode directly with injected renderer

Tests not run (not requested).
…ts/core/workflow/nodes/template_transform/template_transform_node_spec.py`)

chore(type-check): ran `make type-check` (basedpyright clean, 0 errors).
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Review Summary by Qodo

Add Jinja2 renderer abstraction for template transform nodes
Walkthrough

Description
• Introduce Jinja2 template renderer abstraction protocol with CodeExecutor adapter
• Inject template renderer dependency into TemplateTransformNode via DifyNodeFactory
• Refactor TemplateTransformNode to use renderer abstraction instead of direct CodeExecutor calls
• Add 30+ comprehensive unit tests covering edge cases and Jinja2 features

Diagram

```mermaid
flowchart LR
    A["DifyNodeFactory"] -->|"injects template_renderer"| B["TemplateTransformNode"]
    C["Jinja2TemplateRenderer<br/>Protocol"] -->|"implemented by"| D["CodeExecutorJinja2TemplateRenderer"]
    D -->|"wraps"| E["CodeExecutor"]
    B -->|"uses"| D
    F["TemplateRenderError<br/>ValueError"] -->|"raised by"| D
```
File Changes

1. `api/core/workflow/nodes/template_transform/template_renderer.py`
Code Review by Qodo
```python
def test_run_with_boolean_values(self, mock_execute, mock_graph, mock_graph_runtime_state, graph_init_params):
    """Test _run with boolean variable values."""
```
1. New tests lack type hints 📘 Rule violation ✓ Correctness
New pytest test functions were added without parameter and return type annotations. This violates the requirement that all Python function definitions include modern Python 3.12+ type annotations.
Agent Prompt
## Issue description
Newly added pytest test functions are missing parameter and return type annotations, violating the project requirement for modern Python 3.12+ typing.
## Issue Context
The PR adds many new test functions in `template_transform_node_spec.py`; these `def` statements introduce untyped parameters (e.g., mocks/fixtures) and omit `-> None`.
## Fix Focus Areas
- api/tests/unit_tests/core/workflow/nodes/template_transform/template_transform_node_spec.py[434-1151]
```python
assert result.status == WorkflowNodeExecutionStatus.SUCCEEDED
assert result.outputs["output"] == "Population: 8000000000"


@patch(
    "core.workflow.nodes.template_transform.template_transform_node.CodeExecutorJinja2TemplateRenderer.render_template"
)
def test_run_with_mixed_types_in_list(self, mock_execute, mock_graph, mock_graph_runtime_state, graph_init_params):
```
2. Test file exceeds 800 lines 📘 Rule violation ⛯ Reliability
A Python file under api/ now exceeds 800 lines due to the large test additions. This violates the file size limit requirement and harms readability and reviewability.
Agent Prompt
## Issue description
A Python file under `api/` exceeds the 800-line limit after adding many new tests.
## Issue Context
The test module grew to at least ~1151 lines (based on new line numbers in the diff), violating the repository constraint for Python files under `api/`.
## Fix Focus Areas
- api/tests/unit_tests/core/workflow/nodes/template_transform/template_transform_node_spec.py[430-1151]
```python
def test_run_with_boolean_values(self, mock_execute, mock_graph, mock_graph_runtime_state, graph_init_params):
    """Test _run with boolean variable values."""
    node_data = {
        "title": "Boolean Template",
        "variables": [{"variable": "is_active", "value_selector": ["sys", "active_status"]}],
        "template": "{% if is_active %}Active{% else %}Inactive{% endif %}",
    }

    mock_status = MagicMock()
    mock_status.to_object.return_value = True

    mock_graph_runtime_state.variable_pool.get.return_value = mock_status
    mock_execute.return_value = "Active"

    node = TemplateTransformNode(
        id="test_node",
        config=node_data,
        graph_init_params=graph_init_params,
        graph=mock_graph,
        graph_runtime_state=mock_graph_runtime_state,
    )

    result = node._run()

    assert result.status == WorkflowNodeExecutionStatus.SUCCEEDED
    assert result.outputs["output"] == "Active"
```
3. Tests not AAA structured 📘 Rule violation ✓ Correctness
New pytest tests were added without clear Arrange/Act/Assert phase separation. This reduces test readability and violates the required AAA structure.
Agent Prompt
## Issue description
New pytest tests do not clearly separate Arrange/Act/Assert phases.
## Issue Context
Multiple added tests follow a single flow without explicit AAA sectioning; the compliance rule requires clear phase separation for maintainability.
## Fix Focus Areas
- api/tests/unit_tests/core/workflow/nodes/template_transform/template_transform_node_spec.py[434-1151]
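For instance, explicit phase comments make the required separation visible. This is a schematic sketch of the pattern, not one of the project's actual tests; the payload shape mirrors the executor result dict discussed elsewhere in this review.

```python
def test_extracts_rendered_string() -> None:
    # Arrange: build the fake executor payload the adapter would receive
    executor_payload = {"result": "Active"}

    # Act: pull the rendered value the way the adapter does
    rendered = executor_payload.get("result")

    # Assert: check only the behavior under test
    assert rendered == "Active"
```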
```python
node = TemplateTransformNode(
    id="test_node",
    config=node_data,
    graph_init_params=graph_init_params,
    graph=mock_graph,
    graph_runtime_state=mock_graph_runtime_state,
)
```
4. Broken TemplateTransform tests 🐞 Bug ✓ Correctness
New unit tests instantiate TemplateTransformNode with graph=... (unsupported) and pass config as raw node-data without required id/data, which will raise TypeError/ValueError before assertions execute. This blocks CI and prevents validating the new renderer abstraction.
Agent Prompt
### Issue description
New unit tests for `TemplateTransformNode` pass an unsupported `graph` keyword argument and pass `config` as raw node data (missing `id`/`data` wrapper). This causes `TypeError`/`ValueError` during construction, preventing the test suite from running.
### Issue Context
`TemplateTransformNode.__init__` forwards `config` to `Node.__init__`, which requires `config['id']` and `config['data']`. The newly added tests should mirror production construction (via `DifyNodeFactory`), which uses a node-config wrapper.
### Fix Focus Areas
- api/tests/unit_tests/core/workflow/nodes/template_transform/template_transform_node_spec.py[430-470]
- api/tests/unit_tests/core/workflow/nodes/template_transform/template_transform_node_spec.py[451-457]
### What to change
- Remove `graph=mock_graph` from all `TemplateTransformNode(...)` calls in this spec.
- Replace `config=node_data` with `config={"id": "test_node", "data": {**node_data, "type": "template-transform"}}` (and include `version` if required by other tests).
- Consider adding a small helper fixture like `make_node_config(node_id, data)` to avoid repeating the wrapper logic across the many added tests.
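The helper suggested in the prompt could be sketched as below. `make_node_config` is the hypothetical name the prompt proposes; the wrapper keys follow its description of the `Node.__init__` contract (`config['id']` and `config['data']`).

```python
from typing import Any


def make_node_config(node_id: str, data: dict[str, Any]) -> dict[str, Any]:
    """Wrap raw node data in the {id, data} shape Node.__init__ expects,
    tagging it with the template-transform node type."""
    return {"id": node_id, "data": {**data, "type": "template-transform"}}
```

Tests would then pass `config=make_node_config("test_node", node_data)` and drop the unsupported `graph=mock_graph` keyword.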
```python
rendered = result.get("result")
if rendered is not None and not isinstance(rendered, str):
    raise TemplateRenderError("Template render result must be a string.")
return rendered
```
5. Renderer returns none 🐞 Bug ✓ Correctness
CodeExecutorJinja2TemplateRenderer.render_template is annotated to return str but can return None when the executor result lacks a result key or sets it to null. TemplateTransformNode then calls len(rendered) and returns rendered in outputs, which will crash or violate the node output contract if None occurs.
Agent Prompt
### Issue description
`CodeExecutorJinja2TemplateRenderer.render_template()` is typed as returning `str` but can return `None` via `result.get('result')`. `TemplateTransformNode._run()` calls `len(rendered)` and emits `rendered` in outputs, which will raise `TypeError` or produce invalid outputs if `None` is returned.
### Issue Context
The node factory supports injecting custom executors/renderers. The renderer adapter should strictly enforce its contract and never return `None`.
### Fix Focus Areas
- api/core/workflow/nodes/template_transform/template_renderer.py[29-40]
- api/core/workflow/nodes/template_transform/template_transform_node.py[67-80]
### What to change
- In `render_template`, treat missing/None results as an error:
- `rendered = result.get("result")`
- `if not isinstance(rendered, str): raise TemplateRenderError(...)`
- `return rendered`
(This covers both `None` and non-`str` values.)
- (Optional) Add a defensive guard in `TemplateTransformNode._run()` to fail gracefully if a renderer violates the protocol contract.
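A standalone sketch of the strict check the prompt asks for. `extract_rendered` is a hypothetical helper name used so the snippet is self-contained; in the real adapter this logic would sit inside `render_template`.

```python
from typing import Any


class TemplateRenderError(ValueError):
    """Raised when the executor payload has a missing or non-string result."""


def extract_rendered(result: dict[str, Any]) -> str:
    """Enforce the renderer's str contract on an executor result payload."""
    rendered = result.get("result")
    # A single isinstance check rejects both None (missing key or explicit
    # null) and wrong-typed values, so callers never see a non-string.
    if not isinstance(rendered, str):
        raise TemplateRenderError("Template render result must be a string.")
    return rendered
```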
Benchmark PR from agentic-review-benchmarks#3