Changes from all commits
105 commits
96f296f
Add mempcy and memset library nodes for expansion
ThrudPrimrose Sep 9, 2025
69cae77
Merge branch 'main' into memcpy_map_to_libnode_pass
ThrudPrimrose Sep 18, 2025
58d3d81
Add tests
ThrudPrimrose Sep 18, 2025
716f926
Fix lbi
ThrudPrimrose Sep 18, 2025
b998a3f
Fix
ThrudPrimrose Sep 18, 2025
5e7f13b
Fix yield edge
ThrudPrimrose Sep 18, 2025
f479f34
Fix yield edge for states
ThrudPrimrose Sep 18, 2025
8a4e60a
Merge branch 'yield-edge-fix' into memcpy_map_to_libnode_pass
ThrudPrimrose Sep 18, 2025
ea91324
F
ThrudPrimrose Sep 18, 2025
7c244eb
Fix
ThrudPrimrose Sep 18, 2025
6c2f2eb
Fix
ThrudPrimrose Sep 18, 2025
6185d8b
Fix things
ThrudPrimrose Sep 18, 2025
b7c95f7
Finalize pass cleanup
ThrudPrimrose Sep 19, 2025
936dd26
Fix
ThrudPrimrose Sep 22, 2025
b56da93
Refactor
ThrudPrimrose Sep 22, 2025
c5659f9
Refactor
ThrudPrimrose Sep 22, 2025
6c2f6ad
Rm unnecessary test
ThrudPrimrose Sep 22, 2025
0d99997
Run refactor
ThrudPrimrose Sep 22, 2025
cc4068c
Naming fixes, copyright
ThrudPrimrose Sep 22, 2025
fc8cb69
Fix
ThrudPrimrose Sep 23, 2025
864525c
Array dimension utility extension
ThrudPrimrose Nov 2, 2025
d68f46b
is packed and is contigous functions
ThrudPrimrose Nov 2, 2025
5089fa6
Run precommit
ThrudPrimrose Nov 2, 2025
b60a74c
Copyright and documentation
ThrudPrimrose Nov 2, 2025
be9b6a8
Update
ThrudPrimrose Nov 2, 2025
5e58dae
Merge branch 'is_packed_storage_utility' into memcpy_map_to_libnode_pass
ThrudPrimrose Nov 2, 2025
d2912d6
Update
ThrudPrimrose Nov 2, 2025
cf26a0f
Update, improve support for dynamic in connectors
ThrudPrimrose Nov 2, 2025
6ccb8b3
Add environment
ThrudPrimrose Nov 2, 2025
40c45f8
Asignment map to kernel fixes
ThrudPrimrose Nov 9, 2025
82a71b6
Fix minor issue in tests
ThrudPrimrose Nov 9, 2025
4cf3a8c
Attempt fixes
ThrudPrimrose Nov 10, 2025
291e621
Fix stuff
ThrudPrimrose Nov 10, 2025
bc3a8b9
Disable autoopt for the assignment / memcpy map to libnode stuff
ThrudPrimrose Nov 10, 2025
fe01c11
Fixes
ThrudPrimrose Nov 10, 2025
f8b6f13
things
ThrudPrimrose Nov 10, 2025
f519b26
Extend cases supported by the explicit copy transformations
ThrudPrimrose Jan 6, 2026
e483a25
Refactor, use views+
ThrudPrimrose Jan 6, 2026
4d7156f
Refactor
ThrudPrimrose Jan 6, 2026
5b3adae
Prep
ThrudPrimrose Jan 6, 2026
09e29e6
Refactor
ThrudPrimrose Jan 6, 2026
14de39e
Merge branch 'explicit-gpu-global-copies' into explicit-streams
ThrudPrimrose Jan 6, 2026
95fc7ec
Add things
ThrudPrimrose Jan 6, 2026
d31158e
Add
ThrudPrimrose Jan 7, 2026
b94dcb0
Add tests
ThrudPrimrose Jan 7, 2026
8a754c3
Extensions
ThrudPrimrose Jan 7, 2026
dad01d3
Fix bug
ThrudPrimrose Jan 7, 2026
551ffaa
Merge branch 'explicit-gpu-global-copies' into explicit-streams
ThrudPrimrose Jan 7, 2026
db71ff2
Fix
ThrudPrimrose Jan 7, 2026
e38016c
Fix
ThrudPrimrose Jan 7, 2026
831724d
Merge branch 'explicit-gpu-global-copies' into explicit-streams
ThrudPrimrose Jan 7, 2026
944db27
Check for GPU outputs in current stream generation
ThrudPrimrose Jan 7, 2026
c77ea55
Fix cpp codegen
ThrudPrimrose Jan 7, 2026
4b52240
Merge branch 'explicit-gpu-global-copies' into explicit-streams
ThrudPrimrose Jan 7, 2026
e29ca86
Refactor
ThrudPrimrose Jan 7, 2026
cab1fa7
Merge branch 'main' into memcpy_map_to_libnode_pass
ThrudPrimrose Jan 7, 2026
194f600
Update memcpy and memset nodes
ThrudPrimrose Jan 7, 2026
5600161
Fixes
ThrudPrimrose Jan 7, 2026
283430a
Cleanup
ThrudPrimrose Jan 9, 2026
4300797
Refactor GPU stream handling in cpp.py
ThrudPrimrose Jan 9, 2026
6ed2621
Refactor
ThrudPrimrose Jan 9, 2026
425d652
refactor
ThrudPrimrose Jan 9, 2026
0ed6406
Precommit
ThrudPrimrose Jan 9, 2026
70141b9
Merge branch 'explicit-gpu-global-copies' into explicit-streams
ThrudPrimrose Jan 9, 2026
bdf51a2
Fix dace current stream name conflict for old codegen compat
ThrudPrimrose Jan 9, 2026
690be16
Implement comments, minor refactor, fix inlining impacting tests
ThrudPrimrose Jan 9, 2026
1dac422
Merge branch 'main' into explicit-gpu-global-copies
ThrudPrimrose Jan 11, 2026
faf8ed0
Merge branch 'explicit-gpu-global-copies' into explicit-streams
ThrudPrimrose Jan 11, 2026
f762d5f
Merge branch 'main' into explicit-gpu-global-copies
ThrudPrimrose Jan 23, 2026
925a3e5
Merge
ThrudPrimrose Jan 23, 2026
02b4182
Merge branch 'main' into explicit-gpu-global-copies
ThrudPrimrose Jan 23, 2026
5ae9ca0
Merge branch 'explicit-gpu-global-copies' into explicit-streams
ThrudPrimrose Jan 23, 2026
c88c993
Fix to gpu stream dtype
ThrudPrimrose Jan 24, 2026
bca117c
Add dtype
ThrudPrimrose Jan 24, 2026
9dd1605
Rm gpu def
ThrudPrimrose Jan 24, 2026
a3ddb2d
Rm gpu def
ThrudPrimrose Jan 24, 2026
3e83c74
Merge branch 'main' into explicit-gpu-global-copies
ThrudPrimrose Jan 28, 2026
923d4e1
Merge branch 'main' into memcpy_map_to_libnode_pass
ThrudPrimrose Jan 28, 2026
155ea94
Refactor
ThrudPrimrose Jan 28, 2026
c9cc32d
Update dace/transformation/passes/gpu_specialization/helpers/copy_str…
ThrudPrimrose Jan 28, 2026
6aa4c27
Update dace/transformation/passes/gpu_specialization/helpers/copy_str…
ThrudPrimrose Jan 28, 2026
e12d7fa
Merge
ThrudPrimrose Apr 13, 2026
cfb8c02
Update according to PR comments
ThrudPrimrose Apr 13, 2026
9aaec0e
Merge main
ThrudPrimrose Apr 13, 2026
08f1c79
Merge
ThrudPrimrose Apr 13, 2026
b170009
Merge branch 'main' into memcpy_map_to_libnode_pass
ThrudPrimrose Apr 15, 2026
8ddd63f
Add utilities and more copy nodes
ThrudPrimrose Apr 15, 2026
bd34ebe
Refactor
ThrudPrimrose Apr 15, 2026
6078dfd
Update dace/transformation/passes/assignment_and_copy_kernel_to_memse…
ThrudPrimrose Apr 15, 2026
1f53797
Update dace/libraries/standard/environments/cpu.py
ThrudPrimrose Apr 15, 2026
3678f55
Refactor
ThrudPrimrose Apr 16, 2026
718bdcb
Merge branch 'memcpy_map_to_libnode_pass' of github.com:spcl/dace int…
ThrudPrimrose Apr 16, 2026
e29d88a
Fix patterns
ThrudPrimrose Apr 16, 2026
4faf9ac
Refactor tests
ThrudPrimrose Apr 16, 2026
6ccefb7
Merge branch 'main' into memcpy_map_to_libnode_pass
ThrudPrimrose Apr 20, 2026
df5c6f5
Refactor
ThrudPrimrose Apr 20, 2026
774e8e0
Merge branch 'memcpy_map_to_libnode_pass' into explicit-gpu-global-co…
ThrudPrimrose Apr 20, 2026
fe4370e
Update
ThrudPrimrose Apr 20, 2026
695f274
Update GPU specialization passes to use copy nodes
ThrudPrimrose Apr 20, 2026
b006026
Update tests
ThrudPrimrose Apr 20, 2026
e251fb1
Refactor
ThrudPrimrose Apr 20, 2026
fc27b90
Check no other subset
ThrudPrimrose Apr 20, 2026
2142406
Merge
ThrudPrimrose Apr 20, 2026
84d07ec
Fix
ThrudPrimrose Apr 20, 2026
bfc97e1
Clean save calls
ThrudPrimrose Apr 20, 2026
4 changes: 3 additions & 1 deletion dace/codegen/dispatcher.py
@@ -27,6 +27,7 @@ class DefinedType(attr_enum.ExtensibleAttributeEnum):
Object = auto() # An object moved by reference
Stream = auto() # A stream object moved by reference and accessed via a push/pop API
StreamArray = auto() # An array of Streams
GPUStream = auto() # A backend GPU stream handle (e.g., cudaStream_t / hipStream_t)


class DefinedMemlets:
@@ -91,7 +92,8 @@ def add(self, name: str, dtype: DefinedType, ctype: str, ancestor: int = 0, allo
for _, scope, can_access_parent in reversed(self._scopes):
if name in scope:
err_str = "Shadowing variable {} from type {} to {}".format(name, scope[name], dtype)
if (allow_shadowing or config.Config.get_bool("compiler", "allow_shadowing")):
if (allow_shadowing or config.Config.get_bool("compiler", "allow_shadowing")
or dtype == DefinedType.GPUStream):
if not allow_shadowing:
print("WARNING: " + err_str)
else:
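The dispatcher change above relaxes the shadowing check for GPU stream variables. A minimal sketch of the resulting predicate (not the real DaCe API; names here are illustrative): shadowing remains an error unless allowed explicitly or via the `compiler.allow_shadowing` config, except that `GPUStream` definitions may always shadow, since the stream variable is re-declared in nested scopes.

```python
# Illustrative sketch of the relaxed shadowing rule in DefinedMemlets.add:
# GPUStream definitions bypass the usual shadowing error.
def may_shadow(allow_shadowing: bool, config_allows: bool, dtype: str) -> bool:
    # Shadowing is permitted if requested, globally enabled, or the
    # variable is a GPU stream handle (re-declared per scope).
    return allow_shadowing or config_allows or dtype == "GPUStream"

print(may_shadow(False, False, "GPUStream"))  # True
print(may_shadow(False, False, "Pointer"))    # False
```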
16 changes: 10 additions & 6 deletions dace/codegen/targets/cpp.py
@@ -216,14 +216,17 @@ def memlet_copy_to_absolute_strides(dispatcher: 'TargetDispatcher',

def is_cuda_codegen_in_device(framecode) -> bool:
"""
Check the state of the CUDA code generator, whether it is inside device code.
Check the state of the (Experimental) CUDA code generator, whether it is inside device code.
"""
from dace.codegen.targets.cuda import CUDACodeGen

cudaClass = CUDACodeGen

if framecode is None:
cuda_codegen_in_device = False
else:
for codegen in framecode.targets:
if isinstance(codegen, CUDACodeGen):
if isinstance(codegen, cudaClass):
cuda_codegen_in_device = codegen._in_device_code
break
else:
@@ -246,11 +249,9 @@ def ptr(name: str, desc: data.Data, sdfg: SDFG = None, framecode: 'DaCeCodeGener
root = name.split('.')[0]
if root in sdfg.arrays and isinstance(sdfg.arrays[root], data.Structure):
name = name.replace('.', '->')

# Special case: If memory is persistent and defined in this SDFG, add state
# struct to name
if (desc.transient and desc.lifetime in (dtypes.AllocationLifetime.Persistent, dtypes.AllocationLifetime.External)):

if desc.storage == dtypes.StorageType.CPU_ThreadLocal: # Use unambiguous name for thread-local arrays
return f'__{sdfg.cfg_id}_{name}'
elif not is_cuda_codegen_in_device(framecode): # GPU kernels cannot access state
@@ -805,9 +806,12 @@ def unparse_cr(sdfg, wcr_ast, dtype):
def connected_to_gpu_memory(node: nodes.Node, state: SDFGState, sdfg: SDFG):
for e in state.all_edges(node):
path = state.memlet_path(e)
if ((isinstance(path[0].src, nodes.AccessNode)
and path[0].src.desc(sdfg).storage is dtypes.StorageType.GPU_Global)):
if (((isinstance(path[0].src, nodes.AccessNode)
and path[0].src.desc(sdfg).storage is dtypes.StorageType.GPU_Global))
or ((isinstance(path[-1].dst, nodes.AccessNode)
and path[-1].dst.desc(sdfg).storage is dtypes.StorageType.GPU_Global))):
return True

return False


28 changes: 23 additions & 5 deletions dace/codegen/targets/cpu.py
@@ -502,6 +502,14 @@ def allocate_array(self,

return
elif (nodedesc.storage == dtypes.StorageType.Register):
# This assignment is necessary to unify explicit streams with the streams
# declared through the state of the SDFG.
if nodedesc.dtype == dtypes.gpuStream_t:
ctype = dtypes.gpuStream_t.ctype
allocation_stream.write(f"{ctype}* {name} = __state->gpu_context->streams;")
define_var(name, DefinedType.Pointer, ctype)
return

ctypedef = dtypes.pointer(nodedesc.dtype).ctype
if nodedesc.start_offset != 0:
raise NotImplementedError('Start offset unsupported for registers')
@@ -577,6 +585,9 @@ def deallocate_array(self, sdfg: SDFG, cfg: ControlFlowRegion, dfg: StateSubgrap

if isinstance(nodedesc, (data.Scalar, data.View, data.Stream, data.Reference)):
return
elif nodedesc.dtype == dtypes.gpuStream_t:
callsite_stream.write(f"{alloc_name} = nullptr;")
return
elif (nodedesc.storage == dtypes.StorageType.CPU_Heap
or (nodedesc.storage == dtypes.StorageType.Register and
(symbolic.issymbolic(arrsize, sdfg.constants) or
@@ -994,6 +1005,11 @@ def process_out_memlets(self,
dst_edge = dfg.memlet_path(edge)[-1]
dst_node = dst_edge.dst

if isinstance(dst_node, nodes.AccessNode) and dst_node.desc(state).dtype == dtypes.gpuStream_t:
# Special case: GPU stream edges do not represent data flow - they assign GPU streams to kernels/tasks.
# Thus, nothing needs to be written, and out memlets of this kind should be ignored.
continue

# Target is neither a data nor a tasklet node
if isinstance(node, nodes.AccessNode) and (not isinstance(dst_node, nodes.AccessNode)
and not isinstance(dst_node, nodes.CodeNode)):
@@ -1035,8 +1051,7 @@ def process_out_memlets(self,
# Tasklet -> array with a memlet. Writing to array is emitted only if the memlet is not empty
if isinstance(node, nodes.CodeNode) and not edge.data.is_empty():
if not uconn:
raise SyntaxError("Cannot copy memlet without a local connector: {} to {}".format(
str(edge.src), str(edge.dst)))
return

conntype = node.out_connectors[uconn]
is_scalar = not isinstance(conntype, dtypes.pointer)
@@ -1254,7 +1269,6 @@ def memlet_definition(self,
# Dynamic WCR memlets start uninitialized
result += "{} {};".format(memlet_type, local_name)
defined = DefinedType.Scalar

else:
if not memlet.dynamic:
if is_scalar:
@@ -1289,8 +1303,12 @@ def memlet_definition(self,
memlet_type = ctypedef
result += "{} &{} = {};".format(memlet_type, local_name, expr)
defined = DefinedType.Stream
else:
raise TypeError("Unknown variable type: {}".format(var_type))

# Set the defined type for GPU stream connectors;
# shadowing of the stream variable needs to be allowed.
if memlet_type == 'gpuStream_t':
var_type = DefinedType.GPUStream
defined = DefinedType.GPUStream

if defined is not None:
self._dispatcher.defined_vars.add(local_name, defined, memlet_type, allow_shadowing=allow_shadowing)
4 changes: 2 additions & 2 deletions dace/config_schema.yml
@@ -291,7 +291,7 @@ required:
type: str
title: Arguments
description: Compiler argument flags
default: '-fPIC -Wall -Wextra -O3 -march=native -ffast-math -Wno-unused-parameter -Wno-unused-label'
default: '-fopenmp -fPIC -Wall -Wextra -O3 -march=native -ffast-math -Wno-unused-parameter -Wno-unused-label'
default_Windows: '/O2 /fp:fast /arch:AVX2 /D_USRDLL /D_WINDLL /D__restrict__=__restrict'

libs:
@@ -349,7 +349,7 @@
Additional CUDA architectures (separated by commas)
to compile GPU code for, excluding the current
architecture on the compiling machine.
default: '60'
default: '86'

hip_arch:
type: str
9 changes: 6 additions & 3 deletions dace/dtypes.py
@@ -87,6 +87,8 @@ class ScheduleType(ExtensibleAttributeEnum):
StorageType.GPU_Shared,
]

GPU_KERNEL_ACCESSIBLE_STORAGES = [StorageType.GPU_Global, StorageType.GPU_Shared, StorageType.Register]


class ReductionType(Enum):
""" Reduction types natively supported by the SDFG compiler. """
@@ -176,7 +178,7 @@ class TilingType(Enum):
ScheduleType.GPU_ThreadBlock: StorageType.Register,
ScheduleType.GPU_ThreadBlock_Dynamic: StorageType.Register,
ScheduleType.SVE_Map: StorageType.CPU_Heap,
ScheduleType.Snitch: StorageType.Snitch_TCDM
ScheduleType.Snitch: StorageType.Snitch_TCDM,
}

# Maps from ScheduleType to default ScheduleType for sub-scopes
@@ -193,7 +195,7 @@ class TilingType(Enum):
ScheduleType.GPU_ThreadBlock_Dynamic: ScheduleType.Sequential,
ScheduleType.SVE_Map: ScheduleType.Sequential,
ScheduleType.Snitch: ScheduleType.Snitch,
ScheduleType.Snitch_Multicore: ScheduleType.Snitch_Multicore
ScheduleType.Snitch_Multicore: ScheduleType.Snitch_Multicore,
}

# Maps from StorageType to a preferred ScheduleType for helping determine schedules.
@@ -1184,6 +1186,7 @@ class complex128(_DaCeArray, npt.NDArray[numpy.complex128]): ...
class string(_DaCeArray, npt.NDArray[numpy.str_]): ...
class vector(_DaCeArray, npt.NDArray[numpy.void]): ...
class MPI_Request(_DaCeArray, npt.NDArray[numpy.void]): ...
class gpuStream_t(_DaCeArray, npt.NDArray[numpy.void]): ...
# yapf: enable
else:
# Runtime definitions
@@ -1204,7 +1207,7 @@ class MPI_Request(_DaCeArray, npt.NDArray[numpy.void]): ...
complex128 = typeclass(numpy.complex128)
string = stringtype()
MPI_Request = opaque('MPI_Request')

gpuStream_t = opaque('gpuStream_t')
_bool = bool


Expand Down
1 change: 1 addition & 0 deletions dace/libraries/standard/environments/__init__.py
@@ -1,3 +1,4 @@
# Copyright 2019-2023 ETH Zurich and the DaCe authors. All rights reserved.
from .cuda import CUDA
from .hptt import HPTT
from .cpu import CPU
21 changes: 21 additions & 0 deletions dace/libraries/standard/environments/cpu.py
@@ -0,0 +1,21 @@
# Copyright 2019-2026 ETH Zurich and the DaCe authors. All rights reserved.
import dace.library


@dace.library.environment
class CPU:

cmake_minimum_version = None
cmake_packages = []
cmake_variables = {}
cmake_includes = []
cmake_libraries = []
cmake_compile_flags = []
cmake_link_flags = []
cmake_files = []

headers = {'frame': ["cstring"]}
state_fields = []
init_code = ""
finalize_code = ""
dependencies = []
2 changes: 1 addition & 1 deletion dace/libraries/standard/environments/cuda.py
@@ -14,7 +14,7 @@ class CUDA:
cmake_link_flags = []
cmake_files = []

headers = []
headers = {'frame': ["cuda_runtime.h"]}
state_fields = []
init_code = ""
finalize_code = ""
76 changes: 76 additions & 0 deletions dace/libraries/standard/helper.py
@@ -0,0 +1,76 @@
# Copyright 2019-2026 ETH Zurich and the DaCe authors. All rights reserved.
"""
Shared helpers for CopyLibraryNode and MemsetLibraryNode expansions: subset
stride collapsing (used to size nested-SDFG data descriptors from memlet
subsets) and dynamic map-range input promotion.
"""
from typing import List

import dace
import copy


def collapse_shape_and_strides(subset: List[dace.subsets.Range], strides: List[dace.symbolic.SymExpr]):
"""Remove singleton dimensions (length 1) from a subset/stride pair.

The resulting strides describe the access pattern of the subset as a
view into the parent array, so each parent stride is scaled by the
subset's step (``stride * s``). For unit-step subsets this is a
no-op; for strided subsets it yields the effective per-element
distance in the underlying memory.
"""
collapsed_shape = []
collapsed_strides = []
for (b, e, s), stride in zip(subset, strides):
length = (e + 1 - b) // s
if length != 1:
collapsed_shape.append(length)
collapsed_strides.append(stride * s)
return collapsed_shape, collapsed_strides
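A standalone sketch of the collapsing logic with plain integers (the real helper operates on DaCe subset ranges and symbolic strides): singleton dimensions are dropped and each surviving stride is scaled by the subset's step.

```python
# Plain-integer mirror of collapse_shape_and_strides: drop length-1
# dimensions from a (begin, end, step) subset and scale strides by step.
def collapse_shape_and_strides(subset, strides):
    collapsed_shape, collapsed_strides = [], []
    for (b, e, s), stride in zip(subset, strides):
        length = (e + 1 - b) // s
        if length != 1:
            collapsed_shape.append(length)
            collapsed_strides.append(stride * s)
    return collapsed_shape, collapsed_strides

# A [0:1, 0:10, 5:6] subset of a row-major array with strides (200, 20, 1):
# the two singleton dimensions vanish, keeping only the middle extent.
shape, strides = collapse_shape_and_strides(
    [(0, 0, 1), (0, 9, 1), (5, 5, 1)],  # (begin, end, step) per dimension
    [200, 20, 1])
print(shape, strides)  # [10] [20]
```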


def add_dynamic_inputs(dynamic_inputs, sdfg: dace.SDFG, subset: dace.subsets.Range, state: dace.SDFGState):
"""Promote dynamic map-range inputs to SDFG-level data descriptors.

For each dynamic input not already present in the SDFG (e.g., a
runtime-determined array dimension), the function adds the descriptor,
renames existing symbolic references with a ``sym_`` prefix, and
inserts a pre-assignment state that reads the concrete value into the
symbol. If no promotion is needed, the SDFG is left unchanged.
Returns the collapsed (non-singleton) map lengths after substitution.
"""
pre_assignments = dict()
map_lengths = [dace.symbolic.SymExpr((e + 1 - b) // s) for (b, e, s) in subset]

for dynamic_input_name, datadesc in dynamic_inputs.items():
if dynamic_input_name in sdfg.arrays:
continue

if dynamic_input_name in sdfg.symbols:
continue

sdfg.replace(str(dynamic_input_name), "sym_" + str(dynamic_input_name))
ndesc = copy.deepcopy(datadesc)
ndesc.transient = False
sdfg.add_datadesc(dynamic_input_name, ndesc)
# Should be scalar
if isinstance(ndesc, dace.data.Scalar):
pre_assignments["sym_" + dynamic_input_name] = f"{dynamic_input_name}"
else:
assert ndesc.shape == (1, ) or ndesc.shape == [
1,
]
pre_assignments["sym_" + dynamic_input_name] = f"{dynamic_input_name}[0]"

new_map_lengths = []
for ml in map_lengths:
nml = ml.subs({str(dynamic_input_name): "sym_" + str(dynamic_input_name)})
new_map_lengths.append(nml)
map_lengths = new_map_lengths

if pre_assignments != dict():
# Add a state for assignments in the beginning
sdfg.add_state_before(state=state, label="pre_assign", is_start_block=True, assignments=pre_assignments)

collapsed_map_lengths = [ml for ml in map_lengths if ml != 1]
return collapsed_map_lengths
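The promotion bookkeeping in ``add_dynamic_inputs`` can be sketched as follows (a hypothetical, stripped-down illustration; the names and the ``Scalar``/``Array`` kinds are invented here, and the real function works on DaCe data descriptors and performs ``sdfg.replace``): each dynamic input gets a pre-assignment that reads its concrete runtime value into a ``sym_``-prefixed symbol, with scalars read directly and length-1 arrays read via element 0.

```python
# Hypothetical sketch of the pre-assignment dictionary built by
# add_dynamic_inputs; kinds are illustrative stand-ins for descriptors.
dynamic_inputs = {"n": "Scalar", "rows": "Array"}
pre_assignments = {}
for name, kind in dynamic_inputs.items():
    # Scalars are read directly; length-1 arrays via their only element.
    pre_assignments["sym_" + name] = name if kind == "Scalar" else f"{name}[0]"
print(pre_assignments)  # {'sym_n': 'n', 'sym_rows': 'rows[0]'}
```

These assignments end up on the interstate edge of the ``pre_assign`` state inserted before the expansion state.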
4 changes: 3 additions & 1 deletion dace/libraries/standard/nodes/__init__.py
@@ -1,5 +1,7 @@
# Copyright 2019-2023 ETH Zurich and the DaCe authors. All rights reserved.
# Copyright 2019-2026 ETH Zurich and the DaCe authors. All rights reserved.
from .code import CodeLibraryNode
from .copy_node import CopyLibraryNode
from .memset_node import MemsetLibraryNode
from .gearbox import Gearbox
from .reduce import Reduce
from .transpose import Transpose