🐛 Bug
The `torch.inverse` operation is currently [fully codegen'd][1], which means the following are all generated automatically:
- the `XLANativeFunctions::inverse` function
- the IR node: an `Inverse` class derived from `XlaNode`
- the PyTorch dispatcher registration for the XLA dispatch key

While that is convenient, it also means that all of the tracing code is generated automatically, and the generated code performs none of the input validation or shape checks we would expect.
Given that, the following code fails, as it should (a non-square matrix has no inverse):
```
1 # Create a non-square matrix (2x3)
2 matrix = torch.randn(2, 3, device="xla")
3 result = torch.inverse(matrix)
4 materialized = result.cpu()
```
```
Traceback (most recent call last):
  File "example.py", line 4, in main
    materialized = result.cpu()
                   ^^^^^^^^^^^^
RuntimeError: Error while lowering: [] aten::inverse, xla_shape=f32[2,3]{1,0}, dynamic_dims: ()
XLA builder error: INVALID_ARGUMENT: The two minor dimensions of 'a' must have equal size, got f32[2,3].:
Frames:
```
Note that instead of failing in the tracing step (line 3, the `torch.inverse` call), it fails in the lowering step (line 4, the `result.cpu()` call). This makes bugs harder to trace and understand, especially for the end user.
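For comparison, eager CPU execution performs this validation up front, so the same mistake surfaces at the `torch.inverse` call itself. A minimal sketch (the exact message text may vary across PyTorch versions):

```python
import torch

# Same non-square input, but on CPU: the shape check runs eagerly,
# so the error is raised at the torch.inverse call site.
matrix = torch.randn(2, 3)
try:
    torch.inverse(matrix)
except RuntimeError as e:
    # Message along the lines of:
    #   linalg.inv: A must be batches of square matrices, ...
    print(e)
```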
Expected behavior
- The error should be raised in the tracing step!
- A more user-friendly error message should be displayed (see the sketch below).
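As a rough illustration of the kind of check we would expect, here is a hypothetical Python-level wrapper (`checked_inverse` is not part of torch_xla; it only sketches the validation the tracing function could perform before building the IR node):

```python
import torch

def checked_inverse(t: torch.Tensor) -> torch.Tensor:
    # Validate the shape before any IR node is built, mirroring the
    # checks that eager CPU/CUDA execution performs.
    if t.dim() < 2 or t.size(-1) != t.size(-2):
        raise RuntimeError(
            "torch.inverse: expected a square matrix (or batches of "
            f"square matrices), but got shape {tuple(t.shape)}"
        )
    return torch.inverse(t)
```

With a check like this in the tracing path, the repro above would fail on line 3 with a message that names the offending shape.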