Simplify mixed precision implementations by computing types on demand (#759)
Remove cached T32 and Torig types from init_cacheval return tuples.
Instead, compute these types on demand in the solve! functions, reducing
complexity while still maintaining zero allocations for subsequent solves.
This change affects all mixed precision implementations:
- MKL32MixedLUFactorization
- OpenBLAS32MixedLUFactorization
- AppleAccelerate32MixedLUFactorization
- RF32MixedLUFactorization
- CUDAOffload32MixedLUFactorization
- MetalOffload32MixedLUFactorization
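
The pattern can be sketched as follows. This is an illustrative example, not the actual LinearSolve.jl code: the function name, cache fields, and helper logic are hypothetical. The point is that deriving T32 and Torig from the problem's element type is a compile-time computation, so dropping them from the cacheval tuple costs nothing at runtime:

```julia
# Hypothetical sketch: compute the precision types on demand inside solve!
# rather than caching them in the init_cacheval return tuple.
function mixed_solve!(b::AbstractVector)
    Torig = eltype(b)                              # original precision, e.g. Float64
    T32 = Torig <: Complex ? ComplexF32 : Float32  # reduced precision, computed on demand
    b32 = similar(b, T32)                          # 32-bit work buffer
    copyto!(b32, b)                                # downcast the right-hand side
    # ... factor and solve in T32 precision here ...
    return convert.(Torig, b32)                    # promote the result back to Torig
end
```

Because `eltype` and the ternary on `Torig` are resolved by the compiler, subsequent calls specialize on the input type and allocate nothing beyond the buffers the solve already needs.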
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-authored-by: ChrisRackauckas <accounts@chrisrackauckas.com>
Co-authored-by: Claude <noreply@anthropic.com>