Hi guys!
First of all, thanks for the amazing work and library!
I wonder if it is possible to further improve the speed of the current rrlu decomposition, possibly at the cost of some accuracy. I've been using the library (mostly from Python) on large matrices (5000 x 15000) with large bond dimensions (~2000), and the time it takes to run some experiments is quite long (40+ minutes).
I was reading this paper, and I found the suggested algorithms to be quite fast while still giving accurate results. Essentially, it's a chain of pivoted QR factorizations applied alternately to columns of the matrix and columns of its transpose (this could be swapped for a row-pivoted LU, I guess). Do you think this is worth the effort?
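For concreteness, here is a minimal sketch of the kind of thing I mean, not the paper's exact algorithm: column-pivoted QR on the matrix to pick columns, column-pivoted QR on its transpose to pick rows, then a cross/CUR-style approximation from the selected skeleton. All names here are my own, and `scipy.linalg.qr` with `pivoting=True` stands in for whatever rank-revealing step the library actually uses:

```python
import numpy as np
from scipy.linalg import qr


def pivoted_qr_cross(A, rank):
    """Rough cross/CUR-style approximation via two pivoted QRs (illustrative sketch).

    Column-pivoted QR on A ranks its columns by importance; the same
    factorization on A.T ranks the rows. The selected rows/columns give
    a skeleton decomposition A ~= C @ U @ R.
    """
    # Column-pivoted QR on A: the first `rank` pivots index the most
    # linearly independent columns.
    _, _, col_piv = qr(A, pivoting=True, mode="economic")
    cols = col_piv[:rank]

    # Same trick on A.T selects rows.
    _, _, row_piv = qr(A.T, pivoting=True, mode="economic")
    rows = row_piv[:rank]

    C = A[:, cols]                       # selected columns
    R = A[rows, :]                       # selected rows
    U = np.linalg.pinv(A[np.ix_(rows, cols)])  # inverse of the intersection block
    return C, U, R


# Quick check on an exactly rank-5 matrix: the cross approximation
# should reproduce it (up to floating-point error).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 80))
C, U, R = pivoted_qr_cross(A, rank=5)
err = np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A)
```

The cost is dominated by the two pivoted QRs, which should be much cheaper than a full rank-revealing LU when the target rank is small relative to the matrix dimensions.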
Happy to contribute if it is of interest to you!