Hi!
I am trying to read from large-ish datasets. In the docs, I see there is a method to read a single row, so I tried calling it in a loop to extract all my data. That turns out to be very slow, much slower than if my data were stored in a plain CSV or Parquet file, where I can get an iterator and loop over entries.
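For reference, this is the kind of lazy, batched iteration I mean for the CSV case. A minimal stdlib-only sketch; the data and the helper names (`iter_rows`, `iter_batches`) are made up for illustration:

```python
import csv
import io
import itertools

def iter_rows(f):
    """Yield rows one at a time as dicts, without loading the whole
    file into memory or re-opening anything per row."""
    reader = csv.reader(f)
    header = next(reader)
    for row in reader:
        yield dict(zip(header, row))

def iter_batches(rows, batch_size):
    """Group a row iterator into fixed-size batches."""
    it = iter(rows)
    while True:
        batch = list(itertools.islice(it, batch_size))
        if not batch:
            return
        yield batch

# Tiny in-memory example standing in for a real file on disk.
data = io.StringIO("x,y\n1,2\n3,4\n5,6\n")
batches = list(iter_batches(iter_rows(data), batch_size=2))
# Two batches: [row1, row2] and [row3].
```

I'm looking for an equivalent streaming or batched read on the table side, rather than one lookup per row.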
Is there a way to efficiently read large tables that I may have missed?
Thanks!