Great work and thanks for releasing the code for this paper.
I’m trying to run the evaluation, but I’m confused about the test split setup. The README mentions these files under data/dynamic_re10k/test/:
- dre10k_final_context_2.txt
- dre10k_final_context_2_view_idx.json
- wildrayzer_final_context_2.txt
- wildrayzer_final_context_2_view_idx.json
The default inference config also seems to expect wildrayzer_final_context_2.txt and its corresponding view_idx json. I couldn’t find these files in the current public release, so I’m not sure what the intended way to reproduce the evaluation is.
Could you please let me know whether these files will be released, or whether there is a recommended way to generate them from the released dataset?
Thanks a lot.