Hi, many thanks for your work on this method!

I'm looking into reproducing your results on the Famous dataset, but I noticed that the .npz files are missing from the download link in the README (https://drive.google.com/drive/folders/1qre9mgJNCKiX11HnZO10qMZMmPv_gnh3?usp=sharing). Would you be able to add those files?
I also tried recreating the point clouds with

```shell
python sample_query_point.py --out_dir /home/linus/workspace/data/neural_pull/famous_new/ --input_dir /home/linus/workspace/data/points2surf/famous_noisefree/04_pts/ --dataset famous
```

but I get the following error:

```
/home/linus/workspace/data/neural_pull/famous_new/3DBenchy.npz
Traceback (most recent call last):
  File "/home/linus/workspace/NeuralPull/sample_query_point.py", line 124, in <module>
    point_idx = np.random.choice(pointcloud.shape[0], POINT_NUM_GT, replace = False)
  File "numpy/random/mtrand.pyx", line 1000, in numpy.random.mtrand.RandomState.choice
ValueError: Cannot take a larger sample than population when 'replace=False'
```
This happens because the 3DBenchy point cloud has fewer than POINT_NUM_GT (= 20000) points. Could you clarify which value of POINT_NUM_GT was used for the Famous dataset?
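For reference, here is a minimal sketch of the workaround I considered while debugging (the function name is my own, not from your repo): `np.random.choice` with `replace=False` raises exactly this `ValueError` when the requested sample exceeds the population, so one option is to fall back to sampling with replacement for small clouds.

```python
import numpy as np

def sample_point_indices(pointcloud, n_samples):
    """Sample n_samples indices from a point cloud of shape (N, 3).

    Hypothetical workaround: np.random.choice(..., replace=False) fails
    when N < n_samples, so allow duplicate indices only in that case.
    """
    n_points = pointcloud.shape[0]
    replace = n_points < n_samples  # True only when the cloud is too small
    return np.random.choice(n_points, n_samples, replace=replace)

# A cloud with only 5 points still yields 8 indices (with duplicates),
# instead of raising the ValueError above:
cloud = np.zeros((5, 3))
idx = sample_point_indices(cloud, 8)
```

I'm not sure whether duplicating points is acceptable for your query-point sampling, though, which is why I'd like to know the intended POINT_NUM_GT for this dataset.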