While running `python -m sign_language_segmentation.src.train --dataset=dgs_corpus --pose=holistic --fps=25 --hidden_dim=64 --encoder_depth=1 --encoder_bidirectional=false --optical_flow=true --only_optical_flow=true --weighted_loss=false --classes=io` as suggested in the README, I got an Out Of Memory error while loading the dataset with 50 GB of memory available; 100 GB of available memory was enough. How much memory does loading the dataset take for you? Is it expected to take this much?
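
For reference, here is a minimal, standard-library-only sketch of how one could measure peak memory around the dataset-loading step to pin down how much the load itself consumes; the `bytearray` allocation is only a placeholder for the actual loading call in `sign_language_segmentation.src.train`:

```python
import resource

def report_peak_rss(label: str) -> None:
    # ru_maxrss is reported in kilobytes on Linux
    peak_gb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024**2
    print(f"{label}: peak RSS = {peak_gb:.2f} GB")

report_peak_rss("before loading")
# Placeholder allocation standing in for the real dataset load (~100 MB)
data = [bytearray(1024 * 1024) for _ in range(100)]
report_peak_rss("after loading")
```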