Hey,
So I’m using a Jupyter notebook developed by GitHub user AlphaAtlas:
https://github.com/AlphaAtlas/VapourSynthColab
When we run it in Colab, we get this:
vapoursynth.Error: Degrain3: failed to retrieve first frame from super clip. Error message: CUDA out of memory. Tried to allocate 7.91 GiB (GPU 0; 14.73 GiB total capacity; 8.00 GiB already allocated; 5.19 GiB free; 8.00 GiB reserved in total by PyTorch)
And an error message that says:
"pipe:: Invalid data found when processing input"
We are testing the script on a 4-second video clip with a resolution of 1920×1080.
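For scale, here is a rough back-of-the-envelope estimate of the raw size of a clip like this. The frame rate and pixel format are guesses (neither is stated above), so treat the numbers as order-of-magnitude only:

```python
# Rough estimate of the raw clip size.
# Assumptions (guesses, not stated anywhere in the notebook):
#   - 24 fps
#   - 8-bit YUV 4:2:0, i.e. 1.5 bytes per pixel
width, height = 1920, 1080
bytes_per_pixel = 1.5
fps, seconds = 24, 4

frame_bytes = width * height * bytes_per_pixel
clip_bytes = frame_bytes * fps * seconds

print(f"per frame:  {frame_bytes / 2**20:.2f} MiB")  # ~2.97 MiB
print(f"whole clip: {clip_bytes / 2**30:.2f} GiB")   # ~0.28 GiB
```

So the clip itself is well under a gigabyte; whatever is asking for 7.91 GiB at once is presumably the filter's intermediate buffers rather than the source video.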
Here is the full traceback:
Traceback (most recent call last):
File "src\cython\vapoursynth.pyx", line 1946, in vapoursynth.vpy_evaluateScript
File "src\cython\vapoursynth.pyx", line 1947, in vapoursynth.vpy_evaluateScript
File "/content/autogenerated.vpy", line 85, in
clip = G41.SMDegrain(clip, tr=3, RefineMotion=True, pel = 1, prefilter = prefilter)
File "/VapourSynthImports/G41Fun.py", line 2123, in SMDegrain
output = D3(mfilter, super_render, bv1, fv1, bv2, fv2, bv3, fv3, **degrain_args)
File "src\cython\vapoursynth.pyx", line 1852, in vapoursynth.Function.call
vapoursynth.Error: Degrain3: failed to retrieve first frame from super clip. Error message: CUDA out of memory. Tried to allocate 7.91 GiB (GPU 0; 14.73 GiB total capacity; 8.00 GiB already allocated; 5.19 GiB free; 8.00 GiB reserved in total by PyTorch)
pipe:: Invalid data found when processing input
We are using the Python 3 Google Compute Engine backend (GPU) runtime, and apparently we have access to 12.72 GB of RAM. (The 14.73 GiB in the error message is a different number; per the "GPU 0" prefix, it refers to the GPU's total memory capacity, not system RAM.)
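For what it’s worth, the arithmetic in the error message is internally consistent: PyTorch is already holding an 8.00 GiB reservation, so the 7.91 GiB request cannot fit in the 5.19 GiB that remains. The values below are copied straight from the traceback above:

```python
# Figures copied verbatim from the CUDA OOM message (all in GiB).
total_capacity = 14.73  # "GPU 0; 14.73 GiB total capacity"
reserved = 8.00         # "8.00 GiB reserved in total by PyTorch"
free = 5.19             # "5.19 GiB free"
requested = 7.91        # "Tried to allocate 7.91 GiB"

# The request exceeds what is left on the GPU, hence the OOM:
print(requested > free)  # True

# The remainder not covered by PyTorch's reservation or the free
# pool (~1.54 GiB) is presumably held by the CUDA context and/or
# other processes -- that part is an assumption on my end:
print(round(total_capacity - reserved - free, 2))
```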
We are wondering if anyone has ideas about what could be causing this. My first thought is that the fix would be to pay for more memory, but obviously we would like to avoid that, which is why I am posting here.