Replies: 2 comments 1 reply
Sorry for the late response. I thought I had updated the repo with the patch for the pipe call; it must still be sitting in the 3.2.4 branch under branches. I'll upload the new update tomorrow.

The "onnx providers running cpu" message is just letting you know that the RIFE ONNX upscale models couldn't find TensorRT on your system, so they fell back to CPU. That's totally fine; they still run on CPU, just in the FrameTools tab. When loading the safetensors from Hugging Face, the message means it's running on device 0, i.e. your GPU is being used. If you had multiple GPUs it would tell you which one it was running on; since you only have one, it reports 0.

So everything was normal except the pipe not being loaded. That happened when I implemented Marigold diffusion and forgot to tie the Hugging Face pipeline and ONNX model logic back in.
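To illustrate the fallback behaviour described above (a minimal sketch, not VisionDepth3D's actual code): ONNX Runtime is given an ordered provider list and uses the first ones that are actually available, so when no TensorRT build is found it ends up on CPU. `pick_providers` below is a hypothetical helper mimicking that selection logic.

```python
def pick_providers(preferred, available):
    """Return the preferred execution providers that are actually available,
    falling back to CPU when none of them are installed."""
    chosen = [p for p in preferred if p in available]
    return chosen or ["CPUExecutionProvider"]

# Typical preference order: TensorRT first, then CUDA, then CPU.
preferred = ["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"]

# No TensorRT or CUDA build installed -> falls back to CPU,
# matching the ['CPUExecutionProvider'] log the user saw.
print(pick_providers(preferred, ["CPUExecutionProvider"]))
# -> ['CPUExecutionProvider']
```

In the real library, the same idea applies when you pass `providers=` to `onnxruntime.InferenceSession`: missing providers are skipped rather than raising an error, which is why the fallback is silent apart from the log line.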
I have now updated the repo with the most recent changes; the inference logic for all depth models should work properly now. DepthCrafter is still a WIP. https://github.com/VisionDepth/VisionDepth3D#%EF%B8%8F-guide-sheet-updating-visiondepth3d
Hi, I just set up VisionDepth3D today with conda, and it launches fine.
However, when I try to use the depth estimation tool I get this:
On top of that, it states at startup that CUDA is available and that it is running on CUDA, so why is it using ['CPUExecutionProvider'] and cuda:0?
Python 3.12
CUDA 12.8
PyTorch 2.7.0
RTX 4060 Ti OC 16 GB
Ryzen 5 7600X
32 GB of RAM
Thanks in advance.