Midway through the 26.04 release cycle, RAPIDS was building its wheels with v13.1.1 of the CUDA toolkit (including libnvJitLink 13.1.1) and directly linking against libnvJitLink for JIT-LTO (example: rapidsai/cuvs#1405).
This resulted in runtime errors in environments with v13.0.x of the CTK, like this:
```text
libcugraph.so: undefined symbol: __nvJitLinkGetErrorLog_13_1, version libnvJitLink.so.13
```
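A quick way to see which libnvJitLink an environment actually provides is to load it and query its version. This is a diagnostic sketch, not part of the fix: it assumes the `nvJitLinkVersion` entry point (present in recent CTKs) and returns `None` when no CUDA 13 nvJitLink is loadable.

```python
import ctypes


def nvjitlink_version():
    """Return (major, minor) of the loadable CUDA 13 libnvJitLink, or None.

    Diagnostic sketch: assumes the `nvJitLinkVersion` entry point from the
    nvJitLink C API; returns None if the library (or symbol) is absent.
    """
    try:
        lib = ctypes.CDLL("libnvJitLink.so.13")
        major, minor = ctypes.c_uint(0), ctypes.c_uint(0)
        # nvJitLinkVersion returns 0 (NVJITLINK_SUCCESS) on success
        if lib.nvJitLinkVersion(ctypes.byref(major), ctypes.byref(minor)) != 0:
            return None
        return (major.value, minor.value)
    except (OSError, AttributeError):
        return None


print(nvjitlink_version())
```

An environment printing `(13, 0)` here would hit the undefined-symbol error above when loading a wheel that requires the `_13_1` versioned symbols.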
Requiring `nvidia-nvjitlink>=13.1` at runtime would solve those issues, but it would also make RAPIDS wheels incompatible with `cuda-toolkit[nvjitlink]<13.1`, which `torch` 2.10 (the latest release) pins to.
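The conflict can be shown mechanically. The check below uses simple numeric tuples (not full PEP 440 semantics), and paraphrases the `torch` pin as `>=13.0,<13.1`; no release satisfies both constraints at once.

```python
def parse(v):
    # simple numeric-tuple comparison, illustrative only (not full PEP 440)
    return tuple(int(p) for p in v.split("."))


def rapids_need(v):
    # the hypothetical RAPIDS runtime requirement: nvidia-nvjitlink>=13.1
    return parse(v) >= parse("13.1")


def torch_pin(v):
    # paraphrase of torch's pin: >=13.0,<13.1
    return parse("13.0") <= parse(v) < parse("13.1")


candidates = ["13.0.0", "13.0.2", "13.1.1"]
print([v for v in candidates if rapids_need(v) and torch_pin(v)])  # → []
```

The empty intersection is exactly why requiring `>=13.1` at runtime would have excluded every environment `torch` 2.10 can install into.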
In an offline discussion with @bdice, @vyasr, and @divyegala, we tried several options (example: rapidsai/cuvs#1855) and decided to build RAPIDS 26.04 wheels against CTK 13.0.x, to avoid losing compatibility with projects tightly pinned to earlier nvJitLink versions.
This tracks that work.
Benefits of this work
- allows RAPIDS to continue adopting JIT-LTO while staying compatible with `torch` and other projects that tightly pin earlier `nvidia-nvjitlink` versions
Acceptance Criteria
- all RAPIDS libraries build wheels against CTK 13.0
- RAPIDS conda builds continue to build against the latest CUDA 13 CTK that RAPIDS supports (as of this writing, 13.1.1)
- RAPIDS devcontainers continue to support the latest CUDA 13 CTK that RAPIDS supports
- RAPIDS CUDA 12 wheels continue to build against the latest CUDA 12 CTK that RAPIDS supports (as of this writing, 12.9.1)
- `cugraph-gnn` wheels CI successfully tests against CUDA 12 and CUDA 13 `torch` wheels
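For the `cugraph-gnn` criterion, CI has to point `pip` at the right PyTorch wheel index for each CUDA major version. A minimal sketch of that selection, where the `cu126`/`cu130` suffixes are illustrative assumptions about which CUDA-specific `torch` indexes CI would use, not choices pinned by this issue:

```python
def torch_index_url(cuda_major: int) -> str:
    """Map a CUDA major version to a PyTorch wheel index URL.

    The suffixes are illustrative; CI would pin whichever CUDA-specific
    torch builds it actually tests against.
    """
    suffixes = {12: "cu126", 13: "cu130"}
    return f"https://download.pytorch.org/whl/{suffixes[cuda_major]}"


print(torch_index_url(12))
print(torch_index_url(13))
```

A CI job could then pass the result via `pip install --index-url`, keeping the CUDA 12 and CUDA 13 matrix legs from accidentally pulling mismatched `torch` wheels.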
Approach
- `ci-imgs` changes: `ci-wheel:{rapids-version}-latest` to 13.0.2 (ci-wheel: drop CUDA 12.2.2 images, move latest back to 13.0.2 ci-imgs#386)
- `shared-workflows` changes (wheels-build: build on CUDA 13.0 shared-workflows#510)
- (`cuda-toolkit` version in CI #256)
- `dask-cuda`: not necessary, pure Python
- `pip install nvidia-nvjitlink` (ref: ensure 'torch' CUDA wheels are installed in CI, remove unused dependencies cugraph#5453 (comment))
- WIP: wheels CI: stricter torch index selection, test oldest versions of dependencies cugraph-gnn#413
- `torch` dependency handling (ensure 'torch' CUDA wheels are installed in CI, test that 'torch' is an optional dependency cugraph-gnn#425)
- `pip install nvidia-nvjitlink` (wheels: build with CUDA 13.0, test against mix of CTK versions, make 'torch-geometric' fully optional for 'cugraph-pyg' cugraph-gnn#434)
- `nx-cugraph`: not necessary, pure Python
- drop `ci-imgs` CUDA 12.2.2 images if we end up not needing them (ci-wheel: drop CUDA 12.2.2 images, move latest back to 13.0.2 ci-imgs#386)
- release/26.04 -> main
Notes
N/A