NVIDIA Open GPU Kernel Modules Version
580.76.05
Please confirm this issue does not happen with the proprietary driver (of the same version). This issue tracker is only for bugs specific to the open kernel driver.
- I confirm that this does not happen with the proprietary driver package.
Operating System and Version
Ubuntu noble
Kernel Release
6.14.0-29-generic
Please confirm you are running a stable release kernel (e.g. not a -rc). We do not accept bug reports for unreleased kernels.
- I am running on a stable kernel release.
Hardware: GPU
5090 x2
Describe the bug
I am trying to use NVSHMEM, but even the demos/examples crash:
/dvs/p4/build/sw/rel/gpgpu/toolkit/r12.8/main_nvshmem/src/host/mem/mem_heap.cpp:1510: non-zero status: 101 cuMemCreate failed
/dvs/p4/build/sw/rel/gpgpu/toolkit/r12.8/main_nvshmem/src/host/mem/mem_heap.cpp:1591: non-zero status: 7 allocate_physical_memory_to_heap failed
/dvs/p4/build/sw/rel/gpgpu/toolkit/r12.8/main_nvshmem/src/host/team/team_internal.cpp:1031: NULL value nvshmemi_psync_pool allocation failed
/dvs/p4/build/sw/rel/gpgpu/toolkit/r12.8/main_nvshmem/src/host/init/init.cu:1040: non-zero status: 2 team setup failed
/dvs/p4/build/sw/rel/gpgpu/toolkit/r12.8/main_nvshmem/src/host/init/init.cu:nvshmemi_check_state_and_init:1080: nvshmem initialization failed, exiting
/dvs/p4/build/sw/rel/gpgpu/toolkit/r12.8/main_nvshmem/src/host/util/cs.cpp:21: non-zero status: 16: Device or resource busy, exiting... mutex destroy failed
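The failing call is cuMemCreate inside NVSHMEM's symmetric-heap setup (mem_heap.cpp above). Below is a minimal standalone sketch of that allocation pattern as I understand it (a CUDA driver-API VMM allocation requesting an exportable handle; the exact handle type NVSHMEM requests is an assumption on my part). It may help confirm whether cuMemCreate itself returns 101 on the open kernel module (CUDA_ERROR_INVALID_DEVICE, if that status is the raw CUresult):

```cpp
// Sketch only: approximates NVSHMEM's heap allocation, not its actual source.
// Build: nvcc vmm_check.cu -o vmm_check -lcuda
#include <cuda.h>
#include <cstdio>

#define CHECK(call)                                                     \
    do {                                                                \
        CUresult r_ = (call);                                           \
        if (r_ != CUDA_SUCCESS) {                                       \
            const char *s_ = nullptr;                                   \
            cuGetErrorString(r_, &s_);                                  \
            std::printf("%s failed: %d (%s)\n", #call, (int)r_,         \
                        s_ ? s_ : "?");                                 \
            return 1;                                                   \
        }                                                               \
    } while (0)

int main() {
    CHECK(cuInit(0));
    CUdevice dev;
    CHECK(cuDeviceGet(&dev, 0));
    CUcontext ctx;
    CHECK(cuCtxCreate(&ctx, 0, dev));

    CUmemAllocationProp prop = {};
    prop.type = CU_MEM_ALLOCATION_TYPE_PINNED;
    prop.location.type = CU_MEM_LOCATION_TYPE_DEVICE;
    prop.location.id = dev;
    // Assumption: NVSHMEM asks for an exportable handle; on a single-node P2P
    // setup this is typically a POSIX file descriptor (it may request
    // CU_MEM_HANDLE_TYPE_FABRIC on fabric-enabled systems instead).
    prop.requestedHandleTypes = CU_MEM_HANDLE_TYPE_POSIX_FILE_DESCRIPTOR;

    size_t gran = 0;
    CHECK(cuMemGetAllocationGranularity(&gran, &prop,
                                        CU_MEM_ALLOC_GRANULARITY_MINIMUM));

    CUmemGenericAllocationHandle handle;
    CHECK(cuMemCreate(&handle, gran, &prop, 0));  // the call that fails in the log
    std::printf("cuMemCreate OK (%zu bytes)\n", gran);
    CHECK(cuMemRelease(&handle));
    CHECK(cuCtxDestroy(ctx));
    return 0;
}
```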
Meanwhile, the CUDA p2pBandwidthLatencyTest sample works:
Device: 0, NVIDIA GeForce RTX 5090, pciBusID: 1, pciDeviceID: 0, pciDomainID:0
Device: 1, NVIDIA GeForce RTX 5090, pciBusID: 3, pciDeviceID: 0, pciDomainID:0
Device=0 CAN Access Peer Device=1
Device=1 CAN Access Peer Device=0
***NOTE: In case a device doesn't have P2P access to other one, it falls back to normal memcopy procedure.
So you can see lesser Bandwidth (GB/s) and unstable Latency (us) in those cases.
P2P Connectivity Matrix
D\D 0 1
0 1 1
1 1 1
Unidirectional P2P=Disabled Bandwidth Matrix (GB/s)
D\D 0 1
0 1525.88 19.91
1 20.08 1559.38
Unidirectional P2P=Enabled Bandwidth (P2P Writes) Matrix (GB/s)
D\D 0 1
0 1522.90 28.71
1 28.71 1550.10
Bidirectional P2P=Disabled Bandwidth Matrix (GB/s)
D\D 0 1
0 1527.30 21.48
1 21.51 1541.64
Bidirectional P2P=Enabled Bandwidth Matrix (GB/s)
D\D 0 1
0 1528.05 48.37
1 46.99 1542.40
P2P=Disabled Latency Matrix (us)
GPU 0 1
0 2.08 14.75
1 14.91 2.11
CPU 0 1
0 1.76 4.42
1 4.43 1.73
P2P=Enabled Latency (P2P Writes) Matrix (us)
GPU 0 1
0 2.09 0.52
1 0.59 2.08
CPU 0 1
0 1.75 1.14
1 1.14 1.69
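(For completeness, the peer-access part of that sample output boils down to a check like the following; a small runtime-API sketch, not the sample's actual code.)

```cpp
// Sketch: confirm the driver advertises P2P between the two GPUs,
// matching the "Device=0 CAN Access Peer Device=1" lines above.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int n = 0;
    cudaGetDeviceCount(&n);
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j) {
            if (i == j) continue;
            int can = 0;
            cudaDeviceCanAccessPeer(&can, i, j);
            std::printf("Device=%d %s Access Peer Device=%d\n",
                        i, can ? "CAN" : "CANNOT", j);
        }
    }
    return 0;
}
```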
To Reproduce
On Ubuntu noble, install the 25.7 HPC SDK (https://developer.nvidia.com/hpc-sdk-downloads):
...
apt install nvhpc-25-7
bash
module add /opt/nvidia/hpc_sdk/modulefiles/nvhpc-hpcx-2.20-cuda12/25.7
/opt/nvidia/hpc_sdk/Linux_x86_64/25.7/comm_libs/12.9/nvshmem/bin/perftest/device/pt-to-pt/shmem_atomic_bw
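To isolate this further, a bare init-only program should hit the same heap-setup path, since the crash happens inside NVSHMEM initialization before any perftest kernel runs. This is a sketch against the public NVSHMEM host API, not the perftest source; build flags and the launcher invocation are assumptions based on the hpc-sdk layout above:

```cpp
// Sketch: minimal NVSHMEM init/alloc, enough to exercise the symmetric-heap
// setup that fails in the log above.
// Build/run (paths and flags assumed from the hpc-sdk 25.7 install):
//   nvcc -rdc=true repro.cu -o repro \
//        -I$NVSHMEM_HOME/include -L$NVSHMEM_HOME/lib -lnvshmem -lcuda
//   $NVSHMEM_HOME/bin/nvshmrun -np 2 ./repro
#include <nvshmem.h>
#include <nvshmemx.h>
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    nvshmem_init();                                        // heap/team setup happens here
    int mype_node = nvshmem_team_my_pe(NVSHMEMX_TEAM_NODE);
    cudaSetDevice(mype_node);                              // one GPU per PE on this node
    void *buf = nvshmem_malloc(1 << 20);                   // 1 MiB from the symmetric heap
    std::printf("PE %d of %d: symmetric buffer at %p\n",
                nvshmem_my_pe(), nvshmem_n_pes(), buf);
    nvshmem_free(buf);
    nvshmem_finalize();
    return 0;
}
```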
Bug Incidence
Always
nvidia-bug-report.log.gz
More Info
No response