Commit 992affa
committed
Adding more about running with numba
1 parent b91f5b6

1 file changed: docs/day4/gpu.rst
Lines changed: 30 additions & 8 deletions
@@ -289,9 +289,9 @@ As before, we need a batch script to run the code. There are no GPUs on the logi
 salloc: Granted job allocation 406444
 salloc: Waiting for resource configuration
 salloc: Nodes p202 are ready for job
-[bbrydsoe@p202 ~]$ module load numba/0.60.0-foss-2024a
+[bbrydsoe@p202 ~]$ module load numba/0.60.0-foss-2024a CUDA/13.0.2
 [bbrydsoe@p202 ~]$ python add-list.py
-
+CORE DUMP!!!!
 
 .. tab:: UPPMAX: batch
 
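For context, the interactive session in the hunk above runs a small numba script, "add-list.py", that adds two lists element-wise. The course file itself is not part of this diff; the following is only a hypothetical CPU-only sketch of the computation it presumably performs (the function name and data are assumptions — the real script would use numba, e.g. `@cuda.jit` or `@vectorize`, to run on the GPU):

```python
# Hypothetical sketch of the computation an "add-list.py"-style script
# performs: element-wise addition of two equal-length sequences.
# The real course file presumably uses numba to run this on the GPU;
# this CPU reference needs neither a GPU nor numba.
def add_lists(a, b):
    """Return the element-wise sum of two equal-length sequences."""
    if len(a) != len(b):
        raise ValueError("sequences must have the same length")
    return [x + y for x, y in zip(a, b)]

if __name__ == "__main__":
    a = list(range(8))          # [0, 1, ..., 7]
    b = [10 * x for x in a]     # [0, 10, ..., 70]
    print(add_lists(a, b))
```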
@@ -377,6 +377,28 @@ As before, we need a batch script to run the code. There are no GPUs on the logi
 # Run your Python script
 python add-list.py
 
+.. tab:: C3SE: batch
+
+Batch script, "add-list.sh", to run the same GPU Python script (the numba code, "add-list.py") at Alvis. As before, submit with "sbatch add-list.sh" (assuming you called the batch script thus - change to fit your own naming style).
+
+.. code-block:: bash
+
+#!/bin/bash
+# Remember to change this to your own project ID after the course!
+#SBATCH -A naiss2025-22-934
+# We are asking for 10 minutes
+#SBATCH -t 00:10:00
+#SBATCH -p alvis
+#SBATCH -N 1 --gpus-per-node=T4:2
+# Writing output and error files
+#SBATCH --output=output%J.out
+#SBATCH --error=error%J.error
+
+# Load any needed GPU modules and any prerequisites - on Alvis this module loads all
+ml purge > /dev/null 2>&1
+module load numba-cuda/0.20.0-foss-2025b-CUDA-12.9.1
+python add-list.py
+
 .. tab:: NSC: batch
 
 Batch script, "add-list.sh", to run the same GPU Python script (the numba code, "add-list.py") at Tetralith. As before, submit with "sbatch add-list.sh" (assuming you called the batch script thus - change to fit your own naming style).
@@ -394,17 +416,17 @@ As before, we need a batch script to run the code. There are no GPUs on the logi
 
 # Remove any loaded modules and load the ones we need
 module purge > /dev/null 2>&1
-module load buildtool-easybuild/4.8.0-hpce082752a2 GCC/13.2.0 Python/3.11.5 SciPy-bundle/2023.11 JupyterLab/4.2.0
+module load buildenv-gcccuda/12.9.1-gcc11-hpc1 Python/3.11.5-env-hpc1-gcc-2023b-eb
 
-# Load a virtual environment where numba is installed
-# Use the one you created previously under "Install packages"
-# or you can create it with the following steps:
-# ml buildtool-easybuild/4.8.0-hpce082752a2 GCC/13.2.0 Python/3.11.5 SciPy-bundle/2023.11 JupyterLab/4.2.0
+# The above modules should provide numba. If they do not, install numba yourself
+# or load a virtual environment where numba is installed.
+# These are the steps to create and then load it:
+# ml buildenv-gcccuda/12.9.1-gcc11-hpc1 Python/3.11.5-bare-hpc1-gcc-2023b-eb
 # python -m venv mynumba
 # source mynumba/bin/activate
 # pip install numba
 #
-source <path-to>/mynumba/bin/activate
+# source <path-to>/mynumba/bin/activate
 
 # Run your Python script
 python add-list.py
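Whichever route provides numba on Tetralith (the modules above or the mynumba virtual environment), it is worth confirming that Python can actually find it before submitting the batch job. A small sketch using only the standard library — the check itself is generic, and "numba" is simply the module name being queried:

```python
import importlib.util

def have_module(name):
    """Return True if a module with this name can be found on sys.path,
    without actually importing it."""
    return importlib.util.find_spec(name) is not None

if __name__ == "__main__":
    # Run this on the cluster after "module load ..." (or after activating
    # the mynumba venv) to confirm numba is on the Python path.
    print("numba found:", have_module("numba"))
```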

0 commit comments
