Dask Plugin does not obey working_directory attribute correctly (dev branch) #6

@drelu

Description

Pilot Description:

import os

RESOURCE_URL_HPC = "slurm://localhost"
WORKING_DIRECTORY = os.path.join(os.environ["HOME"], "work")

{
    "resource": RESOURCE_URL_HPC,
    "working_directory": WORKING_DIRECTORY,
    "type": "dask",
    "number_of_nodes": 1,
    "cores_per_node": 2,
    "gpus_per_node": 0,
    "queue": "debug",
    "walltime": 30,
    "project": "m4408",
    "conda_environment": "/pscratch/sd/l/luckow/conda/quantum-mini-apps2",
    "scheduler_script_commands": ["#SBATCH --constraint=cpu"]
}
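
With this description, the per-pilot sandbox and the stdout/stderr files should be created under the configured working_directory ($HOME/work), not under /tmp. A minimal illustration of expected vs. actual layout (paths and placeholder UUIDs are illustrative):

import os

WORKING_DIRECTORY = os.path.join(os.environ["HOME"], "work")

# Expected: per-pilot sandbox inside the configured working_directory, e.g.
#   ~/work/pcs-<uuid>/dask-<uuid>/pq-150a4.stdout
expected_sandbox = os.path.join(WORKING_DIRECTORY, "pcs-<uuid>", "dask-<uuid>")

# Actual (see the generated script below): everything is placed under /tmp,
# which is node-local on the compute node.
actual_sandbox = os.path.join("/tmp", "pcs-<uuid>", "dask-<uuid>")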

Generated SLURM script:

#!/bin/bash
#SBATCH -n 2
#SBATCH -N 1
#SBATCH -J pq-150a4
#SBATCH -t 0:30:00

#SBATCH -A m4408

#SBATCH -o /tmp/pcs-d3275f34-5f53-4a0c-befb-edc5c652f99c/dask-150a08b6-5ecd-11ef-bf98-0040a68706da/pq-150a4.stdout
#SBATCH -e /tmp/pcs-d3275f34-5f53-4a0c-befb-edc5c652f99c/dask-150a08b6-5ecd-11ef-bf98-0040a68706da/pq-150a4.stderr
#SBATCH -q debug
#SBATCH --constraint=cpu
conda activate /pscratch/sd/l/luckow/conda/quantum-mini-apps2
cd /tmp/pcs-d3275f34-5f53-4a0c-befb-edc5c652f99c/dask-150a08b6-5ecd-11ef-bf98-0040a68706da
python -m pilot.plugins.dask.bootstrap_dask -t dask -p 2 -s /tmp/pcs-d3275f34-5f53-4a0c-befb-edc5c652f99c/dask_scheduler -n pilot-8da0d299-4dd7-49d1-9b8f-c02dc918c01d

Issue: The generated script ignores the configured working_directory and places the sandbox, stdout/stderr, and scheduler files under /tmp, which is only locally available on the worker node (i.e. after the job ends it is difficult to access the output).
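
A possible direction for a fix is to derive the sandbox from the pilot description and fall back to /tmp only when no working_directory is configured. This is a sketch of the expected behaviour; the function name is illustrative, not the plugin's actual API:

import os
import uuid

def resolve_sandbox(pilot_compute_description: dict) -> str:
    """Return the per-pilot sandbox directory for the Dask plugin.

    Uses the configured working_directory when present and falls back to
    /tmp otherwise; sketch only, not the current dev-branch implementation.
    """
    base = pilot_compute_description.get("working_directory") or "/tmp"
    sandbox = os.path.join(base, f"pcs-{uuid.uuid4()}")
    os.makedirs(sandbox, exist_ok=True)
    return sandbox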
