Launch Positron on Alpine (CU Boulder) or amc-bodhi (CU Anschutz) HPC clusters.
This script allocates a compute node on your HPC cluster and provides SSH connection instructions for remote development with Positron. It uses a ProxyJump SSH configuration to connect through the login node to your allocated compute node.
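The ProxyJump hop can also be expressed as a one-off `ssh -J` command; a minimal sketch with a hypothetical username and compute-node name (only `login-ci.rc.colorado.edu` comes from this guide — the rest are placeholders):

```shell
# Hypothetical user and compute node, for illustration only.
user="jdoe"
login="login-ci.rc.colorado.edu"   # Alpine login node
node="c3cpu-a2-u1-1"               # example allocated compute node

# -J jumps through the login node to reach the compute node in one step,
# which is what the ProxyJump line in the generated SSH config encodes:
cmd="ssh -J ${user}@${login} ${user}@${node}"
echo "$cmd"
```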
- Alpine (CU Boulder): Uses SLURM job scheduler
- amc-bodhi (CU Anschutz): Uses SLURM job scheduler
- Access to Alpine or amc-bodhi HPC cluster
- Positron installed on your local machine
- SSH key configured for cluster access
- amc-bodhi only: Connected to AMC VPN
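If you don't yet have an SSH key, one can be created locally before running setup; a sketch (the ed25519 type and default path are conventional choices, not requirements of the script):

```shell
# Create an ed25519 key pair if one doesn't already exist.
# (-N "" means no passphrase; supply one if you prefer.)
mkdir -p ~/.ssh && chmod 700 ~/.ssh
test -f ~/.ssh/id_ed25519.pub || ssh-keygen -q -t ed25519 -f ~/.ssh/id_ed25519 -N ""
# This public key is what ssh-copy-id installs on the cluster:
cat ~/.ssh/id_ed25519.pub
```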
Run the setup command from your local machine:
```bash
# For Alpine:
./positron-remote.sh setup alpine

# For amc-bodhi:
./positron-remote.sh setup bodhi
```

This will:
- Copy your local SSH public key to the cluster (via `ssh-copy-id`)
- Create a Positron Server symlink on scratch storage (Alpine only: `$HOME` has limited space, `/scratch/alpine` has more room)
- Print recommended Positron settings
Important notes (Alpine):
- `/scratch/alpine` is purged of files not accessed within 90 days
- If the directory is purged, Positron will automatically reinstall the server when you next connect
- You may need to re-run `./positron-remote.sh setup alpine` to recreate the symlink
- For more details on how Positron Remote-SSH works, see: https://positron.posit.co/remote-ssh.html#how-it-works-troubleshooting
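To check whether a purge has left the symlink dangling, something like this can be run on Alpine (the `~/.positron-server` path assumes the symlink layout created by the setup step):

```shell
link="$HOME/.positron-server"
# -L: the path is a symlink; ! -e: its target no longer exists (i.e. was purged)
if [ -L "$link" ] && [ ! -e "$link" ]; then
    status="purged"   # re-run ./positron-remote.sh setup alpine
else
    status="ok"
fi
echo "$status"
```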
By default, R and Python sessions terminate when Positron disconnects. On HPC, brief network interruptions are common and you don't want to lose your session within a running SLURM allocation. Add this to your Positron settings.json (local machine):
```json
{
  "kernelSupervisor.shutdownTimeout": "never"
}
```

This keeps R/Python sessions alive on the remote host so you can reconnect without losing your work.
Run the script to allocate a compute node:

```bash
./positron-remote.sh alpine
```

Check your job status:

```bash
squeue -u $USER
```

Wait until your job is in the "R" (running) state. Then view the connection instructions:

```bash
cat logs/positron-<JOB_ID>.out
```

Replace `<JOB_ID>` with your actual job ID from `squeue`.
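Rather than re-running `squeue` by hand, the wait can be scripted; a sketch (the `wait_for_running` helper is illustrative, not part of positron-remote.sh):

```shell
# Poll squeue until the job reaches RUNNING ("R" in the default listing).
# -h drops the header, -j selects the job, -o %T prints the full state name.
wait_for_running() {
    job_id=$1
    until [ "$(squeue -h -j "$job_id" -o %T)" = "RUNNING" ]; do
        sleep 10
    done
    echo "job $job_id is running"
}
```

Calling `wait_for_running <JOB_ID>` then blocks until the allocation starts.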
1. Open Positron on your local machine
2. Press `Cmd+Shift+P` (Mac) or `Ctrl+Shift+P` (Windows/Linux) and select "Remote-SSH: Open SSH Configuration File"
3. Paste in your SSH config (from the log file) and save:

   ```
   Host positron-alpine-<JOB_ID>
       HostName <compute-node>
       User <your-username>
       ProxyJump <your-username>@login-ci.rc.colorado.edu
       ForwardAgent yes
       ServerAliveInterval 60
       ServerAliveCountMax 3
   ```

4. Press `Cmd+Shift+P` (Mac) or `Ctrl+Shift+P` (Windows/Linux) and select "Remote-SSH: Connect to Host"
5. Choose `positron-alpine-<JOB_ID>` from the list
6. Positron will install its server components on the remote node automatically
Always cancel your job to free resources:

```bash
scancel <JOB_ID>
```

Important: You must be connected to the AMC VPN before proceeding.
Run the script to allocate a compute node:

```bash
./positron-remote.sh bodhi
```

Check your job status:

```bash
squeue -u $USER
```

Wait until your job is in the "R" (running) state. Then view the connection instructions:

```bash
cat logs/positron-<JOB_ID>.out
```

Replace `<JOB_ID>` with your actual job ID from `squeue`.
1. Open Positron on your local machine
2. Press `Cmd+Shift+P` (Mac) or `Ctrl+Shift+P` (Windows/Linux) and select "Remote-SSH: Open SSH Configuration File"
3. Paste in your SSH config (from the log file) and save:

   ```
   Host positron-bodhi-<JOB_ID>
       HostName <compute-node>
       User <your-username>
       ProxyJump <your-username>@amc-bodhi.ucdenver.pvt
       ForwardAgent yes
       ServerAliveInterval 60
       ServerAliveCountMax 3
   ```

4. Press `Cmd+Shift+P` (Mac) or `Ctrl+Shift+P` (Windows/Linux) and select "Remote-SSH: Connect to Host"
5. Choose `positron-bodhi-<JOB_ID>` from the list
6. Positron will install its server components on the remote node automatically
Always cancel your job to free resources:

```bash
scancel <JOB_ID>
```

Resources are configured via SLURM directives in `positron-remote.sh`. Default values per cluster:
| Setting | Alpine | amc-bodhi |
|---|---|---|
| `--time` | 8 hours | 8 hours |
| `--mem` | 24 GB | 20 GB |
| `--partition` | `amilan` | `normal` |
| `--qos` | `normal` | `normal` |
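In a SLURM batch script these defaults correspond to header directives along these lines (a hypothetical reconstruction of the Alpine values; the actual directives live in positron-remote.sh):

```shell
#!/bin/bash
#SBATCH --time=08:00:00        # 8-hour allocation
#SBATCH --mem=24G              # 24 GB of memory
#SBATCH --partition=amilan     # Alpine's general CPU partition
#SBATCH --qos=normal
```

Editing the corresponding lines in the script changes what the allocation requests.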
See Alpine documentation for available options.
- Check queue status: `squeue -u $USER`
- Check available resources: `sinfo`
- Verify your account has hours: `curc-quota`
- Ensure the SSH config was added to your local `~/.ssh/config` (not on Alpine)
- Verify the job is running: `squeue -u $USER`
- Check the log file for the correct hostname and job ID
- Verify your local SSH public key is on the cluster (re-run `./positron-remote.sh setup alpine`)
- The SSH connection may time out when idle; the job itself will continue running.
- Reconnect using the same SSH host entry.
- The SSH config generated by the script includes `ServerAliveInterval` and `ServerAliveCountMax` to reduce idle timeouts.
- The Positron client and server must be exactly the same version. If you update Positron on your local machine, the remote `~/.positron-server` may hold an old version.
- Delete the remote server and reconnect:

  ```bash
  # On Alpine:
  rm -rf /scratch/alpine/${USER}/.positron-server
  # Or, if not using the scratch symlink:
  rm -rf ~/.positron-server
  ```

- Positron will automatically reinstall the correct server version on reconnect.
- Extensions installed on your local machine are not automatically available on the remote host.
- After connecting to a remote session, install any needed extensions from the Extensions panel — they will be installed on the remote server.
R interpreter discovery can be unreliable on remote systems. If you don't see R under "Start Session" (even though Python interpreters appear):
If you find you can use the R versions available through the module system, please let me know.
- Install R through mamba/conda on the remote system:

  ```bash
  mamba install -c conda-forge r-base
  ```
- Enable conda discovery in Positron settings (on your local machine):
  - Press `Cmd+,` (Mac) or `Ctrl+,` (Windows/Linux) to open Settings
  - Search for "Positron R Interpreters Conda Discovery"
  - Enable the checkbox, or add to your `settings.json`:

    ```json
    { "positron.r.interpreters.condaDiscovery": true }
    ```
- Manually trigger interpreter discovery:
  - Press `Cmd+Shift+P` (Mac) or `Ctrl+Shift+P` (Windows/Linux)
  - Select "Interpreter: Discover all interpreters"
After these steps, R should appear in the interpreter dropdown and start successfully.
Note: There are multiple discussions in the Positron repository about interpreter discovery issues, though solutions may vary by system configuration.
- The compute node allocation will run for the full time requested or until you cancel it
- Always remember to `scancel` your job when done to free resources
- Log files are stored in the `logs/` directory with the pattern `positron-<JOB_ID>.out`