21 changes: 20 additions & 1 deletion docs/guides/_toc.json
@@ -204,7 +204,6 @@
],
"collapsible": false
},

{
"title": "Qiskit Functions",
"children": [
@@ -637,6 +636,26 @@
}
]
},
{
"title": "High-performance compute",
"children": [
{
"title": "Quantum resource management interface (QRMI)",
"url": "/docs/guides/qrmi",
"isNew": true
},
{
"title": "SPANK plugin for QRMI",
"url": "/docs/guides/slurm-plugin",
"isNew": true
},
{
"title": "SPANK plugin user guide",
"url": "/docs/guides/slurm-hpc-ux",
"isNew": true
}
]
},
{
"title": "Visualization",
"children": [
140 changes: 140 additions & 0 deletions docs/guides/qrmi.mdx
@@ -0,0 +1,140 @@
---
title: Quantum resource management interface (QRMI)
description: Overview of the Quantum Resource Management Interface for integrating quantum resources into high-performance compute systems
---
{/* cspell:ignore QRMI, stubgen, maturin, Doxyfile, rowser */}

# Quantum resource management interface (QRMI)

The Quantum resource management interface (QRMI) is a vendor-agnostic library for high-performance compute (HPC) systems to access, control, and monitor the behavior of quantum computational resources. It acts as a thin middleware layer that abstracts away the complexities associated with controlling quantum resources through a set of simple APIs. Written in Rust, this interface also exposes Python and C APIs for ease of integration into nearly any computational environment.

Find the source code to build and deploy QRMI in this [GitHub repository](https://github.com/qiskit-community/qrmi).

An optional `task_runner` command line tool to execute quantum payloads against quantum hardware is included in the Python package. Find the [full documentation](https://github.com/qiskit-community/qrmi/blob/main/python/qrmi/tools/task_runner/README.md) in the GitHub repository.

## Build the QRMI libraries


This section shows how to build QRMI for C and Python.

### Requirements

QRMI supports the following operating systems:

```
AlmaLinux 9, Amazon Linux 2023, CentOS Stream 9, CentOS Stream 10,
Red Hat Enterprise Linux 8, Red Hat Enterprise Linux 9,
Red Hat Enterprise Linux 10, Rocky Linux 8, Rocky Linux 9, SUSE 15,
Ubuntu 22.04, Ubuntu 24.04, macOS Sequoia 15.1 or later
```

#### Compiling environment
* Rust compiler 1.91 or later ([installation instructions](https://www.rust-lang.org/tools/install))
* A C compiler conforming to the C11 standard: for example, GCC (`gcc`) on Linux, or Clang (`clang-tools-extra`) for Rust `unknown` targets and cross-compilation
* `make/cmake` (make/cmake RPM for RHEL-compatible OS)
* `openssl` (openssl-devel RPM for RHEL-compatible OS)
* `zlib` (zlib-devel RPM for RHEL-compatible OS)
* Python 3.11, 3.12, or 3.13 (For Python API)
* Libraries and header files needed for Python development (python3.1x-devel RPM for RHEL-compatible OS):
  * `/usr/include/python3.1x`
  * `/usr/lib64/libpython3.1x.so`
* Doxygen (to generate the C API documentation), depending on the OS:
  * `dnf install doxygen` for Linux (RHEL, CentOS, Rocky Linux, and so on)
  * `apt install doxygen` for Linux (Ubuntu and so on)
  * `brew install doxygen` for macOS

#### Runtime environment
* gcc (libgcc RPM for RHEL-compatible OS)
* openssl (openssl-libs RPM for RHEL-compatible OS)
* zlib (zlib RPM for RHEL-compatible OS)
* Python 3.11, 3.12, or 3.13 (For Python API)
* Libraries and header files needed for Python development (python3.1x-devel RPM for RHEL-compatible OS)
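
Before building, you can sanity-check that the main tools from the lists above are available on your `PATH`. This is only a convenience sketch; package names and the exact tool set vary by distribution.

```shell
# Check for the toolchain pieces listed above; adjust the list per OS.
for cmd in cargo gcc make cmake openssl python3 doxygen; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "found:   $cmd"
  else
    echo "missing: $cmd"
  fi
done
```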

---

Build the Rust/C API library by running the following commands from the root of your cloned QRMI repository.
```shell-session
. ~/.cargo/env
cargo clean
cargo build --release
```



To build the Python package, first set up a Python environment and install the required dependencies.
```shell-session
. ~/.cargo/env
cargo clean
python3.12 -m venv ~/py312_qrmi_venv
source ~/py312_qrmi_venv/bin/activate
pip install --upgrade pip
pip install -r requirements-dev.txt
```

Create the stub files for the Python code.
```shell-session
. ~/.cargo/env
cargo run --bin stubgen --features=pyo3
```

Lastly, build the Python wheels for distribution to your hosts.
```shell-session
source ~/py312_qrmi_venv/bin/activate
CARGO_TARGET_DIR=./target/release/maturin maturin build --release
```

The wheel is created in the `./target/release/maturin/wheels` directory. You can distribute it and install it on your hosts with `pip install <wheel>`.



## Logging

QRMI uses the [log crate](https://crates.io/crates/log) for logging. To see detailed QRMI runtime logs, set the `RUST_LOG` environment variable to the desired log level. Supported levels are `error`, `warn`, `info`, `debug`, and `trace`; the default level is `warn`.

If you specify `trace`, the logs also include the underlying HTTP transactions.


```shell-session
RUST_LOG=trace <your QRMI executable>
```

Example logs:
```shell-session
[2025-08-16T03:47:38Z DEBUG reqwest::connect] starting new connection: https://iam.cloud.ibm.com/
[2025-08-16T03:47:38Z DEBUG direct_access_api::middleware::auth] current token ...
```


## Build the API documentation

The Rust API documentation can be created by running
```shell-session
. ~/.cargo/env
cargo doc --no-deps --open
```

The C API documentation can be created by using doxygen:
```shell-session
doxygen Doxyfile
```

This will create an HTML document under the `./html` directory, which you can open in a web browser.


The Python API documentation is generated with `pydoc`. After entering the virtual environment with the QRMI package installed, run the following commands:
```shell-session
python -m pydoc -p 8290
Server ready at http://localhost:8290/
Server commands: [b]rowser, [q]uit
server> b
```

Then, open the following page in your browser:
```shell-session
http://localhost:8290/qrmi.html
```

Stop the server with
```shell-session
server> q
```
172 changes: 172 additions & 0 deletions docs/guides/slurm-hpc-ux.mdx
@@ -0,0 +1,172 @@
---
title: SPANK plugin user guide
description: User guide for the quantum resource management SPANK plugin
---
{/* cspell:ignore QRMI, SBATCH, srun, Pasqal, slurmd, Doxyfile, Gres */}

# SPANK plugin user guide

Slurm QPU resource definitions determine what physical resources can be used by Slurm jobs in high-performance compute (HPC) environments. User source code should be agnostic to specific backend instances and, whenever possible, even to backend types. This keeps source code portable, while the QPU selection criteria remain part of the resource definition (which is configuration rather than source code).

## Configure QPU resources in job creation

<Admonition type="caution">
Note that this plugin is under active development and the exact syntax is subject to change.
</Admonition>

### Administrator scope

HPC administrators configure the SPANK plugin to specify what physical resources can be provided to Slurm jobs.
This configuration contains all the information needed to have Slurm jobs access the physical resources, such as endpoints and access credentials.

Read the [`qrmi_config.json.example`](https://github.com/qiskit-community/spank-plugins/blob/main/plugins/spank_qrmi/qrmi_config.json.example) for a comprehensive example configuration.

In `slurm.conf`, QPU resources can be assigned to some or all nodes for usage:
```
...
GresTypes=qpu,name
NodeName=node[1-5000] Gres=qpu,name:ibm_fez
...
```

### User scope

HPC users submit jobs that use the QPU resources defined by the HPC administrator; the name attribute references those definitions. During a Slurm job's runtime, backend selection can also be based on criteria other than a predefined name referring to a specific backend (for example, capacity and error-rate qualifiers that help down-select among the defined set of backends).

There might be additional environment variables required, depending on the backend type.

`SBATCH` parameters point to one or more QPU resources assigned to the application as generic resources.
Environment variables set by the plugin supply the necessary information to the application (see the [HPC application scope](#hpc-application-scope) section for details).

```shell
#SBATCH --time=100
#SBATCH --output=<LOGS_PATH>
#SBATCH --gres=qpu:1
#SBATCH --qpu=ibm_fez
#SBATCH --... # other options

srun ...
```

To use more QPU resources, add more QPUs to the `--qpu` parameter:

```shell
#SBATCH --time=100
#SBATCH --output=<LOGS_PATH>
#SBATCH --gres=qpu:3
#SBATCH --qpu=my_local_qpu,ibm_fez,ibm_marrakesh
#SBATCH --... # other options

srun ...
```

### HPC application scope

HPC applications use the Slurm QPU resources assigned to the Slurm job.

Environment variables provide more details for use by the application; for example, `SLURM_JOB_QPU_RESOURCES` lists the quantum resource names (comma-separated if several are provided).
These variables will be used by QRMI. (See the README files in the various QRMI directories ([IBM](https://github.com/qiskit-community/qrmi/blob/main/examples/qiskit_primitives/ibm/README.md), [pasqal](https://github.com/qiskit-community/qrmi/blob/main/examples/qiskit_primitives/pasqal/README.md)) for more details.)
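
For illustration, an application can read that variable and split it into individual resource names like this (a minimal sketch; the value shown is a hypothetical stand-in for what a real job would see):

```python
import os

# The SPANK plugin exports SLURM_JOB_QPU_RESOURCES for the job; outside of a
# Slurm job we set a hypothetical value so the sketch is self-contained.
os.environ.setdefault("SLURM_JOB_QPU_RESOURCES", "my_local_qpu,ibm_fez")

# Split the comma-separated list into individual resource names.
names = [n for n in os.environ["SLURM_JOB_QPU_RESOURCES"].split(",") if n]
print(names)  # → ['my_local_qpu', 'ibm_fez']
```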

```python
from dotenv import load_dotenv
from qiskit import QuantumCircuit
from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager

# Using an IBM QRMI flavor:
from qrmi.primitives import QRMIService
from qrmi.primitives.ibm import SamplerV2, get_backend

# Define the circuit
circuit = QuantumCircuit(2)
circuit.h(0)
circuit.cx(0, 1)
circuit.measure_all()

# Load the credentials needed to access the service, then instantiate the
# QRMI service and get a quantum resource (take the first one, should there
# be several of them)
load_dotenv()
service = QRMIService()

resources = service.resources()
qrmi = resources[0]

# Generate a transpiler target from the backend configuration and properties, and transpile
backend = get_backend(qrmi)
pm = generate_preset_pass_manager(
    optimization_level=1,
    backend=backend,
)

isa_circuit = pm.run(circuit)

# Run the circuit
options = {}
sampler = SamplerV2(qrmi, options=options)

job = sampler.run([(isa_circuit,)])
print(f">>> Job ID: {job.job_id()}")

result = job.result()

if job.done():
    pub_result = result[0]
    print(f"Counts for the 'meas' output register: {pub_result.data.meas.get_counts()}")
elif job.cancelled():
    print("Cancelled")
elif job.errored():
    print(qrmi.task_logs(job.job_id()))
```

See the [examples directory](https://github.com/qiskit-community/qrmi/tree/main/examples/qiskit_primitives/) for example files.

### Backend specifics
#### IBM Direct Access API
##### Administrator scope
Configuration of Direct Access API backends (HPC admin scope) includes the endpoints and credentials for the Direct Access API and its authentication services, as well as for the S3 endpoint.
Specifically, this includes:

* IBM Cloud&reg; API key for creating bearer tokens
* Endpoint of the Direct Access API
* S3 bucket and access details

Access credentials should not be visible to HPC users or other non-privileged users on the system.
Therefore, sensitive data can be put in separate files, which can be access-protected accordingly.

Note that Slurm has full access to the backend.
This has several implications:

* The Slurm plugin is responsible for multi-tenancy (ensuring that users don't see results of other users' jobs)
* The HPC cluster side is responsible for vetting users (who is allowed to access the QPU) and for granting access accordingly
* The capacity and priority of the QPU usage is solely managed through Slurm; there is no other scheduling of users involved outside of Slurm

##### User scope
Execution lanes are not exposed to the HPC administrator or user directly.
Instead, during runtime, there can be two different modes that HPC users can specify:

* `exclusive=true` specifies that no other jobs can use the resource at the same time. An exclusive mode job gets all execution lanes and cannot run at the same time as a non-exclusive job
* `exclusive=false` allows other jobs to run in parallel. In this case, there can be as many jobs as there are execution lanes, all running at the same time, and the job is assigned one lane
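
The two modes can be summarized as a toy admission rule. This is illustrative only: the actual scheduling is done by Slurm and the plugin, and `can_admit` is a hypothetical helper, not part of QRMI.

```python
def can_admit(running_jobs, new_job_exclusive, total_lanes):
    """Toy model of the exclusive/non-exclusive lane rule described above."""
    if new_job_exclusive:
        # An exclusive job gets all lanes, so nothing else may be running.
        return not running_jobs
    if any(job["exclusive"] for job in running_jobs):
        # A running exclusive job blocks all other jobs.
        return False
    # Non-exclusive jobs each occupy one execution lane.
    return len(running_jobs) < total_lanes


print(can_admit([], True, 4))                            # → True
print(can_admit([{"exclusive": True}], False, 4))        # → False
print(can_admit([{"exclusive": False}] * 4, False, 4))   # → False
```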

#### Qiskit Runtime Service
##### User scope

It is expected that users specify additional access details in environment variables.
Specifically, this includes the following:

* Qiskit Runtime service instance (CRN, Cloud Resource Name)
* Endpoint for Qiskit Runtime (unless auto-detected from the CRN)
* API key, which has access to the CRN
* S3 instance, bucket, and access token/credentials for data transfers

These details determine under which user and service instance the Qiskit Runtime service is used.
Accordingly, IBM Quantum&reg; Platform scheduling considers the user's and service instance's capabilities for scheduling.

At this time, users must provide the above details (no shared cluster-wide quantum access).

#### Pasqal Cloud Services
##### HPC admin scope
There is no specific setup required from HPC admins to use Pasqal Cloud Services (PCS).

##### HPC user scope
It is expected that users specify additional access details in environment variables.
Currently, this includes the following:

* PCS resource to target (FRESNEL, EMU_FRESNEL, EMU_MPS)
* Authorization token