160 changes: 74 additions & 86 deletions demo/qrmi/slurm-docker-cluster/INSTALL.md
# Installation

This document describes how to set up a local, container-based Slurm development environment and how to build and install QRMI and the SPANK plugin in a Slurm cluster.

## Set Up Local Development Environment

### Jump To
- [Prerequisite](#prerequisite)
- [Creating Docker-based Slurm Cluster](#creating-docker-based-slurm-cluster)
- [Building and Installing QRMI and the SPANK Plugin](#building-and-installing-qrmi-and-the-spank-plugin)
- [Running Primitive Job Examples in Slurm Cluster](#running-primitive-job-examples-in-slurm-cluster)
- [Running Serialized Jobs Using the QRMI Task Runner](#running-serialized-jobs-using-the-qrmi-task-runner)

### Prerequisite

A container manager such as [Podman](https://podman.io/getting-started/installation.html), [Rancher Desktop](https://rancherdesktop.io/), or [Docker](https://docs.docker.com/get-docker/).


### Creating Docker-based Slurm Cluster

You can skip the steps below if you already have a Slurm cluster for development.

#### 1. Creating your local workspace
```bash
mkdir -p <YOUR WORKSPACE>
cd <YOUR WORKSPACE>
git clone -b 0.9.0 https://github.com/giovtorres/slurm-docker-cluster.git
cd slurm-docker-cluster
```

Slurm Docker Cluster v0.9.0 uses the `SLURM_TAG` variable defined in `slurm-docker-cluster/.env` to specify the Slurm version. Currently, `SLURM_TAG` is set to `slurm-25-05-3-1`, which corresponds to a tag in Slurm's major release 25.05 from May 2025. Using a Slurm release prior to `slurm-24-05-5-1` requires rebuilding the SPANK plugin with `-DPRIOR_TO_V24_05_5_1` due to interface changes introduced in `slurm-24-05-5-1` (see the build section below).
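For example, pinning a different Slurm release only requires changing that variable before building the images. A minimal sketch, assuming the plain `KEY=value` format of the `.env` file (which, note, is created only after the patch described below is applied):

```bash
# slurm-docker-cluster/.env (illustrative; created after applying the patch below)
SLURM_TAG=slurm-24-05-5-1
```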
> **Reviewer comment (Collaborator):** I believe this description may lead to misunderstandings. It feels odd that the section titled “Cloning the Slurm Docker Cluster git repository” includes an explanation of the build option (`-DPRIOR_TO_V24_05_5_1`) for our SPANK plugin. This build option should be moved to the section that explains how to build the SPANK plugin itself. In this part, we should describe only how to change the version of Slurm being used.
>
> Additionally, the file `slurm-docker-cluster/.env` is created only after applying our patch file, which is described later. Since it does not exist in the original cloned repository, it is likely to confuse users.


#### 3. Cloning qiskit-community/spank-plugins and qiskit-community/qrmi

```bash
popd
patch -p1 < ./shared/spank-plugins/demo/qrmi/slurm-docker-cluster/file.patch
```

Rocky Linux 9 is used by default. If you want another operating system, you must apply an additional patch (see the CentOS Stream 9 and CentOS Stream 10 examples below). Patches are used to avoid the Slurm Docker Cluster requirement to include its copyright notice in repositories that copy the Slurm Docker Cluster code.

##### CentOS Stream 9

##### CentOS Stream 10

```bash
patch -p1 < ./shared/spank-plugins/demo/qrmi/slurm-docker-cluster/centos10.patch
```

#### 5. Building images
```bash
docker compose build --no-cache
```
Podman users must install `docker-compose`. macOS users can do this with `brew install docker-compose`.

#### 6. Starting a cluster

```bash
docker compose up -d
```

Use `docker ps` to check that the following 6 containers are running (a quick check is sketched after the list):

- c2 (Compute Node #2)
- c1 (Compute Node #1)
- slurmctld (Central Management Node)
- slurmdbd (Slurm DB Node)
- login (Login Node)
- mysql (Database node)
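A quick, illustrative way to verify this with standard `docker ps` formatting (the container names are the ones listed above):

```bash
docker ps --format '{{.Names}}: {{.Status}}' | sort
# Expect one "Up ..." line each for c1, c2, login, mysql, slurmctld, and slurmdbd.
```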

You now have a Slurm cluster as shown below:

<p align="center">
<img src="../../../docs/images/slurm-docker-cluster.png" width="640">
</p>

## Building and Installing QRMI and the SPANK Plugin

The following steps assume you are building code on `c1` (Compute Node #1). Other nodes are also acceptable.

1. Log in to c1

```bash
docker exec -it c1 bash
```

2. Creating a Python virtual environment under the shared volume **on c1**
> **Reviewer comment (Collaborator):** I don't think you need to describe "on c1" because we're working under the shared volume, not a node-specific filesystem.


```bash
python3.12 -m venv /shared/pyenv
source /shared/pyenv/bin/activate
pip install --upgrade pip
```

3. Building and installing [QRMI](https://github.com/qiskit-community/qrmi/blob/main/INSTALL.md) **on c1**

```bash
source ~/.cargo/env
cd /shared/qrmi
pip install -r requirements-dev.txt
maturin build --release
pip install /shared/qrmi/target/wheels/qrmi-*.whl
```

4. Building the [SPANK plugin](../../../plugins/spank_qrmi/README.md) **on c1**

```bash
cd /shared/spank-plugins/plugins/spank_qrmi
mkdir build
cd build
cmake ..
make
```
This will install QRMI from the [QRMI git repository](https://github.com/qiskit-community/qrmi). If you are building locally for development, it might be easier to build QRMI from source mounted at `/shared/qrmi`, as shown below:

```bash
cd /shared/spank-plugins/plugins/spank_qrmi
mkdir build
cd build
cmake -DQRMI_ROOT=/shared/qrmi ..
make
```
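If your cluster runs a Slurm release older than `slurm-24-05-5-1` (see the note in the cloning step above), rebuild with the compatibility define. A minimal sketch; passing the option as `=ON` is an assumption, while the define name itself comes from the note above:

```bash
# Illustrative only: rebuild for Slurm releases prior to slurm-24-05-5-1.
cd /shared/spank-plugins/plugins/spank_qrmi/build
cmake -DPRIOR_TO_V24_05_5_1=ON ..   # "=ON" is an assumed value for the option
make
```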


5. Creating `qrmi_config.json`

Modify [this example](https://github.com/qiskit-community/spank-plugins/blob/main/plugins/spank_qrmi/qrmi_config.json.example) to fit your environment and add it to `/etc/slurm` or another location accessible to the Slurm daemons on each compute node you intend to use (an abridged, illustrative sketch is shown below).
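The sketch below only illustrates the general idea, that each entry names a QPU resource and carries the credentials (such as the API key and instance CRN discussed next) that QRMI needs to reach it. The field names shown are placeholders, not the authoritative schema; use the linked example file as the source of truth.

```json
{
  "resources": [
    {
      "name": "<backend name, e.g. ibm_torino>",
      "type": "<resource type, e.g. qiskit-runtime-service>",
      "environment": {
        "<credential and endpoint settings for this resource>": "<values such as an API key or CRN>"
      }
    }
  ]
}
```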

IBM Quantum Platform (IQP) provides limited, free access to IBM Quantum systems. After registering with IBM Cloud and IQP, the list of accessible IBM Quantum systems can be found [here](https://quantum.cloud.ibm.com/computers). The qrmi_config.json file will require an API key and a CRN for each IQP system. API key instructions can be found [here](https://cloud.ibm.com/iam/apikeys). The CRN for each IQP system can be found [here](https://quantum.cloud.ibm.com/computers). For example, click on "ibm_torino" then open the “Instance access” section for the "ibm_torino" CRN.
> **Reviewer comment (Collaborator):** I think the term “free access” may cause confusion with IQP's “open plan access.” I also don't understand why "limited" and "free access" are listed together in the first place. Furthermore, I'm not sure whether it is appropriate for this document to include information that is specific to IBM IQP. Considering that more quantum vendors may appear in the future, I would suggest creating a separate file to store vendor-specific information.
>
> If you intend to include information about IQP "open plan access" here, it would be helpful to also describe the necessary workaround. Open plan access does not work out of the box in this setup.


6. Installing the SPANK plugin

Create `/etc/slurm/plugstack.conf` and ensure it has the following line (assuming `qrmi_config.json` was added to `/etc/slurm`):

```bash
optional /shared/spank-plugins/plugins/spank_qrmi/build/spank_qrmi.so /etc/slurm/qrmi_config.json
```


`plugstack.conf`, `qrmi_config.json`, and `spank_qrmi.so` must be installed on the machines that run slurmd (compute nodes) as well as on the machines that run job allocation utilities such as salloc and sbatch (login nodes). Refer to the [SPANK documentation](https://slurm.schedmd.com/spank.html#SECTION_CONFIGURATION) for more details. One way to distribute the files in this Docker-based cluster is sketched below.
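This is illustrative only and assumes `/etc/slurm` is not already shared between your containers; if it is shared, creating the files once is enough:

```bash
# Illustrative only: copy the config files into each node that needs them.
for node in login c1 c2 slurmctld; do
  docker cp plugstack.conf    $node:/etc/slurm/plugstack.conf
  docker cp qrmi_config.json  $node:/etc/slurm/qrmi_config.json
done
```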

7. Checking SPANK plugin installation

After completing the steps above, `sbatch --help` should show the QPU resource option as shown below:

```bash
[root@c1 /]# sbatch --help
Options provided by plugins:
--qpu=names Comma separated list of QPU resources to use.
```

### Running Primitive Job Examples in Slurm Cluster

1. Logging in to the login node

```bash
docker exec -it login bash
cd /data # Or another directory shared between the login and compute nodes
```

2. Running Sampler job on the **login node**

```bash
sbatch /shared/spank-plugins/demo/qrmi/jobs/run_sampler.sh
```

3. Running Estimator job on the **login node**

```bash
sbatch /shared/spank-plugins/demo/qrmi/jobs/run_estimator.sh
```

4. Running Pasqal job on the **login node**

```bash
sbatch /shared/spank-plugins/demo/qrmi/jobs/run_pulser_backend.sh
```

5. Checking primitive results

Once the above scripts have completed, you should find `slurm-{job_id}.out` files in the current directory. For example,
```bash
cat slurm-81.out # Assuming job_id is 81
{'backend_name': 'test_eagle'}
>>> Observable: ['IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII...',
'IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII...',
> Metadata: {'shots': 4096, 'target_precision': 0.015625, 'circuit_metadata': {}, 'resilience': {}, 'num_randomizations': 32}
```
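If the output file has not appeared yet, the job may still be queued or running. Standard Slurm commands (not specific to this demo) can be used to check, for example:

```bash
squeue          # list pending and running jobs
sacct -j 81     # accounting summary for job 81 (replace with your job id)
```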

### Running Serialized Jobs Using the QRMI Task Runner

It is possible to run JSON-serialized jobs directly using a command-line utility called `qrmi_task_runner`. See the [task_runner examples](https://github.com/qiskit-community/qrmi/python/qrmi/tools/README.md) for details.
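As with the primitive examples above, the demo jobs directory also includes a batch script for the task runner that can be submitted from the login node:

```bash
sbatch /shared/spank-plugins/demo/qrmi/jobs/run_task.sh
```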

## END OF DOCUMENT
31 changes: 18 additions & 13 deletions docs/howtos/ibmcloud_cos.md
# Using IBM Cloud COS as S3-compatible storage

This document describes how to use IBM Cloud COS as S3-compatible storage, specifically how to obtain the AWS Access Key ID (`QRMI_IBM_DA_AWS_ACCESS_KEY_ID`), the AWS Secret Access Key (`QRMI_IBM_DA_AWS_SECRET_ACCESS_KEY`), and the S3 endpoint URL (`QRMI_IBM_DA_S3_ENDPOINT`).

## Prerequisite

An IBM Cloud Object Storage instance and bucket. Go to the [IBM Cloud Object Storage web page](https://cloud.ibm.com/objectstorage/overview) to create an instance and a bucket in your instance (an illustrative CLI alternative is sketched below).
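An instance can also be created with the IBM Cloud CLI. This is an illustrative sketch only; the instance name is hypothetical, the plan and location arguments are assumptions, and the bucket itself is still created from the instance page:

```bash
# Illustrative only: create a Cloud Object Storage instance named "my-cos".
ibmcloud resource service-instance-create my-cos cloud-object-storage standard global
```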

## How to obtain the AWS Access Key ID and Secret Access Key

To create your credentials, navigate to the `Service credentials` tab on your instance's page. All instances are listed on the [IBM Cloud Instances web page](https://cloud.ibm.com/objectstorage/instances). Click `New Credential` in the `Service credentials` tab to create your HMAC (Hash-based Message Authentication Code) credentials.
> **Reviewer comment (@ohtanim, Collaborator, Mar 1, 2026):** In our previous discussion, I suggested not describing IBM Cloud UI labels or UI operations directly in our documentation as a practice. The IBM Cloud UI changes frequently, and we would have to constantly monitor those changes and update our files each time. To address this issue, I thought you proposed using the IBM Cloud API, which changes far less frequently, to achieve the same purpose, and you did so below. I expected the UI labels and operations to be removed as well.

> **Reviewer comment (Collaborator):** Alternatively, especially if these UI operation instructions are truly necessary, you could write something like: "The operations described here for the IBM Cloud Web UI may differ from the actual interface. For details, please refer to the IBM Cloud documentation."

HMAC credentials consist of an Access Key and Secret Key paired for use with S3-compatible tools and libraries that require authentication. Users can create a set of HMAC credentials as part of a Service Credential by switching `Include HMAC Credential` to `On`, as shown below:

![include_HMAC_credential](https://cloud.ibm.com/docs-content/v4/content/3842758572478f973a02d6e5afad955eb1a777d2/cloud-object-storage/images/hmac-credential-dialog.jpg)

After the Service Credential is created, the HMAC credentials are included in the `cos_hmac_keys` field, as shown below. Click the `v` on the left to expand the full Service Credential. `access_key_id` is the AWS Access Key ID and `secret_access_key` is the AWS Secret Access Key.

```json
{
  "cos_hmac_keys": {
    "access_key_id": "<AWS Access Key ID>",
    "secret_access_key": "<AWS Secret Access Key>"
  },
  "iam_apikey_description": ...
```

You can also use the IBM Cloud CLI to create the credentials, as shown below. The `access_key_id` and `secret_access_key` are included in the command output.

```bash
ibmcloud resource service-key-create <key-name-without-spaces> Writer --instance-name "<instance name--use quotes if your instance name has spaces>" --parameters '{"HMAC":true}'
```
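If you need to display the credentials again later, the standard `ibmcloud resource service-key` command should work; illustrative usage:

```bash
ibmcloud resource service-key <key-name-without-spaces> --output json
```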

Refer to the [IBM Cloud documentation](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-uhc-hmac-credentials-main) for details about using HMAC credentials.

## How to obtain the S3 endpoint URL

S3 endpoints are listed in the [IBM Cloud Object Storage Regional Endpoints list](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-endpoints#endpoints-region). Choose the one that matches your IBM Cloud Object Storage instance. For example, if your instance is located in the `us-east` region, the endpoint for your instance is `https://s3.us-east.cloud-object-storage.appdomain.cloud`.
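As a sketch of how the three values fit together (using environment variables here is an assumption; set the values wherever your QRMI configuration expects them):

```bash
# Illustrative only: the variable names come from the introduction above.
export QRMI_IBM_DA_AWS_ACCESS_KEY_ID="<access_key_id from cos_hmac_keys>"
export QRMI_IBM_DA_AWS_SECRET_ACCESS_KEY="<secret_access_key from cos_hmac_keys>"
export QRMI_IBM_DA_S3_ENDPOINT="https://s3.us-east.cloud-object-storage.appdomain.cloud"
```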


## END OF DOCUMENT