
Commit bd13b4f

Fixing some typos
1 parent 1e3bc57 commit bd13b4f

1 file changed: +22 -22 lines changed

docs/guides/usegpus.md

Lines changed: 22 additions & 22 deletions
@@ -11,20 +11,20 @@ First, you will need to install `bqskit-qfactor-jax`. This can easily done by us
pip install bqskit-qfactor-jax
```

-This command will install also all the dependencies including BQSKit and JAX with GPU support.
+This command will also install all the dependencies including BQSKit and JAX with GPU support.
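
As a quick sanity check of the GPU-enabled install (an illustrative one-liner, not from the guide itself), you can list the devices JAX detects; at least one GPU/CUDA device should appear:
```bash
# Print the accelerators JAX can see after installation.
python -c "import jax; print(jax.devices())"
```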

## Optimizing a Circuit Using QFactor-Sample and the Gate Deletion Flow
This section explains how to optimize a quantum circuit using QFactor-Sample and the gate deletion flow.

-First we load the circuit to be optimized using the Circuit class.
+First, we load the circuit to be optimized using the Circuit class.
```python
from bqskit import Circuit

# Load a circuit from QASM
in_circuit = Circuit.from_file("circuit_to_opt.qasm")
```

-Then we create the instniator instance, and set the number of multistarts to 32.
+Then we create the instantiator instance and set the number of multistarts to 32.
```python
from qfactorjax.qfactor_sample_jax import QFactorSampleJax

@@ -89,9 +89,9 @@ For other usage examples, please refer to the [examples directory](https://githu

## Setting Up a Multi-GPU Environment

-To run BQSKit with multiple GPUs, you need to set up the BQSKit runtime properly. Each worker should be assigned to a specific GPU by leveragig NVIDIA's CUDA_VISIBLE_DEVICES enviorment variable. Several workers can use the same GPU by utilizing [NVIDIA's MPS](https://docs.nvidia.com/deploy/mps/). You can set up the runtime on a single server ( or interactive node on a cluster) or using SBATCH on several nodes. You can find scripts to help you set up the runtime in this [link](https://github.com/BQSKit/bqskit-qfactor-jax/tree/main/examples/bqskit_env_scripts).
+To run BQSKit with multiple GPUs, you need to set up the BQSKit runtime properly. Each worker should be assigned to a specific GPU by leveraging NVIDIA's CUDA_VISIBLE_DEVICES environment variable. Several workers can use the same GPU by utilizing [NVIDIA's MPS](https://docs.nvidia.com/deploy/mps/). You can set up the runtime on a single server (or an interactive node on a cluster) or using SBATCH on several nodes. You can find scripts to help you set up the runtime in this [link](https://github.com/BQSKit/bqskit-qfactor-jax/tree/main/examples/bqskit_env_scripts).
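
To illustrate how the scripts pin workers to GPUs, here is a minimal sketch with placeholder values; it assumes the `bqskit-worker` launcher used by the linked env scripts, which takes the number of workers to spawn. Each launch restricts visibility to a single device:
```bash
# Sketch only: each invocation sees exactly one GPU via CUDA_VISIBLE_DEVICES.
CUDA_VISIBLE_DEVICES=0 bqskit-worker 4 &   # 4 workers sharing GPU 0
CUDA_VISIBLE_DEVICES=1 bqskit-worker 4 &   # 4 workers sharing GPU 1
```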

-You may configure the number of GPUs to use on each server and also the number of workers on each GPU. If you use too many workers on the same GPU, you will run out of memory and experince an out-of-memory exception. If you are using QFactor, you may use the following table as a starting configuration and adjust the number of workers according to your specific circuit, unitary size, and GPU performance. If you are using QFactor-Sample, start with a single worker and increase if the memory premits it. You can use the `nvidia-smi` command to check the GPU usage during execution; it specifies the utilization of the memory and the execution units.
+You may configure the number of GPUs to use on each server and also the number of workers on each GPU. If you use too many workers on the same GPU, you will run out of memory and experience an out-of-memory exception. If you are using QFactor, you may use the following table as a starting configuration and adjust the number of workers according to your specific circuit, unitary size, and GPU performance. If you are using QFactor-Sample, start with a single worker and increase if the memory permits it. You can use the `nvidia-smi` command to check the GPU usage during execution; it specifies the utilization of the memory and the execution units.

| Unitary Size | Workers per GPU |
|----------------|------------------|
@@ -101,7 +101,7 @@ You may configure the number of GPUs to use on each server and also the number o
| 7 | 2 |
| 8 and more | 1 |
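
The `nvidia-smi` check mentioned above can also be run in a loop while you tune the worker count; a minimal sketch using standard `nvidia-smi` options:
```bash
# Report GPU and memory utilization every 5 seconds during a compilation run.
nvidia-smi --query-gpu=index,utilization.gpu,memory.used,memory.total --format=csv -l 5
```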

-Make sure that in your Python script you are creating the compiler object with the appropriate IP address. When running on the same node as the server, you can use `localhost` as the IP address.
+Make sure that in your Python script, you are creating the compiler object with the appropriate IP address. When running on the same node as the server, you can use `localhost` as the IP address.

```python
with Compiler('localhost') as compiler:
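    # Hypothetical continuation (not part of this diff): compile through the
    # connected runtime; `workflow` stands for a pass list assumed to be defined earlier.
    out_circuit = compiler.compile(in_circuit, workflow)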
@@ -110,12 +110,12 @@ with Compiler('localhost') as compiler:


### Single Server Multiple GPUs Setup
-This section of the guide explains the main concepts in the [single_server_env.sh](https://github.com/BQSKit/bqskit-qfactor-jax/blob/main/examples/bqskit_env_scripts/single_server_env.sh) script template and how to use it. The script creates a GPU enabled BQSKit runtime and is easily configured for any system.
+This section of the guide explains the main concepts in the [single_server_env.sh](https://github.com/BQSKit/bqskit-qfactor-jax/blob/main/examples/bqskit_env_scripts/single_server_env.sh) script template and how to use it. The script creates a GPU-enabled BQSKit runtime and is easily configured for any system.

-After you configure the template (replacing every <> with an appropriate value) run it, and then in a seperate shell execute your python scirpt that uses this runtime enviorment.
+After you configure the template (replacing every <> with an appropriate value), run it, and then in a separate shell execute your Python script that uses this runtime environment.
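
For example (the compilation script name below is a placeholder):
```bash
# Shell 1: bring up the GPU-enabled BQSKit runtime from the configured template.
bash single_server_env.sh

# Shell 2: run your compilation script, which connects to the runtime (e.g. with Compiler('localhost')).
python your_compile_script.py
```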

-The enviorment script has the following parts:
-1. Variable configuration - choosing the number of GPUs to use, and the number of workrs per GPU. Moreover, the scratch dir path is configured, later to be used for logging.
+The environment script has the following parts:
+1. Variable configuration - choosing the number of GPUs to use, and the number of workers per GPU. Moreover, the scratch dir path is configured and later used for logging.
```bash
#!/bin/bash
hostname=$(uname -n)
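
# Illustrative values only (the template uses <> placeholders; the variable names
# match those used elsewhere in the script, and the derived total is an assumption):
amount_of_gpus=4
amount_of_workers_per_gpu=8
total_amount_of_workers=$(( amount_of_gpus * amount_of_workers_per_gpu ))
scratch_dir=/path/to/scratch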
@@ -149,7 +149,7 @@ wait_for_server_to_connect(){
done
}
```
-3. Creating the log directory, and deleting any old log files that conflicts with the current run logs.
+3. Creating the log directory, and deleting any old log files that conflict with the current run logs.
```bash
mkdir -p $scratch_dir/bqskit_logs

@@ -162,7 +162,7 @@ echo "Will start bqskit runtime with id $unique_id gpus = $amount_of_gpus and wo
rm -f $manager_log_file
rm -f $server_log_file
```
-4. Starting NVIDA MPS to allow an efficient execution of multiple works on a single GPU.
+4. Starting NVIDIA MPS to allow efficient execution of multiple workers on a single GPU.
```bash
echo "Starting MPS server"
nvidia-cuda-mps-control -d
@@ -175,15 +175,15 @@ bqskit-manager -x -n$total_amount_of_workers -vvv &> $manager_log_file &
manager_pid=$!
wait_for_outgoing_thread_in_manager_log
```
-6. Starting the BQSKit server indicating that there is a single manager in the current server. Waiting untill the server connects to the manager before continuing to start the workers.
+6. Starting the BQSKit server, indicating that there is a single manager on the current server. Waiting until the server connects to the manager before continuing to start the workers.
```bash
echo "starting BQSKit server"
bqskit-server $hostname -vvv &>> $server_log_file &
server_pid=$!

wait_for_server_to_connect
```
-7. Starting the workrs, each seeing only a specific GPU.
+7. Starting the workers, each seeing only a specific GPU.
```bash
echo "Starting $total_amount_of_workers workers on $amount_of_gpus gpus"
for (( gpu_id=0; gpu_id<$amount_of_gpus; gpu_id++ ))
@@ -200,15 +200,15 @@ echo quit | nvidia-cuda-mps-control
```


-### Multis-Server Multi-GPU Enviorment Setup
+### Multi-Server Multi-GPU Environment Setup

-This section of the guide explains the main concepts in the [init_multi_node_multi_gpu_slurm_run.sh](https://github.com/BQSKit/bqskit-qfactor-jax/blob/main/examples/bqskit_env_scripts/init_multi_node_multi_gpu_slurm_run.sh) [run_workers_and_managers.sh](https://github.com/BQSKit/bqskit-qfactor-jax/blob/main/examples/bqskit_env_scripts/run_workers_and_managers.sh) scripts and how to use them. After configuring the scripts (updating every <>), place both of them in the same directory and initate a an SBATCH command. These scripts assume a SLURM enviorment, but can be easily ported to other disterbutation systems.
+This section of the guide explains the main concepts in the [init_multi_node_multi_gpu_slurm_run.sh](https://github.com/BQSKit/bqskit-qfactor-jax/blob/main/examples/bqskit_env_scripts/init_multi_node_multi_gpu_slurm_run.sh) and [run_workers_and_managers.sh](https://github.com/BQSKit/bqskit-qfactor-jax/blob/main/examples/bqskit_env_scripts/run_workers_and_managers.sh) scripts and how to use them. After configuring the scripts (updating every <>), place both of them in the same directory and initiate an SBATCH command. These scripts assume a SLURM environment but can be easily ported to other job-scheduling systems.

```bash
sbatch init_multi_node_multi_gpu_slurm_run.sh
```

-The rest of this section exaplains in detail both of the scripts.
+The rest of this section explains both of the scripts in detail.

#### init_multi_node_multi_gpu_slurm_run
This is a SLURM batch script for running a multi-node BQSKit task across multiple GPUs. It manages job submission, environment setup, launching the BQSKit server and workers on different nodes, and the execution of the main application.
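
As a rough sketch of the submission side (the directive values below are placeholders, not taken from the actual template), such a batch script starts with standard SLURM directives before the configuration steps the guide walks through:
```bash
#!/bin/bash
#SBATCH --job-name=bqskit_multi_gpu   # placeholder values; adjust for your system
#SBATCH --nodes=2                     # number of servers to run managers and workers on
#SBATCH --gpus-per-node=4             # GPUs available on each node
#SBATCH --time=02:00:00
#SBATCH --output=bqskit_run_%j.log
```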
@@ -227,9 +227,9 @@ This is a SLURM batch script for running a multi-node BQSKit task across multipl
scratch_dir=<temp_dir>
```

-2. Shell environment setup - Please consulte with your HPC system admin to choose the apropriate modules to load that will enable you to JAX on NVDIA's GPUs. You may use NERSC's Perlmutter [documentation](https://docs.nersc.gov/development/languages/python/using-python-perlmutter/#jax) as a reference.
+2. Shell environment setup - Please consult with your HPC system admin to choose the appropriate modules to load that will enable you to run JAX on NVIDIA's GPUs. You may use NERSC's Perlmutter [documentation](https://docs.nersc.gov/development/languages/python/using-python-perlmutter/#jax) as a reference.
```bash
-### load any modules needed and activate the conda enviorment
+### load any modules needed and activate the conda environment
module load <module1>
module load <module2>
conda activate <conda-env-name>
@@ -257,9 +257,9 @@ while [ "$(cat "$managers_started_file" | wc -l)" -lt "$n" ]; do
done
```

-5. Starting the BQSKit server on the main node, and using SLURM's `SLURM_JOB_NODELIST` enviorment variable to indicate the BQSKit server the hostnames of the managers.
+5. Starting the BQSKit server on the main node, and using SLURM's `SLURM_JOB_NODELIST` environment variable to indicate to the BQSKit server the hostnames of the managers.
```bash
-echo "starting BQSKit server on main node"
+echo "starting BQSKit server on the main node"
bqskit-server $(scontrol show hostnames "$SLURM_JOB_NODELIST" | tr '\n' ' ') &> $scratch_dir/bqskit_logs/server_${SLURM_JOB_ID}.log &
server_pid=$!

@@ -334,7 +334,7 @@ stop_mps_servers() {
}
```

-Finaly, the script chekcs if GPUs are not needed, it spwans the manager with its default behaviour, else suing the "-x" argument, it indicates to the manager to wait for connecting workers.
+Finally, the script checks whether GPUs are needed: if they are not, it spawns the manager with its default behavior; otherwise, using the "-x" argument, it instructs the manager to wait for connecting workers.
```bash
if [ $amount_of_gpus -eq 0 ]; then
echo "Will run manager on node $node_id with n args of $amount_of_workers_per_gpu"
