# copy this into your home folder and remove this first line
#!/bin/bash -l

## Below we ask for:
# - only 1 machine (machines have at least 20 CPUs)
# - only one task (you rarely need more than one task when not using many machines)
# - we indicate we plan on using at most a single CPU, and 1G of RAM.
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=1G
#

#SBATCH --partition fast.q
# (You can use multiple partitions, comma separated)

#SBATCH --time=0-00:15:00   # 0 days, 00 hours, 15 minutes, 00 seconds
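# A sketch of the comma-separated multi-partition form; any partition name
# other than fast.q is hypothetical here, so check `sinfo` for the real
# partition names on your cluster before using it:
# #SBATCH --partition fast.q,std.q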
#
#SBATCH --output=myjob_%j.stdout
#
#SBATCH --job-name=test
#SBATCH --export=ALL

## Constraints and Features (advanced users)
# Many of Merced's nodes have features that other nodes do not have; for example, GPUs
# or an InfiniBand interconnect for MPI jobs.
#
# You may want to request only nodes with these features, using the --constraint
# option and specifying a list of what you wish to request. Use `sinfo -o "%20N %10c %10m %25f"`
# to see which features are available. Currently:
#   ib:  InfiniBand (for fast, low-latency IO or MPI jobs)
#   gpu: Graphics Processing Units, split into K20m and P100 if you need to be more specific.
# Examples of requesting features:
#   #SBATCH --constraint=ib
#   #SBATCH --constraint=K20m,ib   (InfiniBand and a K20m GPU)
#

# This submission file will run a simple set of commands. All stdout will be
# captured in myjob_XXXX.stdout (as specified by the Slurm --output directive above).
# This job file uses a shared-memory parallel environment and requests 1 core.
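# A minimal example payload, a sketch only: the original file shows just the
# Slurm header, so the commands below are assumptions; replace them with your
# own work. They print where the job ran and how many CPUs were allocated,
# which is useful for a first test submission.
echo "Running on host: $(hostname)"
echo "Allocated CPUs per task: ${SLURM_CPUS_PER_TASK:-1}"
sleep 2   # keep the job alive briefly so you can see it with `squeue -u $USER`
echo "Done."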