Job Parameters
This page gives a short overview of the most important computing job parameters. For a full list consult the Slurm documentation.
If you need help writing job scripts or submitting jobs to the Slurm queue, please consult the provided tutorial.
Note: Job parameters can be specified in a short and long form. The short form requires a space after the parameter, whereas the long form requires an "=" without any whitespace. Examples are provided below for clarity.
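For example, the same run-time request can be written in either form (the script name `jobscript.sh` is just a placeholder):

```shell
# Short form: parameter, a space, then the value
sbatch -t 00:15:00 jobscript.sh

# Long form: parameter, "=", then the value, with no whitespace
sbatch --time=00:15:00 jobscript.sh
```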
Table of Contents
- Setting the Job Name
- Setting the Run Time
- Selecting the Output File
- Setting the Number of Cores
- Setting the Memory
- Using Nodes exclusively
- Submitting with a Project
- Submitting a GPU Job
- Submitting to a specific Partition
- Using BeeOND
- See also
Setting the Job Name
- long format: `--job-name`
- short format: `-J`
- example: `-J my_job`
- default: `interactive` when using `salloc`, and the name of the job script when using `sbatch`
- remarks: Avoid using special characters; only alphanumeric characters are recommended.
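In a job script the name is set on an `#SBATCH` line; a minimal sketch (the name `my_job` is illustrative):

```shell
#!/usr/bin/env bash
# Set the job name inside the job script ...
#SBATCH --job-name=my_job

# ... or override it at submission time with: sbatch -J my_job jobscript.sh
```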
Setting the Run Time
- long format: `--time`
- short format: `-t`
- example: `--time=1-05:10:15` (1 day, 5 hours, 10 minutes, 15 seconds)
- default: 15 minutes
- remarks: Mind the maximum job run time of your computing project.
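As a sketch, the run time from the example above inside a job script; Slurm also accepts shorter forms such as `hours:minutes:seconds` or plain minutes:

```shell
#!/usr/bin/env bash
# Request 1 day, 5 hours, 10 minutes and 15 seconds of run time
#SBATCH --time=1-05:10:15
```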
Selecting the Output File
- long format: `--output`
- short format: `-o`
- example: `-o /home/ab123456/results.out`
- default: `output_%j.txt`, where `%j` is the job reference number
- remarks: Do not use `~` or variables like `$HOME` in the path! Use full, explicit paths as shown in the example. Combine STDERR with STDOUT for easier error analysis by avoiding the `--error` parameter.
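A minimal sketch with an explicit output path (`ab123456` stands in for your user name):

```shell
#!/usr/bin/env bash
# STDOUT goes to the given file; because --error is not set,
# STDERR is merged into the same file. %j expands to the job ID.
#SBATCH --output=/home/ab123456/results_%j.out
```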
Setting the Number of Cores
Depending on your use case, one or more of the following options can be set:
CPUs per Task (usually used for OpenMP or hybrid parallelisation)
- long format: `--cpus-per-task`
- short format: `-c`
- example: `-c 10`
- default: 1
Number of Tasks (usually used for MPI parallelisation)
- long format: `--ntasks`
- short format: `-n`
- example: `-n 10`
- default: 1
Number of Tasks per Node
- long format: `--ntasks-per-node`
- example: `--ntasks-per-node 2`
Number of Nodes
- long format: `--nodes`
- short format: `-N`
- example: `-N 1`
- default: will be set according to the number of tasks
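The options above can be combined; a hybrid MPI + OpenMP sketch (the program name is a placeholder):

```shell
#!/usr/bin/env bash
# 2 nodes x 2 MPI ranks per node x 4 OpenMP threads per rank = 16 cores
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=2
#SBATCH --cpus-per-task=4

# Give each MPI rank as many OpenMP threads as cores were requested
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun ./hybrid_program
```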
Setting the Memory
- long format: `--mem-per-cpu`
- example: `--mem-per-cpu 2G` (2 GB per core)
- default: depends on the selected partition.
- remarks:
  - Usually you do not need to set this parameter, as the default gives you the maximum memory without being billed for additional cores. If you need more memory than the default, choose a different partition.
  - Mind that memory is assigned per core. For hybrid calculations (e.g. MPI + OpenMP), take the number of cores per process into account when calculating the memory each process needs.
  - Slurm treats memory units as binary prefixed, i.e. the conversion factor is 1024, not 1000. Consequently, requesting `60G` is the same as requesting `61440M`. The difference is negligible for small amounts but can matter for jobs that require several hundred GB of memory.
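A sketch of how per-core memory adds up (the values are illustrative):

```shell
#!/usr/bin/env bash
# 4 MPI ranks x 1 core each x 2 GB per core = 8 GB for the whole job
#SBATCH --ntasks=4
#SBATCH --mem-per-cpu=2G
```

With `--cpus-per-task=4` added, each rank would own 4 cores and therefore 8 GB.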
Using Nodes exclusively
- long format: `--exclusive`
- default: not used
- remarks: For larger jobs, this is almost always what you want to use.
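A minimal sketch:

```shell
#!/usr/bin/env bash
# Reserve both allocated nodes entirely for this job,
# regardless of how many cores the tasks actually use
#SBATCH --nodes=2
#SBATCH --exclusive
```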
Submitting with a Project
- long format: `--account`
- short format: `-A`
- example: `-A rwthXXXX`
- default: your personal quota
- remarks: You must be a member of the computing time project to charge computing time to it.
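A minimal sketch (`rwthXXXX` stands in for your project ID):

```shell
#!/usr/bin/env bash
# Charge the consumed computing time to the given project
#SBATCH --account=rwthXXXX
```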
Submitting a GPU Job
- long format: `--gres=gpu:<N>`
- example: `--gres=gpu:2` (requests two GPUs)
- default: none
- remarks: `<N>` is the number of GPUs you want. Mind that your project needs to be allowed to use GPUs.
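A minimal sketch (the program name is a placeholder):

```shell
#!/usr/bin/env bash
# Request two GPUs for this job
#SBATCH --gres=gpu:2

srun ./gpu_program
```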
Submitting to a specific Partition
Depending on your computational needs (e.g. memory) you may want to specify a specific partition on the cluster.
- long format: `--partition`
- short format: `-p`
- example: `-p c23mm`
- default: depends on your computing time project
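A minimal sketch, using the partition name from the example above:

```shell
#!/usr/bin/env bash
# Submit to the c23mm partition instead of the default one
#SBATCH --partition=c23mm
```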
Using BeeOND
- long format: `--beeond`
- default: BeeOND is not used
- remarks: Details on how to use BeeOND (BeeGFS On-Demand) can be found on the page about the available filesystems. Please mind that your job will become exclusive if you use BeeOND.
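A minimal sketch; note that `--beeond` is a flag of this cluster's Slurm setup, not a generic Slurm option:

```shell
#!/usr/bin/env bash
# Start a BeeOND instance on the job's nodes;
# the job then runs exclusively on those nodes
#SBATCH --beeond
```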
See also
- First Steps in Submitting Computing Jobs
- Computing Project Management
- Available Partitions
- BeeOND: BeeGFS OnDemand