Using partitions
In general, you do not need to request a partition explicitly in your job script: the scheduler selects one automatically based on the project and the resources you request in your batch script.
Every project has a default partition and possibly additional partitions that jobs can be submitted to.
In some cases, however (e.g., performance analysis on specific hardware), it can be useful to submit a job to a specific partition explicitly, as shown in the sketch below.
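A job can be pinned to a particular partition with the `--partition` directive. The following is a minimal sketch using standard Slurm directives; the partition name is taken from the table below, while the job name, resource values, and workload are placeholders:

```bash
#!/usr/bin/env bash
#SBATCH --job-name=partition_demo   # hypothetical job name
#SBATCH --partition=c18m            # explicit partition request (usually not needed)
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G                   # on c18m billed as 16 * 0.25 = 4 core-equivalents
#SBATCH --time=01:00:00

# Replace with your actual workload.
srun hostname
```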
| partition | # nodes | # cores per node | memory per node | billing | remarks |
|-----------|---------|------------------|-----------------|---------|---------|
| c18m | 1240 | 48 | 192 GB | cpu=1, mem=0.25 | default partition for the "default" project |
| c18g | 54 | 48 | 192 GB | cpu=1, mem=0.25, gpu=24 | 2 V100 GPUs per node; a Volta GPU must be requested to submit to this partition |
| c16s | 2 | 144 | 1024 GB | cpu=1, mem=0.140625 | a project is required to submit to this partition |
| devel | 8 | 48 | 192 GB | | designed for testing jobs and programs; maximum runtime: 25 minutes; has to be used without a project (see the sketch below the table). Further information can be found here |
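As a sketch of a quick test run on the devel partition (assuming projects map to Slurm accounts, so no `--account` is given, and keeping the runtime below the 25-minute limit; the test executable is a placeholder):

```bash
#!/usr/bin/env bash
#SBATCH --partition=devel       # testing partition from the table above
#SBATCH --time=00:20:00         # must stay below the 25-minute maximum runtime
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
# No project/account is specified: devel has to be used without a project.

srun ./my_test_program          # hypothetical test executable
```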
Explanation of the billing values in the table (a worked example follows the list):

- Each job that runs on the HPC cluster consumes core-hours (cpu-hours); cpu, mem, and gpu usage are all billed in core-hour equivalents.
- cpu is the basic reference resource with a billing weight of 1 per core. Using all 48 cores of a c18m node therefore corresponds to a billing value of 48.
- mem is billed in core-equivalents per gigabyte. On a c18m node with 48 cores and 192 GB of memory, 1 GB of requested memory corresponds to 0.25 cores (billing), so 4 GB correspond to 1 core (billing).
- gpu: each c18g node has 2 GPUs, so 1 GPU corresponds to half a node, i.e. 24 cores (billing).
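To make the arithmetic concrete, here is a small illustrative shell snippet (not an official tool) computing the per-resource billing values for a hypothetical c18m job. How cpu and mem billing are combined into a single charge is not specified in the table, so only the individual values are shown:

```bash
#!/usr/bin/env bash
# Illustrative billing arithmetic for a hypothetical c18m job:
# 8 cores and 64 GB of memory for 10 hours.

cores=8      # cpu weight is 1 per core
mem_gb=64    # c18m: 0.25 core-equivalents per GB (48 cores / 192 GB)
hours=10

cpu_billing=$cores                     # 8 core-equivalents
mem_billing=$(( mem_gb * 48 / 192 ))   # 64 GB * 0.25 = 16 core-equivalents

echo "cpu billing: ${cpu_billing} core-equivalents"
echo "mem billing: ${mem_billing} core-equivalents"
echo "mem-based core-hours over ${hours} h: $(( mem_billing * hours ))"
```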