Service: RWTH High Performance Computing (Linux)

Using partitions

In general, you do not need to explicitly request a partition in your job script: the partition is selected automatically depending on the project and the resources you request in your batch script.
Every project has a default partition and possibly additional partitions that jobs can be submitted to.


In general it is not necessary to submit a job to a specific partition, since the selection is automated based on the project and/or the specified job requirements. However, in some cases (e.g., performance analysis on specific hardware) an explicit choice can be relevant.
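For such a case, a minimal job script that explicitly pins the job to a partition could look like the following sketch. It uses the standard Slurm `--partition` option and the `c18m` partition name from the table below; job name, run time, and task count are placeholder values, and any project/account options your job normally needs are omitted here.

```shell
#!/usr/bin/env zsh
#SBATCH --job-name=partition_test
#SBATCH --partition=c18m    # explicit partition; usually selected automatically
#SBATCH --ntasks=1
#SBATCH --time=00:10:00

# print the node the job landed on
srun hostname
```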


partition  #nodes  #cores per node  mem per node  billing                  remarks
c18m       1240    48               192 GB        cpu=1 mem=0.25           default partition for the "default" project
c18g       54      48               192 GB        cpu=1 mem=0.25 gpu=24    2 V100 GPUs; a Volta GPU must be requested to submit to this partition
c16s       2       144              1024 GB       cpu=1 mem=0.140625      a project is needed to be able to submit to this partition
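As noted in the table, jobs only land on the c18g partition if they request a Volta GPU. A sketch of such a request is shown below; the exact generic-resource string (`gpu:volta:2`) is an assumption based on the table's "volta" wording, so please check the cluster documentation for the current syntax.

```shell
#!/usr/bin/env zsh
#SBATCH --job-name=gpu_test
#SBATCH --gres=gpu:volta:2   # request both V100 GPUs of a c18g node (syntax may differ)
#SBATCH --ntasks=1
#SBATCH --time=00:10:00

# show the GPUs visible to the job
srun nvidia-smi
```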

Explanation of the billing values of the table:

Each job that runs on the HPC cluster consumes core-hours (CPU-hours); cpu, mem, and gpu usage are all billed in core-hours.

cpu is the basic reference resource and has a billing value of 1. Using all 48 cores of a c18m node corresponds to a cpu billing value of 48.
mem is billed in core equivalents per gigabyte. Consider a c18m node with 48 cores and 192 GB of memory: 1 GB of requested memory equals 0.25 cores (billing), and 4 GB equal 1 core (billing).
gpu: each c18g node has 2 GPUs, so 1 GPU equals half a node, i.e. 24 cores (billing).
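The arithmetic above can be sketched as follows. This is purely illustrative (weights taken from the table; it is not an official accounting tool, and how the cluster's Slurm configuration combines the individual components is not specified here):

```shell
#!/usr/bin/env sh
# Billing weights from the table: 1 GB memory on c18m = 0.25 cores,
# 1 GB memory on c16s = 0.140625 cores, 1 GPU on c18g = 24 cores.

# c18m: 4 GB of requested memory is billed like 1 core
mem_gb=4
mem_billing=$(awk -v m="$mem_gb" 'BEGIN { print m * 0.25 }')
echo "$mem_billing"        # 4 GB on c18m -> 1 core equivalent

# c16s: requesting the full 1024 GB is billed like all 144 cores
mem_billing_c16s=$(awk 'BEGIN { print 1024 * 0.140625 }')
echo "$mem_billing_c16s"   # 1024 GB on c16s -> 144 core equivalents

# c18g: both GPUs together are billed like a full node (48 cores)
gpus=2
gpu_billing=$((gpus * 24))
echo "$gpu_billing"
```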

Last changed on 18.11.2022


Creative Commons license agreement
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Germany License