Queue Partitions
In the batch system, the CLAIX hardware is organized into queues, or partitions, of nodes. Each partition consists of a collection of node types with similar specifications.
The following table shows how the various node types are organized into the partitions that can be selected within Slurm; example job scripts are given below the table.
Abbreviations and notes:
- p.N. - per Node
- unless stated otherwise, all CPU codenames refer to Intel CPUs
- the value '#Cores per Node' counts physical cores (not the Hyper-Threading 'CPUs' reported by the operating system). Hyper-Threading is disabled by default on our nodes; for the few nodes with HT enabled, remarks are posted here.
The columns cover Slurm information, node information, and cluster information.

| Node Type | Partition | Features | max recomm. memory per node [MB] | default memory per task [MB] | Hardware Node Type | CPU Codename | CPU Model | Clock Speed [GHz] | #Nodes | #Sockets per Node | #Cores per Socket | #Cores per Node | Memory per Node [GB] | SSD/HDD size [GB] | Sum Sockets | Sum Cores | Sum Memory [GB] | Beginning of operation |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ncm | c18m | skylake, skx8160, hpcwork | 187,200 | 3,900 | Intel HNS2600BPB | Skylake | Platinum 8160 | 2.1 | 1,032 | 2 (*) | 24 (*) | 48 | 192 | 480 | 2,064 | 49,536 | 198,144 | December 2018 |
| nrm | c18m | skylake, skx8160, hpcwork | 187,200 | 3,900 | Intel HNS2600BPB | Skylake | Platinum 8160 | 2.1 | 211 | 2 (*) | 24 (*) | 48 | 192 | 480 | 422 | 10,128 | 40,512 | February 2019 |
| ncg | c18g | skylake, skx8160, hpcwork | 187,200 | 3,900 | Supermicro 1029GQ-TVRT-01 | Skylake | Platinum 8160 | 2.1 | 48 | 2 (*) | 24 (*) | 48 | 192 | 480 | 96 | 2,304 | 9,216 | March 2019 |
| nrg | c18g | skylake, skx8160, hpcwork | 187,200 | 3,900 | Supermicro 1029GQ-TVRT-01 | Skylake | Platinum 8160 | 2.1 | 6 | 2 (*) | 24 (*) | 48 | 192 | 480 | 12 | 288 | 1,152 | March 2019 |
(*) Sub-NUMA Clustering is enabled for the Skylake CPUs, which means there are 4 NUMA nodes with 12 cores each. Slurm interprets these NUMA nodes as sockets, so these nodes appear in Slurm as having 4 sockets with 12 cores per socket.
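
As an illustration, the sketch below shows how one of these partitions could be selected in a batch script. It is a minimal example, not a site-mandated template: the job name, time limit, and application binary `./my_mpi_app` are placeholders, and the memory request simply restates the 3,900 MB default memory per task from the table.

```bash
#!/usr/bin/env bash
### Minimal sketch of a job script for the c18m partition (MPI job on two nodes).
### All concrete values below are illustrative placeholders.

#SBATCH --job-name=claix_example        # arbitrary job name
#SBATCH --partition=c18m                # partition from the table above
#SBATCH --constraint=skx8160            # request nodes with the 'skx8160' feature
#SBATCH --nodes=2                       # two Skylake nodes
#SBATCH --ntasks-per-node=48            # one MPI rank per physical core
#SBATCH --mem-per-cpu=3900M             # matches the default memory per task
#SBATCH --time=01:00:00                 # wall-clock limit (placeholder)
#SBATCH --output=claix_example.%j.log   # %j expands to the job ID

srun ./my_mpi_app                       # './my_mpi_app' is a placeholder binary
```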
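
Because of the Sub-NUMA Clustering note above, socket-related Slurm options on these nodes refer to the 12-core NUMA domains rather than the two physical sockets. The following sketch, again with placeholder names, requests one multi-threaded task per NUMA domain; `./my_hybrid_app` and the OpenMP setup are assumptions for illustration.

```bash
#!/usr/bin/env bash
### Sketch: one task per NUMA domain on a Skylake node with Sub-NUMA Clustering.
### Slurm sees 4 "sockets" with 12 cores each, so 4 tasks of 12 CPUs fill a node.

#SBATCH --partition=c18m
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4        # one task per Slurm "socket" (NUMA domain)
#SBATCH --ntasks-per-socket=1      # keep each task within a single NUMA domain
#SBATCH --cpus-per-task=12         # 12 physical cores per NUMA domain
#SBATCH --time=00:30:00            # wall-clock limit (placeholder)

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}   # one OpenMP thread per core
srun ./my_hybrid_app               # placeholder hybrid MPI+OpenMP binary
```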
Further systems are integrated into the RWTH High Performance Computing environment through the Integrative Hosting service.