MPI backend systems
Abbreviations and notes:
- p.N. = per Node
- All CPU codenames refer to Intel CPUs unless stated otherwise.
- The value '#Cores per Node' counts physical cores, not the HyperThreading 'CPUs' reported by the OS (see the sketch after this list). HyperThreading is OFF by default on our nodes; for the few nodes with HT=ON, remarks are given below.
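To make the distinction concrete, here is a minimal sketch (plain C on a Linux node; the code is illustrative and not part of our tooling) that prints the OS-reported logical CPU count. On an HT=OFF node this matches '#Cores per Node'; on an HT=ON node it is a multiple of it (4x on the KNL nodes).

```c
/* Minimal sketch: print the logical CPU count the OS reports.
 * On HT=ON nodes this includes HyperThreading siblings, so it is
 * larger than the physical '#Cores per Node' value in the table
 * (e.g. 256 logical CPUs on a 64-core KNL node with 4-way HT). */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    long logical = sysconf(_SC_NPROCESSORS_ONLN);
    printf("OS-reported (logical) CPUs: %ld\n", logical);
    return 0;
}
```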
For testing the proper start-up of MPI jobs, we provide a dedicated small partition within the cluster (a minimal start-up test is sketched below the table). The Xeon Phi (KNL) nodes are not used for MPI tests by default; they are listed here as well because they are not integrated into SLURM.
The 'Sum' columns and 'Beginning of operation' give cluster-wide information; the remaining columns describe a single node.

Hardware Node Type | CPU Codename | CPU Model | Clock Speed [GHz] | #Nodes | #Sockets per Node | #Cores per Socket | #Cores per Node | Memory per Node [GB] | SSD/HDD size [GB] | Sum Sockets | Sum Cores | Sum Memory [GB] | Beginning of operation
---|---|---|---|---|---|---|---|---|---|---|---|---|---
Intel HNS2600BPB | Skylake | Platinum 8160 | 2.1 | 4 | 2 | 24 | 48 | 192 | 480 | 8 | 192 | 768 | February 2019
NEC | Xeon Phi (KNL) | 7210 | 1.3 | 15 | 1 | 64 | 64 | 196 | 240 | 15 | 960 | 2940 | November 2017
Note that the Xeon Phi (KNL) nodes have HyperThreading ON (4-way, i.e. 256 logical CPUs per 64-core node).
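As an illustration of the kind of start-up test run on the dedicated MPI test partition, the following is a minimal sketch: each rank reports its number and host name, which confirms that processes were launched and wired up across nodes. The partition name and module setup are site-specific and not given here; compile with the cluster's MPI compiler wrapper (e.g. mpicc) and launch via srun or mpirun.

```c
/* Minimal MPI start-up test (a sketch, assuming any standard MPI
 * implementation). Build e.g. with:  mpicc -o mpi_hello mpi_hello.c
 * Launch e.g. with:                  srun -N 2 -n 4 ./mpi_hello     */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Report which host each rank landed on; ranks spread over
     * several nodes indicate a successful multi-node start-up. */
    char host[MPI_MAX_PROCESSOR_NAME];
    int len;
    MPI_Get_processor_name(host, &len);

    printf("Rank %d of %d running on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}
```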