Overview
A quick overview of the available hardware systems and how to use them.
To log in to the cluster, please use one of our login nodes. These systems are intended for programming, debugging, and the preparation and post-processing of batch jobs.
They are not intended for production runs: processes that consume more than 20 minutes of CPU time may be killed without warning.
For file transfers (remote and local), please use our dedicated Data Transfer Nodes.
All compute jobs must be submitted via Slurm, which allocates node resources from the computing queues. Note that it is not possible to log in directly to a compute node unless you have a Slurm job running on that node.
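A minimal Slurm batch script might look like the following sketch; the account value is a placeholder, and the exact options (partition, account, resource limits) depend on your project:

```shell
#!/bin/bash
#SBATCH --job-name=example        # name shown in squeue
#SBATCH --nodes=1                 # request one node
#SBATCH --ntasks-per-node=4       # four MPI tasks on that node
#SBATCH --time=00:10:00           # wall-clock limit (hh:mm:ss)
#SBATCH --account=<project-id>    # placeholder: your project account

# Commands below run on the allocated compute node(s).
srun hostname
```

Submit the script with `sbatch job.sh` and monitor it with `squeue --me`.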
| Node type | #Nodes | Total TFLOPS | Total cores / GPUs | Total Mio core-h | Project categories |
|---|---|---|---|---|---|
| **CLAIX-2023** (installed Jan 2024) | | | | | |
| CLAIX-2023-HPC: 2x Intel Xeon 8468 "Sapphire Rapids", 96 cores per node, InfiniBand | 632 | 4077 (CPU) | 63552 | Tier-2: 346 | PREP |
| | | | | Tier-3: 185 | RWTH open, RWTH thesis, RWTH lecture, RWTH small |
| CLAIX-2023-ML: 2x Intel Xeon 8468 "Sapphire Rapids", 96 cores per node, 4x NVIDIA H100 96 GB HBM2e per node, InfiniBand | 52 | 335 (CPU) + 7072 (GPU) | 4992 / 208 | Tier-2: 27 (*) | PREP |
| | | | | Tier-3: 4 (*) | RWTH open, RWTH thesis, RWTH lecture, RWTH small |
| | | | | WestAI: 13 (*) | WestAI |
| **CLAIX-2018** (installed Dec 2018) | | | | | |
| CLAIX-2018-MPI: 2x Intel Xeon Platinum 8160 "SkyLake", 48 cores per node | 1243 | 4009 | 59664 | 525 | PREP |
| CLAIX-2018-GPU: 2x NVIDIA Volta (V100-SXM2) GPUs per node | 54 | 843 | 2592 / 108 | 10 (*) | PREP |
(*) One GPU-hour corresponds to 24 core-hours.
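This conversion factor can be applied directly when estimating how much of a project budget a GPU job will consume; a small sketch, assuming the footnote's 1 GPU-h = 24 core-h rate:

```shell
# Budget estimate for a hypothetical job: 4 GPUs for 12 hours.
CORE_H_PER_GPU_H=24               # from the table footnote
GPUS=4
HOURS=12
GPU_H=$(( GPUS * HOURS ))         # 4 * 12 = 48 GPU-h
CORE_H=$(( GPU_H * CORE_H_PER_GPU_H ))  # 48 * 24 = 1152 core-h
echo "${GPU_H} GPU-h = ${CORE_H} core-h"
```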
An overview of the available hardware and the corresponding project categories can be found here:
What is project-based resource management?