Overview
A quick overview of the available hardware systems and how to use them.
To log in to the cluster, please use one of our login nodes. These systems are intended for programming, debugging, and the preparation and post-processing of batch jobs.
They are not intended for production runs: processes that have consumed more than 20 minutes of CPU time may be killed without warning.
For file transfers (remote and local), use our dedicated Data Transfer Nodes.
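For illustration, a minimal sketch of pulling results through such a node with `rsync` over SSH, driven from Python; the hostname `dtn.example.org`, the username, and the paths are hypothetical placeholders, not actual node names:

```python
import subprocess

# Hypothetical data transfer node hostname and paths (placeholders only).
remote = "user@dtn.example.org:/home/user/results/"
local = "./results/"

# Copy the remote directory to the local machine via rsync over SSH.
subprocess.run(["rsync", "-avz", remote, local], check=True)
```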
All compute jobs must be submitted to Slurm, which allocates node resources from the computing queues. Note that it is not possible to log in directly to a compute node unless you have a Slurm job running on that node.
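As a minimal sketch of such a submission, the following Python snippet writes a small batch script and hands it to `sbatch`; the job name, time limit, and resource values are hypothetical placeholders, not site defaults:

```python
import subprocess

# Minimal Slurm batch script; all resource values below are placeholders.
job_script = """\
#!/bin/bash
#SBATCH --job-name=example_job
#SBATCH --ntasks=1
#SBATCH --time=00:10:00
#SBATCH --output=example_job.%j.log

echo "Running on $(hostname)"
"""

with open("example_job.sh", "w") as f:
    f.write(job_script)

# Submit the script to Slurm; sbatch prints the assigned job ID.
result = subprocess.run(["sbatch", "example_job.sh"],
                        capture_output=True, text=True, check=True)
print(result.stdout.strip())
```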
| Node type | #Nodes | Total TFLOPS | Total #Cores / #GPUs | Total Mio. Core-h | Project Categories |
|---|---|---|---|---|---|
| CLAIX-2023-HPC (installed Jan 2024) | 632 | 4077 (CPU) | 63552 | Tier-2: 346; Tier-3: 185 | PREP, RWTH open, RWTH thesis, RWTH lecture, RWTH small |
| CLAIX-2023-ML (installed Jan 2024) | 52 | 335 (CPU) + 7072 (GPU) | 4992 / 208 | Tier-2: 27 (*); Tier-3: 4 (*); WestAI: 12 (*) | PREP, RWTH open, RWTH thesis, RWTH lecture, RWTH small, WestAI |
(*) One GPU-h corresponds to 24 Core-h.
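As a worked example of this conversion, a small Python helper with purely illustrative job numbers:

```python
CORE_H_PER_GPU_H = 24  # conversion factor from the footnote above


def gpu_h_to_core_h(gpu_hours: float) -> float:
    """Convert consumed GPU-hours into the equivalent accounted Core-hours."""
    return gpu_hours * CORE_H_PER_GPU_H


# Example: a job using 4 GPUs for 12 hours consumes 48 GPU-h,
# which is accounted as 48 * 24 = 1152 Core-h.
print(gpu_h_to_core_h(4 * 12))  # 1152.0
```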
An overview of the available hardware systems and the corresponding project categories can be found here:
What is project-based resource management?