Overview
A quick overview of the available hardware systems and how to use them.
To log in to the cluster, please use one of our login nodes. These systems are intended for programming, debugging, and the preparation and postprocessing of batch jobs.
They are not intended for production runs: processes that consume more than 20 minutes of CPU time may be killed without warning.
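For example, a login could look like the following sketch; both the user ID `ab123456` and the hostname `login.hpc.example.org` are placeholders, since the actual login node names are not listed in this section.

```bash
# Log in to one of the login nodes (hostname and user ID are placeholders;
# use the login node name given in the cluster documentation)
ssh -l ab123456 login.hpc.example.org
```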
For file transfers (remote and local) you have to use our dedicated Data Transfer Nodes.
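A transfer through a Data Transfer Node could then look like the following sketch; the hostname `copy.hpc.example.org` and the target path are placeholders.

```bash
# Copy a local file to the cluster via a Data Transfer Node
# (hostname and destination path are placeholders)
rsync -avP results.tar.gz ab123456@copy.hpc.example.org:~/results/
```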
All compute jobs have to be submitted to Slurm, which allocates node resources from the computing queues. Note that it is not possible to log in directly to a compute node without a Slurm job running on that node.
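A minimal batch job could look like the following sketch; the resource values (48 tasks, 30 minutes) are illustrative only, and any partition or account options required on this cluster are omitted.

```bash
#!/usr/bin/env bash
#SBATCH --job-name=example        # name shown in the queue
#SBATCH --ntasks=48               # e.g. one full CLAIX-2018-MPI node (48 cores)
#SBATCH --time=00:30:00           # walltime limit
#SBATCH --output=job.%j.log       # stdout/stderr file, %j expands to the job ID

# the commands below run on the compute node(s) allocated by Slurm
srun hostname
```

The script would be submitted with `sbatch jobscript.sh`; `squeue -u $USER` shows its state in the queue.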
| Node type | #Nodes | accumulated peak performance (TFlop/s) | accumulated #cores | Mio Core-h per month | Mio Core-h per year | Project Categories (**) |
|---|---|---|---|---|---|---|
| CLAIX-2018-MPI (installed Dec 2018): 2 Intel Xeon Platinum 8160 ("SkyLake") processors, 48 cores per node | 1243 | 4009 | 59664 | 44 | 525 | PREP |
| CLAIX-2018-GPU (installed Dec 2018): as CLAIX-2018-MPI, plus 2 NVIDIA Volta V100 GPUs per node coupled via NVLink | 54 | 843 | 2592 | 0.84 (*) | 10 (*) | PREP |
(*) only counting the host processors' compute capability
(**) If you need a different node type for your project category, please add a reason for your requirements.
An overview of the available machinery and the corresponding project categories can be found here:
What is project-based resource management?