Overview
A quick overview of the available hardware systems and how to use them.
To log in to the cluster, please use one of our Dialog Systems. These systems should be used for programming, debugging, and the preparation and postprocessing of batch jobs.
They are not intended for production runs: processes that have consumed more than 20 minutes of CPU time may be killed without warning.
For file transfers (remote and local), you have to use our dedicated Data Transfer Nodes.
For testing the proper start-up of MPI jobs, we provide dedicated MPI backend nodes.
All production jobs have to be submitted to the workload manager SLURM and will run on one of the batch system nodes. Note that it is not possible to log in directly to one of these nodes.
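A quick way to verify that MPI start-up works (for example on the MPI backend nodes, or in a small SLURM test job) is to compile and launch a minimal MPI program with a few ranks. The following is only an illustrative sketch: the file name mpi_hello.c and the use of the mpicc compiler wrapper are assumptions, not a prescribed workflow for this cluster.

```c
/* mpi_hello.c - minimal MPI start-up test (illustrative sketch).
 * Compile, for example:  mpicc mpi_hello.c -o mpi_hello
 * Each rank prints its rank number and the total number of ranks.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* initialize the MPI runtime  */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* rank of this process        */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of MPI ranks   */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut down the MPI runtime   */
    return 0;
}
```

If every rank prints its greeting, process start-up and the MPI runtime are working; production-sized runs of the actual application are then submitted through SLURM as described above.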
| | Node type | #Nodes | Accumulated peak performance (TFLOPS) | Accumulated #cores | Mio Coreh per month | Mio Coreh per year | Project Categories (**) |
|---|---|---|---|---|---|---|---|
| CLAIX-2016 (installed Nov 2016) | CLAIX-2016-SMP: 8 Intel Xeon E7-8860 v4 ("Broadwell") processors, 144 cores per node | 2 | 40 | 1152 | 0.83 | 10.09 | All categories upon special requirements |
| CLAIX-2018 (installed Dec 2018) | CLAIX-2018-MPI: 2 Intel Xeon Platinum 8160 ("Skylake") processors, 48 cores per node | ~1250 | 2800 | 60000 | 44 | 525 | PREP |
| | CLAIX-2018-GPU: like CLAIX-2018-MPI, plus 2 NVIDIA V100 ("Volta") GPUs per node coupled with NVLink | 54 | ~750 | 1152 | 0.84 (*) | 10 (*) | PREP |
(*) only counting the host processors' compute capability
(**) If you need a different node type for your project category, please give a reason for your requirements.
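As a worked example of the core-hour columns (reading them as the monthly and yearly capacity of the accumulated cores): 60,000 cores × 8,760 h ≈ 525 Mio Coreh per year and 60,000 cores × 730 h ≈ 44 Mio Coreh per month; likewise, 1,152 cores × 8,760 h ≈ 10.09 Mio Coreh per year.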
An overview of the available machines and the corresponding project categories can be found here:
What is project-based resource management?