RWTH Compute Cluster Linux (HPC)

Overview

Brief Information

A quick overview of the available hardware systems and how to use them.


Detailed Information

In order to log in to the cluster, please use one of our Dialog Systems. These systems should be used for programming, debugging, and the preparation and postprocessing of batch jobs.
They are not intended for production runs: processes that have consumed more than 20 minutes of CPU time may be killed without warning.
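
For illustration, access to a Dialog System works via SSH. The host name and user ID below are placeholders, not the actual addresses of our Dialog Systems:

    # log in to a dialog (login) node via SSH; replace host name and user ID with your own
    ssh -l ab123456 login.hpc.example.de

    # add -X if you need X11 forwarding for graphical tools
    ssh -X -l ab123456 login.hpc.example.de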

For file transfers (remote and local) you have to use our dedicated Data Transfer Nodes.
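
As a sketch, a transfer from a local machine to the cluster through a Data Transfer Node could look as follows; the host name and paths are placeholders:

    # copy a single file to the cluster via a data transfer node
    scp results.tar.gz ab123456@copy.hpc.example.de:/home/ab123456/

    # rsync is usually the better choice for large or resumable transfers
    rsync -avP ./input_data/ ab123456@copy.hpc.example.de:/work/ab123456/input_data/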

For testing the proper start-up of MPI jobs, we provide dedicated MPI backends.
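
A minimal start-up test, assuming SLURM is used for the interactive allocation; the partition and module names are placeholders:

    # request a short interactive allocation on the MPI test backends
    salloc --ntasks=4 --time=00:05:00 --partition=mpi-test

    # load an MPI module and check that all four tasks start and print their host names
    module load intelmpi
    srun hostname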

All production jobs have to be submitted to the workload manager SLURM and will run on one of the batch system nodes. Note that it is not possible to log in directly to one of these nodes.
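
A minimal batch script might look like the following sketch; the job name, resource limits, module and program names are illustrative placeholders, not recommended settings:

    #!/usr/bin/env bash
    ### minimal SLURM job script (all values are placeholders)
    #SBATCH --job-name=example_job
    #SBATCH --output=example_job.%j.log
    #SBATCH --time=01:00:00        # wall-clock time limit (hh:mm:ss)
    #SBATCH --ntasks=48            # number of MPI tasks
    #SBATCH --mem-per-cpu=3900M    # memory per core

    module load intelmpi           # load the required MPI module (placeholder name)
    srun ./my_mpi_program          # start the MPI program under SLURM

Submit the script with "sbatch jobscript.sh" and check its status with "squeue -u $USER".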


Node types

CLAIX-2016 (installed Nov 2016)

CLAIX-2016-MPI
2 Intel E5-2650 v4 processors "Broadwell" (2.2 GHz, 12 cores each);
24 cores per node, 128 GB main memory per node (~5 GB main memory per core)
#Nodes: ~600
Accumulated peak TFlops: 600
Accumulated #cores: 14400
Core-h per month: 10.37 million
Core-h per year: 124.42 million
Project categories (**): JARA

CLAIX-2016-SMP
8 Intel E7-8860 v4 processors "Broadwell" (2.2 GHz, 18 cores each);
144 cores per node, 1 TB main memory per node (~7 GB main memory per core)
#Nodes: 8
Accumulated peak TFlops: 40
Accumulated #cores: 1152
Core-h per month: 0.83 million
Core-h per year: 10.09 million
Project categories (**): all categories upon special requirements

CLAIX-2016-GPU
like CLAIX-2016-MPI, plus 2 NVIDIA Pascal P100 GPUs per node
#Nodes: 9
Accumulated peak TFlops: 11.8 (*)
Accumulated #cores: 240 (*)
Core-h per month: 0.17 million (*)
Core-h per year: 1.89 million (*)
Project categories (**): all categories upon special requirements

CLAIX-2018 (installed Dec 2018)

CLAIX-2018-MPI
2 Intel Xeon Platinum 8160 processors "Skylake" (2.1 GHz, 24 cores each);
48 cores per node, 192 GB main memory per node (~4 GB main memory per core)
#Nodes: ~1250
Accumulated peak TFlops: 2800
Accumulated #cores: 60000
Core-h per month: 44 million
Core-h per year: 525 million
Project categories (**): PREP, JARA, BUND, RWTH open, RWTH thesis, RWTH lecture, RWTH small, RWTH medium

CLAIX-2018-GPU
like CLAIX-2018-MPI, plus 2 NVIDIA Volta V100 GPUs per node, coupled with NVLINK and with 16 GB HBM2 memory per GPU
#Nodes: 54
Accumulated peak TFlops: ~750
Accumulated #cores: 1152
Core-h per month: 0.84 million (*)
Core-h per year: 10 million (*)
Project categories (**): PREP, JARA, BUND, RWTH open, RWTH thesis, RWTH lecture, RWTH small, RWTH medium

(*) only counting the host processors' compute capability

(**) If you need a different node type for your project category, please add a reason for your requirements.
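
As a plausibility check of the Core-h columns (assuming 30-day months): CLAIX-2016-MPI provides 14400 cores × 24 h × 30 days ≈ 10.37 million core-h per month.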


Additional Information

An overview of the available machines and the corresponding project categories can be found here:
What is project-based resource management?

last changed on 29.01.2021
