
Overview

Brief Information

A quick overview of the available hardware systems and how to use them.


 

Detailed Information

To log in to the cluster, please use one of our login nodes. These systems should be used for programming, debugging, and the preparation and postprocessing of batch jobs.
They are not intended for production runs: processes that consume more than 20 minutes of CPU time may be killed without warning.
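
As a sketch, an interactive SSH login could look like the following; the hostname and user ID are placeholders, the actual login node names are listed in our access documentation:

    # connect to a login node with your HPC account
    # (<your_userid> and <login-node-hostname> are placeholders)
    ssh <your_userid>@<login-node-hostname>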

For file transfers (remote and local), you have to use our dedicated Data Transfer Nodes.
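
For example, copying data from a local machine to the cluster through a data transfer node could look like the following sketch; the hostname, user ID, and paths are placeholders:

    # copy a local directory to the cluster via a data transfer node
    # (<your_userid>, <dtn-hostname> and the paths are placeholders)
    rsync -avz ./my_data/ <your_userid>@<dtn-hostname>:~/my_data/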

All compute jobs have to be submitted to Slurm, which allocates node resources from the compute queues. Note that it is not possible to log in directly to a compute node without having a Slurm job running on that node.
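
A minimal batch script could look like the following sketch; the resource values, module name, and program are examples only, and the applicable partitions and accounting options depend on your project:

    #!/usr/bin/env bash
    #SBATCH --job-name=example_job      # job name shown in the queue
    #SBATCH --ntasks=1                  # number of tasks (e.g. MPI ranks)
    #SBATCH --cpus-per-task=4           # cores per task
    #SBATCH --mem-per-cpu=2G            # memory per core
    #SBATCH --time=00:30:00             # wall-clock time limit
    #SBATCH --output=job_%j.log         # stdout/stderr file (%j = job id)

    # load the required software environment and start the program
    # (module and program names are placeholders)
    module load GCC
    ./my_program

Such a script is submitted with sbatch <scriptname>; squeue shows the state of your pending and running jobs.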

 
 

CLAIX-2023 (installed January 2024)

CLAIX-2023-HPC

  • Nodes: 632, with the following memory configurations per node:
      • 470 nodes with 256 GB
      • 160 nodes with 512 GB
      • 2 nodes with 1024 GB
  • Total TFlops: 4077 (CPU)
  • Total cores: 63552
  • Total million Core-h per year: Tier-2: 346, Tier-3: 185
  • Project categories:
      • Tier-2: PREP, NHR normal, NHR large
      • Tier-3: RWTH open, RWTH thesis, RWTH lecture, RWTH small

CLAIX-2023-ML

  • Nodes: 52
  • Total TFlops: 335 (CPU) + 7072 (GPU)
  • Total cores / GPUs: 4992 / 208
  • Total million Core-h per year: Tier-2: 27 (*), Tier-3: 4 (*), WestAI: 12 (*)
  • Project categories:
      • Tier-2: PREP, JARA, NHR normal, NHR large
      • Tier-3: RWTH open, RWTH thesis, RWTH lecture, RWTH small
      • WestAI: WestAI
(*) One GPU-h corresponds to 24 Core-h.
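For example, a job that occupies one GPU for 10 hours is therefore accounted as 10 × 24 = 240 Core-h.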


Additional Information

An overview of the available machines and the corresponding project categories can be found here:
What is project-based resource management?

Last modified on 09.07.2024

