Partitions

Partitions in Slurm are sets of computing nodes with dedicated queues. Typically, it is not necessary to request a specific partition. Without any special requirements, jobs will be scheduled to the c23ms partition (see below). If you request GPU resources, your job will be scheduled to the c23g partition.

However, in some cases it may be necessary to specify the partition:

Greater Memory Requirements on Claix-2023

The nodes in the Claix-2023 cluster offer three memory configurations: small, medium and large. The c23ms partition includes all nodes, the c23mm partition includes nodes with medium and large memory configurations, and the c23ml partition includes only nodes with the large memory configuration.

This setup allows you to select the partition that meets your memory requirements, while also ensuring that all nodes are utilized, even if the c23mm or c23ml queues are empty.
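As a minimal sketch, a batch script that targets the medium-memory nodes might look like the following. The job name, resource values, and application name are illustrative, not prescribed by this page; the memory value follows the per-core recommendation in the table below.

```shell
#!/usr/bin/env zsh
#SBATCH --job-name=medium-mem-job   # illustrative job name
#SBATCH --partition=c23mm           # medium- and large-memory nodes only
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=5210M         # recommended maximum for c23mm nodes
#SBATCH --time=01:00:00

./my_application                    # placeholder for your own program
```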

Testing Applications

The devel partition is designed for quick testing of compute jobs, offering short wait times but limited runtimes.
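A test job for the devel partition could be sketched as follows; the script and program names are hypothetical, and the requested time stays well under the partition's one-hour limit.

```shell
#!/usr/bin/env zsh
#SBATCH --partition=devel       # testing partition: short waits, 1-hour runtime cap
#SBATCH --time=00:15:00         # well under the one-hour maximum
#SBATCH --ntasks=1
# No project/account is specified: the devel partition is free and
# intended to be used without one.

./my_test_run                   # placeholder for the program under test
```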

Overview

The following table gives an overview of the currently available partitions:

Partition | Nodes | Cores per node | Memory per core* | Billing | Remarks
----------|-------|----------------|------------------|---------|--------
c23ms     | 625   | 96             | 2540 MiB**       | Regular | Claix-2023 (small memory), default partition
c23mm     | 166   | 96             | 5210 MiB         | Regular | Claix-2023 (medium memory)
c23ml     | 2     | 96             | 10560 MiB        | Regular | Claix-2023 (large memory)
c23g      | 50    | 96             | 5200 MiB         | Using one GPU for one hour is billed as 24 core-h | Claix-2023 (each node has four H100 GPUs)
devel     | 2     | 96             | 975 MiB          | Free    | Designated for testing purposes; maximum runtime of 1 hour; only a few simultaneous jobs; please use without a project

* This is the default value and the recommended maximum for #SBATCH --mem-per-cpu.
** 1 MiB = 2^20 Bytes = 1024 KiB = 1048576 Bytes = 1.048576 MB
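For GPU jobs, a sketch of a batch script using the generic Slurm GPU request syntax is shown below. Per the text above, explicitly naming the c23g partition is optional, since GPU jobs are scheduled there anyway; the job details are illustrative.

```shell
#!/usr/bin/env zsh
#SBATCH --partition=c23g        # GPU partition (optional: GPU jobs land here by default)
#SBATCH --gres=gpu:1            # request one of the four H100 GPUs per node
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=24      # one GPU-hour is billed like 24 core-hours
#SBATCH --time=02:00:00

./my_gpu_application            # placeholder for your own GPU program
```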

last changed on 11/04/2024

This work is licensed under a Creative Commons Attribution - Share Alike 3.0 Germany License