
Usable Hardware

Brief Information

The HPC JupyterHub service allows users to use the same hardware (Slurm partitions, nodes, file systems, etc.) that is already available to them in the existing RWTH High Performance Computing system through the Slurm Workload Manager.

Usable partitions

All openly accessible hardware within the regular HPC system can be used in the HPC JupyterHub.

In addition, HPC JupyterHub users can use the c23i partition, which provides 28 smaller NVIDIA H100 GPU instances (MIGs) with the same architecture as the c23g GPUs.
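The partitions visible to your account can be inspected from any HPC login node with sinfo; this is only a sketch, and the exact partition list in your output may differ:

    # Summary of all partitions available to your account
    sinfo -s

    # Nodes and state of the c23i MIG partition mentioned above
    sinfo -p c23i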

Requesting resources

Just as with normal Slurm batch jobs, HPC JupyterHub users need to specify the amount of hardware they want to use.

Users should select resources according to their requirements: memory (max. 187 GB per node), cores (max. 48 per node), GPUs (max. 2 per node), and runtime.
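For orientation, a request made through the JupyterHub form corresponds to a regular Slurm allocation. A minimal sketch of an equivalent interactive request on the command line is shown below; the concrete values are only examples within the limits above, and the JupyterHub spawner sets these options for you through its web form rather than on the command line:

    # Example: 8 cores, 32 GB memory, 1 GPU, 2 hours on the c23g partition
    salloc --partition=c23g --cpus-per-task=8 --mem=32G --gres=gpu:1 --time=02:00:00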

Waiting for resources

Because the HPC JupyterHub uses Slurm to allocate HPC hardware, users who request GPUs or more than 8 cores need to wait for their request to be processed in the Slurm queue.

Queuing is inherent to HPC hardware and the HPC workflow.

To quickly get access to a JupyterLab instance, users must request 8 or fewer cores.
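While a request is waiting, its status and estimated start time can be checked from a login node; this is a sketch, and the output depends on your account and the current cluster load:

    # Show your own pending and running jobs
    squeue -u $USER

    # Show Slurm's estimated start time for pending jobs
    squeue -u $USER --start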

Limitations

Hardware that is not available through Slurm cannot currently be used by the HPC JupyterHub.

 

Last modified on 25.06.2024


This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Germany License.