
Usable Hardware

Brief Information

The HPC JupyterHub service allows users to access the same hardware (Slurm partitions, nodes, file systems, etc.) that is already available to them in the existing RWTH High Performance Computing cluster through the Slurm Workload Manager.

Usable partitions

All hardware within the available Slurm partitions (e.g. c18m) can be used in the HPC JupyterHub.

Requesting resources

Just like for normal Slurm batch jobs, HPC JupyterHub users need to specify the amount of hardware they want to use.

Users should specify their requirements: memory (max. 187 GB per node), cores (max. 48 per node), GPUs (max. 2 per node), and runtime.
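For comparison, the same resource request expressed as a regular Slurm batch script might look like the following sketch. The concrete values are examples within the limits above; the GPU line only applies on GPU-equipped nodes:

```shell
#!/usr/bin/env bash
# Sketch of a JupyterHub-style resource request as a Slurm batch script.
#SBATCH --cpus-per-task=8    # cores, max. 48 per node
#SBATCH --mem=32G            # memory, max. 187 GB per node
#SBATCH --time=02:00:00      # runtime limit
#SBATCH --gres=gpu:1         # GPUs, max. 2 per node (GPU nodes only)

# In the JupyterHub case, the hub starts the JupyterLab server for you;
# in a plain batch job you would place your workload here.
```

The JupyterHub spawner form asks for the same quantities (memory, cores, GPUs, runtime) and translates them into such a Slurm allocation on your behalf.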

Waiting for resources

Because the HPC JupyterHub uses Slurm to allocate HPC hardware, users who request GPUs or more than 8 cores need to wait for their request to be processed in the Slurm queue.

Queuing is inherent to HPC hardware and HPC workflow.

To quickly get access to a JupyterLab instance, users must request 8 or fewer cores.
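Since the JupyterHub session appears as a regular Slurm job, its position in the queue can be inspected with standard Slurm commands; a sketch:

```shell
# List your own pending and running jobs (the JupyterHub session
# shows up as a regular Slurm job):
squeue -u "$USER"

# Show Slurm's estimated start time for pending jobs:
squeue -u "$USER" --start
```

A job in state `PD` (pending) is still waiting for resources; once it switches to `R` (running), the JupyterLab instance becomes available.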

Limitations

Hardware that is not available through Slurm cannot currently be used by the HPC JupyterHub.

 

Last modified on 12.09.2023


Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Germany License.