
Access GPU Cluster

Quick Information

Access

All users can use GPGPUs. However, you have to specify this in your project application if you want to use more resources than are available in the default queue.


Interactive Mode

You can access the GPU front end nodes interactively via SSH. First, log into one of our (graphical) cluster front ends; from there, use SSH to reach one of the interactive (login) GPU nodes.

GPU login nodes are listed here.
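For illustration, a minimal login sequence could look like this; <userid>, <frontend> and <gpu-login-node> are placeholders that you have to replace with your user ID, a cluster front end and one of the GPU login nodes from that list:

ssh -Y <userid>@<frontend>         # log into a (graphical) cluster front end
ssh <gpu-login-node>               # from there, hop onto an interactive GPU node

The -Y option enables trusted X11 forwarding in OpenSSH, which is needed if you want to use graphical tools (e.g. debuggers) on the GPU node.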


Friendly Usage

The GPUs in the interactive nodes are set to the "exclusive process" compute mode: whenever a GPU program runs, it gets the whole GPU and does not have to compete with other programs for resources (e.g. GPU memory). Furthermore, this mode allows several threads in a single process to use both GPUs available on each node (cf. e.g. cudaSetDevice) instead of being restricted to one thread per device. Therefore, please use these nodes considerately: run long computations in batch mode only, and close any debuggers or Matlab sessions after use. Note that strict runtime limits may apply on the dialog nodes.
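If you want to check the compute mode yourself, nvidia-smi can query it. A minimal sketch (output abbreviated; the exact wording may differ between driver versions):

nvidia-smi -q -d COMPUTE           # query the compute mode of all GPUs in the node
    Compute Mode                : Exclusive_Process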


GPU + MPI

If you would like to test your GPU + MPI program interactively, you can do so on the GPGPU dialog nodes using our mpiexec wrapper $MPIEXEC. To run your MPI program on the GPU machines, you have to specify their hostnames explicitly; otherwise your program will be started on the regular MPI back ends, which do not have any GPUs. If you are on a GPU front end, you can use the $HOST environment variable to stay on the same node.

Example usage, from a dialog node equipped with GPGPUs:

$MPIEXEC -np 2 -H $HOST myApp.exe       # 2 processes on the actual node

However, you should not use the '-H' option with regular (non-GPU) MPI applications or on the regular front ends!
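For comparison, a regular MPI run through the wrapper simply omits '-H' and leaves process placement to the system:

$MPIEXEC -np 2 myApp.exe                # 2 processes, placed on the regular MPI back ends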

Last modified: 29.01.2021
