
Login via Remote Desktop

Brief Information

We provide the software FastX for running remote desktop sessions. You can either use a desktop client or start a session from inside a browser window (although the browser client does not reach the performance of the desktop client).

 

Please Note:

We only support FastX 3 as a remote desktop application for connecting to our cluster.

Detailed Information

Desktop client (ssh-based FastX sessions)

Native desktop clients are available for

  • Windows
  • Windows (non-root)
  • Linux (64Bit)
  • macOS
and can be downloaded at this link.

The vendor documentation describes in detail how to configure and use the desktop client. Choose ssh as the connection method.

You may start ssh-based FastX sessions on any of our login nodes. However, we provide special login nodes, namely

  • login18-x-1.hpc.itc.rwth-aachen.de
  • login18-x-2.hpc.itc.rwth-aachen.de

which are protected by stricter resource limitations (described here) in order to make it more difficult for a single user to overload them.
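
Since these FastX sessions are ssh-based, you can check beforehand that such a node is reachable with a plain SSH login, e.g.:

ssh -l <your_userid> login18-x-1.hpc.itc.rwth-aachen.de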

Browser-based client

Web access to the FastX servers works only from inside the RWTH network (eduroam, VPN, or institute network)!

To use the browser-based client, follow either of the links:

You will then be redirected to an SSL-secured connection on port 3443, where you can log in with your usual HPC account.
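
For illustration, an address of this form would be expected, assuming the web client runs on the FastX login nodes named above (the authoritative links are the ones referenced in the previous sentence):

# hypothetical example, not an official link:
https://login18-x-1.hpc.itc.rwth-aachen.de:3443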

More information can be found in the vendor documentation.


Additional Information

Currently, the desktop environments MATE and XFCE can be started. KDE and GNOME tend to consume a lot of hardware resources and are therefore not available for selection.


MFA Login

For any login method (including remote desktops), a second factor is required. Please consult the MFA Step by Step Guide. It is possible to use an SSH key pair (e.g. via PuTTY Pageant) as described in the "Login via SSH and MFA" section.
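
A minimal sketch for OpenSSH users on Linux/macOS (the key path below is an example; on Windows you would load the key into PuTTY Pageant instead):

ssh-add ~/.ssh/id_ed25519      # load your private key into the ssh-agent
ssh-add -l                     # verify the key is loaded before connecting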

X11-Forwarding

Without using FastX, you can also enable X11 forwarding/X11 tunneling. For OpenSSH:

ssh -X -l <your_userid> login18-1.hpc.itc.rwth-aachen.de

If your X11 application is not running properly, try the (less secure and not recommended) -Y option instead of -X:

ssh -Y -l <your_userid> login18-1.hpc.itc.rwth-aachen.de
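
To verify that forwarding works, start a small X client on the login node after logging in (xclock is just an example; any X11 program will do, provided it is installed):

xclock        # a clock window should appear on your local display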

There is no sbatch --x11 option, but you can use native SSH X11 forwarding now that pam_slurm_adopt is installed.

You have two options for the time being:

  • Use salloc to get an allocation of nodes, then start SSH with X forwarding to the allocated hosts (see the sketch after this list).
    • The drawback is that salloc injects many Slurm variables into the environment, which persist after the allocation has ended. Please use a new shell for salloc.
  • Use a normal batch script that includes e.g. a sleep command or similar, then ssh to the remote nodes as soon as the job runs (also sketched below).
    • This is the preferred way, and we will provide a small program to ease this for you as soon as possible.
      • At the moment, we are distributing guishell throughout the cluster.
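
A minimal sketch of both options (the times, sizes, and the placeholders <your_userid> and <nodename> are examples to adapt):

# Option 1: interactive allocation, then SSH with X forwarding into it
# (run salloc in a fresh shell so the injected SLURM_* variables do not linger)
salloc -n 1 -t 00:30:00
squeue -u <your_userid>        # find the node assigned to your allocation
ssh -X <nodename>              # pam_slurm_adopt places this ssh session in your job

# Option 2: a batch job that merely keeps a node allocated
sbatch -n 1 -t 02:00:00 --wrap="sleep 7200"
squeue -u <your_userid>        # wait until the job is running and note the node
ssh -X <nodename>              # then connect with X forwarding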

The drawback of that method is that you "just" ssh to one compute node, so you get a plain environment without all the Slurm variables you would expect in a real job.

"Full" X11-Forwarding would mean, that in the batchscript you can e.g. directly start Intel Vtune or an xterm, without any additional work. Also the environment set there is the real job environment. This is still to be implemented.

 

Last modified: 05.02.2024

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Germany License.