
X11-Forwarding

You can also enable X11 forwarding (X11 tunneling). For OpenSSH:

ssh -X -l <your_userid> login23-1.hpc.itc.rwth-aachen.de

If your X11 application does not run properly, try the (less secure and not recommended) -Y option instead of -X:

ssh -Y -l <your_userid> login23-1.hpc.itc.rwth-aachen.de

There is no sbatch --x11 option, but you can use native SSH X11 forwarding now that pam_slurm_adopt is installed.

You have two options for the time being:

  • Use salloc to get an allocation of nodes, then start SSH with X11 forwarding to the allocated hosts (see the first sketch after this list).
    • The drawback is that salloc injects many Slurm variables into the environment, and these persist after the allocation has ended. So please use a new shell for salloc.
  • Use a normal batch script that contains e.g. a sleep command or similar, then ssh to the remote nodes as soon as the job is running (see the second sketch after this list).
    • This is the preferred way, and we will provide a small program to ease this for you as soon as possible.
      • At the moment, we are distributing guishell throughout the cluster.
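
For the first option, a minimal sketch (the resource values are placeholders, adjust them to your needs). It relies on salloc exporting SLURM_JOB_NODELIST, which scontrol can expand into hostnames:

# run this in a fresh shell so the injected Slurm variables do not leak into your main session
salloc --nodes=1 --time=00:30:00

# inside the allocation: pick the first granted node and ssh to it with X11 forwarding
ssh -X $(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)

# on the compute node, start an X11 application, e.g.
xclock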
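
For the second option, a minimal sketch of such a batch script (job name, resources, and sleep duration are placeholders, adjust them to your needs). The sleep keeps the job, and thus the node allocation, alive while you connect:

#!/usr/bin/env bash
#SBATCH --job-name=x11_placeholder
#SBATCH --nodes=1
#SBATCH --time=00:30:00

# keep the job alive so that pam_slurm_adopt lets you ssh into the allocated node
sleep 1800

Submit it and connect once the job is running (x11_job.sh is a placeholder filename):

sbatch x11_job.sh
squeue -u $USER
ssh -X <nodename>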

The drawback of this ssh-based approach is that you "just" ssh into a compute node, so you get a plain environment without all the Slurm variables you would expect in a real job.

"Full" X11-Forwarding would mean, that in the batchscript you can e.g. directly start Intel Vtune or an xterm, without any additional work. Also the environment set there is the real job environment. This is still to be implemented.

last changed on 01/14/2025


This work is licensed under a Creative Commons Attribution - Share Alike 3.0 Germany License