You are located in service: RWTH High Performance Computing (Linux)

Overview on Changes with Rocky Linux 8

 

Please note

Due to the operating system change from CentOS 7.9 to Rocky Linux 8.7, major adjustments to the multi-factor authentication are necessary to make it available under Rocky 8 as well. Since these require some additional time, the MFA dialog system login18-4.hpc.itc.rwth-aachen.de will continue running on CentOS 7.9 until the adjustments are complete.

After a successful MFA login on this dialog system, you can connect to any of the other Rocky 8 dialog systems via ssh without any problems.

This page is designed to give users a quick overview of the relevant changes that come with our migration to Rocky Linux 8, the Lmod module system and our renewed software stack. More detailed information on specific topics can be found on the respective pages here on IT Center Help.

New Operating System

The operating system on the new cluster nodes has changed from CentOS 7 to Rocky Linux 8. Rocky Linux 8 comes with a more recent set of software, e.g. the system compilers and several system libraries are available in newer versions. This has two relevant implications for your individual software:

  • More software may run out of the box, especially if it requires a more recent GLIBC version
  • If you previously compiled software with very specific dependencies on system libraries, you may need to recompile this software


Even though it may not be necessary in many cases, we strongly recommend recompiling your software for use on the new systems to avoid any kind of unexpected behavior. In particular, Python packages installed locally via pip may otherwise exhibit strange behavior.
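For locally installed Python packages, the cleanest approach is to recreate the environment on the new system. A minimal sketch, assuming your packages are listed in a requirements.txt (the Python module name and version below are illustrative; check module spider Python for what is actually available):

    module load Python/3.10.4                        # illustrative module version
    python3 -m venv ~/venv-rocky8                    # create a fresh virtual environment
    source ~/venv-rocky8/bin/activate
    pip install --no-cache-dir -r requirements.txt   # reinstall packages, rebuilding any compiled wheels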


New Software Stack

The software stack comprises the entirety of software the HPC team prepares for compilation, testing and execution of production jobs on the HPC cluster. We have rebuilt this stack from scratch which leads to some changes in how you need to interact with the system. In many cases this simplifies loading procedures and makes changing the environment for other software a lot easier than it used to be. However, this requires some understanding of how the new module system works.

 

Lmod and the Hierarchical Module System

We have switched to Lmod, a more modern implementation of the module command you know from working on our cluster. Fortunately, Lmod supports the same commands as the old module system, plus some handy shortcuts, so software can still be loaded in the same way as before. The major change lies in how we group modules together: we used to organize modules in a "plain" system based on categories, whereas the new system implements a hierarchy based on compiler and MPI dependencies.
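For example, Lmod's ml shortcut covers the most common operations:

    ml                      # short for "module list"
    ml GCC                  # short for "module load GCC"
    ml -GCC                 # unloads the GCC module
    ml spider GCC           # shortcuts also work with subcommands such as spider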

What this means is that our software is grouped based on whether it depends on a compiler module, on a compiler module plus an MPI module, or on neither (so-called "Core" modules). Software only becomes visible and loadable once its compiler and MPI dependencies have been met. Our module system help page offers extensive information on how you can find the software you need and how to load it: Module system
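For illustration, a typical loading sequence in the hierarchy might look as follows (the OpenMPI version in particular is an assumption; check module avail and module spider for the actual versions):

    module load GCC/11.3.0      # compiler level: GCC-dependent modules become visible
    module load OpenMPI/4.1.4   # MPI level: MPI-dependent modules become visible
    module avail                # now also lists software built for this compiler/MPI pair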

A perk of our new software build system is that many libraries that are required as a dependency are exposed via the module system and can, in principle, be used by users to build their own software.

IMPORTANT: The module load command is case-sensitive, and the case of several module names has changed, e.g. openmpi became OpenMPI and ansys became ANSYS. The module spider command will help you find the correct spelling. For an explanation of the spider workflow, please click the link above.
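A short example of the spider workflow (the version shown is illustrative):

    module spider openmpi         # searches case-insensitively and lists matching modules
    module spider OpenMPI/4.1.4   # shows which modules must be loaded before this one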


Changes in Available Software

Not all software or software versions previously available on the cluster have been integrated into the new software stack. We took the opportunity to remove unused and old software (versions) based on the usage statistics of the last year. This also includes old versions of software for which a newer release is available on the cluster. On the other hand, we have installed several recent software releases that will not be backported to the CentOS 7 systems. We encourage users to migrate to newer software releases where possible. If you find yourself needing a particular release or a software build for a specific compiler-MPI combination, please send your request to servicedesk@itc.rwth-aachen.de.

 

Graphical Sessions with FastX

You can still connect to the new login nodes via FastX as usual. Be aware that you need the FastX 3 client for this, though! You can find a download link for FastX 3 on the following page: Remote Desktop Sessions

 

Has the batch system changed?

The batch system Slurm remains the same. Job scripts are still submitted as before and their syntax has not changed. You may need to adjust the bottom part of your job script, however, depending on what kind of software you use.
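As a sketch, assuming a zsh login shell and illustrative module names, the part that may need adjustment is the software section at the bottom, not the #SBATCH header:

    #!/usr/bin/env zsh
    #SBATCH --job-name=example
    #SBATCH --time=01:00:00        # header syntax is unchanged

    # this part may need adjustment for the new module hierarchy:
    module load GCC/11.3.0         # illustrative
    module load OpenMPI/4.1.4      # illustrative
    srun ./my_program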

 

Submitting batch jobs to the new system

To submit jobs to Rocky 8 batch nodes, simply log into one of the Rocky 8 login nodes (cf. Login Nodes) and submit your batch job from there. After the transition phase, all login nodes will submit exclusively to Rocky 8 nodes.

In case you need to use constraints (#SBATCH -C) in your job script, you will have to set them via the command line for the first part of the transition phase and add the Rocky8 constraint like so: sbatch -C Rocky8,yourconstraint batchscript.sh
If you have not used constraints before, this change does not affect you.

 

CUDA

The default CUDA version has been set to CUDA 11.8.0, which is compatible with the default GCC version 11.3.0.

CUDA is available as a standalone module and is not included in the toolchains. Please note that older CUDA 11.x releases are not compatible with GCC 11 (which is included in the default toolchains). This means you will need to switch to an older version of the compiler or MPI toolchains before loading one of those older CUDA 11.x releases.
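A minimal sketch of such a switch (the versions below are assumptions; check module spider CUDA for the valid combinations on the cluster):

    module spider CUDA            # lists available CUDA versions and their requirements
    module load GCC/10.3.0        # illustrative older compiler compatible with older CUDA 11.x
    module load CUDA/11.3.1       # illustrative older CUDA 11.x release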
 

Where do I start?

The following list may help you to migrate over to the new system:

  1. If you automatically load any modules on login via your .zshrc, remove any module commands from that file
  2. Familiarize yourself with the hierarchy and commands of the Module system
  3. Search for the module names of your desired software and check whether they have changed (they probably have)
  4. Modify all module commands in your job scripts and any helper scripts, adjusting the module names accordingly (see the sketch below this list)
  5. If you are using open-source software or individual codes, recompile them to ensure compatibility
  6. Consult the software's help page to see if there were any changes you need to adapt to
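A minimal before/after sketch for step 4 (the new names and versions are illustrative; verify them with module spider):

    # old (CentOS 7, plain module tree):
    #   module load openmpi
    # new (Rocky 8, hierarchical Lmod tree):
    module load GCC/11.3.0       # load the compiler first
    module load OpenMPI/4.1.4    # then MPI; note the changed capitalization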

last changed on 05/10/2023

This work is licensed under a Creative Commons Attribution - Share Alike 3.0 Germany License