
Available Filesystems (RWTH High Performance Computing Linux)


The following overview lists all file systems available to each user:


File System: $HOME
  • Limited lifetime: no
  • Quota (space): 150 GB
  • Use cases: source code, configuration files

File System: $WORK
  • Limited lifetime: currently no (subject to change, see the lifetime notes below)
  • Quota (space): 250 GB
  • Use cases: output files, working data

File System: $HPCWORK
  • Limited lifetime: currently no (subject to change, see the lifetime notes below)
  • Quota (space): 1 TB
  • Use cases: I/O-intensive jobs, large files
  • Description: parallel high-performance file system designed for high throughput when working with few large files.

File System: $TMP (local disk, Ext4/XFS)
  • Limited lifetime: yes; removed immediately after the batch job ends
  • Quota (space): limited by the size of the local disk
  • Use cases: local scratch data
  • Description: local on each node. The space is shared by all processes (and, on non-exclusive nodes, all jobs and users) running on that node. Limited space is available; see the sizes of the local disks in Batch Systems (SLURM) and note that some 40 GB per node are used by the operating system.

File System: $BEEOND
  • Limited lifetime: yes; removed immediately after the batch job ends
  • Quota (space): limited by the sum of the sizes of the local disks, e.g.:
    • CLX16-MPI: max. 80 GB per node
    • CLX18-MPI: max. 400 GB per node
  • Use cases: I/O-intensive jobs, large numbers of small files, any kind of scratch data
  • Description: local to the batch job, shared by all processes on each node belonging to the job. Limited space is available; see the sizes of the local disks in Batch Systems (SLURM) and note that some 40 GB per node are used by the operating system.
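To see how much space is left in each area, the standard `df` tool works on any of the mounts. A small sketch (the WORK and HPCWORK variables are set on the cluster; /tmp is used as a fallback so the loop also runs on an ordinary machine; site-specific quota tools, where provided, report per-user limits more precisely than `df`):

```shell
# Report file system usage for each storage area.
for dir in "$HOME" "${WORK:-/tmp}" "${HPCWORK:-/tmp}"; do
    printf '%s: ' "$dir"
    df -h "$dir" | tail -n 1   # device, size, used, available, mount point
done
```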


Each user file system is only mounted when a process actually accesses it. Therefore, you might not see a specific user directory in a listing of /home, /work or /hpcwork (which can be confusing, especially if you are using a graphical file manager). This does not mean that a user directory does not exist, but that you might have to type the path to a user directory explicitly to actually get there.
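A minimal sketch of this behaviour (on the cluster, try e.g. /hpcwork/ followed by your user ID; $HOME is used as a stand-in target here so the commands also work on an ordinary machine):

```shell
# A user directory can be reachable even if it does not show up in a
# listing of the parent directory; typing the explicit path mounts it.
target="${HPCWORK:-$HOME}"              # $HPCWORK is set on the cluster
ls "$(dirname "$target")" > /dev/null   # the listing may omit your directory
cd "$target" && pwd                     # ...but explicit access succeeds
```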



Lifetime

We currently remove outdated files in $TMP only. The $TMP and $BEEOND directories belonging to a batch job are removed immediately after the job ends. However, the lifetime policies for $WORK and $HPCWORK are subject to change: if space starts to run short, we may decide to clean up outdated files in these areas, too.
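Because $TMP and $BEEOND vanish with the job, results must be copied to a permanent area before the job script finishes. A minimal sketch of this stage-in/stage-out pattern (file names are purely illustrative; stand-in directories are created so the commands can also be tried outside a batch job):

```shell
#!/usr/bin/env bash
# Stage-in / compute / stage-out pattern for node-local scratch space.
# $WORK and $TMP are set inside a job; temporary stand-ins are used here.
WORK="${WORK:-$(mktemp -d)}"
TMP="${TMP:-$(mktemp -d)}"

echo "input data" > "$WORK/input.dat"        # pretend input in $WORK

cp "$WORK/input.dat" "$TMP/"                 # stage in to the fast local disk
( cd "$TMP" && tr 'a-z' 'A-Z' < input.dat > result.out )  # stand-in for the real computation
cp "$TMP/result.out" "$WORK/"                # stage out BEFORE the job ends

cat "$WORK/result.out"
```

The final copy is the essential step: anything left in $TMP when the job ends is gone.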


Backup

We back up $HOME on a daily basis. No backup is made of any other area.

Exclusion list

The following files and directories are currently excluded from the tape backup:

  • Complete sub-directories: 
    • ~/.cache
    • ~/.comsol/*/configuration
    • ~/.Trash*
    • ~/.local/share/Trash*
  • File patterns:
    • core.*.rz.RWTH-Aachen.DE.[1-9]*.[1-9]*
    • core.*[1-9]*.[1-9]*
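The core-dump patterns are ordinary shell globs, so you can check which of your files they would exclude. A small sketch (the file name is illustrative; a throwaway directory is used so the test touches nothing real):

```shell
# Create a sample core-dump file name and show that the backup
# exclusion glob matches it.
demo=$(mktemp -d)
touch "$demo/core.node123.4567.8"
ls "$demo"/core.*[1-9]*.[1-9]*   # lists the matching file
```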


Snapshots

Snapshots reflect the state of a file system at previous points in time. By changing to a snapshot directory, e.g. $HOME_SNAPSHOT or $WORK_SNAPSHOT, you can access previous versions of your files. Files within snapshots are read-only; they cannot be altered or deleted. Please note that the snapshot creation policy is subject to change: if space runs short, we may create fewer snapshots, delete existing snapshots, or omit them completely.

Please note: snapshots are not an alternative to a tape backup. If a file system gets damaged, all of its snapshots are lost, too. This means that you should not store files in $WORK, $HPCWORK or even $TMP that you cannot reproduce with reasonable effort.
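Restoring a file therefore means copying it out of a snapshot directory into a writable location. A minimal sketch (the snapshot name and layout are illustrative; on the cluster use $HOME_SNAPSHOT or $WORK_SNAPSHOT as the root, while a temporary stand-in is created here so the commands can be tried anywhere):

```shell
# Stand-in for a snapshot root such as $HOME_SNAPSHOT (illustrative layout).
snap_root=$(mktemp -d)
mkdir -p "$snap_root/example-snapshot"
echo "old version" > "$snap_root/example-snapshot/notes.txt"

ls "$snap_root"                              # see which snapshots exist
# Snapshots are read-only: restoring = copying the old file somewhere writable.
restore_dir=$(mktemp -d)
cp "$snap_root/example-snapshot/notes.txt" "$restore_dir/"
cat "$restore_dir/notes.txt"                 # -> old version
```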

Last modified on 03.11.2022


This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Germany License.