Available Filesystems (RWTH High Performance Computing Linux)
Overview
The following table gives an overview of all file systems available to each user:
File System | Type | Path | Limited Lifetime | Snapshots | Backup | Quota (type) | Quota (space) | Quota (#files) | Use cases | Comment |
---|---|---|---|---|---|---|---|---|---|---|
$HOME | NFS/CIFS | /home/<username> | no | $HOME_SNAPSHOT | yes | tree | 150 GB | - | Source code, configuration files | |
$WORK | NFS/CIFS | /work/<username> | no | $WORK_SNAPSHOT | no | tree | 250 GB | - | Output files, working data | |
$HPCWORK | Lustre | /hpcwork/<username> | no | - | no | tree | 1 TB | 50,000 | I/O-intensive jobs, large files | Parallel high-performance file system designed for high throughput when working with few large files. |
$TMP | Local (Ext4/XFS) | n.a. | yes | - | no | - | limited by the size of the local disk | - | Local scratch data | Local on each node. The space is shared by all processes of a batch job on the node (and by all jobs/users on non-exclusive nodes). Limited space available - see the size of the local disks in Batch Systems (SLURM) and note that some 40 GB per node are used by the operating system. |
$BEEOND | BeeGFS On Demand (https://www.beegfs.io/wiki/BeeOND) | n.a. | yes | - | no | - | limited by the sum of the sizes of the local disks | - | I/O-intensive jobs, large numbers of small files, any kind of scratch data | Local to the batch job, shared by all processes of the job on each node. Limited space available - see the size of the local disks in Batch Systems (SLURM) and note that some 40 GB per node are used by the operating system. |
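The per-user paths above are also exported as environment variables, so scripts should refer to the variables rather than hard-coded paths. A minimal sketch (the `${VAR:-unset}` fallbacks are only for illustration; on the cluster the variables are assumed to be set by the login profile):

```shell
# Print the per-user storage locations; ${VAR:-unset} avoids errors
# when a variable is not defined outside the cluster environment.
echo "home:    $HOME"
echo "work:    ${WORK:-unset}"
echo "hpcwork: ${HPCWORK:-unset}"
echo "scratch: ${TMP:-unset}"
```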
Each user file system is only mounted when a process actually accesses it. Therefore, you might not see a specific user directory in a listing of /home, /work or /hpcwork (which can be confusing, especially if you are using a graphical file manager). This does not mean that the directory does not exist; you may have to enter its path explicitly to trigger the mount and get there.
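The behaviour can be observed in a shell session like the following sketch (the paths are the cluster's; the `2>/dev/null` guards merely let the snippet run harmlessly elsewhere):

```shell
ls /work 2>/dev/null                     # your directory may be missing from this listing
if cd "/work/$(id -un)" 2>/dev/null; then
    pwd                                  # accessing the path triggered the automount
fi
```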
We currently remove outdated files in $TMP only. The $TMP and $BEEOND directories belonging to a batch job are removed immediately after the job ends. However, the lifetime policies regarding $WORK and $HPCWORK are subject to change: we may decide to clean up outdated files in these areas, too, if space starts to get short.
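Because $TMP is removed as soon as the job ends, a batch script should copy anything it needs back to a permanent area before it returns. A minimal sketch, assuming $TMP and $WORK are set by the batch system (the `mktemp` fallbacks and file names are illustrative only, not cluster specifics):

```shell
#!/usr/bin/env bash
scratch="${TMP:-$(mktemp -d)}"    # node-local scratch; fallback only for illustration
dest="${WORK:-$(mktemp -d)}"      # permanent area that survives the job
cd "$scratch" || exit 1
echo "simulation output" > result.dat   # stand-in for the real computation
cp result.dat "$dest/"                  # rescue results before $TMP is wiped
```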
We back up $HOME on a daily basis. No backups are made of the other areas.
Exclusion list
The following files and directories are currently excluded from the tape backup:
- Complete sub-directories:
- .NOBACKUP
- ~/.cache
- ~/.comsol/*/configuration
- ~/.Trash*
- ~/.local/share/Trash*
- File patterns:
- core.*.rz.RWTH-Aachen.DE.[1-9]*.[1-9]*
- core.*.hpc.itc.rwth-aachen.de.[1-9]*.[1-9]*
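Since any sub-directory named .NOBACKUP is skipped by the tape backup, large but easily reproducible data can be deliberately parked there. A minimal sketch (the `mktemp` fallback and the file name are illustrative assumptions; only $HOME is backed up, so this is where the exclusion matters):

```shell
base="${HOME:-$(mktemp -d)}"               # the backed-up area
mkdir -p "$base/.NOBACKUP"                 # excluded from the tape backup by name
touch "$base/.NOBACKUP/intermediate.dat"   # hypothetical regenerable data
```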
Snapshots reflect the state of a file system at previous points in time. By changing to a snapshot directory, e.g. $HOME_SNAPSHOT or $WORK_SNAPSHOT, you can access previous versions of your files. The files within the snapshots are read-only; they cannot be altered or deleted. Please note that the snapshot creation policy is subject to change. If space gets short, we may decide to create fewer snapshots, delete existing snapshots, or omit them completely.
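Restoring an older version of a file then amounts to copying it out of a snapshot directory, for example (the layout with one sub-directory per snapshot date and the file name are assumptions, not guaranteed specifics):

```shell
ls "${HOME_SNAPSHOT:-/nonexistent}" 2>/dev/null || true   # list the available snapshots
# Copy an old version back into place (the snapshot contents themselves are read-only):
# cp "$HOME_SNAPSHOT/<date>/precious.txt" "$HOME/precious.txt"
```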
Please note that snapshots are not an alternative to a tape backup. If a file system gets damaged, all of its snapshots are lost, too. This means that you should not store files in $WORK, $HPCWORK or even $TMP that you cannot reproduce with reasonable effort.