
FDS

Summary

Fire Dynamics Simulator (FDS) is a computational fluid dynamics (CFD) model of fire-driven fluid flow: https://pages.nist.gov/fds-smv/


Detailed Information

1. General Information

In the HPC Cluster, FDS is available in one flavour:

  • Binary installation, downloaded from https://pages.nist.gov/fds-smv/downloads.html. You can recognise this version by the '-bin' suffix in the module version string. The name of the FDS binary is 'fds' for all versions. Note that older binary versions of FDS require an old version of Open MPI (1.8.4), which does not support InfiniBand; running these versions of FDS across multiple nodes is therefore a very bad idea (the job will very likely be slow) - stay within a single node! Better: use the latest binary release, which uses Intel MPI.
 

2. How to access the software

$ module load TECHNICS
$ module load fds
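To see which FDS versions are installed, and to load a specific one instead of the default, you can list the available modules first. The version string below is only an illustration; as noted above, the '-bin' suffix marks a binary release from NIST:

```shell
# list all available FDS module versions
module avail fds

# load a specific version instead of the default
# (the version "6.7.5-bin" is only an example)
module load fds/6.7.5-bin
```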
 

3. Example batch scripts

a. serial job

#!/usr/local_rwth/bin/zsh
 
### Job name
#SBATCH --job-name=FDS_SERIAL
  
### Request the time you need for execution in minutes
#SBATCH --time=80
  
### Request memory you need for your job in MB
#SBATCH --mem-per-cpu=3950
  
### load modules
module load TECHNICS
module load fds
  
### start non-interactive batch job
fds room_fire.fds
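Assuming the script above is saved as fds_serial.sh (the filename is arbitrary), it is submitted to the batch system and monitored as usual with Slurm:

```shell
# submit the job script to the scheduler
sbatch fds_serial.sh

# check the status of your queued and running jobs
squeue -u $USER
```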

b. OpenMP parallel job, in-house compilation from sources

#!/usr/local_rwth/bin/zsh
 
### Job name
#SBATCH --job-name=FDS_OMP
  
### Request the time you need for execution in minutes
#SBATCH --time=80
  
### Request memory you need in MB
#SBATCH --mem-per-cpu=3950
 
### Request the number of threads you want to use
#SBATCH --cpus-per-task=8
 
### load modules
module load TECHNICS
module load fds
  
### start non-interactive batch job
fds_omp room_fire.fds
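Depending on the module environment, the number of OpenMP threads may not be set automatically from the Slurm allocation. If in doubt, derive it from the allocation before starting FDS (this line is an addition for illustration, not part of the original script):

```shell
# use the per-task CPU count requested via --cpus-per-task
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
fds_omp room_fire.fds
```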

c. MPI parallel job, binary release from NIST

#!/usr/local_rwth/bin/zsh
### Job name
#SBATCH --job-name=FDS_MPI
  
###  12 processes, all on 1 node
#SBATCH --ntasks=12
#SBATCH --nodes=1
 
  
### Limit for maximum memory per slot (in MB)
#SBATCH --mem-per-cpu=3900
  
### The time limit for the job in minutes (when this time limit is reached, the process is signalled and killed)
#SBATCH --time=80
  
### load the necessary module files
module load TECHNICS
module load fds
  
### start the FDS MPI binary
$MPIEXEC $FLAGS_MPI_BATCH fds room_fire.fds
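In FDS, MPI parallelism follows the domain decomposition: each MPI process handles one or more &MESH regions of the input file, so --ntasks is usually chosen to match the number of meshes. A quick way to count them, assuming each mesh definition starts a line with &MESH:

```shell
# count the &MESH namelist groups in the input file
grep -c '^&MESH' room_fire.fds
```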

d. Hybrid (MPI+OpenMP) parallel job (three nodes)

#!/usr/local_rwth/bin/zsh
### Job name
#SBATCH --job-name=FDS_HYBRID
 
### Hybrid job with 6 MPI processes, 2 processes per node (i.e. three nodes)
#SBATCH --ntasks=6
#SBATCH --ntasks-per-node=2
### ask for 24 threads per task = MPI rank (which is 1 thread per core on one socket on CLAIX18)
#SBATCH --cpus-per-task=24
 
### Limit for maximum memory per slot (in MB)
#SBATCH --mem-per-cpu=1850
  
### The time limit for the job in minutes (when this time limit is reached, the process is signalled and killed)
#SBATCH --time=80
 
 
### load the necessary module files
module load TECHNICS
module load fds
  
### start the FDS MPI binary
$MPIEXEC $FLAGS_MPI_BATCH fds_hyb room_fire.fds

Additional Notes

  • FDS is a Fortran program and is known to consume a lot of stack space. Since the default stack limit is now set to 'unlimited', you do not need to issue any additional command.
  • The OpenMP and Hybrid versions have had very serious problems in the past. Please use these versions at your own risk and report any issues.
  • The OpenMP and Hybrid versions use the OpenMP WORKSHARE construct, which is known to have issues with the Intel compiler:
    • version 14.0 and older: WORKSHARE is not parallelised at all. Do not use these versions at all!
    • later versions: the WORKSHARE construct needs an increased OMP_STACKSIZE to avoid random SIGSEGVs.
  • Thus, if running an OpenMP version of FDS compiled with the Intel compilers, or a Hybrid version on a single node, set the environment variable using

    export OMP_STACKSIZE=200M

    Note that for Hybrid jobs running across multiple nodes, you have to forward the value of this variable explicitly to all nodes with an MPI-vendor-specific flag:

    • Intel MPI:

      $MPIEXEC $FLAGS_MPI_BATCH -env OMP_STACKSIZE 200M fds_hyb room_fire.fds
    • Open MPI:

      $MPIEXEC $FLAGS_MPI_BATCH -x OMP_STACKSIZE=200M fds_hyb room_fire.fds
 

Last modified: 29.01.2021


This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Germany License.