
Example Batch Scripts

Instructions

Serial

A sample program ./a.out that does not use MPI:

#!/usr/local_rwth/bin/zsh
 
# ask for 10 GB memory
#SBATCH --mem-per-cpu=10240M   # M (mega) is the default unit and can be omitted; K(ilo), G(iga) and T(era) are also accepted
 
# name the job
#SBATCH --job-name=SERIAL_JOB
 
# declare the merged STDOUT/STDERR file
#SBATCH --output=output.%J.txt
 
### beginning of executable commands
./a.out
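Once a script like the one above is saved to a file (the name serial.job below is just an example), it is submitted to Slurm with sbatch; squeue shows its state in the queue:

```shell
# submit the batch script to the Slurm scheduler
sbatch serial.job

# list your own pending and running jobs
squeue -u "$USER"
```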

OpenMP

A sample program ./a.out that only uses OpenMP multithreading:

#!/usr/local_rwth/bin/zsh
 
# ask for eight cores
#SBATCH --cpus-per-task=8
 
#
#################
# ATTENTION !!! #
#################
# Divide the memory needed per task by the number of CPUs per task, because Slurm requests memory per CPU, not per task!
# Example:
# You need 2 GB of memory per task and have requested 8 CPUs per task:
# 2048M / 8 -> 256M of memory per CPU (i.e., per thread)
#SBATCH --mem-per-cpu=256M   # M (mega) is the default unit and can be omitted; K(ilo), G(iga) and T(era) are also accepted
 
# name the job
#SBATCH --job-name=OPENMP_JOB
 
# declare the merged STDOUT/STDERR file
#SBATCH --output=output.%J.txt
 
### beginning of executable commands
# Note: the OMP_NUM_THREADS environment variable is set automatically - do not overwrite it!
 
./a.out
# alternatively you may run your application under control of srun:
#srun ./a.out
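The per-CPU memory calculation from the ATTENTION block above can be reproduced with plain shell arithmetic (2 GB per task and 8 CPUs per task are the values from the example):

```shell
# values from the example in the ATTENTION block above
mem_per_task_mb=2048   # 2 GB of memory needed per task
cpus_per_task=8        # CPUs requested per task (--cpus-per-task)

# Slurm wants memory per CPU, so divide the per-task memory by the CPU count
mem_per_cpu_mb=$(( mem_per_task_mb / cpus_per_task ))

echo "--mem-per-cpu=${mem_per_cpu_mb}M"   # prints --mem-per-cpu=256M
```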

MPI

A sample program ./a.out that uses MPI parallelism:

#!/usr/local_rwth/bin/zsh
 
# ask for eight tasks (MPI Ranks)
#SBATCH --ntasks=8
 
# ask for one node; request several nodes in case you need additional resources
#SBATCH --nodes=1
 
# ask for slightly less than 4 GB of memory per task (= MPI rank)
#SBATCH --mem-per-cpu=3900M   # M (mega) is the default unit and can be omitted; K(ilo), G(iga) and T(era) are also accepted
 
# name the job
#SBATCH --job-name=MPI_JOB
 
# declare the merged STDOUT/STDERR file
#SBATCH --output=output.%J.txt
 
### beginning of executable commands
$MPIEXEC $FLAGS_MPI_BATCH ./a.out

Hybrid

This example illustrates a batch script for a hybrid MPI + OpenMP job that uses 2 CLAIX-18 nodes with 4 MPI ranks in total: 2 ranks per node, i.e. 1 MPI rank per socket, with 24 OpenMP threads per socket. This uses all 48 cores of each node.

#!/usr/local_rwth/bin/zsh
 
# ask for four tasks (which are 4 MPI ranks)
#SBATCH --ntasks=4
 
# ask for 24 threads per task=MPI rank (which is 1 thread per core on one socket on CLAIX18)
#SBATCH --cpus-per-task=24
#
#################
# ATTENTION !!! #
#################
# Divide the memory needed per task by the number of CPUs per task, because Slurm requests memory per CPU, not per task!
# Example:
# You need 24 GB of memory per task and have requested 24 CPUs per task:
# 24G / 24 -> 1G of memory per CPU (i.e., per thread)
#SBATCH --mem-per-cpu=1G   # M (mega) is the default unit and can be omitted; K(ilo), G(iga) and T(era) are also accepted
 
# name the job
#SBATCH --job-name=HYBRID_JOB
 
# declare the merged STDOUT/STDERR file
#SBATCH --output=output.%J.txt
 
### beginning of executable commands
# Note: the OMP_NUM_THREADS environment variable is set automatically - do not overwrite it!
 
$MPIEXEC $FLAGS_MPI_BATCH ./a.out
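The job geometry described above can be double-checked with a quick calculation (the 48 cores per node is the CLAIX-18 value from the text):

```shell
ntasks=4           # MPI ranks            (--ntasks)
cpus_per_task=24   # OpenMP threads/rank  (--cpus-per-task)
cores_per_node=48  # cores per CLAIX-18 node

total_cores=$(( ntasks * cpus_per_task ))   # 96 cores in total
nodes=$(( total_cores / cores_per_node ))   # spread over 2 nodes

echo "${total_cores} cores on ${nodes} node(s)"   # prints 96 cores on 2 node(s)
```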

Additional Information

Hybrid toy example for download

A hybrid Fortran toy code, a Makefile and a SLURM job script are available for download here.
You need to adjust your project account, your working directory and probably the name of your job log file in slurm.job.

Download

hybrid-slurm-example.tar

extract using the command

tar -xf hybrid-slurm-example.tar

edit and adjust the file

slurm.job

Compile with

make compile

and submit with

make submit

and check the job log file after the job has terminated.
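Put together, the steps above form the following shell session (file and target names exactly as given in the text; make submit requires access to the cluster's Slurm installation):

```shell
# unpack the example
tar -xf hybrid-slurm-example.tar

# edit slurm.job: set your project account, working directory and job log file name

# build the Fortran toy code
make compile

# submit the job to Slurm
make submit
```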

last changed on 11/18/2022


Creative Commons License
This work is licensed under a Creative Commons Attribution - Share Alike 3.0 Germany License