Example Batch Scripts
Serial
#!/usr/local_rwth/bin/zsh

# ask for 10 GB memory
#SBATCH --mem-per-cpu=10240M   # M is the default and can therefore be omitted, but could also be K(ilo)|G(iga)|T(era)

# name the job
#SBATCH --job-name=SERIAL_JOB

# declare the merged STDOUT/STDERR file
#SBATCH --output=output.%J.txt

### beginning of executable commands
./a.out
OpenMP
#!/usr/local_rwth/bin/zsh

# ask for eight cores
#SBATCH --cpus-per-task=8

# #################
# ATTENTION !!!
# #################
# Divide the memory needed per task by the cpus-per-task, as Slurm requests memory per CPU, not per task!
# Example:
# You need 2 GB memory per task and have ordered 8 CPUs per task:
# 2048/8 -> 256M memory per CPU (i.e., per thread)
#SBATCH --mem-per-cpu=256M   # M is the default and can therefore be omitted, but could also be K(ilo)|G(iga)|T(era)

# name the job
#SBATCH --job-name=OPENMP_JOB

# declare the merged STDOUT/STDERR file
#SBATCH --output=output.%J.txt

### beginning of executable commands
# Note: the OMP_NUM_THREADS envvar is set automatically - do not overwrite!
./a.out
# alternatively, you may run your application under control of srun:
#srun ./a.out
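The per-CPU memory division described in the comments above can be checked with plain shell arithmetic before writing it into the script. The variable names below are illustrative, not part of the job script itself:

```shell
# Memory the whole task needs, in MB (the 2 GB example from the comments)
mem_per_task_mb=2048
# CPUs per task, matching the value given to --cpus-per-task
cpus_per_task=8
# Slurm wants memory per CPU: 2048 / 8 = 256
echo "$((mem_per_task_mb / cpus_per_task))M"
```

The printed value (256M) is what goes into the `--mem-per-cpu` directive.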
MPI
#!/usr/local_rwth/bin/zsh

# ask for eight tasks
#SBATCH --ntasks=8

# ask for less than 4 GB memory per task=MPI rank
#SBATCH --mem-per-cpu=3900M   # M is the default and can therefore be omitted, but could also be K(ilo)|G(iga)|T(era)

# name the job
#SBATCH --job-name=MPI_JOB

# declare the merged STDOUT/STDERR file
#SBATCH --output=output.%J.txt

### beginning of executable commands
$MPIEXEC $FLAGS_MPI_BATCH ./a.out
Hybrid
This example illustrates a batch script for a hybrid MPI + OpenMP job that uses two CLAIX-18 nodes with one MPI rank per socket and 24 threads per rank to fill each socket.
#!/usr/local_rwth/bin/zsh

# ask for four tasks (which are 4 MPI ranks)
#SBATCH --ntasks=4

# ask for 24 threads per task=MPI rank (which is 1 thread per core on one socket on CLAIX18)
#SBATCH --cpus-per-task=24

# #################
# ATTENTION !!!
# #################
# Divide the memory needed per task by the cpus-per-task, as Slurm requests memory per CPU, not per task!
# Example:
# You need 24 GB memory per task and have ordered 24 CPUs per task:
# 24GB/24 -> 1G memory per CPU (i.e., per thread)
#SBATCH --mem-per-cpu=1G   # M is the default and can therefore be omitted, but could also be K(ilo)|G(iga)|T(era)

# name the job
#SBATCH --job-name=HYBRID_JOB

# declare the merged STDOUT/STDERR file
#SBATCH --output=output.%J.txt

### beginning of executable commands
# Note: the OMP_NUM_THREADS envvar is set automatically - do not overwrite!
$MPIEXEC $FLAGS_MPI_BATCH ./a.out
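As a quick sanity check (not part of the job script), the requested geometry can be multiplied out in the shell. The variable names are illustrative; the values mirror the directives above:

```shell
ntasks=4          # MPI ranks, matching --ntasks
cpus_per_task=24  # threads per rank, matching --cpus-per-task
# 4 ranks x 24 threads = 96 cores, i.e. two CLAIX-18 nodes with 48 cores (2 sockets x 24 cores) each
echo "total cores: $((ntasks * cpus_per_task))"
```

If the product does not match a whole number of nodes, the job will leave cores idle or spread ranks unevenly across sockets.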
Hybrid toy example for download
Please find a hybrid Fortran toy code, a Makefile, and a SLURM job script for download here.
You need to adjust your project account, your working directory and probably your job log file name in slurm.job.
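For orientation, the directives to adjust typically look like the following. The placeholder values are illustrative, not the actual contents of the downloaded slurm.job, and the exact directive names in that file may differ:

```shell
# replace with your own project account
#SBATCH --account=<your_project>
# replace with your working directory
#SBATCH --chdir=<your_workdir>
# adjust the job log file name if desired
#SBATCH --output=output.%J.txt
```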
Download
extract using the command
tar -xf hybrid-slurm-example.tar
edit and adjust the file
slurm.job
Compile with
make compile
and submit with
make submit
and check the job log file after the job has terminated.