
OptiStruct SMP (MPI)

Short Information

Only the OptiStruct solver currently supports parallel execution. OptiStruct provides several parallel execution modes, of which two can be used on the cluster:

Shared memory (SMP) mode uses multiple cores within a single node (see the sketch after this list)
Distributed memory (SPMD) mode uses multiple cores across multiple nodes via the MPI library
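For the SMP mode, a single-node run is typically requested through the solver's thread-count option. The sketch below is only an illustration: the input deck name model.fem, the optistruct wrapper command and the -nt option are assumptions based on common OptiStruct usage and are not taken from this page.

# Hypothetical SMP run: one node, 12 solver threads
# (deck name, wrapper command and -nt option are assumptions - adapt to your installation)
optistruct model.fem -nt 12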


Detailed Information

There are several different parallelisation schemes for SPMD OptiStruct that are selected by different flags:

Domain decomposition: -ddm flag (see the example after this list)
Multi-model optimisation: -mmo flag
Failsafe topology optimisation: -fso flag

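For example, a domain decomposition run is requested by adding the -ddm flag to the solver command line; the other schemes are selected analogously with -mmo or -fso. The line below is only a sketch: the deck name model.fem and the optistruct wrapper command are placeholders, and -np (number of MPI processes) is an assumption based on common OptiStruct usage.

# Hypothetical domain decomposition run with 48 MPI processes
optistruct model.fem -ddm -np 48
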
You should launch OptiStruct SPMD using the standard Intel MPI mpirun command.

Note: OptiStruct does not support the use of SGI MPT; you must use Intel MPI.

Example OptiStruct SPMD job submission script:

#!/usr/local_rwth/bin/zsh

# Slurm job options (name, compute nodes, job time)
#SBATCH --job-name=HW_OptiStruct_SPMD
#SBATCH --time=0:20:0
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=24
#SBATCH --cpus-per-task=1

# Load Hyperworks module and Intel MPI
module load TECHNICS hyperworks
module load intelmpi

# Set the number of threads to 1
#   This prevents any threaded system libraries from automatically
#   using threading.
export OMP_NUM_THREADS=1
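
The script above sets up the environment but does not yet start the solver. A minimal launch step, appended at the end of the script, might look like the sketch below. The input deck name model.fem, the optistruct wrapper command and the -ddm/-np/-mpi options are assumptions based on common OptiStruct usage rather than this page; check the OptiStruct documentation for the version provided by the hyperworks module.

# Launch OptiStruct in SPMD domain decomposition mode
# NOTE: deck name, wrapper command and options are assumptions - adapt to your installation.
#       -np sets the number of MPI processes, -mpi i selects Intel MPI.
optistruct model.fem -ddm -np $SLURM_NTASKS -mpi i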


last changed on 29.01.2021
