
lsdyna


1. How to Access the Software

module load TECHNICS
module load lsdyna
 

2. Example Batch Scripts

For an overview of the available versions, see the table at the bottom of this page. Choosing the right binary for your use case is likely to improve performance.
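Which binary performs best depends on the node type; on the Claix18 (Skylake) partitions the `_avx512` builds listed in the table below can be noticeably faster. A minimal sketch for picking a binary by CPU feature flags (the helper function and its fallback logic are illustrative, not part of the official setup):

```shell
# Illustrative helper: pick the AVX-512 build of the hybrid solver when the
# given CPU flag string contains avx512f, otherwise fall back to the
# generic binary.
pick_lsdyna_binary() {
    case "$1" in
        *avx512f*) echo "ls-dyna_hyb_avx512" ;;
        *)         echo "ls-dyna_hyb" ;;
    esac
}

# On a compute node you could feed it the flags from /proc/cpuinfo:
# pick_lsdyna_binary "$(grep -m1 '^flags' /proc/cpuinfo)"
```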

Serial Job

#!/usr/local_rwth/bin/zsh
 
### Job name
#SBATCH --job-name=LSDYNA_SERIAL
 
### File/path that STDOUT will be written to; %J is replaced by the job id
#SBATCH --output=LSDYNA_SERIAL.%J
 
### Request the time you need for execution. The full format is D-HH:MM:SS;
### you must specify at least minutes (or days and hours) and may omit the
### other fields. A plain number such as 80 is interpreted as minutes.
#SBATCH --time=80
  
### Request the memory you need for your job. You can specify this
### in either MB (1024M) or GB (4G).
#SBATCH --mem-per-cpu=1850M
 
 
 
### Load required modules
module load TECHNICS
module load lsdyna
 
### start non-interactive batch job
ls-dyna i=inputfile
 

Shared Memory Parallel Job

#!/usr/local_rwth/bin/zsh
 
### Job name
#SBATCH --job-name=LSDYNA_OMP
 
### Request the time you need for execution. The full format is D-HH:MM:SS;
### you must specify at least minutes (or days and hours) and may omit the
### other fields. A plain number such as 80 is interpreted as minutes.
#SBATCH --time=80
  
### Request the memory you need for your job. You can specify this
### in either MB (1024M) or GB (4G).
#SBATCH --mem-per-cpu=1850M
 
### Request the number of compute slots you want to use
#SBATCH --cpus-per-task=12
 
### Load required modules
module load TECHNICS
module load lsdyna
 
### start non-interactive batch job
ls-dyna i=inputfile ncpus=$OMP_NUM_THREADS
 
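The `ncpus` option should match the number of threads granted by Slurm; on this cluster `OMP_NUM_THREADS` is expected to be set by the batch environment from `--cpus-per-task`. A defensive fallback (an assumption, not part of the official script) could look like:

```shell
# Defensive sketch: if the batch environment did not set OMP_NUM_THREADS,
# derive it from Slurm's per-task CPU count, defaulting to 1 thread.
: "${OMP_NUM_THREADS:=${SLURM_CPUS_PER_TASK:-1}}"
export OMP_NUM_THREADS
echo "LS-DYNA will run with $OMP_NUM_THREADS threads"
```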

Distributed Memory (Multi-Node, MPI) Parallel Job

#!/usr/local_rwth/bin/zsh
 
### Job name
#SBATCH --job-name=LSDYNA_IMPI
 
### Request the time you need for execution. The full format is D-HH:MM:SS;
### you must specify at least minutes (or days and hours) and may omit the
### other fields. A plain number such as 80 is interpreted as minutes.
#SBATCH --time=80
  
### Request the memory you need for your job. You can specify this
### in either MB (1024M) or GB (4G).
#SBATCH --mem-per-cpu=1850M
  
### Request the number of processes you want to use
#SBATCH --ntasks=12
 
### Load required modules
module load TECHNICS
module load lsdyna
 
### start non-interactive batch job
$MPIEXEC $FLAGS_MPI_BATCH ls-dyna_mpp i=lsdynaInput
 
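`$MPIEXEC` and `$FLAGS_MPI_BATCH` are provided by the module environment and expand to the correct MPI launcher and batch flags for the loaded MPI. As a hedged sketch (the `srun` fallback is an assumption, not documented behaviour), you could guard against them being unset:

```shell
# Defensive sketch: fall back to srun with the Slurm task count if the
# module environment did not provide the MPI launcher variables.
MPIEXEC="${MPIEXEC:-srun}"
FLAGS_MPI_BATCH="${FLAGS_MPI_BATCH:--n ${SLURM_NTASKS:-1}}"
echo "launching via: $MPIEXEC $FLAGS_MPI_BATCH"
```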

Available Versions

| Version \ Command | 7.0.0 | 7.1.1 | 8.0.0 | 9.1.0 | 10.1 | 11.1 | 12.0 |
|---|---|---|---|---|---|---|---|
| Hybrid OpenMPI double | ls-dyna_hyb | | | | | | |
| Hybrid OpenMPI single | ls-dyna_hyb_s | | | | | | |
| Hybrid IntelMPI double | ls-dyna_hyb_d | | | ls-dyna_hyb | ls-dyna_hyb, ls-dyna_hyb_avx512 (Claix18) | ls-dyna_hyb, ls-dyna_hyb_avx512 (Claix18) | ls-dyna_hyb, ls-dyna_hyb_avx512 (Claix18) |
| Hybrid IntelMPI single | ls-dyna_hyb_s | | | ls-dyna_hyb_s | ls-dyna_hyb_s, ls-dyna_hyb_avx512_s | ls-dyna_hyb_s, ls-dyna_hyb_avx512_s | ls-dyna_hyb_s, ls-dyna_hyb_avx512_s |
| MPP OpenMPI double | ls_dyna_mpp | ls-dyna_mpp | ls_dyna_mpp | | | | |
| MPP OpenMPI single | ls-dyna_mpp_s | ls-dyna_mpp_s | ls-dyna_mpp_s | | | | |
| MPP IntelMPI double | ls-dyna_mpp_d | | | ls-dyna_mpp_intel | ls-dyna_mpp, ls-dyna_mpp_avx512 | ls-dyna_mpp, ls-dyna_mpp_avx512 | ls-dyna_mpp, ls-dyna_mpp_avx512 |
| MPP IntelMPI single | ls-dyna_mpp_d | | | ls-dyna_mpp_intel_s | ls-dyna_mpp_s, ls-dyna_mpp_avx512_s | ls-dyna_mpp_s, ls-dyna_mpp_avx512_s | ls-dyna_mpp_s, ls-dyna_mpp_avx512_s |
| SMP double | ls-dyna | ls-dyna | ls-dyna | ls-dyna | ls-dyna | ls-dyna | ls-dyna |
| SMP single | ls-dyna_s | ls-dyna_s | ls-dyna_s | ls-dyna_s | ls-dyna_s | ls-dyna_s | ls-dyna_s |

Last modified: 09.09.2021


This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Germany License.