
abaqus

Short Information

Abaqus is a software suite for finite element analysis and computer-aided engineering.


Detailed Information

1. How to Access the Software

Load the module and display the version information:

$> module load TECHNICS
# which abaqus versions are available?
$> module avail abaqus
$> module load abaqus
# or load specific version
$> module load abaqus/2018
$> abaqus information=release

2. Example Batch Scripts

2.1 single-node job

#!/usr/local_rwth/bin/zsh
 
### Job name
#SBATCH --job-name=abaqus_slurm_job
 
### File/Path where STDOUT will be written to, %J is the job id
#SBATCH --output abaqus-job-log.%J
 
### Request the time you need for execution. The full format is D-HH:MM:SS
### You must at least specify minutes or days and hours and may add or
### leave out any other parameters
#SBATCH --time=5:00
 
### Request the memory you need for your job. You can specify this
### in either MB (1024M) or GB (4G). BEWARE: This is a per-cpu limit,
### and will be multiplied with cpus-per-task for the total requested memory
#SBATCH --mem-per-cpu=3900M
 
### Request one host
#SBATCH --nodes=1
 
### Request number of CPUs/MPI Ranks
#SBATCH --ntasks=4
 
### Initialization of the software
module load TECHNICS
module load abaqus
 
### Set the amount of memory to be passed to Abaqus as a command line argument
### Beware: This HAS to be lower than the value you requested via --mem-per-cpu
export ABAQUS_MEM_ARG="3584 mb"
 
### Change (!) to your desired work directory
cd $HOME/path/to/your/dir
 
### Create ABAQUS environment file for current job, you can set/add your own options (Python syntax)
env_file=abaqus_v6.env
 
cat << EOF > ${env_file}
#verbose = 3
#ask_delete = OFF
mp_file_system = (SHARED, LOCAL)
mp_host_list = $R_WLM_ABAQUSHOSTLIST
EOF
 
unset SLURM_GTIDS
 
### name your job HERE, name it DIFFERENT from your input file!
JOBNAME=name_of_my_job
INPUTFILE=input_file.inp
 
### Remove leftover files from a previous run with the same job name
unsetopt NOMATCH
rm -f $JOBNAME.* 2>/dev/null
setopt NOMATCH
 
### Execute your application
### Remember: the memory passed here must be less than the amount requested above
abaqus interactive job=$JOBNAME input=$INPUTFILE cpus=$SLURM_NTASKS memory=$ABAQUS_MEM_ARG
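Assuming the script above is saved as abaqus_single.sh (the file name is illustrative), it is submitted like any other Slurm batch script:

```shell
# Submit the batch script; sbatch prints "Submitted batch job <jobid>"
sbatch abaqus_single.sh

# Check the state of your pending/running jobs
squeue -u $USER
```

STDOUT of the job ends up in the file given via #SBATCH --output, here abaqus-job-log.<jobid>.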

2.2 multi-node job

#!/usr/local_rwth/bin/zsh
 
### Job name
#SBATCH --job-name=abaqus_slurm_job
 
### File/Path where STDOUT will be written to, %J is the job id
#SBATCH --output abaqus-job-log.%J
 
### Request the time you need for execution. The full format is D-HH:MM:SS
### You must at least specify minutes or days and hours and may add or
### leave out any other parameters
#SBATCH --time=5:00
 
### Request the memory you need for your job. You can specify this
### in either MB (1024M) or GB (4G). BEWARE: This is a per-cpu limit,
### and will be multiplied with cpus-per-task for the total requested memory
#SBATCH --mem-per-cpu=3900M
 
### Request number of hosts
#SBATCH --nodes=2
 
### Request number of CPUs/MPI Ranks
#SBATCH --ntasks=4
 
### Initialization of the software
module load TECHNICS
module load abaqus
 
### Set the amount of memory to be passed to Abaqus as a command line argument
### Beware: This HAS to be lower than the value you requested via --mem-per-cpu
export ABAQUS_MEM_ARG="3584 mb"
 
### Change (!) to your desired work directory
cd $HOME/path/to/your/dir
 
### Create ABAQUS environment file for current job, you can set/add your own options (Python syntax)
env_file=abaqus_v6.env
 
cat << EOF > ${env_file}
#verbose = 3
#ask_delete = OFF
mp_mpi_implementation = IMPI
mp_mpirun_path = {IMPI:'/opt/intel/impi/2018.4.274/compilers_and_libraries/linux/mpi/bin64/mpiexec.hydra'}
mp_file_system = (SHARED, LOCAL)
mp_host_list = $R_WLM_ABAQUSHOSTLIST
EOF
 
unset SLURM_GTIDS
 
### name your job HERE, name it DIFFERENT from your input file!
JOBNAME=name_of_my_job
INPUTFILE=input_file.inp
 
### Remove leftover files from a previous run with the same job name
unsetopt NOMATCH
rm -f $JOBNAME.* 2>/dev/null
setopt NOMATCH
 
### Execute your application
### Remember: the memory passed here must be less than the amount requested above
abaqus interactive job=$JOBNAME input=$INPUTFILE cpus=$SLURM_NTASKS memory=$ABAQUS_MEM_ARG

3. Best Practices for Abaqus Jobs

  • Don't name your input file $JOBNAME.inp. It would be deleted by the batch script and your job would not start.
  • Use a separate directory for each job. Otherwise the file abaqus_v6.env, for example, could be overwritten by a newly starting job.
  • Use job dependencies or job arrays so that only one, or at most a few, jobs run concurrently. Otherwise they might "steal" licenses from each other.
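The third bullet can be sketched with a Slurm job array whose throttle limits how many array tasks run at once (the array range and the input-file naming scheme are illustrative assumptions):

```shell
### Job array with indices 1-10, but at most one task running at a time;
### the "%1" throttle keeps the concurrent license usage low
#SBATCH --array=1-10%1

### Inside the script, derive a per-task input file from the array index
INPUTFILE=input_${SLURM_ARRAY_TASK_ID}.inp
```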

4. Submitting Abaqus Jobs with Fortran User Subroutines

  • Pre-compile your subroutine as a shared library with the following command line:
 
$> module load TECHNICS
$> module load abaqus
$> abaqus make -library yoursubfile.f
  • Un-comment (remove the '#' character from) the line 'usub_lib_dir=os.getcwd()' in the example script above
  • Please make sure that the entries in the environment file (between the 'EOF' marks) don't have leading spaces or tabs
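A minimal sketch of the environment-file change described above, assuming the shared library built by abaqus make sits in the job's working directory:

```shell
### Append the user-subroutine settings to the job's environment file
### (Python syntax; the lines must not have leading spaces or tabs)
cat << EOF >> abaqus_v6.env
import os
usub_lib_dir=os.getcwd()
EOF
```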
 

5. General information about Abaqus

Look >>>here<<< for more information about Abaqus, and especially the PDF document about the Abaqus license model.

In short, the following table shows the token usage for a given number of cores in an Abaqus job.

 
number of cores | 1 | 2 | 4 |  8 | 12 | 16 | 24 | 32 | 64 | 128
needed tokens   | 5 | 6 | 8 | 12 | 14 | 16 | 19 | 21 | 28 |  38
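The table is consistent with the commonly cited Abaqus license rule of thumb, tokens = floor(5 · N^0.422) for N cores (treat this formula as an assumption; the license PDF above is authoritative). A quick check:

```shell
# Compute the token count for a given core count with awk
cores=8
tokens=$(awk -v n="$cores" 'BEGIN { printf "%d", int(5 * n ^ 0.422) }')
echo "$cores cores need $tokens tokens"   # 8 cores need 12 tokens
```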

[Figure "Abaqus Tokens": needed tokens plotted over the number of cores]


FAQ

I'm not able to see my model in Abaqus CAE, I only get a "blue" screen.

Open Abaqus CAE with MESA support (thanks to Mrs. Toups for the tip): use "abaqus cae -mesa" instead of "abaqus cae". This can be done via
  • FastX2 (the preferred way)
  • ssh, which is horribly slow

I'm getting an error: Abaqus Error: The following file(s) could not be located: ....inp

Don't name your input file $JOBNAME.inp; it was deleted by the batch script! Also make sure that you used an existing .inp file name and the right path.

I'm getting an error: Abaqus Error: It is required that the local host is in the host list for this run. Local host: ncm0100.hpc.itc.rwth-aachen.de, mp_host_list: (('ncm1234.hpc.itc.rwth-aachen.de', 12),)

Use a separate directory for each job; your abaqus_v6.env was overwritten by another starting job!
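One way to avoid this (a sketch; the directory layout is an assumption) is to create a dedicated working directory per job inside the batch script, instead of a fixed cd path:

```shell
### One working directory per job, named after the Slurm job id,
### so each job writes its own abaqus_v6.env
workdir=$HOME/abaqus-runs/job-$SLURM_JOB_ID
mkdir -p "$workdir"
cd "$workdir"
```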

I'm getting an error: "..." license request queued for the License Server. Total time in queue: ... seconds.

Too many licenses have been checked out. The Slurm scheduler does not check whether free Abaqus licenses are available before a job is started. Unfortunately, if no license is available, Abaqus does not abort immediately but waits until a license becomes available. From Slurm's point of view, however, the job is running and will therefore be accounted as usual.

So let only one, or at most a few, jobs run at a time. This can be done with job dependencies or with array jobs.
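A job-dependency chain can be sketched like this (the script names are illustrative):

```shell
# Submit the first job; --parsable makes sbatch print only the job id
jid=$(sbatch --parsable abaqus_job1.sh)

# The second job starts only after the first has finished
# (afterany: regardless of the first job's exit status)
sbatch --dependency=afterany:$jid abaqus_job2.sh
```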

Last modified: 22.09.2021


Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Germany License.