Service: RWTH Compute Cluster Linux (HPC)

SIESTA is both a method and its computer-program implementation for performing efficient electronic structure calculations and ab initio molecular dynamics simulations of molecules and solids.


1. How to access the software

$ module load CHEMISTRY
$ module load siesta
Available binaries:
  • Serial version: siesta, transiesta
  • MPI version: siesta_mpi, transiesta_mpi
  • Tools: Eig2DOS ccViz cdf2dm cdf2grid cdf2xsf cdf_laplacian countJobs denchar dm2cdf dm_creator driver eig2bxsf eigfat2plot fat fcbuild fmpdos fractional g2c_ng gen-basis getResults get_chem_labels grid2cdf grid2cube grid2val grid_rotate horizontal hs2hsx hsx2hs info_wfsx ioncat lwf2cdf macroave md2axsf mixps mprop new.gnubands orbmol_proj para pdosxml plstm protoNEB readwf readwfx rho2xsf runJobs siesta simple simple_pipes_parallel simple_pipes_serial simplex stm swarm tbtrans vib2xsf vibra wfs2wfsx wfsnc2wfsx wfsx2wfs xv2xsf
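The binaries read their input from an FDF file on standard input. As a starting point, a minimal input might look like the classic water-molecule sketch below; the system, coordinates, and labels are illustrative, and SIESTA additionally expects matching pseudopotential files (here O.psf and H.psf) in the working directory, which the module does not provide.

```text
SystemName       Water molecule
SystemLabel      h2o
NumberOfAtoms    3
NumberOfSpecies  2

%block ChemicalSpeciesLabel
 1  8  O
 2  1  H
%endblock ChemicalSpeciesLabel

AtomicCoordinatesFormat  Ang
%block AtomicCoordinatesAndAtomicSpecies
 0.000  0.000  0.000  1
 0.757  0.586  0.000  2
-0.757  0.586  0.000  2
%endblock AtomicCoordinatesAndAtomicSpecies
```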

2. Further Information

$ okular $SIESTA_ROOT/Docs/siesta.pdf

3. Example batch script (MPI)

#!/usr/bin/env bash
### Job name; %J is the job ID
#SBATCH --job-name=siesta.%J
### Request the time you need for execution. The full format is D-HH:MM:SS.
### You must specify at least minutes, or days and hours, and may add or
### leave out any other parameters.
#SBATCH --time=60
### Request the memory you need for your job. You can specify this
### in either MB (1024M) or GB (4G).
#SBATCH --mem-per-cpu=1850M
### Request 12 processes, all on a single node
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=12

# Load the required module files
module load CHEMISTRY
module load siesta

# Execute the SIESTA MPI binary
$MPIEXEC $FLAGS_MPI_BATCH siesta_mpi < in.fdf > [name_of_outputfile]
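After the job has finished, the final Kohn-Sham energy can be pulled out of the output file for a quick sanity check. A minimal sketch, assuming the usual `siesta: E_KS(eV) =` marker line appears in the output; the file name and excerpt are illustrative:

```shell
# Illustrative excerpt of a SIESTA output file (contents assumed)
cat > siesta.out <<'EOF'
siesta: E_KS(eV) =             -466.7632
EOF

# Extract the last field of the E_KS marker line: the total energy in eV
energy=$(grep 'E_KS(eV)' siesta.out | awk '{print $NF}')
echo "$energy"
```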

4. FAQ / Known Issues

  • We strongly recommend using the Intel MPI version.
  • Versions compiled with the GCC compiler also need the Intel MKL module to be loaded:
module switch intel gcc
module load CHEMISTRY
module load siesta
module load LIBRARIES
module load intelmkl
  • Versions compiled with the Intel compiler v.16 may produce errors like this:

    {    1,    1}:  On entry to
    PZHETRD parameter number   11 had an illegal value 

    Please switch to a version compiled with a newer Intel compiler, or with GCC (and a recent Intel MKL version):

    module switch intel intel/19.0
  • With OpenMPI, you may encounter a failure with the error message below. In this case, switch to Intel MPI.

    [linuxbmc0008:8435] *** An error occurred in MPI_Bcast
    [linuxbmc0008:8435] *** reported by process [2972647425,4]
    [linuxbmc0008:8435] *** on communicator MPI COMMUNICATOR 12 SPLIT FROM 9
    [linuxbmc0008:8435] *** MPI_ERR_TRUNCATE: message truncated
    [linuxbmc0008:8435] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
    [linuxbmc0008:8435] ***    and potentially your MPI job)
  • If in doubt about the correctness of the results, try out a version compiled with another compiler (e.g. GCC instead of Intel).
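As a quick way to act on that advice, the total energies from two such runs can be compared numerically. A minimal sketch with illustrative values; the energies and the 1e-3 eV tolerance are assumptions, not cluster policy:

```shell
# Hypothetical total energies (eV) from Intel- and GCC-compiled runs
e_intel=-466.7632
e_gcc=-466.7629

# Report agreement if the absolute difference is below 1e-3 eV (tolerance assumed)
verdict=$(awk -v a="$e_intel" -v b="$e_gcc" \
  'BEGIN { d = a - b; if (d < 0) d = -d; if (d < 1e-3) print "OK"; else print "DIFFER" }')
echo "$verdict"
```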

Last modified: 29.01.2021

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Germany License.