
Gaussian

Summary

Gaussian provides state-of-the-art capabilities for electronic structure modeling.

Table of Contents

  1. How to access the software
  2. Setting limits
  3. Example batch script
  4. Known issues

Details

How to access the software

Load the module directly:

module load Gaussian/16.C.01-AVX2

Available Gaussian versions can be listed with module spider Gaussian. Specifying a version will list the needed modules: module spider Gaussian/16.C.01-AVX2

Setting limits

Memory Limits

The most important point about the proper usage of Gaussian on the cluster is that the memory limit requested in the Slurm batch script via #SBATCH --mem-per-cpu=[value in MB] (multiplied by the number of CPUs per task) has to be higher than the memory limit set within the Gaussian input file via the %mem directive, because Gaussian itself needs additional memory beyond that.
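
As an illustrative sketch (the values are placeholders, not recommendations): if the Gaussian input file contains

%NProcShared=8
%mem=6GB

then the corresponding Slurm request

#SBATCH --cpus-per-task=8
#SBATCH --mem-per-cpu=1024M

amounts to 8 x 1024 MB = 8 GB in total, which leaves Gaussian some headroom above the 6 GB it manages via %mem.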

Disk Space

It is very important that you define the maximum amount of hard disk space the program is supposed to use via the keyword MaxDisk (e.g. MaxDisk=30GB) in the route section of your Gaussian input file!

(Note: the default disk space limit defined in the Default.Route file is in general too low (less than 2 GB), but it is fixed at this value for security reasons.)
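
For example, a route section with an explicit disk space limit could look like this (the method and basis set are placeholders; only MaxDisk matters here):

#P MP2=(Direct)/6-311++G** SCF=(Direct) Opt MaxDisk=30GB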

Link 906 uses hard disk space excessively for writing the integral data into the .rwf file if the link automatically decides to switch to the disk-based calculation method.

To alleviate this problem, add the option FullDirect in parentheses after the relevant method keyword MPn (n = 2, 3, 4) in the route section of your input file.

(Note: if, and only if, the amount of main memory passed to Gaussian via %mem is large enough (you have to test this yourself), Link 906 will effectively run in main memory, recalculating the integrals as needed instead of storing the whole integral data in a large .rwf file on the hard disk.)
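
For example, the MP2 route line from above would then read (basis set and MaxDisk value are again placeholders):

#P MP2=(FullDirect)/6-311++G** SCF=(Direct) Opt MaxDisk=30GB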

Example batch script

#!/usr/bin/zsh

### Job name
#SBATCH --job-name=GAUSSIANJOB

### File / path to which STDOUT will be written; %J is the job ID
#SBATCH --output=GAUSSIANJOB.%J

### Request the time you need for execution. The full format is D-HH:MM:SS
### You must at least specify minutes or days and hours and may add or
### leave out any other parameters
#SBATCH --time=80

# This corresponds to the number of processors (no hyperthreading possible)
# to use with gaussian as set via %NProcShared=[number_of_threads] in the
# gaussian input file (a number between 4 and 12 should be reasonable)
#SBATCH --cpus-per-task=8

### Request the memory you need for your job. You can specify this
### in either MB (1024M) or GB (4G). BEWARE: this is a per-CPU limit
### and will be multiplied by cpus-per-task to give the total requested memory
#SBATCH --mem-per-cpu=1024M

###### end of batch directives ######

###### start of shell commands ######

# load the necessary module files
module load Gaussian/16.C.01-AVX2

# execute the gaussian binary
g16 < [name_of_inputfile] > [name_of_outputfile]
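
Assuming the batch script is saved as gaussian_job.sh (a placeholder name), it can then be submitted to Slurm with:

sbatch gaussian_job.sh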

Known issues

There are several known issues with Gaussian; some of the most common ones and their respective solutions or workarounds are listed here.

  1. Maximum Number of Threads
  2. Problems starting GaussView
  3. Problem with opt-freq multi-step jobs in Gaussian 09
  4. GaussView cannot open a checkpoint file
  5. Prohibitively large .fchk file output

Maximum Number of Threads

Gaussian has been compiled with the PGI compiler, which by default limits the number of threads to 64.

This limit can be raised by setting the environment variable OMP_THREAD_LIMIT=128.

You should make sure, however, that Gaussian really benefits from that many threads. In our experience, 32 threads is a good number for large calculations. But we would be curious to learn more about your experiences with the scalability of Gaussian.
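
A minimal sketch of how this could look in the batch script (128 is the value mentioned above; the file names are placeholders):

# raise the OpenMP thread limit (64 by default with the PGI compiler) before starting Gaussian
export OMP_THREAD_LIMIT=128

# execute the gaussian binary
g16 < [name_of_inputfile] > [name_of_outputfile]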

Problems starting GaussView

If you encounter problems starting GaussView because of missing Qt library dependencies or missing OpenGL support, you can alleviate this by passing the switch -mesagl to the gview script. So, invoke GaussView by typing:

GV -mesagl

Problem with opt-freq multi-step jobs in Gaussian 09

The following example demonstrates a serious drawback of the automatic opt-freq chain jobs provided by Gaussian.

Description of the problem case: this section deals with opt-freq chain jobs, a common type of Gaussian calculation.

The route section and the Link0 command section of the first job step, the geometry optimization, are set up manually by the user.

Link0 commands:

%mem=8GB
%chk=benzene
%NProcShared=12

The maximum amount of physical memory to be used by all Gaussian processes together is limited to 8 GB, and the number of OpenMP threads spawned by each link is fixed at 12.

The name of the checkpoint file is given as benzene.chk (the .chk extension is added automatically by Gaussian).

The route card, as written in the input file, reads:

#P MP2=(Direct)/6-311++G** SCF=(Direct) Opt Freq MaxDisk=40GB                              (1)

Route section (1) combines Opt and Freq, defining a job in which a geometry optimization is automatically followed by a frequency calculation. The method is second-order Møller-Plesset perturbation theory (MP2) with a triple-zeta basis set (6-311++G**).

It is also clearly stated that the maximum amount of disk space to be allocated by all Gaussian processes together is restricted to 40 GB.

During the optimization run, Gaussian gracefully honours this disk space limit and prints the correct amount of available disk space, fixed at 40 GB.

Now let's jump to the second job step, which Gaussian sets up automatically. The route section for this step, the frequency calculation, is auto-generated by Gaussian and looks like this:

#P Geom=AllCheck  Guess=TCheck  SCRF=Check  GenChk  RMP2(FC)/6-311++G(d,p) Freq           (2)

And whoops, the information about the disk space limit is lost and the amount of available disk space is no longer fixed at 40 GB!

Instead of using the limit defined by the user in the manually written route section (1), Gaussian omits the disk space limit from the auto-generated route section (2) of the second job and simply takes the default value for the maximum disk space from the Default.Route file in the Gaussian installation path. This configuration file, however, restricts the maximum to about 0.19 GB. That is definitely insufficient for the .rwf file of our test case, which grows to about 4.1 GB by the end of the whole calculation run.

Possible solutions: you can work around this problem in one of two ways:

  1. Do the frequency calculation in a separate Gaussian run with a new input file that defines the disk space limit again (laborious).
  2. Put both jobs into one input file, separating the two jobs (optimization and frequency calculation) with the multi-step directive --Link1-- in front of the Link0 command section of the second job. The relevant part of the input file should read as follows:

    [final newline of first job]
    --Link1--
    %mem=8GB
    %chk=benzene
    %NProcShared=12
    

The most convenient workaround, however, is the following:

Create a file named Default.Route in the working directory of your job containing the following line:

-#- MaxDisk=[number]GB
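
A minimal sketch of how this could be done from within the batch script, assuming a limit of 40 GB as in the example above (the file names are placeholders):

# create a per-job Default.Route in the working directory to set the disk space limit
printf '%s\n' '-#- MaxDisk=40GB' > Default.Route

# execute the gaussian binary
g16 < [name_of_inputfile] > [name_of_outputfile]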

GaussView cannot open a checkpoint file

If you get a parse error when you try to open a formatted checkpoint file:

CConnectionCFCHK::Parse_GFCHK()
Missing or bad data: Alpha Orbital Energies
Line Number xxx

the reason might be that the number of independent functions differs from the number of basis functions because Gaussian eliminated some linearly dependent functions. This gives GaussView a big headache, but you can help it out.

Just change the value aaa in the line:

Number of basis functions                 I             aaa

to the value bbb from the line:

Number of independent functions           I             bbb
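
If you prefer not to edit the file by hand, the following shell sketch performs the same substitution, assuming the formatted checkpoint file is named job.fchk (a hypothetical name):

# read the number of independent functions (bbb) from the formatted checkpoint file
NIND=$(awk '/Number of independent functions/ {print $NF}' job.fchk)

# replace the basis function count (aaa) in place with that value
sed -i "s/\(Number of basis functions  *I  *\)[0-9][0-9]*/\1${NIND}/" job.fchk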

Prohibitively large .fchk file output

Starting with revision C.01, Gaussian adds the transition densities and unrelaxed excited-state densities to formatted checkpoint files. This may lead to prohibitively large file sizes, particularly if that data is unnecessary for the task at hand.

There are two possible workarounds for this issue.

  1. Use an older version of Gaussian

    This issue does not affect revision B.01 and earlier versions. If an earlier version is available to you, this problem can be avoided entirely.

  2. Generate a new smaller checkpoint file using the copychk utility before converting to fchk.

    # execute the gaussian binary
    g16 < CuI_TMG2Mequ_NTO_2.com > CuI_TMG2Mequ_NTO_2.log
    
    # use copychk to make a copy of the NTO checkpoint file removing
    # logical file 633 (transition densities)
    copychk 0 CuI_TMG2Mequ_NTO_2.chk CuI_TMG2Mequ_NTO_2_no_trden.chk not 633
    
    # overwrite original NTO checkpoint file with new, smaller copy
    mv CuI_TMG2Mequ_NTO_2_no_trden.chk CuI_TMG2Mequ_NTO_2.chk
    
    # run formchk
    formchk CuI_TMG2Mequ_NTO_2.chk CuI_TMG2Mequ_NTO_2.fchk
    

Additional information

Last modified: 24.05.2023
