
Intel MPI (impi)

Intel® MPI Library is a multifabric message-passing library that implements the open source MPICH specification. Use the library to create, maintain, and test advanced, complex applications that perform better on HPC clusters based on Intel® and compatible processors.

Depending on the version, you may have to load additional modules before you can load Intel MPI (impi):

module load intel-compilers/2022.1.0
module load impi/2021.6.0

Available Intel MPI versions can be listed with module spider impi. Specifying a version will list the modules required to load it: module spider impi/2021.6.0
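On a cluster managed by Slurm, these module loads typically go into the job script before the application is launched. A minimal sketch (job name, task count, and time limit are placeholders, not recommendations):

```shell
#!/usr/bin/env zsh
#SBATCH --job-name=impi_job    # placeholder job name
#SBATCH --ntasks=4             # number of MPI ranks (placeholder)
#SBATCH --time=00:10:00        # placeholder wall-clock limit

# Load the compiler module first, then the matching Intel MPI module
module load intel-compilers/2022.1.0
module load impi/2021.6.0

# $MPIEXEC and $FLAGS_MPI_BATCH are set by the module system
$MPIEXEC $FLAGS_MPI_BATCH ./prog.exe
```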

The loaded module sets up several environment variables for further use. The list of these variables can be obtained with

module show impi/2021.6.0

In particular, the compiler drivers mpiifort, mpifc, mpiicc, mpicc, mpiicpc and mpicxx as well as the MPI application startup scripts mpiexec and mpirun are included in the search path. The compiler drivers mpiifort, mpiicc and mpiicpc use the Intel compilers, whereas mpifc, mpicc and mpicxx are the drivers for the GNU compilers. The necessary include directory ($MPI_INCLUDE) and library directory ($MPI_LIBDIR) are selected automatically by these compiler drivers.

We strongly recommend using the environment variables $MPIFC, $MPICC, $MPICXX and $MPIEXEC, which the module system sets according to the last-loaded compiler module, for building and running an MPI application. Example:

$MPIFC -c prog.f90
$MPIFC prog.o -o prog.exe
$MPIEXEC -np 4 ./prog.exe
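For completeness, a minimal MPI program that the commands above could build; shown here in C (compiled with $MPICC rather than $MPIFC), and the file name hello_mpi.c is an assumption:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);                 /* initialize the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process' rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();                         /* shut down MPI cleanly */
    return 0;
}
```

Built and run analogously to the Fortran example: $MPICC hello_mpi.c -o hello.exe, then $MPIEXEC -np 4 ./hello.exe.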


Further information / Known issues

  • 2017.10.09: Version '2018' of Intel MPI is known to suffer a serious performance degradation on the InfiniBand network (Bull Cluster, including interactive testing on the MPI back ends). Please avoid using this version of Intel MPI on InfiniBand!
  • Does your application crash from time to time on the new CLAIX nodes, although it has run well on the old Bull nodes for years? Try setting the environment variable PSM2_KASSIST_MODE to none on the startup command line:

    $MPIEXEC -env PSM2_KASSIST_MODE none $FLAGS_MPI_BATCH ./a.out
    

last changed on 07/23/2024

This work is licensed under a Creative Commons Attribution - Share Alike 3.0 Germany License