
Testing of MPI Jobs (Deprecated!)

On Linux we offer dedicated machines for interactive MPI tests. These machines are used automatically by our interactive mpiexec and mpirun wrappers. The goal is to avoid overloading the front-end machines with MPI tests and to enable larger MPI tests with more processes.


How to test an MPI job

You can start your MPI job with the following command:

$MPIEXEC $FLAGS_MPI_BATCH yourapplication.exe

By default, two MPI processes are started. You can select the number of processes you need as follows; in this example, 32 processes are started:

export FLAGS_MPI_BATCH="-np 32"

$MPIEXEC $FLAGS_MPI_BATCH yourapplication.exe

 

Further information/Known issues

The interactive wrapper works transparently, so you can start your MPI programs with the usual MPI options. To make sure that MPI programs do not hinder each other, the wrapper checks the load on the available machines and chooses the least loaded ones. Each chosen machine gets one MPI process per available processor. However, this default may not work for jobs that need more memory per process than is available per core; such jobs have to be spread across more machines. Therefore we added the -m <processes per node> option, which determines how many processes are started per node.
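For example, to spread 32 processes across eight nodes with exactly four processes per node (yourapplication.exe is a placeholder for your own binary):

$MPIEXEC -np 32 -m 4 yourapplication.exe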

The hardware type of the MPI back ends may differ from the hardware of the front-end nodes. If you optimise your binaries to the bleeding edge (e.g. with '-fast' instead of the $FLAGS_FAST environment variable) on a (modern) front-end node and the mpiexec wrapper lets your binary run on an (older) back end, your binary may fail:

zsh: illegal hardware instruction (core dumped)  a.out

....

Please verify that both the operating system and the processor support Intel(R) MOVBE, FMA, BMI, LZCNT and AVX2 instructions.

In this case you should either use the $FLAGS_FAST optimisation level or stop using the back ends with the '-H' option.
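For example, a recompile using the portable optimisation flags might look like this ($CC is assumed here as the compiler wrapper variable of the cluster environment; substitute your actual compiler call):

$CC $FLAGS_FAST -o yourapplication.exe yourapplication.c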

 

You can get a list of the mpiexec wrapper options with

$ $MPIEXEC --help

which prints the list of the mpiexec wrapper options, some of which are shown in the table below, followed by the help of the native mpiexec of the loaded MPI module.

--help | -h          prints this help and the help information of the normal mpiexec
--show | -v          prints out which machines are used
-d                   prints debugging information about the wrapper
--mpidebug           prints debugging information of the MPI library (Open MPI only, needs TotalView)
-n, -np <np>         starts <np> processes
-m <nm>              starts exactly <nm> processes on every host (except the last one)
-s, --spawn <ns>     number of processes that can be spawned with MPI_Comm_spawn; (np+ns) processes can be started in total
--listcluster        prints out all available clusters
--cluster <clname>   uses only cluster <clname>
--onehost            starts all processes on one host
--listonly           just writes the machine file, without starting the program
$MPIHOSTLIST         specifies which file contains the list of hosts to use; if not set, the default list is taken
$MPIMACHINELIST      if --listonly is used, specifies the name of the created host file; the default is $HOME/host.list
--skip (<cmd>)       (advanced option) skips the wrapper and executes <cmd> with the given arguments; the default <cmd> is mpiexec for Open MPI and mpirun for Intel MPI
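For example, to only generate a host file for eight processes without starting the program (the file name here is an arbitrary choice):

export MPIMACHINELIST=$HOME/my_hosts.list

$MPIEXEC -np 8 --listonly yourapplication.exe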

 

Passing environment variables from the master host, where the MPI program is started, to the other hosts is handled differently by the different MPI implementations.

If your program depends on environment variables, we therefore recommend letting the master MPI process (rank 0) read them and broadcast the values to all other MPI processes.
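A minimal sketch of this pattern in C (MY_APP_CONFIG is a hypothetical variable name used only for illustration):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define VALUE_LEN 256

int main(int argc, char **argv) {
    int rank;
    char value[VALUE_LEN] = "";

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Only the master process reads the environment variable.
           MY_APP_CONFIG is a placeholder for your own variable. */
        const char *env = getenv("MY_APP_CONFIG");
        if (env != NULL)
            strncpy(value, env, VALUE_LEN - 1);
    }

    /* Broadcast the value from rank 0 to all other MPI processes. */
    MPI_Bcast(value, VALUE_LEN, MPI_CHAR, 0, MPI_COMM_WORLD);

    printf("rank %d: MY_APP_CONFIG=\"%s\"\n", rank, value);

    MPI_Finalize();
    return 0;
}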

