
ParaView is an open-source, multi-platform data analysis and visualization application.

Table of Contents

  1. How to access the software
  2. MPI backends
  3. Further information / Known issues


How to access the software

Depending on the version, you may have to load additional modules before you can load ParaView:

module load GCC/11.2.0
module load OpenMPI/4.1.1
module load ParaView/5.9.1-mpi

Available ParaView versions can be listed with module spider ParaView. Specifying a version lists the modules required before it can be loaded: module spider ParaView/5.9.1-mpi

To access the GUI, type paraview

Note: this starts the 'binary' version of ParaView (installed from the official installer on the ParaView website). The included ParaView server pvserver does not necessarily support MPI parallelisation (but often works with Intel MPI).

MPI backends

In some versions, an MPI-capable version of pvserver is available alongside the serial version of ParaView. In these versions, pvserver can use MPI to run in parallel.

In a separate terminal, load the same modules and start pvserver using $MPIEXEC. Once pvserver has started, it prints the connection details, e.g.

$MPIEXEC -np 4 pvserver

will print something similar to:

Waiting for client...
Connection URL: cs://
Accepting connection(s):
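The printed URL has the form cs://<host>[:<port>], where 11111 is pvserver's default port. If you script around this output, the line can be parsed as in the following sketch (the host name node123 is a made-up example):

```python
# Parse the "Connection URL" line printed by pvserver.
# The cs:// scheme and the default port 11111 are ParaView conventions;
# the host name "node123" is a made-up example.
from urllib.parse import urlparse

def parse_connection_url(line):
    """Return (host, port) from a line like 'Connection URL: cs://node123:11111'."""
    url = line.split("Connection URL:", 1)[1].strip()
    parsed = urlparse(url)
    # pvserver listens on 11111 when no port is given in the URL
    return parsed.hostname, parsed.port or 11111

host, port = parse_connection_url("Connection URL: cs://node123:11111")
print(host, port)  # node123 11111
```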

Then, in the ParaView GUI, go to 'File' -> 'Connect' and add a server according to the settings above. (As we use multiple MPI back-end servers, the host name may vary from run to run.)
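Instead of filling in the Connect dialog by hand after every run, ParaView can also import server definitions from a .pvsc file ('File' -> 'Connect' -> 'Load Servers'). A minimal sketch of such a file; the host name node123 and the port are placeholders that must be adapted to the actual pvserver output:

```xml
<Servers>
  <!-- host and port are placeholders: copy them from the pvserver output -->
  <Server name="cluster-pvserver" resource="cs://node123:11111">
    <ManualStartup/>
  </Server>
</Servers>
```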

Please note: as the MPI processes notoriously tend to consume 100% CPU, please start as few of them as possible and stop them as soon as possible. We found a workaround for this issue for Intel MPI; we therefore strongly recommend using Intel MPI. The module sets some tuning variables to enable this workaround, and this tuning will affect the performance of common production jobs. Never load ParaView if you do not need it!

Further information / Known issues
  • DO NOT load ParaView habitually, as the workarounds for the 100% CPU issue above could affect general MPI performance.
  • Even if you start multiple MPI processes, your data set may still be processed by a single rank for one (or more) of the following reasons:

    • the data format reader must support parallel processing in the first place, see this overview
    • the data set must be structured
    • even structured data sets can be misconfigured so that they cannot be processed in parallel.

    If your data is not distributed across processes, you are free to try the D3 filter. However, we have observed that for small data sets (some hundreds of megabytes, where you likely do not need parallelisation at all) the data distribution works but looks more like data duplication, while for bigger data sets (gigabytes, where you would like to have parallelisation) the D3 filter fails with allocation and/or communication issues. So it goes.
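    The D3 filter can also be applied from ParaView's Python scripting interface (pvpython, or the Python Shell in the GUI). A rough sketch, assuming a ParaView installation; "input.vtu" is a hypothetical file name, and this runs only inside ParaView, not as a plain Python script:

    ```python
    # pvpython sketch: redistribute a data set with the D3 filter.
    # "input.vtu" is a hypothetical file name; adapt to your data.
    from paraview.simple import OpenDataFile, D3, Show, Render

    reader = OpenDataFile("input.vtu")
    redistributed = D3(Input=reader)  # D3 partitions the data across ranks
    Show(redistributed)
    Render()
    ```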

  • If you connect to multiple MPI processes but the data is still processed by one rank (see above), you will likely get a more responsive experience by skipping the remote connection entirely and opening the same data locally.
  • You are free to try to connect (or even reverse-connect) your workstation/laptop ParaView to (a set of) pvserver processes running on our nodes. However, note that
    • the versions of ParaView and pvserver must match; otherwise, you get this (or a similar) error:

      Connection failed during handshake. This can happen for the following reasons:
       1. Connection dropped during the handshake.
       2. vtkSocketCommunicator::GetVersion() returns different values on the
          two connecting processes (Current value: 100).
       3. ParaView handshake strings are different on the two connecting
          processes (Current value: paraview.5.3.renderingbackend.opengl2).
    • the OpenGL library versions must match. This condition in particular is very unlikely to be met, as we have no GPU cards on our front ends. Errors like the following then appear:

      X Error of failed request:  BadValue (integer parameter out of range for operation)
        Major opcode of failed request:  150 (GLX)
        Minor opcode of failed request:  3 (X_GLXCreateContext)
        Value in failed request:  0x0
        Serial number of failed request:  50
        Current serial number in output stream:  51

      Conclusion: we do not support connecting your workstation/laptop to pvserver running on our nodes, although we know from actual cases that it can work.
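For those who want to experiment with the reverse-connection direction anyway, the server side would be started roughly as follows. This is an unsupported sketch: my.workstation.example is a placeholder for your machine's host name, and the GUI must already be waiting with a 'reverse connection' server configuration ('File' -> 'Connect'):

```shell
# On the cluster node, after loading the same modules as above:
# tell pvserver to connect back to the waiting client instead of listening.
$MPIEXEC -np 4 pvserver --reverse-connection --client-host=my.workstation.example
```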

Last modified on 29.03.2023


Creative Commons licence agreement
This work is licensed under a Creative Commons Attribution - ShareAlike 3.0 Germany Licence