
paraview

Brief Information

ParaView is an open-source, multi-platform data analysis and visualization application.


Detailed Information

1. How to access the software

To access the GUI, type

$ module load GRAPHICS
$ module load paraview
$ paraview

Note: this activates the 'binary' version of ParaView (installed from the official installer on the ParaView web site). The included ParaView server 'pvserver' does not necessarily support MPI parallelisation (but often works with Intel MPI).
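
A quick way to check which build you got is to ask the binary itself; a sketch (the path and version shown are only illustrative):

$ which pvserver
/usr/local_rwth/sw/paraview/bin/pvserver
$ pvserver --version
paraview version 5.3.0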

 

2. MPI backends

For some versions, an MPI-capable build of 'pvserver' is available alongside the serial version of ParaView. In these builds, 'pvserver' can use MPI to run in parallel.

In a separate terminal, load the same modules and start 'pvserver' using $MPIEXEC. Once 'pvserver' has started, it prints the connection data, e.g.

$ $MPIEXEC -np 4 pvserver
Waiting for client...
Connection URL: cs://nrm215.hpc.itc.rwth-aachen.de:11111
Accepting connection(s): nrm215.hpc.itc.rwth-aachen.de:11111

Then, in the ParaView GUI, go to 'File' -> 'Connect' and add a server with the settings shown above. (As we use multiple MPI back-end servers, the host name may vary from run to run.)
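
The same connection can also be made from a script instead of the GUI, e.g. to test it non-interactively. A minimal sketch using 'pvpython' (the host name is the one printed by 'pvserver' above and changes from job to job):

$ cat > connect_test.py << 'EOF'
from paraview.simple import *
# host and port as printed by pvserver; the node name varies per job
Connect("nrm215.hpc.itc.rwth-aachen.de", 11111)
# any source created from now on lives on the server, not in the client
sphere = Sphere()
print("connected, created:", sphere)
EOF
$ pvpython connect_test.py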

Note: As the MPI processes notoriously tend to busy-wait at 100% CPU, please start as few of them as possible and stop them as soon as possible. We found a workaround for this issue with Intel MPI; we therefore strongly recommend using Intel MPI (see above).
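
The MPI implementation is selected via the module system before loading ParaView; a sketch for switching from the default Open MPI to Intel MPI (module names are those of our current software stack and may differ):

$ module switch openmpi intelmpi
$ module load GRAPHICS paraview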

Note: we set some tuning variables to enable the above workaround; this tuning can degrade the performance of ordinary production jobs. Never load the ParaView module if you do not need it!
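
If you want to see what the module changes in your environment (and hence what would leak into subsequent MPI jobs), you can inspect it; a sketch, the exact variable names depend on the installed version:

$ module show paraview 2>&1 | grep -i -e setenv -e prepend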

3. Further Information / Known issues

  • DO NOT load ParaView habitually: the workarounds for the 100% CPU issue described above can degrade general MPI performance.
  • Even if you started multiple MPI processes, your data set may still be processed by one single rank. Check whether the data is really distributed across the MPI ranks, as described under "Conduct a test to verify parallel connection" in https://hpc.llnl.gov/data-vis/vis-software/paraview/running-paraview-in-client-server-mode (the 'Process ID Scalars' filter and 'View' -> 'Memory Inspector'); see also the sketch after this list. If the data is not distributed across the processes, you are free to try the D3 filter. However, we have observed that for small data sets (some hundreds of megabytes, where you likely do not need parallelisation at all) the distribution works but looks more like data duplication, while for bigger data sets (gigabytes, where you would like to have parallelisation) the D3 filter fails with allocation and/or communication issues. So it goes.
  • If you connect to multiple MPI processes but the data is still processed by a single rank (see above), you will likely get a faster user experience by skipping the remote connection altogether and opening the same data locally.
  • You are free to try to connect (or even reverse-connect, cf. https://www.paraview.org/Wiki/Setting_up_a_ParaView_Server#Connecting_Through_a_Firewall ) your workstation/laptop ParaView to (a set of) 'pvserver' processes running on our nodes; a tunnel sketch follows after this list. However, note that
    • the versions of 'paraview' and 'pvserver' must match. Otherwise, you get this (or a similar) error:

      **********************************************************************
      Connection failed during handshake. This can happen for the following reasons:
       1. Connection dropped during the handshake.
       2. vtkSocketCommunicator::GetVersion() returns different values on the
          two connecting processes (Current value: 100).
       3. ParaView handshake strings are different on the two connecting
          processes (Current value: paraview.5.3.renderingbackend.opengl2).
      **********************************************************************
    • the OpenGL library versions must match. This condition in particular is very unlikely to be met, as we have no GPU cards in our front ends. Errors like the following may occur:

      X Error of failed request:  BadValue (integer parameter out of range for operation)
        Major opcode of failed request:  150 (GLX)
        Minor opcode of failed request:  3 (X_GLXCreateContext)
        Value in failed request:  0x0
        Serial number of failed request:  50
        Current serial number in output stream:  51

      Conclusion: we do not support connecting your workstation/laptop to 'pvserver' running on our nodes, although we know from actual cases that it can work.
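
A sketch of the distribution test mentioned above, scripted with 'pvpython' against a running 'pvserver' (the host name and the small test source are placeholders; apply 'ProcessIdScalars' to your own reader in practice):

$ cat > check_distribution.py << 'EOF'
from paraview.simple import *
Connect("nrm215.hpc.itc.rwth-aachen.de", 11111)  # host/port from the pvserver output
sphere = Sphere(ThetaResolution=64, PhiResolution=64)  # placeholder for your data set
pids = ProcessIdScalars(Input=sphere)  # tags each point with the owning MPI rank
display = Show(pids)
ColorBy(display, ('POINTS', 'ProcessId'))  # one colour per rank; a single colour means no distribution
Render()
EOF
$ pvpython check_distribution.py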
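
If you nevertheless want to try the workstation route, an SSH tunnel through one of our front ends is usually the simplest way past the firewall. A sketch, assuming 'pvserver' listens on nrm215.hpc.itc.rwth-aachen.de:11111 as above ('login.hpc.itc.rwth-aachen.de' and 'ab123456' are placeholders for a front end and your user name):

$ ssh -N -L 11111:nrm215.hpc.itc.rwth-aachen.de:11111 ab123456@login.hpc.itc.rwth-aachen.de

Then connect the local ParaView GUI to localhost:11111; the version caveats above still apply.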

last modified on 29.01.2021
