
#PARAVIEW VOLUME RENDERING HOW TO#

Our system has ParaView installed in a client/server distributed-rendering cluster, set up to do parallel rendering on 69 nodes (we followed all the instructions in ). However, we found out that it doesn’t render volume data properly. When ParaView is not connected to the server, it renders the data as shown here:

But when it’s connected to the server, we get these outlined transparent voxels:

Our theory is that there is a problem with how the data is handled on the different nodes. Maybe ParaView sends all the data to every node instead of dividing it into chunks and distributing them across the system. The data is only 6.8 MB, so personally I don’t think this is related to RAM/VRAM memory. Also, this is a Linux environment and every node has a Quadro K5000 card. Is there a best-practice manual on how to handle this distributed-rendering case? Maybe there is an option we need to enable, or an extra library ParaView needs to be built with.

This is the generated output when the system connects from the desktop to the server:

Generic Warning: In /gpfs/runtime/opt/paraview/5.6.0_OpenGL2/src/Paraview/ParaViewCore/ClientServerCore/Core/vtkPVServerInformation.cxx, line 784
VtkPVServerInformation::GetOGVSupport was deprecated for ParaView 5.5 and will be removed in a future version.

The other nodes don’t throw any error messages related to displays and connections.

Reply: If you’re in CAVE mode, then the visualization shown in the desktop application is only representative, and hence you won’t see the volume rendering in the desktop application.
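The memory argument in the post can be sanity-checked with quick arithmetic. The 6.8 MB dataset size and the 69-node count come from the question; the 4 GB VRAM figure is the published spec of a Quadro K5000 and is an assumption added here:

```python
# Back-of-the-envelope check of per-node data load.
# Figures from the post: a 6.8 MB dataset rendered across 69 nodes.
DATA_MB = 6.8
NODES = 69

# Ideal share per node if the data were evenly distributed in chunks.
per_node_mb = DATA_MB / NODES
print(f"Per-node share if evenly distributed: {per_node_mb:.3f} MB")

# Worst case: every node receives the full dataset. Even then, 6.8 MB is
# a tiny fraction of a Quadro K5000's 4 GB VRAM (assumed spec), which
# supports the poster's point that memory is not the bottleneck.
K5000_VRAM_MB = 4 * 1024
print(f"Fraction of K5000 VRAM used by whole dataset: {DATA_MB / K5000_VRAM_MB:.5f}")
```

Either way, memory pressure looks implausible as the cause, which points the investigation toward how the data is partitioned and composited rather than how much of it there is.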

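For reference, a parallel pvserver session like the one described above is typically launched through MPI. A minimal sketch follows; the rank count comes from the post, 11111 is pvserver's default port, and the exact launcher flags are an assumption that depends on the site's MPI stack and scheduler:

```shell
# Launch one pvserver rank per render node (69 nodes, as in the post).
# Requires a ParaView build with MPI enabled; swap mpirun for srun or
# your site's launcher as appropriate.
mpirun -np 69 pvserver --server-port=11111

# Then, in the ParaView desktop client:
# File -> Connect -> Add Server, pointing at the head node on port 11111.
```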