ParaView (https://paraview.org) is a data analysis and visualization application. ParaView is available on the cluster and can be run in a client-server model, where the server runs on the cluster and the client on your local machine. In this document we describe how to connect to a ParaView server running on one of the cluster nodes.
The ParaView server can be run on one of the interactive nodes, but not on the login node. The interactive nodes are pg-interactive.hpc.rug.nl and pg-gpu.hpc.rug.nl. The latter has GPUs, which can be used if the installed ParaView build supports GPU rendering.
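If you only need modest resources, you can start pvserver directly on one of these interactive nodes instead of submitting a job. A minimal sketch, assuming the same module version and example port as the job scripts below:

# Log in to the interactive node, load ParaView and start the server
ssh username@pg-interactive.hpc.rug.nl
module purge
module load ParaView/5.10.1-foss-2022a-mpi
pvserver --server-port=12345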
If you need extensive resources it is better to submit a job that will run ParaView on one of the compute nodes of the cluster.
Here is an example of a ParaView job script, using a single GPU and therefore a single process:
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --tasks-per-node=1
#SBATCH --time=02:00:00
#SBATCH --job-name=paraview
#SBATCH --mem=60G
#SBATCH --partition=gpu
#SBATCH --gres=gpu:v100:1

module purge
module load ParaView/5.10.1-foss-2022a-mpi

srun pvserver --server-port=12345
Here is an example of a ParaView job script, using multiple CPU cores using MPI:
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --tasks-per-node=8
#SBATCH --time=02:00:00
#SBATCH --job-name=paraview
#SBATCH --mem=60G
#SBATCH --partition=vulture

module purge
module load ParaView/5.10.1-foss-2022a-mpi

srun pvserver --server-port=12345
In order to prevent conflicts between multiple ParaView tasks it is important to change the port number to a more unique value. You can change the port by replacing the value 12345 in the job script with a different value; the allowed range is 1024-65535. We will refer to this value as port_number.
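One way to reduce the chance of picking the same port as another user is to derive it from your numeric user ID. This is only a sketch; the offset of 10000 and the modulus are arbitrary choices, not a site requirement:

# Derive a port in the range 10000-59999 from the numeric user ID
port_number=$(( 10000 + UID % 50000 ))
echo "Using port ${port_number}"
srun pvserver --server-port=${port_number}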
Submit the job using sbatch as you would any other job.
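Assuming the job script above was saved as paraview.sh (the filename is just an example), the submission looks like:

sbatch paraview.sh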
Once the job is running you need to check on which node it is running. This can be done using squeue:
squeue -u $USER
   JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
22918907       gpu paraview username  R       7:20      1 pg-gpu27
Take note of the node name in the NODELIST column, as you will need it for the next step. We will refer to this node name as peregrine_node. If you've started pvserver on the interactive (or interactive GPU) node directly without submitting a job, peregrine_node will be either pg-interactive or pg-gpu.
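If you prefer to obtain the node name on the command line, squeue can print just the node list. The job name paraview below matches the job scripts above:

# Print only the node list for the job named "paraview", without a header
squeue -u $USER -n paraview -h -o "%N"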
We now need to set up an SSH tunnel from our local machine to the node where the job is running. Using SSH on the command line, this can be done with a command like:
ssh username@peregrine.hpc.rug.nl -L 11111:peregrine_node:port_number
Here you have to replace peregrine_node and port_number with the correct values from the previous steps. Note that this command will open a session on Peregrine, which you have to leave open for the tunnel to keep working. For the local port we have selected 11111, which is the default ParaView port.
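As an illustration, with the node and port from the examples above and a hypothetical username p123456, the tunnel command would be:

ssh p123456@peregrine.hpc.rug.nl -L 11111:pg-gpu27:12345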
When using MobaXterm you can set up an SSH tunnel using the Tunnel icon in the top bar. After clicking on this icon you can select "New SSH tunnel", which opens a settings menu. Within this menu you have to add the following settings:
- Forwarded port (local): 11111
- SSH server: peregrine.hpc.rug.nl, together with your username, on port 22
- Remote server: the peregrine_node from the squeue output
- Remote port: the port_number used in the job script
After saving these settings, you can start the tunnel by clicking on the start button with the triangle icon.
In order to connect to pvserver running on the Peregrine cluster, you need to install the ParaView GUI software on your local system. The software can be downloaded from https://www.paraview.org/. Note that you have to use the same version locally as the one running on the cluster; in our example this is 5.10.1.
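To check which ParaView versions are available on the cluster, and which version your local client reports, you can use commands like the following; the exact module names may differ per software stack:

# On the cluster: list the available ParaView modules
module avail ParaView
# On your local machine: print the version of the installed client
paraview --version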
Once you have installed the software you can start ParaView. In the GUI you have to go to the File menu and then select Connect. In the window that pops up you can configure the connection to use. This is done through the Add Server button.
You then get a window in which you can configure the server: give the connection a name, and use localhost as the host and 11111 as the port. The port 11111 on the local machine (localhost) will be forwarded through the SSH tunnel to the pvserver process running on the cluster.
After having configured the server you can use Connect to connect to the server.
Once you have finished running ParaView it is best to cancel the job in order to release the resources. This can be done using:
scancel jobid
where jobid is the id of the job running ParaView. The job id can be found by running:
squeue -u $USER
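Using the job id from the squeue example above, cancelling the job would look like:

scancel 22918907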