
Graphical Interface Nodes

Guidance on connecting to and using the HPHI GPU visualization node for OpenGL applications

The HPHI platform currently has one physical server containing four NVIDIA GRID K1 (Kepler) GPU cards, suitable for visualization of data and results in applications that benefit from GPU hardware acceleration and OpenGL rendering.

Access to the graphical visualization node is best done from an existing remote desktop session on one of the HPHI login nodes, and work on it can be scheduled via the SLURM workload scheduler in much the same way as on the compute nodes.

The only difference is that you must explicitly request one of the four GPU cards with an additional flag to SLURM, which tells the scheduler that you wish to use a GPU and lets it keep track of which resources remain available on the GPU node.

Access to a GPU can be requested on the command line with:

pfb29@wbic-gate-1:~$ salloc --gres=gpu:1 -p wbic-gpu
salloc: Granted job allocation 70807
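
If you need to confirm the allocation, or look up the job ID again later, the standard SLURM squeue command will list your running and pending jobs together with the node that was allocated (the output format may be configured differently on HPHI):

pfb29@wbic-gate-1:~$ squeue -u pfb29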

An X-forwarded session can then be started on the allocated GPU node with:

pfb29@wbic-gate-1:~$ ssh pfb29@wbic-gpu-n1 -Y
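
As a quick sanity check, the DISPLAY environment variable should be set inside the forwarded session; the exact value varies, but if it is empty, OpenGL applications will not be able to open a window:

pfb29@wbic-gpu-n1:~$ echo $DISPLAY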

From there, applications that can benefit from OpenGL 2D and 3D rendering should be started with the "vglrun" command, which ensures that rendering is performed on the hardware GPU, then captured and forwarded to your X session.

For example, the test program "glxgears" can be started on the GPU node with:

pfb29@wbic-gpu-n1:~$ vglrun glxgears
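
If you want to confirm that rendering is really happening on the GPU rather than in a software rasterizer, and the glxinfo utility is installed on the node (it is not guaranteed to be), it can be run through vglrun in the same way; the reported OpenGL renderer string should name the NVIDIA card:

pfb29@wbic-gpu-n1:~$ vglrun glxinfo | grep -i "opengl renderer"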

[Screenshot: testing glxgears from the GPU node]

Jobs on the GPU visualization node currently have a maximum run time of 12 hours. If you finish with a job on the GPU visualization node ahead of time, please log out of the GPU node and cancel your job allocation to release the GPU assigned to you back to the pool of available resources:

pfb29@wbic-gate-1:~$ scancel 70807

Since the GPU visualization node has four GPU cards, up to four simultaneous GPU jobs can currently be scheduled on the node. Beyond that, new jobs requesting GPU access will be queued until a card frees up:

[Screenshot: GPU limit reached on the GPU node]
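
You can check whether your job is waiting for a free card with the standard SLURM squeue command, restricted to the GPU partition; queued jobs remain in a pending state until a GPU becomes available:

pfb29@wbic-gate-1:~$ squeue -p wbic-gpu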