VisIt is a powerful scientific visualization package for creating scalable visualizations and animations of scientific results, from small simulations that run on a single computer up to simulations that run across many nodes of a supercomputer.
Running VisIt on RCC
Below are several ways to run VisIt using the Spear or HPC cluster. Note that if HPC resources are being accessed from a computer that is not on the FSU network, a VPN connection is required. An installer can be found here.
Running VisIt Remotely
The user must use an SSH client (PuTTY on Windows, or the standard terminal on Unix) to use VisIt. To speed up image rendering, xpra is strongly recommended.
The simplest way to run VisIt through an SSH client is on the Spear node. Though batch jobs can be executed using commands through X, it is suggested that the user install the local client and run batch jobs through VisIt's internal scheduling routines.
- Log in using
ssh -Y firstname.lastname@example.org
- Note that the -Y option allows for VisIt's GUI to be piped through the SSH client.
- Other options may be used, but this is the basic command.
- To run VisIt, navigate to VisIt's bin directory and launch it with
./visit
- Alternatively, add VisIt's bin directory to your PATH:
export PATH=/gpfs/research/software/visit_new/parallel/bin:$PATH
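The PATH change above can be sketched as a small shell snippet; the directory name is the install path quoted in this guide, so verify it on your own account before relying on it:

```shell
#!/bin/sh
# Sketch: prepend the shared VisIt build to PATH so `visit` resolves first.
# The directory below is the install path given in this guide.
VISIT_BIN=/gpfs/research/software/visit_new/parallel/bin
PATH="$VISIT_BIN:$PATH"
export PATH

# The first PATH entry should now be the VisIt bin directory:
echo "$PATH" | cut -d: -f1
```

Prepending (rather than appending) ensures this copy of VisIt is found ahead of any other `visit` already on the search path.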
Running VisIt Locally Offloading to RCC Resources
Running VisIt on a user's local machine and offloading the processing to HPC or Spear gives the best performance and does not slow down the GUI response time. This is the preferred method of running VisIt if the user will regularly use the program on a local machine. Both approaches require that VisIt be installed and properly configured on the local machine.
- Much faster rendering and UI response time.
- Installation may take a while depending on the OS used.
- Need to set up Host and Launch Profiles.
Configuration for Spear
Configuring VisIt for Spear will allow the user to immediately connect to a Spear node and run in either serial or parallel. First click 'Options' on the main VisIt UI and go to 'Host Profiles...'. This is where we will set up and save the configuration options for interactive visualization on a Spear node. Click 'New Host' at the bottom of the screen. Under the 'Host Settings' tab on the right, use the following entries, which name this host profile 'FSU Spear', specify the location of the remote copy of VisIt on Spear, and specify the maximum number of resources we can use.
Host nickname: FSU Spear
Remote host name: spear-login.hpc.fsu.edu
Host name aliases: spear-##
(checked) Maximum nodes: 1
(checked) Maximum processors: 8
Path to VisIt installation: /gpfs/research/software/visit_new/parallel
Username: [Enter your FSU HPC username here]
(checked) Tunnel data connections through SSH
The remainder of the options do not need to be modified. Now move to the 'Launch Profiles' tab. Create a new profile and under 'Profile name' call it 'Serial', with a timeout of 480 (the default). This is the setup for serial jobs in VisIt. Now create another profile called 'Parallel', also with a timeout of 480. Under the 'Parallel' tab, use the following entries:
(checked) Launch parallel engine
(checked) Parallel launch method: sbatch/srun
Default number of processors: 2
(checked) Default number of nodes: 1
(checked) Default machine file: /etc/visit-hostfile
No other options need to be modified here. Note that the default number of processors here is 2, but this value can be changed to any value up to 8 when opening a file (explained in the Example Visualization section). Running serial processes on Spear does not require any further modifications. However, to run on Spear in parallel, the gnu-openmpi module must be loaded automatically on Spear. To do this, a .bash_profile and .bashrc file must be either created or modified in the user's home directory on Lustre (the default file system for Spear).
First create a .bash_profile file in your home directory and add the following lines:
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
Then create a .bashrc file (or modify your existing one) and add the following lines:
# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi

# User specific aliases and functions
module load gnu-openmpi
From here, VisIt should work properly on Spear for both serial and parallel processes. To test your configuration and/or see an example visualization, try this example.
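The .bashrc change above can also be made from the command line. The following is a minimal sketch, assuming a POSIX shell; it appends the module line only if it is not already present, so it is safe to run more than once:

```shell
#!/bin/sh
# Sketch: ensure ~/.bashrc loads gnu-openmpi, as required for parallel
# VisIt on Spear. The line is appended only if it is not already there.
RC="$HOME/.bashrc"
LINE='module load gnu-openmpi'
touch "$RC"
grep -qxF "$LINE" "$RC" || printf '%s\n' "$LINE" >> "$RC"
```

After running it, a new login shell on Spear should load gnu-openmpi automatically (check with `module list`).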
Configuration for HPC Queue
While we can immediately run interactive jobs on Spear, this limits our processing to a single node. VisIt has the built-in capability of using a batch queue system to run jobs across multiple nodes in an HPC-like environment. Note that when using VisIt on a queue system, the same rules apply as for any batch job: it may take a while for resources to become available, and there may be time limitations. However, once connected, no other changes are required, and VisIt will function much as it does on a local machine or on Spear, with the added advantage of more processing resources.
To run VisIt on the HPC queue, first choose one of the partitions or queues that you have access to from the list of partitions. Then, in VisIt, click 'Options' on the main GUI window and go to 'Host Profiles...'. Create a new host and call it 'FSU [queue]' under 'Host nickname', where [queue] stands for the specific queue being used. In the following example, we'll use the public queue 'backfill', so the Host nickname will be FSU backfill. The remote host name will be 'hpc-login.rcc.fsu.edu'. The maximum number of nodes and maximum number of processors depend on the limitations of the queue used. As a default, 4 compute nodes with 32 processors are used in order to verify that the configuration works across multiple processors and nodes. For a generic queue, the final options should look similar to:
Host nickname: FSU backfill
Remote host name: hpc-login.rcc.fsu.edu
Host name aliases: hpc-*
(checked) Maximum nodes: 4
(checked) Maximum processors: 32
Path to VisIt installation: /gpfs/research/software/visit_new/parallel
Username: [Enter your FSU HPC username here]
(checked) Tunnel data connections through SSH
No other options should need to be modified. Under the 'Launch Profiles' tab, create a new profile and call it 'Parallel' with a timeout of 480. Under 'Parallel', use the following options:
(checked) Launch parallel engine
(checked) Parallel launch method: sbatch/srun
(checked) Partition/Pool/Queue: backfill
Default number of processors: 2
Again, no other options should need to be modified for this tab. Finally go to the 'Advanced' tab and check 'Use VisIt script to set up parallel environment'. From here, running VisIt on a queue should be properly configured. Try the example visualization in the next section to verify that everything is set up correctly.
Currently, GPU acceleration is not supported on RCC resources, but it should become available at a later time.
Example Visualization
To connect to the Spear node or an HPC queue and run an example:
- Go to File > Open File.
- Under Host, select "FSU Spear" or "FSU backfill" (whatever was used for "Host nickname") from the drop-down menu.
- Enter your HPC password when prompted. Make sure your username is also correct.
- For the path, navigate to the examples directory: /gpfs/research/software
- Under Files on the right, open "crotamine.pdb" to view a Crotamine molecule.
- Under "Open file as type:" you can either leave it as "Guess from file name/extension", or select ProteinDataBank from the drop down menu. Sometimes VisIt will not recognize a file type automatically, so the specific file type must be selected.
- If you've already set up the launch profiles, you'll be presented with the options of running the job using the Serial or Parallel Interactive versions (Spear) or just Parallel Queue (HPC). Choose one, but it may be a good idea to eventually test each option.
- Also note that if you choose the parallel option, the number of processors can be changed if you're running on Spear, or both the number of processors and nodes can be changed if you're running on an HPC queue.
- Now under "Plots" on the main window, select Add > Molecule > element.
- Finally, click "Draw" in the "Plots" section. (You may need to click the double arrows on the right side of the buttons to see "Draw" under the additional options.)
- The molecule should now be properly displayed in the window.
- NOTE: Running parallel jobs on Spear or the HPC may return a few error messages, which can be ignored.
- If you are not yet familiar with VisIt, this YouTube link has information on manipulating the image using some of VisIt's tools.
- To open another file, simply select "Open" under "Sources" on the main window, or File > Open like before. Note that only a few of the files in the example folder will work.
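Once the interactive example works, the same plot can be produced in batch with VisIt's scripted CLI. The sketch below writes a small VisIt CLI (Python) script and runs it headless; `visit -cli -nowin -s` and the `OpenDatabase`/`AddPlot`/`DrawPlots`/`SaveWindow` calls are standard VisIt CLI usage, but EXAMPLE_FILE is a placeholder path you should replace with the actual location of crotamine.pdb in the examples directory above:

```shell
#!/bin/sh
# Sketch: batch-render the crotamine example with VisIt's CLI.
# EXAMPLE_FILE is a placeholder; point it at crotamine.pdb under the
# examples directory shown in the steps above.
EXAMPLE_FILE="/gpfs/research/software/crotamine.pdb"

# Write the VisIt CLI script (plain Python driving VisIt's functions).
cat > plot_crotamine.py <<EOF
import sys
OpenDatabase("$EXAMPLE_FILE")
AddPlot("Molecule", "element")
DrawPlots()
SaveWindow()   # saves an image of the current plot
sys.exit(0)
EOF

# Run headless; this only works where VisIt is installed (e.g. on Spear).
if command -v visit >/dev/null 2>&1; then
    visit -cli -nowin -s plot_crotamine.py
else
    echo "visit not found on PATH; run this on Spear or the HPC" >&2
fi
```

This mirrors the GUI steps above (open the file, add a Molecule > element plot, draw), which makes it a convenient starting point for unattended rendering through the queue.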