UPDATED NOTICE: Longhorn will be decommissioned on Sunday, 16 March 2014, at which time the Longhorn compute nodes will be turned off and users will no longer be able to submit jobs. Users will be able to log into the system and access files, in read-only mode, in their $HOME and $SCRATCH directories to facilitate migration of their projects to Maverick. This limited access to Longhorn will continue through Friday, 18 April 2014, after which Longhorn will be fully decommissioned.
Beginning with the deployment of Maverick on March 3, 2014, Longhorn users may begin transferring all personal files and data from Longhorn to Maverick. All transfers must be completed by Friday, April 18, 2014. After this date, users' data cannot be guaranteed to be available or retrievable.
For more detailed information, please see:
Longhorn User Guide
System Overview
Longhorn (longhorn.tacc.utexas.edu), TACC's Dell XD Visualization Cluster, contains 2048 compute cores, 14.5 TB aggregate memory, and 512 GPUs. Longhorn has a QDR InfiniBand interconnect and an attached Lustre parallel file system. Longhorn's head and compute nodes are connected by 10GigE to Ranger's Lustre parallel file system, making it more convenient to work on datasets generated on Ranger. The following figure illustrates the architecture of Longhorn:
Longhorn has 256 compute nodes and 2 login nodes, with 240 nodes containing 48GB of RAM, 8 Intel Nehalem cores (@ 2.5 GHz), and 2 NVIDIA Quadro FX 5800 GPUs. Longhorn also has an additional 16 large-memory nodes containing 144GB of RAM, 8 Intel Nehalem cores (@ 2.5 GHz), and 2 NVIDIA Quadro FX 5800 GPUs.
File Systems
Longhorn has several different file systems with distinct storage characteristics. There are predefined directories in these file systems for you to store your data. Since these file systems are shared with others, they are managed by either a quota limit or a purge policy. Two local file systems are available: an NFS $HOME and a parallel Lustre $SCRATCH. The $HOME directory has a 4 GB quota. $SCRATCH is periodically purged, is not backed up, and has a very large 50 TB quota. All file systems also impose an inode limit, which restricts the number of files allowed.
- $HOME
  - At login, the system automatically sets the current working directory to your home directory.
  - Store your source code and build your executables here.
  - This directory has a quota limit of 4 GB.
  - The frontend nodes and any compute node can access this directory.
  - Use $HOME to reference your home directory in scripts.
  - Use cd to change to $HOME.
- $SCRATCH
  - This is NOT a local disk file system on each node.
  - This is a global Lustre file system for storing temporary files.
  - The quota on this system is 50 TB.
  - Files on this system may be purged when a file's access time exceeds 10 days.
  - If possible, have your job scripts use and store files directly in $WORK (to avoid moving files from $SCRATCH later, before they are purged).
  - Use $SCRATCH to reference this file system in scripts.
  - Use cds to change to $SCRATCH.
NOTE: TACC staff may delete files from scratch if the scratch file system becomes full, even if files are less than 10 days old. A full file system inhibits use of the file system for everyone. The use of programs or scripts to actively circumvent the file purge policy will not be tolerated.
To determine the amount of disk space used in a file system, cd to the directory of interest and execute the df -k . command, including the dot, which represents the current directory. Without the dot, all file systems are reported.
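For example, to check usage of the file system holding your $SCRATCH directory (a minimal illustration; the reported values will of course vary):

mymachine$ ssh userid@longhorn.tacc.utexas.edu
login1$ cd $SCRATCH
login1$ df -k .        # report usage for the file system containing the current directory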
System Access
- Commands issued on Longhorn's login node are preceded by a "login1$" shell prompt.
- Compute-node command-line examples are preceded by a "c203-112$" prompt.
- Commands issued from your own local machine are indicated by a "mymachine$" shell prompt.
Longhorn is accessed either using the secure-shell ssh program (primarily for batch-mode access, though it can also be used to initiate interactive VNC access) or via the Longhorn Internet portal.
SSH access
Unix-based systems, including Linux and Mac OS X, have an ssh client available by default; freely available clients also exist for Windows, where a popular choice is PuTTY.
To initiate an SSH connection to a Longhorn login node from a UNIX or Linux system with an SSH client already installed, execute the following command:
mymachine$ ssh userid@longhorn.tacc.utexas.edu
where userid is replaced with the Longhorn user name assigned to you during the allocation process. Note that this userid specification is only required if the user name on the local machine and the TACC machine differ.
Establishing Interactive Access Via VNC
Longhorn is intended to be used as an interactive system by remote users; this mode is implemented by using VNC to provide remote users with access to an interactive desktop running on one node of a set of allocated Longhorn compute nodes. To set this up, the user uses the batch-mode interface to start a job that:
- Allocates one or more Longhorn compute nodes;
- Starts vncserver on one of the nodes;
- Creates an ssh tunnel on the Longhorn login node that provides IP access to the compute node's vncserver via a unique port on the Longhorn login node.
Once this job is running, the user need only create a secure SSH socket connection between the remote system and the Longhorn login node, and then connect a vncviewer. Once the user has a desktop on the Longhorn compute node, serial and parallel applications can be run on that desktop using all the resources allocated to the initial job. This is illustrated in the following figure:
Note that all visualization and data analysis (VDA) jobs must be run on Longhorn compute nodes. No VDA applications should be run on the Longhorn login node (longhorn.tacc). VDA applications running on the login node may be terminated without notice, and repeated violations may result in your account being suspended. Please submit a consulting ticket at https://portal.tacc.utexas.edu/consulting with questions regarding this policy.
To launch an interactive, remotely accessible desktop on a Longhorn compute node:
- ssh to Longhorn:

mymachine$ ssh <username>@longhorn.tacc.utexas.edu

- If this is your first time connecting to Longhorn, you must run vncpasswd to create a password for your VNC servers. This should NOT be your login password! This mechanism only deters unauthorized connections; it is not fully secure, as only the first eight characters of the password are saved. All VNC connections are tunnelled through SSH for extra security, as described below.
- Launch a VNC desktop via SGE:
login1$ qsub [qsub options] /share/doc/sge/job.vnc
for instance, to specify a particular account and desktop size, use:
login1$ qsub -A TG-MyAcct /share/doc/sge/job.vnc -geometry 1440x900
Note that there are many more options available to the qsub command; these are documented below in the section on running batch jobs on Longhorn. This script can be copied to your home directory and modified, particularly if you would like to add your account information or change the default runtime of your job (currently limited to 24 hours). You can also change the job runtime using the qsub command-line option "-l h_rt=<hours:minutes:seconds>". To request a specific node, use the command-line option "-l h=<node>". For example, to request visbig, use "-l h=ivisbig". Note that you must put a leading 'i' before the node name.

The default window manager is twm, a spartan window manager which reduces connection overhead. Gnome is available if your connection speed is sufficient to support it. To use Gnome, open the file ~/.vnc/xstartup and replace "twm" with "gnome-session".
- Once the job launches, connection info will be written to a vncserver.out file in your home directory. The very first time you run the VNC script, this file will not exist, so you can create it with the touch command. You can then track when your connection information is written out to the file using the tail -f command:

login1$ touch ~/vncserver.out
login1$ tail -f ~/vncserver.out
- When the qsub job begins, the connection information will be output. This includes the VNC port on Longhorn for your session. However, for security, TACC requires that you tunnel your VNC session through SSH. You can set up a tunnel on a unix command line or with a GUI-based SSH client. From your local machine (NOT from the Longhorn login node), on the unix command line, forward a local port to the Longhorn port specified in the vncserver.out file using the command:

mymachine$ ssh -f -N -L yyyy:longhorn.tacc.utexas.edu:xxxx <username>@longhorn.tacc.utexas.edu

where xxxx is the port number from the connection information in vncserver.out, and yyyy is a port you have selected on your local machine (generally, xxxx will be a reasonable choice). The '-f' puts the ssh command into the background after connecting; the '-N' instructs SSH not to execute a remote command, only to forward ports; and the '-L' specifies the port forwarding. In a GUI-based SSH client, find the menu where tunnels can be specified, specify the local and remote ports as required, then launch the SSH connection to Longhorn.
- Once the SSH tunnel has been established, use a VNC client to connect to the local port you created, which will then be tunnelled to your VNC server on Longhorn. Connect to localhost::yyyy, where yyyy is the local port you used for your tunnel. Some VNC clients instead accept a display number (e.g. localhost:51 for port 5951). We recommend the TigerVNC client, a platform-independent client/server application.
- After connecting your VNC client to your VNC server on Longhorn, you may use visualization applications directly on the remote desktop without launching other SGE jobs. Applications that use hardware-assisted OpenGL library calls must be launched via a wrapper that provides access to the hardware (e.g. vglrun or tacc_xrun), as described below.
) as described below. - When you are finished with your VNC session, kill the session by typing
exit
in the black xterm window titled*** Exit this window to kill your VNC server ***
Note that merely closing your VNC client will NOT kill your VNC server job on Longhorn, and you will continue to be billed for time usage until the job ends. If you close your VNC client, you can reconnect to your VNC server at any time until the server job ends.
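Putting these steps together, a typical session from a local Unix machine might look like the following sketch. The username, the account TG-MyAcct, the desktop geometry, and port 5951 are illustrative only; use the port reported in your own vncserver.out, and note that vncviewer invocation varies by client:

mymachine$ ssh username@longhorn.tacc.utexas.edu            # log into the login node
login1$ vncpasswd                                           # first time only: set a VNC password
login1$ touch ~/vncserver.out
login1$ qsub -A TG-MyAcct /share/doc/sge/job.vnc -geometry 1440x900
login1$ tail -f ~/vncserver.out                             # wait here for the VNC port number (xxxx)

# then, in a second terminal on your local machine:
mymachine$ ssh -f -N -L 5951:longhorn.tacc.utexas.edu:5951 username@longhorn.tacc.utexas.edu
mymachine$ vncviewer localhost::5951                        # or connect your preferred VNC client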
The Longhorn Internet Portal
The Longhorn Portal is available at https://portal.longhorn.tacc.utexas.edu. It provides a very simple mechanism for running interactive sessions on Longhorn. It presents two choices: creating a VNC desktop (essentially wrapping the procedure above in a much simplified manner, though at the cost of some flexibility) and running EnVision visualization sessions.
The following image shows the Jobs tab of the Longhorn portal:
Pulldowns on this page enable a user to choose either a Longhorn VNC desktop or an EnVision session. When VNC is selected, the user is presented with pulldowns for setting the various parameters of a VNC session, including the wayness, number of nodes, and desktop dimensions. The portal will then submit a VNC job to the Longhorn normal queue. When the job starts, a VNC viewer will be established in the portal; alternatively, the Jobs tab will present a URL and port number that can be used to connect an external VNC viewer. Note that the portal provides access to only some of the options available through the qsub interface, and the previous method of creating a VNC session through the qsub interface will be necessary in some cases.

Other parts of the Longhorn Portal Jobs page show the current usage of Longhorn, providing an easy way to check the status of jobs. All jobs submitted to Longhorn, whether via qsub or via the Portal, and whether running or waiting in the queues, will appear in the status information shown.
Computing Environment
Modules
Longhorn provides a set of visualization-specific modules. Visualization modules available on Longhorn include:
- blender: Animation and video-stream editing application
- ensight: Access to CEI's EnSight visualization application
- ffmpeg: Toolkit for processing audio and video data
- glew: GL Extension Wrangler library
- idl: Access to the IDL visualization application
- mesa: Access to a software implementation of OpenGL
- mplayer: Access to a movie player
- paraview: Access to the ParaView visualization application
- qt: Application and toolkit for building GUIs
- sdl: Simple application development toolkit
- silo: Access to the Silo visualization application and associated tools and libraries
- vapor: Access to NCAR's Vapor visualization application
- visit: Access to the VisIt visualization application
- vtk: Toolkit for scientific and information visualization
- XSEDE: Tools supporting the XSEDE environment, including Globus
Application Development
In general, application development on Longhorn is identical to that on Ranger, including the availability and usage of compilers, the parallel development libraries (e.g. MPI and OpenMP), tuning and debugging.
Additional visualization-oriented libraries available on Longhorn are made accessible through the modules system and are listed above. Library and include-file search path environment variables are modified when modules are loaded. For detailed information on the effect of loading a module, use:
login1$ module help modulename
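Beyond module help, the standard module commands can be used to inspect and manage your environment. A brief illustration, using module names from the list above:

login1$ module avail            # list the modules available on Longhorn
login1$ module load vtk silo    # add VTK and Silo to your environment
login1$ module list             # show the currently loaded modules
login1$ module unload silo      # remove a module from your environment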
Running your applications
Jobs are run on Longhorn using one of two methods: a batch job can be submitted from the Longhorn login node, or applications can be run interactively from a remotely accessed VNC desktop running on an allocated Longhorn compute node.
Running Batch Jobs on Longhorn
Batch jobs are run on Longhorn via the SGE job scheduler using the qsub command. Use this command to submit a batch job from the Longhorn login node:

login1$ qsub [options] job_script

where job_script is the name of a UNIX-format text file containing job script commands. This file should contain both shell commands and special statements that include qsub options and resource specifications. Some of the most common options are described in the following table. Details on using these options and examples of job scripts follow.
Common qsub Options

| Option | Argument | Function |
| --- | --- | --- |
| -q | <queue_name> | Submits to queue designated by <queue_name>. |
| -pe | <TpN>way <NoN x 8> | Executes the job using the specified number of tasks (cores to use) per node ("wayness") and the number of nodes times 8 (total number of cores). (See example script below.) |
| -N | <job_name> | Names the job <job_name>. |
| -M | <email_address> | Specify the email address to use for notifications. |
| -m | {b|e|a|s|n} | Specify when user notifications are to be sent. |
| -V | | Use current environment settings in batch job. |
| -cwd | | Use current directory as the job's working directory. |
| -j | y | Join stderr output with the file specified by the -o option. (Do not use with the -e option.) |
| -o | <output_file> | Direct job output to <output_file>. |
| -e | <error_file> | Direct job error to <error_file>. (Do not also use the -j option.) |
| -A | <project_account_name> | Charges run to <project_account_name>. Used only for multi-project logins. Account names and reports are displayed at login. |
| -l | <resource>=<value> | Specify resource limits. (See the qsub man page.) |
Options can be passed to qsub on the command line or specified in the job script file. The latter approach is preferable: it is easier to store commonly used qsub options in a script file that will be reused several times than to retype them at every batch request. In addition, it is easier to maintain a consistent batch environment across runs if the same options are stored in a reusable job script.
Batch scripts contain two types of statements: special comments and shell commands. Special comment lines begin with #$ and are followed by qsub options. The SGE shell_start_mode has been set to unix_behavior, which means the UNIX shell commands are interpreted by the shell specified on the first line after the #! sentinel; otherwise the Bourne shell (/bin/sh) is used. The job script below requests an MPI job with 32 cores and 1.5 hours of run time:
#!/bin/bash
#$ -V                       # Inherit the submission environment
#$ -cwd                     # Start job in submission directory
#$ -N myMPI                 # Job Name
#$ -j y                     # Combine stderr and stdout
#$ -o $JOB_NAME.o$JOB_ID    # Name of the output file (eg. myMPI.oJobID)
#$ -pe 8way 32              # Requests 8 tasks/node, 32 cores total
#$ -q normal                # Queue name "normal"
#$ -l h_rt=01:30:00         # Run time (hh:mm:ss) - 1.5 hours
#$ -M                       # Use email notification address
#$ -m be                    # Email at Begin and End of job
set -x                      # Echo commands, use "set echo" with csh
ibrun ./a.out               # Run the MPI executable named "a.out"
If you don't want stderr and stdout directed to the same file, replace the -j option line with a -e option to name a separate output file for stderr (but don't use both). By default, stdout and stderr are sent to the files out.o and err.o, respectively.
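For example, to send stdout and stderr to separate, explicitly named files, the -j line could be replaced with lines such as these (the file names are illustrative):

#$ -o myMPI.o$JOB_ID    # stdout
#$ -e myMPI.e$JOB_ID    # stderr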
Example job scripts are available online in /share/doc/sge. They include details for launching large jobs, running multiple executables with different MPI stacks, executing hybrid applications, and other operations.
The following tables list the valid queues and project types for qsub job submissions:
Longhorn SGE Batch Queues

| Queue Name | Max Runtime | Max Cores | Node Pool |
| --- | --- | --- | --- |
| normal | 6 hrs | 128 | All nodes |
| long | 24 hrs | 128 | All nodes |
| largemem | 8 hrs | 128 | 16 large-memory nodes |
| development | 1 hr | 32 | 8 nodes |
| request | --- | --- | special requests |
Longhorn Project Types

| Type | Purpose | Special Environment Modifications |
| --- | --- | --- |
| vis | Visualization jobs | |
| data | Data Analysis jobs | |
| gpgpu | GPGPU jobs | Disables X server |
| hpc | HPC jobs | |
The queue is selected via the -q flag, and the project type is selected with the -P flag. For example, to submit a job script to the normal queue with project type vis:

login1$ qsub -q normal -P vis job_script
Job Wayness
When launching your application with ibrun, the total number of processes available to your application is controlled by the -pe argument given to qsub. The wayness argument takes the form:

-pe <PpN>way <NoN x 8>

where <PpN> is the number of processes per node ("wayness") and <NoN x 8> is the number of nodes to allocate times 8 (that is, the total number of cores requested, at 8 cores per node). Supported <PpN> values are 1, 2, 4, 6 and 8.

Please note that the maximum wayness per node is 8 (not 16). Please adjust scripts and application variables accordingly.
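For example, to run 16 MPI tasks as 4 tasks per node on 4 nodes (at 8 cores per node), the request in a job script would look like:

#$ -pe 4way 32    # 4 tasks per node; 32/8 = 4 nodes; 16 MPI tasks total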
Running Interactive Applications on Longhorn
As discussed above, Longhorn is designed for interactive use through a remotely accessible VNC desktop. Several specialized tools facilitate using high-performance graphics applications on Longhorn.
- ibrun: This wrapper enables parallel MPI jobs to be started from the VNC desktop. ibrun uses information from the user's environment to start MPI jobs across the user's set of Longhorn compute nodes. This information is determined by the initial SGE job submission, and includes the location of the hostfile created by SGE (found in the PE_HOSTFILE environment variable). To run an MPI-parallel job from the VNC desktop, run:

c203-112$ ibrun [ibrun options] application application-args

For more information on ibrun, run "ibrun --help" on either the Longhorn login node or from a window on a Longhorn VNC desktop.

- vglrun: VNC does not support OpenGL applications. vglrun is a wrapper for OpenGL applications that redirects rendering instructions to graphics hardware and then copies the results to destination windows on the desktop. To run an application using vglrun:

c203-112$ vglrun [vglrun options] application application-args

For more information about vglrun, see VirtualGL.

- tacc_vglrun: Some parallel visualization back-end tasks create visible windows on the display indicated by their DISPLAY environment variable. When run under vglrun (e.g. ibrun vglrun application application-args), all participating tasks will receive a pointer to the VNC desktop and will cause windows to appear on it, with a major impact on performance and usability. Instead, tacc_vglrun ensures that only the root process of the parallel application receives a DISPLAY environment variable that points to the VNC desktop, while the remaining processes receive pointers to invisible desktops running on the local hardware graphics cards. Note that the available graphics cards are assigned to tasks in round-robin order. To run an application using tacc_vglrun:

c203-112$ tacc_vglrun application application-args

- tacc_xrun: Sometimes only the assembled rendering results should be shown on the VNC desktop, not the individual windows of the parallel processes. When this is the case, use tacc_xrun to direct all the tasks to use invisible desktops running on the local hardware graphics cards. Again, the available graphics cards are assigned to tasks in round-robin order. To run an application using tacc_xrun:

c203-112$ tacc_xrun application application-args
Tools
TACC supports several widely used visualization and GPU programming tools on Longhorn, including CUDA, OpenCL, parallel VisIt, parallel ParaView, and IDL.
Using CUDA on Longhorn
NVIDIA's CUDA compiler and libraries are accessed by loading the CUDA module:
login1$ module load cuda
This puts nvcc in your $PATH and the CUDA libraries in your $LD_LIBRARY_PATH. Applications should be compiled on the Longhorn login nodes, but these must be run by submitting an SGE job to the compute nodes, both in accordance with TACC user policies and because the login nodes have no GPUs. The CUDA module should be loaded within your job script to ensure access to the proper libraries when your program runs.
Longhorn's GPUs are compute capability 1.3 devices. When compiling your code, make sure to specify this level of capability with:
nvcc -arch=compute_13 -code=sm_13
For further information on the CUDA compiler, please see: $TACC_CUDA_DIR/doc/nvcc.pdf.

For more information about using CUDA, please see: $TACC_CUDA_DIR/doc/CUDA_C_Programming_Guide.pdf.

For the complete CUDA API, please see: $TACC_CUDA_DIR/doc/CUDA_Toolkit_Reference_Manual.pdf.
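As a sketch of how the pieces fit together, a minimal SGE job script for running a compiled CUDA executable on a single compute node might look like the following. The executable name ./mycuda, the job name, and the runtime are placeholders; the queue and gpgpu project type follow the tables above.

#!/bin/bash
#$ -V                       # Inherit the submission environment
#$ -cwd                     # Start in the submission directory
#$ -N mycuda                # Job name (placeholder)
#$ -j y                     # Combine stderr and stdout
#$ -o $JOB_NAME.o$JOB_ID    # Output file name
#$ -pe 1way 8               # A single node (8 cores), one task
#$ -q normal                # Queue (see the queue table above)
#$ -P gpgpu                 # GPGPU project type (see the project-type table above)
#$ -l h_rt=00:30:00         # Run time: 30 minutes (placeholder)
module load cuda            # Make the CUDA libraries available at run time
./mycuda                    # Run the compiled CUDA executable (placeholder name)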
Using the CUDA SDK on Longhorn
The NVIDIA CUDA SDK can be accessed by loading the CUDA SDK module:
login1$ module load cuda_SDK
This defines the environment variable $TACC_CUDASDK_DIR, which can be used to access the libraries and executables in the CUDA SDK.
Using multiple GPUs in CUDA
CUDA contains functions to query the number of devices connected to each host, and to select among devices. CUDA commands are sent to the current device, which is GPU 0 by default. To query the number of available devices, use the function:
int devices;
cudaGetDeviceCount( &devices );
To set a particular device, use the function:
int device = 0;
cudaSetDevice( device );
Remember that any calls after cudaSetDevice() typically pertain only to the device that was set. Please see the CUDA C Programming Guide and Toolkit Reference Manual for more details. For a multi-GPU CUDA example, please see the code at: $TACC_CUDASDK_DIR/C/src/simpleMultiGPU/.
Debugging CUDA kernels
The NVIDIA CUDA debugger, cuda-gdb, is included in the CUDA module. Applications must be debugged from within a job, using either the idev module or a VNC session. Please see the relevant sections for more information on idev and launching a VNC session. For more information on the CUDA debugger, see: $TACC_CUDA_DIR/doc/cuda-gdb.pdf.
Using OpenCL on Longhorn
Longhorn has the NVIDIA implementation of the OpenCL v. 1.0 standard that is included in the NVIDIA CUDA SDK. To access it, first load both the CUDA and the CUDA SDK modules:
login1$ module load cuda cuda_SDK
OpenCL is contained within the $TACC_CUDASDK_DIR/OpenCL directory. When compiling, you should use the following include directory on the compile line:
login1$ g++ -I${TACC_CUDASDK_DIR}/OpenCL/common/inc
If you use the NVIDIA OpenCL utilities, also add the following directory and libraries on your link line:
login1$ g++ -L${TACC_CUDASDK_DIR}/OpenCL/common/lib -loclUtil_x86_64
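Putting the include and link settings together, a complete compile line might look like the following sketch; my_ocl_app.cpp is a placeholder source file, and -lOpenCL links the OpenCL runtime library supplied with the NVIDIA driver:

login1$ g++ -I${TACC_CUDASDK_DIR}/OpenCL/common/inc my_ocl_app.cpp \
            -L${TACC_CUDASDK_DIR}/OpenCL/common/lib -loclUtil_x86_64 -lOpenCL \
            -o my_ocl_app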
For more information on OpenCL, please see the OpenCL specification at: $TACC_CUDASDK_DIR/OpenCL/doc/Khronos_OpenCL_Specification.pdf.
Using multiple GPUs in OpenCL
OpenCL contains functions to query the number of GPU devices connected to each host, and to select among devices. OpenCL commands are sent to the specified device. To query the number of available devices, use the following code:
cl_platform_id platform;
cl_device_id* devices;
cl_uint device_count;

oclGetPlatformID(&platform);                       /* SDK utility: get the NVIDIA platform */
clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 0, NULL, &device_count);
devices = (cl_device_id*)malloc(device_count * sizeof(cl_device_id));
clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, device_count, devices, NULL);
In OpenCL, multiple devices can be a part of a single context. To create a context with all available GPUs and to create a command queue for each device, use the above code snippet to detect the GPUs, and the following to create the context and command queues:
cl_context context;
cl_device_id device;
cl_command_queue* command_queues;
int i;

context = clCreateContext(0, device_count, devices, NULL, NULL, NULL);
command_queues = (cl_command_queue*)malloc(device_count * sizeof(cl_command_queue));
for (i = 0; i < device_count; ++i) {
    device = oclGetDev(context, i);               /* SDK utility: i-th device in the context */
    command_queues[i] = clCreateCommandQueue(context, device, 0, NULL);
}
For a multi-GPU OpenCL example, please see the code at: $TACC_CUDASDK_DIR/OpenCL/src/oclSimpleMultiGPU/
.
Using the NVIDIA Compute Visual Profiler
The NVIDIA Compute Visual Profiler, computeprof, can be used to profile both CUDA programs and OpenCL programs that are run using the NVIDIA OpenCL implementation. Since the profiler is X based, it must be run either within a VNC session or by SSH-ing into an allocated compute node with X forwarding enabled. The profiler executable path should be set by the CUDA module. If the computeprof executable cannot be located, define the following environment variables:

login1$ export PATH=$TACC_CUDA_DIR/computeprof/bin:$PATH
login1$ export LD_LIBRARY_PATH=$TACC_CUDA_DIR/computeprof/bin:$LD_LIBRARY_PATH
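For example, from within a VNC desktop session on a compute node, the profiler could be launched along these lines:

c203-112$ module load cuda
c203-112$ computeprof &     # launch the X-based profiler GUI in the background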
Running Parallel VisIt on Longhorn
After connecting to a VNC server on Longhorn, as described above, do the following:
- VisIt was compiled under the Intel v11 compiler and both the mvapich2 v1.4 and the openmpi v1.3 MPI stacks.
- Load the VisIt module:

c203-112$ module load visit

- Launch VisIt:

c203-112$ vglrun visit
When VisIt first loads a dataset, it will present a dialog allowing the user to select either a serial or parallel engine. Select the parallel engine. Note that this dialog will also present options for the number of processes to start and the number of nodes to use; these options are actually ignored in favor of the options specified when the VNC server job was started.
Preparing data for Parallel VisIt
In order to take advantage of parallel processing, VisIt input data must be partitioned and distributed across the cooperating processes. This requires that the input data be explicitly partitioned into independent subsets at the time it is input to VisIt. VisIt supports SILO data (see SILO), which incorporates a parallel, partitioned representation. Otherwise, VisIt supports a metadata file (with a .visit extension) that lists multiple data files of any supported format that are to be associated into a single logical dataset. In addition, VisIt supports a "brick of values" format, also using the .visit metadata file, which enables single files containing data defined on rectilinear grids to be partitioned and imported in parallel. Note that VisIt does not support the VTK parallel XML formats (.pvti, .pvtu, .pvtr, .pvtp, and .pvts). For more information on importing data into VisIt, see Getting Data Into VisIt; though this refers to VisIt version 1.5.4, it appears to be the most current available.
For more information on VisIt, see https://wci.llnl.gov/codes/visit/home.html
Running Parallel ParaView on Longhorn
After connecting to a VNC server on Longhorn, as described above, do the following:
- Load the Python and ParaView modules:

c203-112$ module load python paraview
- Launch ParaView:

c203-112$ vglrun paraview [paraview client options]
- Click the "Connect" button, or select File -> Connect
- If this is the first time you've used ParaView in parallel (or failed to save your connection configuration in your prior runs):
- Select "Add Server"
- Enter a "Name", e.g. "ibrun"
- Click "Configure"
- For "Startup Type" and enter the command:
c203-112$ ibrun tacc_xrun pvserver [paraview server options]
and click "Save"
- Select the name of your server configuration, and click "Connect"
You will see the parallel servers being spawned and the connection established in the ParaView Output Messages window.
Preparing data for Parallel ParaView
In order to take advantage of parallel processing, ParaView data must be partitioned and distributed across the cooperating processes. While ParaView will import unpartitioned data and then partition and distribute it, best performance (by far) is attained when the input data is explicitly partitioned into independent subsets at the time it is loaded, enabling ParaView to import data in parallel. ParaView supports SILO data (see SILO), which incorporates a parallel, partitioned representation, as well as a comprehensive set of parallel XML formats, which utilize a metadata file to associate partitions found in separate files into a single logical dataset. In addition, ParaView supports a "brick of values" format enabling single files containing data defined on rectilinear grids to be partitioned and imported in parallel. This is not done with a metadata file; rather, the file is described to ParaView using a dialog that is presented when a file with a .raw extension is imported (this importer is also among the options presented when an unrecognized file type is imported). For more information on ParaView file formats, see VTK File Formats.
For more information on ParaView, see www.paraview.org
Running IDL on Longhorn
To run IDL interactively in a VNC session, connect to a VNC server on Longhorn as described above, then do the following:
- If the vis module is not yet loaded, you must load it: module load vis
- Load the IDL module: module load idl
- Launch IDL:

c203-112$ idl

or launch the IDL virtual machine:

c203-112$ idl -vm
If you are running IDL in scripted form, without interaction, simply submit an SGE job that loads IDL and runs your script.
If you need to run IDL interactively from an xterm on your local machine, outside of a VNC session, you will need to run an SGE job with the vis project type to allocate a Longhorn compute node. A vncserver job is an easy way to do this, as documented above in "Establishing Interactive Access Via VNC". The output, written by default to ~/vncserver.out, will include the name of the node that has been allocated to you by SGE (search for "running on node"). Note that this will start a vncserver process on the compute node, which you can safely ignore. Alternatively, you can avoid running the vncserver process by qsub'ing your own SGE job script containing two commands:
hostname
sleep n
where n is the number of seconds you wish to allocate the node for. This must be less than or equal to the time specified in the -l h_rt=hh:mm:ss SGE argument. See the above section titled Running Batch Jobs on Longhorn for more information on submitting jobs via qsub.
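A minimal job script along these lines might look like the following sketch (the job name, runtime, and sleep duration are placeholders; the sleep duration must not exceed the requested h_rt, expressed in seconds):

#!/bin/bash
#$ -V                       # Inherit the submission environment
#$ -cwd
#$ -N holdnode              # Job name (placeholder)
#$ -j y
#$ -o $JOB_NAME.o$JOB_ID    # The allocated node name will appear in this file
#$ -pe 1way 8               # A single node
#$ -q normal                # Queue (see the queue table above)
#$ -l h_rt=02:00:00         # Keep the node for up to 2 hours (placeholder)
hostname                    # Record the allocated node's name
sleep 7200                  # Hold the node for the remainder of the runtime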
Once you have the name of the allocated node, you can ssh to it through the login node. From an X terminal window on your local machine:

mymachine$ ssh -Y longhorn.tacc.utexas.edu

This will result in a command prompt on the Longhorn login node. From there, ssh to the compute node:
login1$ ssh -Y cxxx-yyy
This will result in a command prompt on the compute node. Commands that create X windows from that command prompt will create them on your local screen. Note that graphics programs run from this command prompt will be significantly slower than when run through a VNC session.
Last update: February 27, 2013