TACC Longhorn User Guide
Last update: June 3, 2020

Status Updates and Notices

  • Longhorn is in early user period and not allocable at this time.
  • The Stockyard/$WORK filesystem is not yet available. Early users must run their jobs out of $SCRATCH.
  • You may now subscribe to Longhorn User News. Stay up to date on Longhorn's status, scheduled maintenance, and other notifications.
  • Longhorn's production queue limits are subject to change at any time.
  • All users: read the Good Citizenship section. Longhorn is a shared resource and your actions can impact other users.

Introduction to Longhorn

Longhorn is a TACC resource built in partnership with IBM to support GPU-accelerated workloads. The power of this system is in its multiple GPUs per node, and it is intended to support sophisticated workloads that require high GPU density and little CPU compute. Longhorn will support double-precision machine learning and deep learning workloads that can be accelerated by GPU-powered frameworks, as well as general purpose GPU calculations. Longhorn is also part of the GPU subsystem of one of TACC's flagship supercomputers, Frontera, funded by the National Science Foundation (NSF) through award #1818253, Computing for the Endless Frontier.

TACC's Longhorn Cluster
Figure 1. TACC's Longhorn System

Quickstart for Experienced Users

Experienced HPC/TACC users will be very familiar with many of the topics presented in this guide. Here we'll highlight some sections for a quick start on Longhorn.

  • Log into the TACC User Portal to confirm that you've been added to a Longhorn allocation. Then, connect via SSH to longhorn.tacc.utexas.edu.
  • Review the TACC info box (taccinfo) displayed at login for your allocation availability and SU balances.
  • Read the Good Citizenship section. Longhorn is a shared resource and this section covers practices and etiquette to keep your account in good standing and keep Longhorn's systems running smoothly for all users.
  • Consult the Longhorn File Systems and Longhorn Production Queues tables. These are nearly identical in structure to those on other TACC systems, but there are a few minor differences you will want to note.
  • Copy and modify any of the Sample Job Scripts for your own use. These scripts also show how to modify job scripts you are bringing over from other TACC systems so that they run efficiently on Longhorn.
  • Review the default modules with "module list". Make any changes needed for your code.
  • Start small. Run any jobs from other systems on a smaller scale in order to test the performance of your code on Longhorn. You may find your code needs to be altered or recompiled in order to perform well and at scale on the new system.
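
Putting the steps above together, a first session on Longhorn might look like the following sketch (taccuserid stands in for your own TACC username):

localhost$ ssh taccuserid@longhorn.tacc.utexas.edu   # connect to a login node
login1$ /usr/local/etc/taccinfo                      # review allocation balances and disk quotas
login1$ module list                                  # review the default modules
login1$ cd $SCRATCH                                  # early users run jobs out of $SCRATCH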

IBM Power System Specifications

Longhorn comprises 108 IBM Power System AC922 nodes distributed across nine racks, plus an IBM Elastic Storage System (hosting the home and scratch filesystems) as a standalone 10th rack. Four nodes are reserved as login and management nodes, leaving 104 nodes for the compute system.

GPU Nodes

Longhorn hosts 96 V100 nodes, each with 4 GPUs. Access these nodes via the v100 queue.

Model:  IBM Power System AC922 (8335-GTH)
Processor:  IBM Power 9
Total processors per node:  2
Total cores per processor:  20
Total cores per node:  40
Hardware threads per core:  4
Hardware threads per node:  160
Clock rate:  2.3GHz
Clock rate (turbo):  3.8GHz
RAM:  256GB
Local storage:  ~900 GB (/tmp)
GPUs:  4x NVIDIA Tesla V100
GPU RAM:  4x 16GB (64 GB aggregate)

GPU Large Memory Nodes

Longhorn hosts 8 large memory V100 nodes, each with 4 GPUs. Access these nodes via the v100-lm queue.

Model:  IBM Power System AC922 (8335-GTH)
Processor:  IBM Power 9
Total processors per node:  2
Total cores per processor:  20
Total cores per node:  40
Hardware threads per core:  4
Hardware threads per node:  160
Clock rate:  2.3GHz
Clock rate (turbo):  3.8GHz
RAM:  512GB
Local storage:  ~900 GB (/tmp)
GPUs:  4x NVIDIA Tesla V100
GPU RAM:  4x 16GB (64 GB aggregate)

Login Nodes

Longhorn hosts two login nodes:

  • Dual socket
  • IBM Power 9 processors @ 2.3 GHz and 20 cores/socket (40 cores/node)
  • 256 GB DDR4 RAM (16 x 16 GB DIMMS @ 2666 MHz)
  • Hyperthreading enabled

Network

Stockyard, the global shared filesystem accessible via $WORK, is not yet available on Longhorn.

Longhorn is attached to $HOME and $SCRATCH filesystems over a fast network.

  • Mellanox EDR Infiniband (MT28800 Family ConnectX-5 Ex adapter)
  • Spine-and-leaf interconnect
  • NetXtreme BCM5719 Gigabit Ethernet 1Gbps adapter

For better performance and more efficient I/O, we recommend staging your data to the $SCRATCH filesystem prior to submitting compute jobs.

Accessing the System

Only users with allocations on Longhorn are permitted to log on to Longhorn. A TACC User Portal account does not enable you to log on to any TACC resources unless you have an active allocation on that resource.

Log on with Secure Shell (SSH)

The "ssh" command (SSH protocol) is the standard way to connect to Longhorn. SSH also includes support for the file transfer utilities scp and sftp. Wikipedia is a good source of information on SSH. SSH is available within Linux and from the terminal app in the Mac OS. If you are using Windows, you will need an SSH client that supports the SSH-2 protocol: e.g. Bitvise, OpenSSH, PuTTY, or SecureCRT. Initiate a session using the ssh command or the equivalent; from the Linux command line the launch command looks like this:

localhost$ ssh taccuserid@longhorn.tacc.utexas.edu

The above command will alternate connections across both available login nodes, login1-login2, and route your connection to one of them. To connect to a specific login node, use its full domain name:

localhost$ ssh taccuserid@login2.longhorn.tacc.utexas.edu

To connect with X11 support on Longhorn (usually required for applications with graphical user interfaces), use the "-X" or "-Y" switch:

localhost$ ssh -X taccuserid@longhorn.tacc.utexas.edu

To report a connection problem, execute the ssh command with the "-vvv" option and include the verbose output when submitting a help ticket.
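
For example, a hedged sketch of capturing that verbose output to a file (the debug messages go to standard error):

localhost$ ssh -vvv taccuserid@longhorn.tacc.utexas.edu 2> ssh-debug.txt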

Do not run the "ssh-keygen" command on Longhorn. This command will create and configure a key pair that will interfere with the execution of job scripts in the batch system. If you do this by mistake, you can recover by renaming or deleting the .ssh directory located in your home directory; the system will automatically generate a new one for you when you next log into Longhorn.

  1. execute "mv .ssh dot.ssh.old"
  2. log out
  3. log into Longhorn again

After logging in again the system will generate a properly configured key pair.
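
In command form, the recovery amounts to something like this minimal sketch:

login1$ mv ~/.ssh ~/dot.ssh.old    # sideline the problematic key pair
login1$ logout                     # then log in again; Longhorn regenerates a proper ~/.ssh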

Regardless of your research workflow, you'll need to master Linux basics and a Linux-based text editor (e.g. emacs, nano, gedit, or vi/vim) to use the system properly. However, this user guide does not address these topics. There are numerous resources in a variety of formats that are available to help you learn Linux, including some listed on the TACC and training sites. If you encounter a term or concept in this user guide that is new to you, a quick internet search should help you resolve the matter quickly.

Check your Allocation Status

You must be added to a Longhorn allocation in order to have access to Longhorn. The ability to log on to the TACC User Portal does NOT signify access to Longhorn or any TACC resource. You may monitor your allocations on the TACC User Portal. Please consult the allocations documentation for more information.

Multi-Factor Authentication

Access to all TACC systems now requires Multi-Factor Authentication (MFA). You can create an MFA pairing on the TACC User Portal. After login on the portal, go to your account profile (Home->Account Profile), then click the "Manage" button under "Multi-Factor Authentication" on the right side of the page. See Multi-Factor Authentication at TACC for further information.

Password Management

Use your TACC User Portal password for direct logins to TACC resources. You can change your TACC password through the TACC User Portal. Log into the portal, then select "Change Password" under the "HOME" tab. If you've forgotten your password, go to the TACC User Portal home page and select "Password Reset" under the Home tab.

Account-Level Diagnostics

TACC's sanitytool module loads an account-level diagnostic package that detects common account-level issues and often walks you through the fixes. You should certainly run the package's sanitycheck utility when you encounter unexpected behavior. You may also want to run sanitycheck periodically as preventive maintenance. To run sanitytool's account-level diagnostics, execute the following commands:

login1$ module load sanitytool
login1$ sanitycheck

Execute module help sanitytool for more information.

Linux Shell

The default login shell for your user account is Bash. To determine your current login shell, execute:

$ echo $SHELL

If you'd like to change your login shell to csh, sh, tcsh, or zsh, submit a ticket through the TACC portal. The chsh ("change shell") command will not work on TACC systems.

When you start a shell on Longhorn, system-level startup files initialize your account-level environment and aliases before the system sources your own user-level startup scripts. You can use these startup scripts to customize your shell by defining your own environment variables, aliases, and functions. These scripts (e.g. .profile and .bashrc) are generally hidden files: so-called dotfiles that begin with a period, visible when you execute: ls -a.

Before editing your startup files, however, it's worth taking the time to understand the basics of how your shell manages startup. Bash startup behavior is very different from the simpler csh behavior, for example. The Bash startup sequence varies depending on how you start the shell (e.g. using ssh to open a login shell, executing the bash command to begin an interactive shell, or launching a script to start a non-interactive shell). Moreover, Bash does not automatically source your .bashrc when you start a login shell by using ssh to connect to a node. Unless you have specialized needs, however, this is undoubtedly more flexibility than you want: you will probably want your environment to be the same regardless of how you start the shell. The easiest way to achieve this is to execute source ~/.bashrc from your .profile, then put all your customizations in .bashrc. The system-generated default startup scripts demonstrate this approach. We recommend that you use these default files as templates.
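
A minimal sketch of that approach (the system-generated default .profile is more complete; treat this as illustrative only):

# ~/.profile
if [ -f "$HOME/.bashrc" ]; then
    source "$HOME/.bashrc"    # keep login and non-login shells consistent
fi

# ~/.bashrc
# ...put your aliases, environment variables, and module commands here...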

For more information see the Bash Users' Startup Files: Quick Start Guide and other online resources that explain shell startup. To recover the originals that appear in a newly created account, execute /usr/local/startup_scripts/install_default_scripts.

Environment Variables

Your environment includes the environment variables and functions defined in your current shell: those initialized by the system, those you define or modify in your account-level startup scripts, and those defined or modified by the modules that you load to configure your software environment. Be sure to distinguish between an environment variable's name (e.g. HISTSIZE) and its value ($HISTSIZE). Understand as well that a sub-shell (e.g. a script) inherits environment variables from its parent, but does not inherit ordinary shell variables or aliases. Use export (in Bash) or setenv (in csh) to define an environment variable.
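
For example (PROJDIR is a hypothetical variable name):

$ export PROJDIR=$SCRATCH/myproject    # Bash: define and export in one step
% setenv PROJDIR $SCRATCH/myproject    # csh/tcsh equivalent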

Execute the env command to see the environment variables that define the way your shell and child shells behave.

Pipe the results of env into grep to focus on specific environment variables. For example, to see all environment variables that contain the string GIT (in all caps), execute:

$ env | grep GIT

The environment variables PATH and LD_LIBRARY_PATH are especially important. PATH is a colon-separated list of directory paths that determines where the system looks for your executables. LD_LIBRARY_PATH is a similar list that determines where the system looks for shared libraries.
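
For example, to make the system look in a personal bin and lib directory first (paths are illustrative):

$ export PATH=$HOME/bin:$PATH
$ export LD_LIBRARY_PATH=$HOME/lib:$LD_LIBRARY_PATH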

Using Modules to Manage your Environment

Lmod, a module system developed and maintained at TACC, makes it easy to manage your environment so you have access to the software packages and versions that you need to conduct your research. This is especially important on a system like Longhorn that serves thousands of users with an enormous range of needs. Loading a module amounts to choosing a specific package from among available alternatives:

$ module load xl          # load the default IBM compiler
$ module load xl/16.1.1   # load a specific version of the IBM compiler (v16.1.1)

A module does its job by defining or modifying environment variables (and sometimes aliases and functions). For example, a module may prepend appropriate paths to $PATH and $LD_LIBRARY_PATH so that the system can find the executables and libraries associated with a given software package. The module creates the illusion that the system is installing software for your personal use. Unloading a module reverses these changes and creates the illusion that the system just uninstalled the software:

$ module load   ddt  # defines DDT-related env vars; modifies others
$ module unload ddt  # undoes changes made by load

The module system does more, however. When you load a given module, the module system can automatically replace or deactivate modules to ensure the packages you have loaded are compatible with each other. In the example below, the module system automatically unloads one compiler when you load another, and deactivates IBM-compatible versions of MPI:

$ module load xl             # load default version of IBM compiler
$ module load spectrum_mpi   # load default version of Spectrum MPI
$ module load gcc            # change compiler

Lmod is automatically replacing "xl/16.1.1" with "gcc/9.1.0".

Inactive Modules:
  1) spectrum_mpi

On Longhorn, modules generally adhere to a TACC naming convention when defining environment variables that are helpful for building and running software. For example, the papi module defines TACC_PAPI_BIN (the path to PAPI executables), TACC_PAPI_LIB (the path to PAPI libraries), TACC_PAPI_INC (the path to PAPI include files), and TACC_PAPI_DIR (top-level PAPI directory). After loading a module, here are some easy ways to observe its effects:

$ module show papi   # see what this module does to your environment
$ env | grep PAPI    # see env vars that contain the string PAPI
$ env | grep -i papi # case-insensitive search for 'papi' in environment

To see the modules you currently have loaded:

$ module list

To see all modules that you can load right now because they are compatible with the currently loaded modules:

$ module avail

To see all installed modules, even if they are not currently available because they are incompatible with your currently loaded modules:

$ module spider   # list all modules, even those not available to load

To filter your search:

$ module spider cuda          # all modules with names containing 'cuda'
$ module spider cuda/10.1     # additional details on a specific module

Among other things, the latter command will tell you which modules you need to load before the module is available to load. You might also search for modules that are tagged with a keyword related to your needs (though your success here depends on the diligence of the module writers). For example:

$ module keyword performance

You can save a collection of modules as a personal default collection that will load every time you log into Longhorn. To do so, load the modules you want in your collection, then execute:

$ module save    # save the currently loaded collection of modules 

Two commands make it easy to return to a known, reproducible state:

$ module reset   # load the system default collection of modules
$ module restore # load your personal default collection of modules

On TACC systems, the command module reset is equivalent to module purge; module load TACC. It's a safer, easier way to get to a known baseline state than issuing the two commands separately.

Help text is available for both individual modules and the module system itself:

$ module help cuda/10.1     # show help text for the cuda software package
$ module help               # show help text for the module system itself

See Lmod's online documentation for more extensive documentation. The online documentation addresses the basics in more detail, but also covers several topics beyond the scope of the help text (e.g. writing and using your own module files).

It's safe to execute module commands in job scripts. In fact, this is a good way to write self-documenting, portable job scripts that produce reproducible results. If you use module save to define a personal default module collection, it's rarely necessary to execute module commands in shell startup scripts, and it can be tricky to do so safely. If you do wish to put module commands in your startup scripts, see Longhorn's default startup scripts for a safe way to do so.
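
A minimal sketch of module commands inside a job script (the Slurm directives and module names are placeholders; see the Sample Job Scripts for complete examples):

#!/bin/bash
#SBATCH ...                    # your usual Slurm directives

module reset                   # start from the known system default
module load xl spectrum_mpi    # record exactly what the job needs
module list                    # document the environment in the job output

ibrun ./mycode.exe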

Citizenship on Longhorn

You share Longhorn with hundreds of other users, and what you do on the system affects others. Exercise good citizenship to ensure that your activity does not adversely impact the system and the research community with whom you share it.

Here are some rules of thumb. Pay special attention to the first two rules.

1. Do Not Run Jobs on the Login Nodes

It is imperative that you do not run jobs on the login nodes. Doing so is the fastest route to account suspension.

  • You must avoid computationally intensive activity on login nodes. This means:

    • Don't run research applications on the login nodes; this includes applications like MATLAB and R. If you need interactive access, please use idev or srun to schedule a compute node.
    • Don't launch too many simultaneous processes: while it's fine to compile on a login node, a command like make -j 16 (which launches 16 parallel compile processes) may impact other users.
    • That script you wrote to check job status should probably do so once every few minutes rather than several times a second (see the sketch after this list).
  • Know when you're on a login node. You can use your Linux prompt, the hostname command, or other mechanisms to determine if you're on a login or a compute node. See Accessing the Compute Nodes for more information.

  • Know what's appropriate on a login node. A login node is a good place to edit and manage files, initiate file transfers, compile code, submit new jobs, and track existing jobs.
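
As an example of the polling frequency mentioned above, a job-status script might look like this minimal sketch:

# check my jobs every five minutes rather than several times a second
while true; do
    squeue -u $USER
    sleep 300
done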

2. Do Not Stress the Shared Lustre File Systems

This section focuses on ways to avoid causing problems on the $HOME, $WORK, and $SCRATCH file systems. Configuring Your Account covers environment variables and aliases that help you navigate the file systems.

  • Don't run jobs in $HOME. The $HOME file system is for routine file management, not parallel jobs.

  • Run I/O intensive jobs in $SCRATCH rather than $WORK. Significant I/O includes, but is not limited to, reading or writing 100+ GB of checkpoint/restart files, or running 4096+ MPI tasks that all read/write individual files. If you stress $WORK, you affect every user on every TACC system.

If you know your jobs will require significant I/O, please submit a support ticket and an HPC consultant will work with you.

  • Don't get greedy. If you know or suspect your workflow is I/O intensive, don't submit a pile of simultaneous jobs. Writing restart/snapshot files can stress the file system; avoid doing so too frequently. Also, use HDF5 or NetCDF to write a single restart file in parallel, rather than writing a separate file from each process.

  • Watch your file system quotas. If you're near your quota in $WORK and your job is repeatedly trying (and failing) to write to $WORK, you will stress the file system. If you're near your quota in $HOME, jobs run on any file system may fail, because all jobs write some data to the hidden $HOME/.slurm directory.

  • Avoid opening and closing files repeatedly in tight loops. Every open/close operation on the file system requires interaction with the MetaData Service (MDS). The MDS acts as a gatekeeper for access to files on Lustre's parallel file system. Overloading the MDS will affect other users on the system. If possible, open files once at the beginning of your program/workflow, then close them at the end.

  • Avoid storing many small files in a single directory, and avoid workflows that require many small files. A few hundred files in a single directory is probably fine; tens of thousands is almost certainly too many. If you must use many small files, group them in separate directories of manageable size.

3. Limit File Transfers

In order to not stress both internal and external networks, limit simultaneous and recursive file transfers.

  • Avoid too many simultaneous file transfers. You share the network bandwidth with other users; don't use more than your fair share. Two or three concurrent scp sessions is probably fine. Twenty is probably not.

  • Avoid recursive file transfers, especially those involving many small files. Create a tar archive before transfers. This is especially true when transferring files to or from Ranch.

4. Request Only the Resources you Need

  • When you submit a job to the scheduler, don't ask for more time than you really need. The scheduler will have an easier time finding a slot for the 2 hours you need than the 24 hours you request. This means shorter queue wait times for you and everybody else.

  • Test your submission scripts. Start small: make sure everything works on 2 nodes before you try 20. Work out submission bugs and kinks with 5-minute jobs that won't wait long in the queue and involve short, simple substitutes for your real workload: simple test problems; hello world codes; one-liners like ibrun hostname; or an ldd on your executable (see the sketch after this list).

  • Respect memory limits and other system constraints. If your application needs more memory than is available, your job will fail, and may leave nodes in unusable states. Monitor your application's needs. Execute module load remora followed by module help remora for more information on a particularly handy monitoring tool.
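
A minimal sketch of the kind of small test job described above (the project name is a placeholder):

#!/bin/bash
#SBATCH -J smalltest        # job name
#SBATCH -p development      # small, short test queue
#SBATCH -N 2                # two nodes
#SBATCH -n 8                # eight total MPI tasks
#SBATCH -t 00:05:00         # five minutes is plenty for a smoke test
#SBATCH -A myproject        # hypothetical allocation/project name

ibrun hostname              # trivial substitute for the real workload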

File Systems on Longhorn

The Stockyard Global File System ($WORK) is not yet available on Longhorn.

File System   Quota                   Key Features
$HOME         10 GB, 300,000 files    Not intended for parallel or high-intensity file operations.
                                      Backed up regularly. Defaults: 1 stripe, 1MB stripe size. Not purged.
$WORK         Not yet available.
$SCRATCH      no quota                Overall capacity 4.5 PB. Defaults: 1 stripe, 1MB stripe size. Not backed up.
                                      Subject to purge if access time* is more than 10 days old.
/tmp          no quota                ~700GB available per node. Each node's /tmp partition is purged at the end of a job.

*The operating system updates a file's access time when that file is modified on a login or compute node. Reading or executing a file/script on a login node does not update the access time, but reading or executing on a compute node does update the access time. This approach helps us distinguish between routine management tasks (e.g. tar, scp) and production use. Use the command ls -ul to view access times.

Longhorn mounts three Lustre file systems that are shared across all nodes: the home, work, and scratch file systems. Longhorn will also have a fourth, allocatable file system, FLASH, supporting applications with very high bandwidth or IOPS requirements. Longhorn's startup mechanisms define corresponding account-level environment variables $HOME, $WORK and $SCRATCH that store the paths to directories that you own on each of these file systems. Consult the Longhorn File Systems table above for the basic characteristics of these file systems, and the Good Citizenship section for guidance on file system etiquette.

Longhorn's home and scratch file systems are mounted only on Longhorn, but the work file system mounted on Longhorn is the Global Shared File System hosted on Stockyard. This is the same work file system that is currently available on Lonestar5, Stampede2 and several other TACC resources.

The $STOCKYARD environment variable points to the highest-level directory that you own on the Global Shared File System. The definition of the $STOCKYARD environment variable is of course account-specific, but you will see the same value on all TACC systems that provide access to the Global Shared File System (see Table 3). This directory is an excellent place to store files you want to access regularly from multiple TACC resources.

Stockyard Work file system
Figure 3. Account-level directories on the work file system (Global Shared File System hosted on Stockyard). Example for fictitious user bjones. All directories usable from all systems. Sub-directories (e.g. wrangler, maverick2) exist only when you have allocations on the associated system.

Your account-specific $WORK environment variable varies from system to system and is a subdirectory of $STOCKYARD (Figure 3). The subdirectory name corresponds to the associated TACC resource. The $WORK environment variable on Longhorn points to the $STOCKYARD/longhorn subdirectory, a convenient location for files you use and jobs you run on Longhorn. Remember, however, that all subdirectories contained in your $STOCKYARD directory are available to you from any system that mounts the file system. If you have accounts on both Longhorn and Stampede2, for example, the $STOCKYARD/longhorn directory is available from your Stampede2 account, and the $STOCKYARD/stampede2 directory is available from your Longhorn account. Your quota and reported usage on the Global Shared File System reflect all files that you own on Stockyard, regardless of their actual location on the file system.

Note that resource-specific subdirectories of $STOCKYARD are simply convenient ways to manage your resource-specific files. You have access to any such subdirectory from any TACC resource. If you are logged into Longhorn, for example, executing the alias cdw (equivalent to cd $WORK) will take you to the resource-specific subdirectory $STOCKYARD/longhorn. But you can access this directory from other TACC systems as well by executing cd $STOCKYARD/longhorn. These commands allow you to share files across TACC systems. In fact, several convenient account-level aliases make it even easier to navigate across the directories you own in the shared file systems:

Alias        Command
cd or cdh    cd $HOME
cdw          cd $WORK
cds          cd $SCRATCH
cdy or cdg   cd $STOCKYARD

Transferring Files with scp

You can transfer files between Longhorn and Linux-based systems using either scp or rsync. Both scp and rsync are available in the Mac Terminal app. Windows SSH clients typically include scp-based file transfer capabilities.

The Linux scp (secure copy) utility is a component of the OpenSSH suite. Assuming your Longhorn username is bjones, a simple scp transfer that pushes a file named myfile from your local Linux system to Longhorn $HOME would look like this:

localhost$ scp ./myfile bjones@longhorn.tacc.utexas.edu: # note colon after net address

You can use wildcards, but you need to be careful about when and where you want wildcard expansion to occur. For example, to push all files ending in .txt from the current directory on your local machine to /work/01234/bjones/longhorn on Longhorn:

localhost$ scp *.txt bjones@longhorn.tacc.utexas.edu:/work/01234/bjones/longhorn

To delay wildcard expansion until reaching Longhorn, use a backslash (\) as an escape character before the wildcard. For example, to pull all files ending in .txt from /work/01234/bjones/longhorn on Longhorn to the current directory on your local system:

localhost$ scp bjones@longhorn.tacc.utexas.edu:/work/01234/bjones/longhorn/\*.txt .

You can of course use shell or environment variables in your calls to scp. For example:

localhost$ destdir="/work/01234/bjones/longhorn/data"
localhost$ scp ./myfile bjones@longhorn.tacc.utexas.edu:$destdir

You can also issue scp commands on your local client that use Longhorn environment variables like $HOME, $WORK, and $SCRATCH. To do so, use a backslash (\) as an escape character before the $; this ensures that expansion occurs after establishing the connection to Longhorn:

localhost$ scp ./myfile bjones@longhorn.tacc.utexas.edu:\$WORK/data   # Note backslash

Avoid using scp for recursive transfers of directories that contain nested directories of many small files:

localhost$ scp -r ./mydata     bjones@longhorn.tacc.utexas.edu:\$WORK  # DON'T DO THIS

Instead, use tar to create an archive of the directory, then transfer the directory as a single file:

localhost$ tar cvf ./mydata.tar mydata                                  # create archive
localhost$ scp     ./mydata.tar bjones@longhorn.tacc.utexas.edu:\$WORK  # transfer archive

Transferring Files with rsync

The rsync (remote synchronization) utility is a great way to synchronize files that you maintain on more than one system: when you transfer files using rsync, the utility copies only the changed portions of individual files. As a result, rsync is especially efficient when you only need to update a small fraction of a large dataset. The basic syntax is similar to scp:

localhost$ rsync       mybigfile bjones@longhorn.tacc.utexas.edu:\$WORK/data
localhost$ rsync -avtr mybigdir  bjones@longhorn.tacc.utexas.edu:\$WORK/data

The options on the second transfer are typical and appropriate when syncing a directory: this is a recursive update (-r) with verbose (-v) feedback; the synchronization preserves time stamps (-t) as well as symbolic links and other meta-data (-a). Because rsync only transfers changes, recursive updates with rsync may be less demanding than an equivalent recursive transfer with scp.

Sharing Files with Collaborators

If you wish to share files and data with collaborators in your project, see Sharing Project Files on TACC Systems for step-by-step instructions. Project managers or delegates can use Unix group permissions and commands to create read-only or read-write shared workspaces that function as data repositories and provide a common work area to all project members.

Discover Installed Software

You can discover installed software using TACC's Software Search tool, or by executing "module spider" or "module avail" on the command line.

Users must provide their own license for commercial packages.

At this time, the following software packages are available on Longhorn:

login1$ module avail

-------------------------- /opt/apps/xl16/modulefiles --------------------------
hdf5/1.10.4    netcdf/4.7.4    spectrum_mpi/10.3.0 (L)

---------------------------- /opt/apps/modulefiles -----------------------------
TACC                  (L)      python3/powerai_1.6.2
autotools/1.2         (L)      python3/powerai_1.7.0  (D)
cmake/3.16.1          (L)      pytorch-py2/1.0.1
conda/4.8.3                    pytorch-py2/1.1.0      (D)
cuda/10.0             (g)      pytorch-py3/1.0.1
cuda/10.1             (g)      pytorch-py3/1.1.0
cuda/10.2             (g,D)    pytorch-py3/1.2.0
gcc/4.9.3                      pytorch-py3/1.3.1      (D)
gcc/6.3.0                      sanitytool/1.5
gcc/7.3.0                      settarg
gcc/9.1.0             (D)      tacc-singularity/3.5.3
git/2.24.1            (L)      tacc_tips/0.5
idev/1.5.7                     tensorflow-py2/1.13.1
launcher_gpu/1.1               tensorflow-py2/1.14.0  (D)
lmod                           tensorflow-py3/1.13.1
pylauncher/3.1                 tensorflow-py3/1.14.0
python2/powerai_1.6.0          tensorflow-py3/1.15.2
python2/powerai_1.6.1 (D)      tensorflow-py3/2.1.0   (D)
python3/powerai_1.6.0          xalt/2.8.1             (L)
python3/powerai_1.6.1          xl/16.1.1              (L)
...
login$

Building Software on Longhorn

When building software on Longhorn, we recommend using the IBM compilers and IBM Spectrum MPI stack. This will be the default in the early user period, but may change if we determine one of the other MPI stacks provides superior performance.

IBM Compilers

IBM XL is the recommended and default compiler suite on Longhorn. Here are simple examples that use the IBM compiler to build an executable from source code:

$ xlc -o myexe mycode.c       # C code
$ xlc++ -o myexe mycode.cpp   # C++ code
$ xlf90 -o myexe mycode.f     # Fortran code

See the published IBM documentation, available online, for information on optimization flags and other IBM compiler options.

GNU Compilers

The GNU project maintains a number of high-quality compilers, including compilers for C (gcc), C++ (g++), and Fortran (gfortran). The gcc compiler is the foundation of all three, and the term gcc often refers to the suite of these three GNU compilers.

Load a gcc module to access a recent version of the GNU compiler suite. Avoid using the GNU compilers that are available without a gcc module — those will be older versions based on the "system gcc" that comes as part of the Linux distribution.

Here are simple examples that use the GNU compilers to produce an executable from source code:

$ gcc mycode.c                    # C source file; executable a.out
$ gcc -o myexe mycode.c           # C source file; executable myexe
$ g++ -o myexe mycode.cpp         # C++ source file
$ gfortran -o myexe mycode.f90    # Fortran90 source file
$ gcc -fopenmp -o myexe mycode.c  # OpenMP; GNU flag is different than IBM

Note that some compiler options are the same for both IBM and GNU (e.g. "-o"), while others are different (e.g. "-qsmp=omp" vs "-fopenmp" for OpenMP). Many options are available in one compiler suite but not the other. See the online GNU documentation for information on optimization flags and other GNU compiler options.

Compiling and Linking MPI Programs

Spectrum MPI (module load spectrum_mpi) and MVAPICH2 (module load mvapich2) are the two MPI libraries available on Longhorn. After loading a spectrum_mpi or mvapich2 module, compile and/or link using the appropriate MPI wrapper (mpicc, mpicxx, mpif90) in place of the compiler:

$ mpicc    mycode.c   -o myexe   # C source, full build
$ mpicc -c mycode.c              # C source, compile without linking
$ mpicxx   mycode.cpp -o myexe   # C++ source, full build
$ mpif90   mycode.f90 -o myexe   # Fortran source, full build

These wrappers call the compiler with the options, include paths, and libraries necessary to produce an MPI executable for the MPI library you have loaded. To see the effect of a given wrapper, call it with the "-show" option:

$ mpicc -show  # Show compile line generated by call to mpicc; similarly for other wrappers

Compiling with CUDA

NVIDIA's CUDA compiler and libraries are accessed by loading the CUDA module:

login1$ module load cuda

Use the nvcc compiler on the login node to compile code, and run executables on the compute nodes. Longhorn's V100 GPUs are compute capability 7.0 devices. When compiling your code, make sure to specify this level of capability with:

$ nvcc -arch=compute_70 -code=sm_70 ...

The NVIDIA CUDA debugger is cuda-gdb. Applications must be debugged through an interactive idev session. Please see the relevant idev section for more details.
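
For example, a hedged sketch of such a debugging session (queue choice and executable name are illustrative):

login1$ idev -m 30 -p development    # request an interactive compute node
c001-001$ module load cuda
c001-001$ cuda-gdb ./mycode.cuda     # debug on the compute node, not the login node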

The NVIDIA Compute Visual Profiler, computeprof, can be used to profile both CUDA and OpenCL programs developed in the NVIDIA CUDA/OpenCL programming environment. Since the profiler is X-based, it must be run either within a VNC session or by ssh-ing into an allocated compute node with X-forwarding enabled. The CUDA module adds the profiler command and library paths to $PATH and $LD_LIBRARY_PATH. The computeprof executable and libraries can be found in the following respective directories:

$TACC_CUDA_DIR/bin
$TACC_CUDA_DIR/lib64

For further information on the CUDA compiler, programming, the API, and debugger, see the following documentation:

  • $TACC_CUDA_DIR/doc/pdf/CUDA_Compiler_Driver_NVCC.pdf
  • $TACC_CUDA_DIR/doc/pdf/CUDA_C_Programming_Guide.pdf
  • $TACC_CUDA_DIR/doc/pdf/CUDA_Samples.pdf
  • $TACC_CUDA_DIR/doc/pdf/cuda-gdb.pdf

Building Third-Party Software

You are welcome to install packages in your own $HOME or $WORK directories. No super-user privileges are needed; simply use the "--prefix" option when configuring and building the package.

You're welcome to download third-party research software and install it in your own account. In most cases you'll want to download the source code and build the software so it's compatible with the Longhorn software environment. You can't use yum or any other installation process that requires elevated privileges, but this is almost never necessary. The key is to specify an installation directory for which you have write permissions. Details vary; you should consult the package's documentation and be prepared to experiment. When using the familiar three-step autotools build process, the standard approach is to pass a non-default, user-owned installation directory to configure via its --prefix option:

$ export INSTALLDIR=$WORK/apps/t3pio
$ ./configure --prefix=$INSTALLDIR
$ make
$ make install

Other languages, frameworks, and build systems generally have equivalent mechanisms for installing software in user space. In most cases a web search like "Python Linux install local" will get you the information you need.

In Python, a local install will resemble one of the following examples:

$ pip install netCDF4      --user                   # install netCDF4 package to $HOME/.local
$ python3 setup.py install --user                   # install to $HOME/.local
$ pip3 install netCDF4     --prefix=$INSTALLDIR     # custom location; add to PYTHONPATH
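
If you install with a custom "--prefix", you also need to tell Python where to find the package; a minimal sketch (the exact site-packages subdirectory depends on the Python version in your environment):

$ export PYTHONPATH=$INSTALLDIR/lib/python3.7/site-packages:$PYTHONPATH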

Similarly in R:

$ module load Rstats            # load TACC's default R
$ R                             # launch R
> install.packages('devtools')  # R will prompt for install location

You may, of course, need to customize the build process in other ways. It's likely, for example, that you'll need to edit a makefile or other build artifacts to specify Longhorn-specific include and library paths or other compiler settings. A good way to proceed is to write a shell script that implements the entire process: definitions of environment variables, module commands, and calls to the build utilities. Include echo statements with appropriate diagnostics. Run the script until you encounter an error. Research and fix the current problem. Document your experience in the script itself, including dead ends, alternatives, and lessons learned. Re-run the script to get to the next error, then repeat until done. When you're finished, you'll have a repeatable process that you can archive until it's time to update the software or move to a new machine.
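
A notional build-script sketch along these lines (module, package, and path names are placeholders):

#!/bin/bash
# build_t3pio.sh -- repeatable build recipe; update as you learn more
set -e                              # stop at the first error

module reset
module load xl spectrum_mpi         # record the exact build environment

export INSTALLDIR=$WORK/apps/t3pio
echo "Configuring with prefix $INSTALLDIR"
./configure --prefix=$INSTALLDIR
echo "Building and installing..."
make
make install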

If you wish to share a software package with collaborators, you may need to modify file permissions. See Sharing Files with Collaborators for more information.

Conda Python Environments

TACC staff has deployed a pre-configured version of conda, available as a module. For the best experience on TACC resources, we recommend that you do not install your own version of Conda.

Conda Basics

The conda module can be loaded with:

$ module load conda

Then, list the available conda environments:

$ conda env list

Environments can be activated with:

$ conda activate [environment]

Here, [environment] is a placeholder for the name of a specific environment. When you're finished using an environment, exit it either by deactivating the environment:

$ conda deactivate

or unloading the module

$ module unload conda

Conda Packages

While you can technically install packages to your ~/.local directory with pip, packages installed there are visible to every Python environment and take precedence over the packages inside those environments, which can cause conflicts. Instead, we recommend that you install packages directly into a cloned or created environment where you have write permissions.

Create, Activate, then Install

$ conda create -n new_env python=3 tensorflow
$ conda activate new_env
$ conda install [new package]
$ pip install [new package]

Note: pip works here because the environment was activated.

Clone and Install

$ conda create --name myclone --clone py2_powerai_1.6.1
$ conda install -n myclone [new package]

Discovering Packages

Longhorn nodes use the PowerPC architecture, so only pure Python packages and code compiled for PowerPC will run on them. You can search for packages directly with conda and pip on the command line:

$ conda search tensorflow-gpu
$ pip search quicksect

or browsed online at anaconda.org and pypi.org. Once again, look for packages that support either "any" or "ppc64" architectures.

Python-Based Machine Learning

Longhorn uses the IBM Watson Machine Learning CE platform for machine learning frameworks and packages. Packages are distributed via Anaconda Python through the WMLCE repository. While you may be used to using pip to install the latest versions of your preferred machine learning frameworks, we recommend using this repository for several reasons:

  • The modules and environments are tested by IBM before release
  • Each PowerAI release contains a curated ecosystem of machine learning packages precompiled for PowerPC and GPU execution
  • The environments are functional and known, so we can provide support for these packages

Each version of PowerAI supported by Longhorn is cached on the file system and installed in both Python 2 and 3 environments when possible.

$ module load conda
$ conda env list
# conda environments:
#
base                  *  /scratch/apps/conda/4.8.3
py2_powerai_1.6.0        /scratch/apps/conda/4.8.3/envs/py2_powerai_1.6.0
py2_powerai_1.6.1        /scratch/apps/conda/4.8.3/envs/py2_powerai_1.6.1
py3_powerai_1.6.0        /scratch/apps/conda/4.8.3/envs/py3_powerai_1.6.0
py3_powerai_1.6.1        /scratch/apps/conda/4.8.3/envs/py3_powerai_1.6.1
py3_powerai_1.6.2        /scratch/apps/conda/4.8.3/envs/py3_powerai_1.6.2
py3_powerai_1.7.0        /scratch/apps/conda/4.8.3/envs/py3_powerai_1.7.0

These environments contain curated collections of machine learning packages, including TensorFlow and PyTorch.

To increase the visibility of these environments and packages, we have also exposed some of them through standard Lmod modules.

$ ml avail
---------------- /opt/apps/modulefiles --------------------
   conda/4.8.3           (L,D)    pytorch-py3/1.1.0
   python2/powerai_1.6.0          pytorch-py3/1.2.0
   python2/powerai_1.6.1 (D)      pytorch-py3/1.3.1     (D)
   python3/powerai_1.6.0          tensorflow-py2/1.13.1
   python3/powerai_1.6.1          tensorflow-py2/1.14.0 (D)
   python3/powerai_1.6.2          tensorflow-py3/1.13.1
   python3/powerai_1.7.0 (D)      tensorflow-py3/1.14.0
   pytorch-py2/1.0.1              tensorflow-py3/1.15.2
   pytorch-py2/1.1.0     (D)      tensorflow-py3/2.1.0  (D)
   pytorch-py3/1.0.1

Notice that loading the tensorflow-py3/1.15.2 module also loads the python3/powerai_1.6.2 module, which in turn loads the py3_powerai_1.6.2 conda environment. That is because each tensorflow and pytorch module redirects to and loads the PowerAI distribution from which it originated.

While you can create conda environments on the login nodes without affecting other users, you must move to a compute node (e.g. via an idev session) before running any code.

# Allocate a compute node in the development queue for 30 minutes
$ idev -m 30 -p development

TensorFlow

$ module load tensorflow-py3/1.15.2
(py3_powerai_1.6.2)$ python -c 'import tensorflow; print(tensorflow.test.is_gpu_available())';
2020-04-20 17:32:29.440946: I 
tensorflow/stream_executor/platform/default/dso_loader.cc:44] 
Successfully opened dynamic library libcudart.so.10.1
...
2020-04-20 17:32:35.278808: I 
tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] 
Created TensorFlow device (/device:GPU:3 with 14927 MB memory) 
-> physical GPU (device: 3, name: Tesla V100-SXM2-16GB, pci bus id: 0035:04:00.0, compute capability: 7.0)
True

Note that the "(py3_powerai_1.6.2)" decorator is prefixed to your shell's $PS1 prompt, indicating which conda environment is loaded.

See the TensorFlow documentation for additional information.

PyTorch

$ module load pytorch-py3/1.2.0
(py3_powerai_1.6.2)$ python -c 'import torch; print(torch.cuda.is_available())';
True

See the PyTorch documentation for additional information.

Horovod

Each PowerAI environment contains Horovod for distributed deep learning. Horovod requires minimal changes to your code to split your data batches across multiple GPUs and nodes. Below is an example of running the TensorFlow benchmark suite on two Longhorn nodes with 8 GPUs in total using ibrun.

# Allocate compute nodes
login1$ idev -N 2 -n 8 -p v100

# Load TensorFlow 2.1.0
c002-001$ module load tensorflow-py3/2.1.0

# Download and checkout benchmarks compatible with TF 2.1
c002-001$ git clone --branch cnn_tf_v2.1_compatible https://github.com/tensorflow/benchmarks.git
c002-001$ cd benchmarks

# Launch with ibrun
c002-001$ ibrun -n 8 python scripts/tf_cnn_benchmarks/tf_cnn_benchmarks.py --num_gpus=1 \
    --model resnet50 --batch_size 32 --num_batches 100 --variable_update=horovod
TACC:  Starting up job 22832
TACC:  Setting up parallel environment for OpenMPI mpirun.
TACC:  Starting parallel tasks...
…
----------------------------------------------------------------
total images/sec: 2560.04
----------------------------------------------------------------
TACC:  Shutdown complete. Exiting.

Official PowerAI documentation references IBM DDL and ddlrun, but we found no significant performance difference between ddlrun and launching with ibrun (which uses NCCL).

Launching Applications

The primary purpose of your job script is to launch your research application. How you do so depends on several factors, especially (1) the type of application (e.g. MPI, OpenMP, serial), and (2) what you're trying to accomplish (e.g. launch a single instance, complete several steps in a workflow, run several applications simultaneously within the same job). While there are many possibilities, your own job script will probably include a launch line that is a variation of one of the examples described in this section.

Launching Single GPU Applications

There are four GPUs per node indexed 0-3. By default, only GPU 0 is visible to serial GPU applications. Launching a serial GPU application takes the form:

$ ./mycode.cuda          # compiled CUDA executable

To target the executable to a specific GPU, set the CUDA_VISIBLE_DEVICES environment variable. For example, to run an application on GPU 2:

$ export CUDA_VISIBLE_DEVICES=2
$ ./mycode.cuda

This method can be used to run four serial GPU applications simultaneously, each on its own GPU. This is useful when the same code needs to be run many times with different inputs, and it makes more efficient use of a node by keeping all four GPUs active:

$ CUDA_VISIBLE_DEVICES=0 ./mycode.cuda  &
$ CUDA_VISIBLE_DEVICES=1 ./mycode.cuda  &
$ CUDA_VISIBLE_DEVICES=2 ./mycode.cuda  &
$ CUDA_VISIBLE_DEVICES=3 ./mycode.cuda  &
$ wait

The trailing ampersand (&) puts each process in the background, and the "wait" command pauses the job until all the background processes complete. To confirm which GPUs are active, and which processes are running on each GPU, use the "nvidia-smi" tool:

$ nvidia-smi
Tue Nov 26 14:26:55 2019       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.67       Driver Version: 418.67       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla V100-SXM2...  On   | 00000004:04:00.0 Off |                    0 |
| N/A   34C    P0   103W / 300W |    319MiB / 16130MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla V100-SXM2...  On   | 00000004:05:00.0 Off |                    0 |
| N/A   38C    P0   107W / 300W |    319MiB / 16130MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Tesla V100-SXM2...  On   | 00000035:03:00.0 Off |                    0 |
| N/A   35C    P0   103W / 300W |    319MiB / 16130MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+
|   3  Tesla V100-SXM2...  On   | 00000035:04:00.0 Off |                    0 |
| N/A   36C    P0   106W / 300W |    319MiB / 16130MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0    131794      C   ./mycode.cuda                                309MiB |
|    1    131798      C   ./mycode.cuda                                309MiB |
|    2    131803      C   ./mycode.cuda                                309MiB |
|    3    131806      C   ./mycode.cuda                                309MiB |
+-----------------------------------------------------------------------------+

Launching Multi-GPU Applications

Typical workflows may use MPI and ibrun to launch a GPU application using multiple GPUs on one node, or even multiple GPUs on multiple nodes. There is no need to set CUDA_VISIBLE_DEVICES as demonstrated above, as ibrun will handle GPU assignment among the hosts. For example, to run a compiled application on all four GPUs on one node:

login1$ idev -N1   # launch an interactive session with one node
...
c001-005$ ibrun -np 4 ./mycode.cuda --num_gpus=1 --other_options

In this hypothetical example, ibrun launches four instances of the mycode.cuda executable, and the notional --num_gpus flag tells each instance to use one GPU. The ibrun tool is also aware of multiple hosts from information in your environment. If you are running a two-node job, you can launch across eight GPUs in a similar fashion:

login1$ idev -N2   # launch an interactive session with two nodes
...
c001-005$ ibrun -np 8 ./mycode.cuda --num_gpus=1 --other_options

See the Tensorflow at TACC document for more information and examples about running on multiple GPUs or multiple nodes.

Multi-GPU with CUDA

If you plan to launch multi-GPU applications using CUDA-aware Spectrum MPI, make sure to export the following environment variable:

export MY_SPECTRUM_OPTIONS="--gpu"

Launching One Serial Application

To launch a serial application, simply call the executable. Specify the path to the executable in either the PATH environment variable or in the call to the executable itself:

mycode.exe                      # executable in a directory listed in $PATH
$WORK/apps/myprov/mycode.exe    # explicit full path to executable
./mycode.exe                    # executable in current directory
./mycode.exe -m -k 6 input1     # executable with notional input options

Running Jobs on Longhorn's GPU Nodes

Like all TACC resources, Longhorn's job scheduler is the Slurm Workload Manager. Slurm commands enable you to submit, manage, monitor, and control your jobs. Jobs submitted to the scheduler are queued, then run on the compute nodes. Each job consumes Service Units (SUs) which are then charged to your allocation.

Job Accounting

Longhorn's accounting system is based on node-hours: one unadjusted Service Unit (SU) represents a single compute node used for one hour (a node-hour). We then multiply by a charge rate that reflects supply and demand for the particular queue or the type of node you use. For any given job, the total cost in SUs is:

SUs billed = (# nodes) x (job duration in wall clock hours) x (charge rate per node-hour)

For example, a job that runs in the v100 queue (charge rate of 6 SUs per node-hour) for two hours using four nodes will cost 48 SUs:

4 nodes * 2 hours * 6 SUs/node-hour = 48 SUs

The system tracks and charges for usage to a granularity of a few seconds of wall clock time. The system charges only for the resources you actually use, not those you request. In general, your queue wait time will be less if you request only the time you need: the scheduler will have an easier time finding a slot for the 2 hours you really need than for the 24 hours you request in your job script.

Principal Investigators can monitor allocation usage via the TACC User Portal under "Allocations->Projects and Allocations". Be aware that the SU totals shown on the portal may lag behind the most recent usage. Projects and allocation balances are also displayed upon command-line login.

To display a summary of your TACC project balances and disk quotas at any time, execute:

login1$ /usr/local/etc/taccinfo # Generally more current than balances displayed on the portals.

Requesting Resources

Be sure to request computing resources (e.g. number of nodes, number of tasks per node, max time per job) that are consistent with the type of application(s) you are running:

  • A serial (non-parallel) application can only make use of a single core on a single node, and will only see that node's memory.

  • An MPI (Message Passing Interface) program can exploit the distributed computing power of multiple nodes: it launches multiple copies of its executable (MPI tasks, each assigned unique IDs called ranks) that can communicate with each other across the network. The tasks on a given node, however, can only directly access the memory on that node. Depending on the program's memory requirements, it may not be possible to run a task on every core of every node assigned to your job. If it appears that your MPI job is running out of memory, try launching it with fewer tasks per node to increase the amount of memory available to individual tasks.

  • A popular type of parameter sweep (sometimes called high throughput computing) involves submitting a job that simultaneously runs many copies of one serial or threaded application, each with its own input parameters ("Single Program Multiple Data", or SPMD). The launcher tool is designed to make it easy to submit this type of job. For more information:

      $ module load launcher_gpu
      $ module help launcher_gpu

Slurm Job Scheduler

Longhorn employs the Slurm workload manager, the job scheduler common to all TACC HPC resources. Slurm commands enable you to submit, manage, monitor, and control your jobs.

The Stampede2 User Guide discusses Slurm extensively; consult it for more detailed information.

Longhorn Production Queues

Longhorn's current Slurm partitions (queues), maximum node limits, and charge rates are summarized in the table below. Execute qlimits on Longhorn for real-time information regarding limits on available queues. See Job Accounting to learn how jobs are charged to your allocation.

Table 5. Longhorn Production Queues

Queue status as of February 19, 2020. Queues and limits are subject to change without notice.

Queue Name    Max Nodes per Job         Max Job Duration   Charge Rate (per node-hour)
              (assoc'd cores, GPUs)
development   2 nodes                   2 hours            1 Service Unit (SU)
(8 nodes)     (80 cores, 8 GPUs)
v100          32 nodes                  48 hours           6 SUs
(88 nodes)    (1280 cores, 128 GPUs)
v100-lm       8 nodes                   48 hours           6 SUs
(8 nodes)     (320 cores, 32 GPUs)

To request more nodes than are available in the v100 queue, submit a consulting (help desk) ticket through the TACC User Portal. Include in your request reasonable evidence of your readiness to run under the conditions you're requesting. In most cases this should include your own strong or weak scaling results obtained from previous Longhorn jobs.

Sample job scripts are available for the following scenarios:

  • Single GPU Applications
  • Single Node Multiple GPUs
  • Multi-GPU Applications
  • Parallelization across GPU Nodes

Customizing your Job Script

Copy and customize the sample job scripts to specify and refine your job's requirements:

  • specify the maximum run time with the -t option
  • specify the number of nodes needed with the -N option
  • specify the number of tasks per node with the -n option
  • specify the project to be charged with the -A option

In general, the fewer resources (nodes) you specify in your batch script, the less time your job will wait in the queue. See 4. Request Only the Resources You Need in the Citizenship section.

Consult Table 6 in the Stampede2 User Guide for a listing of common Slurm #SBATCH options.
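
A minimal single-node GPU job script sketch that ties these options together (illustrative only, not one of the official sample scripts; the executable and project name are placeholders):

#!/bin/bash
#SBATCH -J mygpujob          # job name
#SBATCH -o mygpujob.%j.out   # output file (%j expands to the job ID)
#SBATCH -p v100              # queue (see Longhorn Production Queues)
#SBATCH -N 1                 # one node
#SBATCH -n 4                 # four MPI tasks, one per GPU
#SBATCH -t 02:00:00          # request only the time you need
#SBATCH -A myproject         # hypothetical allocation/project name

module load xl spectrum_mpi cuda    # modules your code actually needs

ibrun -np 4 ./mycode.cuda           # ibrun handles GPU assignment among tasks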

Job Management

In this section, we present several Slurm commands and other utilities that are available to help you plan and track your job submissions as well as check the status of the Slurm queues.

When interpreting queue and job status, remember that Longhorn doesn't operate on a first-come-first-served basis. Instead, the sophisticated, tunable algorithms built into Slurm attempt to keep the system busy, while scheduling jobs in a way that is as fair as possible to everyone. At times this means leaving nodes idle ("draining the queue") to make room for a large job that would otherwise never run. It also means considering each user's "fair share", scheduling jobs so that those who haven't run jobs recently may have a slightly higher priority than those who have.

TACC's qlimits command

To display resource limits for the Longhorn queues, execute: qlimits. The result is real-time data; the corresponding information in this document's table of Longhorn queues may lag behind the actual configuration that the qlimits utility displays.

Slurm's sinfo command

Slurm's sinfo command allows you to monitor the status of the queues. If you execute sinfo without arguments, you'll see a list of every node in the system together with its status. To skip the node list and produce a tight, alphabetized summary of the available queues and their status, execute:

login1$ sinfo -S+P -o "%18P %8a %20F"    # compact summary of queue status

An excerpt from this command's output might look like this:

login1$ sinfo -S+P -o "%18P %8a %20F"
PARTITION          AVAIL    NODES(A/I/O/T)    
development        up       0/8/0/8
v100               up       44/43/1/88          
v100-lm            up       0/8/0/8

The AVAIL column displays the overall status of each queue (up or down), while the column labeled NODES(A/I/O/T) shows the number of nodes in each of several states ("Allocated", "Idle", "Other", and "Total"). Execute man sinfo for more information. Use caution when reading the generic documentation, however: some available fields are not meaningful or are misleading on Longhorn (e.g. TIMELIMIT, displayed using the %l option).

Slurm's squeue command

Slurm's squeue command allows you to monitor jobs in the queues, whether pending (waiting) or currently running:

login1$ squeue             # show all jobs in all queues
login1$ squeue -u bjones   # show all jobs owned by bjones
login1$ man squeue         # more info

An excerpt from the default output might look like this:

 JOBID   PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
25781 development idv72397   bjones CG       9:36      2 c001-011,012
25918 development ppm_4828   bjones PD       0:00     20 (Resources)
25915 development MV2-test    siliu PD       0:00     14 (Priority)
25589        v100   aatest slindsey PD       0:00      8 (Dependency)
25949 development psdns_la sniffjck PD       0:00      2 (Priority)
25618        v100   SP256U   connor PD       0:00      1 (Dependency)
25944        v100  MoTi_hi   wchung  R      35:13      1 c005-003
25945        v100 WTi_hi_e   wchung  R      27:11      1 c006-001
25606        v100   trainA   jackhu  R   23:28:28      1 c008-012

The column labeled ST displays each job's status:

  • PD means "Pending" (waiting);
  • R means "Running";
  • CG means "Completing" (cleaning up after exiting the job script).

Pending jobs appear in order of decreasing priority. The last column includes a nodelist for running/completing jobs, or a reason for pending jobs. If you submit a job before a scheduled system maintenance period, and the job cannot complete before the maintenance begins, your job will run when the maintenance/reservation concludes. The squeue command will report ReqNodeNotAvailable ("Required Node Not Available"). The job will remain in the PD state until Longhorn returns to production.

The default format for squeue now reports total nodes associated with a job rather than cores, tasks, or hardware threads. One reason for this change is clarity: the operating system sees each compute node's 160 hardware threads as "processors", and output based on that information can be ambiguous or otherwise difficult to interpret.

The default format lists all nodes assigned to displayed jobs; this can make the output difficult to read. A handy variation that suppresses the nodelist is:

login1$ squeue -o "%.10i %.12P %.12j %.9u %.2t %.9M %.6D"  # suppress nodelist

The --start option displays job start times, including very rough estimates for the expected start times of some pending jobs that are relatively high in the queue:

login1$ squeue --start -j 167635     # display estimated start time for job 167635

TACC's showq utility

TACC's showq utility mimics a tool that originated in the PBS project, and serves as a popular alternative to the Slurm squeue command:

login1$ showq                 # show all jobs; default format
login1$ showq -u              # show your own jobs
login1$ showq -U bjones       # show jobs associated with user bjones
login1$ showq -h              # more info

The output groups jobs into four categories: ACTIVE, WAITING, BLOCKED, and COMPLETING/ERRORED. A BLOCKED job is one that cannot yet run due to temporary circumstances (e.g. a pending maintenance or other large reservation).

If your waiting job cannot complete before a maintenance/reservation begins, showq will display its state as WaitNod ("Waiting for Nodes"). The job will remain in this state until Longhorn returns to production.

The default format for showq now reports total nodes associated with a job rather than cores, tasks, or hardware threads. One reason for this change is clarity: the operating system sees each compute node's 160 hardware threads as "processors", and output based on that information can be ambiguous or otherwise difficult to interpret.

Other Job Management Commands

scancel, scontrol, and sacct

It's not possible to add resources to a job (e.g. allow more time) once you've submitted the job to the queue.

To cancel a pending or running job, first determine its jobid, then use scancel:

login1$ squeue -u bjones    # one way to determine jobid
 JOBID   PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
170361        v100   spec12   bjones PD       0:00     32 (Resources)
login1$ scancel 170361      # cancel job

For detailed information about the configuration of a specific job, use scontrol:

login1$ scontrol show job=170361

To view some accounting data associated with your own jobs, use sacct:

login1$ sacct --starttime 2019-06-01  # show jobs that started on or after this date
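
For example, to restrict sacct output to a single job and a few fields of interest (the job ID below is arbitrary; the field names are standard sacct format options):

login1$ sacct -j 170361 --format=JobID,JobName,Partition,NNodes,Elapsed,State,ExitCode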

Dependent Jobs using sbatch

You can use sbatch to help manage workflows that involve multiple steps: the --dependency option allows you to launch jobs that depend on the completion (or successful completion) of another job. For example, you could use this technique to split a workflow into three jobs: (1) compile on a single node; then (2) compute on 40 nodes; then (3) post-process your results using 4 nodes.

login1$ sbatch --dependency=afterok:173210 myjobscript

For more information see the Slurm online documentation. Note that you can use $SLURM_JOBID from one job to find the jobid you'll need to construct the sbatch launch line for a subsequent one. But also remember that you can't use sbatch to submit a job from a compute node.
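
One way to build such a chain from the login node is to capture each job ID at submission time using sbatch's --parsable option, which prints only the job ID. The script names below (compile.slurm, compute.slurm, postproc.slurm) are placeholders for the three steps described above.

login1$ jobid1=$(sbatch --parsable compile.slurm)                                  # step 1: compile
login1$ jobid2=$(sbatch --parsable --dependency=afterok:${jobid1} compute.slurm)   # step 2: compute
login1$ sbatch --dependency=afterok:${jobid2} postproc.slurm                       # step 3: post-process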

Containers on Longhorn

Longhorn provides integrated support for Singularity, a containerization platform that enables users to access software and libraries that are not otherwise available in the Longhorn module system. Singularity is the containerization platform of choice for all TACC HPC systems because users can pull, run, and shell into containers without escalated privileges, MPI and GPUs are supported, and it is compatible with Docker.

To make the experience seamless, our implementation injects mount points and environment variables into the container to match the HPC system environment: the $SCRATCH, $WORK, and $HOME filesystems will all be identical to what users see natively on any Longhorn node.

To get started with Singularity, first load the tacc-singularity module:

$ module load tacc-singularity

All Singularity commands must be run on a compute node (e.g., within an idev interactive session or a batch job). Example commands for the most common Singularity functions include:

  • Pull a Singularity-compatible image from Docker Hub

      login1$ idev
      ...
      c001-005$ singularity pull docker://python:3.8.0
      ...
      INFO:    Creating SIF file...
      INFO:    Build complete: python_3.8.0.sif
  • Start an interactive session inside the container

      c001-005$ singularity shell python_3.8.0.sif
      Singularity python_3.8.0.sif:~/singularity> python3 --version
      Python 3.8.0
      Singularity python_3.8.0.sif:~/singularity> exit
  • Execute a command inside the container

      c001-005$ singularity exec python_3.8.0.sif python3 --version
      Python 3.8.0
  • Run the default container command (not supported by all containers)

      c001-005$ singularity run python_3.8.0.sif
      Python 3.8.0 (default, Nov 23 2019, 09:02:13)
      [GCC 8.3.0] on linux
      Type "help", "copyright", "credits" or "license" for more information.
      >>>

Note that (unlike other TACC machines) Longhorn nodes use the PowerPC architecture (ppc64le). Thus, when pulling images from (e.g.) Docker Hub, make sure the image is ppc64le-compatible. Singularity will automatically pull the correct architecture if it exists.

Tip: The search form on Docker Hub can be filtered by Power PC architecture: https://hub.docker.com/search?q=&type=image&architecture=ppc64le
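
To give a containerized application access to a node's GPUs, add Singularity's --nv flag, which binds the host's NVIDIA driver libraries into the container at runtime. A sketch, assuming a hypothetical ppc64le image named mycode.sif with a GPU-enabled Python environment and a placeholder script train.py:

c001-005$ singularity exec --nv mycode.sif python3 train.py    # --nv enables GPU access inside the container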

Help Desk

TACC Consulting operates from 8am to 5pm CST, Monday through Friday, except for holidays. You can submit a help desk ticket at any time via the TACC User Portal with "Longhorn" in the Resource field. Help the consulting staff help you by following these best practices when submitting tickets.

  • Do your homework before submitting a help desk ticket. What do the user guide and other documentation say? Search the internet for key phrases in your error logs; that's probably what the consultants answering your ticket are going to do. What have you changed since the last time your job succeeded?

  • Describe your issue as precisely and completely as you can: what you did, what happened, verbatim error messages, other meaningful output. When appropriate, include the information a consultant would need to find your artifacts and understand your workflow: e.g. the directory containing your build and/or job script; the modules you were using; relevant job numbers; and recent changes in your workflow that could affect or explain the behavior you're observing.

  • Subscribe to Longhorn User News. This is the best way to keep abreast of maintenance schedules, system outages, and other general interest items.

  • Have realistic expectations. Consultants can address system issues and answer questions about Longhorn. But they can't teach parallel programming in a ticket, and may know nothing about the package you downloaded. They may offer general advice that will help you build, debug, optimize, or modify your code, but you shouldn't expect them to do these things for you.

  • Be patient. It may take a business day for a consultant to get back to you, especially if your issue is complex. It might take an exchange or two before you and the consultant are on the same page. And if the admins disable your account, it's not punitive: when the file system is in danger of crashing or a login node hangs, they don't have time to notify you before taking action.