Longhorn has been decommissioned and is no longer available. (11/30/2022)

TACC is eager to support Longhorn users long-term on our other GPU computing resources - primarily Lonestar6 and Frontera. Toward this end, Longhorn staff has stored snapshots of all users' home and scratch directories on a temporary location on Lonestar6. TACC staff will implement this transition as smoothly as possible. Please submit a support ticket with the subject "Longhorn Transition" for more information about accessing your data and requesting allocations on Lonestar6 or Frontera.

TACC Longhorn User Guide
Last update: November 30, 2022

Status Updates and Notices

  • The Stockyard ($WORK) file system is now mounted on Longhorn. Users must STILL run all jobs out of $SCRATCH. (09/21/2021)
  • Longhorn is a subsystem of Frontera. Consult Frontera Allocations for information on obtaining a Longhorn allocation.
  • You may now subscribe to Longhorn User News. Stay up-to-date on Longhorn's status, scheduled maintenances and other notifications.
  • Longhorn's production queue limits are subject to change at any time.
  • All users: read the Good Conduct section. Longhorn is a shared resource and your actions can impact other users.

Introduction to Longhorn

Longhorn is a TACC resource built in partnership with IBM to support GPU-accelerated workloads. The power of this system is in its multiple GPUs per node, and it is intended to support sophisticated workloads that require high GPU density and little CPU compute. Longhorn supports double-precision machine learning and deep learning workloads that can be accelerated by GPU-powered frameworks, as well as general-purpose GPU calculations. Longhorn also serves as the GPU subsystem of one of TACC's flagship supercomputers, Frontera, funded by the National Science Foundation (NSF) through award #1818253, Computing for the Endless Frontier.

TACC's Longhorn Cluster
Figure 1. TACC's Longhorn System

Quickstart for Experienced Users

Experienced HPC/TACC users will be very familiar with many of the topics presented in this guide. Here we'll highlight some sections for a quick start on Longhorn.

  • Log into the TACC User Portal to confirm that you've been added to a Longhorn allocation. Then, connect via SSH to longhorn.tacc.utexas.edu.
  • Review the TACC info box (taccinfo) displayed at login for your allocation availability and SU balances.
  • Read the Good Conduct section. Longhorn is a shared resource and this section covers practices and etiquette to keep your account in good standing and keep Longhorn's systems running smoothly for all users.
  • Consult the Longhorn File Systems and Longhorn Production Queues tables. These are nearly identical in structure to those on other TACC systems, but there are a few minor differences you will want to note.
  • Copy and modify any of the Sample Job Scripts for your own use. These scripts also show how to adapt job scripts you bring over from other TACC systems so that they run efficiently on Longhorn.
  • Review the default modules with "module list". Make any changes needed for your code.
  • Start small. Run any jobs from other systems on a smaller scale in order to test the performance of your code on Longhorn. You may find your code needs to be altered or recompiled in order to perform well and at scale on the new system.

IBM Power System Specifications

Longhorn comprises 108 IBM Power System AC922 nodes distributed across nine racks, plus an IBM Elastic Storage System (hosting the home and scratch file systems) as a standalone 10th rack. Four nodes are reserved as login and management nodes, leaving 104 nodes for the compute system.

GPU Nodes

Longhorn hosts 96 V100 nodes, each with 4 GPUs per node. Access these nodes via the v100 queue.

Model:  IBM Power System AC922 (8335-GTH)
Processor:  IBM Power 9
Total processors per node:  2
Total cores per processor:  20
Total cores per node:  40
Hardware threads per core:  4
Hardware threads per node:  160
Clock rate:  2.3GHz
Clock rate (turbo):  3.8GHz
RAM:  256GB
Local storage:  ~900 GB (/tmp)
GPUs:  4x NVIDIA Tesla V100
GPU RAM:  4x 16GB (64 GB aggregate)

GPU Large Memory Nodes

Longhorn hosts 8 large memory V100 nodes, each with 4 GPUs per node. Access these nodes via the v100-lm queue.

Model:  IBM Power System AC922 (8335-GTH)
Processor:  IBM Power 9
Total processors per node:  2
Total cores per processor:  20
Total cores per node:  40
Hardware threads per core:  4
Hardware threads per node:  160
Clock rate:  2.3GHz
Clock rate (turbo):  3.8GHz
RAM:  512GB
Local storage:  ~900 GB (/tmp)
GPUs:  4x NVIDIA Tesla V100
GPU RAM:  4x 16GB (64 GB aggregate)

Login Nodes

Longhorn hosts two login nodes:

  • Dual socket
  • IBM Power 9 processors @ 2.3 GHz and 20 cores/socket (40 cores/node)
  • 256 GB DDR4 RAM (16 x 16 GB DIMMS @ 2666 MHz)
  • Hyperthreading enabled


Network

Longhorn is attached to the $HOME and $SCRATCH file systems over a fast network.

  • Mellanox EDR Infiniband (MT28800 Family ConnectX-5 Ex adapter)
  • Spine-and-leaf interconnect
  • NetXtreme BCM5719 Gigabit Ethernet 1Gbps adapter

Longhorn is also attached to the $WORK file system over a limited connection. For better performance and more efficient I/O, we recommend staging your data to the $SCRATCH file system prior to submitting compute jobs.

Accessing the System

Only users with allocations on Longhorn are permitted to log on to Longhorn. A TACC User Portal account does not enable you to log on to any TACC resources unless you have an active allocation on that resource.

Log on with Secure Shell (SSH)

The "ssh" command (SSH protocol) is the standard way to connect to Longhorn. SSH also includes support for the file transfer utilities scp and sftp. Wikipedia is a good source of information on SSH. SSH is available within Linux and from the Terminal app in macOS. If you are using Windows, you will need an SSH client that supports the SSH-2 protocol: e.g. Bitvise, OpenSSH, PuTTY, or SecureCRT. Initiate a session using the ssh command or the equivalent; from the Linux command line the launch command looks like this:

localhost$ ssh taccuserid@longhorn.tacc.utexas.edu

The above command routes your connection to one of the two available login nodes, login1 or login2. To connect to a specific login node, use its full domain name:

localhost$ ssh taccuserid@login2.longhorn.tacc.utexas.edu

To connect with X11 support on Longhorn (usually required for applications with graphical user interfaces), use the "-X" or "-Y" switch:

localhost$ ssh -X taccuserid@longhorn.tacc.utexas.edu

To report a connection problem, execute the ssh command with the "-vvv" option and include the verbose output when submitting a help ticket.

Do not run the "ssh-keygen" command on Longhorn. This command will create and configure a key pair that will interfere with the execution of job scripts in the batch system. If you do this by mistake, you can recover by renaming or deleting the .ssh directory located in your home directory; the system will automatically generate a new one for you when you next log into Longhorn.

  1. execute "mv .ssh dot.ssh.old"
  2. log out
  3. log into Longhorn again

After logging in again the system will generate a properly configured key pair.

Regardless of your research workflow, you'll need to master Linux basics and a Linux-based text editor (e.g. emacs, nano, gedit, or vi/vim) to use the system properly. However, this user guide does not address these topics. There are numerous resources in a variety of formats that are available to help you learn Linux, including some listed on the TACC and training sites. If you encounter a term or concept in this user guide that is new to you, a quick internet search should help you resolve the matter quickly.

Check your Allocation Status

You must be added to a Longhorn allocation in order to have access to Longhorn. The ability to log on to the TACC User Portal does NOT signify access to Longhorn or any TACC resource. You may monitor your allocations on the TACC User Portal. Please consult the allocations documentation for more information.

Multi-Factor Authentication

Access to all TACC systems now requires Multi-Factor Authentication (MFA). You can create an MFA pairing on the TACC User Portal. After login on the portal, go to your account profile (Home->Account Profile), then click the "Manage" button under "Multi-Factor Authentication" on the right side of the page. See Multi-Factor Authentication at TACC for further information.

Password Management

Use your TACC User Portal password for direct logins to TACC resources. You can change your TACC password through the TACC User Portal. Log into the portal, then select "Change Password" under the "HOME" tab. If you've forgotten your password, go to the TACC User Portal home page and select "Password Reset" under the Home tab.

Account-Level Diagnostics

TACC's sanitytool module loads an account-level diagnostic package that detects common account-level issues and often walks you through the fixes. You should certainly run the package's sanitycheck utility when you encounter unexpected behavior. You may also want to run sanitycheck periodically as preventive maintenance. To run sanitytool's account-level diagnostics, execute the following commands:

login1$ module load sanitytool
login1$ sanitycheck

Execute module help sanitytool for more information.

Linux Shell

The default login shell for your user account is Bash. To determine your current login shell, execute:

$ echo $SHELL

If you'd like to change your login shell to csh, sh, tcsh, or zsh, submit a ticket through the TACC portal. The chsh ("change shell") command will not work on TACC systems.

When you start a shell on Longhorn, system-level startup files initialize your account-level environment and aliases before the system sources your own user-level startup scripts. You can use these startup scripts to customize your shell by defining your own environment variables, aliases, and functions. These scripts (e.g. .profile and .bashrc) are generally hidden files: so-called dotfiles that begin with a period, visible when you execute: ls -a.

Before editing your startup files, however, it's worth taking the time to understand the basics of how your shell manages startup. Bash startup behavior is very different from the simpler csh behavior, for example. The Bash startup sequence varies depending on how you start the shell (e.g. using ssh to open a login shell, executing the bash command to begin an interactive shell, or launching a script to start a non-interactive shell). Moreover, Bash does not automatically source your .bashrc when you start a login shell by using ssh to connect to a node. Unless you have specialized needs, however, this is undoubtedly more flexibility than you want: you will probably want your environment to be the same regardless of how you start the shell. The easiest way to achieve this is to execute source ~/.bashrc from your .profile, then put all your customizations in .bashrc. The system-generated default startup scripts demonstrate this approach. We recommend that you use these default files as templates.

For more information see the Bash Users' Startup Files: Quick Start Guide and other online resources that explain shell startup. To recover the originals that appear in a newly created account, execute /usr/local/startup_scripts/install_default_scripts.
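
The pattern described above can be sketched as follows; the file contents, variable names, and the throwaway HOME directory are illustrative, not TACC's actual default files:

```shell
# Minimal sketch of the recommended startup-file pattern, demonstrated in a
# throwaway HOME so it can run anywhere (settings here are illustrative).
tmphome=$(mktemp -d)

# ~/.bashrc holds all your customizations
cat > "$tmphome/.bashrc" <<'EOF'
export MY_SETTING=hello
alias ll='ls -l'
EOF

# ~/.profile simply delegates to ~/.bashrc, so login and non-login
# shells end up with the same environment
cat > "$tmphome/.profile" <<'EOF'
[ -f "$HOME/.bashrc" ] && . "$HOME/.bashrc"
EOF

# A login-style shell sourcing .profile now picks up the .bashrc settings
HOME="$tmphome" bash -c '. "$HOME/.profile"; echo "$MY_SETTING"'
```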

Environment Variables

Your environment includes the environment variables and functions defined in your current shell: those initialized by the system, those you define or modify in your account-level startup scripts, and those defined or modified by the modules that you load to configure your software environment. Be sure to distinguish between an environment variable's name (e.g. HISTSIZE) and its value ($HISTSIZE). Understand as well that a sub-shell (e.g. a script) inherits environment variables from its parent, but does not inherit ordinary shell variables or aliases. Use export (in Bash) or setenv (in csh) to define an environment variable.

Execute the env command to see the environment variables that define the way your shell and child shells behave.

Pipe the results of env into grep to focus on specific environment variables. For example, to see all environment variables that contain the string GIT (in all caps), execute:

$ env | grep GIT

The environment variables PATH and LD_LIBRARY_PATH are especially important. PATH is a colon-separated list of directory paths that determines where the system looks for your executables. LD_LIBRARY_PATH is a similar list that determines where the system looks for shared libraries.
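
For example, you can inspect $PATH one directory per line, and prepend a directory of your own so its executables are found first; the directory name here is just an illustration:

```shell
# Show the executable search path, one directory per line
echo "$PATH" | tr ':' '\n'

# Prepend a personal bin directory (illustrative) so its executables
# take precedence over system copies of the same name
export PATH="$HOME/bin:$PATH"
```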

Using Modules to Manage your Environment

Lmod, a module system developed and maintained at TACC, makes it easy to manage your environment so you have access to the software packages and versions that you need to conduct your research. This is especially important on a system like Longhorn that serves thousands of users with an enormous range of needs. Loading a module amounts to choosing a specific package from among available alternatives:

$ module load xl          # load the default IBM compiler
$ module load xl/16.1.1   # load a specific version of the IBM compiler (v16.1.1)

A module does its job by defining or modifying environment variables (and sometimes aliases and functions). For example, a module may prepend appropriate paths to $PATH and $LD_LIBRARY_PATH so that the system can find the executables and libraries associated with a given software package. The module creates the illusion that the system is installing software for your personal use. Unloading a module reverses these changes and creates the illusion that the system just uninstalled the software:

$ module load   ddt  # defines DDT-related env vars; modifies others
$ module unload ddt  # undoes changes made by load

The module system does more, however. When you load a given module, the module system can automatically replace or deactivate modules to ensure the packages you have loaded are compatible with each other. In the example below, the module system automatically unloads one compiler when you load another, and deactivates IBM-compatible versions of MPI:

$ module load xl             # load default version of IBM compiler
$ module load spectrum_mpi   # load default version of Spectrum MPI
$ module load gcc            # change compiler

Lmod is automatically replacing "xl/16.1.1" with "gcc/9.1.0".

Inactive Modules:
  1) spectrum_mpi

On Longhorn, modules generally adhere to a TACC naming convention when defining environment variables that are helpful for building and running software. For example, the papi module defines TACC_PAPI_BIN (the path to PAPI executables), TACC_PAPI_LIB (the path to PAPI libraries), TACC_PAPI_INC (the path to PAPI include files), and TACC_PAPI_DIR (top-level PAPI directory). After loading a module, here are some easy ways to observe its effects:

$ module show papi   # see what this module does to your environment
$ env | grep PAPI    # see env vars that contain the string PAPI
$ env | grep -i papi # case-insensitive search for 'papi' in environment

To see the modules you currently have loaded:

$ module list

To see all modules that you can load right now because they are compatible with the currently loaded modules:

$ module avail

To see all installed modules, even if they are not currently available because they are incompatible with your currently loaded modules:

$ module spider   # list all modules, even those not available to load

To filter your search:

$ module spider cuda          # all modules with names containing 'cuda'
$ module spider cuda/10.1     # additional details on a specific module

Among other things, the latter command will tell you which modules you need to load before the module is available to load. You might also search for modules that are tagged with a keyword related to your needs (though your success here depends on the diligence of the module writers). For example:

$ module keyword performance

You can save a collection of modules as a personal default collection that will load every time you log into Longhorn. To do so, load the modules you want in your collection, then execute:

$ module save    # save the currently loaded collection of modules 

Two commands make it easy to return to a known, reproducible state:

$ module reset   # load the system default collection of modules
$ module restore # load your personal default collection of modules

On TACC systems, the command module reset is equivalent to module purge; module load TACC. It's a safer, easier way to get to a known baseline state than issuing the two commands separately.

Help text is available for both individual modules and the module system itself:

$ module help cuda/10.1     # show help text for the cuda/10.1 module
$ module help               # show help text for the module system itself

See Lmod's online documentation for more extensive documentation. The online documentation addresses the basics in more detail, but also covers several topics beyond the scope of the help text (e.g. writing and using your own module files).

It's safe to execute module commands in job scripts. In fact, this is a good way to write self-documenting, portable job scripts that produce reproducible results. If you use module save to define a personal default module collection, it's rarely necessary to execute module commands in shell startup scripts, and it can be tricky to do so safely. If you do wish to put module commands in your startup scripts, see Longhorn's default startup scripts for a safe way to do so.

Good Conduct on Longhorn

You share Longhorn with many other users, sometimes hundreds at a time, and what you do on the system affects them. All users must follow good practices, which entail limiting activities that may impact the system for other users. Exercise good conduct to ensure that your activity does not adversely impact the system and the research community with whom you share it.

TACC staff has developed the following guidelines for good conduct on Longhorn. Please familiarize yourself especially with the first two mandates. The next sections discuss best practices for limiting and minimizing I/O activity and file transfers. Finally, we provide job submission tips to help you construct job scripts that minimize wait times in the queues.

Do Not Run Jobs on the Login Nodes

Longhorn's few login nodes are shared among all users. Dozens (sometimes hundreds) of users may be logged on at one time accessing the file systems. Think of the login nodes as a prep area, where users may edit and manage files, compile code, perform file management, issue transfers, and submit new and track existing batch jobs. The login nodes provide an interface to the "back-end" compute nodes.

The compute nodes are where actual computations occur and where research is done. Hundreds of jobs may be running on all compute nodes, with hundreds more queued up to run. All batch jobs and executables, as well as development and debugging sessions, must be run on the compute nodes. To access compute nodes on TACC resources, one must either submit a job to a batch queue or initiate an interactive session using the idev utility.

A single user running computationally expensive or disk-intensive tasks will negatively impact performance for other users. Running jobs on the login nodes is one of the fastest routes to account suspension. Instead, run on the compute nodes via an interactive session (idev) or by submitting a batch job.

Do not run jobs or perform intensive computational activity on the login nodes or the shared file systems.
Your account may be suspended and you will lose access to the queues if your jobs are impacting other users.

Dos and Don'ts on the Login Nodes

  • Do not run research applications on the login nodes; this includes frameworks like MATLAB and R, as well as computationally or I/O intensive Python scripts. If you need interactive access, use the idev utility or Slurm's srun to schedule one or more compute nodes.

    DO THIS: Start an interactive session on a compute node, then run MATLAB.

      login1$ idev
      nid00181$ matlab

    DO NOT DO THIS: Run MATLAB or other software packages on a login node.

      login1$ matlab
  • Do not launch too many simultaneous processes; while it's fine to compile on a login node, a command like "make -j 16" (which compiles on 16 cores) may impact other users.

    DO THIS: build and submit a batch job. All batch jobs run on the compute nodes.

      login1$ make mytarget
      login1$ sbatch myjobscript

    DO NOT DO THIS: Invoke multiple build sessions.

      login1$ make -j 12

    DO NOT DO THIS: Run an executable on a login node.

      login1$ ./myprogram
  • That script you wrote to poll job status should probably do so once every few minutes rather than several times a second.
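
A polite polling loop can be sketched like this; the helper function, its arguments, and the job-status command in the usage comment are all hypothetical:

```shell
# Hypothetical well-behaved polling helper: checks a condition at a fixed
# interval instead of hammering the scheduler in a tight loop.
poll_until_done() {
    check_cmd=$1      # command that succeeds once the job is finished
    interval=$2       # seconds between checks; think minutes, not milliseconds
    until $check_cmd; do
        sleep "$interval"
    done
}

# e.g. poll_until_done "my_job_is_finished 12345" 300   # check every 5 minutes
```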

Do Not Stress the Shared File Systems

The TACC Global Shared File System, Stockyard, is mounted on most TACC HPC resources as the /work ($WORK) directory. This file system is accessible to all TACC users, and therefore experiences a lot of I/O activity (reading and writing to disk, opening and closing files) as users run their jobs and read and generate data, including intermediate and checkpointing files. As TACC adds more users, the stress on the $WORK file system is increasing to the extent that TACC staff now recommends the following job submission guidelines in order to reduce stress and I/O on Stockyard.

  • Copy or move all job input files to $SCRATCH
  • Make sure your job script directs all output to $SCRATCH

Consider that $HOME and $WORK are for storage and keeping track of important items. Actual job activity, reading and writing to disk, should be offloaded to your resource's $SCRATCH file system (see the File System Usage Recommendations table below). You can start a job from anywhere, but the actual work of the job should occur only on the $SCRATCH partition. You can save original items to $HOME or $WORK so that you can copy them over to $SCRATCH if you need to re-generate results.

Compute nodes should not reference $WORK unless it's to stage data in/out only before/after jobs.

More File System Tips

  • Don't run jobs in your $HOME directory. The $HOME file system is for routine file management, not parallel jobs.

  • Avoid storing many small files in a single directory, and avoid workflows that require many small files. A few hundred files in a single directory is probably fine; tens of thousands is almost certainly too many. If you must use many small files, group them in separate directories of manageable size.

  • Watch all your file system quotas. If you're near your quota in $WORK and your job is repeatedly trying (and failing) to write to $WORK, you will stress that file system. If you're near your quota in $HOME, jobs run on any file system may fail, because all jobs write some data to the hidden $HOME/.slurm directory.

  • TACC resources, with a few exceptions, mount three file systems: /home, /work and /scratch. Please follow each file system's recommended usage.

Table. File System Usage Recommendations

File System   Best Storage Practices                           Best Activities
$HOME         cron jobs, small scripts, environment settings   compiling, editing
$SCRATCH      temporary datasets, I/O files, job files         all job I/O activity
$WORK         software installations, original datasets that   staging datasets
              can't be reproduced, job scripts and templates

Limit Input/Output (I/O) Activity

In addition to the file system tips above, it's important to limit your jobs' I/O activity. This section focuses on ways to avoid causing problems on each resource's shared file systems.

  • Limit I/O intensive sessions (lots of reads and writes to disk, rapidly opening or closing many files)

  • Avoid opening and closing files repeatedly in tight loops. Every open/close operation on the file system requires interaction with the MetaData Service (MDS). The MDS acts as a gatekeeper for access to files on Lustre's parallel file system. Overloading the MDS will affect other users on the system. If possible, open files once at the beginning of your program/workflow, then close them at the end.

  • Don't get greedy. If you know or suspect your workflow is I/O intensive, don't submit a pile of simultaneous jobs. Writing restart/snapshot files can stress the file system; avoid doing so too frequently. Also, use the HDF5 or NetCDF libraries to generate a single restart file in parallel, rather than generating files from each process separately.
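
The open-once advice applies even to shell scripts, where each appending redirection is its own open/close; this sketch contrasts the two patterns (file names and the throwaway directory are illustrative):

```shell
# Each '>>' below opens and closes the file, producing one metadata
# operation per iteration -- the pattern to avoid
workdir=$(mktemp -d)
for i in $(seq 1 1000); do
    echo "line $i" >> "$workdir/results_bad.txt"
done

# Better: redirect the whole loop, so the file is opened once and closed
# once no matter how many writes occur
for i in $(seq 1 1000); do
    echo "line $i"
done > "$workdir/results_good.txt"
```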

If you know your jobs will require significant I/O, please submit a support ticket and an HPC consultant will work with you. See also Managing I/O on TACC Resources for additional information.

File Transfer Guidelines

In order to not stress both internal and external networks, be mindful of the following guidelines:

  • Avoid too many simultaneous file transfers. You share the network bandwidth with other users; don't use more than your fair share. Two or three concurrent scp sessions is probably fine. Twenty is probably not.

  • Avoid recursive file transfers, especially those involving many small files. Create a tar archive before transfers. This is especially true when transferring files to or from Ranch.

  • When creating or transferring large files to Stockyard ($WORK), be sure to stripe the receiving directories. See STRIPING for more information.

Job Submission Tips

  • Request Only the Resources You Need. Make sure your job scripts request only the resources that are needed for that job. Don't ask for more time or more nodes than you really need. The scheduler will have an easier time finding a slot for a job requesting 2 nodes for 2 hours than for a job requesting 4 nodes for 24 hours. This means shorter queue wait times for you and everybody else.

  • Test your submission scripts. Start small: make sure everything works on 2 nodes before you try 20. Work out submission bugs and kinks with 5-minute jobs that won't wait long in the queue and involve short, simple substitutes for your real workload: simple test problems; hello world codes; one-liners like ibrun hostname; or an ldd on your executable.

  • Respect memory limits and other system constraints. If your application needs more memory than is available, your job will fail, and may leave nodes in unusable states. Use TACC's Remora tool to monitor your application's needs.
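
A throwaway test job of the kind suggested above might look like the following sketch; the job name, output file, and task counts are illustrative, and the queue name comes from this guide's queue descriptions:

```shell
#!/bin/bash
# Hypothetical 5-minute, 2-node test job using standard Slurm directives;
# adjust the queue, node count, and task count for your allocation.
#SBATCH -J quicktest          # job name (illustrative)
#SBATCH -o quicktest.o%j      # stdout file; %j expands to the job ID
#SBATCH -p v100               # Longhorn's main GPU queue
#SBATCH -N 2                  # number of nodes
#SBATCH -n 8                  # total MPI tasks
#SBATCH -t 00:05:00           # short time limit keeps queue waits down

ibrun hostname                # trivial workload: just report where we ran
```

Submit it with `sbatch` from $SCRATCH and confirm the output file lists the expected hosts before scaling up.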

File Systems on Longhorn

Longhorn mounts two GPFS file systems, home and scratch, and one Lustre file system, work, that are shared across all nodes. Longhorn's startup mechanisms define corresponding account-level environment variables $HOME, $SCRATCH and $WORK that store the paths to directories that you own on each of these file systems. Consult the Longhorn File Systems table below for the basic characteristics of these file systems, and the Good Conduct sections for guidance on file system and other system etiquette.

Longhorn's home and scratch file systems are mounted only on Longhorn, but the work file system mounted on Longhorn is the Global Shared File System hosted on Stockyard. This is the same work file system that is currently available on Frontera, Stampede2 and most other TACC resources.

The $STOCKYARD environment variable points to the highest-level directory that you own on the Global Shared File System. The definition of the $STOCKYARD environment variable is of course account-specific, but you will see the same value on all TACC systems that provide access to the Global Shared File System (see Table 3). This directory is an excellent place to store files you want to access regularly from multiple TACC resources.

File System   Quota                            Key Features
$HOME         10GB, 300,000 files              Not intended for parallel or high-intensity file operations.
                                               Backed up regularly.
                                               Defaults: 1 stripe, 1MB stripe size.
                                               Not purged.
$WORK         3,000,000 files across all       Not intended for high-intensity file operations or jobs
              TACC systems, regardless of      involving very large files.
              where on the file system the     On the Global Shared File System that is mounted on most
              files reside                     TACC systems.
                                               Defaults: 1 stripe, 1MB stripe size.
                                               Not backed up.
                                               Not purged.
$SCRATCH      no quota                         Overall capacity 4.5 PB.
                                               Defaults: 1 stripe, 1MB stripe size.
                                               Not backed up.
                                               Subject to purge if access time* is more than 10 days old.
/tmp          no quota                         ~700GB available per node.
                                               Each node's /tmp partition is purged at the end of a job.

Scratch File System Purge Policy

The $SCRATCH file system, as its name indicates, is a temporary storage space. Files that have not been accessed* in ten days are subject to purge. Deliberately modifying file access time (using any method, tool, or program) for the purpose of circumventing purge policies is prohibited.

*The operating system updates a file's access time when that file is modified on a login or compute node or any time that file is read. Reading or executing a file/script will update the access time. Use the "ls -ul" command to view access times.
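
You can use `find` with `-atime` to spot files at risk of purge; this sketch sets up a throwaway file with an artificially old access time to demonstrate (the directory and file name are illustrative):

```shell
# Create a throwaway file and push its access time 30 days into the past
# (GNU touch; the path and date are illustrative)
tmpdir=$(mktemp -d)
touch "$tmpdir/old_data"
touch -a -d "30 days ago" "$tmpdir/old_data"

# List files not accessed in the last 10 days -- purge candidates
find "$tmpdir" -type f -atime +10
```

On Longhorn you would point the same `find` invocation at a directory under $SCRATCH.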

Stockyard Work File System
Figure 3. Account-level directories on the work file system (Global Shared File System hosted on Stockyard). Example for fictitious user bjones. All directories usable from all systems. Sub-directories (e.g. lonestar6, stampede2) exist only when you have allocations on the associated system.

Your account-specific $WORK environment variable varies from system to system and is a subdirectory of $STOCKYARD (Figure 3). The subdirectory name corresponds to the associated TACC resource. The $WORK environment variable on Longhorn points to the $STOCKYARD/longhorn subdirectory, a convenient location for files you use and jobs you run on Longhorn. Remember, however, that all subdirectories contained in your $STOCKYARD directory are available to you from any system that mounts the file system. If you have accounts on both Longhorn and Stampede2, for example, the $STOCKYARD/longhorn directory is available from your Stampede2 account, and the $STOCKYARD/stampede2 directory is available from your Longhorn account. Your quota and reported usage on the Global Shared File System reflect all files that you own on Stockyard, regardless of their actual location on the file system.

Note that resource-specific subdirectories of $STOCKYARD are simply convenient ways to manage your resource-specific files. You have access to any such subdirectory from any TACC resource. If you are logged into Longhorn, for example, executing the alias cdw (equivalent to cd $WORK) will take you to the resource-specific subdirectory $STOCKYARD/longhorn. But you can access this directory from other TACC systems as well by executing cd $STOCKYARD/longhorn. These commands allow you to share files across TACC systems. In fact, several convenient account-level aliases make it even easier to navigate across the directories you own in the shared file systems:

Alias Command
cd or cdh cd $HOME
cds cd $SCRATCH
cdy or cdg cd $STOCKYARD
cdw cd $WORK

Transferring Files with scp

You can transfer files between Longhorn and Linux-based systems using either scp or rsync. Both scp and rsync are available in the Mac Terminal app. Windows SSH clients typically include scp-based file transfer capabilities.

The Linux scp (secure copy) utility is a component of the OpenSSH suite. Assuming your Longhorn username is bjones, a simple scp transfer that pushes a file named myfile from your local Linux system to Longhorn $HOME would look like this:

localhost$ scp ./myfile bjones@longhorn.tacc.utexas.edu:  # note colon after net address

You can use wildcards, but you need to be careful about when and where you want wildcard expansion to occur. For example, to push all files ending in .txt from the current directory on your local machine to /work/01234/bjones/longhorn on Longhorn:

localhost$ scp *.txt bjones@longhorn.tacc.utexas.edu:/work/01234/bjones/longhorn

To delay wildcard expansion until reaching Longhorn, use a backslash (\) as an escape character before the wildcard. For example, to pull all files ending in .txt from /work/01234/bjones/longhorn on Longhorn to the current directory on your local system:

localhost$ scp bjones@longhorn.tacc.utexas.edu:/work/01234/bjones/longhorn/\*.txt .

You can of course use shell or environment variables in your calls to scp. For example:

localhost$ destdir="/work/01234/bjones/longhorn/data"
localhost$ scp ./myfile bjones@longhorn.tacc.utexas.edu:$destdir

You can also issue scp commands on your local client that use Longhorn environment variables like $HOME, $WORK, and $SCRATCH. To do so, use a backslash (\) as an escape character before the $; this ensures that expansion occurs after establishing the connection to Longhorn:

localhost$ scp ./myfile bjones@longhorn.tacc.utexas.edu:\$SCRATCH/data   # Note backslash

Avoid using scp for recursive transfers of directories that contain nested directories of many small files:

localhost$ scp -r ./mydata     bjones@longhorn.tacc.utexas.edu:\$SCRATCH  # DON'T DO THIS

Instead, use tar to create an archive of the directory, then transfer the directory as a single file:

localhost$ tar cvf ./mydata.tar mydata                                     # create archive
localhost$ scp ./mydata.tar bjones@longhorn.tacc.utexas.edu:\$WORK         # transfer archive
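After the transfer, unpack the archive on Longhorn with tar xf. The whole cycle can be sketched locally (all file names here are hypothetical):

```shell
# Create a sample directory, archive it, then extract it elsewhere
mkdir -p mydata && echo "sample" > mydata/results.txt
tar cf mydata.tar mydata               # archive, as above
mkdir -p unpacked
tar xf mydata.tar -C unpacked          # what you would run on Longhorn after the scp
cat unpacked/mydata/results.txt        # sample
```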

Transferring Files with rsync

The rsync (remote synchronization) utility is a great way to synchronize files that you maintain on more than one system: when you transfer files using rsync, the utility copies only the changed portions of individual files. As a result, rsync is especially efficient when you only need to update a small fraction of a large dataset. The basic syntax is similar to scp:

localhost$ rsync       mybigfile bjones@longhorn.tacc.utexas.edu:\$SCRATCH/data
localhost$ rsync -avtr mybigdir  bjones@longhorn.tacc.utexas.edu:\$SCRATCH/data

The options on the second transfer are typical and appropriate when syncing a directory: this is a recursive update (-r) with verbose (-v) feedback; the synchronization preserves time stamps (-t) as well as symbolic links and other metadata (-a). Because rsync only transfers changes, recursive updates with rsync may be less demanding than an equivalent recursive transfer with scp.

Sharing Files with Collaborators

If you wish to share files and data with collaborators in your project, see Sharing Project Files on TACC Systems for step-by-step instructions. Project managers or delegates can use Unix group permissions and commands to create read-only or read-write shared workspaces that function as data repositories and provide a common work area to all project members.
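A minimal sketch of such a shared workspace, assuming you have already identified your project's Unix group (the group name and directory location are hypothetical; the linked document gives the authoritative steps):

```shell
# Create a group-writable workspace; the setgid bit (the leading 2) makes new
# files inherit the directory's group rather than the creator's default group
mkdir -p shared_workspace
chmod 2770 shared_workspace            # owner+group rwx, no access for others
# In practice you would also run: chgrp G-12345 shared_workspace   (group name assumed)
stat -c "%a" shared_workspace          # 2770
```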

Launching Applications

The primary purpose of your job script is to launch your research application. How you do so depends on several factors, especially (1) the type of application (e.g. MPI, OpenMP, serial), and (2) what you're trying to accomplish (e.g. launch a single instance, complete several steps in a workflow, run several applications simultaneously within the same job). While there are many possibilities, your own job script will probably include a launch line that is a variation of one of the examples described in this section.

Launching Single GPU Applications

There are four GPUs per node indexed 0-3. By default, only GPU 0 is visible to serial GPU applications. Launching a serial GPU application takes the form:

$ ./mycode.cuda          # compiled CUDA executable

To target the executable to a specific GPU, set the CUDA_VISIBLE_DEVICES environment variable. For example, to run an application on GPU 2:

$ CUDA_VISIBLE_DEVICES=2 ./mycode.cuda

This method can be used to run four serial GPU applications simultaneously, each on its own GPU. This can be useful when the same code needs to be run many times under multiple conditions, and it makes more efficient use of the nodes when all four GPUs are active:

$ CUDA_VISIBLE_DEVICES=0 ./mycode.cuda  &
$ CUDA_VISIBLE_DEVICES=1 ./mycode.cuda  &
$ CUDA_VISIBLE_DEVICES=2 ./mycode.cuda  &
$ CUDA_VISIBLE_DEVICES=3 ./mycode.cuda  &
$ wait

The trailing ampersand ("&") puts each process in the background, and the wait command pauses the job script until all background processes complete. To confirm which GPUs are active, and which processes are running on each GPU, use the nvidia-smi tool:

$ nvidia-smi
Tue Nov 26 14:26:55 2019       
| NVIDIA-SMI 418.67       Driver Version: 418.67       CUDA Version: 10.1     |
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|   0  Tesla V100-SXM2...  On   | 00000004:04:00.0 Off |                    0 |
| N/A   34C    P0   103W / 300W |    319MiB / 16130MiB |    100%      Default |
|   1  Tesla V100-SXM2...  On   | 00000004:05:00.0 Off |                    0 |
| N/A   38C    P0   107W / 300W |    319MiB / 16130MiB |    100%      Default |
|   2  Tesla V100-SXM2...  On   | 00000035:03:00.0 Off |                    0 |
| N/A   35C    P0   103W / 300W |    319MiB / 16130MiB |    100%      Default |
|   3  Tesla V100-SXM2...  On   | 00000035:04:00.0 Off |                    0 |
| N/A   36C    P0   106W / 300W |    319MiB / 16130MiB |    100%      Default |

| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|    0    131794      C   ./mycode.cuda                                309MiB |
|    1    131798      C   ./mycode.cuda                                309MiB |
|    2    131803      C   ./mycode.cuda                                309MiB |
|    3    131806      C   ./mycode.cuda                                309MiB |

Launching Multi-GPU Applications

Typical workflows may use MPI and ibrun to launch a GPU application using multiple GPUs on one node, or even multiple GPUs on multiple nodes. There is no need to set CUDA_VISIBLE_DEVICES as demonstrated above, as ibrun will handle GPU assignment among the hosts. For example, to run a compiled application on all four GPUs on one node:

login1$ idev -N1   # launch an interactive session with one node
c001-005$ ibrun -np 4 ./mycode.cuda --num_gpus=1 --other_options

In this hypothetical example, ibrun launches four instances of the mycode.cuda executable, and the --num_gpus flag tells each instance to use one GPU. The ibrun tool is also aware of multiple hosts from information in your environment. If you are running a two-node job, you can launch across all eight GPUs in similar fashion:

login1$ idev -N2   # launch an interactive session with two nodes
c001-005$ ibrun -np 8 ./mycode.cuda --num_gpus=1 --other_options

See the Tensorflow at TACC document for more information and examples about running on multiple GPUs or multiple nodes.
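The interactive idev sessions above have a batch equivalent. Here is a minimal sketch of a Slurm job script for the two-node case; the job name, allocation name, and the --num_gpus flag are placeholders, not Longhorn-specific requirements:

```shell
#!/bin/bash
#SBATCH -J mygpujob         # job name (placeholder)
#SBATCH -N 2                # two nodes
#SBATCH -n 8                # eight MPI tasks total: four per node, one per GPU
#SBATCH -p v100             # queue (partition)
#SBATCH -t 02:00:00         # maximum run time hh:mm:ss
#SBATCH -A myproject        # allocation to charge (placeholder)

ibrun -np 8 ./mycode.cuda --num_gpus=1
```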

Launching Multi-GPU with CUDA

If you plan to launch multi-GPU applications using CUDA-aware Spectrum MPI, make sure to export the following environment variable:

export MY_SPECTRUM_OPTIONS="--gpu"

Launching One Serial Application

To launch a serial application, simply call the executable. Specify the path to the executable in either the PATH environment variable or in the call to the executable itself:

myprogram                       # executable in a directory listed in $PATH
$WORK/apps/myprov/myprogram     # explicit full path to executable
./myprogram                     # executable in current directory
./myprogram -m -k 6 input1      # executable with notional input options

Running Jobs on Longhorn's GPU Nodes

Like all TACC resources, Longhorn's job scheduler is the Slurm Workload Manager. Slurm commands enable you to submit, manage, monitor, and control your jobs. Jobs submitted to the scheduler are queued, then run on the compute nodes. Each job consumes Service Units (SUs) which are then charged to your allocation.

Job Accounting

Like all TACC systems, Longhorn's accounting system is based on node-hours: one unadjusted Service Unit (SU) represents a single compute node used for one hour (a node-hour). For any given job, the total cost in SUs is the use of one compute node for one hour of wall clock time plus any charges or discounts for the use of specialized queues, e.g. Frontera's flex queue, Stampede2's development queue, and Longhorn's v100 queue. The queue charge rates are determined by the supply and demand for that particular queue or type of node used and are subject to change.

Longhorn SUs billed = (# nodes) x (job duration in wall clock hours) x (charge rate per node-hour)
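For example, a 2-node job that runs for 3 wall-clock hours in a queue charged at 6 SUs per node-hour costs 36 SUs; the arithmetic in shell:

```shell
# Example values only: nodes x hours x charge rate
nodes=2; hours=3; rate=6
echo "$(( nodes * hours * rate )) SUs"      # 36 SUs
```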

The Slurm scheduler tracks and charges for usage to a granularity of a few seconds of wall clock time. The system charges only for the resources you actually use, not those you request. If your job finishes early and exits properly, Slurm will release the nodes back into the pool of available nodes. Your job will only be charged for as long as you are using the nodes.

TACC does not implement node-sharing on any compute resource. Each Longhorn node can be assigned to only one user at a time; hence a complete node is dedicated to a user's job and accrues wall-clock time for all the node's cores whether or not all cores are used.

Tip: Your queue wait times will be less if you request only the time you need: the scheduler will have a much easier time finding a slot for the 2 hours you really need than say, for the 12 hours requested in your job script.

Principal Investigators can monitor allocation usage via the TACC User Portal under "Allocations->Projects and Allocations". Be aware that the figures shown on the portal may lag behind the most recent usage. Projects and allocation balances are also displayed upon command-line login.

To display a summary of your TACC project balances and disk quotas at any time, execute:

login1$ /usr/local/etc/taccinfo        # Generally more current than balances displayed on the portals.

Requesting Resources

Be sure to request computing resources (e.g., number of nodes, number of tasks per node, maximum time per job) that are consistent with the type of application(s) you are running:

  • A serial (non-parallel) application can only make use of a single core on a single node, and will only see that node's memory.

  • An MPI (Message Passing Interface) program can exploit the distributed computing power of multiple nodes: it launches multiple copies of its executable (MPI tasks, each assigned unique IDs called ranks) that can communicate with each other across the network. The tasks on a given node, however, can only directly access the memory on that node. Depending on the program's memory requirements, it may not be possible to run a task on every core of every node assigned to your job. If it appears that your MPI job is running out of memory, try launching it with fewer tasks per node to increase the amount of memory available to individual tasks.

  • A popular type of parameter sweep (sometimes called high throughput computing) involves submitting a job that simultaneously runs many copies of one serial or threaded application, each with its own input parameters ("Single Program Multiple Data", or SPMD). The launcher tool is designed to make it easy to submit this type of job. For more information:

      $ module load launcher_gpu
      $ module help launcher_gpu
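By convention, TACC's launcher reads a plain-text job file with one independent command per line and distributes those commands across the resources in your job. A hypothetical job file might look like the sketch below; the file contents and flag names are placeholders, and you should consult the module help shown above for the GPU variant's configuration details:

```shell
# Hypothetical launcher job file (e.g. "jobfile"): one independent run per line
./mycode.cuda --input case01
./mycode.cuda --input case02
./mycode.cuda --input case03
./mycode.cuda --input case04
```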

Slurm Job Scheduler

Longhorn employs the Slurm workload manager, the job scheduler common to all TACC HPC resources. Slurm commands enable you to submit, manage, monitor, and control your jobs.

The Stampede2 User Guide discusses Slurm extensively; consult it for detailed information on Slurm commands and job submission.

Longhorn Production Queues

Longhorn's current Slurm partitions (queues), maximum node limits, and charge rates are summarized in the table below. Execute qlimits on Longhorn for real-time information regarding limits on available queues. See Job Accounting to learn how jobs are charged to your allocation.

Table 5. Longhorn Production Queues

Queue status as of February 19, 2020. Queues and limits are subject to change without notice.

Queue Name              Max Nodes per Job (assoc'd cores)    Max Job Duration    Charge Rate (per node-hour)
development (8 nodes)   2 nodes (80 cores, 8 GPUs)           2 hours             1 Service Unit (SU)
v100 (88 nodes)         32 nodes (1280 cores, 128 GPUs)      48 hours            6 SUs
v100-lm (8 nodes)       8 nodes (320 cores, 32 GPUs)         48 hours            6 SUs

To request more nodes than are available in the v100 queue, submit a consulting (help desk) ticket through the TACC User Portal. Include in your request reasonable evidence of your readiness to run under the conditions you're requesting. In most cases this should include your own strong or weak scaling results obtained from previous Longhorn jobs.


Customizing your Job Script

Customize your job script to specify and refine your job's requirements:

  • specify the maximum run time with the -t option
  • specify the number of nodes needed with the -N option
  • specify the number of tasks per node with the -n option
  • specify the project to be charged with the -A option

In general, the fewer resources (nodes) you specify in your batch script, the less time your job will wait in the queue. See 4. Request Only the Resources You Need in the Good Conduct section.

Consult Table 6 in the Stampede2 User Guide for a listing of common Slurm #SBATCH options.

Job Management

In this section, we present several Slurm commands and other utilities that are available to help you plan and track your job submissions as well as check the status of the Slurm queues.

When interpreting queue and job status, remember that Longhorn doesn't operate on a first-come-first-served basis. Instead, the sophisticated, tunable algorithms built into Slurm attempt to keep the system busy, while scheduling jobs in a way that is as fair as possible to everyone. At times this means leaving nodes idle ("draining the queue") to make room for a large job that would otherwise never run. It also means considering each user's "fair share", scheduling jobs so that those who haven't run jobs recently may have a slightly higher priority than those who have.

TACC's qlimits command

To display resource limits for the Longhorn queues, execute qlimits. The result is real-time data; the corresponding information in this document's table of Longhorn queues may lag behind the actual configuration that the qlimits utility displays.

Slurm's sinfo command

Slurm's sinfo command allows you to monitor the status of the queues. If you execute sinfo without arguments, you'll see a list of every node in the system together with its status. To skip the node list and produce a tight, alphabetized summary of the available queues and their status, execute:

login1$ sinfo -S+P -o "%18P %8a %20F"    # compact summary of queue status

An excerpt from this command's output might look like this:

login1$ sinfo -S+P -o "%18P %8a %20F"
PARTITION          AVAIL    NODES(A/I/O/T)    
development        up       0/8/0/8
v100               up       44/43/1/96          
v100-lm            up       0/8/0/8

The AVAIL column displays the overall status of each queue (up or down), while the column labeled NODES(A/I/O/T) shows the number of nodes in each of several states ("Allocated", "Idle", "Offline", and "Total"). Execute man sinfo for more information. Use caution when reading the generic documentation, however: some available fields are not meaningful or are misleading on Longhorn (e.g. TIMELIMIT, displayed using the %l option).
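If you need those counts in a script, the packed NODES(A/I/O/T) field splits cleanly on "/". A small sketch using the v100 row from the excerpt above (the captured line is hard-coded here for illustration):

```shell
# Extract the idle and total counts from a captured sinfo row
line="v100               up       44/43/1/96"
echo "$line" | awk '{split($3, n, "/"); print n[2] " idle of " n[4] " total"}'   # 43 idle of 96 total
```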

Slurm's squeue command

Slurm's squeue command allows you to monitor jobs in the queues, whether pending (waiting) or currently running:

login1$ squeue             # show all jobs in all queues
login1$ squeue -u bjones   # show all jobs owned by bjones
login1$ man squeue         # more info

An excerpt from the default output might look like this:

JOBID   PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
25781 development idv72397   bjones CG       9:36      2 c001-011,012
25918 development ppm_4828   bjones PD       0:00     20 (Resources)
25915 development MV2-test    siliu PD       0:00     14 (Priority)
25589        v100   aatest slindsey PD       0:00      8 (Dependency)
25949 development psdns_la sniffjck PD       0:00      2 (Priority)
25618        v100   SP256U   connor PD       0:00      1 (Dependency)
25944        v100  MoTi_hi   wchung  R      35:13      1 c005-003
25945        v100 WTi_hi_e   wchung  R      27:11      1 c006-001
25606        v100   trainA   jackhu  R   23:28:28      1 c008-012

The column labeled ST displays each job's status:

  • PD means "Pending" (waiting);
  • R means "Running";
  • CG means "Completing" (cleaning up after exiting the job script).

Pending jobs appear in order of decreasing priority. The last column includes a nodelist for running/completing jobs, or a reason for pending jobs. If you submit a job before a scheduled system maintenance period, and the job cannot complete before the maintenance begins, your job will run when the maintenance/reservation concludes. The squeue command will report ReqNodeNotAvailable ("Required Node Not Available"). The job will remain in the PD state until Longhorn returns to production.

The default format for squeue now reports total nodes associated with a job rather than cores, tasks, or hardware threads. One reason for this change is clarity: the operating system sees each compute node's 160 hardware threads as "processors", and output based on that information can be ambiguous or otherwise difficult to interpret.

The default format lists all nodes assigned to displayed jobs; this can make the output difficult to read. A handy variation that suppresses the nodelist is:

login1$ squeue -o "%.10i %.12P %.12j %.9u %.2t %.9M %.6D"  # suppress nodelist

The --start option displays job start times, including very rough estimates for the expected start times of some pending jobs that are relatively high in the queue:

login1$ squeue --start -j 167635     # display estimated start time for job 167635

TACC's showq utility

TACC's showq utility mimics a tool that originated in the PBS project, and serves as a popular alternative to the Slurm squeue command:

login1$ showq                 # show all jobs; default format
login1$ showq -u              # show your own jobs
login1$ showq -U bjones       # show jobs associated with user bjones
login1$ showq -h              # more info

The output groups jobs in four categories: ACTIVE, WAITING, BLOCKED, and COMPLETING/ERRORED. A BLOCKED job is one that cannot yet run due to temporary circumstances (e.g. a pending maintenance or other large reservation).

If your waiting job cannot complete before a maintenance/reservation begins, showq will display its state as WaitNod ("Waiting for Nodes"). The job will remain in this state until Longhorn returns to production.

The default format for showq now reports total nodes associated with a job rather than cores, tasks, or hardware threads. One reason for this change is clarity: the operating system sees each compute node's 160 hardware threads as "processors", and output based on that information can be ambiguous or otherwise difficult to interpret.

Other Job Management Commands

scancel, scontrol, and sacct

It's not possible to add resources to a job (e.g. allow more time) once you've submitted the job to the queue.

To cancel a pending or running job, first determine its jobid, then use scancel:

login1$ squeue -u bjones    # one way to determine jobid
170361        v100   spec12   bjones PD       0:00     32 (Resources)
login1$ scancel 170361      # cancel job

For detailed information about the configuration of a specific job, use scontrol:

login1$ scontrol show job=170361

To view some accounting data associated with your own jobs, use sacct:

login1$ sacct --starttime 2019-06-01  # show jobs that started on or after this date

Dependent Jobs using sbatch

You can use sbatch to help manage workflows that involve multiple steps: the --dependency option allows you to launch jobs that depend on the completion (or successful completion) of another job. For example, you could use this technique to split into three jobs a workflow that requires you to (1) compile on a single node; then (2) compute on 40 nodes; then finally (3) post-process your results using 4 nodes.

login1$ sbatch --dependency=afterok:173210 myjobscript

For more information see the Slurm online documentation. Note that you can use $SLURM_JOBID from one job to find the jobid you'll need to construct the sbatch launch line for a subsequent one. But also remember that you can't use sbatch to submit a job from a compute node.

Discover Installed Software

You can discover already-installed software using TACC's Software Search tool, or by executing "module spider" or "module avail" on the command line.

Users must provide their own license for commercial packages.

At this time, the following software packages are available on Longhorn:

login1$ module avail
----------------- /opt/apps/xl16/spectrum_mpi10_3/modulefiles ------------------
fftw3/3.3.10                  petsc/3.13-i64debug
petsc/3.13-complex            petsc/3.13-i64
petsc/3.13-complexdebug       petsc/3.13-nohdf5
petsc/3.13-complexi64debug    petsc/3.13-single
petsc/3.13-complexi64         petsc/3.13-singledebug
petsc/3.13-cuda               petsc/3.13-uni
petsc/3.13-cudadebug          petsc/3.13-unidebug
petsc/3.13-debug              petsc/3.13             (D)

-------------------------- /opt/apps/xl16/modulefiles --------------------------
hdf5/1.10.4           mvapich2-gdr/2.3.6 (D)    spectrum_mpi/10.3.0 (L)
mvapich2-gdr/2.3.4    netcdf/4.7.4

---------------------------- /opt/apps/modulefiles -----------------------------
TACC                  (L)      python3/powerai_1.6.2
autotools/1.2         (L)      python3/powerai_1.7.0  (D)
cmake/3.16.1          (L)      pytorch-py2/1.0.1
conda/4.8.3                    pytorch-py2/1.1.0      (D)
cuda/10.0             (g)      pytorch-py3/1.0.1
cuda/10.1             (g)      pytorch-py3/1.1.0
cuda/10.2             (g,D)    pytorch-py3/1.2.0
gcc/4.9.3                      pytorch-py3/1.3.1      (D)
gcc/6.3.0                      sanitytool/1.5
gcc/7.3.0             (D)      settarg
gcc/9.1.0                      tacc-singularity/3.5.3
git/2.24.1            (L)      tacc-singularity/3.7.2 (D)
idev/1.5.7                     tacc_tips/0.5
launcher_gpu/1.1               tensorflow-py2/1.13.1
lmod                           tensorflow-py2/1.14.0  (D)
pgi/19.10.0                    tensorflow-py3/1.13.1
pgi/20.7.0            (D)      tensorflow-py3/1.14.0
pylauncher/3.1                 tensorflow-py3/1.15.2
python2/powerai_1.6.0          tensorflow-py3/2.1.0   (D)
python2/powerai_1.6.1 (D)      xalt/2.10.21           (L)
python3/powerai_1.6.0          xl/16.1.1              (L)

Building Software on Longhorn

When building software on Longhorn, we recommend using the IBM compilers and IBM Spectrum MPI stack. This will be the default in the early user period, but may change if we determine one of the other MPI stacks provides superior performance.

IBM Compilers

IBM XL is the recommended and default compiler suite on Longhorn. Here are simple examples that use the IBM compiler to build an executable from source code:

$ xlc -o myexe mycode.c       # C code
$ xlc++ -o myexe mycode.cpp   # C++ code
$ xlf90 -o myexe mycode.f     # Fortran code

See the published IBM documentation, available online, for information on optimization flags and other IBM compiler options.

GNU Compilers

The GNU foundation maintains a number of high quality compilers, including a compiler for C (gcc), C++ (g++), and Fortran (gfortran). The gcc compiler is the foundation underneath all three, and the term gcc often means the suite of these three GNU compilers.

Load a gcc module to access a recent version of the GNU compiler suite. Avoid using the GNU compilers that are available without a gcc module — those will be older versions based on the "system gcc" that comes as part of the Linux distribution.

Here are simple examples that use the GNU compilers to produce an executable from source code:

$ gcc mycode.c                    # C source file; executable a.out
$ gcc -o myexe mycode.c           # C source file; executable myexe
$ g++ -o myexe mycode.cpp         # C++ source file
$ gfortran -o myexe mycode.f90    # Fortran90 source file
$ gcc -fopenmp -o myexe mycode.c  # OpenMP; GNU flag is different than IBM

Note that some compiler options are the same for both IBM and GNU (e.g. "-o"), while others differ (e.g. IBM's "-qsmp=omp" vs GNU's "-fopenmp"). Many options are available in one compiler suite but not the other. See the online GNU documentation for information on optimization flags and other GNU compiler options.

Compiling and Linking MPI Programs

Spectrum MPI (module load spectrum_mpi) and MVAPICH2 (module load mvapich2) are the two MPI libraries available on Longhorn. After loading a spectrum_mpi or mvapich2 module, compile and/or link using the appropriate MPI wrapper (mpicc, mpicxx, mpif90) in place of the compiler:

$ mpicc    mycode.c   -o myexe   # C source, full build
$ mpicc -c mycode.c              # C source, compile without linking
$ mpicxx   mycode.cpp -o myexe   # C++ source, full build
$ mpif90   mycode.f90 -o myexe   # Fortran source, full build

These wrappers call the compiler with the options, include paths, and libraries necessary to produce an MPI executable with the MPI stack you have loaded. To see the effect of a given wrapper, call it with the "-show" option:

$ mpicc -show  # Show compile line generated by call to mpicc; similarly for other wrappers

Compiling with CUDA

NVIDIA's CUDA compiler and libraries are accessed by loading the CUDA module:

login1$ module load cuda

Use the nvcc compiler on the login node to compile code, and run executables on the compute nodes. Longhorn's V100 GPUs are compute capability 7.0 devices. When compiling your code, make sure to specify this level of capability with:

$ nvcc -arch=compute_70 -code=sm_70 ...

The NVIDIA CUDA debugger is cuda-gdb. Applications must be debugged through an interactive idev session. Please see the relevant idev section for more details.

The NVIDIA Compute Visual Profiler, computeprof, can be used to profile both CUDA and OpenCL programs developed in the NVIDIA CUDA/OpenCL programming environment. Since the profiler is X-based, it must be run either within a VNC session or by ssh-ing into an allocated compute node with X-forwarding enabled. The CUDA module adds the profiler command and library paths to your $PATH and $LD_LIBRARY_PATH environment variables.

For further information on the CUDA compiler, programming, the API, and debugger, see the following documentation:

  • $TACC_CUDA_DIR/doc/pdf/CUDA_Compiler_Driver_NVCC.pdf
  • $TACC_CUDA_DIR/doc/pdf/CUDA_C_Programming_Guide.pdf
  • $TACC_CUDA_DIR/doc/pdf/CUDA_Samples.pdf
  • $TACC_CUDA_DIR/doc/pdf/cuda-gdb.pdf

Building Third-Party Software

You are welcome to install packages in your own $HOME or $WORK directories. No super-user privileges are needed; simply use the "--prefix" option when configuring and building the package.

You're welcome to download third-party research software and install it in your own account. In most cases you'll want to download the source code and build the software so it's compatible with the Longhorn software environment. You can't use yum or any other installation process that requires elevated privileges, but this is almost never necessary. The key is to specify an installation directory for which you have write permissions. Details vary; you should consult the package's documentation and be prepared to experiment. When using the famous three-step autotools build process, the standard approach is to pass a non-default, user-owned installation directory to configure via the "--prefix" option:

$ export INSTALLDIR=$WORK/apps/t3pio
$ ./configure --prefix=$INSTALLDIR
$ make
$ make install

Other languages, frameworks, and build systems generally have equivalent mechanisms for installing software in user space. In most cases a web search like "Python Linux install local" will get you the information you need.

In Python, a local install will resemble one of the following examples:

$ pip install netCDF4      --user                   # install netCDF4 package to $HOME/.local
$ python3 setup.py install --user                   # install to $HOME/.local
$ pip3 install netCDF4     --prefix=$INSTALLDIR     # custom location; add to PYTHONPATH
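When you install with --prefix rather than --user, Python will not find the packages until you extend PYTHONPATH. The site-packages path below is an assumption; check it against the Python version of the module you have loaded:

```shell
# After: pip3 install netCDF4 --prefix=$INSTALLDIR
export PYTHONPATH=$INSTALLDIR/lib/python3.6/site-packages:$PYTHONPATH   # version path assumed
```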

Similarly in R:

$ module load Rstats            # load TACC's default R
$ R                             # launch R
> install.packages('devtools')  # R will prompt for install location

You may, of course, need to customize the build process in other ways. It's likely, for example, that you'll need to edit a makefile or other build artifacts to specify Longhorn-specific include and library paths or other compiler settings. A good way to proceed is to write a shell script that implements the entire process: definitions of environment variables, module commands, and calls to the build utilities. Include echo statements with appropriate diagnostics. Run the script until you encounter an error. Research and fix the current problem. Document your experience in the script itself, including dead-ends, alternatives, and lessons learned. Re-run the script to get to the next error, then repeat until done. When you're finished, you'll have a repeatable process that you can archive until it's time to update the software or move to a new machine.

If you wish to share a software package with collaborators, you may need to modify file permissions. See Sharing Files with Collaborators for more information.

Conda Python Environments

TACC staff has deployed a pre-configured version of conda, available as a module. For the best experience on TACC resources, we recommend that you do not install your own version of Conda.

Conda Basics

The conda module can be loaded with:

$ module load conda

Then, list the available conda environments:

$ conda env list

Environments can be loaded with:

$ conda activate [environment]

In this case, [environment] is a place-holder for the name of a specific environment. When finished using an environment, it can be exited by either deactivating the environment:

$ source deactivate

or unloading the module

$ module unload conda

Conda Packages

While you can technically install local packages to your ~/.local directory with pip, packages installed there are visible to every conda environment and supersede the environment's own packages, which can cause hard-to-diagnose conflicts. Instead, we recommend that you install packages directly into a cloned or newly created environment where you have write permissions.

Create, Activate, then Install

$ conda create -n new_env python=3 tensorflow
$ conda activate new_env
$ conda install [new package]
$ pip install [new package]

Note: pip works here because the environment was activated.

Clone and Install

$ conda create --name myclone --clone py2_powerai_1.6.1
$ conda install -n myclone [new package]

Discovering Packages

Longhorn nodes use the PowerPC (ppc64le) architecture, so only pure Python packages and code compiled for PowerPC will run on them. With that said, packages can be searched directly in conda and pip on the command line:

$ conda search tensorflow-gpu
$ pip search quicksect

or browsed online at the Anaconda and PyPI package indexes.

Once again, look for packages that support either "any" or "ppc64le" architectures.

Python-Based Machine Learning

Longhorn uses the IBM Watson Machine Learning CE platform for machine learning frameworks and packages. Packages are distributed via Anaconda Python through the WMLCE repository. While you may be used to using pip to install the latest versions of your preferred machine learning frameworks, we recommend using this repository for several reasons:

  • The modules and environments are tested by IBM before release
  • Each PowerAI release contains a curated ecosystem of machine learning packages precompiled for PowerPC and GPU execution
  • The environments are functional and known, so we can provide support for these packages

Each version of PowerAI supported by Longhorn is cached on the file system and installed in both Python 2 and 3 environments when possible.

$ module load conda
$ conda env list
# conda environments:
base                  *  /scratch/apps/conda/4.8.3
py2_powerai_1.6.0        /scratch/apps/conda/4.8.3/envs/py2_powerai_1.6.0
py2_powerai_1.6.1        /scratch/apps/conda/4.8.3/envs/py2_powerai_1.6.1
py3_powerai_1.6.0        /scratch/apps/conda/4.8.3/envs/py3_powerai_1.6.0
py3_powerai_1.6.1        /scratch/apps/conda/4.8.3/envs/py3_powerai_1.6.1
py3_powerai_1.6.2        /scratch/apps/conda/4.8.3/envs/py3_powerai_1.6.2
py3_powerai_1.7.0        /scratch/apps/conda/4.8.3/envs/py3_powerai_1.7.0

These environments contain curated machine learning packages such as TensorFlow, PyTorch, and Horovod.

To increase the visibility of these environments and packages, we have also exposed some through standard LMOD modules.

$ ml avail
---------------- /opt/apps/modulefiles --------------------
   conda/4.8.3           (L,D)    pytorch-py3/1.1.0
   python2/powerai_1.6.0          pytorch-py3/1.2.0
   python2/powerai_1.6.1 (D)      pytorch-py3/1.3.1     (D)
   python3/powerai_1.6.0          tensorflow-py2/1.13.1
   python3/powerai_1.6.1          tensorflow-py2/1.14.0 (D)
   python3/powerai_1.6.2          tensorflow-py3/1.13.1
   python3/powerai_1.7.0 (D)      tensorflow-py3/1.14.0
   pytorch-py2/1.0.1              tensorflow-py3/1.15.2
   pytorch-py2/1.1.0     (D)      tensorflow-py3/2.1.0  (D)

Notice that loading the tensorflow-py3/1.15.2 module also loads the python3/powerai_1.6.2 module, which loads the py3_powerai_1.6.2 conda environment. That is because each tensorflow and pytorch module redirects to and loads the PowerAI distribution from which it originated.

While you can create conda environments on the login nodes without affecting other users, you must move to a compute node (e.g., via an idev session) before running code.

# Allocate a compute node in the development queue for 30 minutes
$ idev -m 30 -p development


$ module load tensorflow-py3/1.15.2
(py3_powerai_1.6.2)$ python -c 'import tensorflow; print(tensorflow.test.is_gpu_available())'
2020-04-20 17:32:29.440946: I 
Successfully opened dynamic library libcudart.so.10.1
2020-04-20 17:32:35.278808: I 
Created TensorFlow device (/device:GPU:3 with 14927 MB memory) 
-> physical GPU (device: 3, name: Tesla V100-SXM2-16GB, pci bus id: 0035:04:00.0, 
    compute capability: 7.0)

Note the "(py3_powerai_1.6.2)" prefix added to your shell's $PS1 prompt, indicating which conda environment is active.


$ module load pytorch-py3/1.2.0
(py3_powerai_1.6.2)$ python -c 'import torch; print(torch.cuda.is_available())'

See the PyTorch documentation for additional information.


Each PowerAI environment contains Horovod for distributed deep learning. Horovod requires minimal changes to your code to split your data batches across multiple GPUs and nodes. Below is an example of running the TensorFlow benchmark suite on two Longhorn nodes with 8 GPUs in total using ibrun.

# Allocate compute nodes
login1$ idev -N 2 -n 8 -p v100

# Load TensorFlow 2.1.0
c002-001$ module load tensorflow-py3/2.1.0

# Download and checkout benchmarks compatible with TF 2.1
c002-001$ git clone --branch cnn_tf_v2.1_compatible https://github.com/tensorflow/benchmarks.git
c002-001$ cd benchmarks

# Launch with ibrun
c002-001$ ibrun -n 8 python scripts/tf_cnn_benchmarks/tf_cnn_benchmarks.py --num_gpus=1 \
    --model resnet50 --batch_size 32 --num_batches 100 --variable_update=horovod
TACC:  Starting up job 22832
TACC:  Setting up parallel environment for OpenMPI mpirun.
TACC:  Starting parallel tasks...
total images/sec: 2560.04
TACC:  Shutdown complete. Exiting.

Official PowerAI documentation references IBM DDL and ddlrun, but we found no significant performance difference between it and NCCL with ibrun.
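The "minimal changes" Horovod needs are largely about data sharding: each worker trains on every size-th batch, offset by its rank. A plain-Python sketch of that pattern (no Horovod required; `rank` and `size` stand in for `hvd.rank()` and `hvd.size()`):

```python
# Conceptual sketch of Horovod-style data-parallel sharding: each of
# `size` workers processes every size-th batch, offset by its rank.
def shard_batches(batches, rank, size):
    """Return the subset of batches this worker processes."""
    return batches[rank::size]

batches = list(range(100))        # 100 global batch indices
world_size = 8                    # e.g. 2 Longhorn nodes x 4 GPUs

shards = [shard_batches(batches, r, world_size) for r in range(world_size)]

# Every batch is covered exactly once across all workers.
assert sorted(b for s in shards for b in s) == batches
```

In the real benchmark above, `--variable_update=horovod` applies this idea inside TensorFlow, with gradients averaged across the 8 tasks that ibrun launches.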

Containers on Longhorn

Longhorn provides integrated support for Singularity, a containerization platform that enables users to access software and libraries that are not otherwise available in the Longhorn module system. Singularity is the containerization platform of choice for all TACC HPC systems because users can pull, run, and shell into images without escalated privileges, MPI and GPUs are supported, and it is compatible with Docker.

To make the experience seamless, our implementation injects mount points and environment variables into the container to match the HPC system environment – the $SCRATCH, $WORK, and $HOME file systems all will be identical to what users see natively on any Longhorn node.

To get started with Singularity, first load the tacc-singularity module:

$ module load tacc-singularity

All Singularity commands must be run on a compute node. Example commands for the most common Singularity functions include:

  • Pull a Singularity-compatible image from Docker Hub

      login1$ idev
      c001-005$ singularity pull docker://python:3.8.0
      INFO:    Creating SIF file...
      INFO:    Build complete: python_3.8.0.sif
  • Start an interactive session inside the container

      c001-005$ singularity shell python_3.8.0.sif
      Singularity python_3.8.0.sif:~/singularity> python3 --version
      Python 3.8.0
      Singularity python_3.8.0.sif:~/singularity> exit
  • Execute a command inside the container

      c001-005$ singularity exec python_3.8.0.sif python3 --version
      Python 3.8.0
  • Run the default container command (not supported by all containers)

      c001-005$ singularity run python_3.8.0.sif
      Python 3.8.0 (default, Nov 23 2019, 09:02:13)
      [GCC 8.3.0] on linux
      Type "help", "copyright", "credits" or "license" for more information.

Note that (unlike other TACC machines) Longhorn nodes use the PowerPC architecture (ppc64le). Thus, when pulling images from (e.g.) Docker Hub, make sure the image is ppc64le-compatible. Singularity will automatically pull the correct architecture if it exists.

Tip: The search form on Docker Hub can be filtered by Power PC architecture: https://hub.docker.com/search?q=&type=image&architecture=ppc64le


Visualization Tools

Longhorn offers a suite of browser-based visualization tools that can be launched through the command line and accessed through a web browser on your local machine.


RStudio

RStudio is an integrated development environment (IDE) for the R programming language. Longhorn provides a template batch script for launching RStudio Server version 1.4.1717 with R version 4.1.0.

Launching RStudio

To start an RStudio instance, log in to Longhorn and type the command:

login1$ sbatch /scratch/projects/rstudio/rstudio.slurm

By default, this will launch an RStudio session on a v100 node for four hours. RStudio will have exclusive access to 40x IBM Power9 cores, 4x NVIDIA Tesla V100 GPUs, and 256GB RAM (64GB GPU RAM) for the duration of the session. To select a different queue or job time, copy the "rstudio.slurm" script to a local folder (e.g. in your $HOME directory) and edit the appropriate Slurm directives at the top of the script.
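For reference, the directives to adjust in your copy typically look like the following (a hypothetical sketch matching the defaults described above; the actual rstudio.slurm script may contain additional directives):

```shell
#SBATCH -J rstudio        # job name
#SBATCH -p v100           # queue (partition); change to use a different queue
#SBATCH -N 1              # number of nodes
#SBATCH -t 04:00:00       # wall-clock limit; the default session is four hours
```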

Upon launching the script, an RStudio job will be submitted to the queue. When the job begins running, a file called rstudio.out containing connection instructions will appear in the directory from where you launched the script, e.g.:

login1$ cat rstudio.out
TACC: Your Rstudio server is now running!

Your instance is now running at http://longhorn.tacc.utexas.edu:50051
After navigating to that address in your local web browser, authenticate using
your TACC username and the password '53444c54-6573-7361-4c6f-756973584f58'

It may take up to one minute for the instructions to appear at the end of rstudio.out.

Navigate to the link provided (e.g. http://longhorn.tacc.utexas.edu:50051; the port at the end may differ) and enter your TACC username and the temporary password provided (53444c54-6573-7361-4c6f-756973584f58, without the quotes) to connect.

Installing R Packages

A large selection of R packages has been pre-installed. A listing can be shown in the RStudio IDE using the following command:

> library()

To install additional R packages (e.g. ggplot2) in your own environment, use the standard R package installation method:

> install.packages("ggplot2") 

New packages will be installed to your $HOME/R/library directory.

Quitting RStudio

To stop the RStudio instance, return to the Longhorn terminal and type the command:

login1$ bash /scratch/projects/rstudio/rstudio-stop

Visual Studio Code

Longhorn allows users to run VS Code on the login nodes for standard source code editing and build tools. Please limit your use to one concurrent instance of VS Code. Remember, the login nodes are a shared resource; your VS Code instance should not consume more than your fair share of resources.

To launch a VS Code instance, log in to Longhorn and type the command:

login1$ bash /scratch/projects/vscode/code
password: 53444c54-6573-7361-4c6f-756973584f58

To connect to the VS Code instance, open a web browser on your local computer and navigate to the link returned in the terminal. The first time you launch a VS Code instance, it generates a config file at $HOME/.config/code-server/config.yaml containing the connection password. If the password is not displayed on screen, cat the file to find it:

login1$ cat $HOME/.config/code-server/config.yaml
auth: password
password: 53444c54-6573-7361-4c6f-756973584f58
cert: false

To stop the code-server, type the command:

login1$ bash /scratch/projects/vscode/code-stop

Jupyter Notebook

Longhorn provides a template batch script to launch Jupyter Notebooks. Because of the high demand for customized Jupyter environments, users need to create and manage their own environment (including Python and Python library versions).

Create Your Jupyter Environment

To create a new conda environment for Jupyter and, e.g., Python 3.7, log in to Longhorn and perform the following:

login1$ module load conda/4.8.3
login1$ conda create -n jupyter python=3.7 jupyterlab

This only needs to be performed once. It is important to name the environment as shown above (-n jupyter) because the upcoming batch script looks for an environment with that name specifically.

Launch a Notebook

To launch a Jupyter Notebook instance, log in to Longhorn and type the command:

login1$ sbatch /scratch/projects/jupyter/jupyter.slurm

By default, this will launch a Jupyter Notebook on a v100 node for four hours. Jupyter will have exclusive access to 40x IBM Power9 cores, 4x NVIDIA Tesla V100 GPUs, and 256GB RAM (64GB GPU RAM) for the duration of the session.

To select a different queue, job time, or conda environment, copy the "jupyter.slurm" script to a local folder (e.g. in your $HOME directory) and edit the appropriate Slurm directives at the top of the script.

Upon launching the script, a Jupyter job will be submitted to the queue. When the job begins running, a file called "jupyter.out" containing connection instructions will be created in the directory from where you launched the script, e.g.:

login1$ cat jupyter.out
Your jupyter notebook server is now running! 
Your notebook is now running at 

Navigate to the link provided to connect.

Tip: Replace "lab" with "tree" in the above URL to get the older Jupyter tree-style view.

Customize Your Jupyter Environment

If you need additional Python packages in your Jupyter environment, you can install them directly with conda. To install, for example, the Python Matplotlib library, log into Longhorn and perform the following:

login1$ module load conda
login1$ conda install -n jupyter matplotlib

By default, extra packages will be installed into $SCRATCH/conda_local/envs/jupyter.

Quitting Jupyter

To stop the Jupyter Notebook, return to the Longhorn terminal and type the command:

login1$ bash /scratch/projects/jupyter/jupyter-stop

Help Desk

TACC Consulting operates from 8am to 5pm CST, Monday through Friday, except for holidays. You can submit a help desk ticket at any time via the TACC User Portal with "Longhorn" in the Resource field. Help the consulting staff help you by following these best practices when submitting tickets.

  • Do your homework before submitting a help desk ticket. What do the user guide and other documentation say? Search the internet for key phrases in your error logs; that's probably what the consultants answering your ticket are going to do. What have you changed since the last time your job succeeded?

  • Describe your issue as precisely and completely as you can: what you did, what happened, verbatim error messages, other meaningful output. When appropriate, include the information a consultant would need to find your artifacts and understand your workflow: e.g. the directory containing your build and/or job script; the modules you were using; relevant job numbers; and recent changes in your workflow that could affect or explain the behavior you're observing.

  • Subscribe to Longhorn User News. This is the best way to keep abreast of maintenance schedules, system outages, and other general interest items.

  • Have realistic expectations. Consultants can address system issues and answer questions about Longhorn. But they can't teach parallel programming in a ticket, and may know nothing about the package you downloaded. They may offer general advice that will help you build, debug, optimize, or modify your code, but you shouldn't expect them to do these things for you.

  • Be patient. It may take a business day for a consultant to get back to you, especially if your issue is complex. It might take an exchange or two before you and the consultant are on the same page. If the admins disable your account, it's not punitive. When the file system is in danger of crashing, or a login node hangs, they don't have time to notify you before taking action.