Last update: October 20, 2020
- Longhorn is a subsystem of Frontera. Consult Frontera Allocations for information on obtaining a Longhorn allocation.
- The Stockyard ($WORK) file system is not mounted on Longhorn. Users must run all jobs out of Longhorn's $SCRATCH file system.
- You may now subscribe to Longhorn User News. Stay up-to-date on Longhorn's status, scheduled maintenances and other notifications.
- Longhorn's production queue limits are subject to change at any time.
- All users: read the Good Citizenship section. Longhorn is a shared resource and your actions can impact other users.
Longhorn is a TACC resource built in partnership with IBM to support GPU-accelerated workloads. The power of this system is in its multiple GPUs per node, and it is intended to support sophisticated workloads that require high GPU density and little CPU compute. Longhorn will support double-precision machine learning and deep learning workloads that can be accelerated by GPU-powered frameworks, as well as general purpose GPU calculations. Longhorn is also part of the GPU subsystem of one of TACC's flagship supercomputers, Frontera, funded by the National Science Foundation (NSF) through award #1818253, Computing for the Endless Frontier.
Experienced HPC/TACC users will be very familiar with many of the topics presented in this guide. Here we'll highlight some sections for a quick start on Longhorn.
- Log into the TACC User Portal to confirm that you've been added to a Longhorn allocation. Then, connect to Longhorn via SSH.
- Review the TACC info box (taccinfo) displayed at login for your allocation availability and SU balances.
- Read the Good Citizenship section. Longhorn is a shared resource and this section covers practices and etiquette to keep your account in good standing and keep Longhorn's systems running smoothly for all users.
- Consult the Longhorn File Systems and Longhorn Production Queues tables. These should be nearly identical in structure to those used on other TACC systems, but there are a few minor differences you will want to take note of.
- Copy and modify any of the Sample Job Scripts for your own use. These scripts will also show you how to modify any job scripts you are bringing over from other TACC systems so that they run efficiently on Longhorn.
- Review the default modules with "
module list". Make any changes needed for your code.
- Start small. Run any jobs from other systems on a smaller scale in order to test the performance of your code on Longhorn. You may find your code needs to be altered or recompiled in order to perform well and at scale on the new system.
Longhorn comprises 108 IBM Power System AC922 nodes distributed across nine racks, plus an IBM Elastic Storage System (hosting the home and scratch file systems) as a standalone 10th rack. Four nodes are reserved as login and management nodes, leaving 104 nodes for the compute system.
Longhorn hosts 96 V100 nodes, each with 4 GPUs per node. Access these nodes via the v100 queue.
|Model:||IBM Power System AC922 (8335-GTH)|
|Processor:||IBM Power 9|
|Total processors per node:||2|
|Total cores per processor:||20|
|Total cores per node:||40|
|Hardware threads per core:||4|
|Hardware threads per node:||160|
|Clock rate (turbo):||3.8GHz|
|Local storage:||~900 GB|
|GPUs:||4x NVIDIA Tesla V100|
|GPU RAM:||4x 16GB (64 GB aggregate)|
Longhorn hosts 8 large memory V100 nodes, each with 4 GPUs per node. Access these nodes via the v100-lm queue.
|Model:||IBM Power System AC922 (8335-GTH)|
|Processor:||IBM Power 9|
|Total processors per node:||2|
|Total cores per processor:||20|
|Total cores per node:||40|
|Hardware threads per core:||4|
|Hardware threads per node:||160|
|Clock rate (turbo):||3.8GHz|
|Local storage:||~900 GB|
|GPUs:||4x NVIDIA Tesla V100|
|GPU RAM:||4x 16GB (64 GB aggregate)|
Longhorn hosts two login nodes:
- Dual socket
- IBM Power 9 processors @ 2.3 GHz and 20 cores/socket (40 cores/node)
- 256 GB DDR4 RAM (16 x 16 GB DIMMS @ 2666 MHz)
- Hyperthreading enabled
Stockyard, TACC's global shared file system accessible via $WORK, is not mounted on Longhorn. Longhorn is attached to its $HOME and $SCRATCH file systems over a fast network:
- Mellanox EDR Infiniband (MT28800 Family ConnectX-5 Ex adapter)
- Spine-and-leaf interconnect
- NetXtreme BCM5719 Gigabit Ethernet 1Gbps adapter
For better performance and more efficient I/O, we recommend staging your data to the $SCRATCH file system prior to submitting compute jobs.
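For example, a minimal staging workflow on Longhorn might look like the sketch below (the directory and file names are placeholders):
login1$ cd $SCRATCH                          # work from the scratch file system
login1$ mkdir -p myproject && cd myproject
login1$ cp $HOME/inputs/config.dat .         # copy small input files out of $HOME
login1$ sbatch myjobscript                   # submit the job from $SCRATCH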
Only users with allocations on Longhorn are permitted to log on to Longhorn. A TACC User Portal account does not enable you to log on to any TACC resources unless you have an active allocation on that resource.
ssh" command (SSH protocol) is the standard way to connect to Longhorn. SSH also includes support for the file transfer utilities
sftp. Wikipedia is a good source of information on SSH. SSH is available within Linux and from the terminal app in the Mac OS. If you are using Windows, you will need an SSH client that supports the SSH-2 protocol: e.g. Bitvise, OpenSSH, PuTTY, or SecureCRT. Initiate a session using the
ssh command or the equivalent; from the Linux command line the launch command looks like this:
localhost$ ssh username@longhorn.tacc.utexas.edu
The above command will alternate connections across both available login nodes, login1-login2, and route your connection to one of them. To connect to a specific login node, use its full domain name:
localhost$ ssh username@login1.longhorn.tacc.utexas.edu
To connect with X11 support on Longhorn (usually required for applications with graphical user interfaces), use the "-X" or "-Y" option:
localhost$ ssh -X username@longhorn.tacc.utexas.edu
To report a connection problem, execute the ssh command with the "-vvv" option and include the verbose output when submitting a help ticket.
Do not run the "
ssh-keygen" command on Longhorn. This command will create and configure a key pair that will interfere with the execution of job scripts in the batch system. If you do this by mistake, you can recover by renaming or deleting the
.ssh directory located in your home directory; the system will automatically generate a new one for you when you next log into Longhorn.
- execute "
mv .ssh dot.ssh.old"
- log out
- log into Longhorn again
After logging in again the system will generate a properly configured key pair.
Regardless of your research workflow, you'll need to master Linux basics and a Linux-based text editor (e.g. vi/vim) to use the system properly. However, this user guide does not address these topics. There are numerous resources in a variety of formats available to help you learn Linux, including some listed on the TACC training sites. If you encounter a term or concept in this user guide that is new to you, a quick internet search should help you resolve the matter quickly.
You must be added to a Longhorn allocation in order to have access to Longhorn. The ability to log on to the TACC User Portal does NOT signify access to Longhorn or any TACC resource. You may monitor your allocations on the TACC User Portal. Please consult the allocations documentation for more information.
Access to all TACC systems now requires Multi-Factor Authentication (MFA). You can create an MFA pairing on the TACC User Portal. After login on the portal, go to your account profile (Home->Account Profile), then click the "Manage" button under "Multi-Factor Authentication" on the right side of the page. See Multi-Factor Authentication at TACC for further information.
Use your TACC User Portal password for direct logins to TACC resources. You can change your TACC password through the TACC User Portal. Log into the portal, then select "Change Password" under the "HOME" tab. If you've forgotten your password, go to the TACC User Portal home page and select "Password Reset" under the Home tab.
The sanitytool module loads an account-level diagnostic package that detects common account-level issues and often walks you through the fixes. You should certainly run the package's sanitycheck utility when you encounter unexpected behavior. You may also want to run sanitycheck periodically as preventive maintenance. To run sanitytool's account-level diagnostics, execute the following commands:
login1$ module load sanitytool
login1$ sanitycheck
Execute "module help sanitytool" for more information.
The default login shell for your user account is Bash. To determine your current login shell, execute:
$ echo $SHELL
If you'd like to change your login shell (e.g. to zsh), submit a ticket through the TACC portal. The chsh ("change shell") command will not work on TACC systems.
When you start a shell on Longhorn, system-level startup files initialize your account-level environment and aliases before the system sources your own user-level startup scripts. You can use these startup scripts to customize your shell by defining your own environment variables, aliases, and functions. These scripts (e.g. .bashrc) are generally hidden files: so-called dotfiles that begin with a period, visible when you execute:
$ ls -a
Before editing your startup files, however, it's worth taking the time to understand the basics of how your shell manages startup. Bash startup behavior is very different from the simpler csh behavior, for example. The Bash startup sequence varies depending on how you start the shell (e.g. using ssh to open a login shell, executing the bash command to begin an interactive shell, or launching a script to start a non-interactive shell). Moreover, Bash does not automatically source your .bashrc when you start a login shell by using ssh to connect to a node. Unless you have specialized needs, however, this is undoubtedly more flexibility than you want: you will probably want your environment to be the same regardless of how you start the shell. The easiest way to achieve this is to execute source ~/.bashrc from your .profile, then put all your customizations in .bashrc. The system-generated default startup scripts demonstrate this approach. We recommend that you use these default files as templates.
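A minimal sketch of this pattern follows; the alias and environment variable shown are illustrative placeholders, not system defaults:
# ~/.profile -- sourced by login shells; delegate everything to ~/.bashrc
if [ -f ~/.bashrc ]; then
    source ~/.bashrc
fi
# ~/.bashrc -- put all customizations here so every shell sees them
alias ll='ls -l'                          # example alias (placeholder)
export MYPROJECT=$SCRATCH/myproject       # example environment variable (placeholder)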
For more information see the Bash Users' Startup Files: Quick Start Guide and other online resources that explain shell startup. To recover the originals that appear in a newly created account, execute /usr/local/startup_scripts/install_default_scripts.
Your environment includes the environment variables and functions defined in your current shell: those initialized by the system, those you define or modify in your account-level startup scripts, and those defined or modified by the modules that you load to configure your software environment. Be sure to distinguish between an environment variable's name (e.g. HISTSIZE) and its value ($HISTSIZE). Understand as well that a sub-shell (e.g. a script) inherits environment variables from its parent, but does not inherit ordinary shell variables or aliases. Use export (in Bash) or setenv (in csh) to define an environment variable.
Execute the env command to see the environment variables that define the way your shell and child shells behave.
Pipe the results of env to grep to focus on specific environment variables. For example, to see all environment variables that contain the string GIT (in all caps), execute:
$ env | grep GIT
The environment variables PATH and LD_LIBRARY_PATH are especially important. PATH is a colon-separated list of directory paths that determines where the system looks for your executables. LD_LIBRARY_PATH is a similar list that determines where the system looks for shared libraries.
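As a simple illustration (the directory names are placeholders), you can prepend your own directories to these lists so the system finds your executables and libraries first:
$ export PATH=$HOME/bin:$PATH                          # search $HOME/bin before system paths
$ export LD_LIBRARY_PATH=$HOME/lib:$LD_LIBRARY_PATH    # same idea for shared libraries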
Lmod, a module system developed and maintained at TACC, makes it easy to manage your environment so you have access to the software packages and versions that you need to conduct your research. This is especially important on a system like Longhorn that serves thousands of users with an enormous range of needs. Loading a module amounts to choosing a specific package from among available alternatives:
$ module load xl           # load the default IBM compiler
$ module load xl/16.1.1    # load a specific version of the IBM compiler (v16.1.1)
A module does its job by defining or modifying environment variables (and sometimes aliases and functions). For example, a module may prepend appropriate paths to $PATH and $LD_LIBRARY_PATH so that the system can find the executables and libraries associated with a given software package. The module creates the illusion that the system is installing software for your personal use. Unloading a module reverses these changes and creates the illusion that the system just uninstalled the software:
$ module load ddt      # defines DDT-related env vars; modifies others
$ module unload ddt    # undoes changes made by load
The module system does more, however. When you load a given module, the module system can automatically replace or deactivate modules to ensure the packages you have loaded are compatible with each other. In the example below, the module system automatically unloads one compiler when you load another, and deactivates IBM-compatible versions of MPI:
$ module load xl              # load default version of IBM compiler
$ module load spectrum_mpi    # load default version of Spectrum MPI
$ module load gcc             # change compiler

Lmod is automatically replacing "xl/16.1.1" with "gcc/9.1.0".

Inactive Modules:
  1) spectrum_mpi
On Longhorn, modules generally adhere to a TACC naming convention when defining environment variables that are helpful for building and running software. For example, the papi module defines TACC_PAPI_BIN (the path to PAPI executables), TACC_PAPI_LIB (the path to PAPI libraries), TACC_PAPI_INC (the path to PAPI include files), and TACC_PAPI_DIR (top-level PAPI directory). After loading a module, here are some easy ways to observe its effects:
$ module show papi      # see what this module does to your environment
$ env | grep PAPI       # see env vars that contain the string PAPI
$ env | grep -i papi    # case-insensitive search for 'papi' in environment
To see the modules you currently have loaded:
$ module list
To see all modules that you can load right now because they are compatible with the currently loaded modules:
$ module avail
To see all installed modules, even if they are not currently available because they are incompatible with your currently loaded modules:
$ module spider # list all modules, even those not available to load
To filter your search:
$ module spider cuda         # all modules with names containing 'cuda'
$ module spider cuda/10.1    # additional details on a specific module
Among other things, the latter command will tell you which modules you need to load before the module is available to load. You might also search for modules that are tagged with a keyword related to your needs (though your success here depends on the diligence of the module writers). For example:
$ module keyword performance
You can save a collection of modules as a personal default collection that will load every time you log into Longhorn. To do so, load the modules you want in your collection, then execute:
$ module save # save the currently loaded collection of modules
Two commands make it easy to return to a known, reproducible state:
$ module reset      # load the system default collection of modules
$ module restore    # load your personal default collection of modules
On TACC systems, the command module reset is equivalent to module purge; module load TACC. It's a safer, easier way to get to a known baseline state than issuing the two commands separately.
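Lmod also supports named collections, which can be convenient if you switch between workflows; here is a small sketch (the collection name and modules are placeholders):
$ module load gcc cuda      # load the modules one workflow needs
$ module save gpu-dev       # save them as a named collection
$ module restore gpu-dev    # later, reload that collection by name
$ module savelist           # list your saved collections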
Help text is available for both individual modules and the module system itself:
$ module help cuda/10.1    # show help text for the cuda module
$ module help              # show help text for the module system itself
See Lmod's online documentation for more extensive documentation. The online documentation addresses the basics in more detail, but also covers several topics beyond the scope of the help text (e.g. writing and using your own module files).
It's safe to execute module commands in job scripts. In fact, this is a good way to write self-documenting, portable job scripts that produce reproducible results. If you use module save to define a personal default module collection, it's rarely necessary to execute module commands in shell startup scripts, and it can be tricky to do so safely. If you do wish to put module commands in your startup scripts, see Longhorn's default startup scripts for a safe way to do so.
You share Longhorn with many, sometimes hundreds, of other users, and what you do on the system affects others. All users must follow a set of good practices which entail limiting activities that may impact the system for other users. Exercise good citizenship to ensure that your activity does not adversely impact the system and the research community with whom you share it.
TACC staff have developed the following guidelines for good citizenship on Longhorn. Please familiarize yourself especially with the first two mandates. The next sections discuss best practices for limiting and minimizing I/O activity and file transfers. Finally, we provide job submission tips to help minimize wait times in the queues.
Longhorn's few login nodes are shared among all users. Dozens (sometimes hundreds) of users may be logged on at one time accessing the file systems. Think of the login nodes as a prep area, where users may edit and manage files, compile code, perform file management, issue transfers, submit new and track existing batch jobs, etc. The login nodes provide an interface to the "back-end" compute nodes.
The compute nodes are where actual computations occur and where research is done. Hundreds of jobs may be running on all compute nodes, with hundreds more queued up to run. All batch jobs and executables, as well as development and debugging sessions, must be run on the compute nodes. To access compute nodes on TACC resources, one must either submit a job to a batch queue or initiate an interactive session using the idev utility or Slurm's srun command.
A single user running computationally expensive or disk intensive tasks will negatively impact performance for other users. Running jobs on the login nodes is one of the fastest routes to account suspension. Instead, run on the compute nodes via an interactive session (idev) or by submitting a batch job.
Do not run jobs or perform intensive computational activity on the login nodes or the shared file systems.
Your account may be suspended and you will lose access to the queues if your jobs are impacting other users.
Do not run research applications on the login nodes; this includes frameworks like MATLAB and R, as well as computationally or I/O intensive Python scripts. If you need interactive access, use the idev utility or Slurm's srun to schedule one or more compute nodes.
DO THIS: Start an interactive session on a compute node and run Matlab.
login1$ idev
nid00181$ matlab
DO NOT DO THIS: Run Matlab or other software packages on a login node
Do not launch too many simultaneous processes; while it's fine to compile on a login node, a command like "make -j 16" (which compiles on 16 cores) may impact other users.
DO THIS: build and submit a batch job. All batch jobs run on the compute nodes.
login1$ make mytarget
login1$ sbatch myjobscript
DO NOT DO THIS: invoke multiple build sessions or run an executable on a login node.
login1$ make -j 12
login1$ ./myprogram
That script you wrote to poll job status should probably do so once every few minutes rather than several times a second.
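If you do poll from a login node, a gentle loop like the following sketch is usually enough (the job ID is a placeholder):
$ while squeue -j 170361 | grep -q 170361; do sleep 300; done    # check every five minutes, not several times a second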
Longhorn is one of the few TACC systems that does not mount the Stockyard ($WORK) file system, instead mounting only a $SCRATCH file system. Longhorn users must therefore run all jobs out of Longhorn's $SCRATCH file system.
- Copy or move all job input files to $SCRATCH.
- Make sure your job script directs all output to $SCRATCH.
$HOME is for storage and keeping track of important items. Actual job activity, reading and writing to disk, should be offloaded to your resource's $SCRATCH file system (see the File System Usage Recommendations table). You can start a job from anywhere, but the actual work of the job should occur only on the $SCRATCH file system. Don't run jobs in your $HOME file system; $HOME is for routine file management, not parallel jobs.
Avoid storing many small files in a single directory, and avoid workflows that require many small files. A few hundred files in a single directory is probably fine; tens of thousands is almost certainly too many. If you must use many small files, group them in separate directories of manageable size.
Watch all your file system quotas. If you're near your quota in $HOME, jobs run on any file system may fail, because all jobs write some data to hidden files in your home directory.
|File System||Best Storage Practices||Best Activities|
|$HOME||cron jobs|| |
|$SCRATCH||temporary datasets||all job I/O activity|
In addition to the file system tips above, it's important that your jobs limit their I/O activity. This section focuses on ways to avoid causing problems on each resource's shared file systems.
Limit I/O intensive sessions (lots of reads and writes to disk, rapidly opening or closing many files)
Avoid opening and closing files repeatedly in tight loops. Every open/close operation on the file system requires interaction with the MetaData Service (MDS). The MDS acts as a gatekeeper for access to files on Lustre's parallel file system. Overloading the MDS will affect other users on the system. If possible, open files once at the beginning of your program/workflow, then close them at the end.
Don't get greedy. If you know or suspect your workflow is I/O intensive, don't submit a pile of simultaneous jobs. Writing restart/snapshot files can stress the file system; avoid doing so too frequently. Also, consider using the hdf5 or netcdf libraries to generate a single restart file in parallel, rather than generating files from each process separately.
If you know your jobs will require significant I/O, please submit a support ticket and an HPC consultant will work with you. See also Managing I/O on TACC Resources for additional information.
To avoid stressing both internal and external networks:
Avoid too many simultaneous file transfers. You share the network bandwidth with other users; don't use more than your fair share. Two or three concurrent scp sessions is probably fine. Twenty is probably not.
Avoid recursive file transfers, especially those involving many small files. Create a tar archive before transfers. This is especially true when transferring files to or from Ranch.
When creating or transferring large files to Stockyard ($WORK), be sure to stripe the receiving directories. See STRIPING for more information.
Request only the resources you need. Make sure your job scripts request only the resources that are needed for that job. Don't ask for more time or more nodes than you really need. The scheduler will have an easier time finding a slot for a job requesting 2 nodes for 2 hours than for a job requesting 4 nodes for 24 hours. This means shorter queue wait times for you and everybody else.
Test your submission scripts. Start small: make sure everything works on 2 nodes before you try 20. Work out submission bugs and kinks with 5 minute jobs that won't wait long in the queue and involve short, simple substitutes for your real workload: simple test problems; hello world codes; one-liners like ibrun hostname; or an ldd on your executable.
Respect memory limits and other system constraints. If your application needs more memory than is available, your job will fail, and may leave nodes in unusable states. Use TACC's Remora tool to monitor your application's needs.
The Stockyard Global File System ($WORK) is not mounted on Longhorn.
|File System||Quota||Key Features|
|$HOME||10 GB, 300,000 files||Not intended for parallel or high-intensity file operations. Backed up regularly. Defaults: 1 stripe, 1MB stripe size.|
|$WORK|| ||Not yet available.|
|$SCRATCH||no quota||Overall capacity 4.5 PB. Defaults: 1 stripe, 1MB stripe size. Not backed up. Subject to purge if access time* is more than 10 days old.|
|/tmp (node-local)||no quota||~700 GB available per node.|
*The operating system updates a file's access time when that file is modified on a login or compute node. Reading or executing a file/script on a login node does not update the access time, but reading or executing on a compute node does update the access time. This approach helps us distinguish between routine management tasks (e.g. scp) and production use. Use the command ls -ul to view access times.
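To get a rough idea of which of your $SCRATCH files have not been accessed within the purge window described above, a command along these lines can help (a sketch, not an official TACC utility):
$ find $SCRATCH -type f -atime +10 | head    # files not accessed in more than 10 days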
You can transfer files between Longhorn and Linux-based systems using either scp or rsync. Both scp and rsync are available in the Mac Terminal app. Windows SSH clients typically include scp-based file transfer capabilities.
The scp (secure copy) utility is a component of the OpenSSH suite. Assuming your Longhorn username is bjones, a simple scp transfer that pushes a file named myfile from your local Linux system to your Longhorn $HOME directory would look like this:
localhost$ scp ./myfile bjones@longhorn.tacc.utexas.edu:    # note colon after net address
You can use wildcards, but you need to be careful about when and where you want wildcard expansion to occur. For example, to push all files ending in .txt from the current directory on your local machine to /scratch/01234/bjones/longhorn on Longhorn:
localhost$ scp *.txt bjones@longhorn.tacc.utexas.edu:/scratch/01234/bjones/longhorn
To delay wildcard expansion until reaching Longhorn, use a backslash (\) as an escape character before the wildcard. For example, to pull all files ending in .txt from /scratch/01234/bjones/longhorn on Longhorn to the current directory on your local system:
localhost$ scp bjones@longhorn.tacc.utexas.edu:/scratch/01234/bjones/longhorn/\*.txt .
You can of course use shell or environment variables in your calls to scp. For example:
localhost$ destdir="/work/01234/bjones/longhorn/data" localhost$ scp ./myfile firstname.lastname@example.org:$destdir
You can also issue scp commands on your local client that use Longhorn environment variables like $SCRATCH. To do so, use a backslash (\) as an escape character before the $; this ensures that expansion occurs after establishing the connection to Longhorn:
localhost$ scp ./myfile bjones@longhorn.tacc.utexas.edu:\$SCRATCH/data    # Note backslash
scp for recursive transfers of directories that contain nested directories of many small files:
scp -r ./mydata firstname.lastname@example.org:\$WORK# DON'T DO THIS
Instead, use tar to create an archive of the directory, then transfer the directory as a single file:
localhost$ tar cvf ./mydata.tar mydata                                  # create archive
localhost$ scp ./mydata.tar bjones@longhorn.tacc.utexas.edu:\$SCRATCH   # transfer archive
The rsync (remote synchronization) utility is a great way to synchronize files that you maintain on more than one system: when you transfer files using rsync, the utility copies only the changed portions of individual files. As a result, rsync is especially efficient when you only need to update a small fraction of a large dataset. The basic syntax is similar to scp:
localhost$ rsync mybigfile bjones@longhorn.tacc.utexas.edu:\$SCRATCH/data
localhost$ rsync -avtr mybigdir bjones@longhorn.tacc.utexas.edu:\$SCRATCH/data
The options on the second transfer are typical and appropriate when synching a directory: this is a recursive update (-r) with verbose (-v) feedback; the synchronization preserves time stamps (-t) as well as symbolic links and other meta-data (-a). Because rsync only transfers changes, recursive updates with rsync may be less demanding than an equivalent recursive transfer with scp.
If you wish to share files and data with collaborators in your project, see Sharing Project Files on TACC Systems for step-by-step instructions. Project managers or delegates can use Unix group permissions and commands to create read-only or read-write shared workspaces that function as data repositories and provide a common work area to all project members.
The primary purpose of your job script is to launch your research application. How you do so depends on several factors, especially (1) the type of application (e.g. MPI, OpenMP, serial), and (2) what you're trying to accomplish (e.g. launch a single instance, complete several steps in a workflow, run several applications simultaneously within the same job). While there are many possibilities, your own job script will probably include a launch line that is a variation of one of the examples described in this section.
There are four GPUs per node indexed 0-3. By default, only GPU 0 is visible to serial GPU applications. Launching a serial GPU application takes the form:
$ ./mycode.cuda # compiled CUDA executable
To target the executable to a specific GPU, set the CUDA_VISIBLE_DEVICES environment variable. For example, to run an application on GPU 2:
$ export CUDA_VISIBLE_DEVICES=2
$ ./mycode.cuda
This method can be used to run four serial GPU applications simultaneously, each on their own GPU. This can be useful when the same code needs to be run many times under multiple conditions, and it makes more efficient use of the nodes when all four GPUs are active:
$ CUDA_VISIBLE_DEVICES=0 ./mycode.cuda &
$ CUDA_VISIBLE_DEVICES=1 ./mycode.cuda &
$ CUDA_VISIBLE_DEVICES=2 ./mycode.cuda &
$ CUDA_VISIBLE_DEVICES=3 ./mycode.cuda &
$ wait
The trailing ampersand '&' symbol puts the process in the background, and the "wait" command pauses the job until all the background processes complete. To confirm which GPUs are active, and which processes are running on each GPU, use the "nvidia-smi" command:
$ nvidia-smi
Tue Nov 26 14:26:55 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.67       Driver Version: 418.67       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla V100-SXM2...   On  | 00000004:04:00.0 Off |                    0 |
| N/A   34C    P0   103W / 300W |    319MiB / 16130MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla V100-SXM2...   On  | 00000004:05:00.0 Off |                    0 |
| N/A   38C    P0   107W / 300W |    319MiB / 16130MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Tesla V100-SXM2...   On  | 00000035:03:00.0 Off |                    0 |
| N/A   35C    P0   103W / 300W |    319MiB / 16130MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+
|   3  Tesla V100-SXM2...   On  | 00000035:04:00.0 Off |                    0 |
| N/A   36C    P0   106W / 300W |    319MiB / 16130MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0    131794      C   ./mycode.cuda                               309MiB  |
|    1    131798      C   ./mycode.cuda                               309MiB  |
|    2    131803      C   ./mycode.cuda                               309MiB  |
|    3    131806      C   ./mycode.cuda                               309MiB  |
+-----------------------------------------------------------------------------+
Typical workflows may use MPI and ibrun to launch a GPU application using multiple GPUs on one node, or even multiple GPUs on multiple nodes. There is no need to set CUDA_VISIBLE_DEVICES as demonstrated above, as ibrun will handle GPU assignment among the hosts. For example, to run a compiled application on all four GPUs on one node:
login1$ idev -N1    # launch an interactive session with one node
...
c001-005$ ibrun -np 4 ./mycode.cuda --num_gpus=1 --other_options
In this hypothetical example, ibrun launches four instances of the mycode.cuda executable, and mycode.cuda takes a flag --num_gpus that tells each instance to use one GPU. The ibrun tool is also aware of multiple hosts from information in your environment. If you are running a two-node job, then you can launch across eight GPUs in a similar fashion:
login1$ idev -N2    # launch an interactive session with two nodes
...
c001-005$ ibrun -np 8 ./mycode.cuda --num_gpus=1 --other_options
See the Tensorflow at TACC document for more information and examples about running on multiple GPUs or multiple nodes.
If you plan to launch multi-GPU applications using CUDA-aware Spectrum MPI, make sure to export the following environment variable:
To launch a serial application, simply call the executable. Specify the path to the executable in either the PATH environment variable or in the call to the executable itself:
mycode.exe                           # executable in a directory listed in $PATH
$SCRATCH/apps/myprov/mycode.exe      # explicit full path to executable
./mycode.exe                         # executable in current directory
./mycode.exe -m -k 6 input1          # executable with notional input options
Like all TACC resources, Longhorn's job scheduler is the Slurm Workload Manager. Slurm commands enable you to submit, manage, monitor, and control your jobs. Jobs submitted to the scheduler are queued, then run on the compute nodes. Each job consumes Service Units (SUs) which are then charged to your allocation.
Like all TACC systems, Longhorn's accounting system is based on node-hours: one unadjusted Service Unit (SU) represents a single compute node used for one hour (a node-hour). For any given job, the total cost in SUs is the use of one compute node for one hour of wall clock time plus any additional charges for the use of specialized queues, e.g. Stampede2's largemem queue, Lonestar5's gpu queue and Longhorn's v100 queue. The queue charge rates are determined by the supply and demand for that particular queue or type of node used.
Longhorn SUs billed = (# nodes) x (job duration in wall clock hours) x (charge rate per node-hour)
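For example, using the charge rate listed for the v100 queue in the table below, a job that uses 4 nodes for 3 hours of wall clock time would be billed 4 x 3 x 6 = 72 SUs.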
The Slurm scheduler tracks and charges for usage to a granularity of a few seconds of wall clock time. The system charges only for the resources you actually use, not those you request. If your job finishes early and exits properly, Slurm will release the nodes back into the pool of available nodes. Your job will only be charged for as long as you are using the nodes.
TACC does not implement node-sharing on any compute resource. Each Longhorn node can be assigned to only one user at a time; hence a complete node is dedicated to a user's job and accrues wall-clock time for all the node's cores whether or not all cores are used.
Tip: Your queue wait times will be less if you request only the time you need: the scheduler will have a much easier time finding a slot for the 2 hours you really need than say, for the 12 hours requested in your job script.
Principal Investigators can monitor allocation usage via the TACC User Portal under "Allocations->Projects and Allocations". Be aware that the figures shown on the portal may lag behind the most recent usage. Projects and allocation balances are also displayed upon command-line login.
To display a summary of your TACC project balances and disk quotas at any time, execute:
login1$ /usr/local/etc/taccinfo # Generally more current than balances displayed on the portals.
Be sure to request computing resources (e.g. number of nodes, number of tasks per node, max time per job) that are consistent with the type of application(s) you are running:
A serial (non-parallel) application can only make use of a single core on a single node, and will only see that node's memory.
An MPI (Message Passing Interface) program can exploit the distributed computing power of multiple nodes: it launches multiple copies of its executable (MPI tasks, each assigned unique IDs called ranks) that can communicate with each other across the network. The tasks on a given node, however, can only directly access the memory on that node. Depending on the program's memory requirements, it may not be possible to run a task on every core of every node assigned to your job. If it appears that your MPI job is running out of memory, try launching it with fewer tasks per node to increase the amount of memory available to individual tasks.
A popular type of parameter sweep (sometimes called high throughput computing) involves submitting a job that simultaneously runs many copies of one serial or threaded application, each with its own input parameters ("Single Program Multiple Data", or SPMD). The launcher tool is designed to make it easy to submit this type of job. For more information:
$ module load launcher_gpu
$ module help launcher_gpu
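As a rough sketch of how the launcher is typically used on TACC systems (the job file and commands below are placeholders, and launcher_gpu's exact variables may differ slightly from the standard launcher, so check the module help):
login1$ cat jobfile                 # one command per line; each becomes one task
./mycode.cuda --input case1
./mycode.cuda --input case2
./mycode.cuda --input case3
./mycode.cuda --input case4
Then, inside your batch job script:
module load launcher_gpu
export LAUNCHER_JOB_FILE=jobfile    # standard launcher interface (assumed here)
$LAUNCHER_DIR/paramrun              # run the tasks listed in the job file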
Longhorn employs the Slurm workload manager, the job scheduler common to all TACC HPC resources. Slurm commands enable you to submit, manage, monitor, and control your jobs.
The Stampede2 User Guide discusses Slurm extensively. See the following sections for detailed information:
Longhorn's current Slurm partitions (queues), maximum node limits and charge rates are summarized in the table below. Execute qlimits on Longhorn for real-time information regarding limits on available queues. See Job Accounting to learn how jobs are charged to your allocation.
Queue status as of February 19, 2020. Queues and limits are subject to change without notice.
|Queue Name||Max Nodes per Job (cores, GPUs)||Max Job Duration||Charge Rate (per node-hour)|
|development||2 nodes (80 cores, 8 GPUs)||2 hours||1 Service Unit (SU)|
|v100||32 nodes (1280 cores, 128 GPUs)||48 hours||6 SUs|
|v100-lm||8 nodes (320 cores, 32 GPUs)||48 hours||6 SUs|
To request more nodes than are available in the v100 queue, submit a consulting (help desk) ticket through the TACC User Portal. Include in your request reasonable evidence of your readiness to run under the conditions you're requesting. In most cases this should include your own strong or weak scaling results obtained from previous Longhorn jobs.
Single Node Multiple GPUs
Parallelization across GPU Nodes
- specify the maximum run time with the -t option
- specify the number of nodes needed with the -N option
- specify the total number of MPI tasks with the -n option
- specify the project to be charged with the -A option
A minimal job script illustrating these directives appears below.
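Here is a minimal sketch of a Longhorn batch script using these directives; the job name, executable, and allocation are placeholders, and you should adjust the queue, node count, and time for your own work:
#!/bin/bash
#SBATCH -J mygpujob                # job name
#SBATCH -o mygpujob.o%j            # output/error file (%j expands to the job ID)
#SBATCH -p v100                    # queue (partition)
#SBATCH -N 2                       # number of nodes
#SBATCH -n 8                       # total number of MPI tasks (4 per node, one per GPU)
#SBATCH -t 02:00:00                # run time (hh:mm:ss)
#SBATCH -A myproject               # allocation (project) to charge

module load xl spectrum_mpi cuda   # load the software your code needs

ibrun ./mycode.cuda --num_gpus=1   # launch one MPI task per GPU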
In this section, we present several Slurm commands and other utilities that are available to help you plan and track your job submissions as well as check the status of the Slurm queues.
When interpreting queue and job status, remember that Longhorn doesn't operate on a first-come-first-served basis. Instead, the sophisticated, tunable algorithms built into Slurm attempt to keep the system busy, while scheduling jobs in a way that is as fair as possible to everyone. At times this means leaving nodes idle ("draining the queue") to make room for a large job that would otherwise never run. It also means considering each user's "fair share", scheduling jobs so that those who haven't run jobs recently may have a slightly higher priority than those who have.
To display resource limits for the Longhorn queues, execute:
login1$ qlimits
The result is real-time data; the corresponding information in this document's table of Longhorn queues may lag behind the actual configuration that the qlimits utility displays.
The sinfo command allows you to monitor the status of the queues. If you execute sinfo without arguments, you'll see a list of every node in the system together with its status. To skip the node list and produce a tight, alphabetized summary of the available queues and their status, execute:
login1$ sinfo -S+P -o "%18P %8a %20F" # compact summary of queue status
An excerpt from this command's output might look like this:
login1$ sinfo -S+P -o "%18P %8a %20F"
PARTITION          AVAIL    NODES(A/I/O/T)
development        up       0/8/0/8
v100               up       44/43/1/96
v100-lm            up       0/8/0/8
The AVAIL column displays the overall status of each queue (up or down), while the column labeled NODES(A/I/O/T) shows the number of nodes in each of several states ("Allocated", "Idle", "Other", and "Total"). Execute man sinfo for more information. Use caution when reading the generic documentation, however: some available fields are not meaningful or are misleading on Longhorn (e.g. TIMELIMIT, displayed using the "%l" option).
The squeue command allows you to monitor jobs in the queues, whether pending (waiting) or currently running:
login1$ squeue              # show all jobs in all queues
login1$ squeue -u bjones    # show all jobs owned by bjones
login1$ man squeue          # more info
An excerpt from the default output might look like this:
 JOBID    PARTITION      NAME      USER  ST       TIME  NODES  NODELIST(REASON)
 25781  development  idv72397    bjones  CG       9:36      2  c001-011,012
 25918  development  ppm_4828    bjones  PD       0:00     20  (Resources)
 25915  development  MV2-test     siliu  PD       0:00     14  (Priority)
 25589         v100    aatest  slindsey  PD       0:00      8  (Dependency)
 25949  development  psdns_la  sniffjck  PD       0:00      2  (Priority)
 25618         v100    SP256U    connor  PD       0:00      1  (Dependency)
 25944         v100   MoTi_hi    wchung   R      35:13      1  c005-003
 25945         v100  WTi_hi_e    wchung   R      27:11      1  c006-001
 25606         v100    trainA    jackhu   R   23:28:28      1  c008-012
The column labeled ST displays each job's status:
- "PD" means "Pending" (waiting);
- "R" means "Running";
- "CG" means "Completing" (cleaning up after exiting the job script).
Pending jobs appear in order of decreasing priority. The last column includes a nodelist for running/completing jobs, or a reason for pending jobs. If you submit a job before a scheduled system maintenance period, and the job cannot complete before the maintenance begins, your job will run when the maintenance/reservation concludes. The squeue command will report ReqNodeNotAvailable ("Required Node Not Available"). The job will remain in the PD state until Longhorn returns to production.
The default format for squeue now reports total nodes associated with a job rather than cores, tasks, or hardware threads. One reason for this change is clarity: the operating system sees each compute node's 160 hardware threads as "processors", and output based on that information can be ambiguous or otherwise difficult to interpret.
The default format lists all nodes assigned to displayed jobs; this can make the output difficult to read. A handy variation that suppresses the nodelist is:
login1$ squeue -o "%.10i %.12P %.12j %.9u %.2t %.9M %.6D" # suppress nodelist
The squeue command's --start option displays job start times, including very rough estimates for the expected start times of some pending jobs that are relatively high in the queue:
login1$ squeue --start -j 167635 # display estimated start time for job 167635
The showq utility mimics a tool that originated in the PBS project, and serves as a popular alternative to the Slurm squeue command:
login1$ showq              # show all jobs; default format
login1$ showq -u           # show your own jobs
login1$ showq -U bjones    # show jobs associated with user bjones
login1$ showq -h           # more info
The output groups jobs in four categories. A BLOCKED job is one that cannot yet run due to temporary circumstances (e.g. a pending maintenance or other large reservation).
If your waiting job cannot complete before a maintenance/reservation begins, showq will display its state as **WaitNod** ("Waiting for Nodes"). The job will remain in this state until Longhorn returns to production.
The default format for showq now reports total nodes associated with a job rather than cores, tasks, or hardware threads. One reason for this change is clarity: the operating system sees each compute node's 160 hardware threads as "processors", and output based on that information can be ambiguous or otherwise difficult to interpret.
It's not possible to add resources to a job (e.g. allow more time) once you've submitted the job to the queue.
To cancel a pending or running job, first determine its jobid, then use scancel:
login1$ squeue -u bjones    # one way to determine jobid
 JOBID   PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
170361        v100   spec12   bjones PD       0:00     32 (Resources)
login1$ scancel 170361      # cancel job
For detailed information about the configuration of a specific job, use scontrol:
login1$ scontrol show job=170361
To view some accounting data associated with your own jobs, use sacct:
login1$ sacct --starttime 2019-06-01 # show jobs that started on or after this date
You can use sbatch to help manage workflows that involve multiple steps: the --dependency option allows you to launch jobs that depend on the completion (or successful completion) of another job. For example you could use this technique to split into three jobs a workflow that requires you to (1) compile on a single node; then (2) compute on 40 nodes; then finally (3) post-process your results using 4 nodes.
login1$ sbatch --dependency=afterok:173210 myjobscript
For more information see the Slurm online documentation. Note that you can use $SLURM_JOBID from one job to find the jobid you'll need to construct the sbatch launch line for a subsequent one. But also remember that you can't use sbatch to submit a job from a compute node.
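One way to chain jobs from a login node is to capture each job's ID with sbatch's --parsable option; the script names below are placeholders:
login1$ jobid1=$(sbatch --parsable compile_job)                                 # step 1: compile
login1$ jobid2=$(sbatch --parsable --dependency=afterok:$jobid1 compute_job)    # step 2: compute
login1$ sbatch --dependency=afterok:$jobid2 postprocess_job                     # step 3: post-process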
You can discover already installed software using TACC's Software Search tool or by executing "module spider" or "module avail" on the command-line.
Users must provide their own license for commercial packages.
At this time, the following software packages are available on Longhorn:
login1$ module avail
-------------------------- /opt/apps/xl16/modulefiles --------------------------
   hdf5/1.10.4    netcdf/4.7.4    spectrum_mpi/10.3.0 (L)
---------------------------- /opt/apps/modulefiles -----------------------------
   TACC                   (L)    python3/powerai_1.6.2
   autotools/1.2          (L)    python3/powerai_1.7.0     (D)
   cmake/3.16.1           (L)    pytorch-py2/1.0.1
   conda/4.8.3                   pytorch-py2/1.1.0         (D)
   cuda/10.0              (g)    pytorch-py3/1.0.1
   cuda/10.1              (g)    pytorch-py3/1.1.0
   cuda/10.2              (g,D)  pytorch-py3/1.2.0
   gcc/4.9.3                     pytorch-py3/1.3.1         (D)
   gcc/6.3.0                     sanitytool/1.5
   gcc/7.3.0                     settarg
   gcc/9.1.0              (D)    tacc-singularity/3.5.3
   git/2.24.1             (L)    tacc_tips/0.5
   idev/1.5.7                    tensorflow-py2/1.13.1
   launcher_gpu/1.1              tensorflow-py2/1.14.0     (D)
   lmod                          tensorflow-py3/1.13.1
   pylauncher/3.1                tensorflow-py3/1.14.0
   python2/powerai_1.6.0         tensorflow-py3/1.15.2
   python2/powerai_1.6.1  (D)    tensorflow-py3/2.1.0      (D)
   python3/powerai_1.6.0         xalt/2.8.1                (L)
   python3/powerai_1.6.1         xl/16.1.1                 (L)
...
login$
When building software on Longhorn, we recommend using the IBM compilers and IBM Spectrum MPI stack. This will be the default in the early user period, but may change if we determine one of the other MPI stacks provides superior performance.
IBM XL is the recommended and default compiler suite on Longhorn. Here are simple examples that use the IBM compiler to build an executable from source code:
$ xlc -o myexe mycode.c        # C code
$ xlc++ -o myexe mycode.cpp    # C++ code
$ xlf90 -o myexe mycode.f      # Fortran code
See the published IBM documentation, available online, for information on optimization flags and other IBM compiler options.
The GNU foundation maintains a number of high quality compilers, including a compiler for C (gcc), C++ (g++), and Fortran (gfortran). The gcc compiler is the foundation underneath all three, and the term gcc often means the suite of these three GNU compilers.
Load a gcc module to access a recent version of the GNU compiler suite. Avoid using the GNU compilers that are available without a gcc module — those will be older versions based on the "system gcc" that comes as part of the Linux distribution.
Here are simple examples that use the GNU compilers to produce an executable from source code:
$ gcc mycode.c                      # C source file; executable a.out
$ gcc -o myexe mycode.c             # C source file; executable myexe
$ g++ -o myexe mycode.cpp           # C++ source file
$ gfortran -o myexe mycode.f90      # Fortran90 source file
$ gcc -fopenmp -o myexe mycode.c    # OpenMP; GNU flag is different than IBM
Note that some compiler options are the same for both IBM and GNU (e.g. "-o"), while others are different (e.g. "-qopenmp" vs "-fopenmp"). Many options are available in one compiler suite but not the other. See the online GNU documentation for information on optimization flags and other GNU compiler options.
Spectrum MPI (module load spectrum_mpi) and MVAPICH2 (module load mvapich2) are the two MPI libraries available on Longhorn. After loading an MPI module, compile and/or link using the appropriate MPI wrapper (mpicc, mpicxx, mpif90) in place of the compiler:
$ mpicc mycode.c -o myexe       # C source, full build
$ mpicc -c mycode.c             # C source, compile without linking
$ mpicxx mycode.cpp -o myexe    # C++ source, full build
$ mpif90 mycode.f90 -o myexe    # Fortran source, full build
These wrappers call the compiler with the options, include paths, and libraries necessary to produce an MPI executable using the MPI module you're using. To see the effect of a given wrapper, call it with the "-show" option:
$ mpicc -show # Show compile line generated by call to mpicc; similarly for other wrappers
NVIDIA's CUDA compiler and libraries are accessed by loading the CUDA module:
login1$ module load cuda
Use the nvcc compiler on the login node to compile code, and run executables on the compute nodes. Longhorn's V100 GPUs are compute capability 7.0 devices. When compiling your code, make sure to specify this level of capability with:
$ nvcc -arch=compute_70 -code=sm_70 ...
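Putting the pieces together, a typical compile-and-test cycle might look like the following sketch (the source and executable names are placeholders):
login1$ module load cuda
login1$ nvcc -arch=compute_70 -code=sm_70 -o mycode.cuda mycode.cu    # compile on a login node
login1$ idev                                                          # move to a compute node to run
...
c001-005$ ./mycode.cuda                                               # run on the compute node's GPUs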
The NVIDIA CUDA debugger is cuda-gdb. Applications must be debugged through an interactive idev session. Please see the relevant idev section for more details.
The NVIDIA Compute Visual Profiler, computeprof, can be used to profile both CUDA and OpenCL programs that have been developed in the NVIDIA CUDA/OpenCL programming environment. Since the profiler is X based, it must be run either within a VNC session or by ssh-ing into an allocated compute node with X-forwarding enabled. The profiler command and library paths are added to the $PATH and $LD_LIBRARY_PATH variables by the CUDA module.
For further information on the CUDA compiler, programming, the API, and debugger, see the following documentation:
You are welcome to install packages in your own $HOME or $SCRATCH directories. No super-user privileges are needed; simply use the "--prefix" option when configuring and making the package.
You're welcome to download third-party research software and install it in your own account. In most cases you'll want to download the source code and build the software so it's compatible with the Longhorn software environment. You can't use yum or any other installation process that requires elevated privileges, but this is almost never necessary. The key is to specify an installation directory for which you have write permissions. Details vary; you should consult the package's documentation and be prepared to experiment. When using the famous three-step autotools build process, the standard approach is to use the "--prefix" option to specify a non-default, user-owned installation directory at the time you execute configure:
$ export INSTALLDIR=$SCRATCH/apps/t3pio
$ ./configure --prefix=$INSTALLDIR
$ make
$ make install
Other languages, frameworks, and build systems generally have equivalent mechanisms for installing software in user space. In most cases a web search like "Python Linux install local" will get you the information you need.
In Python, a local install will resemble one of the following examples:
$ pip install netCDF4 --user                   # install netCDF4 package to $HOME/.local
$ python3 setup.py install --user              # install to $HOME/.local
$ pip3 install netCDF4 --prefix=$INSTALLDIR    # custom location; add to PYTHONPATH
Similarly in R:
$ module load Rstats              # load TACC's default R
$ R                               # launch R
> install.packages('devtools')    # R will prompt for install location
You may, of course, need to customize the build process in other ways. It's likely, for example, that you'll need to edit a makefile or other build artifacts to specify Longhorn-specific include and library paths or other compiler settings. A good way to proceed is to write a shell script that implements the entire process: definitions of environment variables, module commands, and calls to the build utilities. Include echo statements with appropriate diagnostics. Run the script until you encounter an error. Research and fix the current problem. Document your experience in the script itself, including dead-ends, alternatives, and lessons learned. Re-run the script to get to the next error, then repeat until done. When you're finished, you'll have a repeatable process that you can archive until it's time to update the software or move to a new machine.
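A skeleton of such a build script might look like this; the package name, paths, and module choices are placeholders for your own software:
#!/bin/bash
# build_mypkg.sh -- repeatable build recipe for a third-party package (sketch)
module load gcc                            # record the compiler and modules you used
export INSTALLDIR=$SCRATCH/apps/mypkg
echo "Installing into $INSTALLDIR"
cd $SCRATCH/builds/mypkg-1.0 || exit 1
./configure --prefix=$INSTALLDIR           # adjust options for Longhorn-specific paths
echo "configure exited with status $?"
make && make install
echo "build exited with status $?"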
If you wish to share a software package with collaborators, you may need to modify file permissions. See Sharing Files with Collaborators for more information.
The conda module can be loaded with:
$ module load conda
Then, list the available conda environments:
$ conda env list
Environments can be activated with:
$ conda activate [environment]
In this case, [environment] is a place-holder for the name of a specific environment. When finished using an environment, it can be exited by either deactivating the environment:
$ source deactivate
or by unloading the module:
$ module unload conda
While you can technically install local packages to your ~/.local directory with pip, they will be detected by other environments, which may cause issues since they supersede all others. Instead, we recommend that you install packages directly into a cloned or created environment where you have write permissions.
$ conda create -n new_env python=3 tensorflow
$ conda activate new_env
$ conda install [new package]
$ pip install [new package]
Note that pip works here because the environment was activated.
To clone an existing environment and add packages to the clone:
$ conda create --name myclone --clone py2_powerai_1.6.1
$ conda install -n myclone [new package]
Longhorn nodes are a PowerPC architecture, so only pure Python packages and code compiled for PowerPC will run on them. With that said, packages can be searched directly with conda or pip on the command line:
$ conda search tensorflow-gpu
$ pip search quicksect
or browsed online. Once again, look for packages that support either the "any" or "ppc64le" architecture.
Longhorn uses the IBM Watson Machine Learning CE platform for machine learning frameworks and packages. Packages are distributed via Anaconda Python through the WMLCE repository. While you may be used to using pip to install the latest versions of your preferred machine learning frameworks, we recommend using this repository for several reasons:
- The modules and environments are tested by IBM before release
- Each PowerAI release contains a curated ecosystem of machine learning packages precompiled for PowerPC and GPU execution
- The environments are functional and known, so we can provide support for these packages
Each version of PowerAI supported by Longhorn is cached on the file system and installed in both Python 2 and 3 environments when possible.
$ module load conda
$ conda env list
# conda environments:
#
base               *  /scratch/apps/conda/4.8.3
py2_powerai_1.6.0     /scratch/apps/conda/4.8.3/envs/py2_powerai_1.6.0
py2_powerai_1.6.1     /scratch/apps/conda/4.8.3/envs/py2_powerai_1.6.1
py3_powerai_1.6.0     /scratch/apps/conda/4.8.3/envs/py3_powerai_1.6.0
py3_powerai_1.6.1     /scratch/apps/conda/4.8.3/envs/py3_powerai_1.6.1
py3_powerai_1.6.2     /scratch/apps/conda/4.8.3/envs/py3_powerai_1.6.2
py3_powerai_1.7.0     /scratch/apps/conda/4.8.3/envs/py3_powerai_1.7.0
These environments contain the following machine learning packages:
To increase the visibility of these environments and packages, we have also exposed some through standard LMOD modules.
$ ml avail
---------------- /opt/apps/modulefiles --------------------
   conda/4.8.3            (L,D)  pytorch-py3/1.1.0
   python2/powerai_1.6.0         pytorch-py3/1.2.0
   python2/powerai_1.6.1  (D)    pytorch-py3/1.3.1       (D)
   python3/powerai_1.6.0         tensorflow-py2/1.13.1
   python3/powerai_1.6.1         tensorflow-py2/1.14.0   (D)
   python3/powerai_1.6.2         tensorflow-py3/1.13.1
   python3/powerai_1.7.0  (D)    tensorflow-py3/1.14.0
   pytorch-py2/1.0.1             tensorflow-py3/1.15.2
   pytorch-py2/1.1.0      (D)    tensorflow-py3/2.1.0    (D)
   pytorch-py3/1.0.1
Notice that loading the tensorflow-py3/1.15.2 module also loads the python3/powerai_1.6.2 module, which loads the py3_powerai_1.6.2 conda environment. That is because each tensorflow and pytorch package redirects to and loads the PowerAI distribution from where it originated.
While you can create conda environments on the login nodes without affecting other users, you must move to a compute node (e.g. via an idev session) when running code.
# Allocate a compute node in the development queue for 30 minutes
$ idev -m 30 -p development
$ module load tensorflow-py3/1.15.2
(py3_powerai_1.6.2)$ python -c 'import tensorflow; print(tensorflow.test.is_gpu_available())';
2020-04-20 17:32:29.440946: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
...
2020-04-20 17:32:35.278808: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/device:GPU:3 with 14927 MB memory) -> physical GPU (device: 3, name: Tesla V100-SXM2-16GB, pci bus id: 0035:04:00.0, compute capability: 7.0)
True
Note that the "
(py3_powerai_1.6.2)" decorator is prefixed to your shell's
$PS1 prompt indicating which Conda environment was loaded.
$ module load pytorch-py3/1.2.0
(py3_powerai_1.6.2)$ python -c 'import torch; print(torch.cuda.is_available())';
True
See the PyTorch documentation for additional information.
Each PowerAI environment contains Horovod for distributed deep learning. Horovod requires minimal changes to your code to split your data batches across multiple GPUs and nodes. Below is an example of running the TensorFlow benchmark suite on two Longhorn nodes with 8 GPUs in total using ibrun:
# Allocate compute nodes
login1$ idev -N 2 -n 8 -p v100

# Load TensorFlow 2.1.0
c002-001$ module load tensorflow-py3/2.1.0

# Download and checkout benchmarks compatible with TF 2.1
c002-001$ git clone --branch cnn_tf_v2.1_compatible https://github.com/tensorflow/benchmarks.git
c002-001$ cd benchmarks

# Launch with ibrun
c002-001$ ibrun -n 8 python scripts/tf_cnn_benchmarks/tf_cnn_benchmarks.py --num_gpus=1 \
    --model resnet50 --batch_size 32 --num_batches 100 --variable_update=horovod
TACC: Starting up job 22832
TACC: Setting up parallel environment for OpenMPI mpirun.
TACC: Starting parallel tasks...
...
----------------------------------------------------------------
total images/sec: 2560.04
----------------------------------------------------------------
TACC: Shutdown complete. Exiting.
Official PowerAI documentation references IBM DDL and ddlrun, but we found no significant performance difference between it and NCCL with ibrun.
Longhorn provides integrated support for Singularity – a containerization platform that enables users to access software and libraries that are not otherwise available in the Longhorn module system. Singularity is the containerization platform of choice for all TACC HPC systems because users can pull / run / shell without escalated privileges, MPI and GPUs are supported, and it is compatible with Docker.
To make the experience seamless, our implementation injects mount points and environment variables into the container to match the HPC system environment: the $HOME and $SCRATCH file systems will be identical to what users see natively on any Longhorn node.
To get started with Singularity, first load the tacc-singularity module:
$ module load tacc-singularity
All singularity commands must be run on a compute node. Example commands for the most common Singularity functions include:
Pull a Singularity-compatible image from Docker Hub
login1$ idev
...
c001-005$ singularity pull docker://python:3.8.0
...
INFO:    Creating SIF file...
INFO:    Build complete: python_3.8.0.sif
Start an interactive session inside the container
c001-005$ singularity shell python_3.8.0.sif
Singularity python_3.8.0.sif:~/singularity> python3 --version
Python 3.8.0
Singularity python_3.8.0.sif:~/singularity> exit
Execute a command inside the container
c001-005$ singularity exec python_3.8.0.sif python3 --version
Python 3.8.0
Run the default container command (not supported by all containers)
c001-005$ singularity run python_3.8.0.sif
Python 3.8.0 (default, Nov 23 2019, 09:02:13)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
Note that (unlike other TACC machines) Longhorn nodes are a PowerPC architecture (Power PC 64 LE). Thus, when pulling images from (e.g.) Docker Hub, make sure the image is Power PC 64 LE compatible. Singularity will automatically pull the correct architecture, if it exists.
Tip: The search form on Docker Hub can be filtered by Power PC architecture: https://hub.docker.com/search?q=&type=image&architecture=ppc64le
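If your container needs the node's GPUs, Singularity's --nv flag makes the NVIDIA devices and driver libraries visible inside the container. A quick check might look like this (the image name is a placeholder and must be a ppc64le build):
c001-005$ singularity exec --nv myimage.sif nvidia-smi    # the GPUs should be listed inside the container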
TACC Consulting operates from 8am to 5pm CST, Monday through Friday, except for holidays. You can submit a help desk ticket at any time via the TACC User Portal with "Longhorn" in the Resource field. Help the consulting staff help you by following these best practices when submitting tickets.
Do your homework before submitting a help desk ticket. What does the user guide and other documentation say? Search the internet for key phrases in your error logs; that's probably what the consultants answering your ticket are going to do. What have you changed since the last time your job succeeded?
Describe your issue as precisely and completely as you can: what you did, what happened, verbatim error messages, other meaningful output. When appropriate, include the information a consultant would need to find your artifacts and understand your workflow: e.g. the directory containing your build and/or job script; the modules you were using; relevant job numbers; and recent changes in your workflow that could affect or explain the behavior you're observing.
Subscribe to Longhorn User News. This is the best way to keep abreast of maintenance schedules, system outages, and other general interest items.
Have realistic expectations. Consultants can address system issues and answer questions about Longhorn. But they can't teach parallel programming in a ticket, and may know nothing about the package you downloaded. They may offer general advice that will help you build, debug, optimize, or modify your code, but you shouldn't expect them to do these things for you.
Be patient. It may take a business day for a consultant to get back to you, especially if your issue is complex. It might take an exchange or two before you and the consultant are on the same page. If the admins disable your account, it's not punitive. When the file system is in danger of crashing, or a login node hangs, they don't have time to notify you before taking action.