Maverick2 User Guide
Last update: June 18, 2020

Notices

  • All users: read Managing I/O on TACC Resources. TACC Staff have put forth new file system and job submission guidelines. (01/09/20)
  • Maverick2 is TACC's dedicated Deep Learning Machine. Allocation requests must include a justification explaining your need for this resource.
  • Maverick2 does not support any Visualization applications.

Introduction

Maverick2 extends TACC's services to support GPU-accelerated machine learning and deep learning research workloads. The power of the system lies in its multiple GPUs per node, and it is intended primarily for workloads that benefit from a dense cluster of GPUs with modest CPU compute. The system is designed to support model training via GPU-powered frameworks that can take advantage of the four GPUs in a node. In addition to the 96 NVidia GTX 1080 Ti cards, a limited number of Pascal (P100) and Volta (V100) cards are available for workloads that do not fit in the smaller memory footprint of the primary GPU cards. The system software supports TensorFlow and Caffe and can also be augmented to run other frameworks.


System Overview

Maverick2 hosts the following GPU nodes: 24 nodes, each with four NVidia GTX 1080 Ti GPUs in a Broadwell-based compute node; four nodes, each with two NVidia V100 GPUs in a Skylake-based Dell R740 node; and three nodes, each with two NVidia P100 GPUs in a Skylake-based Dell R740 node.

GTX Compute Nodes

Maverick2 hosts 24 GTX compute nodes. One GTX node is reserved for staff use, leaving 23 nodes available for general use.

Table 1. Maverick2 GTX Compute Node Specifications

Model: Super Micro X10DRG-Q Motherboard
Processor: Intel(R) Xeon(R) CPU E5-2620 v4
Total processors per node: 2
Total cores per processor: 8
Total cores per node: 16
Hardware threads per core: 2
Hardware threads per node: 32
Clock rate: 2.10GHz
RAM: 128 GB
L1/L2/L3 Cache: 512KiB / 2MiB / 20 MiB
Local storage: 150.0 GB (~60 GB free)
GPUs: 4 x NVidia 1080-TI GPUs

V100 Compute Nodes

Maverick2 has 4 V100 compute nodes.

Table 2. Maverick2 V100 Compute Node Specifications

Model: Dell PowerEdge R740
Processor: Xeon(R) Platinum 8160 CPU @ 2.10GHz
Total processors per node: 2
Total cores per processor: 24
Total cores per node: 48
Hardware threads per core: 2
Hardware threads per node: 96
Clock rate: 2.10GHz
RAM: 192 GB
L1/L2/L3 Cache: 1536KiB / 24576KiB / 33792KiB
Local storage: 119.5 GB (~32 GB free)
GPUs: 2 NVidia V100 adapters

P100 Compute Nodes

Maverick2 has 3 P100 nodes.

Table 3. Maverick2 P100 Compute Node Specifications

Model: Dell PowerEdge R740
Processor: Xeon(R) Platinum 8160 CPU @ 2.10GHz
Total processors per node: 2
Total cores per processor: 24
Total cores per node: 48
Hardware threads per core: 2
Hardware threads per node: 96
Clock rate: 2.10GHz
RAM: 192 GB
L1/L2/L3 Cache: 1536KiB / 24576KiB / 33792KiB
Local storage: 119.5 GB (~32 GB free)
GPUs: 2 NVidia P100 adapters

Login Nodes

Maverick2 hosts a single login node:

  • Dual Socket
  • Intel Xeon CPU E5-2660 v3 (Haswell) @ 2.60GHz: 10 cores/socket (20 cores/node)
  • 128 GB DDR4-2133 (8 x 16GB dual rank x4 DIMMS)
  • Hyperthreading Disabled

Network

  • Mellanox FDR Infiniband MT27500 Family ConnectX-3 Adapter
  • Up to 10/40/56 Gbps bandwidth and sub-microsecond latency
  • Fat Tree Interconnect
  • Intel Ethernet Controller I350 IEEE 802.3 1Gbps Adapter

File Systems

Maverick2 mounts two shared Lustre file systems on which each user has corresponding account-specific directories $HOME and $WORK. Each file system is available from all Maverick2 nodes; the Stockyard-hosted work file system is available on other TACC systems as well. A Lustre file system looks and acts like a single logical hard disk, but is actually a sophisticated integrated system involving many physical drives (dozens of physical drives for $HOME, and thousands for $WORK).

Lustre can stripe (distribute) large files over several physical disks, making it possible to deliver the high performance needed to service input/output (I/O) requests from hundreds of users across thousands of nodes. Object Storage Targets (OSTs) manage the file system's spinning disks: a file with 20 stripes, for example, is distributed across 20 OSTs. One designated Meta-Data Server (MDS) tracks the OSTs assigned to a file, as well as the file's descriptive data.

See Navigating the Shared File Systems below and consult the Shared Lustre File Systems section in the Stampede2 User Guide for best practices.

Table 4. Maverick2 File Systems

$HOME
    Quota: 10GB, 200,000 files
    Key features: Not intended for parallel or high-intensity file operations. Backed up regularly. Overall capacity ~1PB. NFS-mounted. Two Meta-Data Servers (MDS), four Object Storage Targets (OSTs). Defaults: 1 stripe, 1MB stripe size. Not purged.

$WORK
    Quota: 1TB, 3,000,000 files across all TACC systems, regardless of where on the file system the files reside.
    Key features: Not intended for high-intensity file operations or jobs involving very large files. On the Global Shared File System that is mounted on most TACC systems. See the Stockyard system description for more information. Defaults: 1 stripe, 1MB stripe size. Not backed up. Not purged.

$SCRATCH
    Maverick2 does not have a scratch file system.

Accessing the System

Access to all TACC systems now requires Multi-Factor Authentication (MFA). You can create an MFA pairing on the TACC User Portal. After logging into the portal, go to your account profile (Home->Account Profile), then click the "Manage" button under "Multi-Factor Authentication" on the right side of the page. See Multi-Factor Authentication at TACC for further information.

Secure Shell (SSH)

The "ssh" command (SSH protocol) is the standard way to connect to Maverick2. SSH also includes support for the file transfer utilities scp and sftp. Wikipedia is a good source of information on SSH. SSH is available within Linux and from the terminal app in the Mac OS. If you are using Windows, you will need an SSH client that supports the SSH-2 protocol: e.g. Bitvise, OpenSSH, PuTTY, or SecureCRT. Initiate a session using the ssh command or the equivalent; from the Linux command line the launch command looks like this:

localhost$ ssh username@maverick2.tacc.utexas.edu

Use your TACC User Portal (TUP) password for direct logins to Maverick2. Only users with an allocation on Maverick2 may log on. You can change your TACC password through the TACC User Portal. Log into the portal, then select "Change Password" under the "HOME" tab. If you've forgotten your password, go to the TACC User Portal home page and select "Password Reset" under the Home tab.

To report a connection problem, execute the ssh command with the "-vvv" option and include the verbose output when submitting a help ticket.
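
For example:

localhost$ ssh -vvv username@maverick2.tacc.utexas.edu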

Citizenship on Maverick2

You share Maverick2 with many, sometimes hundreds, of other users, and what you do on the system affects others. All users must follow a set of good practices which entail limiting activities that may impact the system for other users. Exercise good citizenship to ensure that your activity does not adversely impact the system and the research community with whom you share it.

TACC staff have developed the following guidelines for good citizenship on Maverick2. Please familiarize yourself especially with the first two mandates: do not run jobs on the login nodes, and follow the file system usage recommendations.

The next two sections discuss best practices for limiting and minimizing I/O activity and file transfers. Finally, we provide job submission tips for constructing job scripts.

Do Not Run Jobs on the Login Nodes

Maverick2's login nodes are shared among all users. Dozens (sometimes hundreds) of users may be logged on at one time, accessing the file systems. Hundreds of jobs may be running on the compute nodes, with hundreds more queued up to run. The login nodes provide an interface to the "back-end" compute nodes.

Think of the login nodes as a prep area where you can edit and manage files, compile code, issue file transfers, and submit and track batch jobs.

The compute nodes are where actual computations occur and where research is done. All batch jobs and executables, as well as development and debugging sessions, must be run on the compute nodes. To access compute nodes on TACC resources, one must either submit a job to a batch queue or initiate an interactive session using the idev utility.

A single user running computationally expensive or disk intensive tasks will negatively impact performance for other users. Running jobs on the login nodes is one of the fastest routes to account suspension. Instead, run on the compute nodes via an interactive session (idev) or by submitting a batch job.

Do not run jobs or perform intensive computational activity on the login nodes or the shared file systems ($WORK).
Doing so is the fastest route to account suspension.

  • Do not run research applications on the login nodes; this includes frameworks like MATLAB and R, as well as computationally or I/O intensive Python scripts. If you need interactive access, use the idev utility or Slurm's srun to schedule one or more compute nodes.

    DO THIS: Start an interactive session on a compute node and run Matlab.

      login1$ idev
      nid00181$ matlab

    DO NOT DO THIS: Run Matlab or other software packages on a login node

    login1$ matlab
  • Don't launch too many simultaneous processes; while it's fine to compile on a login node, a command like "make -j 16" (which compiles on 16 cores) may impact other users.

    DO THIS: build and submit a batch job. All batch jobs run on the compute nodes.

      login1$ make mytarget
      login1$ sbatch myjobscript

    DO NOT DO THIS: invoke multiple build sessions, run an executable on a login node.

      login1$ make -j 12
      login1$ ./myapp.exe
  • That script you wrote to poll job status should probably do so once every few minutes rather than several times a second.
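
For example, a polling loop that checks a job's status every few minutes rather than continuously might look like the rough sketch below, which assumes the job ID is stored in a (hypothetical) shell variable named JOBID:

login1$ while squeue -j $JOBID | grep -q $JOBID; do sleep 300; done   # check once every 5 minutes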

Follow File Systems Usage Recommendations

TACC resources, with a few exceptions, mount three file systems: /home, /work and /scratch. Please follow each file system's recommended usage.

Stockyard ($WORK)

The TACC Global Shared File System, Stockyard, is mounted on most TACC HPC resources as the /work ($WORK) directory. This file system is accessible to all TACC users and therefore experiences a lot of I/O activity (reading and writing to disk, opening and closing files) as users run jobs and read and generate data, including intermediate and checkpoint files. As TACC adds more users, the stress on the $WORK file system has increased to the point that TACC staff now recommend new job submission guidelines in order to reduce stress and I/O on Stockyard.

TACC staff now recommend that you run your jobs out of your resource's $SCRATCH file system instead of the global $WORK file system. To run your jobs out of $SCRATCH:

  • Copy or move all job input files to $SCRATCH
  • Make sure your job script directs all output to $SCRATCH

Consider $HOME and $WORK as places to store and keep track of important items. Actual job activity, reading and writing to disk, should be offloaded to your resource's $SCRATCH file system (see Table 5). You can start a job from anywhere, but the actual work of the job should occur only on the $SCRATCH partition. Keep original items in $HOME or $WORK so that you can copy them to $SCRATCH if you need to regenerate results.

  • Run I/O intensive jobs in $SCRATCH rather than $WORK. If you stress $WORK, you affect every user on every TACC system. Significant I/O might include reading/writing 100+ GBs to checkpoint/restart files, or running with 4096+ MPI tasks all reading/writing individual files, but is not limited to just those two cases.

Compute nodes should not reference $WORK except to stage data in and out before and after jobs.

A few other file system tips:

  • Don't run jobs in $HOME. The $HOME file system is for routine file management, not parallel jobs.

  • Avoid storing many small files in a single directory, and avoid workflows that require many small files. A few hundred files in a single directory is probably fine; tens of thousands is almost certainly too many. If you must use many small files, group them in separate directories of manageable size.

  • Watch all your file system quotas. If you're near your quota in $WORK and your job is repeatedly trying (and failing) to write to $WORK, you will stress that file system. If you're near your quota in $HOME, jobs run on any file system may fail, because all jobs write some data to the hidden $HOME/.slurm directory.
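
To check your quotas and allocation balances at any time, run the taccinfo script (also shown in the Job Accounting section below):

login1$ /usr/local/etc/taccinfo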

Table 5. File System Usage Recommendations

$HOME
    Best storage practices: cron jobs, small scripts, environment settings
    Best activities: compiling, editing

$WORK
    Best storage practices: software installations, original datasets that can't be reproduced, job scripts and templates
    Best activities: staging datasets

$SCRATCH
    Best storage practices: temporary datasets, I/O files, job files
    Best activities: all job I/O activity

Limit Input/Output (I/O) Activity

In addition to the file system tips above, it's important that your jobs limit all I/O activity. This section focuses on ways to avoid causing problems on the $HOME, $WORK, and $SCRATCH file systems.

  • Limit I/O intensive sessions (lots of reads and writes to disk, rapidly opening or closing many files)

  • Avoid opening and closing files repeatedly in tight loops. Every open/close operation on the file system requires interaction with the Meta-Data Server (MDS). The MDS acts as a gatekeeper for access to files on Lustre's parallel file system. Overloading the MDS will affect other users on the system. If possible, open files once at the beginning of your program/workflow, then close them at the end.

  • Don't get greedy. If you know or suspect your workflow is I/O intensive, don't submit a pile of simultaneous jobs. Writing restart/snapshot files can stress the file system; avoid doing so too frequently. Also, use the HDF5 or netCDF libraries to generate a single restart file in parallel, rather than having each process generate its own file.

If you know your jobs will require significant I/O, please submit a support ticket and an HPC consultant will work with you. See Managing I/O on TACC Resources for additional information.

Limit File Transfers

In order to not stress both internal and external networks:

  • Avoid too many simultaneous file transfers. You share the network bandwidth with other users; don't use more than your fair share. Two or three concurrent scp sessions is probably fine. Twenty is probably not.

  • Avoid recursive file transfers, especially those involving many small files. Create a tar archive before transfers. This is especially true when transferring files to or from Ranch.

  • When creating or transferring large files to Stockyard ($WORK), be sure to stripe the receiving directories. See Striping Large Files in the Stampede2 User Guide for more information.

Job Submission Tips

  • Request only the resources you need. Make sure your job scripts request only the resources that are needed for that job. Don't ask for more time or more nodes than you really need. The scheduler will have an easier time finding a slot for a job requesting 2 nodes for 2 hours than for a job requesting 4 nodes for 24 hours. This means shorter queue wait times for you and everybody else.

  • Test your submission scripts. Start small: make sure everything works on 2 nodes before you try 20. Work out submission bugs and kinks with 5 minute jobs that won't wait long in the queue and involve short, simple substitutes for your real workload: simple test problems; hello world codes; one-liners like ibrun hostname; or an ldd on your executable.

  • Respect memory limits and other system constraints. If your application needs more memory than is available, your job will fail, and may leave nodes in unusable states. Use TACC's Remora tool to monitor your application's needs.

Navigating the Shared File Systems

Maverick2 mounts two Lustre file systems that are shared across all nodes: the home and work file systems. Maverick2's startup mechanisms define corresponding account-level environment variables, $HOME and $WORK, that store the paths to directories that you own on each of these file systems. Consult the Maverick2 File Systems table (Table 4) above for the basic characteristics of these file systems, and the Good Citizenship section in the Stampede2 User Guide for tips on file system etiquette.

Maverick2's home file system is mounted only on Maverick2, but the work file system mounted on Maverick2 is the Global Shared File System hosted on Stockyard. This is the same work file system that is currently available on Stampede2, Wrangler, Lonestar5, and several other TACC resources.

The $STOCKYARD environment variable points to the highest-level directory that you own on the Global Shared File System. The definition of the $STOCKYARD environment variable is of course account-specific, but you will see the same value on all TACC systems that provide access to the Global Shared File System (see Figure 3). This directory is an excellent place to store files you want to access regularly from multiple TACC resources.

Your account-specific $WORK environment variable varies from system to system and (except for the decommissioned Stampede1 system) is a sub-directory of $STOCKYARD (Figure 3). The sub-directory name corresponds to the associated TACC resource. The $WORK environment variable on Maverick2 points to the $STOCKYARD/maverick2 subdirectory, a convenient location for files you use and jobs you run on Maverick2. Remember, however, that all subdirectories contained in your $STOCKYARD directory are available to you from any system that mounts the file system. If you have accounts on both Maverick2 and Stampede2, for example, the $STOCKYARD/maverick2 directory is available from your Stampede2 account, and $STOCKYARD/stampede2 is available from your Maverick2 account. Your quota and reported usage on the Global Shared File System reflects all files that you own on Stockyard, regardless of their actual location on the file system.

Note that resource-specific sub-directories of $STOCKYARD are nothing more than convenient ways to manage your resource-specific files. You have access to any such sub-directory from any TACC resource. If you are logged into Maverick2, for example, executing the alias cdw (equivalent to "cd $WORK") will take you to the resource-specific sub-directory $STOCKYARD/maverick2. But you can access this directory from other TACC systems as well by executing "cd $STOCKYARD/maverick2". These commands allow you to share files across TACC systems. In fact, several convenient account-level aliases make it even easier to navigate across the directories you own in the shared file systems:

Table 6. Built-in Account-Level Aliases

Alias          Command
cd or cdh      cd $HOME
cdw            cd $WORK
cdy or cdg     cd $STOCKYARD

Figure 3. Account-level directories on the work file system (Global Shared File System hosted on Stockyard). Example for fictitious user bjones. All directories are usable from all systems. Sub-directories (e.g. frontera, maverick2) exist only when you have allocations on the associated system.

Transferring Files Using scp and rsync

You can transfer files between Maverick2 and Linux-based systems using either scp or rsync. Both scp and rsync are available in the Mac Terminal app. Windows ssh clients typically include scp-based file transfer capabilities.

The Linux scp (secure copy) utility is a component of the OpenSSH suite. Assuming your Maverick2 username is bjones, a simple scp transfer that pushes a file named "myfile" from your local Linux system to Maverick2 $HOME would look like this:

localhost$ scp ./myfile bjones@maverick2.tacc.utexas.edu:  # note colon after net address

You can use wildcards, but you need to be careful about when and where you want wildcard expansion to occur. For example, to push all files ending in ".txt" from the current directory on your local machine to /work/01234/bjones/maverick2 on Maverick2:

localhost$ scp *.txt bjones@maverick2.tacc.utexas.edu:/work/01234/bjones/maverick2

To delay wildcard expansion until reaching Maverick2, use a backslash ("\") as an escape character before the wildcard. For example, to pull all files ending in ".txt" from /work/01234/bjones/maverick2 on Maverick2 to the current directory on your local system:

localhost$ scp bjones@maverick2.tacc.utexas.edu:/work/01234/bjones/maverick2/\*.txt .

You can of course use shell or environment variables in your calls to scp. For example:

localhost$ destdir="/work/01234/bjones/maverick2/data"
localhost$ scp ./myfile bjones@maverick2.tacc.utexas.edu:$destdir

You can also issue scp commands on your local client that use Maverick2 environment variables like $HOME and $WORK. To do so, use a backslash ("\") as an escape character before the "$"; this ensures that expansion occurs after establishing the connection to Maverick2:

localhost$ scp ./myfile bjones@maverick2.tacc.utexas.edu:\$WORK/data   # Note backslash

Avoid using scp for recursive ("-r") transfers of directories that contain nested directories of many small files:

localhost$ scp -r  ./mydata     bjones@maverick2.tacc.utexas.edu:\$WORK  # DON'T DO THIS

Instead, use tar to create an archive of the directory, then transfer the directory as a single file:

localhost$ tar cvf ./mydata.tar mydata                                   # create archive
localhost$ scp     ./mydata.tar bjones@maverick2.tacc.utexas.edu:\$WORK  # transfer archive

The rsync (remote synchronization) utility is a great way to synchronize files that you maintain on more than one system: when you transfer files using rsync, the utility copies only the changed portions of individual files. As a result, rsync is especially efficient when you only need to update a small fraction of a large dataset. The basic syntax is similar to scp:

localhost$ rsync       mybigfile bjones@maverick2.tacc.utexas.edu:\$WORK/data
localhost$ rsync -avtr mybigdir  bjones@maverick2.tacc.utexas.edu:\$WORK/data

The options on the second transfer are typical and appropriate when synching a directory: this is a recursive update ("-r") with verbose ("-v") feedback; the synchronization preserves time stamps ("-t") as well as symbolic links and other meta-data ("-a"). Because rsync only transfers changes, recursive updates with rsync may be less demanding than an equivalent recursive transfer with scp.

See Good Citizenship in the Stampede2 User Guide for additional important advice about striping the receiving directory when transferring large files; watching your quota on $HOME and $WORK; and limiting the number of simultaneous transfers. Remember also that $STOCKYARD (and your $WORK directory on each TACC resource) is available from several other TACC systems: there's no need for scp when both the source and destination involve sub-directories of $STOCKYARD. See Managing Your Files for more information about transfers on $STOCKYARD.

Sharing Files with Collaborators

If you wish to share files and data with collaborators in your project, see Sharing Project Files on TACC Systems for step-by-step instructions. Project managers or delegates can use Unix group permissions and commands to create read-only or read-write shared workspaces that function as data repositories and provide a common work area to all project members.
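
As a rough illustration only (see the document above for the full step-by-step procedure), a read-only shared workspace built with standard Unix group permissions might look like the sketch below; the project Unix group G-12345 and the directory name are hypothetical:

login1$ mkdir $WORK/shared_data                  # hypothetical shared directory on $WORK
login1$ chgrp -R G-12345 $WORK/shared_data       # assign the (hypothetical) project Unix group
login1$ chmod -R g+rX,o-rwx $WORK/shared_data    # group members may read and traverse; others have no access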

Notes on Small Files Under Lustre

The Stockyard/$WORK file system is a Lustre file system optimized for large-scale reads and writes. Because some workloads, such as image classification, rely on many small files, we advise users not to work directly on $WORK with these workloads. Instead, have your jobs copy these files to /tmp on the compute node, compute against the /tmp data, store the results on the $WORK file system, and clean up /tmp. We are currently working on solutions to expand the 60 GB /tmp capacity.
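
A minimal sketch of this staging pattern inside a batch job is shown below; the archive name, training script, and output paths are hypothetical:

# inside your job script (hypothetical file names)
tar xf $WORK/train_images.tar -C /tmp                          # stage the many small files onto node-local /tmp
python3 train.py --data /tmp/train_images --out /tmp/results   # compute against the /tmp copy
cp -r /tmp/results $WORK/results_$SLURM_JOB_ID                 # store results back on $WORK
rm -rf /tmp/train_images /tmp/results                          # clean up /tmp before the job ends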

Striping Large Files

Before transferring large files to Maverick2, or creating new large files, be sure to set an appropriate default stripe count on the receiving directory. To avoid exceeding your fair share of any given OST, a good rule of thumb is to allow at least one stripe for each 100GB in the file. For example, to set the default stripe count on the current directory to 30 (a plausible stripe count for a directory receiving a file approaching 3TB in size), execute:

$ lfs setstripe -c 30 $PWD

Note that an "lfs setstripe" command always sets both stripe count and stripe size, even if you explicitly specify only one or the other. Since the example above does not explicitly specify stripe size, the command will set the stripe size on the directory to Maverick2's system default (1MB). In general there's no need to customize stripe size when creating or transferring files.

Remember that it's not possible to change the striping on a file that already exists. Moreover, the "mv" command has no effect on a file's striping if the source and destination directories are on the same file system. You can, of course, use the "cp" command to create a second copy with different striping; to do so, copy the file to a directory with the intended stripe parameters.
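
For example, to give an existing large file a higher stripe count, copy it into a directory that already has the striping you want (the directory and file names below are hypothetical):

login1$ mkdir $WORK/striped_dir
login1$ lfs setstripe -c 30 $WORK/striped_dir        # directory default: 30 stripes
login1$ cp $WORK/bigfile.dat $WORK/striped_dir/      # the new copy inherits the directory's striping
login1$ lfs getstripe $WORK/striped_dir/bigfile.dat  # verify the stripe count of the copy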

Software on Maverick2

As of February 5, 2020, the following software modules are installed on Maverick2. Use TACC's Software Search tool or "module" commands (e.g., "module spider", "module avail") to retrieve the most up-to-date listing.

login1$ module avail

-------------------- /opt/apps/intel18/impi18_0/modulefiles --------------------
   boost/1.66                     phdf5/1.10.4   (D)
   fftw3/3.3.6                    pnetcdf/1.8.1
   parallel-netcdf/4.3.3.1        python2/2.7.16 (L,D)
   parallel-netcdf/4.6.2   (D)    python3/3.7.0  (D)

------------------------ /opt/apps/intel18/modulefiles -------------------------
   hdf5/1.8.16        mkl-dnn/0.18.1    netcdf/4.3.3.1        python3/3.7.0
   hdf5/1.10.4 (D)    nco/4.6.9         netcdf/4.6.2   (D)    udunits/2.2.25
   impi/18.0.2 (L)    ncview/2.1.7      python2/2.7.16

---------------------------- /opt/apps/modulefiles -----------------------------
   TACC          (L)      gcc/7.1.0                 matlab/2019a           (D)
   autotools/1.2 (L)      gcc/7.3.0        (D)      mcr/9.5
   cmake/3.8.2            git/2.24.1       (L)      mcr/9.6                (D)
   cmake/3.10.2           hwloc/1.11.2              ncl_ncarg/6.3.0
   cmake/3.16.1  (L,D)    idev/1.5.5                settarg
   cuda/8.0      (g)      intel/16.0.3              swr/18.3.3
   cuda/9.0      (g)      intel/17.0.4              tacc-singularity/2.6.0
   cuda/9.2      (g,D)    intel/18.0.2     (L,D)    tacc-singularity/3.4.2 (D)
   cuda/10.0     (g)      launcher_gpu/1.0          tacc_tips/0.5
   cuda/10.1     (g)      lmod                      xalt/2.6.12            (L)
   gcc/5.4.0              mathematica/12.0
   gcc/6.3.0              matlab/2018b

  Where:
   D:  Default Module
   L:  Module is loaded
   g:  built for GPU

Because of the limited local disk space on Maverick2, we are keeping the number of supported packages small in order to accommodate work on this system that is not possible or practical on other TACC systems.

Users must provide their own licenses for commercial packages. TACC will work on a best-effort basis with commercial vendors to support their software on the system, but makes no guarantee that licenses can migrate to our systems or can be supported within TACC's support framework.

You are welcome to install packages in your own $HOME or $WORK directories. No super-user privileges are needed; simply use the "--prefix" option when configuring, then make and install the package.
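
For example, a typical autotools-style installation into $WORK might look like the following sketch (the package name and version are hypothetical):

login1$ tar xzf mypackage-1.0.tar.gz
login1$ cd mypackage-1.0
login1$ ./configure --prefix=$WORK/apps/mypackage
login1$ make
login1$ make install
login1$ export PATH=$WORK/apps/mypackage/bin:$PATH   # make the newly installed tools visible in your environment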

Job Accounting

Maverick2's accounting system is based on node-hours: one unadjusted Service Unit (SU) represents a single compute node used for one hour (a node-hour). We then multiply by a charge rate that reflects supply and demand for the type of node you use. For any given job, the total cost in SUs is:

SUs billed (node-hrs) = ( # nodes ) x ( job duration in wall clock hours ) x ( charge rate per node-hour )
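
For example, a job that runs on 2 nodes for 4 wall clock hours in a queue charged at 1 SU per node-hour is billed 2 x 4 x 1 = 8 SUs.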

The system tracks and charges for usage to a granularity of a few seconds of wall clock time. The system charges only for the resources you actually use, not those you request. In general, your queue wait time will be less if you request only the time you need: the scheduler will have an easier time finding a slot for the 2 hours you really need than for the 48 hours you request in your job script.

Principal Investigators can monitor allocation usage via the TACC User Portal under "Allocations->Projects and Allocations". Be aware that the figures shown on the portal may lag behind the most recent usage. Projects and allocation balances are also displayed upon command-line login.

To display a summary of your TACC project balances and disk quotas at any time, execute:

login1$ /usr/local/etc/taccinfo # Generally more current than balances displayed on the portals.

Slurm Job Scheduler

Maverick2 employs the Slurm Workload Manager job scheduler. Slurm commands enable you to submit, manage, monitor, and control your jobs.

The Stampede2 User Guide discusses Slurm extensively; consult its Slurm-related sections for detailed information on submitting, monitoring, and managing jobs.

Slurm Partitions (Queues)

Queues and limits are subject to change without notice.

Execute "qlimits" on Maverick2 for real-time information regarding limits on available queues.

See Stampede2's Monitoring Jobs and Queues section for additional information.

Queue Name           Max Nodes per Job      Max Duration   Max Jobs in Queue   Charge Rate
(available nodes)    (assoc'd cores)                                           (per node-hour)
gtx (24 nodes)       4 nodes (64 cores)     24 hours       4                   1 SU
v100 (4 nodes)       4 nodes (64 cores)     24 hours       4                   1 SU
p100 (3 nodes)       3 nodes (48 cores)     24 hours       4                   1 SU
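
As a starting point, a minimal batch script for the gtx queue might look like the sketch below; the job name, project (allocation) name, and training script are hypothetical, and you should adjust the module versions to match what "module avail" reports:

#!/bin/bash
#SBATCH -J my_training_job        # hypothetical job name
#SBATCH -o my_training_job.o%j    # stdout file (%j expands to the job ID)
#SBATCH -p gtx                    # queue (partition) name from the table above
#SBATCH -N 1                      # one node (all 4 GTX GPUs on that node)
#SBATCH -n 16                     # total tasks (a GTX node has 16 cores)
#SBATCH -t 04:00:00               # run time (hh:mm:ss), within the 24-hour limit
#SBATCH -A myproject              # hypothetical allocation/project name

module load cuda/10.0             # one of the CUDA modules listed in the software section
python3 train.py                  # hypothetical training script

Submit the script and monitor the job from a login node:

login1$ sbatch myjobscript.slurm
login1$ squeue -u bjones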

Help Desk

TACC Consulting operates from 8am to 5pm CST, Monday through Friday, except for holidays. You can submit a help desk ticket at any time via the TACC User Portal with "Maverick2" in the Resource field. Help the consulting staff help you by following these best practices when submitting tickets.

  • Do your homework before submitting a help desk ticket. What do the user guide and other documentation say? Search the internet for key phrases in your error logs; that's probably what the consultants answering your ticket will do. What have you changed since the last time your job succeeded?

  • Describe your issue as precisely and completely as you can: what you did, what happened, verbatim error messages, other meaningful output. When appropriate, include the information a consultant would need to find your artifacts and understand your workflow: e.g. the directory containing your build and/or job script; the modules you were using; relevant job numbers; and recent changes in your workflow that could affect or explain the behavior you're observing.

  • Subscribe to Maverick2 User News. This is the best way to keep abreast of maintenance schedules, system outages, and other general interest items.

  • Have realistic expectations. Consultants can address system issues and answer questions about Maverick2. But they can't teach parallel programming in a ticket, and may know nothing about the package you downloaded. They may offer general advice that will help you build, debug, optimize, or modify your code, but you shouldn't expect them to do these things for you.

  • Be patient. It may take a business day for a consultant to get back to you, especially if your issue is complex. It might take an exchange or two before you and the consultant are on the same page. If the admins disable your account, it's not punitive. When the file system is in danger of crashing, or a login node hangs, they don't have time to notify you before taking action.
