Ranch User Guide
Last update: March 23, 2019 17:58


03/25/2019 The Ranch system has transitioned from our Oracle HSM (hierarchical storage manager) based system to a new Quantum HSM system.

Consult the Ranch Transition to Quantum Archiving System document for detailed information on what this means for you.

  • 03/23/19 In the new system, users will find a symbolic link (./old_HSM) that leads to their data on the Oracle HSM system, which is mounted as a read-only filesystem. The new system has tighter limits on inode usage (file count, to you and me), so we ask users to think about the data they need, tar it up, and direct it to their new home directory. Please see the "Organizing Your Data" section for more information.

  • 03/23/19 When data is accessed on the old Oracle HSM system (from the /old_HSM link), your terminal may appear to hang. This is the Oracle HSM system staging data from tape(s), which can take a few minutes for some files and significantly longer for very large files. As a metric to think about: large files (5GB+) stream quickly off tape, at around 250MB/sec. Small files force the tape drive to stop, start, and reposition, which significantly slows down transfers.

  • 03/23/19 All of your current data will remain available in the old_HSM directory through Mar 31, 2020. After that date the link will be removed; however, your data will still be available upon request for some time.
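The first bullet above asks users to tar up the data they need before directing it to their new home directory. A minimal local sketch of why this helps with inode limits (the directory and file names below are hypothetical):

```shell
# Bundle a directory of small files into one tar archive, so it
# consumes a single inode on the archive system instead of many.
mkdir -p my_project
touch my_project/result_01.dat my_project/result_02.dat my_project/result_03.dat

tar -cf my_project.tar my_project/     # one archive file replaces many inodes
tar -tf my_project.tar                 # list members to verify the bundle
```

The resulting my_project.tar can then be moved to your new Ranch home directory as a single file.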


TACC's High Performance Computing (HPC) systems are used primarily for scientific computing and although their disk systems are large, they are not big enough to keep up with the data generated on the systems. The Ranch system fills this need for high capacity storage, by providing a massive, high-performance file system for archival purposes.

Ranch (ranch.tacc.utexas.edu) is now a Quantum StorNext filesystem based system, with a DDN-provided front-end disk system (30PB raw) and a Quantum Scalar i6000 tape library.

Ranch is an allocated resource, meaning the system is available only to users with an allocation on one of TACC's computational resources such as Stampede2, Lonestar 5, or Maverick2. XSEDE PIs will be prompted automatically for the companion storage allocation amount as part of the XRAS submission request. UT and UT system PIs should likewise request, and justify, the amount of storage needed when applying for an allocation. The default allocation for XSEDE, UT, and UT affiliate users on Ranch is 2TB. To request additional Ranch storage for your allocation, please submit a TACC user portal ticket.

Intended Use

Ranch consists of long-term tape storage and is designed for archiving data that was either produced on TACC's systems or processed on them. Ranch is not meant for active data, nor is it intended to be a replication solution for your "/scratch" directory. Ranch is also not suitable for system backups, due to the large number of small files they inevitably generate and the nature of a tape-based archive. The Ranch system provides a single backup copy of project-related data.

Note: Ranch is an archival system. The Ranch system is not backed up or replicated. This means that Ranch contains a single copy of user data. While lost data due to tape damage is rare, please keep this possibility in mind when making data management plans. If you have irreplaceable data and would like a different level of service, please let us know via the ticketing system, and we can help you with a solution.

System Configuration

Ranch's primary storage system is a DDN SFA14K DCR (Declustered RAID) based system managed by Quantum's StorNext filesystem. The raw capacity is around 30PB, with about 17PB usable for user data. Metadata is stored on a Quantum SSD-based appliance. The backend tape library, where files migrate after they have sat untouched on disk for a period of time (currently a few weeks, though this will be tuned), is a Quantum Scalar i6000 with LTO-8 tapes, each with an uncompressed capacity of 12 TB. The compressed capacity of an LTO-8 tape is around 30 TB, but that assumes highly compressible data.

Formerly, the Ranch system was based on Oracle's HSM system, with two SL8500 libraries, each with 20,000 tape slots. This system will remain as a backend system while we transition data from the old libraries to the new one.

System Access

Direct login to Ranch via Secure Shell's ssh command is allowed so you can create directories and manage files. The Ranch archive system cannot be mounted on a remote system.

stampede2$ ssh taccusername@ranch.tacc.utexas.edu

Ranch Environment Variables

The preferred way of accessing Ranch, especially from scripts, is by using the TACC-defined environment variables $ARCHIVER and $ARCHIVE. These variables, defined on all TACC resources, define the hostname of the current TACC archival system, $ARCHIVER, and each account's personal archival space, $ARCHIVE. These environment variables help ensure that scripts will continue to work, even if the underlying system configuration changes in the future.

If you are trying to access data on the old part of Ranch that you haven't yet transitioned to the new Quantum StorNext based portion, you can add the old_HSM directory into the paths defined in your scripts and still read from Ranch that way. Since the old filesystem is mounted read-only, you won't be able to send data into the old_HSM directory structure.

Accessing Files from Within Running Programs

Ranch access is not allowed from within running jobs on other TACC resources. Data must be first transferred from Ranch to your compute resource in order to be available to running jobs.

Citizenship on Ranch

  • Limit rsync and scp transfers to no more than two concurrent processes
  • Follow the procedures for archiving data
  • Store only data that was processed, or generated, on TACC's systems
  • Delete all unneeded data under your account
  • Do not store workstation or other system backups

Organizing Your Data

Based on past performance (predominantly the total retrieval time for a given data set), we recommend an average file size of 300GB - 1TB. Smaller files drastically slow down retrieval rates when multiple files are recalled from tape: for example, retrieving a 100TB data collection with a 100GB average file size will be an order of magnitude faster than retrieving one whose average file size is 1GB or less. The new environment is designed to make ~100TB data sets available in a few days or less, instead of weeks, which is possible only when the average file size is large enough.
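The effect of average file size on retrieval time can be illustrated with a back-of-envelope shell calculation. The ~250MB/sec streaming rate comes from this guide; the 60-second per-file repositioning overhead is an assumed, illustrative figure, not a measured one.

```shell
# Rough retrieval-time estimate for a 100 TB collection.
TOTAL_MB=$((100 * 1000 * 1000))   # 100 TB expressed in MB
RATE=250                          # streaming rate, MB/s (from the text)
OVERHEAD=60                       # assumed seconds of tape repositioning per file

# 100 GB average file size -> 1,000 files to recall
few_files=$(( TOTAL_MB / RATE + 1000 * OVERHEAD ))
# 1 GB average file size -> 100,000 files to recall
many_files=$(( TOTAL_MB / RATE + 100000 * OVERHEAD ))

echo "100 GB files: ~$(( few_files / 3600 )) hours"   # ~127 hours
echo "1 GB files:   ~$(( many_files / 3600 )) hours"  # ~1777 hours
```

Even with a modest per-file penalty, the small-file case is an order of magnitude slower, which is the behavior described above.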

Monitor your Ranch Disk Usage and File Counts

Users can check their current and historical Ranch usage by looking at the contents of the "HSM_usage" file in their Ranch home directory:

ranch$ tail ~/HSM_usage

This file is updated nightly as a convenience to the user. The data fields within this file show the files and storage in use both on-line and in the Ranch tape archives, as well as the quotas for each currently in effect. Each entry also shows the date and time of its update. Do not delete or edit this file.

Transferring Data

To maximize the efficiency of data archiving and retrieval, data should be transferred in large files. Small files don't do well on tape, so they should be combined with others into a "tar" file wherever possible (the term "tar" is derived from (t)ape (ar)chive). Very large files (5 TB+) can also be a problem, since their contents can be split across multiple tapes, increasing the chances of problems when retrieving the data. Use the UNIX split utility on very large files (1 TB+), and tar up small files into chunks between 10 GB and 300 GB in size. This allows the archiver to work optimally.

Retrieving Files from Ranch

Since Ranch is an archive system, any files which have not been accessed recently will be stored on tape. To access files stored offline, they must be "staged" from tape, which happens automatically with tools like rsync and scp. We ask that you use the Unix tar command or another utility to bundle large numbers of small files together before transferring to Ranch, for more efficient storage and retrieval.

Ranch performs best on large files (10GB to 250GB). If you need a single file from a large tarball, it can easily be extracted without extracting the whole tarball. Due to the nature of the tapes that Ranch uses, it is quicker to read a single large file than it is to read multiple small files.
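Extracting a single member from a tarball works by naming that member on the tar command line. A local sketch with hypothetical file and directory names:

```shell
# Build a small example tarball, then pull back just one member.
mkdir -p dataset
echo "alpha" > dataset/run_a.txt
echo "beta"  > dataset/run_b.txt
tar -cf dataset.tar dataset/
rm -rf dataset                    # simulate the originals being gone

# Extract only the file we need; the rest stays packed in the archive.
tar -xf dataset.tar dataset/run_a.txt
cat dataset/run_a.txt             # -> alpha
```

Use "tar -tf dataset.tar" first if you need to find the exact member path inside the archive.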

Large numbers of small files are hard for our tape drives to read back from tape, since the drives need to stop and start for every file. Instead of reading steadily at 252MB/sec, a drive crawling through many tiny files may take a week to stage them back to disk, which occupies the drive and prevents other users from accessing their data.

Limit your scp processes to no more than four at a time.

Data Transfer Methods

TACC supports two transfer mechanisms: scp (recommended) and rsync (avoid if possible).


The simplest way to transfer files to and from Ranch is to use the Secure Shell "scp" command:

stampede2$ scp myfile ${ARCHIVER}:${ARCHIVE}/myfilepath

where "myfile" is the name of the file to copy and "myfilepath" is the path to the archive on Ranch. For large numbers of files, we strongly recommend you employ the Unix "tar" command to create an archive of one or more directories before transferring the data to Ranch, or as part of the transfer process.

To create a "tar" archive of a directory and copy it to Ranch in a single step over ssh:

stampede2$ tar cvf - dirname | ssh ${ARCHIVER} "cat > ${ARCHIVE}/mytarfile.tar"

where "dirname" is the path to the directory you want to archive, and "mytarfile.tar" is the name of the archive to be created on Ranch.

Note that when transferring to Ranch, the destination directory/ies must already exist. If not, scp will respond with:

No such file or directory

The following command-line examples demonstrate how to transfer files to and from Ranch using scp.

  • copy "mynewfile" from Stampede2 to Ranch:

    stampede2$ scp mynewfile ${ARCHIVER}:${ARCHIVE}/mynewfilename
  • copy "myoldfile" from Ranch to my computer

    stampede2$ scp ${ARCHIVER}:${ARCHIVE}/myoldfile .


The UNIX rsync command is another way to keep archives up-to-date. Rather than transferring entire files, rsync transfers only the actual changed parts of a file. This method has the advantage over the scp command in that it can recover if there is an error in the transfer. Enter "rsync -h" for detailed help on this command.

A huge downside to rsync, however, is that it stages data before it can start the sync, which can lead to many unnecessary staging calls and waste resources. In general, it is a bad idea to rsync a whole directory, and it is especially ill-suited to a tape-based archive system like ours.

On the new Quantum StorNext filesystem, data will stay on the front-end disk significantly longer than it did on the previous system, thanks to a much larger front-end disk system. This means that data recently sent to Ranch can safely be rsync'ed. If the data has been on the system for a significant time (around a month, though we will tune that variable over time), it may have migrated to tape, where rsync will cause the same problems it did on the old Oracle HSM system.

Large Data Transfers

If you are moving a very large amount of data to Ranch and you encounter a quota limit error, you are bumping into the limit of data you can have on Ranch's cache system. There are limits on cache inode usage and disk block usage, but these should only affect a few very heavy users and do not affect a user's total allocation on the Ranch archival system. If you encounter a quota error, please submit a ticket to the TACC user portal and we will work with you to make sure your data is transferred as efficiently as possible. The limits merely prevent the system from becoming unexpectedly overloaded, thus maintaining good service for all users.

Use the "du -h" command to see how much data you have on the disk.

Archive a large directory with tar and move it to Ranch while splitting it into smaller parts, e.g.:

stampede2$ tar -cvf - /directory/ | ssh ranch.tacc.utexas.edu 'cd /your_ranch_path/ && split -b 1024m - files.tar.'

Alternatively, you can split large output files, or tar files, on the Stampede2 side, then move them to Ranch.

Large files (more than a few TB) should be split into chunks, preferably between 10GB and 500GB. Use the split command on Stampede2 to accomplish this:

stampede2$ split -b 300G myverybigfile.tar my_file_part_

The above example will create several 300GB files, with the filenames: my_file_part_aa, my_file_part_ab, my_file_part_ac, etc.

The split parts of a file can be joined together again with the "cat" command.

stampede2$ cat my_file_part_?? > myverybigfile.tar

See "man split" for more options.

Large collections of small files must be bundled into tar archives, called "tarballs", before being sent to Ranch. Even better, create the tar file en route to Ranch (that way there is no temporary tar file on the source filesystem).

The following example will create an archive of the my_small_files_directory in the current working directory:

stampede2$ tar -cvf my_data.tar my_small_files_directory/


For 24/7 assistance using Ranch, please submit a support ticket.