Last update: January 31, 2020
The TACC Global Shared File System, Stockyard, is mounted on (nearly) all TACC HPC resources as the $WORK directory. As TACC's user base continues to expand, the stress on the Global Shared File System ($WORK) increases daily. This file system is accessible to all TACC users, and therefore experiences a lot of I/O activity (reading and writing to disk) as users run their jobs, reading and generating data including intermediate and checkpoint files.
The greatest stressor of the file system is heavy input/output (I/O): a program or executable that reads from or writes to disk an excessive amount. Examples of intensive I/O that could affect the system include, but are not limited to:
- reading/writing 100+ GB to checkpoint or output files
- running with 4096+ MPI tasks all reading/writing individual files
- Python jobs using more than 2-3 Python modules
The stress on the $WORK file system has increased to the extent that TACC staff now recommends new file system and job submission guidelines in order to maintain file system stability. If a user's jobs or activities are stressing the $WORK file system, then every other user's jobs and activities are impacted, and the system administrators may resort to cancelling your jobs and suspending your access to the queues.
If you know your jobs will require significant I/O, please submit a support ticket and an HPC consultant will work with you.
The $HOME and $WORK file systems are for storage and keeping track of important items. Actual job activity, reading and writing to disk, should be offloaded to your resource's $SCRATCH file system. You can start a job from anywhere, but the actual work of the job should occur only on the $SCRATCH partition. You can save original items to $WORK so that you can copy them over to $SCRATCH if you need to re-generate results.
This table outlines TACC's new recommended guidelines for file system usage:
|File System|Recommended Use|Notes|
|---|---|---|
|$HOME|cron jobs, scripts and templates, environment settings|each user's $HOME directory is backed up|
|$WORK|software installations, original datasets that can't be reproduced|The Stockyard file system is NOT backed up. Ensure that your data is backed up to Ranch long-term storage.|
|$SCRATCH|reproducible datasets, I/O files: temporary files, checkpoint/restart files, job output files|All $SCRATCH file systems are subject to periodic purges.|
Here we present some Best Practices aimed at minimizing I/O impact on all TACC resources. The purpose of these guidelines is to move I/O activity away from the shared file system, $WORK, onto each resource's own local storage: usually the $SCRATCH file system.
Additionally, each compute node has a local /tmp directory on it. You can use /tmp to read/write files that do not need to be accessed by other tasks. If this output data is needed at the end of the job, the files may be copied from /tmp to your $SCRATCH directory at the end of your batch script. This will greatly reduce the load on the file system and may provide a performance improvement.
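As an illustration, here is a minimal sketch of a single-node batch script that does its heavy I/O in /tmp and copies only the final results back to $SCRATCH. The program name myprogram, the input file myinputdata, and the run directory names are hypothetical placeholders:

```
#!/bin/bash
#SBATCH -J tmp-io-sketch     # hypothetical job name
#SBATCH -N 1                 # one node: /tmp is local to each compute node
#SBATCH -n 1
#SBATCH -t 01:00:00

# stage the executable and input into the node-local /tmp directory
mkdir -p /tmp/runA
cp $SCRATCH/myprogram $SCRATCH/myinputdata /tmp/runA/
cd /tmp/runA

# all reads and writes now hit the local disk, not the shared file system
./myprogram myinputdata > output

# /tmp vanishes when the job ends, so copy results back to $SCRATCH first
cp /tmp/runA/output $SCRATCH/runA-output
```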
Data stored in the /tmp directory is as temporary as its name indicates, lasting only for the duration of your job. Each MPI task will write output to the /tmp directory on the node on which it is running; MPI tasks cannot access data in /tmp on other nodes. Each system has a different amount of /tmp space. Use the table below as a guide to the space available in /tmp, and submit a support ticket for more help using this directory/storage.
|Compute Resource|Storage per Compute Node|
|---|---|
|Stampede2 SKX|144 GB|
|Stampede2 KNL|107 GB (normal, large queues); 32 GB (development queue)|
|Maverick2|32 GB (p100, v100 queues); 60 GB (gtx queue)|
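If you want to confirm how much local space is actually free on a node, the standard Linux df utility will report it from within a running job (for example, in an interactive session):

```
# report free space in the file system backing /tmp on this node
df -h /tmp
```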
Each TACC resource has its own Scratch file system, /scratch/*username*, accessible via the $SCRATCH environment variable. Scratch file systems are not shared across TACC resources and are specific to one resource. Scratch file systems have neither file count nor file size quotas, but are subject to periodic file purges should total disk usage exceed a safety threshold.
TACC staff recommends that you run your jobs out of your resource's $SCRATCH file system instead of the global $WORK file system. To run your jobs out of $SCRATCH, copy (stage) the entire executable/package, along with all needed job input files and/or libraries, to your resource's $SCRATCH file system. Compute nodes should not reference the $WORK file system unless it's to stage data in or out, and only before or after jobs.
Your job script should also direct the job's output to the local scratch directory:
```
# stage executable and data
cd $SCRATCH
mkdir testrunA
cp $WORK/myprogram testrunA
cp $WORK/jobinputdata testrunA

# launch program
ibrun testrunA/myprogram testrunA/jobinputdata > testrunA/output

# copy results back to permanent storage once the job is done
cp testrunA/output $WORK/savetestrunA
```
If your program regularly writes data to disk from each process, for instance for checkpointing, avoid writing output to a separate file for each process, as this will quickly overwhelm the metadata server. Instead, employ a library such as netcdf to write a single parallel file for the checkpoint. A one-time generation of one file per process (for instance at the end of your run) is less serious, but even then you should consider writing parallel files.
Alternatively, you could write these per-process files to each compute node's /tmp directory, as described above.
Jobs with multiple tasks that read and/or write to the same file will often hold that file open in order to accommodate the changes happening to it. Please make sure that your I/O activity is not being directed at a single file repeatedly. If the condition cannot be avoided, you can use /tmp on the node to store the file, as sketched below. If you require shared file operations, then please ensure your I/O is optimized.
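As a sketch of that workaround, the lines below copy a heavily-read file to node-local /tmp before the run. The names shared_input.dat and myprogram are hypothetical, and the sketch assumes a single-node job (multi-node jobs would need the copy repeated on every node):

```
# copy the heavily-accessed file to node-local /tmp once, before the run
cp $SCRATCH/shared_input.dat /tmp/shared_input.dat

# tasks read the local copy instead of repeatedly hitting one shared file
ibrun ./myprogram /tmp/shared_input.dat
```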
If you anticipate the need for multiple nodes or processes to write to a single file in parallel (aka single file with multiple writers/collective writers), please submit a support ticket for assistance.
If you are close to your file quota on either the $WORK or $HOME file system, your job may fail because it is unable to write output, and writing beyond quota puts additional stress on the file systems.
Principal Investigators can monitor allocation usage via the TACC User Portal under "Allocations->Projects and Allocations". Be aware that the figures shown on the portal may lag behind the most recent usage. Projects and allocation balances are also displayed upon command-line login.
To display a summary of your TACC project balances and disk quotas at any time, execute:
login1$ /usr/local/etc/taccinfo # Generally more current than balances displayed on the portals.
You can monitor your file system quotas and usage using the taccinfo command. This output also displays whenever you log on to a TACC resource.
```
---------------------- Project balances for user ----------------------
| Name       Avail SUs    Expires |  Name       Avail SUs    Expires |
| Allocation        -1            |  Alloc         -10037            |
------------------------ Disk quotas for user -------------------------
| Disk       Usage (GB)    Limit  %Used   File Usage     Limit  %Used |
| /home1            1.5     25.0   6.02          741    400000   0.19 |
| /work           107.5   1024.0  10.50         2434   3000000   0.08 |
| /scratch1         0.0      0.0   0.00            3         0   0.00 |
| /scratch3     41829.5      0.0   0.00       246295         0   0.00 |
-------------------------------------------------------------------------------
```
TACC staff has developed a tool called OOOPS (Optimal Overloaded I/O Protection System) to reduce the footprint of high-I/O jobs. For jobs that have a particularly high I/O footprint, we now ask that users employ the OOOPS module to help govern their I/O activity. To deploy the OOOPS module, add the following lines to your job script after your Slurm directives but before your executable:
```
module use /work/01255/siliu/stampede2/ooops/modulefiles/
module load ooops
export IO_LIMIT_CONFIG=/work/01255/siliu/stampede2/ooops/1.0/conf/config_low
set_io_param 0 low
```
The first argument to set_io_param can be set to either 0, to indicate the $SCRATCH file system, or 1, to indicate the $WORK file system. These instructions will allow the system to modulate your job's I/O activity in a way that reduces the impact on the shared file system.
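For example, under that reading of the arguments, a job whose heavy I/O targets $WORK rather than $SCRATCH would instead call:

```
# 1 selects the $WORK file system; "low" applies the low I/O limit setting
set_io_param 1 low
```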
For jobs that make use of large numbers of Python modules, or jobs that use local installations of Python/Anaconda/Miniconda, we have an additional tool to help manage the I/O activity caused by library and module calls. To deploy this tool, add the following line to your job submission file after your Slurm directives but before your Python executable.
Finally, if you wish to track the full extent of your I/O activity over the course of your job, you can employ another TACC tool that will report on stat() calls during the run. Place these lines in your job submission script after your Slurm directives, wrapping your executable:
```
export LD_PRELOAD=/work/00410/huang/share/patch/io_monitor.so:/work/00410/huang/share/patch/hook.so
ibrun my_executable
unset LD_PRELOAD
```