ABAQUS at TACC
Last update: July 17, 2017

 

The ABAQUS software suite from Dassault Systèmes is used for finite element analysis and computer-aided engineering. On TACC resources, ABAQUS supports projects from a variety of domains, such as petroleum engineering, biomedical engineering, and aerospace engineering.

Requesting Access to ABAQUS

To use ABAQUS on TACC resources, eligible users (UT Austin users, or users with their own ABAQUS license) must submit a request through the ticket system to be added to the ABAQUS group. In the request, prospective users must agree to use ABAQUS for purely academic purposes; non-UT Austin users must also provide their license/PO number. Once a request has been approved, the user is added to the "abaqus" group in TACC's accounting system. Allow about 30 minutes for the group change to propagate across TACC systems before starting to use ABAQUS.

ABAQUS License Tokens

TACC has a number of license tokens available for ABAQUS. Please submit a support ticket to request the license server name; in this document we refer to the license server as "port-number@license-server". To use this license server, add the following line to your job script:

login1$ export ABAQUSLM_LICENSE_FILE=port-number@license-server

Setting this environment variable on the login node before submitting the job will also work.
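
As a minimal sketch of this in context (the SBATCH directives, job name, and input file name below are placeholders; a complete Lonestar5 job script is shown in step 8 of the "Running ABAQUS" section), the export line simply goes before the abaqus command in the job script:

#!/bin/bash
#SBATCH -J abaqus_job
#SBATCH -N 1
#SBATCH -n 24
#SBATCH -p normal
#SBATCH -t 1:00:00

# Point ABAQUS at the license server provided in your support ticket.
export ABAQUSLM_LICENSE_FILE=port-number@license-server

/path-to-abaqus/abaqus input=my_input.inp job=my_job cpus=24 interactive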

Users can also specify the license server by setting the value of "abaquslm_license_file" in the "abaqus_v6.env" file. For example:

abaquslm_license_file="port-number@license-server"

TACC has a limited number of license tokens. To reduce the time spent waiting to check out the required number of tokens from TACC's license server, users who have their own license tokens (on a local license server) are encouraged to use them.

When using their own licenses, users may need to work with the administrator of their license server to open its firewall to connections from TACC resources. The license server administrator will need the range of IP addresses of the TACC system (Stampede, Stampede2, or Lonestar5) on which the ABAQUS jobs will run; these ranges are listed at the end of this document. Users then set "ABAQUSLM_LICENSE_FILE" in their job script, or "abaquslm_license_file" in the "abaqus_v6.env" file, to the port number and hostname of their own license server, as explained above.
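
As a quick sanity check once the firewall has been opened (a sketch only; replace the placeholder with your own server's port and hostname, and see step 2 of "Running ABAQUS" below for the location of the abaqus executable), the same lmstat query can be pointed at your license server from a login node:

login1$ export ABAQUSLM_LICENSE_FILE=port-number@your-license-server
login1$ /path-to-abaqus/abaqus licensing lmstat -a > checkOwnLicenseInfo

If the connection is allowed, the output file will list the available tokens; a timeout usually means the firewall is still blocking TACC's IP ranges.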

Running ABAQUS

  1. Familiarize yourself with the Stampede2 and/or Lonestar5 user guide sections on "Running Applications".

    Please do not run ABAQUS jobs on a login node. All jobs must be run on the compute nodes, or you risk account suspension. For quick testing and development, you can run the "idev" command on a login node to get interactive access to a compute node (see the example at the end of this step). To run jobs in batch mode, modify the SLURM job script in step 8 for your own use.

    We have two versions of ABAQUS installed on Lonestar5, ABAQUS 2016 and ABAQUS 6.14, as per user requests. The ABAQUS executables for these versions are available at the following paths on LS5: /opt/apps/abaqus2016/commands and /opt/apps/abaqus_6_14/Commands. Please replace "/path-to-abaqus/" in the instructions below with one of these paths.
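
    For the quick interactive testing mentioned above, the following is a sketch of requesting a single compute node with "idev" (the queue name, node count, core count, and time limit are examples to adjust for your allocation, and the compute-node prompt shown is illustrative):

    login1$ idev -p normal -N 1 -n 24 -m 60
    c558-402$ /path-to-abaqus/abaqus information=all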

  2. To check how many ABAQUS license tokens are available, run the following command and check the contents of the output file named "checkLicenseABAQUSInfo":

    login1$ /path-to-abaqus/abaqus licensing lmstat -a > checkLicenseABAQUSInfo

    To display all available information about this ABAQUS installation:

    login1$ /path-to-abaqus/abaqus information=all > AllInformation

    After running this command, check the contents of the file named "AllInformation".
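
    Because the lmstat output can be lengthy, it may help to filter it for the token counts. This is only a sketch; the exact feature names listed depend on the license server configuration:

    login1$ grep -i "Users of" checkLicenseABAQUSInfo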

  3. Add an environment file to your local working directory (the directory from which you run ABAQUS). For the current version, the file must be named "abaqus_v6.env":

    login2.ls5(50)$ cat abaqus_v6.env
    ask_delete=OFF
    scratch="path-to-the location where the files should be written"
    mp_mode=MPI
    run_mode=INTERACTIVE
    memory="50 gb"
    abaquslm_license_file="port-number@license-server"
    lmhanglimit=5

    Note: ABAQUS jobs keep checking for license tokens until the required number of tokens becomes available or the job times out of the queue. To avoid wasting SUs when tokens are unavailable, include the "lmhanglimit" line shown above; "lmhanglimit=5" means that ABAQUS will wait at most 5 minutes for a license token before giving up.

  4. To test installation success, run the following commands on a compute node:

    $ unset SLURM_GTIDS
    $ /path-to-abaqus/abaqus input=adams_inst.inp job=test interactive

    This command blocks until the run completes or reports an error. Check the log file ("jobname.log", where "jobname" is the value given to the "job=" option); if there are no errors, it should report that the job checked out its licenses and completed successfully.
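
    For example, for the test run above ("job=test"), the end of the log file can be inspected directly; the exact wording of the final message may vary by ABAQUS version, but it should indicate that the analysis completed:

    $ tail test.log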

  5. Compile user modules on the login nodes:

    /path-to-abaqus/abaqus make library=<sourcefile>

    Place the resulting files (<sourcefile-basename>.o, <sourcefile-basename>.so) into a directory: /path-to-the-directory/abaqus_libs

    In the "abaqus_v6.env" file, add the following line:

    usub_lib_dir="/path-to-the-directory/abaqus_libs"
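
    As a sketch, assuming a hypothetical Fortran user subroutine source file named "my_umat.f" (the object and library file names shown follow the description above; check the actual names produced by your ABAQUS version before moving them):

    login1$ /path-to-abaqus/abaqus make library=my_umat.f
    login1$ mkdir -p /path-to-the-directory/abaqus_libs
    login1$ mv my_umat.o my_umat.so /path-to-the-directory/abaqus_libs/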
  6. To retrieve an input file from the examples directory, use the following command (note the "fetch" keyword):

    /path-to-abaqus/abaqus fetch job=knee_bolster
  7. To debug ABAQUS errors, users might find it useful to run ABAQUS commands with the "verbose" option. Setting verbose to "3" prints all run details:

    /path-to-abaqus/abaqus cpus=1 input=adams_ex1.inp  job=test2 interactive scratch="." verbose=3
  8. The following job script shows an example of running ABAQUS in parallel across more than one node on the Lonestar5 system. Please adjust the number of nodes and the number of cores per node as needed (refer to the user guide for more information on these settings):

    #!/bin/bash 
    #SBATCH -J myjob
    #SBATCH -t 1:00:00
    #SBATCH -N 2
    #SBATCH -n 48
    #SBATCH -o myMPI.o%j
    #SBATCH -p normal
    
    envFile=abaqus_v6.env

    # Build the list of compute nodes assigned to this job.
    node_list=`scontrol show hostname $SLURM_NODELIST | sort | uniq`
    echo $node_list

    # Construct the ABAQUS host list, e.g. [['node1', 24], ['node2', 24]],
    # using 24 cores per Lonestar5 node.
    mp_host_list="["
    for i in ${node_list} ; do
        mp_host_list="${mp_host_list}['$i', 24],"
    done
    mp_host_list=`echo ${mp_host_list} | sed -e "s/,$//"`
    mp_host_list="${mp_host_list}]"

    # Append the host list to the environment file for this run.
    echo "mp_host_list=${mp_host_list}" >> $envFile

    unset SLURM_GTIDS

    /opt/apps/abaqus_6_14/Commands/abaqus job=test_job \
        input=knee_bolster.inp -verbose 3 cpus=48 mp_mode=mpi \
        standard_parallel=all interactive scratch="."

    # Remove the host list so the environment file can be reused by the next job.
    sed -i "/mp_host_list/d" $envFile

    Note about "24" in the line "mp_host_list="${mp_host_list}['$i', 24]": "24" cores are available per processor on Lonestar5, and there are 2 nodes requested in the job script above, and hence 48 cores in total are used by the job run using this script. If the number of cores to be used per processor needs to be varied, this option can be changed. However, in most cases, one would just be adjusting the values of "-n", "-N", and "cpus" options in your job script.

    CAE is part of the ABAQUS 2016 installation on Lonestar5. To use it, you need X11 forwarding set up in your SSH client. Users connecting from a Windows machine need an X server such as Xming downloaded and running before starting the SSH session with X11 forwarding. Users connecting from a Mac need to download and install XQuartz and have it running at the time of the SSH session; also add "-X" to the ssh command when connecting from a terminal. The TACC Visualization Portal will soon support access to files on Lonestar5 as well.
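
    As a sketch of the terminal workflow described above for Mac (or Linux) users, where "username" is a placeholder and ls5.tacc.utexas.edu is the Lonestar5 login hostname; as with any ABAQUS work, anything beyond brief GUI use should be moved to a compute node (for example via "idev"):

    localhost$ ssh -X username@ls5.tacc.utexas.edu
    login1$ /opt/apps/abaqus2016/commands/abaqus cae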

TACC Resources IP Address Ranges

System      Login Nodes                           Compute Nodes
Stampede2   129.114.54.21-22, 129.114.54.44-84    129.114.62.196/27
Stampede    129.114.64.0/19                       129.114.62.0/27
Lonestar5   206.76.192.0/19                       129.114.63.32/27
