Last update: May 11, 2018

GROningen MAchine for Chemical Simulations (GROMACS) is a free, open-source molecular dynamics package. GROMACS can simulate the Newtonian equations of motion for systems with hundreds to millions of particles. GROMACS is primarily designed for biochemical molecules like proteins, lipids, and nucleic acids that have many complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions (which usually dominate simulations), many groups also use it for research on non-biological systems, e.g., polymers.


GROMACS is currently installed on TACC's Stampede2 and Lonestar5 systems. GROMACS is managed under the module system on TACC resources. To run simulations, simply load the module with the following command:

login1$ module load gromacs

As of this date, the recommended and default version is V2016.4. Users are welcome to install different versions of GROMACS in their own directories. See Building Third Party Software in the Stampede2 User Guide. The module file defines the environment variables listed below. Learn more from the module's help file:

login1$ module help gromacs
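If more than one GROMACS build is installed, you can list the available modules and load a specific version instead of the default (the version string shown here is illustrative):

login1$ module avail gromacs
login1$ module load gromacs/2016.4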

GROMACS Environment Variables

Variable             Value
TACC_GROMACS_DIR     GROMACS installation root directory
TACC_GROMACS_DOC     documentation
TACC_GROMACS_INC     include files
TACC_GROMACS_LIB     library files
GMXLIB               topology file directory
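These variables can be used on the command line or in scripts once the module is loaded; for example, to browse the documentation and topology files:

login1$ module load gromacs
login1$ ls $TACC_GROMACS_DOC      # browse the GROMACS documentation
login1$ ls $GMXLIB                # inspect the bundled topology files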


IMPORTANT: Do NOT launch production jobs on the login nodes. Not only will the jobs fail, running on the login nodes is against system policy and may result in account suspension. See Good Citizenship on Stampede2 for more information.

To launch simulation jobs, please use the TACC-specific MPI launcher "ibrun", a Stampede2-aware replacement for generic MPI launchers like mpirun and mpiexec. The molecular dynamics engine "mdrun_mpi" is the parallel component of GROMACS. It can be invoked in a job script like this:

ibrun mdrun_mpi -s topol.tpr -o traj.trr -c confout.gro -e ener.edr -g md.log

The topology file topol.tpr and the index file deshuf.ndx should be generated with the grompp command:

grompp ... -po mdout.mdp -deshuf deshuf.ndx -o topol.tpr
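A typical preprocessing run might look like the following; the input file names (md.mdp, conf.gro, topol.top) are illustrative and should be replaced with your own:

grompp -f md.mdp -c conf.gro -p topol.top -po mdout.mdp -o topol.tpr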

TACC also provides a double-precision version of the mdrun application: "mdrun_mpi_d". To use the double-precision version, simply replace "mdrun_mpi" in the commands above with "mdrun_mpi_d".
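For example, the single-precision command shown earlier becomes:

ibrun mdrun_mpi_d -s topol.tpr -o traj.trr -c confout.gro -e ener.edr -g md.log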

You can also compile and link your own source code with the GROMACS libraries:

login1$ icc -I$TACC_GROMACS_INC test.c -L$TACC_GROMACS_LIB -lgromacs

Running GROMACS in Batch Mode

Use Slurm's "sbatch" command to submit a batch job to one of the Stampede2 queues:

login1$ sbatch myjobscript

Here "myjobscript" is the name of a text file containing #SBATCH directives and shell commands that describe the particulars of the job you are submitting. The details of your job script's contents depend on the type of job you intend to run.
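After submitting, you can monitor or cancel the job with Slurm's standard commands (replace jobid with the job ID that sbatch reports):

login1$ squeue -u $USER           # show your pending and running jobs
login1$ scancel jobid             # cancel a submitted job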

Sample Stampede2 GROMACS Job Script

This job script submits a job requesting 1 node (48 cores) for 24 hours using Stampede2's Skylake compute nodes. To configure this script for Lonestar5, vary the "-N" and "-n" directives.

#!/bin/bash
#SBATCH -J myjob              # job name
#SBATCH -e myjob.%j.err       # error file name 
#SBATCH -o myjob.%j.out       # output file name 
#SBATCH -N 1                  # request 1 node
#SBATCH -n 48                 # request all 48 cores 
#SBATCH -p skx-normal         # designate queue 
#SBATCH -t 24:00:00           # designate max run time 
#SBATCH -A myproject          # charge job to myproject 

module load gromacs
ibrun mdrun_mpi -s topol.tpr -o traj.trr -c confout.gro -e ener.edr -g md.log