Last update: September 10, 2020
GROningen MAchine for Chemical Simulations (GROMACS) is a free, open-source molecular dynamics package. GROMACS can simulate the Newtonian equations of motion for systems with hundreds to millions of particles. GROMACS is primarily designed for biochemical molecules like proteins, lipids, and nucleic acids that have a lot of complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions (which usually dominate simulations), many groups also use it for research on non-biological systems, e.g. polymers.
TACC and GROMACS
GROMACS is currently installed on TACC's Stampede2, Lonestar5, Longhorn, and Frontera systems. GROMACS is managed under the module system on TACC resources. To run simulations, simply load the module with the following command:
login1$ module load gromacs
As of this date, the default GROMACS versions are 2019.6 on Stampede2, 2019.4 on Frontera, 2019.6 on Longhorn, and 2016.4 on Lonestar5. Users are welcome to install different versions of GROMACS in their own directories. See Building Third Party Software in the Stampede2 User Guide. The module file defines the environment variables listed below. Learn more from the module's help file:
login1$ module help gromacs
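If you need a version other than the default, Lmod (the module system used on TACC resources) can list the installed versions so you can load one explicitly. The version string below is only an illustration; use one reported on your system:

login1$ module spider gromacs
login1$ module load gromacs/2019.6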
Table 1. GROMACS Environment Variables
Variable | Description
---|---
TACC_GROMACS_DIR | GROMACS installation root directory
TACC_GROMACS_BIN | binaries
TACC_GROMACS_DOC | documentation
TACC_GROMACS_LIB | libraries
TACC_GROMACS_INC | include files
GMXLIB | topology file directory
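Once the module is loaded, these variables can be used directly on the command line or in scripts. For example, to inspect the installed binaries and the topology file directory:

login1$ ls $TACC_GROMACS_BIN
login1$ echo $GMXLIB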
Running GROMACS at TACC
To launch simulation jobs, please use the TACC-specific MPI launcher "ibrun", a TACC-system-aware replacement for generic MPI launchers like "mpirun" and "mpiexec". The molecular dynamics engine "mdrun_mpi" is the parallel component of GROMACS. It can be invoked in a job script like this:
ibrun mdrun_mpi -s topol.tpr -o traj.trr -c confout.gro -e ener.edr -g md.log
The topology file "topol.tpr", along with "mdout.mdp" and "deshuf.ndx", should be generated with the "grompp" command:
grompp ... -po mdout.mdp -deshuf deshuf.ndx -o topol.tpr
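As a hedged illustration only, a complete grompp invocation might look like the line below; the input file names "grompp.mdp", "conf.gro", and "topol.top" are placeholders for your own parameter, structure, and topology files, and depending on the GROMACS version the tool may instead be invoked as "gmx grompp":

grompp -f grompp.mdp -c conf.gro -p topol.top -po mdout.mdp -o topol.tpr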
TACC also provides a double-precision version of the mdrun application, "mdrun_mpi_d". To use the double-precision version, simply replace "mdrun_mpi" in the commands above with "mdrun_mpi_d".
On Lonestar5, Longhorn, and Frontera, you may use "mdrun_mpi_gpu" instead of "mdrun_mpi" to run GROMACS on GPU nodes. Note that not all GROMACS modules on TACC systems support GPU acceleration; consult "module help" for details about supported functionality.
On Stampede2, the executables with a "_knl" suffix should be run only on KNL nodes, in the appropriate queues.
You can also compile and link your own source code with the GROMACS libraries:
login1$ icc -I$TACC_GROMACS_INC test.c -L$TACC_GROMACS_LIB -lgromacs
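If the resulting executable cannot locate the GROMACS shared libraries at run time (the module may already handle this for you), one possible workaround is to prepend the library directory to your search path before running; "a.out" is the default output name of the compile command above:

login1$ export LD_LIBRARY_PATH=$TACC_GROMACS_LIB:$LD_LIBRARY_PATH
login1$ ./a.out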
Running GROMACS in Batch Mode
Use Slurm's "sbatch" command to submit a batch job to one of the Stampede2 queues:
login1$ sbatch myjobscript
Here "myjobscript
" is the name of a text file containing #SBATCH
directives and shell commands that describe the particulars of the job you are submitting.
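After submission, standard Slurm commands can be used to monitor or cancel the job; the job ID below is a placeholder for the one reported by sbatch:

login1$ squeue -u $USER          # list your pending and running jobs
login1$ scancel 123456           # cancel the job with this ID, if needed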
Stampede2 Job Script
The following job script requests 1 node (48 cores) for 24 hours using Stampede2's Skylake compute nodes (skx-normal queue).
#!/bin/bash
#SBATCH -J myjob           # job name
#SBATCH -e myjob.%j.err    # error file name
#SBATCH -o myjob.%j.out    # output file name
#SBATCH -N 1               # request 1 node
#SBATCH -n 48              # request all 48 cores
#SBATCH -p skx-normal      # designate queue
#SBATCH -t 24:00:00        # designate max run time
#SBATCH -A myproject       # charge job to myproject

module load gromacs

ibrun mdrun_mpi -s topol.tpr -o traj.trr -c confout.gro -e ener.edr -g md.log
NOTE: To run on Stampede2's KNL nodes, substitute "mdrun_mpi" with one of the following executables: "mdrun_knl", "mdrun_mpi_knl", or "mdrun_mpi_d_knl".
Lonestar5 Job Script
The following job script requests 2 GPU nodes on Lonestar5. The option -gpu_id 0000 indicates that all four MPI ranks on the same node share the GPU with id 0. You may use, for example, -gpu_id 0011 or -gpu_id 0123 if more than one GPU is available on each node.
#!/bin/bash
#SBATCH -J myjob           # job name
#SBATCH -e myjob.%j.err    # error file name
#SBATCH -o myjob.%j.out    # output file name
#SBATCH -N 2               # request 2 nodes
#SBATCH -n 8               # request 8 tasks
#SBATCH -p gpu             # designate queue
#SBATCH -t 24:00:00        # designate max run time
#SBATCH -A myproject       # charge job to myproject

module load gromacs

export OMP_NUM_THREADS=4         # 4 OMP threads per task
export IBRUN_TASKS_PER_NODE=4    # 4 tasks per node

# all 4 tasks on the same node share a GPU with id '0'
ibrun mdrun_mpi_gpu -s topol.tpr -o traj.trr -c confout.gro -e ener.edr -g md.log -gpu_id 0000
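If each node provides two GPUs (check the node configuration for your queue), the same four ranks per node could instead be split across them, for example:

ibrun mdrun_mpi_gpu -s topol.tpr -o traj.trr -c confout.gro -e ener.edr -g md.log -gpu_id 0011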