LAMMPS at TACC
Last update: May 31, 2018

LAMMPS is a classical molecular dynamics code developed at Sandia National Laboratories and available under the GPL license. LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) uses spatial-decomposition techniques to partition the simulation domain and runs in serial or in parallel using MPI. The code is capable of modeling systems with millions or even billions of particles on a large HPC machine. LAMMPS provides a variety of force fields and boundary conditions that can be used to model atomic, polymeric, biological, metallic, granular, and coarse-grained systems.

Running LAMMPS

At TACC, LAMMPS is installed on the Stampede2 and Lonestar 5 systems.

$ module spider lammps       # list installed LAMMPS versions
$ module load lammps         # load default version (currently 16Mar18)

The LAMMPS module defines a set of environment variables for the locations of the LAMMPS home directory, binaries, documentation, and more, all prefixed with "TACC_LAMMPS_". Use the "env" command to display them:

$ env | grep "TACC_LAMMPS"
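
These variables can be used in your scripts in place of hard-coded paths. The names in the sketch below (TACC_LAMMPS_DIR, TACC_LAMMPS_BIN) are assumptions based on the prefix; confirm the exact names from the "env" output above.

$ ls $TACC_LAMMPS_DIR                # installation home (variable name assumed; verify with env)
$ ls $TACC_LAMMPS_BIN                # executables (variable name assumed; verify with env)
$ $TACC_LAMMPS_BIN/lmp_stampede -h   # print help, including the packages and styles built in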

Note that the "examples" and "bench" folder contents are now in the "/work/apps/lammps/production_src/16Mar18" directory. Also note that the executable name differs between installations: for versions 17Nov16 and 31Mar17, use lmp_knl instead of lmp_stampede. The lmp_knl executable works only on Stampede2's KNL compute nodes (not the SKX nodes).
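
To try one of the bundled inputs, copy it from that directory to your own workspace before running. The "melt" case below is one of the standard LAMMPS examples; substitute any other subdirectory found there.

$ cd $SCRATCH
$ cp -r /work/apps/lammps/production_src/16Mar18/examples/melt .
$ cd melt                    # contains in.melt and reference log files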

The following LAMMPS Standard packages have been installed:

ASPHERE BODY CLASS2 COLLOID CORESHELL
DIPOLE GRANULAR KSPACE MANYBODY MC
MEAM MISC MOLECULE MPIIO OPT
PERI POEMS PYTHON QEQ REPLICA
RIGID SHOCK SNAP SRD VORONOI(1)
GPU(2) KIM LATTE

(1) Version of the external library installed for VORONOI: voro++-0.4.6
(2) Lonestar 5 only

The following USER packages are installed:

USER-ATC USER-AWPMD USER-CGDNA USER-CGSDK USER-COLVARS USER-DIFFRACTION
USER-DPD USER-DRUDE USER-EFF USER-FEP USER-INTEL USER-LB
USER-MANIFOLD USER-MEAMC USER-MESO USER-MGPT USER-MISC USER-OMP
USER-PHONON USER-QTB USER-SMTBQ USER-SPH USER-TALLY USER-UEF
USER-H5MD USER-MOLFILE USER-QUIP USER-SMD

The following packages are not installed:

KOKKOS MSCG REAX*
USER-NETCDF USER-QMMM USER-VTK

*The REAX library was not compiled for this version because its default virtual memory allocation consumes 1.6 GB/task (for a total of 2.2 GB per task), and the TACC monitor kills jobs that use over 2.0 GB/task (32 GB for 16 tasks).

Running LAMMPS Interactively

There is no interactive version of LAMMPS; however, you can run LAMMPS in an idev session. Below is an example serial session.

login1$ idev
...
$ module load lammps 
$ lmp_stampede < lammps_input > log_file
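
For a parallel interactive run, request the desired node and task counts through idev and launch LAMMPS with ibrun, just as in a batch script. The counts below are only an illustration; match them to what your idev session grants.

login1$ idev -N 1 -n 16      # request 1 node with 16 tasks (adjust as needed)
...
$ module load lammps
$ ibrun lmp_stampede -in lammps_input > log_file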

Running LAMMPS in Batch Mode

LAMMPS uses spatial-decomposition techniques to partition the simulation domain into small 3d sub-domains, one of which is assigned to each processor. You will need to set suitable values of "-N" (number of nodes), "-n" (total number of MPI tasks), and OMP_NUM_THREADS (number of threads to use in parallel regions) to optimize the performance of your simulation.
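
As a rule of thumb, keep (MPI tasks per node) x OMP_NUM_THREADS at or below the number of physical cores on the node. The fragment below sketches a hybrid MPI/OpenMP layout assuming 48-core Stampede2 SKX nodes (skx-normal queue); adjust the numbers for other node types.

#SBATCH -p skx-normal        # SKX queue (48 cores per node assumed)
#SBATCH -N 2                 # 2 nodes
#SBATCH -n 32                # 16 MPI tasks per node

export OMP_NUM_THREADS=3     # 16 tasks x 3 threads = 48 cores per node

ibrun lmp_stampede -sf omp -pk omp 3 -in lammps_input > log_file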

Sample Job Script: LAMMPS on Stampede2

Refer to the Running Jobs section of the Stampede2 User Guide for more Slurm options. To configure this script for Lonestar 5, adjust the "-N" and "-n" directives (see the Lonestar 5 sketch after the script).

#!/bin/bash
#SBATCH -J test                    # Job Name
#SBATCH -A myProject               # Your project name  (Change it !!!!)
#SBATCH -o test.o%j                # Output file name (%j expands to jobID)
#SBATCH -e test.e%j                # Error file name (%j expands to jobID)
#SBATCH -N 1                       # Requesting 1 node
#SBATCH -n 16                      # and 16 tasks
#SBATCH -p normal                  # Queue name (normal, skx-normal, etc.)
#SBATCH -t 24:00:00                # Specify 24 hour run time

module load   intel/17.0.4
module load   impi/17.0.3
module load   lammps/16Mar18

export OMP_NUM_THREADS=1   

ibrun lmp_stampede -in lammps_input > log_file
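
For Lonestar 5 the same structure applies with the directives and executable name changed. The fragment below is a sketch that assumes 24-core Lonestar 5 compute nodes; the executable there is named lmp_lonestar.

#SBATCH -N 1                       # Requesting 1 node
#SBATCH -n 24                      # One task per core (24-core nodes assumed)

ibrun lmp_lonestar -in lammps_input > log_file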

Example command-line invocations:

  • LAMMPS with USER-OMP package (e.g., using 2 threads)
    ibrun lmp_stampede -sf omp -pk omp 2 -in lammps_input > log_file
  • LAMMPS with USER-INTEL package (e.g., using 2 threads)
    ibrun lmp_stampede -sf intel -pk intel 0 omp 2 -in lammps_input > log_file
  • LAMMPS with GPU package (Lonestar 5)

    You can set the "-n" directive to a value greater than 1 to let multiple MPI tasks share a single GPU (see the sketch after this list).

    #SBATCH -p gpu
    ibrun lmp_lonestar -sf gpu -pk gpu 1 -in lammps_input > log_file
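
As a fuller sketch of a GPU submission on Lonestar 5, the fragment below requests the gpu queue and lets four MPI tasks share one GPU. The task count is illustrative; the project, time, and module directives from the sample script above still apply.

#SBATCH -p gpu                     # Lonestar 5 GPU queue
#SBATCH -N 1                       # 1 GPU node
#SBATCH -n 4                       # 4 MPI tasks sharing the GPU (illustrative)

module load lammps
ibrun lmp_lonestar -sf gpu -pk gpu 1 -in lammps_input > log_file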