Last update: May 31, 2018
LAMMPS is a classical molecular dynamics code developed at Sandia National Laboratories and available under the GPL license. LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) uses spatial-decomposition techniques to partition the simulation domain and runs in serial or in parallel using MPI. The code can model systems with millions or even billions of particles on a large HPC machine. LAMMPS provides a variety of force fields and boundary conditions that can be used to model atomic, polymeric, biological, metallic, granular, and coarse-grained systems.
$ module spider lammps   # list installed LAMMPS versions
$ module load lammps     # load default version (currently 16Mar18)
The LAMMPS module defines a set of environment variables for the locations of the LAMMPS home, binaries, documentation, and more, all prefixed with "TACC_LAMMPS_". Use the "env" command to display them:
$ env | grep "TACC_LAMMPS"
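The variable names follow the usual TACC convention; assuming, for example, that TACC_LAMMPS_BIN and TACC_LAMMPS_DOC are among those defined, you can inspect the installation directly:

$ ls $TACC_LAMMPS_BIN    # list the installed LAMMPS executables
$ ls $TACC_LAMMPS_DOC    # browse the bundled documentation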
Note that the "
examples" and "
bench" folder contents are now in the "
/work/apps/lammps/production_src/16Mar18 directory". Also note that each installation's executable name differs. For version 17Nov16 and 31Mar17, use
lmp_knl instead of
lmp_knl works only on Stampede2's KNL compute nodes (not SKX).
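For example, to run one of those older builds on a KNL node (a sketch; the module version string follows the installed versions listed above):

$ module load lammps/17Nov16
$ ibrun lmp_knl -in lammps_input > log_file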
The following LAMMPS Standard packages have been installed:
1 Version of external library installed: voronoi: voro++-0.4.6
2 Lonestar 5 only
The following USER packages are installed:
The following packages are not installed:
* The REAX library was not compiled into this version because its default virtual address space consumes 1.6 GB/task (for a total of 2.2 GB per task), and the TACC monitor kills jobs that use over 2.0 GB/task (32 GB for 16 tasks).
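You can also ask a LAMMPS executable directly which packages were compiled into it; recent LAMMPS versions print the list of installed packages as part of the "-h" help output:

$ lmp_stampede -h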
Interactive LAMMPS does not yet exist; however, you can run LAMMPS in an idev session. Below is an example serial session.
login1$ idev
...
$ module load lammps
$ lmp_stampede < lammps_input > log_file
LAMMPS uses spatial-decomposition techniques to partition the simulation domain into small 3d sub-domains, one of which is assigned to each processor. You will need to set suitable values of "-N" (number of nodes), "-n" (total number of MPI tasks), and OMP_NUM_THREADS (number of threads to use in parallel regions) to optimize the performance of your simulation.
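For instance, a hybrid MPI/OpenMP layout on a single Stampede2 SKX node (48 cores) might look like the sketch below; the values are illustrative only, and you should benchmark your own input to find the best split:

#SBATCH -N 1                # one node
#SBATCH -n 24               # 24 MPI tasks
export OMP_NUM_THREADS=2    # 2 OpenMP threads per task (24 x 2 = 48 cores)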
Refer to Stampede2's Running Jobs section for more Slurm options. To configure this script for Lonestar 5, vary the "-N" and "-n" directives as appropriate for its node types.
#!/bin/bash
#SBATCH -J test           # Job name
#SBATCH -A myProject      # Your project name (change it!)
#SBATCH -o test.o%j       # Output file name (%j expands to jobID)
#SBATCH -e test.e%j       # Error file name (%j expands to jobID)
#SBATCH -N 1              # Requesting 1 node
#SBATCH -n 16             # and 16 tasks
#SBATCH -p normal         # Queue name (normal, skx-normal, etc.)
#SBATCH -t 24:00:00       # Specify 24 hour run time

module load intel/17.0.4
module load impi/17.0.3
module load lammps/16Mar18

export OMP_NUM_THREADS=1
ibrun lmp_stampede -in lammps_input > log_file
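Save the script (here under the hypothetical name "lammps.slurm") and submit it with sbatch:

$ sbatch lammps.slurm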
Example command-line invocations:
- LAMMPS with USER-OMP package (e.g., using 2 threads):
ibrun lmp_stampede -sf omp -pk omp 2 -in lammps_input > log_file
- LAMMPS with USER-INTEL package (e.g., using 2 threads):
ibrun lmp_stampede -sf intel -pk intel 0 omp 2 -in lammps_input > log_file
- LAMMPS with GPU package (Lonestar 5):
You could set "
-n" directive to a value > 1 to let more than one MPI tasks share one GPU.
#SBATCH -p gpu
ibrun lmp_lonestar -sf gpu -pk gpu 1 -in lammps_input > log_file
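Putting it together, a minimal Lonestar 5 GPU batch script might look like the sketch below; the job name, project, task count, and run time are illustrative placeholders:

#!/bin/bash
#SBATCH -J gputest        # Job name
#SBATCH -A myProject      # Your project name (change it!)
#SBATCH -N 1              # Requesting 1 node
#SBATCH -n 2              # with 2 MPI tasks sharing its GPU
#SBATCH -p gpu            # GPU queue
#SBATCH -t 01:00:00       # 1 hour run time

module load lammps
ibrun lmp_lonestar -sf gpu -pk gpu 1 -in lammps_input > log_file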