Last update: May 06, 2020
LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) is a classical molecular dynamics code developed at Sandia National Laboratories and available under the GPL license. It uses spatial-decomposition techniques to partition the simulation domain and runs in serial or in parallel using MPI. The code is capable of modeling systems with millions or even billions of particles on a large HPC machine. LAMMPS provides a variety of force fields and boundary conditions that can be used to model atomic, polymeric, biological, metallic, granular, and coarse-grained systems.
As of this date, the default versions are 16Mar18 (Stampede2 and Lonestar 5) and 15Apr20 (Frontera). Users are welcome to install different versions of LAMMPS in their own directories (see Building Third Party Software in the Stampede2 User Guide). Sample build scripts for each system can be found in the "
$ module spider lammps   # list installed LAMMPS versions
$ module load lammps     # load default version
The LAMMPS module defines a set of environment variables, prefixed with "TACC_LAMMPS", for the locations of the LAMMPS home directory, binaries, and more. Use the "env" command to display the variables:
$ env | grep "TACC_LAMMPS"
Note that each installation's executable name differs. The name of the executable follows the format "lmp_machine", where "machine" can be "stampede", "lonestar", or "frontera" depending on the system. The Stampede2 versions 17Nov16 and 31Mar17 use "lmp_knl" and must be submitted to Stampede2's KNL (not SKX) queues. The LAMMPS GPU executables, "lmp_gpu", can only be submitted to Frontera's and Lonestar 5's GPU queues.
| Executable | System |
| --- | --- |
| lmp_stampede | Stampede2 |
| lmp_knl | Stampede2 (versions 17Nov16 and 31Mar17, KNL queues only) |
| lmp_lonestar | Lonestar 5 |
| lmp_frontera | Frontera |
| lmp_gpu | Lonestar 5 and Frontera (GPU queues only) |
LAMMPS uses spatial-decomposition techniques to partition the simulation domain into small 3d sub-domains, one of which is assigned to each processor. You will need to set suitable values of "-N" (number of nodes), "-n" (total number of MPI tasks), and OMP_NUM_THREADS (number of threads to use in parallel regions) to optimize the performance of your simulation.
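As a rule of thumb, the thread count follows from the cores per node divided by the MPI tasks per node. A minimal sketch, assuming a Stampede2 SKX node (48 cores) and the 16-tasks-per-node layout used in the example job script:

```shell
# Sketch only: derive a thread count from the node geometry.
# 48 cores per node is the Stampede2 SKX value; 16 tasks per node
# matches the "#SBATCH -n 16" example below.
CORES_PER_NODE=48
TASKS_PER_NODE=16
export OMP_NUM_THREADS=$(( CORES_PER_NODE / TASKS_PER_NODE ))
echo "$OMP_NUM_THREADS"   # prints 3
```

Oversubscribing (tasks x threads exceeding the physical cores) usually hurts performance, so keep the product at or below the core count.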
Refer to Stampede2's Running Jobs section for more Slurm options. To configure this script for Lonestar 5 or Frontera, adjust the "-N", "-n", and "-p" directives accordingly.
#!/bin/bash
#SBATCH -J test          # Job name
#SBATCH -A myProject     # Your project name
#SBATCH -o test.o%j      # Output file name (%j expands to jobID)
#SBATCH -e test.e%j      # Error file name (%j expands to jobID)
#SBATCH -N 1             # Request 1 node
#SBATCH -n 16            # and 16 tasks
#SBATCH -p normal        # Queue name (normal, skx-normal, etc.)
#SBATCH -t 24:00:00      # Specify 24 hour run time

module load intel/18.0.2
module load impi/18.0.2
module load lammps/16Mar18

export OMP_NUM_THREADS=1
ibrun lmp_stampede -in lammps_input
LAMMPS with USER-OMP package (e.g. using 2 threads)
ibrun lmp_stampede -sf omp -pk omp 2 -in lammps_input
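For a USER-OMP run, OMP_NUM_THREADS should agree with the "-pk omp" setting. A hedged sketch of the relevant job-script lines, assuming the same Stampede2 module and input-file names as the example script above:

```shell
# Sketch only: 8 MPI tasks x 2 OpenMP threads on one node.
#SBATCH -N 1
#SBATCH -n 8

export OMP_NUM_THREADS=2               # must match "-pk omp 2" below
ibrun lmp_stampede -sf omp -pk omp 2 -in lammps_input
```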
LAMMPS with USER-INTEL package (e.g. using 2 threads)
ibrun lmp_stampede -sf intel -pk intel 0 omp 2 -in lammps_input
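In "-pk intel 0 omp 2", the first value (0) is the number of Xeon Phi coprocessors to offload to (0 means run on the host CPUs only) and "omp 2" requests two OpenMP threads per MPI task; OMP_NUM_THREADS should match. A hedged sketch under the same assumptions as the example script above:

```shell
# Sketch only: USER-INTEL on host CPUs, no coprocessor offload.
export OMP_NUM_THREADS=2               # must match "omp 2" below
ibrun lmp_stampede -sf intel -pk intel 0 omp 2 -in lammps_input
```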
LAMMPS with GPU package (Lonestar 5 version 9Jan20 and Frontera 15Apr20 only)
The name of the GPU LAMMPS executable is lmp_gpu. Set the "-n" directive to a value greater than 1 to let more than one MPI task share one GPU.
#SBATCH -N 1    # Request 1 node
#SBATCH -n 10   # and 10 tasks that share 1 GPU
#SBATCH -p gpu  # Lonestar 5 gpu queue

ibrun lmp_gpu -sf gpu -pk gpu 1 -in lammps_input
On Frontera GPU nodes, you could set "-pk gpu 4" to utilize all four RTX GPUs available on each node.
#SBATCH -p rtx  # Frontera rtx queue

ibrun lmp_gpu -sf gpu -pk gpu 4 -in lammps_input
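The number of MPI tasks that share each GPU is simply the task count divided by the GPU count. A sketch, assuming a Frontera rtx node with 4 GPUs and a hypothetical "-n 16" task count:

```shell
# Sketch only: tasks sharing each GPU with "-n 16" and "-pk gpu 4".
TASKS=16
GPUS=4
echo $(( TASKS / GPUS ))   # prints 4
```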
You can also run LAMMPS within an idev session as demonstrated below:
login1$ idev
...
c123-456$ module load lammps
c123-456$ lmp_stampede < lammps_input
Use the "-h" option to print out a list of all supported functions and packages:
c123-456$ lmp_stampede -h
A copy of the same output for each version of LAMMPS can be found in the directory "