General
- GROMACS home site: http://www.gromacs.org/
GROMACS is a versatile package to perform molecular dynamics,
i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles.
It is primarily designed for biochemical molecules like proteins, lipids and nucleic acids that have many complicated bonded interactions. However, since GROMACS is extremely fast at calculating the nonbonded interactions that usually dominate simulations, many groups also use it for research on non-biological systems, e.g. polymers.
Using GROMACS on ARC
Researchers using GROMACS on ARC are expected to be generally familiar with its capabilities, its input file types and formats, and the use of checkpoint files to restart simulations.
Like other calculations on ARC systems, GROMACS is run by submitting an appropriate job script to the batch scheduler with the sbatch command. For more information about submitting jobs, see the Running jobs article.
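As an illustration only, submitting a job and later continuing an interrupted run from its checkpoint could look like the following. The script and file names (run_gromacs.slurm, md.cpt) are assumptions for this sketch; gmx mdrun -cpi is the standard GROMACS mechanism for continuing from a checkpoint file.

 $ sbatch run_gromacs.slurm        # submit the job script to the scheduler
 $ squeue -u $USER                 # monitor your jobs in the queue
 
 # In a follow-up job, gmx mdrun can continue from the checkpoint file
 # written by the previous run:
 gmx mdrun -deffnm md -cpi md.cpt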
GROMACS modules
Currently, there are several software modules on ARC that provide different versions of GROMACS. They differ in release date as well as in the CPU architecture the software is compiled for.
You can see them using the module command:
$ module avail gromacs
----------- /global/software/Modules/3.2.10/modulefiles ---------
gromacs/2016.3-gnu      gromacs/2019.6-nehalem   gromacs/5.0.7-gnu
gromacs/2018.0-gnu      gromacs/2019.6-skylake
The module names indicate the specific version of GROMACS they provide access to.
- The gnu suffix indicates that those versions have been compiled with the GNU GCC compiler.
- In these specific cases, GCC 4.8.5.
- GROMACS 2019.6 was compiled with GCC 7.3.0 for two different CPU generations: the older nehalem and the newer skylake.
The nehalem module should be used on compute nodes from before 2019, and the skylake module on nodes from 2019 and later (see the sketch after this list for one way to check a node's CPU).
- All GROMACS versions provided by these modules support GPU computations, although it may not be practical to run them on GPU nodes due to limited GPU resources.
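One way to decide between the nehalem and skylake modules is to inspect the CPU model of the node you are running on. This is only a sketch; the exact CPU models installed in ARC nodes are not listed here and are an assumption.

 $ lscpu | grep 'Model name'   # the reported model indicates the node's CPU generation

On Skylake-or-newer CPUs load gromacs/2019.6-skylake; on older CPUs load gromacs/2019.6-nehalem.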
A module has to be loaded before GROMACS can be used on ARC, like this:
$ gmx --version
bash: gmx: command not found...
$ module load gromacs/2019.6-nehalem
$ gmx --version
                  :-) GROMACS - gmx, 2019.6 (-:

GROMACS is written by:
    Emile Apol    Rossen Apostolov    Paul Bauer    Herman J.C. Berendsen
    .....
    .....
GROMACS version:    2019.6
Precision:          single
Memory model:       64 bit
MPI library:        none
OpenMP support:     enabled (GMX_OPENMP_MAX_THREADS = 64)
GPU support:        CUDA
SIMD instructions:  SSE4.1
FFT library:        fftw-3.3.7-sse2-avx
RDTSCP usage:       enabled
TNG support:        enabled
Hwloc support:      hwloc-1.11.8
Tracing support:    disabled
C compiler:         /global/software/gcc/gcc-7.3.0/bin/gcc GNU 7.3.0
C compiler flags:   -msse4.1 -O3 -DNDEBUG -funroll-all-loops -fexcess-precision=fast
C++ compiler:       /global/software/gcc/gcc-7.3.0/bin/g++ GNU 7.3.0
C++ compiler flags: -msse4.1 -std=c++11 -O3 -DNDEBUG -funroll-all-loops -fexcess-precision=fast
CUDA compiler:      /global/software/cuda/cuda-10.0.130/bin/nvcc nvcc: NVIDIA (R) Cuda compiler driver; Copyright (c) 2005-2018 NVIDIA Corporation; Built on Sat_Aug_25_21:08:01_CDT_2018; Cuda compilation tools, release 10.0, V10.0.130
CUDA compiler flags: -gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=compute_75;-use_fast_math;;; ;-msse4.1;-std=c++11;-O3;-DNDEBUG;-funroll-all-loops;-fexcess-precision=fast;
CUDA driver:        9.10
CUDA runtime:       N/A
Running a GROMACS Job
To run your simulation on the ARC cluster you need (1) a set of GROMACS input files and (2) a SLURM job script (a .slurm file).
Place your input files for a simulation into a separate directory and prepare an appropriate job script for it, for example as sketched below.
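As an illustration only, a job script for a single-node GROMACS run might look like the following. The job name, resource requests, wall time and file names (md.tpr, gromacs_job.slurm) are assumptions for this sketch and must be adapted to your simulation and to the partitions you have access to (a #SBATCH --partition line may also be required). gromacs/2019.6-nehalem is one of the modules listed above; the build it provides has no MPI library, so the run uses OpenMP threads on a single node.

 #!/bin/bash
 #SBATCH --job-name=gromacs-md        # job name shown by squeue (assumption)
 #SBATCH --nodes=1                    # single-node run
 #SBATCH --ntasks=1                   # one task; this build has MPI library: none
 #SBATCH --cpus-per-task=8            # OpenMP threads for gmx mdrun (assumption)
 #SBATCH --mem=16G                    # memory request (assumption)
 #SBATCH --time=24:00:00              # wall-time limit (assumption)
 
 # Load the GROMACS module appropriate for this node's CPU generation.
 module load gromacs/2019.6-nehalem
 
 # Run the simulation defined by the md.tpr input file, using as many
 # OpenMP threads as CPUs were allocated to the job.
 gmx mdrun -deffnm md -ntomp $SLURM_CPUS_PER_TASK

The md.tpr input file is typically prepared beforehand with gmx grompp from your parameter (.mdp), structure (.gro) and topology (.top) files, and the job is then submitted from the simulation directory with sbatch gromacs_job.slurm.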