Quantum ESPRESSO
General
- QE Web site: http://www.quantum-espresso.org
- EPW web site: http://epw.org.uk
- Input keywords: https://www.quantum-espresso.org/Doc/INPUT_PW.html
- CC Docs page: https://docs.computecanada.ca/wiki/Quantum_ESPRESSO
Quantum Espresso is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials.
Quantum ESPRESSO has evolved into a distribution of independent and inter-operable codes in the spirit of an open-source project. The Quantum ESPRESSO distribution consists of a “historical” core set of components, and a set of plug-ins that perform more advanced tasks, plus a number of third-party packages designed to be inter-operable with the core components.
The projector augmented wave method (PAW) is a technique used in ab initio electronic structure calculations.
It is a generalization of the pseudopotential and linear augmented-plane-wave methods, and allows for density functional theory calculations to be performed with greater computational efficiency.
See Wikipedia: https://en.wikipedia.org/wiki/Projector_augmented_wave_method
Since 26 April 2016, EPW has been distributed as part of the Quantum ESPRESSO suite.
- EPW home (version history): http://epw.org.uk/Main/About
EPW is the short name for "Electron-phonon Wannier". EPW is an open-source F90/MPI code which calculates properties related to the electron-phonon interaction using Density-Functional Perturbation Theory and Maximally Localized Wannier Functions.
- PSLIBRARY -- a library for generating ultra-soft pseudo-potentials.
- Quantum Espresso vs VASP discussion:
QE on ARC
To see available versions of QE on ARC, use the module command:
$ module avail espresso
-------------------------- /global/software/Modules/4.6.0/modulefiles -----------------------------
espresso/6.3-gnu  espresso/7.2
The versions:

espresso/7.2 was released on 31 March 2023.
- Built with GCC 8.5.0 (gcc and gfortran)
- OpenMPI 4.1.1
- LibXC v6.2.2
- OpenBLAS v0.3.23
- No GPU support.
- No HDF5 support.

espresso/6.3-gnu is an older version of QE and is provided for compatibility purposes.
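To use one of these versions, load the corresponding module in your shell or job script. A minimal check that the module is set up correctly (the printed paths depend on the cluster's installation) could look like this:

$ module load espresso/7.2
$ which pw.x
$ echo $ESPRESSO_PSEUDO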
Test case
Input file
QE PW input file for a diamond crystal, pw-scf.in:

&control
   calculation  = 'scf'
   prefix       = 'diam'
   restart_mode = 'from_scratch'
   wf_collect   = .false.
   pseudo_dir   = 'pseudo'
   outdir       = './out'
   tprnfor      = .true.
   tstress      = .true.
/
&system
   ibrav       = 2
   celldm(1)   = 6.64245
   nat         = 2
   ntyp        = 1
   ecutwfc     = 60
   occupations = 'smearing'
   smearing    = 'mp'
   degauss     = 0.02
   nbnd        = 4
/
&electrons
   diagonalization = 'david'
   mixing_beta     = 0.7
   conv_thr        = 1.0d-10
/
ATOMIC_SPECIES
 C 12.01078 C_3.98148.UPF
ATOMIC_POSITIONS alat
 C 0.00 0.00 0.00
 C 0.25 0.25 0.25
K_POINTS automatic
 8 8 8 1 1 1
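The ATOMIC_SPECIES card refers to the carbon pseudopotential file C_3.98148.UPF, which must be available in the pseudo directory; the job script below links that name to the cluster's pseudopotential collection. A quick optional check, assuming the espresso/7.2 module sets $ESPRESSO_PSEUDO as it is used in the job script:

$ module load espresso/7.2
$ ls $ESPRESSO_PSEUDO/C_3.98148.UPF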
We will run this calculation on a compute node from one of the default partitions using 4 MPI processes.
Job script
The job script for this computation, pw-scf.slurm:
#!/bin/bash
# ===========================================================================
#SBATCH --job-name=qe-pw-test
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --cpus-per-task=1
#SBATCH --mem=16gb
#SBATCH --time=0-01:00:00
# ===========================================================================
module load espresso/7.2
# Create a symbolic link to the pseudopotentials
# (the link name "pseudo" matches pseudo_dir in the input file)
ln -s $ESPRESSO_PSEUDO .
mpiexec pw.x -npool $SLURM_NTASKS < pw-scf.in
# ===========================================================================
This job script requests 1 hour of run time on a compute node in one of the default partitions. It also requests 4 MPI processes and 16 GB of RAM.
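For reference, pw.x's -npool option splits the k-point mesh across pools of MPI processes; with 4 tasks and -npool set to $SLURM_NTASKS, each of the 4 pools holds a single process. An equivalent way to write the run line, naming the input file with pw.x's -in option and sending the program output to its own file instead of the SLURM log, would be (a sketch, not part of the script above):

mpiexec pw.x -npool $SLURM_NTASKS -in pw-scf.in > pw-scf.out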
Running the test
Create a directory to hold all the files for this test case and change into it:

$ cd
$ mkdir -p my-jobs/qe-tests/diamond-scf
$ cd my-jobs/qe-tests/diamond-scf
Then create the pw-scf.in input file, as well as the job script, pw-scf.slurm. You can use the nano text editor for this and copy/paste the text for the files from this page.
$ nano pw-scf.in
....
$ nano pw-scf.slurm
....
Once you have created both files, check that they are in the current directory:
$ pwd
/home/username/my-jobs/qe-tests/diamond-scf

$ ls -l
-rw-r----- 1 username username 751 Apr 24 14:06 pw-scf.in
-rwxr-x--x 1 username username 488 Apr 24 14:15 pw-scf.slurm
Now you can submit the job to the SLURM scheduler and check the job:
$ sbatch pw-scf.slurm
Submitted batch job 29235262
# Check if it has started yet.
$ squeue-long -j 29235262
JOBID USER STATE PARTITION TIME_LIMIT TIME NODES TASKS CPUS MIN_MEMORY TRES_PER_N REASON NODELIST
29235262 username PENDING ....
The computation is very short, about 2 seconds, so you may not see the job in the queue at all if it starts quickly. If the cluster is busy, it may take some time before the job runs; until then, the job will be in the PENDING state.
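If the job has already finished and left the queue, its final state and elapsed time can still be queried through SLURM job accounting (assuming accounting is enabled on the cluster), for example:

$ sacct -j 29235262 --format=JobID,JobName,State,Elapsed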
Once the job is completed, the output is saved by SLURM to the slurm-29235262.out output file. So, we can check the results:
$ ls -l
drwxr-xr-x 3 username username 4096 Apr 24 14:16 out
lrwxrwxrwx 1 username username 30 Apr 24 14:16 pseudo -> /global/software/qe/7.2/pseudo
-rw-r----- 1 username username 751 Apr 24 14:06 pw-scf.in
-rwxr-x--x 1 username username 488 Apr 24 14:15 pw-scf.slurm
-rw-r--r-- 1 username username 21727 Apr 24 14:16 slurm-29235262.out
# Check the output directory
$ ls out
diam.save diam.xml
# Check the output file (the end of it only).
$ tail slurm-29235262.out
Parallel routines
PWSCF : 1.09s CPU 1.22s WALL
This run was terminated on: 14:16:23 24Apr2024
=------------------------------------------------------------------------------=
JOB DONE.
=------------------------------------------------------------------------------=
Success!
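Besides the JOB DONE message and the timing summary, the physical result can be checked directly: pw.x marks the converged total energy line with a leading "!" in its output, so it can be extracted with grep:

$ grep '!' slurm-29235262.out

This should print the final SCF total energy (in Ry).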