Quantum ESPRESSO


= General =

Quantum ESPRESSO is an integrated suite of open-source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials.

Quantum ESPRESSO has evolved into a distribution of independent and inter-operable codes in the spirit of an open-source project. The distribution consists of a "historical" core set of components, a set of plug-ins that perform more advanced tasks, and a number of third-party packages designed to be inter-operable with the core components.

* Installation guide: https://www.quantum-espresso.org/Doc/user_guide/node7.html
* CC Docs page: https://docs.computecanada.ca/wiki/Quantum_ESPRESSO
* Ready-to-use pseudopotentials from the PSlibrary: https://pseudopotentials.quantum-espresso.org/legacy_tables
* PSLIBRARY, a library for generating ultra-soft pseudopotentials: https://dalcorso.github.io/pslibrary/pslibrary_help.html

The projector augmented wave (PAW) method is a technique used in ab initio electronic-structure calculations. It is a generalization of the pseudopotential and linear augmented-plane-wave methods, and allows density-functional-theory calculations to be performed with greater computational efficiency. See Wikipedia: https://en.wikipedia.org/wiki/Projector_augmented_wave_method

EPW ("Electron-Phonon Wannier") is an open-source Fortran 90 / MPI code which calculates properties related to the electron-phonon interaction using density-functional perturbation theory and maximally localized Wannier functions. Since 26 April 2016, EPW is distributed as part of the Quantum ESPRESSO suite.


= QE on ARC =
== QE modules ==

To see available versions of QE on ARC, use the module command:
<pre>
$ module avail espresso
-------------------------- /global/software/Modules/4.6.0/modulefiles -----------------------------
espresso/6.3-gnu  espresso/7.2
</pre>
At the moment of writing there are two QE modules on ARC:

* '''espresso/7.2''', QE v7.2, released on 31 March 2023.
: Built with GCC and '''gfortran''' 8.5.0
: '''OpenMPI''' 4.1.1
: '''LibXC''' v6.2.2
: '''OpenBLAS''' v0.3.23
: '''No GPU''' support.
: '''No HDF5''' support.

* '''espresso/6.3-gnu''', QE v6.3, an older version of QE provided for compatibility purposes.
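
Before writing a job script you can load the module interactively on a login node and check what it provides. The commands below are standard Environment Modules and shell commands; the exact paths they print depend on the cluster setup.
<source lang=bash>
# Load the QE 7.2 module and confirm which pw.x binary it puts on the PATH.
$ module load espresso/7.2
$ which pw.x

# The module also sets ESPRESSO_PSEUDO, the directory with pre-installed pseudopotentials.
$ echo $ESPRESSO_PSEUDO

# Show everything the module file changes in the environment.
$ module show espresso/7.2
</source>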
 
== Test case ==
 
=== Input file ===
 
QE '''PW''' (<code>pw.x</code>) input file for a diamond crystal, <code>pw-scf.in</code>:
<pre>
&control
    calculation     = 'scf'
    prefix          = 'diam'
    restart_mode    = 'from_scratch'
    wf_collect      = .false.
    pseudo_dir      = 'pseudo'
    outdir          = './out'
    tprnfor         = .true.
    tstress         = .true.
/
&system
    ibrav           = 2
    celldm(1)       = 6.64245
    nat             = 2
    ntyp            = 1
    ecutwfc         = 60
    occupations     = 'smearing'
    smearing        = 'mp'
    degauss         = 0.02
    nbnd            = 4
/
&electrons
    diagonalization = 'david'
    mixing_beta     = 0.7
    conv_thr        = 1.0d-10
/
ATOMIC_SPECIES
  C  12.01078  C_3.98148.UPF
ATOMIC_POSITIONS alat
  C  0.00  0.00  0.00
  C  0.25  0.25  0.25
K_POINTS automatic
8 8 8 1 1 1
</pre>
 
We will run this calculation on a compute node from the default list using '''4 MPI processes'''.
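
Before trusting results from an input like this, one would normally check convergence with respect to the plane-wave cutoff (and the k-point grid). The loop below is a minimal sketch of how such a scan could be set up with standard shell tools; it is not part of the test case, and the cutoff values are illustrative only.
<source lang=bash>
# Generate copies of pw-scf.in with different plane-wave cutoffs (ecutwfc);
# each file can then be run as a separate job and the total energies compared.
for ecut in 40 50 60 70 80; do
    sed "s/ecutwfc *= *60/ecutwfc         = ${ecut}/" pw-scf.in > pw-scf-ecut${ecut}.in
done
</source>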
 
=== Job script ===
 
The job script for this computation, <code>pw-scf.slurm</code>:
<syntaxhighlight lang=bash>
#!/bin/bash
# ===========================================================================
#SBATCH --job-name=qe-pw-test
 
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --cpus-per-task=1
#SBATCH --mem=16gb
#SBATCH --time=0-01:00:00
# ===========================================================================
module load espresso/7.2
 
# Create a symbolic link to the pseudopotentials
ln -s $ESPRESSO_PSEUDO pseudo
 
mpiexec pw.x -npool $SLURM_NTASKS < pw-scf.in
 
# ===========================================================================
</syntaxhighlight>
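The <code>-npool</code> option tells <code>pw.x</code> how to split the MPI processes into pools that work on different k-points; plane-wave parallelization then happens within each pool. In the script above every task gets its own pool. For illustration only, the same run could instead use two pools of two processes each:
<source lang=bash>
# Alternative parallelization (illustrative): 2 k-point pools, 2 MPI processes per pool.
mpiexec pw.x -npool 2 < pw-scf.in
</source>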
This job script requests '''1 hour''' of run time on a compute node in one of the default partitions.
It also requests '''4 MPI processes''' and '''16 GB''' of RAM.
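
For a larger production calculation the same job script structure can simply be given bigger resource requests. The script below is purely illustrative; the values must fit the limits of the partition the job lands on.
<syntaxhighlight lang=bash>
#!/bin/bash
#SBATCH --job-name=qe-pw-large
#SBATCH --nodes=1
#SBATCH --ntasks=16
#SBATCH --cpus-per-task=1
#SBATCH --mem=64gb
#SBATCH --time=0-04:00:00

module load espresso/7.2

# Create a symbolic link to the pseudopotentials
ln -s $ESPRESSO_PSEUDO pseudo

# 4 k-point pools of 4 MPI processes each.
mpiexec pw.x -npool 4 < pw-scf.in
</syntaxhighlight>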
 
When the QE module is loaded, the path to the directory with the pre-installed pseudopotentials is stored in
the <code>ESPRESSO_PSEUDO</code> environment variable.
In this example, the job script creates a symbolic link to that directory, so that in the input file it can
simply be referred to as '''pseudo''' (see the <code>pw-scf.in</code> input file).
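
Before submitting, you can confirm that the pseudopotential named in the input file is actually present in that directory (the file name below is the one used in <code>pw-scf.in</code>):
<source lang=bash>
$ module load espresso/7.2

# The carbon pseudopotential referenced by pw-scf.in should be listed here.
$ ls -l "$ESPRESSO_PSEUDO"/C_3.98148.UPF
</source>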
 
=== Running the test ===
 
You have to create a directory to contain all the files for this test case and change into that directory:
$ cd
$ mkdir -p my-jobs/qe-tests/diamond-scf
$ cd my-jobs/qe-tests/diamond-scf
 
Then you have to create the <code>pw-scf.in</code> '''input file''', as well as
the '''job script''', <code>pw-scf.slurm</code>.
You can use the <code>nano</code> text editor for this and '''copy/paste''' the text for the files from this page.
$ nano pw-scf.in
....
$ nano pw-scf.slurm
....
 
Once you have created both files, you can check that they are in the current directory:
<pre>
$ pwd
/home/username/my-jobs/qe-tests/diamond-scf
 
$ ls -l
-rw-r----- 1 username username  751 Apr 24 14:06 pw-scf.in
-rwxr-x--x 1 username username  488 Apr 24 14:15 pw-scf.slurm
</pre>
 
Now you can submit the job to the SLURM scheduler and check its status:
<source lang=bash>
$ sbatch pw-scf.slurm
Submitted batch job 29235262
 
# Check if it has started yet.
$ squeue-long -j 29235262
JOBID    USER      STATE  PARTITION TIME_LIMIT  TIME  NODES  TASKS  CPUS  MIN_MEMORY TRES_PER_NREASON  NODELIST
29235262  username  PENDING ....
</source>
The computation is very short, about 2 seconds, so you may not see it in the queue at all if it starts quickly.
If the cluster is busy, it may take some time until the job runs; until then the job stays in the '''PENDING''' state.
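
While you wait, the job can be monitored with standard SLURM commands (the job ID is the one reported by <code>sbatch</code> above):
<source lang=bash>
# All of your jobs currently queued or running.
$ squeue -u $USER

# Accounting summary for a specific job, also available after it finishes.
$ sacct -j 29235262 --format=JobID,JobName,State,Elapsed,MaxRSS
</source>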
 
Once the job has completed,
the output is saved by SLURM to the <code>slurm-29235262.out</code> output file.

We can then check the results:
<source lang=bash>
$ ls -l
 
drwxr-xr-x 3 username username  4096 Apr 24 14:16 out
lrwxrwxrwx 1 username username    30 Apr 24 14:16 pseudo -> /global/software/qe/7.2/pseudo
-rw-r----- 1 username username  751 Apr 24 14:06 pw-scf.in
-rwxr-x--x 1 username username  488 Apr 24 14:15 pw-scf.slurm
-rw-r--r-- 1 username username 21727 Apr 24 14:16 slurm-29235262.out
 
# Check the output directory
$ ls out
diam.save  diam.xml
 
# Check the output file (the end of it only).
$ tail slurm-29235262.out
    Parallel routines
 
    PWSCF        :      1.09s CPU      1.22s WALL
 
 
  This run was terminated on:  14:16:23  24Apr2024           
 
=------------------------------------------------------------------------------=
  JOB DONE.
=------------------------------------------------------------------------------=
</source>
Success!
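
A quick way to pull the main numbers out of the output file: in <code>pw.x</code> output the converged total-energy line starts with a <code>!</code>, and the SCF report states whether convergence was reached.
<source lang=bash>
# Converged total energy (the line starting with "!").
$ grep '^!' slurm-29235262.out

# Confirmation that the SCF cycle converged.
$ grep -i 'convergence has been achieved' slurm-29235262.out
</source>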


= Links =

[[ARC Software]]
