Latest revision as of 18:10, 21 September 2023
General
- Web site: https://www.nektar.info/
- Speed Comparison among Nektar++ Solvers
Nektar++ is a tensor-product-based finite element package designed to allow the construction of efficient classical low-polynomial-order h-type solvers (where h is the size of the finite element) as well as higher-order p-type solvers (where p is the polynomial order of the piecewise expansion).
The framework comes with a number of ready-made solvers and also allows one to construct a variety of new ones.
Nektar++ on ARC
Limitations
Currently, only a container version of Nektar++ is provided on ARC.
Due to a technical incompatibility, Nektar++ 5.3.0 does not appear to work on the <code>cpu2019</code> partition when more than one node is allocated to the job.
It does work on the <code>cpu2021</code>, <code>cpu2022</code>, and <code>cpu2023</code> partitions.
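Given this limitation, multi-node jobs should target one of the partitions reported to work. A minimal fragment of the resource-request header (the node count here is only an example):

```shell
# Avoid cpu2019 for multi-node Nektar++ 5.3.0 jobs; cpu2021, cpu2022,
# and cpu2023 are reported to work.
#SBATCH --partition=cpu2022
#SBATCH --nodes=2
```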
Available containers
<pre>
$ ls -l /global/software/nektar/containers
-rwxr-xr-x 1 drozmano drozmano 412758016 Apr 12 14:36 nektar-5.3.0.sif
</pre>
Testing
Check the version:
<pre>
$ apptainer exec /global/software/nektar/containers/nektar-5.3.0.sif IncNavierStokesSolver --version
Nektar++ version 5.3.0
</pre>
Check the help for a solver:
<pre>
$ apptainer exec /global/software/nektar/containers/nektar-5.3.0.sif IncNavierStokesSolver --help
Allowed options:
  -v [ --verbose ]                 be verbose
  -V [ --version ]                 print version information
  -h [ --help ]                    print this help message
  -I [ --solverinfo ] arg          override a SOLVERINFO property
  -P [ --parameter ] arg           override a parameter
  --npx arg                        number of procs in X-dir
  --npy arg                        number of procs in Y-dir
  --npz arg                        number of procs in Z-dir
  --nsz arg                        number of slices in Z-dir
  --npt arg                        number of procs in T-dir (parareal)
  --part-only arg                  only partition mesh into N partitions.
  --part-only-overlapping arg      only partition mesh into N overlapping partitions.
  --part-info                      Output partition information
  -f [ --forceoutput ]             Disables backups files and forces output to be written without any checks
  --writeoptfile                   write an optimisation file
  --useoptfile arg                 use an optimisation file
  -i [ --io-format ] arg           Default input/output format (e.g. Xml, Hdf5)
  --set-start-chknumber arg        Set the starting number of the checkpoint file.
  --set-start-time arg             Set the starting time of the simulation.
  --use-hdf5-node-comm             Use a per-node communicator for HDF5 partitioning.
  --use-ptscotch                   Use PtScotch for parallel mesh partitioning.
  --use-scotch                     Use Scotch for mesh partitioning.
</pre>
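The <code>-P</code> and <code>-I</code> flags in the help output above let one override session-file settings from the command line. A sketch of assembling such an invocation and echoing it before use, so the exact command is recorded in the job log; note that <code>TimeStep</code> and <code>session.xml</code> are illustrative names, not part of the test case — the parameter must match what your own session file defines:

```shell
# Assemble the solver invocation in a variable and echo it before
# running, so the exact command lands in the job log.
# "TimeStep" is a hypothetical parameter name; use whatever your
# session (.xml) file actually defines.
SIF="/global/software/nektar/containers/nektar-5.3.0.sif"
CMD="apptainer exec $SIF IncNavierStokesSolver -P TimeStep=0.001 session.xml"
echo "$CMD"
# The command would then be run with: eval "$CMD"
```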
Example
The test case
A test case is available in <code>/global/software/nektar/tests/test3d</code>.
<pre>
$ cd
$ cp -r /global/software/nektar/tests/test3d .
$ cd test3d
$ ls -l
-rw-r--r-- 1 username username 44194196 Apr 12 15:01 cylinder_3d_mesh_gmsh_oriented.xml
-rw-r--r-- 1 username username 8617 Apr 12 15:01 cylinder_full3d_conditions.xml
-rw-r--r-- 1 username username 768 Apr 12 15:01 job.slurm
$ sbatch job.slurm
....
</pre>
The job script
A job script to run on 4 full nodes of the <code>cpu2022</code> partition, <code>job.slurm</code>:
<syntaxhighlight lang=bash>
#! /bin/bash
# -------------------------------------------------------------------------------------------
#SBATCH --job-name=nektar-test
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=52
#SBATCH --cpus-per-task=1
#SBATCH --mem=128gb
#SBATCH --time=0-24:00:00
#SBATCH --partition=cpu2022
# -------------------------------------------------------------------------------------------
# Provides mpiexec on the host; the solver itself runs inside the container.
module load openmpi/4.1.1-gnu

CONTAINER="/global/software/nektar/containers/nektar-5.3.0.sif"
OPTS="--verbose --io-format Hdf5 --use-hdf5-node-comm"

# One MPI rank per task; -B /work bind-mounts the /work file system into the container.
mpiexec singularity exec -B /work $CONTAINER IncNavierStokesSolver cylinder_3d_mesh_gmsh_oriented.xml cylinder_full3d_conditions.xml $OPTS
# -------------------------------------------------------------------------------------------
</syntaxhighlight>
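The resource request in the script can be sanity-checked with a little shell arithmetic. A sketch — the node and task counts are taken from the script above, and the per-rank memory figure is derived from the request, not measured:

```shell
# Total MPI ranks launched by mpiexec under this allocation:
# --nodes=4 and --ntasks-per-node=52 give 4 * 52 ranks.
NODES=4
TASKS_PER_NODE=52
RANKS=$((NODES * TASKS_PER_NODE))
echo "total MPI ranks: $RANKS"

# --mem=128gb is requested per node, so each rank gets roughly:
MEM_PER_NODE_MB=$((128 * 1024))
echo "approx MB per rank: $((MEM_PER_NODE_MB / TASKS_PER_NODE))"
```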