ANSYS

From RCSWiki


Introduction

ANSYS (external link) is a commercial suite of programs for engineering simulation, including fluid dynamics (Fluent and CFX), structural analysis (ANSYS Mechanical) and electromagnetics/electronics software.

Typically, researchers will install ANSYS on their own computers to develop models in a graphical user interface and then run simulations that exceed their local hardware capabilities on ARC.

The software can be downloaded, upon approval, from the IT Software Distribution web site.

ANSYS is available to all U of C researchers with an ARC account, but with licensing restrictions as outlined in the next section.

Licensing considerations

For many years, Information Technologies has provided a limited number of license tokens for ANSYS software, sometimes supplemented by contributions from researchers. The software contract is typically renewed annually in August. If you are interested in contributing to the pool of licenses, you can write to the IT Help Desk itsupport@ucalgary.ca and ask that your email be redirected to the IT software librarian.

The discussion that follows relates only to the research version of the software. Note that the conditions of use of the teaching licenses prohibit their use for research projects.

At the time of this writing in May 2020, there are 50 basic academic licenses and 512 extended "HPC" license tokens available (with 256 of the latter reserved for a specific research group who purchased their own licenses). The number of tokens available at a given time can be seen by running the following commands on ARC:

module load ansys/2019r2
lmutil lmstat -c 1055@ansyslic.ucalgary.ca -a
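
The lmstat output is long, so it can help to filter it for just the ANSYS feature counts. A minimal sketch, assuming the standard FlexLM "Users of ..." summary lines and the feature names discussed below:

lmutil lmstat -c 1055@ansyslic.ucalgary.ca -a | grep -E "Users of (aa_r|aa_r_hpc):"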

For ANSYS Fluent, each job on ARC will use one token of the software feature "aa_r" in the lmstat output. In addition, one "aa_r_hpc" token is used for each core in excess of 16. So, for example, a job using a 40-core node from the cpu2019 partition will use one aa_r token and 24 aa_r_hpc tokens.
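
If you want to estimate the token usage before submitting, the arithmetic above can be captured in a small shell snippet (an illustrative sketch only, using the 16-core threshold described above):

#!/bin/bash
# Estimate ANSYS Fluent license tokens for a job (illustrative only).
CORES=${1:-40}                                # total cores requested (nodes * ntasks-per-node)
HPC_TOKENS=$(( CORES > 16 ? CORES - 16 : 0 ))
echo "aa_r tokens:     1"                     # one per Fluent job
echo "aa_r_hpc tokens: ${HPC_TOKENS}"         # one per core beyond 16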

Using the fastest hardware available will provide the most value for a given number of license tokens, so using the 40-core compute nodes, selected by specifying the cpu2019 partition in your batch job (see example scripts below), is preferred. However, if there is a shortage of license tokens, you may use just part of a compute node, or compute nodes from the older legacy partitions, such as parallel.

ANSYS Fluent on ARC

Researchers using ANSYS on ARC are expected to be generally familiar with ANSYS capabilities, input file format and the use of restart files.

You can use

$ module avail ansys
--------------------- /global/software/Modules/4.6.0/modulefiles -------------------------------
ansys/19.1  ansys/2019r2  ansys/2020r2  ansys/2021r1

to see the versions of the ANSYS software that have been installed on ARC.

Not all of them may be in working condition or fully supported on ARC. So far, versions 2019r2 and 2020r2 are supported. The other versions are kept either for historical reasons or for testing purposes.

Creating a Fluent input file

After preparing your model, at the point where you are ready to run a Fluent solver, you save the case and data files and transfer them to ARC. In addition to those files, to run your model on ARC you need an input file containing Fluent text interface commands to specify such parameters as the solver to use, the number of time steps, the frequency of output and other simulation controls.

Typically, the main difficulty in getting started with Fluent on ARC is figuring out which text interface commands correspond to the graphical interface commands you may be more familiar with from using a desktop version of Fluent. At the Fluent command prompt, if you just hit Enter, the available commands will be shown, similar to:

adapt/                  file/                   report/
define/                 mesh/                   solve/
display/                parallel/               surface/
exit                    plot/                   views/

Entering one of those commands and pressing Enter again will show its sub-options:

> file

/file>
async-optimize?         read-case-data          start-journal
auto-save/              read-field-functions    start-transcript
binary-files?           read-journal            stop-journal
confirm-overwrite?      read-macros             stop-macro
define-macro            read-profile            stop-transcript
execute-macro           read-transient-table    transient-export/
export/                 set-batch-options       write-cleanup-script
import/                 show-configuration      write-field-functions
read-case               solution-files/         write-macros

So, for example, by exploring these menus one can discover that the commands to set how frequently data and case files are automatically saved during a long run are of the form:

/file/auto-save/data-frequency 1000
/file/auto-save/case-frequency if-case-is-modified

Here is an example of a complete text input file in which case and data files are read in, some parameters related to storing output are set, the solver is run, and data and case files are saved at the end of the run.

/file/read-case test.cas
/file/read-data test.dat

/file/confirm-overwrite no
/file/auto-save/data-frequency 1000
/file/auto-save/case-frequency if-case-is-modified
/file/auto-save/root-name test

/solve/dual-time-iterate
22200
150

/file/write-case test.%t.%i.cas
/file/write-data test.%t.%i.dat

Note that blank lines are significant for some commands.

Running ANSYS Fluent batch jobs on ARC

Like other calculations on ARC systems, ANSYS software is run by submitting an appropriate script for batch scheduling using the sbatch command. For more information about submitting jobs, see the ARC Cluster Guide.

The scripts below can serve as a template for your own batch job scripts. Please note that different versions of ANSYS Fluent require different command-line options, and different options are also required when Fluent is run on different partitions of the ARC cluster. Typically, only the cpu2019, cpu2021, and parallel partitions are recommended for Fluent.

In the following examples, the input files elbow3.in and elbow3.cas are used. They are available on ARC in the directory /global/software/ansys/scripts.
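
For example, after saving one of the job scripts below to a file in your own directory (here the script name fluent2020r2_cpu2019.slurm from the first example is used), a typical workflow with standard SLURM commands is:

mkdir -p ~/fluent_test && cd ~/fluent_test     # hypothetical working directory
cp /global/software/ansys/scripts/elbow3.in .
cp /global/software/ansys/scripts/elbow3.cas .
sbatch fluent2020r2_cpu2019.slurm              # submit the batch job
squeue -u $USER                                # check the state of your jobs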

Ansys Fluent 2020r2

cpu2019 partition

When running on a full compute node, specify --mem=0 to request all the associated memory on the node. Note that when using the cpu2019 partition (40-core nodes), an n-node ANSYS job will take 40*n-16 license tokens from the aa_r_hpc pool.

fluent2020r2_cpu2019.slurm:

#!/bin/bash
#-------------------------------------------------------------------------------------------
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=40
#SBATCH --mem=0
#SBATCH --time=00:10:00
#SBATCH --partition=cpu2019,cpu2019-bf05

# Fluent job script for elbow example on 40-core ARC cpu2019 partition nodes.
# You may change the time and nodes requests, but leave ntasks-per-node=40 and mem=0
#-------------------------------------------------------------------------------------------
# 2022-01-21 DR
# -----------------------------------------------------------------------------------------------
module load ansys/2020r2
module load openmpi/4.1.1-gnu
export OPENMPI_ROOT=$(dirname $(dirname `which mpiexec`))

# Create a node list so that Fluent knows which nodes to use.
HOSTLIST=hostlist_${SLURM_JOB_ID}
scontrol show hostnames > $HOSTLIST
echo "Created host list file $HOSTLIST"
echo "Running on hosts:"
cat $HOSTLIST

echo "Using $SLURM_NTASKS cores."
echo "Starting run at: `date`"
echo "Current working directory is `pwd`"

# -----------------------------------------------------------------------------------------------
# Provide the name of the input file and run Fluent here.
# The -pib option tells Fluent to use the high-speed interconnect rather than Ethernet.
INPUT=elbow3.in
fluent 3ddp -g -t${SLURM_NTASKS} -cnf=${HOSTLIST} -mpi=openmpi -pib -i $INPUT 

# -----------------------------------------------------------------------------------------------
echo "Job finished at: `date`"

parallel partition

Use the parallel partition only when the waiting time for cpu2019 nodes is comparable to the run time, as the cpu2019 partition nodes should run Fluent about twice as fast as the parallel partition nodes.

fluent2020r2_parallel.slurm:

#!/bin/bash
# -----------------------------------------------------------------------------------------------
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=12
#SBATCH --mem=0
#SBATCH --time=00:10:00
#SBATCH --partition=parallel

# Fluent job script for elbow example on 12-core ARC parallel partition nodes.
# You may change the time and nodes requests, but leave ntasks-per-node=12 and mem=0
# -----------------------------------------------------------------------------------------------
# 2022-01-21 DR 
# -----------------------------------------------------------------------------------------------
module load ansys/2020r2

# Create a node list so that Fluent knows which nodes to use.
HOSTLIST=hostlist_${SLURM_JOB_ID}
scontrol show hostnames > $HOSTLIST
echo "Created host list file $HOSTLIST"
echo "Running on hosts:"
cat $HOSTLIST

echo "Using $SLURM_NTASKS cores."
echo "Starting run at: `date`"
echo "Current working directory is `pwd`"

# -----------------------------------------------------------------------------------------------
# Provide the name of the input file and run Fluent here.
INPUT=elbow3.in
fluent 3ddp -g -t${SLURM_NTASKS} -cnf=${HOSTLIST} -mpi=openmpi -pib -i $INPUT 

# -----------------------------------------------------------------------------------------------
echo "Job finished at: `date`"

For the legacy partitions, note the use of the -pib argument on the Fluent command line to indicate that InfiniBand networking is to be used.

Ansys Fluent 2019r2

cpu2019 partition

When running on a full compute node, specify --mem=0 to request all the associated memory on the node. Note that when using the cpu2019 partition (40-core nodes), an n-node ANSYS job will take 40*n-16 license tokens from the aa_r_hpc pool.

ansys_2019r2_fluent_cpu2019_node.slurm:

#!/bin/bash
#-------------------------------------------------------------------------------------------
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=40
#SBATCH --mem=0
#SBATCH --time=00:10:00
#SBATCH --partition=cpu2019,cpu2019-bf05

# Fluent job script for elbow example on 40-core ARC cpu2019 partition nodes.
# You may change the time and nodes requests, but leave ntasks-per-node=40 and mem=0
#-------------------------------------------------------------------------------------------
# 2022-01-21 DR - Updated
# 2019-07-16 DSP - Updated for Fluent 2019R2 on ARC
# -----------------------------------------------------------------------------------------------
module load ansys/2019r2

# Create a node list so that Fluent knows which nodes to use.
HOSTLIST=hostlist_${SLURM_JOB_ID}
scontrol show hostnames > $HOSTLIST
echo "Created host list file $HOSTLIST"
echo "Running on hosts:"
cat $HOSTLIST

echo "Using $SLURM_NTASKS cores."
echo "Starting run at: `date`"
echo "Current working directory is `pwd`"

# -----------------------------------------------------------------------------------------------
# Provide the name of the input file and run Fluent here.
# The -pib.infinipath option is required to make fluent run on Intel OPA interconnect.
INPUT=elbow3.in
fluent 3ddp -g -t${SLURM_NTASKS} -cnf=${HOSTLIST} -mpi=ibmmpi -pib.infinipath -i $INPUT 

# -----------------------------------------------------------------------------------------------
echo "Job finished at: `date`"

parallel partition

Use the parallel partition only when the waiting time for cpu2019 nodes is comparable to the run time, as the cpu2019 partition nodes should run Fluent about twice as fast as the parallel partition nodes.

The following example script, ansys_2019r2_fluent_parallel_node.slurm, and the input files elbow3.in and elbow3.cas are available on ARC in the directory /global/software/ansys/scripts.

#!/bin/bash
# -----------------------------------------------------------------------------------------------
#SBATCH --time=00:10:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=12
#SBATCH --mem=0
#SBATCH --partition=parallel

# Fluent job script for elbow example on 12-core ARC parallel partition nodes.
# You may change the time and nodes requests, but leave ntasks-per-node=12 and mem=0
# -----------------------------------------------------------------------------------------------
# 2022-01-21 DR - Updated
# 2019-07-16 DSP - Updated for Fluent 2019R2 on ARC
# -----------------------------------------------------------------------------------------------
module load ansys/2019r2

# Create a node list so that Fluent knows which nodes to use.
HOSTLIST=hostlist_${SLURM_JOB_ID}
scontrol show hostnames > $HOSTLIST
echo "Created host list file $HOSTLIST"
echo "Running on hosts:"
cat $HOSTLIST

echo "Using $SLURM_NTASKS cores."
echo "Starting run at: `date`"
echo "Current working directory is `pwd`"

# -----------------------------------------------------------------------------------------------
# Provide the name of the input file and run Fluent here.
INPUT=elbow3.in
fluent 3ddp -g -t${SLURM_NTASKS} -cnf=${HOSTLIST} -mpi=ibmmpi -pib -i $INPUT 

# -----------------------------------------------------------------------------------------------
echo "Job finished at: `date`"

For the legacy partitions, note the use of the -pib argument on the Fluent command line to indicate that InfiniBand networking is to be used.

Timings

These timings are provided for reference purposes. They were obtained on an example system that runs for about 10 minutes on 2 full compute nodes of the cpu2019 partition.

Partition   #Nodes   #CPUs/Procs   MPI              Interconnect     Walltime   Fluent
---------------------------------------------------------------------------------------
cpu2019          2            80   openmpi vendor   eth                 10:36    2020r2
cpu2019          2            80   openmpi local    eth                  7:49    2020r2
cpu2019          2            80   openmpi local    ib                   7:53    2020r2
cpu2019          2            80   openmpi local    ib                   8:01    2019r2
cpu2019          2            80   ibmmpi vendor    eth                  8:34    2019r2
cpu2019          2            80   ibmmpi vendor    ib.infinipath        7:56    2019r2
parallel         8            96   openmpi vendor   eth                 19:12    2020r2
parallel         8            96   openmpi vendor   ib                  13:25    2020r2
parallel         8            96   openmpi local    eth                 12:18    2020r2
parallel         8            96   openmpi local    ib                  12:25    2020r2

Issues

Cleaning the system after Crashed Jobs

Update, 2021-01: After a major update in December 2020, the job scheduler, SLURM, should delete these leftover processes automatically, making the cleaning step unnecessary. This section is kept for historical and reference purposes.


There is a problem with Fluent contaminating the cluster with leftover processes when a multi-node job crashes.

The issue is that Fluent uses its own set of MPI libraries, which do not communicate with the job scheduler on ARC. Therefore, the processes spread across the nodes while an MPI job is running are not known to SLURM, and SLURM cannot clean them up when the job terminates abnormally (crashes). If the job finishes normally, those processes terminate on their own and there is no problem.

To help with this issue, Fluent creates a script called cleanup-fluent-….sh in the working directory; it must be run to clean up the system if the job terminates abnormally. If the job finishes normally, this script is deleted automatically. If the job crashes, it never gets to the deletion step and the cleanup script stays in the working directory.

The name of the script has the form cleanup-fluent-node-12345.sh, where node is the node name and 12345 is a number associated with the run.

So, if you

  • run multi-node Fluent jobs and
  • your job crashes and
  • you can see that script in the working directory of that job,

you have to execute that script immediately to clean up the system, like this:

$ ./cleanup-fluent-....sh

The cleanup script deletes itself when you run it, which prevents any double cleaning of the system.

Please make sure that you take care of any leftover Fluent processes after your jobs are done. This is a serious issue, as the leftover processes slow down other users' jobs and will never die on their own.
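
If you are not sure whether any cleanup scripts were left behind, you can search your job directories for them. A minimal sketch (~/projects is only an example path; substitute your own working directories):

find ~/projects -name "cleanup-fluent-*.sh"
# After reviewing the list, run each script from its own directory, for example:
# cd /path/to/job/dir && ./cleanup-fluent-node-12345.sh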

Support

Please send any questions regarding using ANSYS on ARC to support@hpc.ucalgary.ca.