ANSYS

Introduction

ANSYS (external link) is a commercial suite of programs for engineering simulation, including fluid dynamics (Fluent and CFX), structural analysis (ANSYS Mechanical) and electromagnetics/electronics software.

Typically, researchers will install ANSYS on their own computers to develop models in a graphical user interface, and then use ARC to run simulations that exceed their local hardware capabilities.

The software can be downloaded, upon approval, from the IT Software Distribution web site.

ANSYS is available to all U of C researchers with an ARC account, but with licensing restrictions as outlined in the next section.

Licensing considerations

For many years, Information Technologies has provided a limited number of license tokens for ANSYS software, sometimes supplemented by contributions from researchers. The software contract is typically renewed annually in August. If you are interested in contributing to the pool of licenses, you can write to the IT Help Desk (itsupport@ucalgary.ca) and ask that your email be redirected to the IT software librarian.

The discussion that follows relates only to the research version of the software. Note that the conditions of use of the teaching licenses prohibit them from being used for research projects.

At the time of this writing in May 2020, there are 50 basic academic licenses and 512 extended "HPC" license tokens available (with 256 of the latter reserved for a specific research group who purchased their own licenses). The number of tokens available at a given time can be seen by running the following commands on ARC:

module load ansys/2019r2
lmutil lmstat -c 1055@ansyslic.ucalgary.ca -a

For ANSYS Fluent, each job on ARC will use one token of the software feature "aa_r" in the lmstat output. In addition, one license token per core is used of the "aa_r_hpc" type for cores in excess of 16. So, for example, a job using a 40-core node from the cpu2019 partition will use one aa_r token and 24 aa_r_hpc tokens.
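
As a rough guide, here is a minimal bash sketch that applies this rule to estimate how many tokens a planned job will need and then queries the license server for just the two relevant features. The 40-core count is only an illustration, and the lmstat -f option simply limits the report to a single feature.

# Sketch: estimate ANSYS Fluent license token needs for a planned job and
# check current availability. The core count below is only an example.
CORES=40

AA_R=1                              # one aa_r token per Fluent job
if [ "$CORES" -gt 16 ]; then
    AA_R_HPC=$(( CORES - 16 ))      # one aa_r_hpc token per core beyond 16
else
    AA_R_HPC=0
fi
echo "A ${CORES}-core job needs ${AA_R} aa_r token(s) and ${AA_R_HPC} aa_r_hpc token(s)."

# Report current usage of just these two features on the license server:
module load ansys/2019r2
lmutil lmstat -c 1055@ansyslic.ucalgary.ca -f aa_r
lmutil lmstat -c 1055@ansyslic.ucalgary.ca -f aa_r_hpc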

Using the fastest hardware available will provide the most value for a given number of license tokens, so using the 40-core compute nodes, selected by specifying the cpu2019 partition in your batch job (see example scripts below), is preferred. However, if there is a shortage of license tokens, you may use just part of a compute node, or compute nodes from the older legacy partitions, such as parallel.

Running ANSYS Fluent batch jobs

Researchers using ANSYS on ARC are expected to be generally familiar with ANSYS capabilities, input file format and the use of restart files.

You can use

module avail ansys

to see the versions of the ANSYS software that have been installed on ARC.

Creating a Fluent input file

After preparing your model, at the point where you are ready to run a Fluent solver, you save the case and data files and transfer them to ARC. In addition to those files, to run your model on ARC you need an input file containing Fluent text interface commands to specify such parameters as the solver to use, the number of time steps, the frequency of output and other simulation controls.
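
For example, from a Linux or Mac workstation the files could be copied to ARC with scp. The login host name and destination directory below are assumptions, so substitute your own username and preferred directory.

# Copy the saved case and data files from your workstation to ARC
# (host name and target directory are illustrative; adjust to your setup).
scp test.cas test.dat your_username@arc.ucalgary.ca:~/fluent_runs/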

Typically, the main difficulty in getting started with Fluent on ARC is figuring out which text interface commands correspond to the graphical interface commands you may be more familiar with from using a desktop version of Fluent. At the Fluent command prompt, if you just hit Enter, the available commands will be shown, similar to:

adapt/                  file/                   report/
define/                 mesh/                   solve/
display/                parallel/               surface/
exit                    plot/                   views/

Entering one of those commands and pressing Enter again will show its sub-options:

> file

/file>
async-optimize?         read-case-data          start-journal
auto-save/              read-field-functions    start-transcript
binary-files?           read-journal            stop-journal
confirm-overwrite?      read-macros             stop-macro
define-macro            read-profile            stop-transcript
execute-macro           read-transient-table    transient-export/
export/                 set-batch-options       write-cleanup-script
import/                 show-configuration      write-field-functions
read-case               solution-files/         write-macros

So, for example, by exploring these menus one can discover that the commands to set how often data and case files are automatically saved during a long run are of the form:

/file/auto-save/data-frequency 1000
/file/auto-save/case-frequency if-case-is-modified

Here is an example of a complete text input file in which case and data files are read in, some parameters related to the storing of output are set, the solver is run, and data and case files are saved at the end of the run.

/file/read-case test.cas
/file/read-data test.dat

/file/confirm-overwrite no
/file/auto-save/data-frequency 1000
/file/auto-save/case-frequency if-case-is-modified
/file/auto-save/root-name test

/solve/dual-time-iterate
22200
150

/file/write-case test.%t.%i.cas
/file/write-data test.%t.%i.dat

Note that blank lines are significant for some commands.
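
For reference, here is the solver portion of the example above with comments added (in Fluent journal files, a line beginning with a semicolon is treated as a comment). The reading of the two values as the number of time steps and the maximum iterations per time step is a sketch; check it against the prompts your own Fluent version gives.

; Annotated sketch of the solver section from the example above.
; Lines starting with ";" are comments in a Fluent journal file.
/solve/dual-time-iterate
22200
150
; 22200 is the number of time steps requested and 150 the maximum
; number of iterations per time step (verify with your Fluent version).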

Slurm batch job script examples

Like other calculations on ARC systems, ANSYS software is run by submitting an appropriate script for batch scheduling using the sbatch command. For more information about submitting jobs, see the ARC Cluster Guide.

The scripts below can serve as a template for your own batch job scripts.
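
As a concrete way to get started, you could copy the full-node example described in the next subsection into a working directory and submit it with sbatch. The working directory name here is just an illustration; the file paths are the ones given below.

# Copy the example job script and its input files, then submit the job.
mkdir -p ~/fluent_example && cd ~/fluent_example
cp /global/software/ansys/scripts/ansys_2019r2_fluent_cpu2019_node.slurm .
cp /global/software/ansys/scripts/elbow3.in .
cp /global/software/ansys/scripts/elbow3.cas .
sbatch ansys_2019r2_fluent_cpu2019_node.slurm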

Full node example - cpu2019 partition

When running on a full compute node, specify --mem=0 to request all the associated memory on the node. Note that when using the cpu2019 partition (40-core nodes), an n-node ANSYS job will take 40*n-16 license tokens from the aa_r_hpc pool.

The following example script, ansys_2019r2_fluent_cpu2019_node.slurm, and the input files elbow3.in and elbow3.cas are available on ARC in the directory /global/software/ansys/scripts.

#!/bin/bash

#SBATCH --time=00:10:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=40
#SBATCH --mem=0
#SBATCH --partition=cpu2019

# Fluent job script for elbow example on 40-core ARC cpu2019 partition nodes.
# You may change the time and nodes requests, but, leave ntasks-per-node=40 and mem=0

# 2019-07-16 DSP - Updated for Fluent 2019R2 on ARC

# Define the run files and solver type:
BASE=elbow3
INPUT=${BASE}.in
OUTPUT=${BASE}_${SLURM_JOB_ID}.out
SOLVER="2d"

# Choose version of ANSYS Fluent to use:
module load ansys/2019r2

FLUENT=`which fluent`
echo "Using Fluent: $FLUENT"

echo "Current working directory is `pwd`"

# Create a node list so that Fluent knows which nodes to use.
HOSTLIST=hostlist_${SLURM_JOB_ID}
scontrol show hostnames > $HOSTLIST
echo "Created host list file $HOSTLIST"
echo "Running on hosts:"
cat $HOSTLIST

echo "Using $SLURM_NTASKS cores."

echo "Starting run at: `date`"

$FLUENT $SOLVER -g -t${SLURM_NTASKS} -ssh -cnf=${HOSTLIST} -i $INPUT > $OUTPUT 2>&1

echo "Job finished at: `date`"

Legacy node example - parallel partition

Use the parallel partition only when the waiting time for cpu2019 nodes is comparable to the run time, as the cpu2019 partition nodes should run Fluent about twice as fast as the parallel partition nodes.

The following example script, ansys_2019r2_fluent_parallel_node.slurm, and the input files elbow3.in and elbow3.cas are available on ARC in the directory /global/software/ansys/scripts.

#!/bin/bash

#SBATCH --time=00:10:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=12
#SBATCH --mem=0
#SBATCH --partition=parallel

# Fluent job script for elbow example on 12-core ARC parallel partition nodes.
# You may change the time and nodes requests, but, leave ntasks-per-node=12 and mem=0

# 2019-07-16 DSP - Updated for Fluent 2019R2 on ARC

# Define the run files and solver type:
BASE=elbow3
INPUT=${BASE}.in
OUTPUT=${BASE}_${SLURM_JOB_ID}.out
SOLVER="2d"

# Choose version of ANSYS Fluent to use:
module load ansys/2019r2

FLUENT=`which fluent`
echo "Using Fluent: $FLUENT"

echo "Current working directory is `pwd`"

# Create a node list so that Fluent knows which nodes to use.
HOSTLIST=hostlist_${SLURM_JOB_ID}
scontrol show hostnames > $HOSTLIST
echo "Created host list file $HOSTLIST"
echo "Running on hosts:"
cat $HOSTLIST

echo "Using $SLURM_NTASKS cores."

echo "Starting run at: `date`"

$FLUENT $SOLVER -g -t${SLURM_NTASKS} -ssh  -pib -cnf=${HOSTLIST} -i $INPUT > $OUTPUT 2>&1

echo "Job finished at: `date`"

For the legacy partitions, note the use of the -pib argument on the Fluent command line to indicate that InfiniBand networking is to be used.

Support

Please send any questions regarding using ANSYS on ARC to support@hpc.ucalgary.ca.