OpenFOAM


Introduction

OpenFOAM is a free, open-source toolkit for building computational fluid dynamics (CFD) applications. It includes solver libraries and pre- and post-processing utilities. Common variants of OpenFOAM include those from openfoam.org and openfoam.com.

Typically, researchers install OpenFOAM on their own computers to learn the software, run simulations on ARC when those exceed their local hardware capabilities, and then transfer the output data back to their own computers for visualization.
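For example, once a run has finished, the output can be pulled back to a local machine with a tool such as rsync. This is a minimal sketch; the user name, login host, and case directory are placeholders to be replaced with your own:

# Run this from your own computer, not on ARC.
# "username" and the case directory are placeholders - substitute your own values.
rsync -av username@arc.ucalgary.ca:~/damBreakVeryFine/ ./damBreakVeryFine/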

Running OpenFOAM batch jobs on ARC

Researchers using OpenFOAM on ARC are expected to be generally familiar with OpenFOAM's capabilities, its input file formats, and the use of restart files.

Like other jobs on ARC, OpenFOAM calculations are run by submitting an appropriate job script to the batch scheduler with the sbatch command. See the documentation on running batch jobs for more information.
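For example, if the sample job script shown below is saved as openfoam_job.slurm (the file name is arbitrary), it can be submitted and monitored like this:

# Submit the job script; sbatch prints the job ID on success.
sbatch openfoam_job.slurm

# List your queued and running jobs to check on its progress.
squeue -u $USER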

Several versions of OpenFOAM have been installed on ARC under /global/software/openfoam, but some researchers have chosen to install particular versions in their own home directories or to take advantage of the wide range of versions installed on Compute Canada clusters.
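To see which builds are currently available in the central installation directory, simply list it (the exact subdirectory names will change as versions are added):

# Show the OpenFOAM builds installed centrally on ARC.
ls /global/software/openfoam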

Here is a sample script that was used to test OpenFOAM on ARC with one of the supplied tutorial cases (damBreakFine), which uses interFoam and was modified to use the scotch decomposition option. The job script and input files can be copied from the /global/software/openfoam/examples/damBreak/damBreakVeryFine_scotch_template directory on ARC.
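For example, the template case can be copied into your own space and used as a starting point (the destination directory name here is only an example):

# Copy the example case, including the job script, into your home directory.
cp -r /global/software/openfoam/examples/damBreak/damBreakVeryFine_scotch_template ~/damBreakVeryFine
cd ~/damBreakVeryFine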

The version ("6.x") of OpenFOAM used is from openfoam.org and was built with GNU 4.8.5 compilers and OpenMPI version 2.1.3. OpenFOAM build options used were WM_LABEL_SIZE=64 and FOAMY_HEX_MESH=yes.
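If you want to check the installation interactively on a login node before submitting a job, the same module and bashrc file used in the script below can be loaded by hand. This is only a sketch using the paths from the script, and the solver invocation is just a quick sanity check:

module load openmpi/2.1.3-gnu
source /global/software/openfoam/6x_20181025_gcc485_mpi_213gnu/OpenFOAM-6/etc/bashrc FOAMY_HEX_MESH=yes
interFoam -help    # confirm the solver is found and the environment is set up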

#!/bin/bash

#SBATCH --nodes=2
#SBATCH --ntasks-per-node=40    # number of MPI processes per node - adjust according to the partition
#SBATCH --mem=0                 # Use --mem=0 to request all the available memory on a node  
#SBATCH --time=05:00:00         # Maximum run time in hh:mm:ss, or d-hh:mm
#SBATCH --partition=pawson-bf,apophis-bf,razi-bf,cpu2019

# Check on some basics:

echo "Running on host: $(hostname)"
echo "Current working directory is: $(pwd)"
echo "Starting job at $(date)"

# Initialize OpenFOAM environment.
module load openmpi/2.1.3-gnu
# Disable CUDA support in Open MPI; it is not needed for this CPU-based run.
export OMPI_MCA_mpi_cuda_support=0

source /global/software/openfoam/6x_20181025_gcc485_mpi_213gnu/OpenFOAM-6/etc/bashrc FOAMY_HEX_MESH=yes

# Use the current (case) directory as the OpenFOAM run directory.
export FOAM_RUN=$PWD

echo "Working in $PWD"

CORES=$SLURM_NTASKS
echo "Running on $CORES cores."

echo "Make a new decomposeParDict file"
DATE=$(date)

cat > system/decomposeParDict <<EOF
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    location    "system";
    object      decomposeParDict;
}

// decomposeParDict created at ${DATE}.

numberOfSubdomains $CORES;

method          scotch;

EOF

echo "Forcing new decomposition"

decomposePar -force

echo "Using mpiexec: $(which mpiexec)"

FOAM=$(which interFoam)
echo "About to run $FOAM at $(date)"

mpiexec $FOAM -parallel > dambreakveryfine_scotch_arc_${CORES}cores_${SLURM_JOB_ID}.out

echo "Finished at $(date)"

echo "Running reconstructPar at $(date)."
reconstructPar -newTimes
echo "Finished reconstructPar at $(date)."
echo "Manually delete processor directories if reconstruction succeeded."
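
Once the job has completed and reconstructPar has succeeded, the per-process processor* directories can be deleted by hand from the case directory, as the script's final message suggests. A sketch of that manual cleanup:

# In the case directory, after confirming reconstructPar wrote the time directories:
ls -d [0-9]*        # reconstructed time directories (e.g. 0, 0.05, ...)
rm -rf processor*   # remove the per-rank decomposition directories to save space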



Support

Please send any questions regarding using OpenFOAM on ARC to support@hpc.ucalgary.ca.