CESM
General
- National Center for Atmospheric Research (NCAR):
- Project site: https://ncar.ucar.edu/what-we-offer/models/community-earth-system-model-cesm
- QuickStart Manual for CESM2.1: https://escomp.github.io/CESM/release-cesm2/index.html
The Community Earth System Model is a fully-coupled global climate model developed in collaboration with colleagues in the research community. CESM provides state-of-the-art computer simulations of Earth's past, present, and future climate states.
CESM2 is built on the CIME framework.
The majority of the CESM2 User’s Guide is contained in the CIME documentation.
- CIME: https://github.com/ESMCI/cime
- CIME Manual: https://esmci.github.io/cime/versions/master/html/index.html
The Common Infrastructure for Modeling the Earth (CIME - pronounced “SEAM”) provides a Case Control System for configuring,
compiling and executing Earth system models, data and stub model components, a driver and associated tools and libraries.
CESM on ARC
Currently, two versions of CESM are installed on ARC, but only one of them works and is supported. The supported version is 2.1.3.
CESM is installed and set up to be used via environment modules, using the module command.
$ module avail cesm
------------------- /global/software/Modules/4.6.0/modulefiles -------------------
cesm/2.1.1  cesm/2.1.3
To activate it, please load its module:
$ module load cesm/2.1.3
Loading cesm/2.1.3
  Loading requirement: gcc/9.4.0 cmake/3.17.3 git/2.25.0 svn/1.10.6 openmpi/4.1.1-gnu lib/openblas/0.3.13-gnu
This installation of CESM comes with its own dedicated installs of Python and Perl. To verify that the software has been properly activated, you can check the locations of some of the commands provided by the installation:
$ which python
alias python='python3'
        /global/software/cesm/python/3.10.4/bin/python3
$ which perl
/global/software/cesm/perl/5.34.1/bin/perl
$ which create_newcase
/global/software/cesm/cesm-2.1.3/cime/scripts/create_newcase
If you have any other software modules loaded on ARC, they may interfere with CESM. Please avoid loading too many modules at the same time.
There is a shared data directory for CESM data sets, pointed to by the DIN_LOC_ROOT environment variable. Sharing this storage directory should reduce the amount of data that needs to be downloaded, as well as save storage space in users' home directories.
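As a quick sanity check, you can inspect where this variable points. The snippet below is a minimal sketch: it assumes the cesm module exports DIN_LOC_ROOT into your shell environment, and that you already have a case directory (the path used here is hypothetical); xmlquery is CIME's standard tool for reading case settings.

$ echo $DIN_LOC_ROOT

# From inside an existing case directory (hypothetical path):
$ cd $HOME/cesm_cases/mycase
$ ./xmlquery DIN_LOC_ROOT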
Using CESM on ARC
Machines and Queues
The ARC cluster uses the SLURM scheduling system to manage and control jobs. SLURM assumes that there is one main queue for jobs that need to be executed, and that the cluster consists of several partitions. The partitions are collections of compute nodes that are grouped based on some common property. On ARC, most partitions are grouped based on hardware similarity, scheduling limits, and ownership. CESM has its own model of a compute cluster, which is based on multiple queues and machine types.
In practice, CESM on ARC is set up to use the arc40, arc48, and arc52 machine types, whose compute nodes have 40, 48, and 52 CPU cores per node, respectively. However, each of these machine types can be used in several SLURM partitions. These partitions contain machines of the same kind, but their run time limits differ. The CESM model treats these SLURM partitions as queues. To create a new case with CESM, therefore, the machine type as well as the target queue have to be indicated, as shown in the sketch below.
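For example, a case targeting the arc40 machine type and the cpu2019 queue could be created as follows. This is a minimal sketch: the case name and path are hypothetical, the compset X and resolution f19_g17 are only illustrative choices, and it assumes the --queue option of create_newcase; the queue can also be changed later with xmlchange.

$ create_newcase --case $HOME/cesm_cases/test_arc40 --compset X --res f19_g17 \
      --machine arc40 --queue cpu2019

# Alternatively, change the queue of an existing case:
$ cd $HOME/cesm_cases/test_arc40
$ ./xmlchange JOB_QUEUE=cpu2019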
Queues of the arc40 machine type:
Queue          #Nodes   #CPUs   Max#nodes   MaxRuntime   Comment
name           total    /node   /user       hours
----------------------------------------------------------------------------
cpu2019          40       40        6          168
cpu2019-bf05     87       40       20            5       default
Queues of the arc48 machine type:
Queue          #Nodes   #CPUs   Max#nodes   MaxRuntime   Comment
name           total    /node   /user       hours
----------------------------------------------------------------------------
cpu2021          34       48       12          168       default
cpu2021-bf24      7       48        4           24
Queues of the arc52 machine type:
Queue          #Nodes   #CPUs   Max#nodes   MaxRuntime   Comment
name           total    /node   /user       hours
----------------------------------------------------------------------------
cpu2022          52       52       10          168       default
cpu2022-bf24     16       52        4           24
Please note that this information may change, as the cluster constantly evolves: new hardware is added and old hardware is removed.
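Putting the pieces together, a typical end-to-end session might look like the sketch below. It assumes the module environment described above; the case name and paths are hypothetical, and the compset X with resolution f19_g17 is only an illustrative choice. case.setup, case.build, and case.submit are the standard CIME Case Control System commands.

$ module load cesm/2.1.3

# Create a case on the arc48 machine type, targeting the default cpu2021 queue:
$ create_newcase --case $HOME/cesm_cases/demo --compset X --res f19_g17 \
      --machine arc48 --queue cpu2021

$ cd $HOME/cesm_cases/demo

# Configure, compile, and submit the case to SLURM:
$ ./case.setup
$ ./case.build
$ ./case.submit

# Verify which queue and wallclock limit the run job will use:
$ ./xmlquery JOB_QUEUE,JOB_WALLCLOCK_TIME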