ARC Cluster Guide



Cybersecurity awareness at the U of C

Please note that there are typically about 950 phishing attempts targeting University of Calgary accounts each month. This is just a reminder to be careful about computer security issues, both at home and at the University. Please visit https://it.ucalgary.ca/it-security for more information, tips on secure computing, and how to report suspected security problems.


Need Help or have other ARC Related Questions?

For all general RCS related issues, questions, or comments, please contact us at support@hpc.ucalgary.ca.

This guide gives an overview of the Advanced Research Computing (ARC) cluster at the University of Calgary and is intended for new account holders getting started on ARC. It covers topics such as the hardware and performance characteristics, available software, usage policies, and how to log in and run jobs.

Introduction

The ARC compute cluster can be used for running large numbers (hundreds) of concurrent serial (one core) jobs, OpenMP or other thread-based jobs, shared-memory parallel code using up to 40 or 80 threads per job (depending on the partition), distributed-memory (MPI-based) parallel code using up to hundreds of cores, or jobs that take advantage of Graphics Processing Units (GPUs). This computational resource is available for research projects based at the University of Calgary and is meant to supplement the resources available to researchers through Compute Canada.

Historically, ARC has primarily been composed of older, disparate Linux-based clusters, such as Breezy, Lattice, and Parallel, that were formerly offered to researchers from across Canada. In addition, a large-memory compute node (Bigbyte) was salvaged from the now-retired local Storm cluster. In January 2019, a major addition of modern hardware was purchased for ARC. In 2020, compute clusters from CHGI were migrated into ARC.

How to Get Started

If you have a project you think would be appropriate for ARC, please write to support@hpc.ucalgary.ca and mention the intended research and software you plan to use.

Access to ARC will be granted to your University of Calgary IT account.

  • For users who do not have a University of Calgary IT account or email address, please register for one at https://itregport.ucalgary.ca/.
  • For users external to the University, such as collaborators on a research project based at the University of Calgary, please contact us and mention the project leader you are collaborating with.

Once your access to ARC has been granted, you will be able to immediately make use of the cluster by following the usage guide outlined below.

Hardware

Since the ARC cluster is a conglomeration of many different compute clusters, the hardware within ARC can vary widely in terms of performance and capabilities. To mitigate compatibility issues, we group similar hardware into its own Slurm partition so that your workload runs as consistently as possible within a single partition. Please carefully review the hardware specifications for each of the partitions below to avoid any surprises.

Partition Hardware Specs

When submitting jobs to ARC, you may specify a partition that your job will run on. Please choose a partition that is most appropriate for your work.

A few things to keep in mind when choosing a partition:

  • Specific workloads requiring special Intel Instruction Set Extensions may only work on newer Intel CPUs.
  • If working with multi-node parallel processing, ensure your software and libraries support the partition's interconnect networking.
  • While older partitions may be slower, they may be less busy and have little to no wait times.

If you are unsure which partition to use or need assistance on selecting an appropriate partition, please see the Selecting a Partition Section below.

{| class="wikitable"
! Partition !! Description !! Nodes !! CPU Cores, Model, and Year !! Memory !! GPU !! Network
|-
| - || ARC Login Node || 1 || 16 cores, 2x Intel(R) Xeon(R) CPU E5620 @ 2.40GHz (2010) || 48 GB || N/A || 40 Gbit/s InfiniBand
|-
| gpu-v100 || GPU Partition || 13 || 80 cores, 2x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (2019) || 754 GB || 2x Tesla V100-PCIE-16GB || 100 Gbit/s Omni-Path
|-
| cpu2019 || General Purpose Compute || 14 || 40 cores, 2x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (2019) || 190 GB || N/A || 100 Gbit/s Omni-Path
|-
| apophis || General Purpose Compute || 21 || 40 cores, 2x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (2019) || 190 GB || N/A || 100 Gbit/s Omni-Path
|-
| razi || General Purpose Compute || 41 || 40 cores, 2x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (2019) || 190 GB || N/A || 100 Gbit/s Omni-Path
|-
| bigmem || Big Memory Nodes || 2 || 80 cores, 4x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (2019) || 3022 GB || N/A || 100 Gbit/s Omni-Path
|-
| pawson || General Purpose Compute || 13 || 40 cores, 2x Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz (2019) || 190 GB || N/A || 100 Gbit/s Omni-Path
|-
| theia || Former Theia cluster || 20 || 56 cores, 2x Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz (2012) || 188 GB || N/A || 40 Gbit/s InfiniBand
|-
| cpu2013 || Former Hyperion cluster || 12 || 32 cores, 2x Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz (2012) || 126 GB || N/A || 40 Gbit/s InfiniBand
|-
| lattice || Former Lattice cluster || 307 || 8 cores, 2x Intel(R) Xeon(R) CPU L5520 @ 2.27GHz (2011) || 12 GB || N/A || 40 Gbit/s InfiniBand
|-
| single || Former Lattice cluster || 168 || 8 cores, 2x Intel(R) Xeon(R) CPU L5520 @ 2.27GHz (2011) || 12 GB || N/A || 40 Gbit/s InfiniBand
|-
| parallel || Former Parallel Cluster || 576 || 12 cores, 2x Intel(R) Xeon(R) CPU E5649 @ 2.53GHz (2011) || 24 GB || N/A || 40 Gbit/s InfiniBand
|}

ARC Cluster Storage


No Backup Policy!

You are responsible for your own backups. Many researchers will have accounts with Compute Canada and may choose to back up their data there (the Project file system accessible through the Cedar cluster would often be used). Please contact us at support@hpc.ucalgary.ca if you want more information about this option.

The ARC cluster has around 2 petabytes of shared disk storage available across the entire cluster, as well as temporary storage local to each of the compute nodes. Please refer to the individual sections below for capacity limits and usage policies.

{| class="wikitable"
! File system !! Description !! Capacity
|-
| /home || User home directories || 500 GB (per user)
|-
| /work || Research project storage || Up to 100's of TB
|-
| /scratch || Scratch space for temporary files || Up to 30 TB
|-
| /tmp || Temporary space local to each compute node || Dependent on nodes
|}

/home: Home file system

Each user has a directory under /home, which is the default working directory when logging in to ARC. Each home directory has a per-user quota of 500 GB. This limit is fixed and cannot be increased. Researchers requiring additional storage exceeding what is available in their home directory may use /work and /scratch.
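
To see roughly how much of your home quota you are using, one generic approach is to check the size of your home directory with du (ARC may also provide dedicated quota-reporting tools; this is simply a standard command that works on any Linux system and may take a while on large directories):

du -sh $HOME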

Note on file sharing: Due to security concerns, permissions set using chmod on your home directory to allow other users to read or write to it will be automatically reverted by an automated system process unless an explicit exception is made. If you need to share files with other researchers on the ARC cluster, please write to support@hpc.ucalgary.ca to ask for such an exception.

/scratch: Scratch file system for large job-oriented storage

Associated with each job, under the /scratch directory, a subdirectory is created that can be referenced in job scripts as /scratch/${SLURM_JOB_ID}. You can use that directory for temporary files needed during the course of a job. Up to 30 TB of storage may be used, per user (total for all your jobs) in the /scratch file system. Deletion policy: data in /scratch associated with a given job will be deleted automatically, without exception, five days after the job finishes.
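
As a minimal sketch, a job script might stage its temporary files in the per-job scratch directory and copy any results back before the job ends (the program name, options and file names below are placeholders to adapt to your own work):

#!/bin/bash
#SBATCH --partition=cpu2019
#SBATCH --time=02:00:00
#SBATCH --ntasks=1
#SBATCH --mem=4000

# Per-job scratch directory provided by the scheduler
SCRATCH_DIR=/scratch/${SLURM_JOB_ID}

# Hypothetical program writing its temporary and output files to scratch
./my_program --tmpdir ${SCRATCH_DIR} --output ${SCRATCH_DIR}/result.dat

# Copy results back to the submission directory; scratch data is deleted five days after the job finishes
cp ${SCRATCH_DIR}/result.dat ${SLURM_SUBMIT_DIR}/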

/work: Work file system for larger projects

If you need more space than provided in /home and the /scratch job-oriented space is not appropriate for your case, please write to support@hpc.ucalgary.ca with an explanation, including an indication of how much storage you expect to need and for how long. If approved, you will then be assigned a directory under /work with an appropriately large quota.

Software

Look for installed software under /global/software and through the module avail command (described below). Links to documentation for some of the installed software are highlighted on a separate Wiki page.

The setup of the environment for using some of the installed software is through the module command. An overview of modules on WestGrid (external link) is largely applicable to ARC.

To list available modules, type:

module avail

So, for example, to load a module for Python use:

module load python/anaconda-3.6-5.1.0

and to remove it use:

module remove python/anaconda-3.6-5.1.0

To see currently loaded modules, type:

module list

Unlike some clusters, there are no modules loaded by default. So, for example, to use the Intel compilers or Open MPI for parallel programming, you must first load an appropriate module.
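
For example, to compile an MPI program you would first load an Open MPI module; the module name below is one of those referenced in the partition notes later in this guide, and mpicc is the standard Open MPI compiler wrapper (check module avail for the modules currently installed):

module load openmpi/2.1.3-opa
mpicc -o my_mpi_program my_mpi_program.c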

Write to support@hpc.ucalgary.ca if you need additional software installed.

Using ARC

Logging in

To log in to ARC, connect using SSH to arc.ucalgary.ca. Connections to ARC are accepted only from the University of Calgary network (on campus) or through the University of Calgary General VPN (off campus).
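
For example, from a terminal on a machine on the campus network or connected to the VPN (replace username with your University of Calgary IT account name):

ssh username@arc.ucalgary.ca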

See Connecting to RCS HPC Systems for more information.

Storage

Please review the Storage section above for important policies and advice regarding file storage and file sharing.

Interactive Jobs

The ARC login node may be used for such tasks as editing files, compiling programs and running short tests while developing programs. We suggest that CPU-intensive workloads on the login node be restricted to under 15 minutes, as per our cluster guidelines. For interactive workloads exceeding 15 minutes, use the salloc command to allocate an interactive session on a compute node.

The default salloc allocation is 1 CPU and 1 GB of memory. Adjust this by specifying -n CPU# and --mem Megabytes. You may request up to 5 hours of CPU time for interactive jobs.

salloc --time 5:00:00 --partition cpu2019 
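
For example, to request an interactive session with 4 CPUs and 8000 MB of memory for two hours on the cpu2019 partition (the values shown are only illustrative):

salloc --time 2:00:00 --partition cpu2019 -n 4 --mem 8000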


Running non-interactive jobs (batch processing)

Production runs and longer test runs should be submitted as (non-interactive) batch jobs, in which the commands to be executed are listed in a script (text file). Batch job scripts are submitted using the sbatch command, part of the Slurm job management and scheduling software. #SBATCH directive lines at the beginning of the script are used to specify the resources needed for the job (cores, memory, run time limit and any specialized hardware needed).
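
As a minimal sketch, a batch job script is a plain text file such as the following (the job name, resource values and program are placeholders to adapt to your own work):

#!/bin/bash
#SBATCH --job-name=my_job
#SBATCH --partition=cpu2019
#SBATCH --time=01:00:00
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=1000

# Commands to run go below the #SBATCH directives
./my_program

It would then be submitted with:

sbatch my_job_script.sh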

Most of the information on the Running Jobs (external link) page on the Compute Canada web site is also relevant for submitting and managing batch jobs and reserving processors for interactive work on ARC. One major difference between running jobs on the ARC and Compute Canada clusters is in selecting the type of hardware that should be used for a job. On ARC, you choose the hardware to use primarily by specifying a partition, as described below.

Selecting a partition

The type of computer on which a job can or should be run is determined by characteristics of your software, such as whether it supports parallel processing, and by simulation- or data-dependent factors such as the amount of memory required. If the program you are running uses MPI (Message Passing Interface) for parallel processing, which allows the memory usage to be distributed across multiple compute nodes, then the memory required per MPI process is an important factor. If you are running a serial code (that is, one that cannot use multiple CPU cores), or one that is parallelized with OpenMP or other thread-based techniques that restrict it to running on a single compute node, then the total memory required is the main factor to consider. If your program can make use of graphics processing units, then that will be the determining factor. If you have questions about which ARC hardware to use, please write to support@hpc.ucalgary.ca and we would be happy to discuss this with you.
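
As an illustration of this difference (the numbers here are arbitrary), an MPI job that spreads its memory across many processes might specify the memory per process:

#SBATCH --ntasks=80
#SBATCH --mem-per-cpu=2000

whereas a thread-based (e.g. OpenMP) job confined to one node specifies the total memory for that node:

#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --mem=16000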

Once you have decided what type of hardware best suits your calculations, you can select it on a job-by-job basis by including the partition keyword in an #SBATCH directive in your batch job. The tables below summarize the characteristics of the various partitions.

If you omit the partition specification, the system will try to assign your job to appropriate hardware based on other aspects of your request, but for more control you can specify one or more partitions yourself. You are allowed to specify a comma-separated list of partitions.

In some cases, you really should specify the partition explicitly. For example, if you are running single-node jobs with thread-based parallel processing and requesting 8 cores, you could use:

#SBATCH --mem=0 
#SBATCH --nodes=1 
#SBATCH --ntasks=1 
#SBATCH --cpus-per-task=8 
#SBATCH --partition=single,lattice

Since the single and lattice partitions both have the same type of hardware, it is appropriate to list them both. Specifying --mem=0 allows you to use all the available memory (12000 MB) on the compute node assigned to the job. Since the compute nodes in those partitions have 8 cores each and you will be using them all, you need not be concerned about other users' jobs sharing the memory with your job. However, if you didn't explicitly specify the partition in such a case, the system would try to assign your job to the cpu2019 or similar partition. Those nodes have 40 cores and much more memory than the single and lattice partitions. If you specified --mem=0 in such a case, you would be wasting 32 cores of processing. So, if you don't specify a partition yourself, you have to give greater thought to the memory specification to make sure that the scheduler will not assign your job more resources than are needed.

As time limits may be changed by administrators to adjust to maintenance schedules or system load, the values given in the tables are not definitive. See the Time limits section below for commands you can use on ARC itself to determine current limits.

Parameters such as --ntasks-per-node, --cpus-per-task, --mem and --mem-per-cpu also have to be adjusted according to the capabilities of the hardware. The product of --ntasks-per-node and --cpus-per-task should be less than or equal to the number given in the "Cores/node" column. The --mem parameter (or the product of --mem-per-cpu and --cpus-per-task) should be less than the "Memory limit" shown. If using whole nodes, you can specify --mem=0 to request the maximum amount of memory per node.
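
For example, on the parallel partition (12 cores per node and a 23000 MB memory limit, per the legacy hardware table below), a two-node MPI job could stay within those limits with directives such as the following (the values are only illustrative):

#SBATCH --partition=parallel
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=12
#SBATCH --cpus-per-task=1
#SBATCH --mem=22000

Here 12 tasks of 1 CPU each per node matches the 12 cores per node, and 22000 MB is below the 23000 MB memory limit.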

Partitions for modern hardware

Note: MPI codes using this hardware should be compiled with Omni-Path networking support. This is provided by loading the openmpi/2.1.3-opa or openmpi/3.1.2-opa module prior to compiling.

{| class="wikitable"
! Partition !! Cores/node !! Memory limit (MB) !! Time limit (h) !! GPUs/node
|-
| cpu2019 || 40 || 185000 || 168 ||
|-
| apophis† || 40 || 185000 || 168 ||
|-
| apophis-bf† || 40 || 185000 || 5 ||
|-
| razi† || 40 || 185000 || 168 ||
|-
| razi-bf† || 40 || 185000 || 5 ||
|-
| bigmem || 80 || 3000000 || 24 ||
|-
| gpu-v100 || 40 || 753000 || 24 || 2
|}

† The apophis and razi partitions contain hardware contributed to ARC by particular researchers. They should be used only by members of those researchers' groups. However, they have generously allowed their compute nodes to be shared with others outside their research groups for relatively short jobs by specifying the apophis-bf and razi-bf partitions. (In some cases in which a partition is not explicitly specified, these "back-fill" partitions may be automatically selected by the system).

Partitions for legacy hardware

{| class="wikitable"
! Partition !! Cores/node !! Memory limit (MB) !! Time limit (h) !! GPUs/node
|-
| cpu2013 || 16 || 120000 || 168 ||
|-
| lattice || 8 || 12000 || 168 ||
|-
| parallel || 12 || 23000 || 168 ||
|-
| breezy‡ || 24 || 255000 || 72 ||
|-
| bigbyte‡ || 32 || 1000000 || 24 ||
|-
| single || 8 || 12000 || 168 ||
|-
| gpu || 12 || 23000 || 72 || 3
|}

‡ Update 2019-11-27 - the breezy and bigbyte partition nodes are being repurposed as a cluster to support teaching and learning activities and are no longer available as part of ARC.

Examples

Here are some examples of specifying the various partitions.

As mentioned in the Hardware section above, the ARC cluster was expanded in January 2019. To select the 40-core general purpose nodes, specify:

#SBATCH --partition=cpu2019

To run on the Tesla V100 GPU-enabled nodes, use the gpu-v100 partition. You will also need to include an SBATCH directive in the form --gres=gpu:n to specify the number of GPUs, n, that you need. For example, if the software you are running can make use of both GPUs on a gpu-v100 partition compute node, use:

#SBATCH --partition=gpu-v100 --gres=gpu:2

For very large memory jobs (more than 185000 MB), specify the bigmem partition:

#SBATCH --partition=bigmem

If the more modern computers are too busy or you have a job well-suited to run on the compute nodes described in the legacy hardware section above, choose the cpu2013, Lattice or Parallel compute nodes (without graphics processing units) by specifying the corresponding partition keyword:

#SBATCH --partition=cpu2013
#SBATCH --partition=lattice

or

#SBATCH --partition=parallel

There is an additional partition called single that provides nodes similar to the lattice partition but is intended for single-node jobs. Select the single partition with

#SBATCH --partition=single

For single-node jobs requiring more memory or processors than available through the breezy or single partitions, use the bigbyte partition:

#SBATCH --partition=bigbyte

To select the nodes that have GPUs, specify the gpu partition. Use an SBATCH directive in the form --gres=gpu:n to specify the number of GPUs, n, that you need. For example, if the software you are running can make use of all three GPUs on a compute node, use:

#SBATCH --partition=gpu --gres=gpu:3

Time limits

Use a directive of the form

#SBATCH --time=hh:mm:ss

to tell the job scheduler the maximum time that your job might run. You can use the command

scontrol show partitions

to see the current configuration of the partitions including the maximum time limit you can specify for each partition, as given by the MaxTime field. Alternatively, see the TIMELIMIT column in the output from

sinfo

Hardware resource and job policy limits

There are limits on the number of cores, nodes and/or GPUs that one can use on ARC at any given time. There is also a limit on the number of jobs that a user can have pending or running at a given time (the MaxSubmitJobs parameter in the command below). The limits are generally applied on a partition-by-partition basis, so using resources in one partition should not affect the amount you can use in a different partition. To see the current limits you can run the command:

sacctmgr show qos format=Name,MaxWall,MaxTRESPU%20,MaxSubmitJobs

Support

Please send ARC-related questions to support@hpc.ucalgary.ca.