General Cluster Guidelines and Policies
Revision as of 19:04, 4 September 2020
General Rules
- Never run anything related to your research on the Login Node.
- Make sure that the resources you request for a job are actually used by the job.
When a job script requests resources from SLURM, those resources are reserved for the job and will not be allocated to other users. Jobs that do not make full use of their allocated resources reduce the overall cluster efficiency. It is essential that users make resource requests that match the requirements of their jobs so that resources are properly used.
Guidelines
Please review the guidelines set out below when using our cluster.
Login Node
The login node should be used only for:
- Data management: file management, compression / decompression, and, possibly, data transfer.
- Job management: job script creation / submission / monitoring.
- Software development: Source editing / compilation.
- Short data analysis computations that take 100% of 1 CPU for up to 15 minutes.
Everything else should be run on compute nodes either via the sbatch command or in an interactive job via the salloc command. These restrictions are in place to ensure that the login node remains available for other users and is not unnecessarily overburdened.
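As a sketch, a minimal batch job submitted with sbatch might look like the following; the job name, time limit, and resource values here are only illustrative, and `my_analysis` is a hypothetical program standing in for your own computation:

```shell
#!/bin/bash
# Illustrative SLURM job script -- adjust all values for your own work.
#SBATCH --job-name=my-analysis   # a name to identify the job in the queue
#SBATCH --time=02:00:00          # wall-time limit (2 hours)
#SBATCH --ntasks=1               # one task
#SBATCH --cpus-per-task=4        # 4 CPU cores for that task
#SBATCH --mem=16G                # 16 GB of memory for the job

# The commands below run on the allocated compute node, not the login node.
./my_analysis input.dat
```

Submit the script with `sbatch myscript.slurm` and monitor it with `squeue -u $USER`.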
Short jobs
Jobs on ARC should generally be at least 15 minutes long. Scheduling a node for a new job takes time, and if jobs are too short, the scheduling time becomes comparable to or longer than the actual run time of the job, which is very inefficient.
If you expect to run a large number of very short jobs, from seconds to several minutes long each, please pack several of those computations into one longer job that is 2-3 hours long. For example, if you have 200,000 computations that each run for 30 seconds, consider running 1,000 of them inside one job, which will result in 200 medium-length jobs.
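One simple way to pack short computations is a plain loop inside a single batch job. This sketch assumes a hypothetical `short_task` program that takes a numeric index as its input:

```shell
#!/bin/bash
# Illustrative packed job: runs 1000 ~30-second computations back to back.
#SBATCH --job-name=packed-tasks
#SBATCH --time=03:00:00          # one 2-3 hour job instead of 1000 tiny ones
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1

# short_task is a stand-in for your own short computation.
for i in $(seq 1 1000); do
    ./short_task "$i"
done
```

Combined with a SLURM job array (`#SBATCH --array=0-199`, each array task handling its own block of 1,000 indices), 200 such jobs would cover all 200,000 computations.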
Data Transfer Node
If the cluster has a Data Transfer Node (DTN), please use it rather than the login node to transfer files to/from the cluster.
Interactive Jobs
Interactive jobs can be started using the salloc command and are limited to a maximum of 5 hours.
The reasons for the time restriction on interactive jobs are:
- If an interactive job asks for more than 5 hours of run time, it is hardly interactive. Who can stare at the screen for more than 5 hours straight?
- Interactive jobs tend to be resource-wise wasteful as the job does not finish when the computation is done, but keeps running until it times out.
- The partition setup allows for much quicker resource allocation for jobs that are 5 hours or less, so it is significantly easier to get resources in the default partitions for shorter jobs.
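A typical interactive session can be started as follows; the resource values are only examples:

```shell
# Request an interactive allocation: 1 task, 4 cores, 8 GB, 2 hours.
salloc --ntasks=1 --cpus-per-task=4 --mem=8G --time=02:00:00

# ...work interactively on the allocated compute node...

# Type "exit" as soon as you are done, so the allocation is released
# immediately instead of sitting idle until it times out.
```

Releasing the allocation promptly avoids the waste described above, where an interactive job keeps holding resources after the computation is finished.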
Bigmem Partition
The bigmem partition can be used for general shorter jobs, but it is intended for computations that need lots of memory.
Please avoid running low memory computations on the bigmem partition.
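A job targeting the bigmem partition would add directives like these to its script; the memory figure is only an example, not a cluster-specific limit:

```shell
#SBATCH --partition=bigmem   # large-memory partition
#SBATCH --mem=512G           # request this partition only when you truly
                             # need this much memory
```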
gpu-v100 Partition
The gpu-v100 partition is strictly for computations that utilize GPUs.
Please do not run CPU-only computations on the gpu-v100 partition.
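A GPU job on the gpu-v100 partition would typically request its GPUs explicitly through SLURM's generic-resource syntax; the GPU count here is only an example:

```shell
#SBATCH --partition=gpu-v100   # GPU partition
#SBATCH --gres=gpu:1           # request one GPU; jobs on this partition
                               # must actually use the GPU(s) they request
```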