General Cluster Guidelines and Policies

Revision as of 20:06, 29 July 2020
General Rules
- Never run anything related to your research on the Login Node.
- Make sure the resources you request for a job are actually used by the job.
When a job script requests resources from SLURM, those resources are reserved for the job and cannot be allocated to other users. Jobs that do not make full use of their allocated resources reduce overall cluster efficiency. It is essential that resource requests match the actual requirements of the job so that resources are used effectively.
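As a sketch of a well-matched request (the script name, memory size, and executable below are illustrative, not site-specific), a job that runs a 4-thread program should request exactly 4 CPUs:

```shell
#!/bin/bash
#SBATCH --job-name=matched-request   # hypothetical job name
#SBATCH --cpus-per-task=4            # request exactly the 4 CPUs the program uses
#SBATCH --mem=8G                     # request only the memory the job needs
#SBATCH --time=02:00:00              # a realistic walltime estimate

# Match the thread count to the allocation so every requested CPU is used
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./my_program                         # hypothetical executable
```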
Guidelines
Please review the guidelines set out below when using our cluster.
Login Node
The login node should be used only for:
- Data management, that is file management, compression / decompression, and, possibly, data transfer.
- Job management: job script creation / submission / monitoring.
- Software development: Source editing / compilation.
- Short data analysis computations that take 100% of 1 CPU for up to 15 minutes.
Everything else should be run on compute nodes either via the sbatch command or in an interactive job via the salloc command. These restrictions are in place to ensure that the login node remains available for other users and is not unnecessarily overburdened.
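For example, instead of running an analysis directly on the login node, submit it to the scheduler (the script name below is hypothetical):

```shell
# Submit the job script; the work runs on a compute node, not the login node
sbatch analysis.sh

# Monitor your queued and running jobs
squeue -u $USER
```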
Data Transfer Node
If the cluster has a Data Transfer Node (DTN), please use it rather than the login node to transfer files to/from the cluster.
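A transfer via the DTN might look like the following; the hostname and paths are placeholders, so substitute your cluster's actual DTN address:

```shell
# Copy results from the cluster through the DTN rather than the login node.
# "dtn.cluster.example" is a placeholder hostname, not a real address.
rsync -av --progress myuser@dtn.cluster.example:/home/myuser/results/ ./results/
```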
Interactive Jobs
Interactive jobs can be started using the salloc command and are limited to a maximum of 5 hours.
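A typical interactive session, staying within the 5-hour limit, might look like this (the resource values are illustrative):

```shell
# Request an interactive allocation: 1 task, 4 CPUs, 8 GB memory, 3 hours
salloc --ntasks=1 --cpus-per-task=4 --mem=8G --time=03:00:00

# ...work interactively on the allocated compute node...

# Release the allocation as soon as you are done,
# rather than letting it idle until it times out
exit
```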
The reasons for the time restriction on interactive jobs are:
- An interactive job that asks for more than 5 hours of run time is hardly interactive. Who can stare at the screen for more than 5 hours straight?
- Interactive jobs tend to waste resources, as the job does not finish when the computation is done but keeps running until it times out.
- The partition setup allows for much quicker resource allocation for jobs that are 5 hours or less, so it is significantly easier to get resources in the default partitions for shorter jobs.
Bigmem Partition
The bigmem partition is intended for computations that need large amounts of memory; it can also be used for general shorter jobs.
Please avoid running low memory computations on the bigmem partition.
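A large-memory job script might look like the sketch below; the memory amount, CPU count, and executable name are illustrative assumptions, not site policy:

```shell
#!/bin/bash
#SBATCH --partition=bigmem     # partition name taken from this page
#SBATCH --mem=500G             # illustrative large-memory request
#SBATCH --cpus-per-task=8
#SBATCH --time=12:00:00

./large_memory_job             # hypothetical executable
```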
gpu-v100 Partition
The gpu-v100 partition is strictly for computations that utilize GPUs.
Please do not run CPU-only computations on the gpu-v100 partition.
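A GPU job script might be sketched as follows; the exact `--gres` syntax and GPU type string vary by site, so check your cluster's documentation:

```shell
#!/bin/bash
#SBATCH --partition=gpu-v100   # partition name taken from this page
#SBATCH --gres=gpu:1           # request one GPU (type string may vary by site)
#SBATCH --cpus-per-task=4
#SBATCH --time=04:00:00

./gpu_job                      # hypothetical GPU-enabled executable
```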