ARC Storage Terms of Use
Revision as of 17:26, 2 December 2020
There are several options for disk storage on the ARC Cluster. Please review this section carefully to decide where to place your data. Contact systems staff at support@hpc.ucalgary.ca if you have any questions. As this is a limited resource, please use the space responsibly. Disk space on the ARC Cluster should not be used as archival storage, nor should it be used as a backup (2nd copy location) for other systems (e.g. desktops, laptops, etc.).
Summary of File Storage Options
| File System | Type | Snapshots | Backups | Quota |
|---|---|---|---|---|
| /home | NetApp FAS8200, NFS | 7 days | No | 500 GB/user |
| /scratch | NetApp FAS8200, NFS | None | No | 15 TB/user |
| /work | NetApp FAS8200, NFS | 7 days | No | By request/group |
| /bulk | NetApp FAS2720, NFS | 7 days | No | By request/group |
| /tmp | Local | None | No | 100 GB |
Currently all users are responsible for managing their own backups.
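Since backups are each user's responsibility, one minimal approach is a dated archive that you then copy off the cluster yourself. This is a generic sketch, not an ARC-specific tool; the project directory name and the copy destination are hypothetical placeholders:

```shell
# Sketch: a dated tar archive of a project directory, to be copied off-cluster.
# "myproject" and the scp destination below are hypothetical placeholders.
PROJECT_NAME="myproject"
mkdir -p "$HOME/$PROJECT_NAME"                  # ensure the demo directory exists
ARCHIVE="$PROJECT_NAME-$(date +%Y%m%d).tar.gz"
tar -czf "$ARCHIVE" -C "$HOME" "$PROJECT_NAME"  # archive paths relative to /home
# Copy the archive to another system you control (placeholder destination):
# scp "$ARCHIVE" user@backup-host:/backups/
```

Run periodically (or from cron on your own machine, pulling via scp/rsync) so a second copy always exists off the cluster.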
Performance
| Metric | Bulk | Work |
|---|---|---|
| Single node streaming R/W | 540/730 MB/s | 620/750 MB/s |
| Aggregate (multi-node) R/W | 4140/4400 MB/s | 6131/5616 MB/s |
| Untar Linux kernel (~39000 files) | 3:09 (m:ss) | 2:30 (m:ss) |
/home
Each user has a home directory called /home/username. The /home file system has a quota of 500 GB per user which cannot be increased, and seven days of snapshots.
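To see how close you are to the 500 GB limit, a generic du invocation works on most NFS home directories (a sketch only; ARC may provide its own quota-reporting command, so check with support for exact figures):

```shell
# Summarize total home-directory usage in human-readable form.
du -sh "$HOME"

# List the largest top-level items to see where the space is going.
du -sh "$HOME"/* 2>/dev/null | sort -rh | head -n 10
```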
/scratch
/scratch is intended for temporary data for the duration of the job. Directories in /scratch are created when a user job starts. The naming of the directory is /scratch/JOBID, where JOBID is the job id that is assigned by Slurm. It is expected that jobs clean up the contents of their scratch directory when the job completes, as part of the Slurm batch job. Any files older than 10 days will be automatically deleted. Each user can store up to a maximum of 15 TB in /scratch. However, due to the shared nature of /scratch, we cannot make any guarantees that the full 15 TB will be available at any given time. If /scratch becomes more than 75% full, we reserve the right to delete the files as needed.
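A batch job can stage data into its per-job scratch directory and remove the contents on the way out, along these lines (a sketch: the input file, program name, and resource requests are hypothetical placeholders; /scratch/$SLURM_JOB_ID is created automatically per the policy above):

```shell
#!/bin/bash
#SBATCH --job-name=scratch-demo
#SBATCH --time=01:00:00
#SBATCH --mem=4G

# Slurm exports SLURM_JOB_ID; the matching /scratch directory is created
# when the job starts.
SCRATCH_DIR="/scratch/$SLURM_JOB_ID"

# Stage input from /home, run the (hypothetical) program in scratch,
# then copy only the results worth keeping back to /home.
cp "$HOME/input.dat" "$SCRATCH_DIR/"
cd "$SCRATCH_DIR"
"$HOME/bin/my_analysis" input.dat > results.out   # placeholder command
cp results.out "$HOME/"

# Clean up scratch contents before the job exits, as the policy requests.
rm -rf "${SCRATCH_DIR:?}"/*
```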
/work
/work is intended for projects whose data requirements exceed the storage allocation for /home. /work is requested by the Principal Investigator (PI) on behalf of the entire group; requests should be made as needed. Please contact us at support@hpc.ucalgary.ca for further assistance. Seven days of snapshots are available.
/bulk
/bulk is intended for large allocations with lower I/O performance needs, primarily streaming reads and writes. /bulk is requested by the Principal Investigator (PI) on behalf of the entire group; requests should be made as needed. Seven days of snapshots are available. A portion of the /bulk file system is available to scientific instruments on campus, on a request basis. Please contact support@hpc.ucalgary.ca for further assistance.
Best Practices
| Bad | Good |
|---|---|
Please note that we reserve the right to perform emergency and critical maintenance at any time, with minimal warning.