ARC Storage Terms of Use
There are several options for disk storage on the ARC Cluster. Please review this section carefully to decide where to place your data. Contact systems staff at support@hpc.ucalgary.ca if you have any questions. As this is a limited resource, please use the space responsibly. Disk space on the ARC Cluster should not be used as archival storage, nor should it be used as a backup (2nd copy location) for other systems (e.g. desktops, laptops, etc.).
Summary of File Storage Options
| File System | Type | Snapshots | Backups | Quota |
|---|---|---|---|---|
| /home | NetApp FAS8200, NFS | 7 days | No | 500 GB/user |
| /scratch | NetApp FAS8200, NFS | None | No | 15 TB/user |
| /work | NetApp FAS8200, NFS | 7 days | No | By request/group |
| /bulk | NetApp FAS2720, NFS | 7 days | No | By request/group |
| /tmp | Local | None | No | 100 GB |
Currently, all users are responsible for managing their own backups. You can back up data to your personal UofC OneDrive for Business cloud storage; see https://rcs.ucalgary.ca/How_to_transfer_data#rclone:_rsync_for_cloud_storage. This allocation starts at 5 TB. Contact the support center for questions regarding OneDrive for Business.
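As a minimal sketch of such a backup with rclone (assuming you have already configured an rclone remote for your OneDrive for Business account, here named onedrive, as described on the transfer page linked above; the paths and remote name are placeholders):

```bash
# One-time setup: create a remote for OneDrive for Business
# (interactive; the remote name "onedrive" is just an example).
rclone config

# Copy a directory from ARC to OneDrive; rerunning the same command
# only transfers files that have changed since the last copy.
rclone copy ~/project/results onedrive:ARC-backup/results --progress
```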
Performance
| Bulk | Work |
|---|---|
|  |  |
/home
Each user has a home directory called /home/username. The /home file system has a quota of 500 GB per user, which cannot be increased, and seven days of snapshots.
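For a quick, generic check of your usage against the 500 GB quota, and (assuming the usual NetApp convention of a hidden .snapshot directory applies on ARC) a way to pull an older version of a file back out of a snapshot:

```bash
# Total size of your home directory (can be slow if it holds many small files).
du -sh ~

# NetApp NFS volumes commonly expose snapshots under a hidden .snapshot
# directory; if that applies here, list the available snapshots and copy
# an older version of a file back out ("<snapshot-name>" is a placeholder).
ls ~/.snapshot/
cp ~/.snapshot/<snapshot-name>/myfile.txt ~/myfile.txt.restored
```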
/scratch
/scratch is intended for temporary data used for the duration of a job.
Directories in /scratch are created when a user job starts. Each directory is named /scratch/JOBID, where JOBID is the job ID assigned by Slurm. Jobs are expected to clean up the contents of their scratch directory when they complete, as part of the Slurm batch job. Any files older than 10 days will be automatically deleted.
Each user can store up to a maximum of 15 TB in /scratch. However, due to the shared nature of /scratch, we cannot make any guarantees that the full 15 TB will be available at any given time.
If /scratch becomes more than 75% full, we reserve the right to delete the files as needed.
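As an illustrative sketch of this workflow (the resource requests, paths, and the program name my_program are placeholders, not ARC-specific recommendations), a batch job that stages data through its per-job scratch directory and cleans up before exiting might look like:

```bash
#!/bin/bash
#SBATCH --job-name=scratch-example
#SBATCH --time=01:00:00
#SBATCH --mem=4G

# Slurm sets SLURM_JOB_ID; the per-job scratch directory is /scratch/JOBID.
SCRATCH_DIR=/scratch/${SLURM_JOB_ID}

# Stage input into scratch, run the work there, and copy results
# back to permanent storage before the job finishes.
cp ~/project/input.dat "${SCRATCH_DIR}/"
cd "${SCRATCH_DIR}"
~/project/bin/my_program input.dat > output.dat   # placeholder program
cp output.dat ~/project/results/

# Clean up the scratch contents as part of the job itself.
rm -rf "${SCRATCH_DIR:?}"/*
```

Submit the script with sbatch; since Slurm creates /scratch/JOBID when the job starts, the directory already exists by the time the script runs.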
/work
/work is intended for projects whose data requirements exceed the storage allocation for /home. /work is requested by the Principal Investigator (PI) on behalf of the entire group. The request should be made as needed.
Please contact us at support@hpc.ucalgary.ca for further assistance.
7-day snapshots are available.
/bulk
/bulk is intended for large allocations with lower I/O performance needs, suited to streaming reads and writes. /bulk is requested by the Principal Investigator (PI) on behalf of the entire group. The request should be made as needed. Please contact us at support@hpc.ucalgary.ca for further assistance.
7-day snapshots are available.
A portion of the /bulk file system will be available to scientific instruments on campus on a request basis.
Please contact support@hpc.ucalgary.ca for further assistance.
Best Practices
| Bad | Good |
|---|---|
| Multiple copies of same data between individual researchers | Run a test job, and scale the data requirements appropriately |
| Using ARC storage for archiving | |
| Not ensuring your data and derived results are backed up outside of ARC storage | |
Please note that we reserve the right to perform emergency and critical maintenance at any time with minimal warning.