How to use ARC scratch space

From RCSWiki

Revision as of 17:40, 19 April 2023

Background


The scratch space provided on ARC is designed to hold large temporary files generated by a job during its run time. The scratch space is created by SLURM when the job starts, on the /scratch file system, as a directory named /scratch/<job ID>. For example, if the job ID is 1234567, the directory name will be /scratch/1234567.

The quota (storage limit) for the /scratch file system is, at the time of writing, set to 15TB and 1 million files per user. The current storage usage on ARC can be checked at any time with the arc.quota command.


If the scratch directory is empty at the end of the job, it is deleted automatically upon the job's completion. If, however, the scratch directory is not empty when the job finishes, the directory is not deleted immediately; instead, it is kept for another 5 days, starting from the time of the job's end, to let the user move the data to a proper storage location. After 5 days the scratch directory is automatically deleted even if it still contains data.


Before the job starts, its job ID is not yet known, and the scratch directory does not yet exist. Thus, it is impossible to stage any data into that directory before the job starts. Any data staging, if required, has to be done during the job's run time, in the job script. While a job is running, the SLURM environment variable $SLURM_JOBID is set to the job's ID. Hence, the scratch directory can be referenced as /scratch/$SLURM_JOBID.
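For example, a job script can build the scratch path from that variable. A minimal sketch (the fallback job ID 1234567 is hypothetical, used only so the snippet illustrates the path outside a running SLURM job):

```shell
#!/bin/bash
# Build the per-job scratch path from the SLURM-provided job ID.
# Inside a running job, $SLURM_JOBID is set automatically by SLURM;
# the fallback value 1234567 is only for illustration outside SLURM.
SCRATCH_DIR="/scratch/${SLURM_JOBID:-1234567}"
echo "Per-job scratch directory: ${SCRATCH_DIR}"
```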

Examples

Large data set

Let us assume that one has a large data set compressed into an archive data.zip in one's home directory. The data set has to be processed by a Python 3 script, analysis.py, located in the bin directory in the home. The data set is too large to uncompress into the home directory, so the scratch space has to be used to process the data. The result of the analysis is saved by the Python script to results.log in the working directory.

Then it can be done like this:

  • The job starts and changes its working directory to the scratch space;
  • the data file is decompressed from the user's home directly in the scratch space;
  • the data is processed by the analysis.py script in the scratch space;
  • once the analysis is done, the output results.log is copied back to the directory the job started from;
  • the scratch space is cleaned up by deleting the data and the output from the scratch directory;
  • the job ends and the (now empty) scratch directory is automatically deleted.
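The steps above can be sketched as a job script. This is only a sketch under the example's assumptions: data.zip and bin/analysis.py come from the example, and the #SBATCH resource requests are hypothetical placeholders to adjust for the actual analysis.

```shell
#!/bin/bash
#SBATCH --time=02:00:00
#SBATCH --mem=4GB
# Hypothetical resource requests; adjust to the actual analysis.

# Remember where the job started, then work in the per-job scratch space.
SUBMIT_DIR="$PWD"
cd "/scratch/$SLURM_JOBID" || exit 1

# Decompress the archive from home directly into the scratch space.
unzip -q "$HOME/data.zip"

# Process the data with the analysis script from $HOME/bin.
python3 "$HOME/bin/analysis.py"

# Copy the result back to the directory the job was started from.
cp results.log "$SUBMIT_DIR/"

# Clean up, so the empty scratch directory is removed automatically.
rm -rf ./*
```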

Links

How-Tos