<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://rcs.ucalgary.ca/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Fridman</id>
	<title>RCSWiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://rcs.ucalgary.ca/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Fridman"/>
	<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/Special:Contributions/Fridman"/>
	<updated>2026-04-29T02:52:48Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.3</generator>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=High-Performance_Computing_Courses_at_RCS&amp;diff=2976</id>
		<title>High-Performance Computing Courses at RCS</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=High-Performance_Computing_Courses_at_RCS&amp;diff=2976"/>
		<updated>2023-11-17T23:02:13Z</updated>

		<summary type="html">&lt;p&gt;Fridman: /* Past courses offered by the Research Computing Services Team */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;If you are a University of Calgary researcher intending to leverage the HPC infrastructure for your research and seeking customized sessions, please contact us at support@hpc.ucalgary.ca for further discussion.&lt;br /&gt;
&lt;br /&gt;
To customize our courses according to your requirements, please provide the following details at least four weeks in advance:&lt;br /&gt;
&lt;br /&gt;
# Specify the number of researchers to undergo training (minimum of 5).&lt;br /&gt;
# Share the domain of your research.&lt;br /&gt;
# Let us know if you have a particular workflow or application in mind.&lt;br /&gt;
# Inform us about the Linux background of the researchers; this information aids us in designing the course effectively.&lt;br /&gt;
&lt;br /&gt;
== Past courses offered by the Research Computing Services Team ==&lt;br /&gt;
&lt;br /&gt;
=== Year 2023 ===&lt;br /&gt;
&lt;br /&gt;
* May 01, 2023 - Grad Success Week (GSW) 2023 (invited by: Paul Pappin)&lt;br /&gt;
:   &#039;&#039;Data Analysis in R and Accessing Advanced Research Computing&#039;&#039; &lt;br /&gt;
* Jan 04, 2023 - Session for undergraduate bioinformatics students (invited by: David Anderson)&lt;br /&gt;
:   &#039;&#039;Introduction to High-Performance Computing Infrastructure at the University of Calgary&#039;&#039;&lt;br /&gt;
* Oct 10, 2023 - ENSF 619&lt;br /&gt;
&lt;br /&gt;
=== Year 2022 ===&lt;br /&gt;
* May 05, 2022 - Grad Success Week (GSW) 2022 (invited by: Paul Pappin)&lt;br /&gt;
:   &#039;&#039;Data Analysis in R and Accessing Advanced Research Computing&#039;&#039;&lt;br /&gt;
* Jan 04, 2022 - Block week 2022 (invited by: David Anderson)&lt;br /&gt;
:   &#039;&#039;Introduction to High-Performance Computing Infrastructure at the University of Calgary&#039;&#039;&lt;br /&gt;
* Oct 11, 2022 - ENSF 619&lt;br /&gt;
&lt;br /&gt;
=== Year 2021 ===&lt;br /&gt;
* May 05, 2021 - Grad Success Week (GSW) 2021 (invited by: Paul Pappin)&lt;br /&gt;
:   &#039;&#039;Data Analysis in R and Accessing Advanced Research Computing&#039;&#039; &lt;br /&gt;
* Apr 22, 2021 - Centre for Health Informatics &lt;br /&gt;
:   &#039;&#039;Secure Computing on the MARC cluster&#039;&#039;&lt;br /&gt;
* Oct 21, 2021 - ENSF 619&lt;br /&gt;
&lt;br /&gt;
=== Year 2020 ===&lt;br /&gt;
* May 05, 2020 - Grad Success Week (GSW) 2020 (invited by: Paul Pappin)&lt;br /&gt;
:    &#039;&#039;Data Analysis in R and Accessing Advanced Research Computing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
== Overview of the courses we offer ==&lt;br /&gt;
&lt;br /&gt;
=== Introduction to Bash and Linux - Basic ===&lt;br /&gt;
&lt;br /&gt;
:     &#039;&#039;I. Understanding the Command Line Interface (CLI)&#039;&#039;&lt;br /&gt;
:*         Overview of CLI vs. Graphical User Interface (GUI)&lt;br /&gt;
:*         Importance of CLI in Bioinformatics and Data Science&lt;br /&gt;
&lt;br /&gt;
:     &#039;&#039;II. Getting Started with Bash&#039;&#039;&lt;br /&gt;
:*          Opening the Terminal&lt;br /&gt;
:*          Basic Shell Commands&lt;br /&gt;
&lt;br /&gt;
:     &#039;&#039;III. File and Directory Manipulation&#039;&#039;&lt;br /&gt;
:*           Creating and Deleting Files/Directories&lt;br /&gt;
:*           Copying and Moving Files/Directories&lt;br /&gt;
:*           Understanding Permissions (chmod)&lt;br /&gt;
&lt;br /&gt;
:      &#039;&#039;IV. Text Processing with Bash&#039;&#039;&lt;br /&gt;
:*           cat, head, and tail commands&lt;br /&gt;
:*           grep for pattern matching&lt;br /&gt;
:*           Redirection and Pipelines&lt;br /&gt;
&lt;br /&gt;
:      &#039;&#039;V. Basic Scripting&#039;&#039;&lt;br /&gt;
:*          Creating and Executing Bash Scripts&lt;br /&gt;
:*          Variables and Basic Control Structures&lt;br /&gt;
:*          Introduction to Functions&lt;br /&gt;
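&lt;br /&gt;
:      For a taste of parts III to V, a minimal session along these lines (the file, pattern, and name choices are illustrative, not part of the official outline):&lt;br /&gt;
&lt;br /&gt;
 mkdir results                         # III: create a directory&lt;br /&gt;
 cp raw/samples.txt results/           # III: copy a file&lt;br /&gt;
 chmod 640 results/samples.txt         # III: permissions (owner rw, group r)&lt;br /&gt;
 head -n 5 results/samples.txt         # IV: peek at the first lines&lt;br /&gt;
 grep &#039;chr1&#039; results/samples.txt &amp;gt; chr1.txt    # IV: pattern match, redirected&lt;br /&gt;
 wc -l &amp;lt; chr1.txt                    # IV: count the matches&lt;br /&gt;
 &lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # V: a minimal script with a function, a variable, and a loop&lt;br /&gt;
 greet() { echo &amp;quot;Hello, $1&amp;quot;; }&lt;br /&gt;
 for name in Alice Bob; do greet &amp;quot;$name&amp;quot;; done&lt;br /&gt;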
&lt;br /&gt;
=== Introduction to Bash and Linux - Advanced ===&lt;br /&gt;
:      &#039;&#039;I. Advanced Text Processing&#039;&#039;&lt;br /&gt;
:*          Regular Expressions&lt;br /&gt;
:*          Text Manipulation&lt;br /&gt;
&lt;br /&gt;
:      &#039;&#039;II. Shell Scripting Best Practices&#039;&#039;&lt;br /&gt;
:*           Error Handling and Logging&lt;br /&gt;
:*           Command-Line Arguments&lt;br /&gt;
:*           Debugging Techniques&lt;br /&gt;
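&lt;br /&gt;
:      A minimal sketch of the part II practices (the script and argument names are illustrative):&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 set -euo pipefail                     # error handling: stop on failures and unset variables&lt;br /&gt;
 log() { echo &amp;quot;$(date +%T) $*&amp;quot; &amp;gt;&amp;amp;2; }    # simple logging to stderr&lt;br /&gt;
 if [ $# -lt 1 ]; then                 # command-line argument check&lt;br /&gt;
     log &amp;quot;usage: $0 input-file&amp;quot;&lt;br /&gt;
     exit 1&lt;br /&gt;
 fi&lt;br /&gt;
 log &amp;quot;processing $1&amp;quot;&lt;br /&gt;
 # debugging: run as &#039;bash -x script.sh data.txt&#039; to trace execution&lt;br /&gt;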
: ...&lt;br /&gt;
&lt;br /&gt;
=== Introduction to Python on ARC ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Introduction to R on ARC ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== How to Speed Up R Codes? ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Introduction to Singularity/Apptainer - Basic ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Introduction to Singularity/Apptainer - Advanced ===&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=High-Performance_Computing_Courses_at_RCS&amp;diff=2975</id>
		<title>High-Performance Computing Courses at RCS</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=High-Performance_Computing_Courses_at_RCS&amp;diff=2975"/>
		<updated>2023-11-17T23:01:06Z</updated>

		<summary type="html">&lt;p&gt;Fridman: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;If you are a University of Calgary researcher intending to leverage the HPC infrastructure for your research and seeking customized sessions, please contact us at support@hpc.ucalgary.ca for further discussion.&lt;br /&gt;
&lt;br /&gt;
To customize our courses according to your requirements, please provide the following details at least four weeks in advance:&lt;br /&gt;
&lt;br /&gt;
# Specify the number of researchers to undergo training (minimum of 5).&lt;br /&gt;
# Share the domain of your research.&lt;br /&gt;
# Let us know if you have a particular workflow or application in mind.&lt;br /&gt;
# Inform us about the Linux background of the researchers; this information aids us in designing the course effectively.&lt;br /&gt;
&lt;br /&gt;
== Past courses offered by the Research Computing Services Team ==&lt;br /&gt;
&lt;br /&gt;
=== Year 2023 ===&lt;br /&gt;
&lt;br /&gt;
* May 01, 2023 - Grad Success Week (GSW) 2023 (invited by: Paul Pappin)&lt;br /&gt;
:   &#039;&#039;Data Analysis in R and Accessing Advanced Research Computing&#039;&#039; &lt;br /&gt;
* Jan 04, 2023 - Session for undergraduate bioinformatics students (invited by: David Anderson)&lt;br /&gt;
:   &#039;&#039;Introduction to High-Performance Computing Infrastructure at the University of Calgary&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Year 2022 ===&lt;br /&gt;
* May 05, 2022 - Grad Success Week (GSW) 2022 (invited by: Paul Pappin)&lt;br /&gt;
:   &#039;&#039;Data Analysis in R and Accessing Advanced Research Computing&#039;&#039;&lt;br /&gt;
* Jan 04, 2022 - Block week 2022 (invited by: David Anderson)&lt;br /&gt;
:   &#039;&#039;Introduction to High-Performance Computing Infrastructure at the University of Calgary&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Year 2021 ===&lt;br /&gt;
* May 05, 2021 - Grad Success Week (GSW) 2021 (invited by: Paul Pappin)&lt;br /&gt;
:   &#039;&#039;Data Analysis in R and Accessing Advanced Research Computing&#039;&#039; &lt;br /&gt;
* Apr 22, 2021 - Centre for Health Informatics &lt;br /&gt;
:   &#039;&#039;Secure Computing on the MARC cluster&#039;&#039;&lt;br /&gt;
* Oct 21, 2021 - ENSF 619&lt;br /&gt;
&lt;br /&gt;
=== Year 2020 ===&lt;br /&gt;
* May 05, 2020 - Grad Success Week (GSW) 2020 (invited by: Paul Pappin)&lt;br /&gt;
:    &#039;&#039;Data Analysis in R and Accessing Advanced Research Computing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
== Overview of the courses we offer ==&lt;br /&gt;
&lt;br /&gt;
=== Introduction to Bash and Linux - Basic ===&lt;br /&gt;
&lt;br /&gt;
:     &#039;&#039;I. Understanding the Command Line Interface (CLI)&#039;&#039;&lt;br /&gt;
:*         Overview of CLI vs. Graphical User Interface (GUI)&lt;br /&gt;
:*         Importance of CLI in Bioinformatics and Data Science&lt;br /&gt;
&lt;br /&gt;
:     &#039;&#039;II. Getting Started with Bash&#039;&#039;&lt;br /&gt;
:*          Opening the Terminal&lt;br /&gt;
:*          Basic Shell Commands&lt;br /&gt;
&lt;br /&gt;
:     &#039;&#039;III. File and Directory Manipulation&#039;&#039;&lt;br /&gt;
:*           Creating and Deleting Files/Directories&lt;br /&gt;
:*           Copying and Moving Files/Directories&lt;br /&gt;
:*           Understanding Permissions (chmod)&lt;br /&gt;
&lt;br /&gt;
:      &#039;&#039;IV. Text Processing with Bash&#039;&#039;&lt;br /&gt;
:*           cat, head, and tail commands&lt;br /&gt;
:*           grep for pattern matching&lt;br /&gt;
:*           Redirection and Pipelines&lt;br /&gt;
&lt;br /&gt;
:      &#039;&#039;V. Basic Scripting&#039;&#039;&lt;br /&gt;
:*          Creating and Executing Bash Scripts&lt;br /&gt;
:*          Variables and Basic Control Structures&lt;br /&gt;
:*          Introduction to Functions&lt;br /&gt;
&lt;br /&gt;
=== Introduction to Bash and Linux - Advanced ===&lt;br /&gt;
:      &#039;&#039;I. Advanced Text Processing&#039;&#039;&lt;br /&gt;
:*          Regular Expressions&lt;br /&gt;
:*          Text Manipulation&lt;br /&gt;
&lt;br /&gt;
:      &#039;&#039;II. Shell Scripting Best Practices&#039;&#039;&lt;br /&gt;
:*           Error Handling and Logging&lt;br /&gt;
:*           Command-Line Arguments&lt;br /&gt;
:*           Debugging Techniques&lt;br /&gt;
: ...&lt;br /&gt;
&lt;br /&gt;
=== Introduction to Python on ARC ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Introduction to R on ARC ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== How to Speed Up R Codes? ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Introduction to Singularity/Apptainer - Basic ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Introduction to Singularity/Apptainer - Advanced ===&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Template:ARC_Cluster_Status&amp;diff=2547</id>
		<title>Template:ARC Cluster Status</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Template:ARC_Cluster_Status&amp;diff=2547"/>
		<updated>2023-07-05T18:35:56Z</updated>

		<summary type="html">&lt;p&gt;Fridman: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Cluster Status&lt;br /&gt;
| cluster = ARC&lt;br /&gt;
| status = green&lt;br /&gt;
| title = Cluster operational&lt;br /&gt;
| message = No issues to report.&lt;br /&gt;
See the [[ARC Cluster Status]] page for system notices. &lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=External_collaborators&amp;diff=2541</id>
		<title>External collaborators</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=External_collaborators&amp;diff=2541"/>
		<updated>2023-06-28T20:00:38Z</updated>

		<summary type="html">&lt;p&gt;Fridman: /* Background */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Background =&lt;br /&gt;
&lt;br /&gt;
Accessing resources provided by Research Computing Services at the University of Calgary &lt;br /&gt;
requires that potential new users have an IT account with the University as well as an email address associated with the account, &lt;br /&gt;
such as &amp;lt;code&amp;gt;name@ucalgary.ca&amp;lt;/code&amp;gt;. &lt;br /&gt;
This makes it impossible for anyone outside the University of Calgary,&lt;br /&gt;
including external collaborators, to access the resources.&lt;br /&gt;
The solution is to formally associate an external collaborator who needs access to HPC resources at UofC with the University.&lt;br /&gt;
&lt;br /&gt;
= General Associate =&lt;br /&gt;
&lt;br /&gt;
An &#039;&#039;&#039;External Research Collaborator&#039;&#039;&#039; designation is for those who need to remotely access &lt;br /&gt;
the &#039;&#039;&#039;Secure Compute&#039;&#039;&#039; and &#039;&#039;&#039;High Performance Compute (HPC)&#039;&#039;&#039; resources at the University of Calgary. &lt;br /&gt;
This designation allows you to request &#039;&#039;&#039;General Associate (GA)&#039;&#039;&#039; access to our HPC and Secure Compute services, in an expedited manner, &lt;br /&gt;
for external research collaborators who are not University of Calgary employees or associated with AHS. &lt;br /&gt;
Researchers in this category require a Principal Investigator (PI) or a PI delegate to submit a Template Based Hire (TBH) &lt;br /&gt;
with the GA template. &lt;br /&gt;
&lt;br /&gt;
Please note that &#039;&#039;&#039;AHS external researchers&#039;&#039;&#039; have their own GA template and do not need to use this new GA template.&lt;br /&gt;
 &lt;br /&gt;
Principal Investigators or their delegates can request the creation of a new General Associate External Collaborator following &lt;br /&gt;
the Template Based Hire form process in PeopleSoft and selecting the template “UC_CWR_EXT_RES_CL – Gen Associate – External Research Collaborator”. &lt;br /&gt;
These requests will need to be approved by Research Computing Services, which manages HPC. Once the transaction is approved, &lt;br /&gt;
it will go to HR to complete the hiring process and the new account will be ready for your associate.&lt;br /&gt;
&lt;br /&gt;
= Official documents =&lt;br /&gt;
&lt;br /&gt;
* This reference guide provides instructions on how to initiate a template-based hire for a &#039;&#039;&#039;General Associate Relationship&#039;&#039;&#039;&lt;br /&gt;
:https://www.ucalgary.ca/hr/sites/default/files/teams/241/creating-new-relationship-general-associate-qrg.pdf&lt;br /&gt;
&lt;br /&gt;
* This link gives details on the &#039;&#039;&#039;various types&#039;&#039;&#039; of general associate hires:&lt;br /&gt;
:https://www.ucalgary.ca/hr/hiring-managing/administration/general-associate-set-up&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=RCS_Home_Page&amp;diff=2385</id>
		<title>RCS Home Page</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=RCS_Home_Page&amp;diff=2385"/>
		<updated>2023-03-20T20:51:27Z</updated>

		<summary type="html">&lt;p&gt;Fridman: /* What&amp;#039;s New */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Research Computing Services (RCS) is a group within the wider University of Calgary Information Technologies team that plans, manages, and supports high performance computing (HPC) systems in use by researchers throughout the University of Calgary.  Our primary focus is to meet the increasing demand for engineering and scientific computation by offering a wide range of specialized services to help researchers solve highly complex real-world problems or run large scale computationally intensive workloads on our high-end HPC resources.&lt;br /&gt;
&lt;br /&gt;
This RCS Wiki contains technical documentation for users of HPC systems operated by RCS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- &lt;br /&gt;
In case cluster status changes:&lt;br /&gt;
    *  set the status to yellow or red &lt;br /&gt;
    *  provide a custom &#039;title&#039; and &#039;message&#039;&lt;br /&gt;
&lt;br /&gt;
{{Cluster Status&lt;br /&gt;
|status=green&lt;br /&gt;
}}&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
=== Contact us for support ===&lt;br /&gt;
&lt;br /&gt;
* For general RCS/HPC inquiries, please email: [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca]&lt;br /&gt;
* For IT related issues (networking, VPN, email), please email: [mailto:it@ucalgary.ca it@ucalgary.ca]&lt;br /&gt;
* For Compute Canada specific questions: [mailto:support@computecanada.ca support@computecanada.ca]&lt;br /&gt;
&lt;br /&gt;
{{Clear}}&lt;br /&gt;
&amp;lt;div class=&amp;quot;row&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div class=&amp;quot;col-md-6&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== General information ==&lt;br /&gt;
* [[General Cluster Guidelines and Policies]]&lt;br /&gt;
* [[How to get an account]]&lt;br /&gt;
* [[Data ownership]]&lt;br /&gt;
* [[Connecting to RCS HPC Systems]]&lt;br /&gt;
* [[External collaborators]]&lt;br /&gt;
&lt;br /&gt;
* [[CloudStack|Cloud/Virtual Machine Infrastructure (CloudStack)]]&lt;br /&gt;
&lt;br /&gt;
* [[On-line resources for new Linux and ARC users]]&lt;br /&gt;
* [[Acknowledging Research Computing Services Group]]&lt;br /&gt;
&lt;br /&gt;
== Cluster Guides ==&lt;br /&gt;
* [[ARC Cluster Guide]] - ARC is a general purpose cluster for University of Calgary researchers.&lt;br /&gt;
* [[GLaDOS Cluster Guide]] - GLaDOS is a researcher-owned cluster maintained by Research Computing Services.&lt;br /&gt;
* [[TALC Cluster Guide]] - The Teaching and Learning Cluster (TALC) is a cluster created by Research Computing Services to support academic courses and workshops.&lt;br /&gt;
* [[MARC Cluster Guide]] - The Medical Advanced Research Computing (MARC) cluster at the University of Calgary, created by Research Computing Services in 2020.&lt;br /&gt;
&lt;br /&gt;
== Other services ==&lt;br /&gt;
&lt;br /&gt;
* [[Jupyter Notebooks]]&lt;br /&gt;
* [[Open OnDemand | Open OnDemand portal]]&lt;br /&gt;
&lt;br /&gt;
== Software pages ==&lt;br /&gt;
* [[Managing software on ARC]]&lt;br /&gt;
* [https://hpc.ucalgary.ca/arc/software/conda Using Conda (external link)]&lt;br /&gt;
* [[Gaussian on ARC]] - How to use Gaussian 16 on ARC.&lt;br /&gt;
* [[Apache Spark on ARC]]&lt;br /&gt;
* [[ARC Software pages]]&lt;br /&gt;
* [[Bioinformatics applications]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div class=&amp;quot;col-md-6&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Running courses on HPC resources ==&lt;br /&gt;
* [[TALC Cluster|TALC]] - Teaching and Learning Cluster (TALC) is a cluster created by Research Computing Services to support academic courses and workshops.&lt;br /&gt;
* [[TALC Terms of Use]] - Terms of use to which TALC account holders must agree to use the cluster.&lt;br /&gt;
* [[List of courses on TALC]] - A list of current and historical courses taught using TALC.&lt;br /&gt;
&lt;br /&gt;
== Training ==&lt;br /&gt;
* Our [[HPC Systems]]&lt;br /&gt;
* [[HPC Linux topics]] - A list of topics on which RCS technical support staff can provide one-on-one or group training&lt;br /&gt;
* [[Courses]]&lt;br /&gt;
* [[Linux Introduction]]&lt;br /&gt;
* [[What is a scheduler?]]&lt;br /&gt;
* [[Running jobs]]&lt;br /&gt;
* [[Data storage options for UofC researchers]]&lt;br /&gt;
* [[Security and privacy]]&lt;br /&gt;
* [[How to transfer data]]&lt;br /&gt;
&lt;br /&gt;
* [[UofC Services]]&lt;br /&gt;
&lt;br /&gt;
* [[Book online training sessions]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [[How-Tos | More How-Tos]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Clear}}&lt;br /&gt;
&lt;br /&gt;
==What&#039;s New==&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=RCS_Home_Page&amp;diff=2384</id>
		<title>RCS Home Page</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=RCS_Home_Page&amp;diff=2384"/>
		<updated>2023-03-20T20:49:28Z</updated>

		<summary type="html">&lt;p&gt;Fridman: /* Cluster Guides */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Research Computing Services (RCS) is a group within the wider University of Calgary Information Technologies team that plans, manages, and supports high performance computing (HPC) systems in use by researchers throughout the University of Calgary.  Our primary focus is to meet the increasing demand for engineering and scientific computation by offering a wide range of specialized services to help researchers solve highly complex real-world problems or run large scale computationally intensive workloads on our high-end HPC resources.&lt;br /&gt;
&lt;br /&gt;
This RCS Wiki contains technical documentation for users of HPC systems operated by RCS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- &lt;br /&gt;
In case cluster status changes:&lt;br /&gt;
    *  set the status to yellow or red &lt;br /&gt;
    *  provide a custom &#039;title&#039; and &#039;message&#039;&lt;br /&gt;
&lt;br /&gt;
{{Cluster Status&lt;br /&gt;
|status=green&lt;br /&gt;
}}&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
=== Contact us for support ===&lt;br /&gt;
&lt;br /&gt;
* For general RCS/HPC inquiries, please email: [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca]&lt;br /&gt;
* For IT related issues (networking, VPN, email), please email: [mailto:it@ucalgary.ca it@ucalgary.ca]&lt;br /&gt;
* For Compute Canada specific questions: [mailto:support@computecanada.ca support@computecanada.ca]&lt;br /&gt;
&lt;br /&gt;
{{Clear}}&lt;br /&gt;
&amp;lt;div class=&amp;quot;row&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div class=&amp;quot;col-md-6&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== General information ==&lt;br /&gt;
* [[General Cluster Guidelines and Policies]]&lt;br /&gt;
* [[How to get an account]]&lt;br /&gt;
* [[Data ownership]]&lt;br /&gt;
* [[Connecting to RCS HPC Systems]]&lt;br /&gt;
* [[External collaborators]]&lt;br /&gt;
&lt;br /&gt;
* [[CloudStack|Cloud/Virtual Machine Infrastructure (CloudStack)]]&lt;br /&gt;
&lt;br /&gt;
* [[On-line resources for new Linux and ARC users]]&lt;br /&gt;
* [[Acknowledging Research Computing Services Group]]&lt;br /&gt;
&lt;br /&gt;
== Cluster Guides ==&lt;br /&gt;
* [[ARC Cluster Guide]] - ARC is a general purpose cluster for University of Calgary researchers.&lt;br /&gt;
* [[GLaDOS Cluster Guide]] - GLaDOS is a researcher-owned cluster maintained by Research Computing Services.&lt;br /&gt;
* [[TALC Cluster Guide]] - The Teaching and Learning Cluster (TALC) is a cluster created by Research Computing Services to support academic courses and workshops.&lt;br /&gt;
* [[MARC Cluster Guide]] - The Medical Advanced Research Computing (MARC) cluster at the University of Calgary, created by Research Computing Services in 2020.&lt;br /&gt;
&lt;br /&gt;
== Other services ==&lt;br /&gt;
&lt;br /&gt;
* [[Jupyter Notebooks]]&lt;br /&gt;
* [[Open OnDemand | Open OnDemand portal]]&lt;br /&gt;
&lt;br /&gt;
== Software pages ==&lt;br /&gt;
* [[Managing software on ARC]]&lt;br /&gt;
* [https://hpc.ucalgary.ca/arc/software/conda Using Conda (external link)]&lt;br /&gt;
* [[Gaussian on ARC]] - How to use Gaussian 16 on ARC.&lt;br /&gt;
* [[Apache Spark on ARC]]&lt;br /&gt;
* [[ARC Software pages]]&lt;br /&gt;
* [[Bioinformatics applications]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div class=&amp;quot;col-md-6&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Running courses on HPC resources ==&lt;br /&gt;
* [[TALC Cluster|TALC]] - Teaching and Learning Cluster (TALC) is a cluster created by Research Computing Services to support academic courses and workshops.&lt;br /&gt;
* [[TALC Terms of Use]] - Terms of use to which TALC account holders must agree to use the cluster.&lt;br /&gt;
* [[List of courses on TALC]] - A list of current and historical courses taught using TALC.&lt;br /&gt;
&lt;br /&gt;
== Training ==&lt;br /&gt;
* Our [[HPC Systems]]&lt;br /&gt;
* [[HPC Linux topics]] - A list of topics on which RCS technical support staff can provide one-on-one or group training&lt;br /&gt;
* [[Courses]]&lt;br /&gt;
* [[Linux Introduction]]&lt;br /&gt;
* [[What is a scheduler?]]&lt;br /&gt;
* [[Running jobs]]&lt;br /&gt;
* [[Data storage options for UofC researchers]]&lt;br /&gt;
* [[Security and privacy]]&lt;br /&gt;
* [[How to transfer data]]&lt;br /&gt;
&lt;br /&gt;
* [[UofC Services]]&lt;br /&gt;
&lt;br /&gt;
* [[Book online training sessions]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [[How-Tos | More How-Tos]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Clear}}&lt;br /&gt;
&lt;br /&gt;
==What&#039;s New==&lt;br /&gt;
* [[CHGI Transition]] - Information on the current CHGI Transition&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=List_of_courses_on_TALC&amp;diff=1721</id>
		<title>List of courses on TALC</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=List_of_courses_on_TALC&amp;diff=1721"/>
		<updated>2022-02-10T17:52:24Z</updated>

		<summary type="html">&lt;p&gt;Fridman: /* Current Courses */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Message Box&lt;br /&gt;
|title=Interested in using TALC for your course?&lt;br /&gt;
|message=&#039;&#039;&#039;If you are the instructor for a course that could benefit from using TALC, please contact us at support@hpc.ucalgary.ca to discuss your requirements.&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br &amp;gt;To ensure that the appropriate software is available, student accounts are in place, and appropriate training has been provided for your teaching assistants, it is best to start this discussion several months prior to the start of the course.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
= Information =&lt;br /&gt;
&lt;br /&gt;
* TALC Terms of Use: https://rcs.ucalgary.ca/TALC_Terms_of_Use&lt;br /&gt;
&lt;br /&gt;
* TALC Guide: https://rcs.ucalgary.ca/TALC_Cluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== A New Course Checklist ==&lt;br /&gt;
&lt;br /&gt;
At least &#039;&#039;&#039;three (3) weeks&#039;&#039;&#039; before the course:&lt;br /&gt;
* Please contact us about the course you are going to teach using TALC.&lt;br /&gt;
&lt;br /&gt;
* Please provide us with the &#039;&#039;&#039;course ID&#039;&#039;&#039;, &#039;&#039;&#039;name of the course&#039;&#039;&#039;, and the &#039;&#039;&#039;name of the instructor&#039;&#039;&#039; for the course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Soon after&#039;&#039;&#039;:&lt;br /&gt;
* Request &#039;&#039;&#039;accounts&#039;&#039;&#039; on TALC for yourself and the TA who will be helping with the course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Please note that&#039;&#039;&#039;&lt;br /&gt;
: It is the &#039;&#039;&#039;instructor&#039;s responsibility&#039;&#039;&#039; to make sure that the &#039;&#039;&#039;software&#039;&#039;&#039; required for the course is &#039;&#039;&#039;available&#039;&#039;&#039; and &#039;&#039;&#039;works&#039;&#039;&#039; on TALC. &lt;br /&gt;
: If &#039;&#039;&#039;special software&#039;&#039;&#039; is required for the course, it is recommended that the instructor contact us &#039;&#039;&#039;well before the course starts&#039;&#039;&#039; to allow enough time for installation and testing of the software.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* If you are planning to share data with the students, request a &#039;&#039;&#039;shared directory for the course&#039;&#039;&#039;. &lt;br /&gt;
: There will also be a &#039;&#039;&#039;unix group&#039;&#039;&#039; on TALC to control access to the shared directory.&lt;br /&gt;
: The directory and the unix group will probably have the same name, like &amp;quot;course601-21&amp;quot;. &lt;br /&gt;
: Your account and the TA&#039;s account have to be added to the access group, as sketched below.&lt;br /&gt;
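&lt;br /&gt;
: For example, once the accounts have been added, membership and access can be checked along these lines (the group name and path are illustrative):&lt;br /&gt;
&lt;br /&gt;
 groups                           # the course group, e.g. course601-21, should be listed&lt;br /&gt;
 ls -ld /path/to/course601-21     # members should be able to see the shared directory&lt;br /&gt;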
&lt;br /&gt;
&lt;br /&gt;
* The data for courses run on TALC is deleted once the course is over, so the &#039;&#039;&#039;shared directory&#039;&#039;&#039; and &#039;&#039;&#039;software&#039;&#039;&#039; for a course have to be set up every time the course is run.&lt;br /&gt;
: If you need to build or have some &#039;&#039;&#039;specific software&#039;&#039;&#039; on TALC for the course, please start early. &lt;br /&gt;
: It needs to be &#039;&#039;&#039;installed and tested&#039;&#039;&#039; well before the course starts, so that a solution can be found in case something does not work.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* If you &#039;&#039;&#039;need help with setting up the software&#039;&#039;&#039; for the course, let RCS support know at support@hpc.ucalgary.ca.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* During the course, if students have &#039;&#039;&#039;difficulties with using TALC&#039;&#039;&#039;, the TAs are expected to help the students.&lt;br /&gt;
: If the TA cannot solve the issue, we prefer that the TA contact us, rather than having the student with the problem contact us directly.&lt;br /&gt;
: If TAs need training, this has to be arranged with us (RCS) before the course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* As soon as you have a &#039;&#039;&#039;list of students&#039;&#039;&#039; who are going to take the course, please send it to support@hpc.ucalgary.ca.&lt;br /&gt;
: The list has to have the students&#039; &#039;&#039;&#039;names&#039;&#039;&#039; as well as the associated &#039;&#039;&#039;UofC email addresses&#039;&#039;&#039;.&lt;br /&gt;
: The accounts will be created and added to the access group.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* If you have any concerns or questions about running a course on TALC please let us know.&lt;br /&gt;
&lt;br /&gt;
= Courses =&lt;br /&gt;
&lt;br /&gt;
== Current Courses ==&lt;br /&gt;
&lt;br /&gt;
=== Winter 2022 ===&lt;br /&gt;
* ENSF 511 - Industrial Internet of Things Systems and Data Analytics, Hatem Abou-Zeid &lt;br /&gt;
* MDSC 301 -  Introduction to Bioinformatics, Tatiana Maroilley&lt;br /&gt;
* GLGY 605 - Groundwater Flow and Reactive Transport Modelling, Benjamin Tutolo (requested Nov 2021)&lt;br /&gt;
&lt;br /&gt;
* MDSC 201 - Applied Bioinformatics, David Anderson (requested Nov 2021)&lt;br /&gt;
&lt;br /&gt;
* MDSC 519 - Advanced Bioinformatics, David Anderson (requested Nov 2021)&lt;br /&gt;
&lt;br /&gt;
* DATA 608 - Developing Big Data Applications, Leanne Wu (requested Nov 2021).&lt;br /&gt;
&lt;br /&gt;
* BMEN 415 - Sensor Systems and Data Analytics, Ethan MacDonald (requested Dec 2021).&lt;br /&gt;
&lt;br /&gt;
* CPSC 601.04 - High Performance Scientific Computing and Visualization, Usman Alim (late request, Jan 2022).&lt;br /&gt;
&lt;br /&gt;
* ENEL 645 - Data Mining &amp;amp; Machine Learning, Roberto Souza (late request, Jan 2022).&lt;br /&gt;
&lt;br /&gt;
== Past Courses ==&lt;br /&gt;
&lt;br /&gt;
=== Fall 2021 ===&lt;br /&gt;
&lt;br /&gt;
* ENSF 619.01 - Ethan MacDonald&lt;br /&gt;
* MDSC 523 - David Anderson&lt;br /&gt;
* ENSF 619.02 - Roberto Medeiros de Souza&lt;br /&gt;
* ENSF 612 - Gias Uddin and Ajoy Das&lt;br /&gt;
&lt;br /&gt;
=== 2021 Winter ===&lt;br /&gt;
* GLGY 605 - Benjamin Tutolo&lt;br /&gt;
* BMEN 415 - Ethan MacDonald&lt;br /&gt;
* MDSC 201 - David Anderson&lt;br /&gt;
&lt;br /&gt;
=== 2020 Spring ===&lt;br /&gt;
* Bioinformatics workshop Q Zhang&lt;br /&gt;
* Bioinformatics workshop Q Zhang&lt;br /&gt;
&lt;br /&gt;
=== 2020 Winter ===&lt;br /&gt;
* DATA 623 R Walker&lt;br /&gt;
* GLGY 605 B Tutolo&lt;br /&gt;
* DATA 608 P Federl&lt;br /&gt;
* ENSF 612 J Kaur&lt;br /&gt;
&lt;br /&gt;
=== 2020 Winter Block Week ===&lt;br /&gt;
* MDSC 395 D Anderson&lt;br /&gt;
&lt;br /&gt;
= Resources =&lt;br /&gt;
&lt;br /&gt;
== Academic Schedule ==&lt;br /&gt;
* See https://www.ucalgary.ca/pubs/calendar/current/academic-schedule.html for the UofC&#039;s academic schedule. &lt;br /&gt;
&lt;br /&gt;
[[Category:TALC]]&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=MARC_Cluster_Guide&amp;diff=1659</id>
		<title>MARC Cluster Guide</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=MARC_Cluster_Guide&amp;diff=1659"/>
		<updated>2022-01-19T22:41:18Z</updated>

		<summary type="html">&lt;p&gt;Fridman: /* Hardware */ changed 8 nodes to 4 nodes&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Message Box&lt;br /&gt;
|icon=Security Icon.png&lt;br /&gt;
|title=Cybersecurity awareness at the U of C&lt;br /&gt;
|message=Please note that there are typically about 950 phishing attempts targeting University of Calgary accounts each month. This is just a reminder to be careful about computer security issues, both at home and at the University. Please visit https://it.ucalgary.ca/it-security for more information, tips on secure computing, and how to report suspected security problems.}}&lt;br /&gt;
&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=Need Help or have other MARC Related Questions?&lt;br /&gt;
|message=For all general RCS related issues, questions, or comments, please contact us at support@hpc.ucalgary.ca.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This guide gives an overview of the MARC (Medical Advanced Research Computing) cluster at the University of Calgary.&lt;br /&gt;
&lt;br /&gt;
It is intended to be read by new account holders getting started on MARC, covering such topics as the hardware and performance characteristics, available software, usage policies and how to log in.&lt;br /&gt;
&lt;br /&gt;
If you are looking for how to log in to MARC or how to get an account, please see [[MARC_accounts]].&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
MARC is a cluster of Linux-based computers purchased in 2019.&lt;br /&gt;
&lt;br /&gt;
The MARC cluster has been designed with controls appropriate for Level 3 and Level 4 classified data.  The University of Calgary Information Security Classification Standard is published here: https://www.ucalgary.ca/policies/files/policies/im010-03-security-standard_0.pdf&lt;br /&gt;
&lt;br /&gt;
Due to security requirements for Level 3/4 data, some necessary restrictions have been placed on MARC to prevent accidental (or otherwise) data exfiltration:&lt;br /&gt;
* Compute nodes and login nodes have no access to the internet.&lt;br /&gt;
* All data must be ingested to MARC by first copying it to SCDS (Secure Compute Data Store) and then fetching it from SCDS to MARC.&lt;br /&gt;
* Resulting data (outputs of analyses) must be copied to SCDS and then fetched from SCDS to wherever it needs to go using established means.&lt;br /&gt;
* All file accesses are recorded for auditing purposes.&lt;br /&gt;
* ssh connections to MARC must be through the IT Citrix system (Admin VPN is neither sufficient nor necessary).&lt;br /&gt;
* All accounts must be IT accounts.&lt;br /&gt;
* A project ID is required to use MARC. This project ID is the same number that is used on SCDS.&lt;br /&gt;
&lt;br /&gt;
== Hardware ==&lt;br /&gt;
MARC has compute nodes of two different varieties: &lt;br /&gt;
* 4 GPU (Graphics Processing Unit)-enabled nodes containing:&lt;br /&gt;
** 40 cores: each node has 2 sockets, each with an Intel Xeon Gold 6148 20-core processor running at 2.4 GHz. &lt;br /&gt;
** The 40 cores on each compute node share about 750 GB of RAM (memory), but jobs should request no more than 753000 MB.&lt;br /&gt;
** Two Tesla V100-PCIE-16GB GPUs.&lt;br /&gt;
* 1 bigmem node containing:&lt;br /&gt;
** 80 cores: the node has 4 sockets, each with an Intel Xeon Gold 6148 20-core processor running at 2.4 GHz. &lt;br /&gt;
** The 80 cores on the node share about 3 TB of RAM (memory), but jobs should request no more than 3000000 MB.&lt;br /&gt;
&lt;br /&gt;
=== cpu2019 ===&lt;br /&gt;
Allows non-GPU jobs to use:&lt;br /&gt;
* Up to 38 CPUs per node&lt;br /&gt;
* No GPUs&lt;br /&gt;
* Up to 500 GB memory&lt;br /&gt;
* These are the same nodes as the gpu2019 partition&lt;br /&gt;
&lt;br /&gt;
=== gpu2019 ===&lt;br /&gt;
Allows jobs requiring NVIDIA V100 GPUs to use:&lt;br /&gt;
* 1 or 2 GPUs per node&lt;br /&gt;
* Up to 40 CPUs per node&lt;br /&gt;
* Up to 750 GB memory&lt;br /&gt;
&lt;br /&gt;
=== bigmem ===&lt;br /&gt;
For very large memory jobs:&lt;br /&gt;
* Up to 80 CPUs&lt;br /&gt;
* Up to 3 TB memory&lt;br /&gt;
* No GPUs&lt;br /&gt;
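&lt;br /&gt;
As a minimal sketch only (assuming the Slurm conventions described in [[Running_jobs]]; the resource numbers and script name are illustrative), a batch job targeting one of these partitions might look like:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --partition=gpu2019      # or cpu2019 / bigmem, per the limits above&lt;br /&gt;
 #SBATCH --gres=gpu:1             # only on gpu2019 (1 or 2 V100s); omit elsewhere&lt;br /&gt;
 #SBATCH --cpus-per-task=8&lt;br /&gt;
 #SBATCH --mem=64000M             # stay below the per-partition memory ceilings&lt;br /&gt;
 #SBATCH --time=04:00:00&lt;br /&gt;
 &lt;br /&gt;
 python my_analysis.py            # illustrative payload&lt;br /&gt;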
&lt;br /&gt;
== Storage ==&lt;br /&gt;
About a petabyte of raw disk storage is available to the MARC cluster, but for error checking and performance reasons, the amount of usable storage for researchers&#039; projects is considerably less than that.  From a user&#039;s perspective, the total amount of storage is less important than the individual storage limits.  As described below, there are two storage areas: home and project.&lt;br /&gt;
&lt;br /&gt;
=== Home file system: /home ===&lt;br /&gt;
There is a per-user quota of 25 GB under /home. This limit is fixed and cannot be increased.  Each user has a directory under /home, which is the default working directory when logging in to MARC. It is expected that most researchers will work from /project and use /home only for software and similar items.  /home is intended only for L1/L2 data, not for patient-identifiable files; those go in the appropriate directory under /project.&lt;br /&gt;
&lt;br /&gt;
=== Project file system for larger projects: /project ===&lt;br /&gt;
Directories will be created in /project named after your project ID.  This name will be the same as your SCDS share name.  The expectation is that all files to do with that project will be stored in /project/projectid.  Quotas in /project are somewhat flexible.  Please write to support@hpc.ucalgary.ca with an estimate of how much space you will require.&lt;br /&gt;
&lt;br /&gt;
== Software installations ==&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
&lt;br /&gt;
There are some complications in using Python on MARC relative to using ARC. &lt;br /&gt;
Normally, we would recommend installing conda in the user&#039;s home directory. &lt;br /&gt;
On MARC, security requirements for working with L4 data require that we block outgoing and incoming internet connections. &lt;br /&gt;
As a result, new packages cannot be downloaded with conda. &lt;br /&gt;
&lt;br /&gt;
Depending on what you need, the two recommendations we can make are:&lt;br /&gt;
 &lt;br /&gt;
* Download the standard anaconda distribution from the anaconda website to a personal computer: https://repo.anaconda.com/archive/Anaconda3-2020.07-Linux-x86_64.sh&lt;br /&gt;
** Transfer the script to MARC via SCDS&lt;br /&gt;
** Copy it to your /home directory&lt;br /&gt;
** Install it in your home directory with &amp;lt;code&amp;gt;bash Anaconda3-2020.07-Linux-x86_64.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
** You will be asked to agree to a license agreement and to confirm that you wish to create a folder &amp;lt;code&amp;gt;anaconda3&amp;lt;/code&amp;gt;. Once the installation completes, you will have a new directory ~/anaconda3 under your home directory.&lt;br /&gt;
** To make it possible to use the local conda instance, change the system path to include your local Python directories: &amp;lt;code&amp;gt;export PATH=~/anaconda3/bin:$PATH&amp;lt;/code&amp;gt;&lt;br /&gt;
* Download a docker container with the software that you need including python (e.g. tensorflow-gpu)&lt;br /&gt;
** Transfer the docker container to MARC via SCDS&lt;br /&gt;
** Copy it to your /home directory&lt;br /&gt;
** Run it with Singularity, as sketched below&lt;br /&gt;
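&lt;br /&gt;
Put together, a minimal sketch of the two options (assuming the installer above has been transferred in via SCDS, and &amp;lt;code&amp;gt;tensorflow.tar&amp;lt;/code&amp;gt; stands for a &amp;lt;code&amp;gt;docker save&amp;lt;/code&amp;gt; archive created on your own machine; script names are illustrative):&lt;br /&gt;
&lt;br /&gt;
 # Option 1: local Anaconda install&lt;br /&gt;
 bash Anaconda3-2020.07-Linux-x86_64.sh&lt;br /&gt;
 export PATH=~/anaconda3/bin:$PATH&lt;br /&gt;
 python --version                 # should now report the Anaconda Python&lt;br /&gt;
 &lt;br /&gt;
 # Option 2: container image, run with Singularity&lt;br /&gt;
 singularity build tf.sif docker-archive://tensorflow.tar&lt;br /&gt;
 singularity exec --nv tf.sif python my_script.py&lt;br /&gt;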
&lt;br /&gt;
&lt;br /&gt;
* Non-open-source software that requires a connection to a license server may require admin assistance to set up. Contact [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] for support.&lt;br /&gt;
&lt;br /&gt;
== Further Reading ==&lt;br /&gt;
See [[Running_jobs]] for information on starting a job.&lt;br /&gt;
&lt;br /&gt;
[[Category:MARC]]&lt;br /&gt;
[[Category:Guides]]&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack&amp;diff=1658</id>
		<title>CloudStack</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack&amp;diff=1658"/>
		<updated>2022-01-18T18:07:49Z</updated>

		<summary type="html">&lt;p&gt;Fridman: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Apache CloudStack is the RCS Virtual Machine (VM) Infrastructure as a Service (IaaS) offering for University of Calgary researchers, allowing them to quickly deploy VMs to support short-term projects. It is intended for proof-of-concept or prototyping environments, where resources are quickly spun up as required. Once the project matures, the expectation is a transition to a more permanent environment such as IT VMware.&lt;br /&gt;
&lt;br /&gt;
Researchers have a great degree of freedom in how they use CloudStack. As such, they are solely responsible for maintaining any VMs they deploy. The expectation is that the owners of the VMs will be patching and doing other maintenance work on a regular basis.&lt;br /&gt;
&lt;br /&gt;
Note that, as the CloudStack deployment is a research environment, it is not intended for long-term resilient service provision.&lt;br /&gt;
&lt;br /&gt;
==Release Date==&lt;br /&gt;
We expect that CloudStack will be available for UofC researchers at the end of March.&lt;br /&gt;
==Requesting Access==&lt;br /&gt;
Requests to use CloudStack are made through IT&#039;s ServiceNow request software. The request form is being finalized and further instructions will be forthcoming.&lt;br /&gt;
==End User Agreement==&lt;br /&gt;
Please see https://rcs.ucalgary.ca/CloudStack_End_User_Agreement&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack&amp;diff=1657</id>
		<title>CloudStack</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack&amp;diff=1657"/>
		<updated>2022-01-18T18:07:23Z</updated>

		<summary type="html">&lt;p&gt;Fridman: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Apache CloudStack is the RCS Virtual Machine (VM) Infrastructure as a Service (IaaS) offering for University of Calgary researchers, allowing them to quickly deploy VMs to support short-term projects. It is intended for proof-of-concept or prototyping environments, where resources are quickly spun up as required. Once the project matures, the expectation is a transition to a more permanent environment such as IT VMware.&lt;br /&gt;
&lt;br /&gt;
Researchers have a great degree of freedom in how they use CloudStack. As such, they are solely responsible for maintaining any VMs they deploy. The expectation is that the owners of the VMs will be patching and doing other maintenance work on a regular basis.&lt;br /&gt;
&lt;br /&gt;
Note that, as the CloudStack deployment is a research environment, it is not intended for long-term resilient service provision.&lt;br /&gt;
&lt;br /&gt;
==Release Date==&lt;br /&gt;
We expect that CloudStack will be available for UofC researchers at the end of March.&lt;br /&gt;
==Requesting Access==&lt;br /&gt;
Requests to use CloudStack are made through IT&#039;s ServiceNow request software. The request form is being finalized and further instructions will be forthcoming.&lt;br /&gt;
==End User Agreement==&lt;br /&gt;
Please see &amp;lt;https://rcs.ucalgary.ca/CloudStack_End_User_Agreement&amp;gt;&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack&amp;diff=1656</id>
		<title>CloudStack</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack&amp;diff=1656"/>
		<updated>2022-01-18T18:06:25Z</updated>

		<summary type="html">&lt;p&gt;Fridman: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Apache CloudStack is the RCS Virtual Machine (VM) Infrastructure as a Service (IaaS) offering for University of Calgary researchers, allowing them to quickly deploy VMs to support short-term projects. It is intended for proof-of-concept or prototyping environments, where resources are quickly spun up as required. Once the project matures, the expectation is a transition to a more permanent environment such as IT VMware.&lt;br /&gt;
&lt;br /&gt;
Researchers have a great degree of freedom in how they use CloudStack. As such, they are solely responsible for maintaining any VMs they deploy. The expectation is that the owners of the VMs will be patching and doing other maintenance work on a regular basis.&lt;br /&gt;
&lt;br /&gt;
Note that, as the CloudStack deployment is a research environment, it is not intended for long-term resilient service provision.&lt;br /&gt;
&lt;br /&gt;
==Release Date==&lt;br /&gt;
We expect that CloudStack will be available for UofC researchers at the end of March.&lt;br /&gt;
==Requesting Access==&lt;br /&gt;
Requests to use CloudStack are made through IT&#039;s ServiceNow request software. The request form is being finalized and further instructions will be forthcoming.&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack&amp;diff=1655</id>
		<title>CloudStack</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack&amp;diff=1655"/>
		<updated>2022-01-18T18:02:15Z</updated>

		<summary type="html">&lt;p&gt;Fridman: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Apache CloudStack is the RCS Virtual Machine (VM) Infrastructure as a Service (IaaS) offering for University of Calgary researchers, allowing them to quickly deploy VMs to support short-term projects. It is intended for proof-of-concept or prototyping environments, where resources are quickly spun up as required. Once the project matures, the expectation is a transition to a more permanent environment such as IT VMware.&lt;br /&gt;
&lt;br /&gt;
Researchers have a great degree of freedom in how they use CloudStack. As such, they are solely responsible for maintaining any VMs they deploy. The expectation is that the owners of the VMs will be patching and doing other maintenance work on a regular basis.&lt;br /&gt;
&lt;br /&gt;
Note that, as the CloudStack deployment is a research environment, it is not intended for long-term resilient service provision.&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=List_of_courses_on_TALC&amp;diff=1623</id>
		<title>List of courses on TALC</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=List_of_courses_on_TALC&amp;diff=1623"/>
		<updated>2022-01-07T19:26:58Z</updated>

		<summary type="html">&lt;p&gt;Fridman: /* Winter 2022 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Message Box&lt;br /&gt;
|title=Interested in using TALC for your course?&lt;br /&gt;
|message=&#039;&#039;&#039;If you are the instructor for a course that could benefit from using TALC, please contact us at support@hpc.ucalgary.ca to discuss your requirements.&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br &amp;gt;To ensure that the appropriate software is available, student accounts are in place, and appropriate training has been provided for your teaching assistants, it is best to start this discussion several months prior to the start of the course.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
= Information =&lt;br /&gt;
&lt;br /&gt;
* TALC Terms of Use: https://rcs.ucalgary.ca/TALC_Terms_of_Use&lt;br /&gt;
&lt;br /&gt;
* TALC Guide: https://rcs.ucalgary.ca/TALC_Cluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== A New Course Checklist ==&lt;br /&gt;
&lt;br /&gt;
At least &#039;&#039;&#039;three (3) weeks&#039;&#039;&#039; before the course:&lt;br /&gt;
* Please contact us about the course you are going to teach using TALC.&lt;br /&gt;
&lt;br /&gt;
* Please provide us with the &#039;&#039;&#039;course ID&#039;&#039;&#039;, &#039;&#039;&#039;name of the course&#039;&#039;&#039;, and the &#039;&#039;&#039;name of the instructor&#039;&#039;&#039; for the course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Soon after&#039;&#039;&#039;:&lt;br /&gt;
* Request &#039;&#039;&#039;accounts&#039;&#039;&#039; on TALC for yourself and the TA who will be helping with the course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Please note that&#039;&#039;&#039;&lt;br /&gt;
: It is the &#039;&#039;&#039;instructor&#039;s responsibility&#039;&#039;&#039; to make sure that the &#039;&#039;&#039;software&#039;&#039;&#039; required for the course is &#039;&#039;&#039;available&#039;&#039;&#039; and &#039;&#039;&#039;works&#039;&#039;&#039; on TALC. &lt;br /&gt;
: If &#039;&#039;&#039;special software&#039;&#039;&#039; is required for the course, it is recommended that the instructor contact us &#039;&#039;&#039;well before the course starts&#039;&#039;&#039; to allow enough time for installation and testing of the software.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* If you are planning to share data with the students, request a &#039;&#039;&#039;shared directory for the course&#039;&#039;&#039;. &lt;br /&gt;
: There will also be a &#039;&#039;&#039;unix group&#039;&#039;&#039; on TALC to control access to the shared directory.&lt;br /&gt;
: The directory and the unix group will probably have the same name, like &amp;quot;course601-21&amp;quot;. &lt;br /&gt;
: Your account and the TA&#039;s account have to be added to the access group.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* The data for courses run on TALC is deleted once the course is over, so the &#039;&#039;&#039;shared directory&#039;&#039;&#039; and &#039;&#039;&#039;software&#039;&#039;&#039; for a course have to be set up every time the course is run.&lt;br /&gt;
: If you need to build or have some &#039;&#039;&#039;specific software&#039;&#039;&#039; on TALC for the course, please start early. &lt;br /&gt;
: It needs to be &#039;&#039;&#039;installed and tested&#039;&#039;&#039; well before the course starts, so that a solution can be found in case something does not work.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* If you &#039;&#039;&#039;need help with setting up the software&#039;&#039;&#039; for the course, let RCS support know at support@hpc.ucalgary.ca.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* During the course, if students have &#039;&#039;&#039;difficulties with using TALC&#039;&#039;&#039;, the TAs are expected to help the students.&lt;br /&gt;
: If the TA cannot solve the issue, we prefer that the TA contact us, rather than having the student with the problem contact us directly.&lt;br /&gt;
: If TAs need training, this has to be arranged with us (RCS) before the course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* As soon as you have a &#039;&#039;&#039;list of students&#039;&#039;&#039; who are going to take the course, please send it to support@hpc.ucalgary.ca.&lt;br /&gt;
: The list has to have the students&#039; &#039;&#039;&#039;names&#039;&#039;&#039; as well as the associated &#039;&#039;&#039;UofC email addresses&#039;&#039;&#039;.&lt;br /&gt;
: The accounts will be created and added to the access group.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* If you have any concerns or questions about running a course on TALC please let us know.&lt;br /&gt;
&lt;br /&gt;
= Courses =&lt;br /&gt;
&lt;br /&gt;
== Current Courses ==&lt;br /&gt;
&lt;br /&gt;
=== Winter 2022 ===&lt;br /&gt;
* MDSC 301 -  Introduction to Bioinformatics, Tatiana Maroilley&lt;br /&gt;
* GLGY 605 - Groundwater Flow and Reactive Transport Modelling, Benjamin Tutolo (requested Nov 2021)&lt;br /&gt;
&lt;br /&gt;
* MDSC 201 - Applied Bioinformatics, David Anderson (requested Nov 2021)&lt;br /&gt;
&lt;br /&gt;
* MDSC 519 - Advanced Bioinformatics, David Anderson (requested Nov 2021)&lt;br /&gt;
&lt;br /&gt;
* DATA 608 - Developing Big Data Applications, Leanne Wu (requested Nov 2021).&lt;br /&gt;
&lt;br /&gt;
* BMEN 415 - Sensor Systems and Data Analytics, Ethan MacDonald (requested Dec 2021).&lt;br /&gt;
&lt;br /&gt;
* CPSC 601.04 - High Performance Scientific Computing and Visualization, Usman Alim (late request, Jan 2022).&lt;br /&gt;
&lt;br /&gt;
* ENEL 645 - Data Mining &amp;amp; Machine Learning, Roberto Souza (late request, Jan 2022).&lt;br /&gt;
&lt;br /&gt;
== Past Courses ==&lt;br /&gt;
&lt;br /&gt;
=== Fall 2021 ===&lt;br /&gt;
&lt;br /&gt;
* ENSF 619.01 - Ethan MacDonald&lt;br /&gt;
* MDSC 523 - David Anderson&lt;br /&gt;
* ENSF 619.02 - Roberto Medeiros de Souza&lt;br /&gt;
* ENSF 612 - Gias Uddin and Ajoy Das&lt;br /&gt;
&lt;br /&gt;
=== 2021 Winter ===&lt;br /&gt;
* GLGY 605 - Benjamin Tutolo&lt;br /&gt;
* BMEN 415 - Ethan MacDonald&lt;br /&gt;
* MDSC 201 - David Anderson&lt;br /&gt;
&lt;br /&gt;
=== 2020 Spring ===&lt;br /&gt;
* Bioinformatics workshop Q Zhang&lt;br /&gt;
* Bioinformatics workshop Q Zhang&lt;br /&gt;
&lt;br /&gt;
=== 2020 Winter ===&lt;br /&gt;
* DATA 623 R Walker&lt;br /&gt;
* GLGY 605 B Tutolo&lt;br /&gt;
* DATA 608 P Federl&lt;br /&gt;
* ENSF 612 J Kaur&lt;br /&gt;
&lt;br /&gt;
=== 2020 Winter Block Week ===&lt;br /&gt;
* MDSC 395 D Anderson&lt;br /&gt;
&lt;br /&gt;
= Resources =&lt;br /&gt;
&lt;br /&gt;
== Academic Schedule ==&lt;br /&gt;
* See https://www.ucalgary.ca/pubs/calendar/current/academic-schedule.html for the UofC&#039;s academic schedule. &lt;br /&gt;
&lt;br /&gt;
[[Category:TALC]]&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=List_of_courses_on_TALC&amp;diff=1622</id>
		<title>List of courses on TALC</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=List_of_courses_on_TALC&amp;diff=1622"/>
		<updated>2022-01-07T19:25:10Z</updated>

		<summary type="html">&lt;p&gt;Fridman: /* Winter 2022 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Message Box&lt;br /&gt;
|title=Interested in using TALC for your course?&lt;br /&gt;
|message=&#039;&#039;&#039;If you are the instructor for a course that could benefit from using TALC, please contact us at support@hpc.ucalgary.ca to discuss your requirements.&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br &amp;gt;To ensure that the appropriate software is available, student accounts are in place, and appropriate training has been provided for your teaching assistants, it is best to start this discussion several months prior to the start of the course.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
= Information =&lt;br /&gt;
&lt;br /&gt;
* TALC Terms of Use: https://rcs.ucalgary.ca/TALC_Terms_of_Use&lt;br /&gt;
&lt;br /&gt;
* TALC Guide: https://rcs.ucalgary.ca/TALC_Cluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== A New Course Checklist ==&lt;br /&gt;
&lt;br /&gt;
At least &#039;&#039;&#039;three (3) weeks&#039;&#039;&#039; before the course:&lt;br /&gt;
* Please contact us about the course you are going to teach using TALC.&lt;br /&gt;
&lt;br /&gt;
* Please provide us with the &#039;&#039;&#039;course ID&#039;&#039;&#039;, &#039;&#039;&#039;name of the course&#039;&#039;&#039;, and the &#039;&#039;&#039;name of the instructor&#039;&#039;&#039; for the course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Soon after&#039;&#039;&#039;:&lt;br /&gt;
* Request &#039;&#039;&#039;accounts&#039;&#039;&#039; on TALC for yourself and the TA who will be helping with the course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Please note that&#039;&#039;&#039;&lt;br /&gt;
: It is the &#039;&#039;&#039;instructor&#039;s responsibility&#039;&#039;&#039; to make sure that the &#039;&#039;&#039;software&#039;&#039;&#039; required for the course is &#039;&#039;&#039;available&#039;&#039;&#039; and &#039;&#039;&#039;works&#039;&#039;&#039; on TALC. &lt;br /&gt;
: If &#039;&#039;&#039;special software&#039;&#039;&#039; is required for the course, it is recommended that the instructor contact us &#039;&#039;&#039;well before the course starts&#039;&#039;&#039; to allow enough time for installation and testing of the software.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* If you are planning to share data with the students, request a &#039;&#039;&#039;shared directory for the course&#039;&#039;&#039;. &lt;br /&gt;
: There will also be a &#039;&#039;&#039;unix group&#039;&#039;&#039; on TALC to control access to the shared directory.&lt;br /&gt;
: The directory and the unix group will usually have the same name, such as &amp;quot;course601-21&amp;quot;.&lt;br /&gt;
: Your account and the TAs&#039; accounts have to be added to the access group.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* The data for courses run on TALC is deleted once the course is over, so the &#039;&#039;&#039;shared directory&#039;&#039;&#039; and &#039;&#039;&#039;software&#039;&#039;&#039; for a course have to be set up every time the course is run.&lt;br /&gt;
: If you need to build or have some &#039;&#039;&#039;specific software&#039;&#039;&#039; on TALC for the course, please start early.&lt;br /&gt;
: It needs to be &#039;&#039;&#039;installed and tested&#039;&#039;&#039; well before the course starts, so that a solution can be found in case something does not work.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* If you &#039;&#039;&#039;need help with setting up the software&#039;&#039;&#039; for the course, let RCS support know at support@hpc.ucalgary.ca.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* During the course, if students have &#039;&#039;&#039;difficulties with using TALC&#039;&#039;&#039;, the TAs are expected to help the students.&lt;br /&gt;
: If the TA cannot solve the issue, we prefer that the TA contact us rather than the student who has the problem.&lt;br /&gt;
: If TAs need training, this has to be arranged with us (RCS) before the course starts.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* As soon as you have a &#039;&#039;&#039;list of students&#039;&#039;&#039; who are going to take the course, please send it to support@hpc.ucalgary.ca.&lt;br /&gt;
: The list has to have students&#039; &#039;&#039;&#039;names&#039;&#039;&#039; as well as associated &#039;&#039;&#039;UofC email addresses&#039;&#039;&#039;.&lt;br /&gt;
: The accounts will be created and added to the access group.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* If you have any concerns or questions about running a course on TALC, please let us know.&lt;br /&gt;
&lt;br /&gt;
= Courses =&lt;br /&gt;
&lt;br /&gt;
== Current Courses ==&lt;br /&gt;
&lt;br /&gt;
=== Winter 2022 ===&lt;br /&gt;
* MDSC 301 - Tatiana Maroilley &lt;br /&gt;
* GLGY 605 - Groundwater Flow and Reactive Transport Modelling, Benjamin Tutolo (requested Nov 2021)&lt;br /&gt;
&lt;br /&gt;
* MDSC 201 - Applied Bioinformatics, David Anderson (requested Nov 2021)&lt;br /&gt;
&lt;br /&gt;
* MDSC 519 - Advanced Bioinformatics, David Anderson (requested Nov 2021)&lt;br /&gt;
&lt;br /&gt;
* DATA 608 - Developing Big Data Applications, Leanne Wu (requested Nov 2021).&lt;br /&gt;
&lt;br /&gt;
* BMEN 415 - Sensor Systems and Data Analytics, Ethan MacDonald (requested Dec 2021).&lt;br /&gt;
&lt;br /&gt;
* CPSC 601.04 - High Performance Scientific Computing and Visualization, Usman Alim (late request, Jan 2022).&lt;br /&gt;
&lt;br /&gt;
* ENEL 645 - Data Mining &amp;amp; Machine Learning, Roberto Souza (late request, Jan 2022).&lt;br /&gt;
&lt;br /&gt;
== Past Courses ==&lt;br /&gt;
&lt;br /&gt;
=== Fall 2021 ===&lt;br /&gt;
&lt;br /&gt;
* ENSF 619.01 - Ethan MacDonald&lt;br /&gt;
* MDSC 523 - David Anderson&lt;br /&gt;
* ENSF 619.02 - Roberto Medeiros de Souza&lt;br /&gt;
* ENSF 612 - Gias Uddin and Ajoy Das&lt;br /&gt;
&lt;br /&gt;
=== Winter 2021 ===&lt;br /&gt;
* GLGY 605 - Benjamin Tutolo&lt;br /&gt;
* BMEN 415 - Ethan MacDonald&lt;br /&gt;
* MDSC 201 - David Anderson&lt;br /&gt;
&lt;br /&gt;
=== Spring 2020 ===&lt;br /&gt;
* Bioinformatics workshop Q Zhang&lt;br /&gt;
&lt;br /&gt;
=== Winter 2020 ===&lt;br /&gt;
* DATA 623 R Walker&lt;br /&gt;
* GLGY 605 B Tutolo&lt;br /&gt;
* DATA 608 P Federl&lt;br /&gt;
* ENSF 612 J Kaur&lt;br /&gt;
&lt;br /&gt;
=== Winter 2020 Block Week ===&lt;br /&gt;
* MDSC 395 D Anderson&lt;br /&gt;
&lt;br /&gt;
= Resources =&lt;br /&gt;
&lt;br /&gt;
== Academic Schedule ==&lt;br /&gt;
* See https://www.ucalgary.ca/pubs/calendar/current/academic-schedule.html for the UofC&#039;s academic schedule.&lt;br /&gt;
&lt;br /&gt;
[[Category:TALC]]&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=TALC_Cluster_Guide&amp;diff=1617</id>
		<title>TALC Cluster Guide</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=TALC_Cluster_Guide&amp;diff=1617"/>
		<updated>2022-01-06T22:04:05Z</updated>

		<summary type="html">&lt;p&gt;Fridman: /* Time limits */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Message Box&lt;br /&gt;
|icon=Security Icon.png&lt;br /&gt;
|title=Cybersecurity awareness at the U of C&lt;br /&gt;
|message=Please note that there are typically about 950 phishing attempts targeting University of Calgary accounts each month. This is just a reminder to be careful about computer security issues, both at home and at the University. Please visit https://it.ucalgary.ca/it-security for more information, tips on secure computing, and how to report suspected security problems.}}&lt;br /&gt;
&lt;br /&gt;
This guide gives an overview of the Teaching and Learning Cluster (TALC) at the University of Calgary and is intended to be read by new account holders getting started on TALC. It covers such topics as the hardware and performance characteristics, available software, usage policies, and how to log in and run jobs.&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
TALC is a cluster of computers created by Research Computing Services (RCS) in response to requests for a central computing resource to support academic courses and workshops offered at the University of Calgary. It is a complement to the Advanced Research Computing (ARC) cluster that is used for research, rather than educational purposes. The software environment in the TALC and ARC clusters is very similar and workflows between the two clusters are identical.  What students learn about using TALC will have direct applicability to using ARC should they go on to use ARC for research work. &lt;br /&gt;
&lt;br /&gt;
If you are the instructor for a course that could benefit from using TALC, please review this guide and the [[TALC Terms of Use]] and then contact us at support@hpc.ucalgary.ca to discuss your requirements.  &lt;br /&gt;
&lt;br /&gt;
Please note that in order to ensure that the appropriate software is available, student accounts are in place, and appropriate training has been provided for your teaching assistants, it is best to start this discussion several months prior to the start of the course.&lt;br /&gt;
&lt;br /&gt;
If you are a student in a course using TALC, please review this guide for basic instructions in using the cluster.  Questions should first be directed to the teaching assistants or instructor for your course.&lt;br /&gt;
&lt;br /&gt;
===Obtaining an account===&lt;br /&gt;
TALC account requests are expected to be submitted by the course instructor rather than by individual students. You must have a University of Calgary IT account in order to use TALC. If you do not have a University of Calgary IT account or email address, please register for one at https://itregport.ucalgary.ca/. In order to ensure TALC is provisioned in time for a course start date, the instructor should submit the initial list of @ucalgary.ca accounts needed for the course at least two weeks before the start date.&lt;br /&gt;
&lt;br /&gt;
=== Getting Support ===&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=Need Help or have other TALC Related Questions?&lt;br /&gt;
|message=&#039;&#039;&#039;Students&#039;&#039;&#039;, please send TALC-related questions to your course instructor or teaching assistants.&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Course instructors and TAs&#039;&#039;&#039;, please report system issues to support@hpc.ucalgary.ca.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
==Hardware==&lt;br /&gt;
The TALC cluster is composed of repurposed research clusters that are a few generations old. As a result, individual processor performance will not be comparable to the latest processors, but it should be sufficient for educational purposes and course work.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Description&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores, Model, and Year&lt;br /&gt;
!Installed Memory&lt;br /&gt;
!GPU&lt;br /&gt;
!Network&lt;br /&gt;
|-&lt;br /&gt;
|gpu&lt;br /&gt;
|GPU Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores, 2x Intel Xeon Bronze 3204 CPU @ 1.90GHz (2019)&lt;br /&gt;
|192 GB&lt;br /&gt;
|5x NVIDIA Corporation TU104GL [Tesla T4]&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu16&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|36&lt;br /&gt;
|16 cores, 2x Eight-Core Intel Xeon CPU E5-2650 @ 2.00GHz (2012)&lt;br /&gt;
|64 GB&lt;br /&gt;
|N/A&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|2&lt;br /&gt;
|32 cores, 4x Intel(R) Xeon(R) CPU E7- 4830  @ 2.13GHz (2015)&lt;br /&gt;
|1024 GB&lt;br /&gt;
|N/A&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Storage===&lt;br /&gt;
{{Message Box&lt;br /&gt;
| title=No Backup Policy!&lt;br /&gt;
| message=You are responsible for your own backups.   Since accounts on TALC and related data are removed shortly after the associated course has finished, you should download anything you need to save to your own computer before the end of the course.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
TALC is connected to a network disk storage system. This storage is split across the &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; file systems.  &lt;br /&gt;
====&amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;: Home file system====&lt;br /&gt;
Each user has a directory under &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;, which is the default working directory when logging in to TALC. Each home directory has a per-user quota of 500 GB. This limit is fixed and cannot be increased.&lt;br /&gt;
&lt;br /&gt;
Note on file sharing: due to security concerns, permissions set using &amp;lt;code&amp;gt;chmod&amp;lt;/code&amp;gt; to allow other users to read or write your home directory will be automatically reverted by an automated system process unless an explicit exception is made. If you need to share files with others on the TALC cluster, please write to support@hpc.ucalgary.ca to ask for such an exception.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt;: Scratch file system for large job-oriented storage====&lt;br /&gt;
Associated with each job, a subdirectory is created under the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; directory that can be referenced in job scripts as &amp;lt;code&amp;gt;/scratch/${SLURM_JOB_ID}&amp;lt;/code&amp;gt;. You can use that directory for temporary files needed during the course of a job. Up to 30 TB of storage may be used per user (the total for all of your jobs) in the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; file system.&lt;br /&gt;
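&lt;br /&gt;
As a minimal sketch (the program and file names are hypothetical), a job script might stage temporary files in its per-job scratch directory and copy the results home before the job ends:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Work in the per-job scratch directory created for this job.&lt;br /&gt;
cd /scratch/${SLURM_JOB_ID}&lt;br /&gt;
&lt;br /&gt;
# Run the (hypothetical) program, writing temporary output here.&lt;br /&gt;
my_program --output results.txt&lt;br /&gt;
&lt;br /&gt;
# Copy anything worth keeping back to /home before the automatic cleanup.&lt;br /&gt;
cp results.txt ${HOME}/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;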
&lt;br /&gt;
Data in &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; associated with a given job will be deleted automatically, without exception, five days after the job finishes.&lt;br /&gt;
&lt;br /&gt;
== Software ==&lt;br /&gt;
{{Message Box&lt;br /&gt;
| title=Software Package Requests&lt;br /&gt;
| message=Course instructors or teaching assistants should write to support@hpc.ucalgary.ca if additional software is required for their course.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
All TALC nodes run a version of Rocky Linux. For your convenience, we have packaged commonly used software packages and dependencies as modules available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt;. If your software package is not available as a module, you may also try Anaconda, which allows users to manage and install custom packages in an isolated environment.&lt;br /&gt;
&lt;br /&gt;
For a list of the packages that have been made available, please see [[ARC Software pages]].&lt;br /&gt;
&lt;br /&gt;
=== Modules ===&lt;br /&gt;
The environment for using much of the installed software is set up through the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
Software packages bundled as a module will be available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt; and can be listed with the &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module avail&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To enable Python, load the Python module by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module load python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To unload the Python module, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module remove python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To see currently loaded modules, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module list&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Using TALC==&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=Usage subject to [[TALC Terms of Use]]&lt;br /&gt;
|message=Please review the [[TALC Terms of Use]] prior to using TALC.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
===Logging in===&lt;br /&gt;
To log in to TALC, connect using SSH to talc.ucalgary.ca. Connections to TALC are accepted only from the University of Calgary network (on campus) or through the University of Calgary General VPN (off campus).&lt;br /&gt;
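&lt;br /&gt;
For example, from a terminal (replace &amp;lt;code&amp;gt;username&amp;lt;/code&amp;gt; with your own UofC user name):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ ssh username@talc.ucalgary.ca&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;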
&lt;br /&gt;
See [[Connecting to RCS HPC Systems]] for more information.&lt;br /&gt;
&lt;br /&gt;
===Working interactively===&lt;br /&gt;
TALC uses the Linux operating system. The program that responds to your typed commands and allows you to run other programs is called the Linux shell. There are several different shells available, but by default you will use one called bash. It is useful to have some knowledge of the shell and of a variety of other command-line programs that you can use to manipulate files. If you are new to Linux systems, we recommend that you work through one of the many online tutorials that are available, such as the [http://www.ee.surrey.ac.uk/Teaching/Unix/index.html UNIX Tutorial for Beginners (external link)] provided by the University of Surrey. The tutorial covers such fundamental topics as creating, renaming and deleting files and directories, producing a listing of your files, and telling how much disk space you are using. For a more comprehensive introduction to Linux, see [http://linuxcommand.sourceforge.net/tlcl.php The Linux Command Line (external link)].&lt;br /&gt;
&lt;br /&gt;
The TALC login node may be used for such tasks as editing files, compiling programs and running short tests while developing programs. CPU intensive workloads on the login node should be restricted to under 15 minutes as per [[General Cluster Guidelines and Policies|our cluster guidelines]]. For interactive workloads exceeding 15 minutes, use the &#039;&#039;&#039;[[Running_jobs#Interactive_jobs|salloc command]]&#039;&#039;&#039; to allocate an interactive session on a compute node.&lt;br /&gt;
&lt;br /&gt;
The default &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; allocation is 1 CPU and 1 GB of memory. Adjust this by specifying &amp;lt;code&amp;gt;-n CPU#&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--mem Megabytes&amp;lt;/code&amp;gt;. You may request up to 5 hours of CPU time for interactive jobs.&lt;br /&gt;
 salloc --time 5:00:00 --partition cpu16&lt;br /&gt;
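&lt;br /&gt;
For example, an interactive session with 4 CPUs and 8000 MB of memory (the numbers are purely illustrative) could be requested with:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ salloc --time 2:00:00 --partition cpu16 -n 4 --mem 8000&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;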
&lt;br /&gt;
===Running non-interactive jobs (batch processing)===&lt;br /&gt;
Production runs and longer test runs should be submitted as (non-interactive) batch jobs, in which the commands to be executed are listed in a script (a text file). Batch job scripts are submitted using the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command, part of the Slurm job management and scheduling software. #SBATCH directive lines at the beginning of the script specify the resources needed for the job (cores, memory, run-time limit, and any specialized hardware needed).&lt;br /&gt;
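&lt;br /&gt;
As an illustrative sketch (the Python script is hypothetical, and the resource values are examples only), a minimal batch job script might look like:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# Resource requests for the scheduler (example values).&lt;br /&gt;
#SBATCH --partition=cpu16&lt;br /&gt;
#SBATCH --time=0:30:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH --mem=1000&lt;br /&gt;
&lt;br /&gt;
# Set up the software environment, then run the (hypothetical) program.&lt;br /&gt;
module load python/anaconda-3.6-5.1.0&lt;br /&gt;
python my_script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If the script were saved as, say, &amp;lt;code&amp;gt;my_job.slurm&amp;lt;/code&amp;gt;, it would be submitted with &amp;lt;code&amp;gt;sbatch my_job.slurm&amp;lt;/code&amp;gt;.&lt;br /&gt;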
&lt;br /&gt;
Most of the information on the Running Jobs page on the Compute Canada web site is also relevant for submitting and managing batch jobs and reserving processors for interactive work on TALC.  One major difference between running jobs on the TALC and Compute Canada clusters is in selecting the type of hardware that should be used for a job. On TALC, you choose the hardware to use primarily by specifying a partition, as described below.&lt;br /&gt;
&lt;br /&gt;
===Using JupyterHub on TALC===&lt;br /&gt;
TALC has a JupyterHub server, which runs a Jupyter server on one of the TALC compute nodes and provides all the necessary encryption and plumbing to deliver the notebook to your computer. To access this service you must have a TALC account. Point your browser at http://talc.ucalgary.ca and log in with your usual UofC account. As of this writing, the job that runs the Jupyter notebook is allocated 1 CPU and 10 GiB of memory on a cpu16 node.&lt;br /&gt;
&lt;br /&gt;
===Selecting a partition===&lt;br /&gt;
TALC currently has the following partitions available for use. The &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cpu12&amp;lt;/code&amp;gt; partitions refer to the same nodes; the &amp;lt;code&amp;gt;cpu12&amp;lt;/code&amp;gt; partition was created to expose only the CPUs on the GPU hardware for general-purpose use. Each GPU node has 5 Tesla T4 GPUs installed, but you may only request one per job within the TALC environment.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Description&lt;br /&gt;
!Nodes&lt;br /&gt;
!Cores&lt;br /&gt;
!Memory &lt;br /&gt;
!Memory Request Limit&lt;br /&gt;
!Time Limit&lt;br /&gt;
!GPU Request per Job&lt;br /&gt;
!Network&lt;br /&gt;
|-&lt;br /&gt;
|gpu&lt;br /&gt;
|GPU Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores&lt;br /&gt;
|192 GB&lt;br /&gt;
|190 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|1x NVIDIA Corporation TU104GL [Tesla T4]&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu12&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores&lt;br /&gt;
|192 GB&lt;br /&gt;
|190 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu16&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|36&lt;br /&gt;
|16 cores&lt;br /&gt;
|64 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|2&lt;br /&gt;
|32 cores&lt;br /&gt;
|1024 GB&lt;br /&gt;
|1022 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|}&lt;br /&gt;
There are several aspects to consider when selecting a partition, including:&lt;br /&gt;
* Resource requirements in terms of memory and CPU cores&lt;br /&gt;
* Hardware specific requirements, such as GPU or CPU Instruction Set Extensions&lt;br /&gt;
* Partition resource limits and potential wait time&lt;br /&gt;
* Software support for parallel processing using the Message Passing Interface (MPI), OpenMP, etc. For example, MPI can distribute memory across multiple nodes, so per-node memory requirements could be lower, whereas OpenMP or single-process serial code that is restricted to one node would require a higher-memory node.&lt;br /&gt;
&lt;br /&gt;
Since resources that are requested are reserved for your job, please request only as much CPU and memory as your job requires to avoid reducing the cluster efficiency.  If you are unsure which partition to use or the specific resource requests that are appropriate for your jobs, please contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] and we would be happy to work with you.&lt;br /&gt;
&lt;br /&gt;
=== Using a partition ===&lt;br /&gt;
&lt;br /&gt;
==== CPU only jobs ====&lt;br /&gt;
To select the &amp;lt;code&amp;gt;cpu16&amp;lt;/code&amp;gt; partition, include the following line in your batch job script:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --partition=cpu16&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;You may also start an interactive session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt;:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ salloc --time 1:00:00 -p cpu16&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== GPU jobs ====&lt;br /&gt;
In TALC, you are limited to exactly one GPU per job. Jobs that request zero GPUs, or two or more GPUs, will not be scheduled.&lt;br /&gt;
&lt;br /&gt;
To submit a job to the &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition with one GPU requested, include the following in your batch job script:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --partition=gpu&lt;br /&gt;
#SBATCH --gpus-per-node=1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As in the previous example, you may also request an interactive session on a GPU node using &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt;; just specify the &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition and the number of GPUs required.&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ salloc --time 1:00:00 -p gpu -n 1 --gpus-per-node 1 &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;You may verify that a GPU was assigned to your job or interactive session by running &amp;lt;code&amp;gt;nvidia-smi&amp;lt;/code&amp;gt;. This command will show you the status of the GPU that was assigned to you.&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ nvidia-smi&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| NVIDIA-SMI 450.80.02    Driver Version: 450.80.02    CUDA Version: 11.0     |&lt;br /&gt;
|-------------------------------+----------------------+----------------------+&lt;br /&gt;
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |&lt;br /&gt;
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |&lt;br /&gt;
|                               |                      |               MIG M. |&lt;br /&gt;
|===============================+======================+======================|&lt;br /&gt;
|   0  Tesla T4            Off  | 00000000:3B:00.0 Off |                    0 |&lt;br /&gt;
| N/A   36C    P0    14W /  70W |      0MiB / 15109MiB |      5%      Default |&lt;br /&gt;
|                               |                      |                  N/A |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
                                                                               &lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| Processes:                                                                  |&lt;br /&gt;
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |&lt;br /&gt;
|        ID   ID                                                   Usage      |&lt;br /&gt;
|=============================================================================|&lt;br /&gt;
|  No running processes found                                                 |&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Partition limitations ====&lt;br /&gt;
In addition to the hardware limitations of the nodes within the partition, please be aware that there may also be policy limits imposed on your account for each partition. These limits restrict the number of cores, nodes, or GPUs that can be used at any given time. Since the limits are applied on a partition-by-partition basis, using resources in one partition should not affect the available resources you can use in another partition.&lt;br /&gt;
&lt;br /&gt;
These limits can be listed by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sacctmgr show qos format=Name,MaxWall,MaxTRESPU%20,MaxSubmitJobs&lt;br /&gt;
      Name     MaxWall            MaxTRESPU MaxSubmit &lt;br /&gt;
---------- ----------- -------------------- --------- &lt;br /&gt;
    normal  1-00:00:00                                &lt;br /&gt;
  cpulimit                           cpu=48           &lt;br /&gt;
gpucpulim+                           cpu=18           &lt;br /&gt;
  gpulimit                 cpu=2,gres/gpu=1                &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Time limits ===&lt;br /&gt;
Use the &amp;lt;code&amp;gt;--time&amp;lt;/code&amp;gt; directive to tell the job scheduler the maximum time that your job might run.  For example:&lt;br /&gt;
 #SBATCH --time=hh:mm:ss&lt;br /&gt;
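&lt;br /&gt;
For instance, a job expected to run for at most four hours (within the 24-hour partition limit) would request:&lt;br /&gt;
 #SBATCH --time=4:00:00&lt;br /&gt;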
&lt;br /&gt;
You can use &amp;lt;code&amp;gt;scontrol show partitions&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; to see the current maximum time that a job can run.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; highlight=&amp;quot;6&amp;quot;&amp;gt;&lt;br /&gt;
$ scontrol show partitions&lt;br /&gt;
PartitionName=cpu16&lt;br /&gt;
   AllowGroups=ALL AllowAccounts=ALL AllowQos=ALL&lt;br /&gt;
   AllocNodes=ALL Default=YES QoS=cpulimit&lt;br /&gt;
   DefaultTime=NONE DisableRootJobs=NO ExclusiveUser=NO GraceTime=0 Hidden=NO&lt;br /&gt;
   MaxNodes=UNLIMITED MaxTime=1-00:00:00 MinNodes=0 LLN=NO MaxCPUsPerNode=UNLIMITED&lt;br /&gt;
   Nodes=n[1-36]&lt;br /&gt;
   PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO&lt;br /&gt;
   OverTimeLimit=NONE PreemptMode=OFF&lt;br /&gt;
   State=UP TotalCPUs=576 TotalNodes=36 SelectTypeParameters=NONE&lt;br /&gt;
   JobDefaults=(null)&lt;br /&gt;
   DefMemPerNode=UNLIMITED MaxMemPerNode=UNLIMITED&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, with &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; under the &amp;lt;code&amp;gt;TIMELIMIT&amp;lt;/code&amp;gt; column:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sinfo&lt;br /&gt;
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST&lt;br /&gt;
cpu12        up 1-00:00:00      3   idle t[1-3]&lt;br /&gt;
cpu16        up 1-00:00:00     36   idle n[1-36]&lt;br /&gt;
bigmem       up 1-00:00:00      2   idle bigmem[1-2]&lt;br /&gt;
gpu          up 1-00:00:00      3   idle t[1-3]&lt;br /&gt;
 &lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
[[Category:TALC]]&lt;br /&gt;
[[Category:Guides]]&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=TALC_Cluster_Guide&amp;diff=1616</id>
		<title>TALC Cluster Guide</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=TALC_Cluster_Guide&amp;diff=1616"/>
		<updated>2022-01-06T22:02:29Z</updated>

		<summary type="html">&lt;p&gt;Fridman: /* Time limits */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Message Box&lt;br /&gt;
|icon=Security Icon.png&lt;br /&gt;
|title=Cybersecurity awareness at the U of C&lt;br /&gt;
|message=Please note that there are typically about 950 phishing attempts targeting University of Calgary accounts each month. This is just a reminder to be careful about computer security issues, both at home and at the University. Please visit https://it.ucalgary.ca/it-security for more information, tips on secure computing, and how to report suspected security problems.}}&lt;br /&gt;
&lt;br /&gt;
This guide gives an overview of the Teaching and Learning Cluster (TALC) at the University of Calgary and is intended to be read by new account holders getting started on TALC. It covers such topics as the hardware and performance characteristics, available software, usage policies, and how to log in and run jobs.&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
TALC is a cluster of computers created by Research Computing Services (RCS) in response to requests for a central computing resource to support academic courses and workshops offered at the University of Calgary. It is a complement to the Advanced Research Computing (ARC) cluster that is used for research, rather than educational purposes. The software environment in the TALC and ARC clusters is very similar and workflows between the two clusters are identical.  What students learn about using TALC will have direct applicability to using ARC should they go on to use ARC for research work. &lt;br /&gt;
&lt;br /&gt;
If you are the instructor for a course that could benefit from using TALC, please review this guide and the [[TALC Terms of Use]] and then contact us at support@hpc.ucalgary.ca to discuss your requirements.  &lt;br /&gt;
&lt;br /&gt;
Please note that in order to ensure that the appropriate software is available, student accounts are in place, and appropriate training has been provided for your teaching assistants, it is best to start this discussion several months prior to the start of the course.&lt;br /&gt;
&lt;br /&gt;
If you are a student in a course using TALC, please review this guide for basic instructions in using the cluster.  Questions should first be directed to the teaching assistants or instructor for your course.&lt;br /&gt;
&lt;br /&gt;
===Obtaining an account===&lt;br /&gt;
TALC account requests are expected to be submitted by the course instructor rather than by individual students. You must have a University of Calgary IT account in order to use TALC. If you do not have a University of Calgary IT account or email address, please register for one at https://itregport.ucalgary.ca/. In order to ensure TALC is provisioned in time for a course start date, the instructor should submit the initial list of @ucalgary.ca accounts needed for the course at least two weeks before the start date.&lt;br /&gt;
&lt;br /&gt;
=== Getting Support ===&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=Need Help or have other TALC Related Questions?&lt;br /&gt;
|message=&#039;&#039;&#039;Students&#039;&#039;&#039;, please send TALC-related questions to your course instructor or teaching assistants.&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Course instructors and TAs&#039;&#039;&#039;, please report system issues to support@hpc.ucalgary.ca.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
==Hardware==&lt;br /&gt;
The TALC cluster is composed of repurposed research clusters that are a few generations old. As a result, individual processor performance will not be comparable to the latest processors, but it should be sufficient for educational purposes and course work.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Description&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores, Model, and Year&lt;br /&gt;
!Installed Memory&lt;br /&gt;
!GPU&lt;br /&gt;
!Network&lt;br /&gt;
|-&lt;br /&gt;
|gpu&lt;br /&gt;
|GPU Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores, 2x Intel Xeon Bronze 3204 CPU @ 1.90GHz (2019)&lt;br /&gt;
|192 GB&lt;br /&gt;
|5x NVIDIA Corporation TU104GL [Tesla T4]&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu16&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|36&lt;br /&gt;
|16 cores, 2x Eight-Core Intel Xeon CPU E5-2650 @ 2.00GHz (2012)&lt;br /&gt;
|64 GB&lt;br /&gt;
|N/A&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|2&lt;br /&gt;
|32 cores, 4x Intel(R) Xeon(R) CPU E7- 4830  @ 2.13GHz (2015)&lt;br /&gt;
|1024 GB&lt;br /&gt;
|N/A&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Storage===&lt;br /&gt;
{{Message Box&lt;br /&gt;
| title=No Backup Policy!&lt;br /&gt;
| message=You are responsible for your own backups.   Since accounts on TALC and related data are removed shortly after the associated course has finished, you should download anything you need to save to your own computer before the end of the course.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
TALC is connected to a network disk storage system. This storage is split across the &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; file systems.  &lt;br /&gt;
====&amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;: Home file system====&lt;br /&gt;
Each user has a directory under &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;, which is the default working directory when logging in to TALC. Each home directory has a per-user quota of 500 GB. This limit is fixed and cannot be increased.&lt;br /&gt;
&lt;br /&gt;
Note on file sharing: due to security concerns, permissions set using &amp;lt;code&amp;gt;chmod&amp;lt;/code&amp;gt; to allow other users to read or write your home directory will be automatically reverted by an automated system process unless an explicit exception is made. If you need to share files with others on the TALC cluster, please write to support@hpc.ucalgary.ca to ask for such an exception.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt;: Scratch file system for large job-oriented storage====&lt;br /&gt;
Associated with each job, a subdirectory is created under the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; directory that can be referenced in job scripts as &amp;lt;code&amp;gt;/scratch/${SLURM_JOB_ID}&amp;lt;/code&amp;gt;. You can use that directory for temporary files needed during the course of a job. Up to 30 TB of storage may be used per user (the total for all of your jobs) in the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; file system.&lt;br /&gt;
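&lt;br /&gt;
As a minimal sketch (the program and file names are hypothetical), a job script might stage temporary files in its per-job scratch directory and copy the results home before the job ends:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Work in the per-job scratch directory created for this job.&lt;br /&gt;
cd /scratch/${SLURM_JOB_ID}&lt;br /&gt;
&lt;br /&gt;
# Run the (hypothetical) program, writing temporary output here.&lt;br /&gt;
my_program --output results.txt&lt;br /&gt;
&lt;br /&gt;
# Copy anything worth keeping back to /home before the automatic cleanup.&lt;br /&gt;
cp results.txt ${HOME}/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;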
&lt;br /&gt;
Data in &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; associated with a given job will be deleted automatically, without exception, five days after the job finishes.&lt;br /&gt;
&lt;br /&gt;
== Software ==&lt;br /&gt;
{{Message Box&lt;br /&gt;
| title=Software Package Requests&lt;br /&gt;
| message=Course instructors or teaching assistants should write to support@hpc.ucalgary.ca if additional software is required for their course.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
All TALC nodes run a version of Rocky Linux. For your convenience, we have packaged commonly used software packages and dependencies as modules available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt;. If your software package is not available as a module, you may also try Anaconda, which allows users to manage and install custom packages in an isolated environment.&lt;br /&gt;
&lt;br /&gt;
For a list of the packages that have been made available, please see [[ARC Software pages]].&lt;br /&gt;
&lt;br /&gt;
=== Modules ===&lt;br /&gt;
The environment for using much of the installed software is set up through the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
Software packages bundled as a module will be available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt; and can be listed with the &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module avail&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To enable Python, load the Python module by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module load python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To unload the Python module, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module remove python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To see currently loaded modules, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module list&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Using TALC==&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=Usage subject to [[TALC Terms of Use]]&lt;br /&gt;
|message=Please review the [[TALC Terms of Use]] prior to using TALC.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
===Logging in===&lt;br /&gt;
To log in to TALC, connect using SSH to talc.ucalgary.ca. Connections to TALC are accepted only from the University of Calgary network (on campus) or through the University of Calgary General VPN (off campus).&lt;br /&gt;
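&lt;br /&gt;
For example, from a terminal (replace &amp;lt;code&amp;gt;username&amp;lt;/code&amp;gt; with your own UofC user name):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ ssh username@talc.ucalgary.ca&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;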
&lt;br /&gt;
See [[Connecting to RCS HPC Systems]] for more information.&lt;br /&gt;
&lt;br /&gt;
===Working interactively===&lt;br /&gt;
TALC uses the Linux operating system. The program that responds to your typed commands and allows you to run other programs is called the Linux shell. There are several different shells available, but by default you will use one called bash. It is useful to have some knowledge of the shell and of a variety of other command-line programs that you can use to manipulate files. If you are new to Linux systems, we recommend that you work through one of the many online tutorials that are available, such as the [http://www.ee.surrey.ac.uk/Teaching/Unix/index.html UNIX Tutorial for Beginners (external link)] provided by the University of Surrey. The tutorial covers such fundamental topics as creating, renaming and deleting files and directories, producing a listing of your files, and telling how much disk space you are using. For a more comprehensive introduction to Linux, see [http://linuxcommand.sourceforge.net/tlcl.php The Linux Command Line (external link)].&lt;br /&gt;
&lt;br /&gt;
The TALC login node may be used for such tasks as editing files, compiling programs and running short tests while developing programs. CPU intensive workloads on the login node should be restricted to under 15 minutes as per [[General Cluster Guidelines and Policies|our cluster guidelines]]. For interactive workloads exceeding 15 minutes, use the &#039;&#039;&#039;[[Running_jobs#Interactive_jobs|salloc command]]&#039;&#039;&#039; to allocate an interactive session on a compute node.&lt;br /&gt;
&lt;br /&gt;
The default &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; allocation is 1 CPU and 1 GB of memory. Adjust this by specifying &amp;lt;code&amp;gt;-n CPU#&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--mem Megabytes&amp;lt;/code&amp;gt;. You may request up to 5 hours of CPU time for interactive jobs.&lt;br /&gt;
 salloc --time 5:00:00 --partition cpu16&lt;br /&gt;
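&lt;br /&gt;
For example, an interactive session with 4 CPUs and 8000 MB of memory (the numbers are purely illustrative) could be requested with:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ salloc --time 2:00:00 --partition cpu16 -n 4 --mem 8000&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;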
&lt;br /&gt;
===Running non-interactive jobs (batch processing)===&lt;br /&gt;
Production runs and longer test runs should be submitted as (non-interactive) batch jobs, in which the commands to be executed are listed in a script (a text file). Batch job scripts are submitted using the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command, part of the Slurm job management and scheduling software. #SBATCH directive lines at the beginning of the script specify the resources needed for the job (cores, memory, run-time limit, and any specialized hardware needed).&lt;br /&gt;
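&lt;br /&gt;
As an illustrative sketch (the Python script is hypothetical, and the resource values are examples only), a minimal batch job script might look like:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# Resource requests for the scheduler (example values).&lt;br /&gt;
#SBATCH --partition=cpu16&lt;br /&gt;
#SBATCH --time=0:30:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH --mem=1000&lt;br /&gt;
&lt;br /&gt;
# Set up the software environment, then run the (hypothetical) program.&lt;br /&gt;
module load python/anaconda-3.6-5.1.0&lt;br /&gt;
python my_script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If the script were saved as, say, &amp;lt;code&amp;gt;my_job.slurm&amp;lt;/code&amp;gt;, it would be submitted with &amp;lt;code&amp;gt;sbatch my_job.slurm&amp;lt;/code&amp;gt;.&lt;br /&gt;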
&lt;br /&gt;
Most of the information on the Running Jobs page on the Compute Canada web site is also relevant for submitting and managing batch jobs and reserving processors for interactive work on TALC.  One major difference between running jobs on the TALC and Compute Canada clusters is in selecting the type of hardware that should be used for a job. On TALC, you choose the hardware to use primarily by specifying a partition, as described below.&lt;br /&gt;
&lt;br /&gt;
===Using JupyterHub on TALC===&lt;br /&gt;
TALC has a JupyterHub server, which runs a Jupyter server on one of the TALC compute nodes and provides all the necessary encryption and plumbing to deliver the notebook to your computer. To access this service you must have a TALC account. Point your browser at http://talc.ucalgary.ca and log in with your usual UofC account. As of this writing, the job that runs the Jupyter notebook is allocated 1 CPU and 10 GiB of memory on a cpu16 node.&lt;br /&gt;
&lt;br /&gt;
===Selecting a partition===&lt;br /&gt;
TALC currently has the following partitions available for use. The &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cpu12&amp;lt;/code&amp;gt; partitions refer to the same nodes; the &amp;lt;code&amp;gt;cpu12&amp;lt;/code&amp;gt; partition was created to expose only the CPUs on the GPU hardware for general-purpose use. Each GPU node has 5 Tesla T4 GPUs installed, but you may only request one per job within the TALC environment.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Description&lt;br /&gt;
!Nodes&lt;br /&gt;
!Cores&lt;br /&gt;
!Memory &lt;br /&gt;
!Memory Request Limit&lt;br /&gt;
!Time Limit&lt;br /&gt;
!GPU Request per Job&lt;br /&gt;
!Network&lt;br /&gt;
|-&lt;br /&gt;
|gpu&lt;br /&gt;
|GPU Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores&lt;br /&gt;
|192 GB&lt;br /&gt;
|190 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|1x NVIDIA Corporation TU104GL [Tesla T4]&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu12&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores&lt;br /&gt;
|192 GB&lt;br /&gt;
|190 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu16&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|36&lt;br /&gt;
|16 cores&lt;br /&gt;
|64 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|2&lt;br /&gt;
|32 cores&lt;br /&gt;
|1024 GB&lt;br /&gt;
|1022 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|}&lt;br /&gt;
There are several aspects to consider when selecting a partition, including:&lt;br /&gt;
* Resource requirements in terms of memory and CPU cores&lt;br /&gt;
* Hardware specific requirements, such as GPU or CPU Instruction Set Extensions&lt;br /&gt;
* Partition resource limits and potential wait time&lt;br /&gt;
* Software support for parallel processing using the Message Passing Interface (MPI), OpenMP, etc. For example, MPI can distribute memory across multiple nodes, so per-node memory requirements could be lower, whereas OpenMP or single-process serial code that is restricted to one node would require a higher-memory node.&lt;br /&gt;
&lt;br /&gt;
Since resources that are requested are reserved for your job, please request only as much CPU and memory as your job requires to avoid reducing the cluster efficiency.  If you are unsure which partition to use or the specific resource requests that are appropriate for your jobs, please contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] and we would be happy to work with you.&lt;br /&gt;
&lt;br /&gt;
=== Using a partition ===&lt;br /&gt;
&lt;br /&gt;
==== CPU only jobs ====&lt;br /&gt;
To select the &amp;lt;code&amp;gt;cpu16&amp;lt;/code&amp;gt; partition, include the following line in your batch job script:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --partition=cpu16&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;You may also start an interactive session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt;:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ salloc --time 1:00:00 -p cpu16&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== GPU jobs ====&lt;br /&gt;
In TALC, you are limited to exactly one GPU per job. Jobs that request zero GPUs, or two or more GPUs, will not be scheduled.&lt;br /&gt;
&lt;br /&gt;
To submit a job to the &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition with one GPU requested, include the following in your batch job script:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --partition=gpu&lt;br /&gt;
#SBATCH --gpus-per-node=1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As in the previous example, you may also request an interactive session on a GPU node using &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt;; just specify the &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition and the number of GPUs required.&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ salloc --time 1:00:00 -p gpu -n 1 --gpus-per-node 1 &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;You may verify that a GPU was assigned to your job or interactive session by running &amp;lt;code&amp;gt;nvidia-smi&amp;lt;/code&amp;gt;. This command will show you the status of the GPU that was assigned to you.&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ nvidia-smi&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| NVIDIA-SMI 450.80.02    Driver Version: 450.80.02    CUDA Version: 11.0     |&lt;br /&gt;
|-------------------------------+----------------------+----------------------+&lt;br /&gt;
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |&lt;br /&gt;
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |&lt;br /&gt;
|                               |                      |               MIG M. |&lt;br /&gt;
|===============================+======================+======================|&lt;br /&gt;
|   0  Tesla T4            Off  | 00000000:3B:00.0 Off |                    0 |&lt;br /&gt;
| N/A   36C    P0    14W /  70W |      0MiB / 15109MiB |      5%      Default |&lt;br /&gt;
|                               |                      |                  N/A |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
                                                                               &lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| Processes:                                                                  |&lt;br /&gt;
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |&lt;br /&gt;
|        ID   ID                                                   Usage      |&lt;br /&gt;
|=============================================================================|&lt;br /&gt;
|  No running processes found                                                 |&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Partition limitations ====&lt;br /&gt;
In addition to the hardware limitations of the nodes within the partition, please be aware that there may also be policy limits imposed on your account for each partition. These limits restrict the number of cores, nodes, or GPUs that can be used at any given time. Since the limits are applied on a partition-by-partition basis, using resources in one partition should not affect the available resources you can use in another partition.&lt;br /&gt;
&lt;br /&gt;
These limits can be listed by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sacctmgr show qos format=Name,MaxWall,MaxTRESPU%20,MaxSubmitJobs&lt;br /&gt;
      Name     MaxWall            MaxTRESPU MaxSubmit &lt;br /&gt;
---------- ----------- -------------------- --------- &lt;br /&gt;
    normal  1-00:00:00                                &lt;br /&gt;
  cpulimit                           cpu=48           &lt;br /&gt;
gpucpulim+                           cpu=18           &lt;br /&gt;
  gpulimit                 cpu=2,gres/gpu=1                &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Time limits ===&lt;br /&gt;
Use the &amp;lt;code&amp;gt;--time&amp;lt;/code&amp;gt; directive to tell the job scheduler the maximum time that your job might run.  For example:&lt;br /&gt;
 #SBATCH --time=hh:mm:ss&lt;br /&gt;
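&lt;br /&gt;
For instance, a job expected to run for at most four hours (within the 24-hour partition limit) would request:&lt;br /&gt;
 #SBATCH --time=4:00:00&lt;br /&gt;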
&lt;br /&gt;
You can use &amp;lt;code&amp;gt;scontrol show partitions&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; to see the current maximum time that a job can run.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; highlight=&amp;quot;6&amp;quot;&amp;gt;&lt;br /&gt;
$ scontrol show partitions&lt;br /&gt;
PartitionName=cpu16&lt;br /&gt;
   AllowGroups=ALL AllowAccounts=ALL AllowQos=ALL&lt;br /&gt;
   AllocNodes=ALL Default=YES QoS=cpulimit&lt;br /&gt;
   DefaultTime=NONE DisableRootJobs=NO ExclusiveUser=NO GraceTime=0 Hidden=NO&lt;br /&gt;
   MaxNodes=UNLIMITED MaxTime=1-00:00:00 MinNodes=0 LLN=NO MaxCPUsPerNode=UNLIMITED&lt;br /&gt;
   Nodes=n[1-36]&lt;br /&gt;
   PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO&lt;br /&gt;
   OverTimeLimit=NONE PreemptMode=OFF&lt;br /&gt;
   State=UP TotalCPUs=576 TotalNodes=36 SelectTypeParameters=NONE&lt;br /&gt;
   JobDefaults=(null)&lt;br /&gt;
   DefMemPerNode=UNLIMITED MaxMemPerNode=UNLIMITED&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, with &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; under the &amp;lt;code&amp;gt;TIMELIMIT&amp;lt;/code&amp;gt; column:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sinfo                                                     &lt;br /&gt;
PARTITION  AVAIL  TIMELIMIT  NODES  STATE NODELIST               &lt;br /&gt;
single        up 7-00:00:00      1 drain* cn097                  &lt;br /&gt;
single        up 7-00:00:00      1  maint cn002                  &lt;br /&gt;
single        up 7-00:00:00      4 drain* cn[001,061,133,154]    &lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
[[Category:TALC]]&lt;br /&gt;
[[Category:Guides]]&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=TALC_Cluster_Guide&amp;diff=1615</id>
		<title>TALC Cluster Guide</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=TALC_Cluster_Guide&amp;diff=1615"/>
		<updated>2022-01-06T22:01:27Z</updated>

		<summary type="html">&lt;p&gt;Fridman: /* Partition limitations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Message Box&lt;br /&gt;
|icon=Security Icon.png&lt;br /&gt;
|title=Cybersecurity awareness at the U of C&lt;br /&gt;
|message=Please note that there are typically about 950 phishing attempts targeting University of Calgary accounts each month. This is just a reminder to be careful about computer security issues, both at home and at the University. Please visit https://it.ucalgary.ca/it-security for more information, tips on secure computing, and how to report suspected security problems.}}&lt;br /&gt;
&lt;br /&gt;
This guide gives an overview of the Teaching and Learning Cluster (TALC) at the University of Calgary and is intended to be read by new account holders getting started on TALC. It covers such topics as the hardware and performance characteristics, available software, usage policies, and how to log in and run jobs.&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
TALC is a cluster of computers created by Research Computing Services (RCS) in response to requests for a central computing resource to support academic courses and workshops offered at the University of Calgary. It is a complement to the Advanced Research Computing (ARC) cluster that is used for research, rather than educational purposes. The software environment in the TALC and ARC clusters is very similar and workflows between the two clusters are identical.  What students learn about using TALC will have direct applicability to using ARC should they go on to use ARC for research work. &lt;br /&gt;
&lt;br /&gt;
If you are the instructor for a course that could benefit from using TALC, please review this guide and the [[TALC Terms of Use]] and then contact us at support@hpc.ucalgary.ca to discuss your requirements.  &lt;br /&gt;
&lt;br /&gt;
Please note that in order to ensure that the appropriate software is available, student accounts are in place, and appropriate training has been provided for your teaching assistants, it is best to start this discussion several months prior to the start of the course.&lt;br /&gt;
&lt;br /&gt;
If you are a student in a course using TALC, please review this guide for basic instructions in using the cluster.  Questions should first be directed to the teaching assistants or instructor for your course.&lt;br /&gt;
&lt;br /&gt;
===Obtaining an account===&lt;br /&gt;
TALC account requests are expected to be submitted by the course instructor rather than by individual students. You must have a University of Calgary IT account in order to use TALC. If you do not have a University of Calgary IT account or email address, please register for one at https://itregport.ucalgary.ca/. In order to ensure TALC is provisioned in time for a course start date, the instructor should submit the initial list of @ucalgary.ca accounts needed for the course at least two weeks before the start date.&lt;br /&gt;
&lt;br /&gt;
=== Getting Support ===&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=Need Help or have other TALC Related Questions?&lt;br /&gt;
|message=&#039;&#039;&#039;Students&#039;&#039;&#039;, please send TALC-related questions to your course instructor or teaching assistants.&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Course instructors and TAs&#039;&#039;&#039;, please report system issues to support@hpc.ucalgary.ca.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
==Hardware==&lt;br /&gt;
The TALC cluster is composed of repurposed research clusters that are a few generations old. As a result, individual processor performance will not be comparable to the latest processors, but it should be sufficient for educational purposes and course work.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Description&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores, Model, and Year&lt;br /&gt;
!Installed Memory&lt;br /&gt;
!GPU&lt;br /&gt;
!Network&lt;br /&gt;
|-&lt;br /&gt;
|gpu&lt;br /&gt;
|GPU Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores, 2x Intel Xeon Bronze 3204 CPU @ 1.90GHz (2019)&lt;br /&gt;
|192 GB&lt;br /&gt;
|5x NVIDIA Corporation TU104GL [Tesla T4]&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu16&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|36&lt;br /&gt;
|16 cores, 2x Eight-Core Intel Xeon CPU E5-2650 @ 2.00GHz (2012)&lt;br /&gt;
|64 GB&lt;br /&gt;
|N/A&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|2&lt;br /&gt;
|32 cores, 4x Intel(R) Xeon(R) CPU E7- 4830  @ 2.13GHz (2015)&lt;br /&gt;
|1024 GB&lt;br /&gt;
|N/A&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Storage===&lt;br /&gt;
{{Message Box&lt;br /&gt;
| title=No Backup Policy!&lt;br /&gt;
| message=You are responsible for your own backups.   Since accounts on TALC and related data are removed shortly after the associated course has finished, you should download anything you need to save to your own computer before the end of the course.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
TALC is connected to a network disk storage system. This storage is split across the &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; file systems.  &lt;br /&gt;
====&amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;: Home file system====&lt;br /&gt;
Each user has a directory under &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;, which is the default working directory when logging in to TALC. Each home directory has a per-user quota of 500 GB. This limit is fixed and cannot be increased.&lt;br /&gt;
&lt;br /&gt;
Note on file sharing: due to security concerns, permissions set using &amp;lt;code&amp;gt;chmod&amp;lt;/code&amp;gt; to allow other users to read or write your home directory will be automatically reverted by an automated system process unless an explicit exception is made. If you need to share files with others on the TALC cluster, please write to support@hpc.ucalgary.ca to ask for such an exception.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt;: Scratch file system for large job-oriented storage====&lt;br /&gt;
For each job, a subdirectory is created under the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; directory that can be referenced in job scripts as &amp;lt;code&amp;gt;/scratch/${SLURM_JOB_ID}&amp;lt;/code&amp;gt;. You can use that directory for temporary files needed during the course of a job. Up to 30 TB of storage may be used per user (total across all your jobs) in the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; file system.  &lt;br /&gt;
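For example (a sketch; &amp;lt;code&amp;gt;my_program&amp;lt;/code&amp;gt; and its option are placeholders), a job script could keep its temporary files in the per-job scratch directory and copy anything worth keeping back to &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt; before the job ends:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# per-job scratch directory created by the scheduler&lt;br /&gt;
TMPDIR=/scratch/${SLURM_JOB_ID}&lt;br /&gt;
./my_program --workdir ${TMPDIR}   # placeholder program and option&lt;br /&gt;
# files left in scratch are deleted five days after the job finishes&lt;br /&gt;
cp ${TMPDIR}/results.txt $HOME/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;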
&lt;br /&gt;
Data in &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; associated with a given job will be deleted automatically, without exception, five days after the job finishes.&lt;br /&gt;
&lt;br /&gt;
== Software ==&lt;br /&gt;
{{Message Box&lt;br /&gt;
| title=Software Package Requests&lt;br /&gt;
| message=Course instructors or teaching assistants should write to support@hpc.ucalgary.ca if additional software is required for their course.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
All TALC nodes run a version of Rocky Linux. For your convenience, we have packaged commonly used software packages and dependencies as modules available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt;. If your software package is not available as a module, you may also try Anaconda, which allows users to manage and install custom packages in an isolated environment.&lt;br /&gt;
&lt;br /&gt;
For a list of available packages, please see [[ARC Software pages]]. &lt;br /&gt;
&lt;br /&gt;
=== Modules ===&lt;br /&gt;
The environment for using the installed software is set up through the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
Software packages bundled as a module will be available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt; and can be listed with the &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module avail&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To enable Python, load the Python module by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module load python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To unload the Python module, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module remove python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To see currently loaded modules, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module list&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Using TALC==&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=Usage subject to [[TALC Terms of Use]]&lt;br /&gt;
|message=Please review the [[TALC Terms of Use]] prior to using TALC.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
===Logging in===&lt;br /&gt;
To log in to TALC, connect using SSH to talc.ucalgary.ca. Connections to TALC are accepted only from the University of Calgary network (on campus) or through the University of Calgary General VPN (off campus).&lt;br /&gt;
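For example, from a terminal (&amp;lt;code&amp;gt;username&amp;lt;/code&amp;gt; is a placeholder for your own IT account name):&lt;br /&gt;
 ssh username@talc.ucalgary.ca&lt;br /&gt;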
&lt;br /&gt;
See [[Connecting to RCS HPC Systems]] for more information.&lt;br /&gt;
&lt;br /&gt;
===Working interactively===&lt;br /&gt;
TALC uses the Linux operating system. The program that responds to your typed commands and allows you to run other programs is called the Linux shell. There are several different shells available, but by default you will use one called bash. It is useful to have some knowledge of the shell and of the variety of other command-line programs that you can use to manipulate files. If you are new to Linux systems, we recommend that you work through one of the many online tutorials that are available, such as the [http://www.ee.surrey.ac.uk/Teaching/Unix/index.html UNIX Tutorial for Beginners (external link)] provided by the University of Surrey. The tutorial covers fundamental topics such as creating, renaming and deleting files and directories, producing a listing of your files, and telling how much disk space you are using.  For a more comprehensive introduction to Linux, see [http://linuxcommand.sourceforge.net/tlcl.php The Linux Command Line (external link)].&lt;br /&gt;
&lt;br /&gt;
The TALC login node may be used for such tasks as editing files, compiling programs and running short tests while developing programs. CPU-intensive workloads on the login node should be restricted to under 15 minutes, as per [[General Cluster Guidelines and Policies|our cluster guidelines]]. For interactive workloads exceeding 15 minutes, use the &#039;&#039;&#039;[[Running_jobs#Interactive_jobs|salloc command]]&#039;&#039;&#039; to allocate an interactive session on a compute node.&lt;br /&gt;
&lt;br /&gt;
The default &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; allocation is 1 CPU and 1 GB of memory. Adjust this by specifying &amp;lt;code&amp;gt;-n CPU#&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--mem Megabytes&amp;lt;/code&amp;gt;. You may request up to 5 hours of CPU time for interactive jobs.&lt;br /&gt;
 salloc --time 5:00:00 --partition cpu16&lt;br /&gt;
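For example (the values are only an illustration), to request 4 CPUs and 8192 MB of memory for a two-hour interactive session on the cpu16 partition:&lt;br /&gt;
 salloc --time 2:00:00 --partition cpu16 -n 4 --mem 8192&lt;br /&gt;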
&lt;br /&gt;
===Running non-interactive jobs (batch processing)===&lt;br /&gt;
Production runs and longer test runs should be submitted as (non-interactive) batch jobs, in which the commands to be executed are listed in a script (text file). Batch job scripts are submitted using the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command, part of the Slurm job management and scheduling software. #SBATCH directive lines at the beginning of the script specify the resources needed for the job (cores, memory, run-time limit and any specialized hardware).&lt;br /&gt;
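&lt;br /&gt;
As a minimal sketch (the job name, resource values and command below are illustrative assumptions, not site defaults), a batch job script might look like this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# illustrative resource requests; adjust for your own job&lt;br /&gt;
#SBATCH --job-name=example-job&lt;br /&gt;
#SBATCH --partition=cpu16&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --mem=2G&lt;br /&gt;
#SBATCH --time=0:30:00&lt;br /&gt;
&lt;br /&gt;
# the commands the job should run&lt;br /&gt;
echo &amp;quot;Running on $(hostname)&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If the script were saved as, say, &amp;lt;code&amp;gt;example-job.slurm&amp;lt;/code&amp;gt;, it would be submitted with:&lt;br /&gt;
 sbatch example-job.slurm&lt;br /&gt;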
&lt;br /&gt;
Most of the information on the Running Jobs page on the Compute Canada web site is also relevant for submitting and managing batch jobs and reserving processors for interactive work on TALC.  One major difference between running jobs on the TALC and Compute Canada clusters is in selecting the type of hardware that should be used for a job. On TALC, you choose the hardware to use primarily by specifying a partition, as described below.&lt;br /&gt;
&lt;br /&gt;
===Using JupyterHub on TALC===&lt;br /&gt;
TALC has a JupyterHub server, which runs a Jupyter server on one of the TALC compute nodes and provides all the necessary encryption and plumbing to deliver the notebook to your computer.  To access this service you must have a TALC account. Point your browser at http://talc.ucalgary.ca and log in with your usual University of Calgary account.  As of this writing, the job that runs the Jupyter notebook is allocated 1 CPU and 10 GiB of memory on a cpu16 node.&lt;br /&gt;
&lt;br /&gt;
===Selecting a partition===&lt;br /&gt;
TALC currently has the following partitions available for use. The &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cpu12&amp;lt;/code&amp;gt; partitions refer to the same nodes; the &amp;lt;code&amp;gt;cpu12&amp;lt;/code&amp;gt; partition was created to expose only the CPUs on the GPU hardware for general-purpose use. Each GPU node has 5 Tesla T4 GPUs installed, but you may request only one per job within the TALC environment.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Description&lt;br /&gt;
!Nodes&lt;br /&gt;
!Cores&lt;br /&gt;
!Memory &lt;br /&gt;
!Memory Request Limit&lt;br /&gt;
!Time Limit&lt;br /&gt;
!GPU Request per Job&lt;br /&gt;
!Network&lt;br /&gt;
|-&lt;br /&gt;
|gpu&lt;br /&gt;
|GPU Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores&lt;br /&gt;
|192 GB&lt;br /&gt;
|190 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|1x NVIDIA Corporation TU104GL [Tesla T4]&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu12&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores&lt;br /&gt;
|192 GB&lt;br /&gt;
|190 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu16&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|36&lt;br /&gt;
|16 cores&lt;br /&gt;
|64 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|2&lt;br /&gt;
|32 cores&lt;br /&gt;
|1024 GB&lt;br /&gt;
|1022 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|}&lt;br /&gt;
There are several aspects to consider when selecting a partition, including:&lt;br /&gt;
* Resource requirements in terms of memory and CPU cores&lt;br /&gt;
* Hardware specific requirements, such as GPU or CPU Instruction Set Extensions&lt;br /&gt;
* Partition resource limits and potential wait time&lt;br /&gt;
* Software support for parallel processing using the Message Passing Interface (MPI), OpenMP, etc. For example, MPI can distribute memory across multiple nodes, so per-node memory requirements could be lower, whereas OpenMP or single-process serial code is restricted to one node and would require a higher-memory node (see the sketch right after this list).&lt;br /&gt;
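As an illustration of the last point (a sketch only; the numbers are assumptions chosen for the example), an MPI job needing 64 GB in total could spread it across four cpu16 nodes and stay under the 62 GB per-node request limit:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --partition=cpu16&lt;br /&gt;
#SBATCH --nodes=4&lt;br /&gt;
#SBATCH --ntasks-per-node=4&lt;br /&gt;
# --mem is per node: 4 nodes x 16 GB = 64 GB in total&lt;br /&gt;
#SBATCH --mem=16G&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;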
&lt;br /&gt;
Since resources that are requested are reserved for your job, please request only as much CPU and memory as your job requires to avoid reducing the cluster efficiency.  If you are unsure which partition to use or the specific resource requests that are appropriate for your jobs, please contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] and we would be happy to work with you.&lt;br /&gt;
&lt;br /&gt;
=== Using a partition ===&lt;br /&gt;
&lt;br /&gt;
==== CPU only jobs ====&lt;br /&gt;
To select the &amp;lt;code&amp;gt;cpu16&amp;lt;/code&amp;gt; partition, include the following line in your batch job script:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --partition=cpu16&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;You may also start an interactive session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt;:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ salloc --time 1:00:00 -p cpu16&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== GPU jobs ====&lt;br /&gt;
On TALC, you are limited to exactly 1 GPU per job. Jobs that request zero GPUs, or two or more GPUs, will not be scheduled.&lt;br /&gt;
&lt;br /&gt;
To submit a job using the &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition with one GPU request, include the following in your batch job script:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --partition=gpu&lt;br /&gt;
#SBATCH --gpus-per-node=1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As in the previous example, you may also request interactive sessions on GPU nodes using &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt;; just specify the &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition and the number of GPUs required. &amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ salloc --time 1:00:00 -p gpu -n 1 --gpus-per-node 1 &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;You may verify that a GPU was assigned to your job or interactive session by running &amp;lt;code&amp;gt;nvidia-smi&amp;lt;/code&amp;gt;. This command will show you the status of the GPU that was assigned to you.&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ nvidia-smi&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| NVIDIA-SMI 450.80.02    Driver Version: 450.80.02    CUDA Version: 11.0     |&lt;br /&gt;
|-------------------------------+----------------------+----------------------+&lt;br /&gt;
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |&lt;br /&gt;
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |&lt;br /&gt;
|                               |                      |               MIG M. |&lt;br /&gt;
|===============================+======================+======================|&lt;br /&gt;
|   0  Tesla T4            Off  | 00000000:3B:00.0 Off |                    0 |&lt;br /&gt;
| N/A   36C    P0    14W /  70W |      0MiB / 15109MiB |      5%      Default |&lt;br /&gt;
|                               |                      |                  N/A |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
                                                                               &lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| Processes:                                                                  |&lt;br /&gt;
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |&lt;br /&gt;
|        ID   ID                                                   Usage      |&lt;br /&gt;
|=============================================================================|&lt;br /&gt;
|  No running processes found                                                 |&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Partition limitations ====&lt;br /&gt;
In addition to the hardware limitations of the nodes within the partition, please be aware that there may also be policy limits imposed on your account for each partition. These limits restrict the number of cores, nodes, or GPUs that can be used at any given time. Since the limits are applied on a partition-by-partition basis, using resources in one partition should not affect the available resources you can use in another partition.&lt;br /&gt;
&lt;br /&gt;
These limits can be listed by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sacctmgr show qos format=Name,MaxWall,MaxTRESPU%20,MaxSubmitJobs&lt;br /&gt;
      Name     MaxWall            MaxTRESPU MaxSubmit &lt;br /&gt;
---------- ----------- -------------------- --------- &lt;br /&gt;
    normal  1-00:00:00                                &lt;br /&gt;
  cpulimit                           cpu=48           &lt;br /&gt;
gpucpulim+                           cpu=18           &lt;br /&gt;
  gpulimit                 cpu=2,gres/gpu=1                &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Time limits ===&lt;br /&gt;
Use the &amp;lt;code&amp;gt;--time&amp;lt;/code&amp;gt; directive to tell the job scheduler the maximum time that your job might run.  For example:&lt;br /&gt;
 #SBATCH --time=hh:mm:ss&lt;br /&gt;
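A concrete case: a job expected to run for at most 12 hours would use:&lt;br /&gt;
 #SBATCH --time=12:00:00&lt;br /&gt;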
&lt;br /&gt;
You can use &amp;lt;code&amp;gt;scontrol show partitions&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; to see the current maximum time that a job can run.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; highlight=&amp;quot;6&amp;quot;&amp;gt;&lt;br /&gt;
$ scontrol show partitions&lt;br /&gt;
PartitionName=cpu16&lt;br /&gt;
   AllowGroups=ALL AllowAccounts=ALL AllowQos=ALL&lt;br /&gt;
   AllocNodes=ALL Default=NO QoS=cpu16&lt;br /&gt;
   DefaultTime=NONE DisableRootJobs=NO ExclusiveUser=NO GraceTime=0 Hidden=NO&lt;br /&gt;
   MaxNodes=UNLIMITED MaxTime=22:10:00 MinNodes=0 LLN=NO MaxCPUsPerNode=UNLIMITED&lt;br /&gt;
   Nodes=n[1-36]&lt;br /&gt;
   PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO&lt;br /&gt;
   OverTimeLimit=NONE PreemptMode=OFF&lt;br /&gt;
   State=UP TotalCPUs=576 TotalNodes=36 SelectTypeParameters=NONE&lt;br /&gt;
   JobDefaults=(null)&lt;br /&gt;
   DefMemPerNode=UNLIMITED MaxMemPerNode=UNLIMITED&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, with &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; under the &amp;lt;code&amp;gt;TIMELIMIT&amp;lt;/code&amp;gt; column:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sinfo                                                     &lt;br /&gt;
PARTITION  AVAIL  TIMELIMIT  NODES  STATE NODELIST               &lt;br /&gt;
single        up 7-00:00:00      1 drain* cn097                  &lt;br /&gt;
single        up 7-00:00:00      1  maint cn002                  &lt;br /&gt;
single        up 7-00:00:00      4 drain* cn[001,061,133,154]    &lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
[[Category:TALC]]&lt;br /&gt;
[[Category:Guides]]&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=List_of_courses_on_TALC&amp;diff=1614</id>
		<title>List of courses on TALC</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=List_of_courses_on_TALC&amp;diff=1614"/>
		<updated>2022-01-06T19:41:16Z</updated>

		<summary type="html">&lt;p&gt;Fridman: /* Winter 2022 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Message Box&lt;br /&gt;
|title=Interested in using TALC for your course?&lt;br /&gt;
|message=&#039;&#039;&#039;If you are the instructor for a course that could benefit from using TALC, please contact us at support@hpc.ucalgary.ca to discuss your requirements.&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br &amp;gt;To ensure that the appropriate software is available, student accounts are in place, and appropriate training has been provided for your teaching assistants, it is best to start this discussion several months prior to the start of the course.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
= Information =&lt;br /&gt;
&lt;br /&gt;
* TALC Terms of Use: https://rcs.ucalgary.ca/TALC_Terms_of_Use&lt;br /&gt;
&lt;br /&gt;
* TALC Guide: https://rcs.ucalgary.ca/TALC_Cluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== A New Course Checklist ==&lt;br /&gt;
&lt;br /&gt;
At least &#039;&#039;&#039;four weeks&#039;&#039;&#039; before the course starts:&lt;br /&gt;
* Please contact us about the course you are going to teach using TALC.&lt;br /&gt;
&lt;br /&gt;
* Please provide us with the &#039;&#039;&#039;course ID&#039;&#039;&#039;, &#039;&#039;&#039;name of the course&#039;&#039;&#039;, and the &#039;&#039;&#039;name of the instructor&#039;&#039;&#039; for the course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Soon after&#039;&#039;&#039;:&lt;br /&gt;
* Request &#039;&#039;&#039;accounts&#039;&#039;&#039; on TALC for yourself and for any TAs who will be helping with the course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Please note that&#039;&#039;&#039;&lt;br /&gt;
: it is the &#039;&#039;&#039;instructor&#039;s responsibility&#039;&#039;&#039; to make sure that the &#039;&#039;&#039;software&#039;&#039;&#039; required for the course is &#039;&#039;&#039;available&#039;&#039;&#039; and &#039;&#039;&#039;works&#039;&#039;&#039; on TALC. &lt;br /&gt;
: if special software is required for the course, it is recommended to request it &#039;&#039;&#039;well before the course starts&#039;&#039;&#039; to allow enough time for installation and testing.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* If you are planning to share data with the students, request a &#039;&#039;&#039;shared directory for the course&#039;&#039;&#039;. &lt;br /&gt;
: There will also be a &#039;&#039;&#039;unix group&#039;&#039;&#039; on TALC to control access to the shared directory.&lt;br /&gt;
: The directory name and the unix group will probably have the same name, like &amp;quot;course601-21&amp;quot;. &lt;br /&gt;
: Your account and the TAs&#039; accounts have to be added to the access group.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* The data for courses run on TALC is deleted once the course is over, so the &#039;&#039;&#039;shared directory&#039;&#039;&#039; and &#039;&#039;&#039;software&#039;&#039;&#039; for a course have to be set up every time the course is run.&lt;br /&gt;
: If you need to build / have some &#039;&#039;&#039;specific software&#039;&#039;&#039; on TALC for the course, please start early. &lt;br /&gt;
: It needs to be &#039;&#039;&#039;installed and tested&#039;&#039;&#039; well before the course starts, so that a solution can be found in case something does not work.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* If you &#039;&#039;&#039;need help with setting up the software&#039;&#039;&#039; for the course, let RCS support know at support@hpc.ucalgary.ca.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* During the course, if students have &#039;&#039;&#039;difficulties with using TALC&#039;&#039;&#039;, the TAs are expected to help the students.&lt;br /&gt;
: If the TA cannot solve the issue, we prefer that the TA contact us rather than the student who has the problem.&lt;br /&gt;
: If TAs need training, this has to be arranged with us (RCS) before the course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* As soon as you have a &#039;&#039;&#039;list of students&#039;&#039;&#039; who are going to take the course, please send it to support@hpc.ucalgary.ca.&lt;br /&gt;
: The list has to include the students&#039; &#039;&#039;&#039;names&#039;&#039;&#039; as well as their associated &#039;&#039;&#039;UofC email addresses&#039;&#039;&#039;.&lt;br /&gt;
: The accounts will be created and added to the access group.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* If you have any concerns or questions about running a course on TALC, please let us know.&lt;br /&gt;
&lt;br /&gt;
= Courses =&lt;br /&gt;
&lt;br /&gt;
== Current Courses ==&lt;br /&gt;
&lt;br /&gt;
=== Winter 2022 ===&lt;br /&gt;
&lt;br /&gt;
* GLGY 605 - Groundwater Flow and Reactive Transport Modelling, Benjamin Tutolo (requested Nov 2021)&lt;br /&gt;
&lt;br /&gt;
* MDSC 201 - Applied Bioinformatics, David Anderson (requested Nov 2021)&lt;br /&gt;
&lt;br /&gt;
* MDSC 519 - Advanced Bioinformatics, David Anderson (requested Nov 2021)&lt;br /&gt;
&lt;br /&gt;
* DATA 608 - Developing Big Data Applications, Leanne Wu (requested Nov 2021).&lt;br /&gt;
&lt;br /&gt;
* BMEN 415 - Sensor Systems and Data Analytics, Ethan MacDonald (requested Dec 2021).&lt;br /&gt;
&lt;br /&gt;
* CPSC 601.04 - High Performance Scientific Computing and Visualization, Usman Alim (late request, Jan 2022);&lt;br /&gt;
&lt;br /&gt;
* ENEL 645 - Data Mining &amp;amp; Machine Learning, Roberto Souza (late request, Jan 2022);&lt;br /&gt;
&lt;br /&gt;
== Past Courses ==&lt;br /&gt;
&lt;br /&gt;
=== Fall 2021 ===&lt;br /&gt;
&lt;br /&gt;
* ENSF 619.01 - Ethan MacDonald&lt;br /&gt;
* MDSC 523 - David Anderson&lt;br /&gt;
* ENSF 619.02 - Roberto Medeiros de Souza&lt;br /&gt;
* ENSF 612 - Gias Uddin and Ajoy Das&lt;br /&gt;
&lt;br /&gt;
=== 2021 Winter ===&lt;br /&gt;
* GLGY 605 - Benjamin Tutolo&lt;br /&gt;
* BMEN 415 - Ethan MacDonald&lt;br /&gt;
* MDSC 201 - David Anderson&lt;br /&gt;
&lt;br /&gt;
=== 2020 Spring ===&lt;br /&gt;
* Bioinformatics workshop Q Zhang&lt;br /&gt;
* Bioinformatics workshop Q Zhang&lt;br /&gt;
&lt;br /&gt;
=== 2020 Winter ===&lt;br /&gt;
* DATA 623 R Walker&lt;br /&gt;
* GLGY 605 B Tutolo&lt;br /&gt;
* DATA 608 P Federl&lt;br /&gt;
* ENSF 612 J Kaur&lt;br /&gt;
&lt;br /&gt;
=== 2020 Winter Block Week ===&lt;br /&gt;
* MDSC 395 D Anderson&lt;br /&gt;
&lt;br /&gt;
= Resources =&lt;br /&gt;
&lt;br /&gt;
== Academic Schedule ==&lt;br /&gt;
* See https://www.ucalgary.ca/pubs/calendar/current/academic-schedule.html for the UofC&#039;s academic schedule. &lt;br /&gt;
&lt;br /&gt;
[[Category:TALC]]&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=HPC_Systems&amp;diff=1613</id>
		<title>HPC Systems</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=HPC_Systems&amp;diff=1613"/>
		<updated>2022-01-06T18:28:15Z</updated>

		<summary type="html">&lt;p&gt;Fridman: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;RCS manages various high performance computing clusters for the University of Calgary.&lt;br /&gt;
&lt;br /&gt;
{{Clear}}&lt;br /&gt;
&amp;lt;div class=&amp;quot;row&amp;quot;&amp;gt;&lt;br /&gt;
{{Cluster Box&lt;br /&gt;
|letter=A&lt;br /&gt;
|color=cef2e0&lt;br /&gt;
|link=ARC Cluster Guide&lt;br /&gt;
|title=ARC - Advanced Research Computing&lt;br /&gt;
|description=ARC is a general purpose cluster for University of Calgary researchers.&lt;br /&gt;
|content=&lt;br /&gt;
* [[General Cluster Guidelines and Policies]]&lt;br /&gt;
* [[ARC Cluster Guide|ARC Cluster Guide]]&lt;br /&gt;
* [[Managing software on ARC]]&lt;br /&gt;
* [[Gaussian on ARC]] -- How to use Gaussian 16 on ARC.&lt;br /&gt;
* [[Apache Spark on ARC]]&lt;br /&gt;
}}&lt;br /&gt;
{{Cluster Box&lt;br /&gt;
|letter=T&lt;br /&gt;
|color=ddcef2&lt;br /&gt;
|link=TALC Cluster Guide&lt;br /&gt;
|title=TALC - Teaching And Learning Computing&lt;br /&gt;
|description=TALC is a cluster to support academic courses and workshops.&lt;br /&gt;
|content=&lt;br /&gt;
* [[TALC Cluster Guide|TALC Cluster Guide]]&lt;br /&gt;
* [[TALC Terms of Use]]&lt;br /&gt;
* [[List of courses on TALC]] &lt;br /&gt;
}}&lt;br /&gt;
{{Clear}}&lt;br /&gt;
{{Cluster Box&lt;br /&gt;
|letter=M&lt;br /&gt;
|color=f2cece&lt;br /&gt;
|link=MARC Cluster Guide&lt;br /&gt;
|title=MARC - Medical Advanced Research Computing&lt;br /&gt;
|description=MARC is a specialized cluster for processing level 4 data.&lt;br /&gt;
|content=&lt;br /&gt;
* [[General Cluster Guidelines and Policies]]&lt;br /&gt;
* [[MARC Cluster Guide]]&lt;br /&gt;
* [[MARC accounts]]&lt;br /&gt;
}}&lt;br /&gt;
{{Cluster Box&lt;br /&gt;
|letter=G&lt;br /&gt;
|color=cedff2&lt;br /&gt;
|title=GLaDOS&lt;br /&gt;
|description=GLaDOS is a researcher-owned compute cluster&lt;br /&gt;
|content=&lt;br /&gt;
* [[General Cluster Guidelines and Policies]]&lt;br /&gt;
* [[GLaDOS Guide | GLaDOS cluster]]&lt;br /&gt;
}}&lt;br /&gt;
{{Clear}}&lt;br /&gt;
{{Cluster Box&lt;br /&gt;
|letter=H&lt;br /&gt;
|color=f2ecce&lt;br /&gt;
|title=Helix &lt;br /&gt;
|description=Helix is a specialized cluster for Cumming School of Medicine projects&lt;br /&gt;
|content=&lt;br /&gt;
* [[General Cluster Guidelines and Policies]]&lt;br /&gt;
* [[Helix Guide | Helix cluster]]&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
{{Clear}}&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=TALC_Terms_of_Use&amp;diff=1612</id>
		<title>TALC Terms of Use</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=TALC_Terms_of_Use&amp;diff=1612"/>
		<updated>2022-01-06T18:22:53Z</updated>

		<summary type="html">&lt;p&gt;Fridman: /* Guidelines and Regulations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [[TALC Cluster Guide|Teaching and Learning Cluster (TALC)]] is a computing resource provided by Research Computing Services (RCS) to support approved courses and workshops.  Usage of the cluster is subject to the conditions outlined on this page.&lt;br /&gt;
&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=Questions or Concerns?&lt;br /&gt;
|message=Please send all questions and inquiries to support@hpc.ucalgary.ca.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
==Guidelines and Regulations==&lt;br /&gt;
* Usage must not violate Municipal, Provincial and Federal laws&lt;br /&gt;
* Usage must not violate the University&#039;s Policies and Procedures outlined in the [https://www.ucalgary.ca/legal-services/university-policies-procedures/acceptable-use-electronic-resources-and-information-policy Acceptable Use of Electronic Resources and Information Policy]&lt;br /&gt;
* This account is for your use only.  It is made available to you specifically for the course that requires the use of TALC.  You must not share your password or let anyone use your account.&lt;br /&gt;
* Commercial use of TALC, including digital currency mining, is strictly prohibited.&lt;br /&gt;
* TALC is configured for Level 1 and Level 2 data as set forth in the [https://www.ucalgary.ca/legal-services/sites/default/files/teams/1/Standards-Legal-Information-Security-Classification-Standard.pdf University of Calgary Information Security Classification Standard] and is not suitable for Level 3 or Level 4 data.&lt;br /&gt;
* RCS reserves the right to examine files, programs and any other material used on RCS systems at any time without warning.&lt;br /&gt;
* Evidence of inappropriate use of TALC may result in immediate loss of access to your TALC account and may constitute academic misconduct.&lt;br /&gt;
* Please note that no backups are performed for data stored on TALC.  It is your responsibility to copy data you need to save elsewhere.  By default, your account and data will be deleted one week after the last course associated with the account has finished.&lt;br /&gt;
* To ensure that the appropriate software is available, student accounts are in place, and appropriate training has been provided for your teaching assistants, it is best to contact RCS several months prior to the start of the course. &lt;br /&gt;
&lt;br /&gt;
[[Category:TALC]]&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=TALC_Terms_of_Use&amp;diff=1611</id>
		<title>TALC Terms of Use</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=TALC_Terms_of_Use&amp;diff=1611"/>
		<updated>2022-01-06T18:21:15Z</updated>

		<summary type="html">&lt;p&gt;Fridman: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [[TALC Cluster Guide|Teaching and Learning Cluster (TALC)]] is a computing resource provided by Research Computing Services (RCS) to support approved courses and workshops.  Usage of the cluster is subject to the conditions outlined on this page.&lt;br /&gt;
&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=Questions or Concerns?&lt;br /&gt;
|message=Please send all questions and inquiries to support@hpc.ucalgary.ca.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
==Guidelines and Regulations==&lt;br /&gt;
* Usage must not violate Municipal, Provincial and Federal laws&lt;br /&gt;
* Usage must not violate the University&#039;s Policies and Procedures outlined in the [https://www.ucalgary.ca/legal-services/university-policies-procedures/acceptable-use-electronic-resources-and-information-policy Acceptable Use of Electronic Resources and Information Policy]&lt;br /&gt;
* This account is for your use only.  It is made available to you specifically for the course that requires the use of TALC.  You must not share your password or let anyone use your account.&lt;br /&gt;
* Commercial use of TALC, including digital currency mining, is strictly prohibited.&lt;br /&gt;
* TALC is configured for Level 1 and Level 2 data as set forth in the [https://www.ucalgary.ca/policies/files/policies/im010-03-security-standard_0.pdf University of Calgary Information Security Classification Standard] and is not suitable for Level 3 or Level 4 data.&lt;br /&gt;
* RCS reserves the right to examine files, programs and any other material used on RCS systems at any time without warning.&lt;br /&gt;
* Evidence of inappropriate use of TALC may result in immediate loss of access to your TALC account and may constitute academic misconduct.&lt;br /&gt;
* Please note that no backups are performed for data stored on TALC.  It is your responsibility to copy data you need to save elsewhere.  By default, your account and data will be deleted one week after the last course associated with the account has finished.&lt;br /&gt;
* To ensure that the appropriate software is available, student accounts are in place, and appropriate training has been provided for your teaching assistants, it is best to contact RCS several months prior to the start of the course. &lt;br /&gt;
&lt;br /&gt;
[[Category:TALC]]&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=TALC_Cluster_Guide&amp;diff=1610</id>
		<title>TALC Cluster Guide</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=TALC_Cluster_Guide&amp;diff=1610"/>
		<updated>2022-01-06T18:02:53Z</updated>

		<summary type="html">&lt;p&gt;Fridman: /* Time limits */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Message Box&lt;br /&gt;
|icon=Security Icon.png&lt;br /&gt;
|title=Cybersecurity awareness at the U of C&lt;br /&gt;
|message=Please note that there are typically about 950 phishing attempts targeting University of Calgary accounts each month. This is just a reminder to be careful about computer security issues, both at home and at the University. Please visit https://it.ucalgary.ca/it-security for more information, tips on secure computing, and how to report suspected security problems.}}&lt;br /&gt;
&lt;br /&gt;
This guide gives an overview of the Teaching and Learning Cluster (TALC) at the University of Calgary and is intended to be read by new account holders getting started on TALC. It covers topics such as hardware and performance characteristics, available software, usage policies, and how to log in and run jobs. &lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
TALC is a cluster of computers created by Research Computing Services (RCS) in response to requests for a central computing resource to support academic courses and workshops offered at the University of Calgary. It is a complement to the Advanced Research Computing (ARC) cluster, which is used for research rather than educational purposes. The software environment in the TALC and ARC clusters is very similar and workflows between the two clusters are identical.  What students learn about using TALC will have direct applicability to using ARC should they go on to use ARC for research work. &lt;br /&gt;
&lt;br /&gt;
If you are the instructor for a course that could benefit from using TALC, please review this guide and the [[TALC Terms of Use]] and then contact us at support@hpc.ucalgary.ca to discuss your requirements.  &lt;br /&gt;
&lt;br /&gt;
Please note that in order to ensure that the appropriate software is available, student accounts are in place, and appropriate training has been provided for your teaching assistants, it is best to start this discussion several months prior to the start of the course.&lt;br /&gt;
&lt;br /&gt;
If you are a student in a course using TALC, please review this guide for basic instructions in using the cluster.  Questions should first be directed to the teaching assistants or instructor for your course.&lt;br /&gt;
&lt;br /&gt;
===Obtaining an account===&lt;br /&gt;
TALC account requests are expected to be submitted by the course instructor rather than by individual students. You must have a University of Calgary IT account in order to use TALC. If you do not have a University of Calgary IT account or email address, please register for one at https://itregport.ucalgary.ca/. To ensure TALC is provisioned in time for a course start date, the instructor should submit the initial list of @ucalgary.ca accounts needed for the course at least two weeks before the start date.&lt;br /&gt;
&lt;br /&gt;
=== Getting Support ===&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=Need Help or have other TALC Related Questions?&lt;br /&gt;
|message=&#039;&#039;&#039;Students&#039;&#039;&#039;, please send TALC-related questions to your course instructor or teaching assistants.&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Course instructors and TAs&#039;&#039;&#039;, please report system issues to support@hpc.ucalgary.ca.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
==Hardware==&lt;br /&gt;
The TALC cluster is composed of repurposed research clusters that are a few generations old. As a result, individual processor performance will not be comparable to the latest processors, but it should be sufficient for educational purposes and course work.  &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Description&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores, Model, and Year&lt;br /&gt;
!Installed Memory&lt;br /&gt;
!GPU&lt;br /&gt;
!Network&lt;br /&gt;
|-&lt;br /&gt;
|gpu&lt;br /&gt;
|GPU Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores, 2x Intel Xeon Bronze 3204 CPU @ 1.90GHz (2019)&lt;br /&gt;
|192 GB&lt;br /&gt;
|5x NVIDIA Corporation TU104GL [Tesla T4]&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu16&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|36&lt;br /&gt;
|16 cores, 2x Eight-Core Intel Xeon CPU E5-2650 @ 2.00GHz (2012)&lt;br /&gt;
|64 GB&lt;br /&gt;
|N/A&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|2&lt;br /&gt;
|32 cores, 4x Intel(R) Xeon(R) CPU E7- 4830  @ 2.13GHz (2015)&lt;br /&gt;
|1024 GB&lt;br /&gt;
|N/A&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Storage===&lt;br /&gt;
{{Message Box&lt;br /&gt;
| title=No Backup Policy!&lt;br /&gt;
| message=You are responsible for your own backups.   Since accounts on TALC and related data are removed shortly after the associated course has finished, you should download anything you need to save to your own computer before the end of the course.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
TALC is connected to a network disk storage system. This storage is split across the &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; file systems.  &lt;br /&gt;
====&amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;: Home file system====&lt;br /&gt;
Each user has a directory under &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;, which is the default working directory when logging in to TALC. Each home directory has a per-user quota of 500 GB. This limit is fixed and cannot be increased.&lt;br /&gt;
&lt;br /&gt;
Note on file sharing: due to security concerns, permissions set using &amp;lt;code&amp;gt;chmod&amp;lt;/code&amp;gt; to give other users read/write access to your home directory will be reverted automatically by a system process unless an explicit exception is made.  If you need to share files with others on the TALC cluster, please write to support@hpc.ucalgary.ca to ask for such an exception.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt;: Scratch file system for large job-oriented storage====&lt;br /&gt;
For each job, a subdirectory is created under the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; directory that can be referenced in job scripts as &amp;lt;code&amp;gt;/scratch/${SLURM_JOB_ID}&amp;lt;/code&amp;gt;. You can use that directory for temporary files needed during the course of a job. Up to 30 TB of storage may be used per user (total across all your jobs) in the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; file system.  &lt;br /&gt;
&lt;br /&gt;
Data in &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; associated with a given job will be deleted automatically, without exception, five days after the job finishes.&lt;br /&gt;
&lt;br /&gt;
== Software ==&lt;br /&gt;
{{Message Box&lt;br /&gt;
| title=Software Package Requests&lt;br /&gt;
| message=Course instructors or teaching assistants should write to support@hpc.ucalgary.ca if additional software is required for their course.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
All TALC nodes run a version of Rocky Linux. For your convenience, we have packaged commonly used software packages and dependencies as modules available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt;. If your software package is not available as a module, you may also try Anaconda, which allows users to manage and install custom packages in an isolated environment.&lt;br /&gt;
&lt;br /&gt;
For a list of available packages, please see [[ARC Software pages]]. &lt;br /&gt;
&lt;br /&gt;
=== Modules ===&lt;br /&gt;
The environment for using the installed software is set up through the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
Software packages bundled as a module will be available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt; and can be listed with the &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module avail&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To enable Python, load the Python module by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module load python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To unload the Python module, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module remove python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To see currently loaded modules, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module list&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Using TALC==&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=Usage subject to [[TALC Terms of Use]]&lt;br /&gt;
|message=Please review the [[TALC Terms of Use]] prior to using TALC.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
===Logging in===&lt;br /&gt;
To log in to TALC, connect using SSH to talc.ucalgary.ca. Connections to TALC are accepted only from the University of Calgary network (on campus) or through the University of Calgary General VPN (off campus).&lt;br /&gt;
&lt;br /&gt;
See [[Connecting to RCS HPC Systems]] for more information.&lt;br /&gt;
&lt;br /&gt;
===Working interactively===&lt;br /&gt;
TALC uses the Linux operating system. The program that responds to your typed commands and allows you to run other programs is called the Linux shell. There are several different shells available, but by default you will use one called bash. It is useful to have some knowledge of the shell and of the variety of other command-line programs that you can use to manipulate files. If you are new to Linux systems, we recommend that you work through one of the many online tutorials that are available, such as the [http://www.ee.surrey.ac.uk/Teaching/Unix/index.html UNIX Tutorial for Beginners (external link)] provided by the University of Surrey. The tutorial covers fundamental topics such as creating, renaming and deleting files and directories, producing a listing of your files, and telling how much disk space you are using.  For a more comprehensive introduction to Linux, see [http://linuxcommand.sourceforge.net/tlcl.php The Linux Command Line (external link)].&lt;br /&gt;
&lt;br /&gt;
The TALC login node may be used for such tasks as editing files, compiling programs and running short tests while developing programs. CPU-intensive workloads on the login node should be restricted to under 15 minutes, as per [[General Cluster Guidelines and Policies|our cluster guidelines]]. For interactive workloads exceeding 15 minutes, use the &#039;&#039;&#039;[[Running_jobs#Interactive_jobs|salloc command]]&#039;&#039;&#039; to allocate an interactive session on a compute node.&lt;br /&gt;
&lt;br /&gt;
The default &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; allocation is 1 CPU and 1 GB of memory. Adjust this by specifying &amp;lt;code&amp;gt;-n CPU#&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--mem Megabytes&amp;lt;/code&amp;gt;. You may request up to 5 hours of CPU time for interactive jobs.&lt;br /&gt;
 salloc --time 5:00:00 --partition cpu16&lt;br /&gt;
&lt;br /&gt;
===Running non-interactive jobs (batch processing)===&lt;br /&gt;
Production runs and longer test runs should be submitted as (non-interactive) batch jobs, in which the commands to be executed are listed in a script (text file). Batch job scripts are submitted using the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command, part of the Slurm job management and scheduling software. #SBATCH directive lines at the beginning of the script specify the resources needed for the job (cores, memory, run-time limit and any specialized hardware).&lt;br /&gt;
&lt;br /&gt;
Most of the information on the Running Jobs page on the Compute Canada web site is also relevant for submitting and managing batch jobs and reserving processors for interactive work on TALC.  One major difference between running jobs on the TALC and Compute Canada clusters is in selecting the type of hardware that should be used for a job. On TALC, you choose the hardware to use primarily by specifying a partition, as described below.&lt;br /&gt;
&lt;br /&gt;
===Using JupyterHub on TALC===&lt;br /&gt;
TALC has a JupyterHub server, which runs a Jupyter server on one of the TALC compute nodes and provides all the necessary encryption and plumbing to deliver the notebook to your computer.  To access this service you must have a TALC account. Point your browser at http://talc.ucalgary.ca and log in with your usual University of Calgary account.  As of this writing, the job that runs the Jupyter notebook is allocated 1 CPU and 10 GiB of memory on a cpu16 node.&lt;br /&gt;
&lt;br /&gt;
===Selecting a partition===&lt;br /&gt;
TALC currently has the following partitions available for use. The &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cpu12&amp;lt;/code&amp;gt; partitions refer to the same nodes; the &amp;lt;code&amp;gt;cpu12&amp;lt;/code&amp;gt; partition was created to expose only the CPUs on the GPU hardware for general-purpose use. Each GPU node has 5 Tesla T4 GPUs installed, but you may request only one per job within the TALC environment.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Description&lt;br /&gt;
!Nodes&lt;br /&gt;
!Cores&lt;br /&gt;
!Memory &lt;br /&gt;
!Memory Request Limit&lt;br /&gt;
!Time Limit&lt;br /&gt;
!GPU Request per Job&lt;br /&gt;
!Network&lt;br /&gt;
|-&lt;br /&gt;
|gpu&lt;br /&gt;
|GPU Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores&lt;br /&gt;
|192 GB&lt;br /&gt;
|190 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|1x NVIDIA Corporation TU104GL [Tesla T4]&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu12&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores&lt;br /&gt;
|192 GB&lt;br /&gt;
|190 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu16&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|36&lt;br /&gt;
|16 cores&lt;br /&gt;
|64 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|2&lt;br /&gt;
|32 cores&lt;br /&gt;
|1024 GB&lt;br /&gt;
|1022 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|}&lt;br /&gt;
There are several aspects to consider when selecting a partition, including:&lt;br /&gt;
* Resource requirements in terms of memory and CPU cores&lt;br /&gt;
* Hardware specific requirements, such as GPU or CPU Instruction Set Extensions&lt;br /&gt;
* Partition resource limits and potential wait time&lt;br /&gt;
* Software support for parallel processing using the Message Passing Interface (MPI), OpenMP, etc. For example, MPI can distribute memory across multiple nodes, so per-node memory requirements could be lower, whereas OpenMP or single-process serial code is restricted to one node and would require a higher-memory node.&lt;br /&gt;
&lt;br /&gt;
Since resources that are requested are reserved for your job, please request only as much CPU and memory as your job requires to avoid reducing the cluster efficiency.  If you are unsure which partition to use or the specific resource requests that are appropriate for your jobs, please contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] and we would be happy to work with you.&lt;br /&gt;
&lt;br /&gt;
=== Using a partition ===&lt;br /&gt;
&lt;br /&gt;
==== CPU only jobs ====&lt;br /&gt;
To select the &amp;lt;code&amp;gt;cpu16&amp;lt;/code&amp;gt; partition, include the following line in your batch job script:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --partition=cpu16&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;You may also start an interactive session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt;:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ salloc --time 1:00:00 -p cpu16&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== GPU jobs ====&lt;br /&gt;
On TALC, you are limited to exactly 1 GPU per job. Jobs that request zero GPUs, or two or more GPUs, will not be scheduled.&lt;br /&gt;
&lt;br /&gt;
To submit a job using the &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition with one GPU request, include the following in your batch job script:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --partition=gpu&lt;br /&gt;
#SBATCH --gpus-per-node=1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As in the previous example, you may also request interactive sessions on GPU nodes using &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt;; just specify the &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition and the number of GPUs required. &amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ salloc --time 1:00:00 -p gpu -n 1 --gpus-per-node 1 &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;You may verify that a GPU was assigned to your job or interactive session by running &amp;lt;code&amp;gt;nvidia-smi&amp;lt;/code&amp;gt;. This command will show you the status of the GPU that was assigned to you.&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ nvidia-smi&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| NVIDIA-SMI 450.80.02    Driver Version: 450.80.02    CUDA Version: 11.0     |&lt;br /&gt;
|-------------------------------+----------------------+----------------------+&lt;br /&gt;
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |&lt;br /&gt;
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |&lt;br /&gt;
|                               |                      |               MIG M. |&lt;br /&gt;
|===============================+======================+======================|&lt;br /&gt;
|   0  Tesla T4            Off  | 00000000:3B:00.0 Off |                    0 |&lt;br /&gt;
| N/A   36C    P0    14W /  70W |      0MiB / 15109MiB |      5%      Default |&lt;br /&gt;
|                               |                      |                  N/A |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
                                                                               &lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| Processes:                                                                  |&lt;br /&gt;
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |&lt;br /&gt;
|        ID   ID                                                   Usage      |&lt;br /&gt;
|=============================================================================|&lt;br /&gt;
|  No running processes found                                                 |&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Partition limitations ====&lt;br /&gt;
In addition to the hardware limitations of the nodes within the partition, please be aware that there may also be policy limits imposed on your account for each partition. These limits restrict the number of cores, nodes, or GPUs that can be used at any given time. Since the limits are applied on a partition-by-partition basis, using resources in one partition should not affect the available resources you can use in another partition.&lt;br /&gt;
&lt;br /&gt;
These limits can be listed by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sacctmgr show qos format=Name,MaxWall,MaxTRESPU%20,MaxSubmitJobs&lt;br /&gt;
      Name     MaxWall            MaxTRESPU MaxSubmit&lt;br /&gt;
---------- ----------- -------------------- ---------&lt;br /&gt;
    normal  1-00:00:00          mem=127000M          &lt;br /&gt;
     cpu24  1-00:00:00             mem=127G          &lt;br /&gt;
    bigmem  1-00:00:00                               &lt;br /&gt;
       gpu                       gres/gpu=1          &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Time limits ===&lt;br /&gt;
Use the &amp;lt;code&amp;gt;--time&amp;lt;/code&amp;gt; directive to tell the job scheduler the maximum time that your job might run.  For example:&lt;br /&gt;
 #SBATCH --time=hh:mm:ss&lt;br /&gt;
&lt;br /&gt;
You can use &amp;lt;code&amp;gt;scontrol show partitions&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; to see the current maximum time that a job can run.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; highlight=&amp;quot;6&amp;quot;&amp;gt;&lt;br /&gt;
$ scontrol show partitions&lt;br /&gt;
PartitionName=cpu16&lt;br /&gt;
   AllowGroups=ALL AllowAccounts=ALL AllowQos=ALL&lt;br /&gt;
   AllocNodes=ALL Default=NO QoS=cpu16&lt;br /&gt;
   DefaultTime=NONE DisableRootJobs=NO ExclusiveUser=NO GraceTime=0 Hidden=NO&lt;br /&gt;
   MaxNodes=UNLIMITED MaxTime=22:10:00 MinNodes=0 LLN=NO MaxCPUsPerNode=UNLIMITED&lt;br /&gt;
   Nodes=n[1-36]&lt;br /&gt;
   PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO&lt;br /&gt;
   OverTimeLimit=NONE PreemptMode=OFF&lt;br /&gt;
   State=UP TotalCPUs=576 TotalNodes=36 SelectTypeParameters=NONE&lt;br /&gt;
   JobDefaults=(null)&lt;br /&gt;
   DefMemPerNode=UNLIMITED MaxMemPerNode=UNLIMITED&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, with &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; under the &amp;lt;code&amp;gt;TIMELIMIT&amp;lt;/code&amp;gt; column:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sinfo                                                     &lt;br /&gt;
PARTITION  AVAIL  TIMELIMIT  NODES  STATE NODELIST               &lt;br /&gt;
single        up 7-00:00:00      1 drain* cn097                  &lt;br /&gt;
single        up 7-00:00:00      1  maint cn002                  &lt;br /&gt;
single        up 7-00:00:00      4 drain* cn[001,061,133,154]    &lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
[[Category:TALC]]&lt;br /&gt;
[[Category:Guides]]&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=TALC_Cluster_Guide&amp;diff=1609</id>
		<title>TALC Cluster Guide</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=TALC_Cluster_Guide&amp;diff=1609"/>
		<updated>2022-01-06T17:58:13Z</updated>

		<summary type="html">&lt;p&gt;Fridman: /* Working interactively */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Message Box&lt;br /&gt;
|icon=Security Icon.png&lt;br /&gt;
|title=Cybersecurity awareness at the U of C&lt;br /&gt;
|message=Please note that there are typically about 950 phishing attempts targeting University of Calgary accounts each month. This is just a reminder to be careful about computer security issues, both at home and at the University. Please visit https://it.ucalgary.ca/it-security for more information, tips on secure computing, and how to report suspected security problems.}}&lt;br /&gt;
&lt;br /&gt;
This guide gives an overview of the Teaching and Learning Cluster (TALC) at the University of Calgary and is intended to be read by new account holders getting started on TALC. It covers topics such as hardware and performance characteristics, available software, usage policies, and how to log in and run jobs. &lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
TALC is a cluster of computers created by Research Computing Services (RCS) in response to requests for a central computing resource to support academic courses and workshops offered at the University of Calgary. It is a complement to the Advanced Research Computing (ARC) cluster, which is used for research rather than educational purposes. The software environment in the TALC and ARC clusters is very similar and workflows between the two clusters are identical.  What students learn about using TALC will have direct applicability to using ARC should they go on to use ARC for research work. &lt;br /&gt;
&lt;br /&gt;
If you are the instructor for a course that could benefit from using TALC, please review this guide and the [[TALC Terms of Use]] and then contact us at support@hpc.ucalgary.ca to discuss your requirements.  &lt;br /&gt;
&lt;br /&gt;
Please note that in order to ensure that the appropriate software is available, student accounts are in place, and appropriate training has been provided for your teaching assistants, it is best to start this discussion several months prior to the start of the course.&lt;br /&gt;
&lt;br /&gt;
If you are a student in a course using TALC, please review this guide for basic instructions in using the cluster.  Questions should first be directed to the teaching assistants or instructor for your course.&lt;br /&gt;
&lt;br /&gt;
===Obtaining an account===&lt;br /&gt;
TALC account requests are expected to be submitted by the course instructor rather than by individual students. You must have a University of Calgary IT account in order to use TALC. If you do not have a University of Calgary IT account or email address, please register for one at https://itregport.ucalgary.ca/. In order to ensure TALC is provisioned in time for a course start date, the instructor should submit the initial list of @ucalgary.ca accounts needed for the course two weeks before the start date.&lt;br /&gt;
&lt;br /&gt;
=== Getting Support ===&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=Need Help or have other TALC Related Questions?&lt;br /&gt;
|message=&#039;&#039;&#039;Students&#039;&#039;&#039;, please send TALC-related questions to your course instructor or teaching assistants.&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Course instructors and TAs&#039;&#039;&#039;, please report system issues to support@hpc.ucalgary.ca.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
==Hardware==&lt;br /&gt;
The TALC cluster is composed of repurposed research clusters that are a few generations old. As a result, individual processor performance will not be comparable to that of the latest processors, but should be sufficient for educational purposes and course work.  &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Description&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores, Model, and Year&lt;br /&gt;
!Installed Memory&lt;br /&gt;
!GPU&lt;br /&gt;
!Network&lt;br /&gt;
|-&lt;br /&gt;
|gpu&lt;br /&gt;
|GPU Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores, 2x Intel Xeon Bronze 3204 CPU @ 1.90GHz (2019)&lt;br /&gt;
|192 GB&lt;br /&gt;
|5x NVIDIA Corporation TU104GL [Tesla T4]&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu16&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|36&lt;br /&gt;
|16 cores, 2x Eight-Core Intel Xeon CPU E5-2650 @ 2.00GHz (2012)&lt;br /&gt;
|64 GB&lt;br /&gt;
|N/A&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|2&lt;br /&gt;
|32 cores, 4x Intel(R) Xeon(R) CPU E7-4830 @ 2.13GHz (2015)&lt;br /&gt;
|1024 GB&lt;br /&gt;
|N/A&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Storage===&lt;br /&gt;
{{Message Box&lt;br /&gt;
| title=No Backup Policy!&lt;br /&gt;
| message=You are responsible for your own backups.   Since accounts on TALC and related data are removed shortly after the associated course has finished, you should download anything you need to save to your own computer before the end of the course.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
TALC is connected to a network disk storage system. This storage is split across the &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; file systems.  &lt;br /&gt;
====&amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;: Home file system====&lt;br /&gt;
Each user has a directory under &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;, which is the default working directory when logging in to TALC. Each home directory has a per-user quota of 500 GB. This limit is fixed and cannot be increased.&lt;br /&gt;
&lt;br /&gt;
Note on file sharing: Due to security concerns, permissions set using &amp;lt;code&amp;gt;chmod&amp;lt;/code&amp;gt; to allow other users to read or write to your home directory will be automatically reverted by an automated system process unless an explicit exception is made.  If you need to share files with others on the TALC cluster, please write to support@hpc.ucalgary.ca to ask for such an exception.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt;: Scratch file system for large job-oriented storage====&lt;br /&gt;
For each job, a subdirectory is created under the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; directory that can be referenced in job scripts as &amp;lt;code&amp;gt;/scratch/${SLURM_JOB_ID}&amp;lt;/code&amp;gt;. You can use that directory for temporary files needed during the course of a job. Up to 30 TB of storage may be used per user (total across all of your jobs) in the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; file system.  &lt;br /&gt;
&lt;br /&gt;
Data in &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; associated with a given job will be deleted automatically, without exception, five days after the job finishes.&lt;br /&gt;
&lt;br /&gt;
== Software ==&lt;br /&gt;
{{Message Box&lt;br /&gt;
| title=Software Package Requests&lt;br /&gt;
| message=Course instructors or teaching assistants should write to support@hpc.ucalgary.ca if additional software is required for their course.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
All TALC nodes run a version of Rocky Linux. For your convenience, we have packaged commonly used software packages and their dependencies as modules available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt;. If your software package is not available as a module, you may also try Anaconda, which allows you to manage and install custom packages in an isolated environment.&lt;br /&gt;
&lt;br /&gt;
For a list of packages that have been made available, please see [[ARC Software pages]]. &lt;br /&gt;
&lt;br /&gt;
=== Modules ===&lt;br /&gt;
The environment for using much of the installed software is set up through the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
Software packages bundled as a module will be available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt; and can be listed with the &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module avail&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To enable Python, load the Python module by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module load python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To unload the Python module, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module remove python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To see currently loaded modules, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module list&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Using TALC==&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=Usage subject to [[TALC Terms of Use]]&lt;br /&gt;
|message=Please review the [[TALC Terms of Use]] prior to using TALC.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
===Logging in===&lt;br /&gt;
To log in to TALC, connect using SSH to talc.ucalgary.ca. Connections to TALC are accepted only from the University of Calgary network (on campus) or through the University of Calgary General VPN (off campus).&lt;br /&gt;
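&lt;br /&gt;
For example, from a terminal (here &amp;lt;code&amp;gt;username&amp;lt;/code&amp;gt; is a placeholder; replace it with your own IT account user name):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ ssh username@talc.ucalgary.ca&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;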
&lt;br /&gt;
See [[Connecting to RCS HPC Systems]] for more information.&lt;br /&gt;
&lt;br /&gt;
===Working interactively===&lt;br /&gt;
&amp;lt;!-- original chunk --&amp;gt;&lt;br /&gt;
TALC uses the Linux operating system. The program that responds to your typed commands and allows you to run other programs is called the Linux shell. There are several different shells available, but by default you will use one called bash. It is useful to have some knowledge of the shell and of the other command-line programs you can use to manipulate files. If you are new to Linux systems, we recommend that you work through one of the many online tutorials available, such as the [http://www.ee.surrey.ac.uk/Teaching/Unix/index.html UNIX Tutorial for Beginners (external link)] provided by the University of Surrey. The tutorial covers fundamental topics such as creating, renaming and deleting files and directories, producing a listing of your files, and determining how much disk space you are using.  For a more comprehensive introduction to Linux, see [http://linuxcommand.sourceforge.net/tlcl.php The Linux Command Line (external link)].&lt;br /&gt;
&lt;br /&gt;
The TALC login node may be used for such tasks as editing files, compiling programs and running short tests while developing programs. CPU intensive workloads on the login node should be restricted to under 15 minutes as per [[General Cluster Guidelines and Policies|our cluster guidelines]]. For interactive workloads exceeding 15 minutes, use the &#039;&#039;&#039;[[Running_jobs#Interactive_jobs|salloc command]]&#039;&#039;&#039; to allocate an interactive session on a compute node.&lt;br /&gt;
&lt;br /&gt;
The default &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; allocation is 1 CPU and 1 GB of memory. Adjust this by specifying &amp;lt;code&amp;gt;-n CPU#&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--mem Megabytes&amp;lt;/code&amp;gt;. You may request up to 5 hours of CPU time for interactive jobs.&lt;br /&gt;
 salloc --time 5:00:00 --partition cpu16&lt;br /&gt;
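For example, to request 4 CPUs and 8000 MB of memory for a two-hour interactive session (the values here are illustrative only; adjust them to your own workload):&lt;br /&gt;
 salloc --time 2:00:00 --partition cpu16 -n 4 --mem 8000&lt;br /&gt;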
&lt;br /&gt;
===Running non-interactive jobs (batch processing)===&lt;br /&gt;
Production runs and longer test runs should be submitted as (non-interactive) batch jobs, in which the commands to be executed are listed in a script (text file). Batch job scripts are submitted using the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command, part of the Slurm job management and scheduling software. &amp;lt;code&amp;gt;#SBATCH&amp;lt;/code&amp;gt; directive lines at the beginning of the script specify the resources needed for the job (cores, memory, run time limit and any specialized hardware needed).&lt;br /&gt;
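&lt;br /&gt;
As a minimal sketch, a batch job script might look like the following (the partition, resource values and program name are placeholders; adapt them to your own job):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=cpu16    # partition to run on&lt;br /&gt;
#SBATCH --ntasks=1           # number of CPU cores&lt;br /&gt;
#SBATCH --mem=4000M          # memory needed by the job&lt;br /&gt;
#SBATCH --time=01:00:00      # run time limit (hh:mm:ss)&lt;br /&gt;
&lt;br /&gt;
# Per-job scratch space; deleted automatically five days after the job finishes.&lt;br /&gt;
cd /scratch/${SLURM_JOB_ID}&lt;br /&gt;
&lt;br /&gt;
# Replace with the actual commands for your job.&lt;br /&gt;
~/my_program&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The script would then be submitted with &amp;lt;code&amp;gt;sbatch my_job.sh&amp;lt;/code&amp;gt;, where the file name is arbitrary.&lt;br /&gt;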
&lt;br /&gt;
Most of the information on the Running Jobs page on the Compute Canada web site is also relevant for submitting and managing batch jobs and reserving processors for interactive work on TALC.  One major difference between running jobs on the TALC and Compute Canada clusters is in selecting the type of hardware that should be used for a job. On TALC, you choose the hardware to use primarily by specifying a partition, as described below.&lt;br /&gt;
&lt;br /&gt;
===Using JupyterHub on TALC===&lt;br /&gt;
TALC has a JupyterHub server, which runs a Jupyter server on one of the TALC compute nodes and provides all the necessary encryption and plumbing to deliver the notebook to your computer.  To access this service you must have a TALC account. Point your browser at http://talc.ucalgary.ca and log in with your usual University of Calgary account.  As of this writing, the job that runs the Jupyter notebook is allocated 1 CPU and 10 GiB of memory on a cpu16 node.&lt;br /&gt;
&lt;br /&gt;
===Selecting a partition===&lt;br /&gt;
TALC currently has the following partitions available for use. The &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cpu12&amp;lt;/code&amp;gt; partitions refer to the same nodes; the &amp;lt;code&amp;gt;cpu12&amp;lt;/code&amp;gt; partition was created to expose only the CPUs on the GPU hardware for general-purpose use. Each GPU node has 5 Tesla T4 GPUs installed, but you may only request one per job within the TALC environment.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Description&lt;br /&gt;
!Nodes&lt;br /&gt;
!Cores&lt;br /&gt;
!Memory &lt;br /&gt;
!Memory Request Limit&lt;br /&gt;
!Time Limit&lt;br /&gt;
!GPU Request per Job&lt;br /&gt;
!Network&lt;br /&gt;
|-&lt;br /&gt;
|gpu&lt;br /&gt;
|GPU Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores&lt;br /&gt;
|192 GB&lt;br /&gt;
|190 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|1x NVIDIA Corporation TU104GL [Tesla T4]&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu12&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores&lt;br /&gt;
|192 GB&lt;br /&gt;
|190 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu16&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|36&lt;br /&gt;
|16 cores&lt;br /&gt;
|64 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|2&lt;br /&gt;
|32 cores&lt;br /&gt;
|1024 GB&lt;br /&gt;
|1022 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|}&lt;br /&gt;
There are several aspects to consider when selecting a partition, including:&lt;br /&gt;
* Resource requirements in terms of memory and CPU cores&lt;br /&gt;
* Hardware specific requirements, such as GPU or CPU Instruction Set Extensions&lt;br /&gt;
* Partition resource limits and potential wait time&lt;br /&gt;
* Software support for parallel processing using the Message Passing Interface (MPI), OpenMP, etc. For example, MPI can distribute memory across multiple nodes, so per-node memory requirements could be lower, whereas OpenMP programs or single-process serial code restricted to one node would require a higher-memory node.&lt;br /&gt;
&lt;br /&gt;
Since resources that are requested are reserved for your job, please request only as much CPU and memory as your job requires to avoid reducing the cluster efficiency.  If you are unsure which partition to use or the specific resource requests that are appropriate for your jobs, please contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] and we would be happy to work with you.&lt;br /&gt;
&lt;br /&gt;
=== Using a partition ===&lt;br /&gt;
&lt;br /&gt;
==== CPU only jobs ====&lt;br /&gt;
To select the &amp;lt;code&amp;gt;cpu16&amp;lt;/code&amp;gt; partition, include the following line in your batch job script:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --partition=cpu16&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;You may also start an interactive session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt;:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ salloc --time 1:00:00 -p cpu16&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== GPU jobs ====&lt;br /&gt;
On TALC, you are limited to exactly one GPU per job. Jobs that request zero GPUs, or two or more GPUs, will not be scheduled.&lt;br /&gt;
&lt;br /&gt;
To submit a job using the &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition with one GPU requested, include the following in your batch job script:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --partition=gpu&lt;br /&gt;
#SBATCH --gpus-per-node=1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As in the previous example, you may also request an interactive session on a GPU node using &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt;; just specify the &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition and the number of GPUs required.&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ salloc --time 1:00:00 -p gpu -n 1 --gpus-per-node 1 &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;You may verify that a GPU was assigned to your job or interactive session by running &amp;lt;code&amp;gt;nvidia-smi&amp;lt;/code&amp;gt;. This command will show you the status of the GPU that was assigned to you.&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ nvidia-smi&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| NVIDIA-SMI 450.80.02    Driver Version: 450.80.02    CUDA Version: 11.0     |&lt;br /&gt;
|-------------------------------+----------------------+----------------------+&lt;br /&gt;
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |&lt;br /&gt;
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |&lt;br /&gt;
|                               |                      |               MIG M. |&lt;br /&gt;
|===============================+======================+======================|&lt;br /&gt;
|   0  Tesla T4            Off  | 00000000:3B:00.0 Off |                    0 |&lt;br /&gt;
| N/A   36C    P0    14W /  70W |      0MiB / 15109MiB |      5%      Default |&lt;br /&gt;
|                               |                      |                  N/A |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
                                                                               &lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| Processes:                                                                  |&lt;br /&gt;
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |&lt;br /&gt;
|        ID   ID                                                   Usage      |&lt;br /&gt;
|=============================================================================|&lt;br /&gt;
|  No running processes found                                                 |&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Partition limitations ====&lt;br /&gt;
In addition to the hardware limitations of the nodes within the partition, please be aware that there may also be policy limits imposed on your account for each partition. These limits restrict the number of cores, nodes, or GPUs that can be used at any given time. Since the limits are applied on a partition-by-partition basis, using resources in one partition should not affect the available resources you can use in another partition.&lt;br /&gt;
&lt;br /&gt;
These limits can be listed by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sacctmgr show qos format=Name,MaxWall,MaxTRESPU%20,MaxSubmitJobs&lt;br /&gt;
      Name     MaxWall            MaxTRESPU MaxSubmit&lt;br /&gt;
---------- ----------- -------------------- ---------&lt;br /&gt;
    normal  1-00:00:00          mem=127000M          &lt;br /&gt;
     cpu24  1-00:00:00             mem=127G          &lt;br /&gt;
    bigmem  1-00:00:00                               &lt;br /&gt;
       gpu                       gres/gpu=1          &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Time limits ===&lt;br /&gt;
Use the &amp;lt;code&amp;gt;--time&amp;lt;/code&amp;gt; directive to tell the job scheduler the maximum time that your job might run.  For example:&lt;br /&gt;
 #SBATCH --time=hh:mm:ss&lt;br /&gt;
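For example, a job expected to run for at most three and a half hours would use:&lt;br /&gt;
 #SBATCH --time=3:30:00&lt;br /&gt;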
&lt;br /&gt;
You can use &amp;lt;code&amp;gt;scontrol show partitions&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; to see the current maximum time that a job can run.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; highlight=&amp;quot;6&amp;quot;&amp;gt;&lt;br /&gt;
$ scontrol show partitions&lt;br /&gt;
PartitionName=single                                                                 &lt;br /&gt;
   AllowGroups=ALL AllowAccounts=ALL AllowQos=ALL                                    &lt;br /&gt;
   AllocNodes=ALL Default=NO QoS=single                                              &lt;br /&gt;
   DefaultTime=NONE DisableRootJobs=NO ExclusiveUser=NO GraceTime=0 Hidden=NO        &lt;br /&gt;
   MaxNodes=UNLIMITED MaxTime=7-00:00:00 MinNodes=1 LLN=NO MaxCPUsPerNode=UNLIMITED  &lt;br /&gt;
   Nodes=cn[001-168]                                                                 &lt;br /&gt;
   PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO        &lt;br /&gt;
   OverTimeLimit=NONE PreemptMode=OFF                                                &lt;br /&gt;
   State=UP TotalCPUs=1344 TotalNodes=168 SelectTypeParameters=NONE                  &lt;br /&gt;
   DefMemPerNode=UNLIMITED MaxMemPerNode=UNLIMITED                                   &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, with &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; under the &amp;lt;code&amp;gt;TIMELIMIT&amp;lt;/code&amp;gt; column:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sinfo                                                     &lt;br /&gt;
PARTITION  AVAIL  TIMELIMIT  NODES  STATE NODELIST               &lt;br /&gt;
single        up 7-00:00:00      1 drain* cn097                  &lt;br /&gt;
single        up 7-00:00:00      1  maint cn002                  &lt;br /&gt;
single        up 7-00:00:00      4 drain* cn[001,061,133,154]    &lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
[[Category:TALC]]&lt;br /&gt;
[[Category:Guides]]&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=TALC_Cluster_Guide&amp;diff=1608</id>
		<title>TALC Cluster Guide</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=TALC_Cluster_Guide&amp;diff=1608"/>
		<updated>2022-01-06T17:57:41Z</updated>

		<summary type="html">&lt;p&gt;Fridman: /* /home: Home file system */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Message Box&lt;br /&gt;
|icon=Security Icon.png&lt;br /&gt;
|title=Cybersecurity awareness at the U of C&lt;br /&gt;
|message=Please note that there are typically about 950 phishing attempts targeting University of Calgary accounts each month. This is just a reminder to be careful about computer security issues, both at home and at the University. Please visit https://it.ucalgary.ca/it-security for more information, tips on secure computing, and how to report suspected security problems.}}&lt;br /&gt;
&lt;br /&gt;
This guide gives an overview of the Teaching and Learning Cluster (TALC) at the University of Calgary and is intended to be read by new account holders getting started on TALC. It covers topics such as hardware and performance characteristics, available software, usage policies, and how to log in and run jobs. &lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
TALC is a cluster of computers created by Research Computing Services (RCS) in response to requests for a central computing resource to support academic courses and workshops offered at the University of Calgary. It complements the Advanced Research Computing (ARC) cluster, which is used for research rather than educational purposes. The software environments on the TALC and ARC clusters are very similar, and the workflows on the two clusters are identical. What students learn about using TALC will have direct applicability to using ARC should they go on to use ARC for research work. &lt;br /&gt;
&lt;br /&gt;
If you are the instructor for a course that could benefit from using TALC, please review this guide and the [[TALC Terms of Use]] and then contact us at support@hpc.ucalgary.ca to discuss your requirements.  &lt;br /&gt;
&lt;br /&gt;
Please note that in order to ensure that the appropriate software is available, student accounts are in place, and appropriate training has been provided for your teaching assistants, it is best to start this discussion several months prior to the start of the course.&lt;br /&gt;
&lt;br /&gt;
If you are a student in a course using TALC, please review this guide for basic instructions in using the cluster.  Questions should first be directed to the teaching assistants or instructor for your course.&lt;br /&gt;
&lt;br /&gt;
===Obtaining an account===&lt;br /&gt;
TALC account requests are expected to be submitted by the course instructor rather than by individual students. You must have a University of Calgary IT account in order to use TALC. If you do not have a University of Calgary IT account or email address, please register for one at https://itregport.ucalgary.ca/. In order to ensure TALC is provisioned in time for a course start date, the instructor should submit the initial list of @ucalgary.ca accounts needed for the course two weeks before the start date.&lt;br /&gt;
&lt;br /&gt;
=== Getting Support ===&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=Need Help or have other TALC Related Questions?&lt;br /&gt;
|message=&#039;&#039;&#039;Students&#039;&#039;&#039;, please send TALC-related questions to your course instructor or teaching assistants.&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Course instructors and TAs&#039;&#039;&#039;, please report system issues to support@hpc.ucalgary.ca.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
==Hardware==&lt;br /&gt;
The TALC cluster is composed of repurposed research clusters that are a few generations old. As a result, individual processor performance will not be comparable to that of the latest processors, but should be sufficient for educational purposes and course work.  &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Description&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores, Model, and Year&lt;br /&gt;
!Installed Memory&lt;br /&gt;
!GPU&lt;br /&gt;
!Network&lt;br /&gt;
|-&lt;br /&gt;
|gpu&lt;br /&gt;
|GPU Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores, 2x Intel Xeon Bronze 3204 CPU @ 1.90GHz (2019)&lt;br /&gt;
|192 GB&lt;br /&gt;
|5x NVIDIA Corporation TU104GL [Tesla T4]&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu16&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|36&lt;br /&gt;
|16 cores, 2x Eight-Core Intel Xeon CPU E5-2650 @ 2.00GHz (2012)&lt;br /&gt;
|64 GB&lt;br /&gt;
|N/A&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|2&lt;br /&gt;
|32 cores, 4x Intel(R) Xeon(R) CPU E7-4830 @ 2.13GHz (2015)&lt;br /&gt;
|1024 GB&lt;br /&gt;
|N/A&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Storage===&lt;br /&gt;
{{Message Box&lt;br /&gt;
| title=No Backup Policy!&lt;br /&gt;
| message=You are responsible for your own backups.   Since accounts on TALC and related data are removed shortly after the associated course has finished, you should download anything you need to save to your own computer before the end of the course.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
TALC is connected to a network disk storage system. This storage is split across the &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; file systems.  &lt;br /&gt;
====&amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;: Home file system====&lt;br /&gt;
Each user has a directory under &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;, which is the default working directory when logging in to TALC. Each home directory has a per-user quota of 500 GB. This limit is fixed and cannot be increased.&lt;br /&gt;
&lt;br /&gt;
Note on file sharing: Due to security concerns, permissions set using &amp;lt;code&amp;gt;chmod&amp;lt;/code&amp;gt; to allow other users to read or write to your home directory will be automatically reverted by an automated system process unless an explicit exception is made.  If you need to share files with others on the TALC cluster, please write to support@hpc.ucalgary.ca to ask for such an exception.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt;: Scratch file system for large job-oriented storage====&lt;br /&gt;
For each job, a subdirectory is created under the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; directory that can be referenced in job scripts as &amp;lt;code&amp;gt;/scratch/${SLURM_JOB_ID}&amp;lt;/code&amp;gt;. You can use that directory for temporary files needed during the course of a job. Up to 30 TB of storage may be used per user (total across all of your jobs) in the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; file system.  &lt;br /&gt;
&lt;br /&gt;
Data in &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; associated with a given job will be deleted automatically, without exception, five days after the job finishes.&lt;br /&gt;
&lt;br /&gt;
== Software ==&lt;br /&gt;
{{Message Box&lt;br /&gt;
| title=Software Package Requests&lt;br /&gt;
| message=Course instructors or teaching assistants should write to support@hpc.ucalgary.ca if additional software is required for their course.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
All TALC nodes run a version of Rocky Linux. For your convenience, we have packaged commonly used software packages and their dependencies as modules available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt;. If your software package is not available as a module, you may also try Anaconda, which allows you to manage and install custom packages in an isolated environment.&lt;br /&gt;
&lt;br /&gt;
For a list of packages that have been made available, please see [[ARC Software pages]]. &lt;br /&gt;
&lt;br /&gt;
=== Modules ===&lt;br /&gt;
The environment for using much of the installed software is set up through the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
Software packages bundled as a module will be available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt; and can be listed with the &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module avail&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To enable Python, load the Python module by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module load python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To unload the Python module, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module remove python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To see currently loaded modules, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module list&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Using TALC==&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=Usage subject to [[TALC Terms of Use]]&lt;br /&gt;
|message=Please review the [[TALC Terms of Use]] prior to using TALC.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
===Logging in===&lt;br /&gt;
To log in to TALC, connect using SSH to talc.ucalgary.ca. Connections to TALC are accepted only from the University of Calgary network (on campus) or through the University of Calgary General VPN (off campus).&lt;br /&gt;
&lt;br /&gt;
See [[Connecting to RCS HPC Systems]] for more information.&lt;br /&gt;
&lt;br /&gt;
===Working interactively===&lt;br /&gt;
&amp;lt;!-- original chunk --&amp;gt;&lt;br /&gt;
TALC uses the Linux operating system. The program that responds to your typed commands and allows you to run other programs is called the Linux shell. There are several different shells available, but by default you will use one called bash. It is useful to have some knowledge of the shell and of the other command-line programs you can use to manipulate files. If you are new to Linux systems, we recommend that you work through one of the many online tutorials available, such as the [http://www.ee.surrey.ac.uk/Teaching/Unix/index.html UNIX Tutorial for Beginners (external link)] provided by the University of Surrey. The tutorial covers fundamental topics such as creating, renaming and deleting files and directories, producing a listing of your files, and determining how much disk space you are using.  For a more comprehensive introduction to Linux, see [http://linuxcommand.sourceforge.net/tlcl.php The Linux Command Line (external link)].&lt;br /&gt;
&lt;br /&gt;
The TALC login node may be used for such tasks as editing files, compiling programs and running short tests while developing programs. CPU intensive workloads on the login node should be restricted to under 15 minutes as per [[General Cluster Guidelines and Policies|our cluster guidelines]]. For interactive workloads exceeding 15 minutes, use the &#039;&#039;&#039;[[Running_jobs#Interactive_jobs|salloc command]]&#039;&#039;&#039; to allocate an interactive session on a compute node.&lt;br /&gt;
&lt;br /&gt;
The default &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; allocation is 1 CPU and 1 GB of memory. Adjust this by specifying &amp;lt;code&amp;gt;-n CPU#&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--mem Megabytes&amp;lt;/code&amp;gt;. You may request up to 5 hours of CPU time for interactive jobs.&lt;br /&gt;
 salloc --time 5:00:00 --partition cpu16&lt;br /&gt;
&lt;br /&gt;
===Running non-interactive jobs (batch processing)===&lt;br /&gt;
Production runs and longer test runs should be submitted as (non-interactive) batch jobs, in which the commands to be executed are listed in a script (text file). Batch job scripts are submitted using the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command, part of the Slurm job management and scheduling software. &amp;lt;code&amp;gt;#SBATCH&amp;lt;/code&amp;gt; directive lines at the beginning of the script specify the resources needed for the job (cores, memory, run time limit and any specialized hardware needed).&lt;br /&gt;
&lt;br /&gt;
Most of the information on the Running Jobs page on the Compute Canada web site is also relevant for submitting and managing batch jobs and reserving processors for interactive work on TALC.  One major difference between running jobs on the TALC and Compute Canada clusters is in selecting the type of hardware that should be used for a job. On TALC, you choose the hardware to use primarily by specifying a partition, as described below.&lt;br /&gt;
&lt;br /&gt;
===Using JupyterHub on TALC===&lt;br /&gt;
TALC has a JupyterHub server, which runs a Jupyter server on one of the TALC compute nodes and provides all the necessary encryption and plumbing to deliver the notebook to your computer.  To access this service you must have a TALC account. Point your browser at http://talc.ucalgary.ca and log in with your usual University of Calgary account.  As of this writing, the job that runs the Jupyter notebook is allocated 1 CPU and 10 GiB of memory on a cpu16 node.&lt;br /&gt;
&lt;br /&gt;
===Selecting a partition===&lt;br /&gt;
TALC currently has the following partitions available for use. The &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cpu12&amp;lt;/code&amp;gt; partitions refer to the same nodes; the &amp;lt;code&amp;gt;cpu12&amp;lt;/code&amp;gt; partition was created to expose only the CPUs on the GPU hardware for general-purpose use. Each GPU node has 5 Tesla T4 GPUs installed, but you may only request one per job within the TALC environment.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Description&lt;br /&gt;
!Nodes&lt;br /&gt;
!Cores&lt;br /&gt;
!Memory &lt;br /&gt;
!Memory Request Limit&lt;br /&gt;
!Time Limit&lt;br /&gt;
!GPU Request per Job&lt;br /&gt;
!Network&lt;br /&gt;
|-&lt;br /&gt;
|gpu&lt;br /&gt;
|GPU Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores&lt;br /&gt;
|192 GB&lt;br /&gt;
|190 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|1x NVIDIA Corporation TU104GL [Tesla T4]&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu12&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores&lt;br /&gt;
|192 GB&lt;br /&gt;
|190 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu16&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|36&lt;br /&gt;
|16 cores&lt;br /&gt;
|64 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|2&lt;br /&gt;
|32 cores&lt;br /&gt;
|1024 GB&lt;br /&gt;
|1022 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|}&lt;br /&gt;
There are several aspects to consider when selecting a partition, including:&lt;br /&gt;
* Resource requirements in terms of memory and CPU cores&lt;br /&gt;
* Hardware specific requirements, such as GPU or CPU Instruction Set Extensions&lt;br /&gt;
* Partition resource limits and potential wait time&lt;br /&gt;
* Software support for parallel processing using the Message Passing Interface (MPI), OpenMP, etc. For example, MPI can distribute memory across multiple nodes, so per-node memory requirements could be lower, whereas OpenMP programs or single-process serial code restricted to one node would require a higher-memory node.&lt;br /&gt;
&lt;br /&gt;
Since resources that are requested are reserved for your job, please request only as much CPU and memory as your job requires to avoid reducing the cluster efficiency.  If you are unsure which partition to use or the specific resource requests that are appropriate for your jobs, please contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] and we would be happy to work with you.&lt;br /&gt;
&lt;br /&gt;
=== Using a partition ===&lt;br /&gt;
&lt;br /&gt;
==== CPU only jobs ====&lt;br /&gt;
To select the &amp;lt;code&amp;gt;cpu16&amp;lt;/code&amp;gt; partition, include the following line in your batch job script:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --partition=cpu16&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;You may also start an interactive session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt;:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ salloc --time 1:00:00 -p cpu16&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== GPU jobs ====&lt;br /&gt;
On TALC, you are limited to exactly one GPU per job. Jobs that request zero GPUs, or two or more GPUs, will not be scheduled.&lt;br /&gt;
&lt;br /&gt;
To submit a job using the &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition with one GPU requested, include the following in your batch job script:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --partition=gpu&lt;br /&gt;
#SBATCH --gpus-per-node=1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As in the previous example, you may also request an interactive session on a GPU node using &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt;; just specify the &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition and the number of GPUs required.&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ salloc --time 1:00:00 -p gpu -n 1 --gpus-per-node 1 &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;You may verify that a GPU was assigned to your job or interactive session by running &amp;lt;code&amp;gt;nvidia-smi&amp;lt;/code&amp;gt;. This command will show you the status of the GPU that was assigned to you.&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ nvidia-smi&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| NVIDIA-SMI 450.80.02    Driver Version: 450.80.02    CUDA Version: 11.0     |&lt;br /&gt;
|-------------------------------+----------------------+----------------------+&lt;br /&gt;
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |&lt;br /&gt;
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |&lt;br /&gt;
|                               |                      |               MIG M. |&lt;br /&gt;
|===============================+======================+======================|&lt;br /&gt;
|   0  Tesla T4            Off  | 00000000:3B:00.0 Off |                    0 |&lt;br /&gt;
| N/A   36C    P0    14W /  70W |      0MiB / 15109MiB |      5%      Default |&lt;br /&gt;
|                               |                      |                  N/A |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
                                                                               &lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| Processes:                                                                  |&lt;br /&gt;
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |&lt;br /&gt;
|        ID   ID                                                   Usage      |&lt;br /&gt;
|=============================================================================|&lt;br /&gt;
|  No running processes found                                                 |&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Partition limitations ====&lt;br /&gt;
In addition to the hardware limitations of the nodes within the partition, please be aware that there may also be policy limits imposed on your account for each partition. These limits restrict the number of cores, nodes, or GPUs that can be used at any given time. Since the limits are applied on a partition-by-partition basis, using resources in one partition should not affect the available resources you can use in another partition.&lt;br /&gt;
&lt;br /&gt;
These limits can be listed by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sacctmgr show qos format=Name,MaxWall,MaxTRESPU%20,MaxSubmitJobs&lt;br /&gt;
      Name     MaxWall            MaxTRESPU MaxSubmit&lt;br /&gt;
---------- ----------- -------------------- ---------&lt;br /&gt;
    normal  1-00:00:00          mem=127000M          &lt;br /&gt;
     cpu24  1-00:00:00             mem=127G          &lt;br /&gt;
    bigmem  1-00:00:00                               &lt;br /&gt;
       gpu                       gres/gpu=1          &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Time limits ===&lt;br /&gt;
Use the &amp;lt;code&amp;gt;--time&amp;lt;/code&amp;gt; directive to tell the job scheduler the maximum time that your job might run.  For example:&lt;br /&gt;
 #SBATCH --time=hh:mm:ss&lt;br /&gt;
&lt;br /&gt;
You can use &amp;lt;code&amp;gt;scontrol show partitions&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; to see the current maximum time that a job can run.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; highlight=&amp;quot;6&amp;quot;&amp;gt;&lt;br /&gt;
$ scontrol show partitions&lt;br /&gt;
PartitionName=single                                                                 &lt;br /&gt;
   AllowGroups=ALL AllowAccounts=ALL AllowQos=ALL                                    &lt;br /&gt;
   AllocNodes=ALL Default=NO QoS=single                                              &lt;br /&gt;
   DefaultTime=NONE DisableRootJobs=NO ExclusiveUser=NO GraceTime=0 Hidden=NO        &lt;br /&gt;
   MaxNodes=UNLIMITED MaxTime=7-00:00:00 MinNodes=1 LLN=NO MaxCPUsPerNode=UNLIMITED  &lt;br /&gt;
   Nodes=cn[001-168]                                                                 &lt;br /&gt;
   PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO        &lt;br /&gt;
   OverTimeLimit=NONE PreemptMode=OFF                                                &lt;br /&gt;
   State=UP TotalCPUs=1344 TotalNodes=168 SelectTypeParameters=NONE                  &lt;br /&gt;
   DefMemPerNode=UNLIMITED MaxMemPerNode=UNLIMITED                                   &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, with &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; under the &amp;lt;code&amp;gt;TIMELIMIT&amp;lt;/code&amp;gt; column:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sinfo                                                     &lt;br /&gt;
PARTITION  AVAIL  TIMELIMIT  NODES  STATE NODELIST               &lt;br /&gt;
single        up 7-00:00:00      1 drain* cn097                  &lt;br /&gt;
single        up 7-00:00:00      1  maint cn002                  &lt;br /&gt;
single        up 7-00:00:00      4 drain* cn[001,061,133,154]    &lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
[[Category:TALC]]&lt;br /&gt;
[[Category:Guides]]&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=TALC_Cluster_Guide&amp;diff=1607</id>
		<title>TALC Cluster Guide</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=TALC_Cluster_Guide&amp;diff=1607"/>
		<updated>2022-01-06T17:33:54Z</updated>

		<summary type="html">&lt;p&gt;Fridman: /* Hardware */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Message Box&lt;br /&gt;
|icon=Security Icon.png&lt;br /&gt;
|title=Cybersecurity awareness at the U of C&lt;br /&gt;
|message=Please note that there are typically about 950 phishing attempts targeting University of Calgary accounts each month. This is just a reminder to be careful about computer security issues, both at home and at the University. Please visit https://it.ucalgary.ca/it-security for more information, tips on secure computing, and how to report suspected security problems.}}&lt;br /&gt;
&lt;br /&gt;
This guide gives an overview of the Teaching and Learning Cluster (TALC) at the University of Calgary and is intended to be read by new account holders getting started on TALC. It covers topics such as hardware and performance characteristics, available software, usage policies, and how to log in and run jobs. &lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
TALC is a cluster of computers created by Research Computing Services (RCS) in response to requests for a central computing resource to support academic courses and workshops offered at the University of Calgary. It complements the Advanced Research Computing (ARC) cluster, which is used for research rather than educational purposes. The software environments on the TALC and ARC clusters are very similar, and the workflows on the two clusters are identical. What students learn about using TALC will have direct applicability to using ARC should they go on to use ARC for research work. &lt;br /&gt;
&lt;br /&gt;
If you are the instructor for a course that could benefit from using TALC, please review this guide and the [[TALC Terms of Use]] and then contact us at support@hpc.ucalgary.ca to discuss your requirements.  &lt;br /&gt;
&lt;br /&gt;
Please note that in order to ensure that the appropriate software is available, student accounts are in place, and appropriate training has been provided for your teaching assistants, it is best to start this discussion several months prior to the start of the course.&lt;br /&gt;
&lt;br /&gt;
If you are a student in a course using TALC, please review this guide for basic instructions in using the cluster.  Questions should first be directed to the teaching assistants or instructor for your course.&lt;br /&gt;
&lt;br /&gt;
===Obtaining an account===&lt;br /&gt;
TALC account requests are expected to be submitted by the course instructor rather than by individual students. You must have a University of Calgary IT account in order to use TALC. If you do not have a University of Calgary IT account or email address, please register for one at https://itregport.ucalgary.ca/. In order to ensure TALC is provisioned in time for a course start date, the instructor should submit the initial list of @ucalgary.ca accounts needed for the course two weeks before the start date.&lt;br /&gt;
&lt;br /&gt;
=== Getting Support ===&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=Need Help or have other TALC Related Questions?&lt;br /&gt;
|message=&#039;&#039;&#039;Students&#039;&#039;&#039;, please send TALC-related questions to your course instructor or teaching assistants.&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Course instructors and TAs&#039;&#039;&#039;, please report system issues to support@hpc.ucalgary.ca.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
==Hardware==&lt;br /&gt;
The TALC cluster is composed of repurposed research clusters that are a few generations old. As a result, individual processor performance will not be comparable to that of the latest processors, but should be sufficient for educational purposes and course work.  &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Description&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores, Model, and Year&lt;br /&gt;
!Installed Memory&lt;br /&gt;
!GPU&lt;br /&gt;
!Network&lt;br /&gt;
|-&lt;br /&gt;
|gpu&lt;br /&gt;
|GPU Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores, 2x Intel Xeon Bronze 3204 CPU @ 1.90GHz (2019)&lt;br /&gt;
|192 GB&lt;br /&gt;
|5x NVIDIA Corporation TU104GL [Tesla T4]&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu16&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|36&lt;br /&gt;
|16 cores, 2x Eight-Core Intel Xeon CPU E5-2650 @ 2.00GHz (2012)&lt;br /&gt;
|64 GB&lt;br /&gt;
|N/A&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|2&lt;br /&gt;
|32 cores, 4x Intel(R) Xeon(R) CPU E7-4830 @ 2.13GHz (2015)&lt;br /&gt;
|1024 GB&lt;br /&gt;
|N/A&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Storage===&lt;br /&gt;
{{Message Box&lt;br /&gt;
| title=No Backup Policy!&lt;br /&gt;
| message=You are responsible for your own backups.   Since accounts on TALC and related data are removed shortly after the associated course has finished, you should download anything you need to save to your own computer before the end of the course.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
TALC is connected to a network disk storage system. This storage is split across the &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; file systems.  &lt;br /&gt;
====&amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;: Home file system====&lt;br /&gt;
Each user has a directory under &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;, which is the default working directory when logging in to TALC. Each home directory has a per-user quota of 500 GB. This limit is fixed and cannot be increased.&lt;br /&gt;
&lt;br /&gt;
Note on file sharing: Due to security concerns, permissions set using &amp;lt;code&amp;gt;chmod&amp;lt;/code&amp;gt; to allow other users to read or write to your home directory will be automatically reverted by an automated system process unless an explicit exception is made.  If you need to share files with others on the TALC cluster, please write to support@hpc.ucalgary.ca to ask for such an exception.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt;: Scratch file system for large job-oriented storage====&lt;br /&gt;
For each job, a subdirectory is created under the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; directory that can be referenced in job scripts as &amp;lt;code&amp;gt;/scratch/${SLURM_JOB_ID}&amp;lt;/code&amp;gt;. You can use that directory for temporary files needed during the course of a job. Up to 30 TB of storage may be used per user (total across all of your jobs) in the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; file system.  &lt;br /&gt;
&lt;br /&gt;
Data in &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; associated with a given job will be deleted automatically, without exception, five days after the job finishes.&lt;br /&gt;
&lt;br /&gt;
== Software ==&lt;br /&gt;
{{Message Box&lt;br /&gt;
| title=Software Package Requests&lt;br /&gt;
| message=Course instructors or teaching assistants should write to support@hpc.ucalgary.ca if additional software is required for their course.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
All TALC nodes run the latest version of CentOS 7 with the same set of base software packages. For your convenience, we have packaged commonly used software packages and their dependencies as modules available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt;. If your software package is not available as a module, you may also try Anaconda, which allows you to manage and install custom packages in an isolated environment.&lt;br /&gt;
&lt;br /&gt;
For a list of packages that have been made available, please see [[ARC Software pages]]. &lt;br /&gt;
&lt;br /&gt;
=== Modules ===&lt;br /&gt;
The environment for using much of the installed software is set up through the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
Software packages bundled as a module will be available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt; and can be listed with the &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module avail&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To enable Python, load the Python module by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module load python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To unload the Python module, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module remove python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To see currently loaded modules, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module list&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Using TALC==&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=Usage subject to [[TALC Terms of Use]]&lt;br /&gt;
|message=Please review the [[TALC Terms of Use]] prior to using TALC.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
===Logging in===&lt;br /&gt;
To log in to TALC, connect using SSH to talc.ucalgary.ca. Connections to TALC are accepted only from the University of Calgary network (on campus) or through the University of Calgary General VPN (off campus).&lt;br /&gt;
&lt;br /&gt;
See [[Connecting to RCS HPC Systems]] for more information.&lt;br /&gt;
&lt;br /&gt;
===Working interactively===&lt;br /&gt;
&amp;lt;!-- original chunk --&amp;gt;&lt;br /&gt;
TALC uses the Linux operating system. The program that responds to your typed commands and allows you to run other programs is called the Linux shell. There are several different shells available, but by default you will use one called bash. It is useful to have some knowledge of the shell and of the other command-line programs you can use to manipulate files. If you are new to Linux systems, we recommend that you work through one of the many online tutorials available, such as the [http://www.ee.surrey.ac.uk/Teaching/Unix/index.html UNIX Tutorial for Beginners (external link)] provided by the University of Surrey. The tutorial covers fundamental topics such as creating, renaming and deleting files and directories, producing a listing of your files, and determining how much disk space you are using.  For a more comprehensive introduction to Linux, see [http://linuxcommand.sourceforge.net/tlcl.php The Linux Command Line (external link)].&lt;br /&gt;
&lt;br /&gt;
The TALC login node may be used for such tasks as editing files, compiling programs and running short tests while developing programs. We suggest CPU intensive workloads on the login node be restricted to under 15 minutes as per [[General Cluster Guidelines and Policies|our cluster guidelines]]. For interactive workloads exceeding 15 minutes, use the &#039;&#039;&#039;[[Running_jobs#Interactive_jobs|salloc command]]&#039;&#039;&#039; to allocate an interactive session on a compute node.&lt;br /&gt;
&lt;br /&gt;
The default &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; allocation is 1 CPU and 1 GB of memory. Adjust this by specifying &amp;lt;code&amp;gt;-n CPU#&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--mem Megabytes&amp;lt;/code&amp;gt;. You may request up to 5 hours of CPU time for interactive jobs.&lt;br /&gt;
 salloc --time 5:00:00 --partition cpu24 &lt;br /&gt;
&lt;br /&gt;
===Running non-interactive jobs (batch processing)===&lt;br /&gt;
Production runs and longer test runs should be submitted as (non-interactive) batch jobs, in which the commands to be executed are listed in a script (a text file). Batch job scripts are submitted using the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command, part of the Slurm job management and scheduling software. &amp;lt;code&amp;gt;#SBATCH&amp;lt;/code&amp;gt; directive lines at the beginning of the script specify the resources needed for the job (cores, memory, run-time limit and any specialized hardware).&lt;br /&gt;
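As a sketch, a minimal batch job script might look like the following; the job name, resource values and &amp;lt;code&amp;gt;./my_program&amp;lt;/code&amp;gt; are placeholders rather than recommendations:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=my_job     # placeholder job name&lt;br /&gt;
#SBATCH --partition=cpu24     # partition to run on&lt;br /&gt;
#SBATCH --ntasks=1            # number of cores&lt;br /&gt;
#SBATCH --mem=1024            # memory in megabytes&lt;br /&gt;
#SBATCH --time=01:00:00       # run time limit (hh:mm:ss)&lt;br /&gt;
&lt;br /&gt;
# The commands to execute; ./my_program is a placeholder.&lt;br /&gt;
./my_program&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Saved as, for example, &amp;lt;code&amp;gt;my_job.slurm&amp;lt;/code&amp;gt;, the script would be submitted with &amp;lt;code&amp;gt;sbatch my_job.slurm&amp;lt;/code&amp;gt;.&lt;br /&gt;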
&lt;br /&gt;
Most of the information on the Running Jobs page on the Compute Canada web site is also relevant for submitting and managing batch jobs and reserving processors for interactive work on TALC.  One major difference between running jobs on the TALC and Compute Canada clusters is in selecting the type of hardware that should be used for a job. On TALC, you choose the hardware to use primarily by specifying a partition, as described below.&lt;br /&gt;
&lt;br /&gt;
===Using JupyterHub on TALC===&lt;br /&gt;
TALC has a JupyterHub server, which runs a Jupyter server on one of the TALC compute nodes and provides all the necessary encryption and plumbing to deliver the notebook to your computer. To access this service you must have a TALC account. Point your browser at http://talc.ucalgary.ca and log in with your usual University of Calgary account. As of this writing, the job that runs the Jupyter notebook is allocated 1 CPU and 10 GiB of memory on a &amp;lt;code&amp;gt;cpu24&amp;lt;/code&amp;gt; node.&lt;br /&gt;
&lt;br /&gt;
===Selecting a partition===&lt;br /&gt;
TALC currently has the following partitions available for use. The &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cpu12&amp;lt;/code&amp;gt; partitions are backed by the same nodes; the &amp;lt;code&amp;gt;cpu12&amp;lt;/code&amp;gt; partition exists to expose only the CPUs of the GPU nodes for general-purpose use. Each GPU node has 5 Tesla T4 GPUs installed, but you may request only one per job within the TALC environment.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Description&lt;br /&gt;
!Nodes&lt;br /&gt;
!Cores&lt;br /&gt;
!Memory &lt;br /&gt;
!Memory Request Limit&lt;br /&gt;
!Time Limit&lt;br /&gt;
!GPU Request per Job&lt;br /&gt;
!Network&lt;br /&gt;
|-&lt;br /&gt;
|gpu&lt;br /&gt;
|GPU Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores&lt;br /&gt;
|192 GB&lt;br /&gt;
|190 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|1x NVIDIA Corporation TU104GL [Tesla T4]&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu12&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores&lt;br /&gt;
|192 GB&lt;br /&gt;
|190 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu24&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|15&lt;br /&gt;
|24 cores&lt;br /&gt;
|256 GB&lt;br /&gt;
|254 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|2&lt;br /&gt;
|32 cores&lt;br /&gt;
|1024 GB&lt;br /&gt;
|1022 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|}&lt;br /&gt;
There are several aspects to consider when selecting a partition, including:&lt;br /&gt;
* Resource requirements in terms of memory and CPU cores&lt;br /&gt;
* Hardware specific requirements, such as GPU or CPU Instruction Set Extensions&lt;br /&gt;
* Partition resource limits and potential wait time&lt;br /&gt;
* Software support for parallel processing using the Message Passing Interface (MPI), OpenMP, etc.&lt;br /&gt;
** E.g., MPI code can distribute its memory across multiple nodes, so per-node memory requirements may be lower, whereas OpenMP or single-process code restricted to one node would require a higher-memory node.&lt;br /&gt;
** Note: MPI code running on hardware with Omni-Path networking should be compiled with Omni-Path networking support. This is provided by loading the &amp;lt;code&amp;gt;openmpi/2.1.3-opa&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;openmpi/3.1.2-opa&amp;lt;/code&amp;gt; modules prior to compiling.&lt;br /&gt;
&lt;br /&gt;
Since resources that are requested are reserved for your job, please request only as much CPU and memory as your job requires to avoid reducing the cluster efficiency.  If you are unsure which partition to use or the specific resource requests that are appropriate for your jobs, please contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] and we would be happy to work with you.&lt;br /&gt;
&lt;br /&gt;
=== Using a partition ===&lt;br /&gt;
&lt;br /&gt;
==== Bigmem and compute-only jobs ====&lt;br /&gt;
To select the &amp;lt;code&amp;gt;cpu24&amp;lt;/code&amp;gt; partition, include the following line in your batch job script:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --partition=cpu24&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;You may also start an interactive session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt;:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ salloc --time 1:00:00 -p cpu24&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== GPU jobs ====&lt;br /&gt;
On TALC, you are limited to exactly one GPU per job. Jobs that request zero GPUs, or two or more GPUs, will not be scheduled.&lt;br /&gt;
&lt;br /&gt;
To submit a job to the &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition with a one-GPU request, include the following in your batch job script:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --partition=gpu&lt;br /&gt;
#SBATCH --gpus-per-node=1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As in the previous example, you may also request an interactive session on a GPU node using &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt;; just specify the &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition and the number of GPUs required.&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ salloc --time 1:00:00 -p gpu -n 1 --gpus-per-node 1 &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;You may verify that a GPU was assigned to your job or interactive session by running &amp;lt;code&amp;gt;nvidia-smi&amp;lt;/code&amp;gt;. This command will show you the status of the GPU that was assigned to you.&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ nvidia-smi&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| NVIDIA-SMI 450.80.02    Driver Version: 450.80.02    CUDA Version: 11.0     |&lt;br /&gt;
|-------------------------------+----------------------+----------------------+&lt;br /&gt;
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |&lt;br /&gt;
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |&lt;br /&gt;
|                               |                      |               MIG M. |&lt;br /&gt;
|===============================+======================+======================|&lt;br /&gt;
|   0  Tesla T4            Off  | 00000000:3B:00.0 Off |                    0 |&lt;br /&gt;
| N/A   36C    P0    14W /  70W |      0MiB / 15109MiB |      5%      Default |&lt;br /&gt;
|                               |                      |                  N/A |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
                                                                               &lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| Processes:                                                                  |&lt;br /&gt;
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |&lt;br /&gt;
|        ID   ID                                                   Usage      |&lt;br /&gt;
|=============================================================================|&lt;br /&gt;
|  No running processes found                                                 |&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Partition limitations ====&lt;br /&gt;
In addition to the hardware limitations of the nodes within the partition, please be aware that there may also be policy limits imposed on your account for each partition. These limits restrict the number of cores, nodes, or GPUs that can be used at any given time. Since the limits are applied on a partition-by-partition basis, using resources in one partition should not affect the available resources you can use in another partition.&lt;br /&gt;
&lt;br /&gt;
These limits can be listed by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sacctmgr show qos format=Name,MaxWall,MaxTRESPU%20,MaxSubmitJobs&lt;br /&gt;
      Name     MaxWall            MaxTRESPU MaxSubmit&lt;br /&gt;
---------- ----------- -------------------- ---------&lt;br /&gt;
    normal  1-00:00:00          mem=127000M          &lt;br /&gt;
     cpu24  1-00:00:00             mem=127G          &lt;br /&gt;
    bigmem  1-00:00:00                               &lt;br /&gt;
       gpu                       gres/gpu=1          &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Time limits ===&lt;br /&gt;
Use the &amp;lt;code&amp;gt;--time&amp;lt;/code&amp;gt; directive to tell the job scheduler the maximum time that your job might run.  For example:&lt;br /&gt;
 #SBATCH --time=hh:mm:ss&lt;br /&gt;
&lt;br /&gt;
You can use &amp;lt;code&amp;gt;scontrol show partitions&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; to see the current maximum time that a job can run.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; highlight=&amp;quot;6&amp;quot;&amp;gt;&lt;br /&gt;
$ scontrol show partitions&lt;br /&gt;
PartitionName=single                                                                 &lt;br /&gt;
   AllowGroups=ALL AllowAccounts=ALL AllowQos=ALL                                    &lt;br /&gt;
   AllocNodes=ALL Default=NO QoS=single                                              &lt;br /&gt;
   DefaultTime=NONE DisableRootJobs=NO ExclusiveUser=NO GraceTime=0 Hidden=NO        &lt;br /&gt;
   MaxNodes=UNLIMITED MaxTime=7-00:00:00 MinNodes=1 LLN=NO MaxCPUsPerNode=UNLIMITED  &lt;br /&gt;
   Nodes=cn[001-168]                                                                 &lt;br /&gt;
   PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO        &lt;br /&gt;
   OverTimeLimit=NONE PreemptMode=OFF                                                &lt;br /&gt;
   State=UP TotalCPUs=1344 TotalNodes=168 SelectTypeParameters=NONE                  &lt;br /&gt;
   DefMemPerNode=UNLIMITED MaxMemPerNode=UNLIMITED                                   &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, with &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; under the &amp;lt;code&amp;gt;TIMELIMIT&amp;lt;/code&amp;gt; column:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sinfo                                                     &lt;br /&gt;
PARTITION  AVAIL  TIMELIMIT  NODES  STATE NODELIST               &lt;br /&gt;
single        up 7-00:00:00      1 drain* cn097                  &lt;br /&gt;
single        up 7-00:00:00      1  maint cn002                  &lt;br /&gt;
single        up 7-00:00:00      4 drain* cn[001,061,133,154]    &lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
[[Category:TALC]]&lt;br /&gt;
[[Category:Guides]]&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=TALC_Cluster_Guide&amp;diff=1606</id>
		<title>TALC Cluster Guide</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=TALC_Cluster_Guide&amp;diff=1606"/>
		<updated>2022-01-06T17:33:02Z</updated>

		<summary type="html">&lt;p&gt;Fridman: /* Hardware */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Message Box&lt;br /&gt;
|icon=Security Icon.png&lt;br /&gt;
|title=Cybersecurity awareness at the U of C&lt;br /&gt;
|message=Please note that there are typically about 950 phishing attempts targeting University of Calgary accounts each month. This is just a reminder to be careful about computer security issues, both at home and at the University. Please visit https://it.ucalgary.ca/it-security for more information, tips on secure computing, and how to report suspected security problems.}}&lt;br /&gt;
&lt;br /&gt;
This guide gives an overview of the Teaching and Learning Cluster (TALC) at the University of Calgary and is intended for new account holders getting started on TALC. It covers such topics as hardware and performance characteristics, available software, usage policies, and how to log in and run jobs.&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
TALC is a cluster of computers created by Research Computing Services (RCS) in response to requests for a central computing resource to support academic courses and workshops offered at the University of Calgary. It is a complement to the Advanced Research Computing (ARC) cluster, which is used for research rather than educational purposes. The software environment on the TALC and ARC clusters is very similar, and the workflows on the two clusters are identical. What students learn about using TALC will have direct applicability to using ARC should they go on to use ARC for research work.&lt;br /&gt;
&lt;br /&gt;
If you are the instructor for a course that could benefit from using TALC, please review this guide and the [[TALC Terms of Use]] and then contact us at support@hpc.ucalgary.ca to discuss your requirements.  &lt;br /&gt;
&lt;br /&gt;
Please note that in order to ensure that the appropriate software is available, student accounts are in place, and appropriate training has been provided for your teaching assistants, it is best to start this discussion several months prior to the start of the course.&lt;br /&gt;
&lt;br /&gt;
If you are a student in a course using TALC, please review this guide for basic instructions in using the cluster.  Questions should first be directed to the teaching assistants or instructor for your course.&lt;br /&gt;
&lt;br /&gt;
===Obtaining an account===&lt;br /&gt;
TALC account requests are expected to be submitted by the course instructor rather than by individual students. You must have a University of Calgary IT account in order to use TALC. If you do not have a University of Calgary IT account or email address, please register for one at https://itregport.ucalgary.ca/. In order to ensure TALC is provisioned in time for a course start date, the instructor should submit the initial list of @ucalgary.ca accounts needed for the course two weeks before the start date.&lt;br /&gt;
&lt;br /&gt;
=== Getting Support ===&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=Need Help or have other TALC Related Questions?&lt;br /&gt;
|message=&#039;&#039;&#039;Students&#039;&#039;&#039;, please send TALC-related questions to your course instructor or teaching assistants.&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Course instructors and TAs&#039;&#039;&#039;, please report system issues to support@hpc.ucalgary.ca.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
==Hardware==&lt;br /&gt;
The TALC cluster is composed of repurposed research clusters that are a few generations old. As a result, individual processor performance will not be comparable to that of the latest processors, but should be sufficient for educational purposes and course work.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Description&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores, Model, and Year&lt;br /&gt;
!Installed Memory&lt;br /&gt;
!GPU&lt;br /&gt;
!Network&lt;br /&gt;
|-&lt;br /&gt;
|gpu&lt;br /&gt;
|GPU Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores, 2x Intel Xeon Bronze 3204 CPU @ 1.90GHz (2019)&lt;br /&gt;
|192 GB&lt;br /&gt;
|5x NVIDIA Corporation TU104GL [Tesla T4]&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu16&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|36&lt;br /&gt;
|16 cores, 2x Eight-Core Intel Xeon CPU E5-2650 @ 2.00GHz&lt;br /&gt;
|64 GB&lt;br /&gt;
|N/A&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|2&lt;br /&gt;
|32 cores, 4x Intel(R) Xeon(R) CPU E7-4830 @ 2.13GHz (2015)&lt;br /&gt;
|1024 GB&lt;br /&gt;
|N/A&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Storage===&lt;br /&gt;
{{Message Box&lt;br /&gt;
| title=No Backup Policy!&lt;br /&gt;
| message=You are responsible for your own backups.   Since accounts on TALC and related data are removed shortly after the associated course has finished, you should download anything you need to save to your own computer before the end of the course.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
TALC is connected to a network disk storage system. This storage is split across the &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; file systems.  &lt;br /&gt;
====&amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;: Home file system====&lt;br /&gt;
Each user has a directory under &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;, which is the default working directory when logging in to TALC. Each home directory has a per-user quota of 500 GB. This limit is fixed and cannot be increased.&lt;br /&gt;
&lt;br /&gt;
Note on file sharing: due to security concerns, permissions set using &amp;lt;code&amp;gt;chmod&amp;lt;/code&amp;gt; to allow other users to read or write your home directory will be automatically reverted by an automated system process unless an explicit exception is made. If you need to share files with other researchers on the cluster, please write to support@hpc.ucalgary.ca to ask for such an exception.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt;: Scratch file system for large job-oriented storage====&lt;br /&gt;
For each job, a subdirectory is created under &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; that can be referenced in job scripts as &amp;lt;code&amp;gt;/scratch/${SLURM_JOB_ID}&amp;lt;/code&amp;gt;. You can use that directory for temporary files needed during the course of a job. Up to 30 TB of storage may be used per user (total across all your jobs) in the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; file system.&lt;br /&gt;
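For example, a job script might use the per-job scratch directory like this (a sketch; &amp;lt;code&amp;gt;my_program&amp;lt;/code&amp;gt; and the file names are placeholders):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Work in the scratch directory created for this job.&lt;br /&gt;
cd /scratch/${SLURM_JOB_ID}&lt;br /&gt;
&lt;br /&gt;
# Run a placeholder program that writes temporary output here.&lt;br /&gt;
${HOME}/my_program &amp;gt; output.txt&lt;br /&gt;
&lt;br /&gt;
# Copy anything worth keeping back to /home before the job finishes.&lt;br /&gt;
cp output.txt ${HOME}/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;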
&lt;br /&gt;
Data in &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; associated with a given job will be deleted automatically, without exception, five days after the job finishes.&lt;br /&gt;
&lt;br /&gt;
== Software ==&lt;br /&gt;
{{Message Box&lt;br /&gt;
| title=Software Package Requests&lt;br /&gt;
| message=Course instructors or teaching assistants should write to support@hpc.ucalgary.ca if additional software is required for their course.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
All TALC nodes run the latest version of CentOS 7 with the same set of base software packages. For your convenience, we have packaged commonly used software and its dependencies as modules available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt;. If your software package is not available as a module, you may also try Anaconda, which allows users to manage and install custom packages in an isolated environment.&lt;br /&gt;
&lt;br /&gt;
For a list of the packages that have been made available, please see [[ARC Software pages]].&lt;br /&gt;
&lt;br /&gt;
=== Modules ===&lt;br /&gt;
The environment for using some of the installed software is set up through the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
Software packages bundled as a module will be available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt; and can be listed with the &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module avail&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To enable Python, load the Python module by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module load python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To unload the Python module, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module remove python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To see currently loaded modules, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module list&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Using TALC==&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=Usage subject to [[TALC Terms of Use]]&lt;br /&gt;
|message=Please review the [[TALC Terms of Use]] prior to using TALC.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
===Logging in===&lt;br /&gt;
To log in to TALC, connect using SSH to talc.ucalgary.ca. Connections to TALC are accepted only from the University of Calgary network (on campus) or through the University of Calgary General VPN (off campus).&lt;br /&gt;
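For example, assuming your IT account username is &amp;lt;code&amp;gt;username&amp;lt;/code&amp;gt; (a placeholder), a connection from a terminal looks like:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ ssh username@talc.ucalgary.ca&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;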
&lt;br /&gt;
See [[Connecting to RCS HPC Systems]] for more information.&lt;br /&gt;
&lt;br /&gt;
===Working interactively===&lt;br /&gt;
TALC uses the Linux operating system. The program that responds to your typed commands and allows you to run other programs is called the Linux shell. There are several different shells available, but by default you will use one called bash. It is useful to have some knowledge of the shell and of the various command-line programs you can use to manipulate files. If you are new to Linux systems, we recommend working through one of the many online tutorials available, such as the [http://www.ee.surrey.ac.uk/Teaching/Unix/index.html UNIX Tutorial for Beginners (external link)] provided by the University of Surrey. The tutorial covers fundamental topics such as creating, renaming and deleting files and directories, producing a listing of your files, and determining how much disk space you are using. For a more comprehensive introduction to Linux, see [http://linuxcommand.sourceforge.net/tlcl.php The Linux Command Line (external link)].&lt;br /&gt;
&lt;br /&gt;
The TALC login node may be used for tasks such as editing files, compiling programs and running short tests while developing programs. We suggest that CPU-intensive workloads on the login node be restricted to under 15 minutes, as per [[General Cluster Guidelines and Policies|our cluster guidelines]]. For interactive workloads exceeding 15 minutes, use the &#039;&#039;&#039;[[Running_jobs#Interactive_jobs|salloc command]]&#039;&#039;&#039; to allocate an interactive session on a compute node.&lt;br /&gt;
&lt;br /&gt;
The default &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; allocation is 1 CPU and 1 GB of memory. Adjust this by specifying &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; (number of CPUs) and &amp;lt;code&amp;gt;--mem&amp;lt;/code&amp;gt; (memory in megabytes). You may request up to 5 hours of run time for an interactive job. For example:&lt;br /&gt;
 salloc --time 5:00:00 --partition cpu24 &lt;br /&gt;
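To adjust the CPU count and memory as well, a sketch (the values shown are illustrative, not recommendations):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ salloc --time 2:00:00 --partition cpu24 -n 4 --mem 8192&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;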
&lt;br /&gt;
===Running non-interactive jobs (batch processing)===&lt;br /&gt;
Production runs and longer test runs should be submitted as (non-interactive) batch jobs, in which the commands to be executed are listed in a script (a text file). Batch job scripts are submitted using the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command, part of the Slurm job management and scheduling software. &amp;lt;code&amp;gt;#SBATCH&amp;lt;/code&amp;gt; directive lines at the beginning of the script specify the resources needed for the job (cores, memory, run-time limit and any specialized hardware).&lt;br /&gt;
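As a sketch, a minimal batch job script might look like the following; the job name, resource values and &amp;lt;code&amp;gt;./my_program&amp;lt;/code&amp;gt; are placeholders rather than recommendations:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=my_job     # placeholder job name&lt;br /&gt;
#SBATCH --partition=cpu24     # partition to run on&lt;br /&gt;
#SBATCH --ntasks=1            # number of cores&lt;br /&gt;
#SBATCH --mem=1024            # memory in megabytes&lt;br /&gt;
#SBATCH --time=01:00:00       # run time limit (hh:mm:ss)&lt;br /&gt;
&lt;br /&gt;
# The commands to execute; ./my_program is a placeholder.&lt;br /&gt;
./my_program&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Saved as, for example, &amp;lt;code&amp;gt;my_job.slurm&amp;lt;/code&amp;gt;, the script would be submitted with &amp;lt;code&amp;gt;sbatch my_job.slurm&amp;lt;/code&amp;gt;.&lt;br /&gt;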
&lt;br /&gt;
Most of the information on the Running Jobs page on the Compute Canada web site is also relevant for submitting and managing batch jobs and reserving processors for interactive work on TALC.  One major difference between running jobs on the TALC and Compute Canada clusters is in selecting the type of hardware that should be used for a job. On TALC, you choose the hardware to use primarily by specifying a partition, as described below.&lt;br /&gt;
&lt;br /&gt;
===Using JupyterHub on TALC===&lt;br /&gt;
TALC has a JupyterHub server, which runs a Jupyter server on one of the TALC compute nodes and provides all the necessary encryption and plumbing to deliver the notebook to your computer. To access this service you must have a TALC account. Point your browser at http://talc.ucalgary.ca and log in with your usual University of Calgary account. As of this writing, the job that runs the Jupyter notebook is allocated 1 CPU and 10 GiB of memory on a &amp;lt;code&amp;gt;cpu24&amp;lt;/code&amp;gt; node.&lt;br /&gt;
&lt;br /&gt;
===Selecting a partition===&lt;br /&gt;
TALC currently has the following partitions available for use. The &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cpu12&amp;lt;/code&amp;gt; partitions are backed by the same nodes; the &amp;lt;code&amp;gt;cpu12&amp;lt;/code&amp;gt; partition exists to expose only the CPUs of the GPU nodes for general-purpose use. Each GPU node has 5 Tesla T4 GPUs installed, but you may request only one per job within the TALC environment.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Description&lt;br /&gt;
!Nodes&lt;br /&gt;
!Cores&lt;br /&gt;
!Memory &lt;br /&gt;
!Memory Request Limit&lt;br /&gt;
!Time Limit&lt;br /&gt;
!GPU Request per Job&lt;br /&gt;
!Network&lt;br /&gt;
|-&lt;br /&gt;
|gpu&lt;br /&gt;
|GPU Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores&lt;br /&gt;
|192 GB&lt;br /&gt;
|190 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|1x NVIDIA Corporation TU104GL [Tesla T4]&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu12&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores&lt;br /&gt;
|192 GB&lt;br /&gt;
|190 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu24&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|15&lt;br /&gt;
|24 cores&lt;br /&gt;
|256 GB&lt;br /&gt;
|254 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|2&lt;br /&gt;
|32 cores&lt;br /&gt;
|1024 GB&lt;br /&gt;
|1022 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|}&lt;br /&gt;
There are several aspects to consider when selecting a partition, including:&lt;br /&gt;
* Resource requirements in terms of memory and CPU cores&lt;br /&gt;
* Hardware specific requirements, such as GPU or CPU Instruction Set Extensions&lt;br /&gt;
* Partition resource limits and potential wait time&lt;br /&gt;
* Software support for parallel processing using the Message Passing Interface (MPI), OpenMP, etc.&lt;br /&gt;
** E.g., MPI code can distribute its memory across multiple nodes, so per-node memory requirements may be lower, whereas OpenMP or single-process code restricted to one node would require a higher-memory node.&lt;br /&gt;
** Note: MPI code running on hardware with Omni-Path networking should be compiled with Omni-Path networking support. This is provided by loading the &amp;lt;code&amp;gt;openmpi/2.1.3-opa&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;openmpi/3.1.2-opa&amp;lt;/code&amp;gt; modules prior to compiling.&lt;br /&gt;
&lt;br /&gt;
Since resources that are requested are reserved for your job, please request only as much CPU and memory as your job requires to avoid reducing the cluster efficiency.  If you are unsure which partition to use or the specific resource requests that are appropriate for your jobs, please contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] and we would be happy to work with you.&lt;br /&gt;
&lt;br /&gt;
=== Using a partition ===&lt;br /&gt;
&lt;br /&gt;
==== Bigmem and compute-only jobs ====&lt;br /&gt;
To select the &amp;lt;code&amp;gt;cpu24&amp;lt;/code&amp;gt; partition, include the following line in your batch job script:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --partition=cpu24&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;You may also start an interactive session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt;:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ salloc --time 1:00:00 -p cpu24&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== GPU jobs ====&lt;br /&gt;
On TALC, you are limited to exactly one GPU per job. Jobs that request zero GPUs, or two or more GPUs, will not be scheduled.&lt;br /&gt;
&lt;br /&gt;
To submit a job to the &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition with a one-GPU request, include the following in your batch job script:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --partition=gpu&lt;br /&gt;
#SBATCH --gpus-per-node=1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As in the previous example, you may also request an interactive session on a GPU node using &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt;; just specify the &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition and the number of GPUs required.&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ salloc --time 1:00:00 -p gpu -n 1 --gpus-per-node 1 &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;You may verify that a GPU was assigned to your job or interactive session by running &amp;lt;code&amp;gt;nvidia-smi&amp;lt;/code&amp;gt;. This command will show you the status of the GPU that was assigned to you.&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ nvidia-smi&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| NVIDIA-SMI 450.80.02    Driver Version: 450.80.02    CUDA Version: 11.0     |&lt;br /&gt;
|-------------------------------+----------------------+----------------------+&lt;br /&gt;
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |&lt;br /&gt;
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |&lt;br /&gt;
|                               |                      |               MIG M. |&lt;br /&gt;
|===============================+======================+======================|&lt;br /&gt;
|   0  Tesla T4            Off  | 00000000:3B:00.0 Off |                    0 |&lt;br /&gt;
| N/A   36C    P0    14W /  70W |      0MiB / 15109MiB |      5%      Default |&lt;br /&gt;
|                               |                      |                  N/A |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
                                                                               &lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| Processes:                                                                  |&lt;br /&gt;
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |&lt;br /&gt;
|        ID   ID                                                   Usage      |&lt;br /&gt;
|=============================================================================|&lt;br /&gt;
|  No running processes found                                                 |&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Partition limitations ====&lt;br /&gt;
In addition to the hardware limitations of the nodes within the partition, please be aware that there may also be policy limits imposed on your account for each partition. These limits restrict the number of cores, nodes, or GPUs that can be used at any given time. Since the limits are applied on a partition-by-partition basis, using resources in one partition should not affect the available resources you can use in another partition.&lt;br /&gt;
&lt;br /&gt;
These limits can be listed by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sacctmgr show qos format=Name,MaxWall,MaxTRESPU%20,MaxSubmitJobs&lt;br /&gt;
      Name     MaxWall            MaxTRESPU MaxSubmit&lt;br /&gt;
---------- ----------- -------------------- ---------&lt;br /&gt;
    normal  1-00:00:00          mem=127000M          &lt;br /&gt;
     cpu24  1-00:00:00             mem=127G          &lt;br /&gt;
    bigmem  1-00:00:00                               &lt;br /&gt;
       gpu                       gres/gpu=1          &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Time limits ===&lt;br /&gt;
Use the &amp;lt;code&amp;gt;--time&amp;lt;/code&amp;gt; directive to tell the job scheduler the maximum time that your job might run.  For example:&lt;br /&gt;
 #SBATCH --time=hh:mm:ss&lt;br /&gt;
&lt;br /&gt;
You can use &amp;lt;code&amp;gt;scontrol show partitions&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; to see the current maximum time that a job can run.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; highlight=&amp;quot;6&amp;quot;&amp;gt;&lt;br /&gt;
$ scontrol show partitions&lt;br /&gt;
PartitionName=single                                                                 &lt;br /&gt;
   AllowGroups=ALL AllowAccounts=ALL AllowQos=ALL                                    &lt;br /&gt;
   AllocNodes=ALL Default=NO QoS=single                                              &lt;br /&gt;
   DefaultTime=NONE DisableRootJobs=NO ExclusiveUser=NO GraceTime=0 Hidden=NO        &lt;br /&gt;
   MaxNodes=UNLIMITED MaxTime=7-00:00:00 MinNodes=1 LLN=NO MaxCPUsPerNode=UNLIMITED  &lt;br /&gt;
   Nodes=cn[001-168]                                                                 &lt;br /&gt;
   PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO        &lt;br /&gt;
   OverTimeLimit=NONE PreemptMode=OFF                                                &lt;br /&gt;
   State=UP TotalCPUs=1344 TotalNodes=168 SelectTypeParameters=NONE                  &lt;br /&gt;
   DefMemPerNode=UNLIMITED MaxMemPerNode=UNLIMITED                                   &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, with &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; under the &amp;lt;code&amp;gt;TIMELIMIT&amp;lt;/code&amp;gt; column:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sinfo                                                     &lt;br /&gt;
PARTITION  AVAIL  TIMELIMIT  NODES  STATE NODELIST               &lt;br /&gt;
single        up 7-00:00:00      1 drain* cn097                  &lt;br /&gt;
single        up 7-00:00:00      1  maint cn002                  &lt;br /&gt;
single        up 7-00:00:00      4 drain* cn[001,061,133,154]    &lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
[[Category:TALC]]&lt;br /&gt;
[[Category:Guides]]&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=TALC_Cluster_Guide&amp;diff=1605</id>
		<title>TALC Cluster Guide</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=TALC_Cluster_Guide&amp;diff=1605"/>
		<updated>2022-01-06T17:22:43Z</updated>

		<summary type="html">&lt;p&gt;Fridman: /* Getting Support */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Message Box&lt;br /&gt;
|icon=Security Icon.png&lt;br /&gt;
|title=Cybersecurity awareness at the U of C&lt;br /&gt;
|message=Please note that there are typically about 950 phishing attempts targeting University of Calgary accounts each month. This is just a reminder to be careful about computer security issues, both at home and at the University. Please visit https://it.ucalgary.ca/it-security for more information, tips on secure computing, and how to report suspected security problems.}}&lt;br /&gt;
&lt;br /&gt;
This guide gives an overview of the Teaching and Learning Cluster (TALC) at the University of Calgary and is intended for new account holders getting started on TALC. It covers such topics as hardware and performance characteristics, available software, usage policies, and how to log in and run jobs.&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
TALC is a cluster of computers created by Research Computing Services (RCS) in response to requests for a central computing resource to support academic courses and workshops offered at the University of Calgary. It is a complement to the Advanced Research Computing (ARC) cluster, which is used for research rather than educational purposes. The software environment on the TALC and ARC clusters is very similar, and the workflows on the two clusters are identical. What students learn about using TALC will have direct applicability to using ARC should they go on to use ARC for research work.&lt;br /&gt;
&lt;br /&gt;
If you are the instructor for a course that could benefit from using TALC, please review this guide and the [[TALC Terms of Use]] and then contact us at support@hpc.ucalgary.ca to discuss your requirements.  &lt;br /&gt;
&lt;br /&gt;
Please note that in order to ensure that the appropriate software is available, student accounts are in place, and appropriate training has been provided for your teaching assistants, it is best to start this discussion several months prior to the start of the course.&lt;br /&gt;
&lt;br /&gt;
If you are a student in a course using TALC, please review this guide for basic instructions in using the cluster.  Questions should first be directed to the teaching assistants or instructor for your course.&lt;br /&gt;
&lt;br /&gt;
===Obtaining an account===&lt;br /&gt;
TALC account requests are expected to be submitted by the course instructor rather than by individual students. You must have a University of Calgary IT account in order to use TALC. If you do not have a University of Calgary IT account or email address, please register for one at https://itregport.ucalgary.ca/. In order to ensure TALC is provisioned in time for a course start date, the instructor should submit the initial list of @ucalgary.ca accounts needed for the course two weeks before the start date.&lt;br /&gt;
&lt;br /&gt;
=== Getting Support ===&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=Need Help or have other TALC Related Questions?&lt;br /&gt;
|message=&#039;&#039;&#039;Students&#039;&#039;&#039;, please send TALC-related questions to your course instructor or teaching assistants.&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Course instructors and TAs&#039;&#039;&#039;, please report system issues to support@hpc.ucalgary.ca.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
==Hardware==&lt;br /&gt;
The TALC cluster is composed of repurposed research clusters that are a few generations old. As a result, individual processor performance will not be comparable to that of the latest processors, but should be sufficient for educational purposes and course work.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Description&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores, Model, and Year&lt;br /&gt;
!Installed Memory&lt;br /&gt;
!GPU&lt;br /&gt;
!Network&lt;br /&gt;
|-&lt;br /&gt;
|gpu&lt;br /&gt;
|GPU Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores, 2x Intel(R) Xeon(R) Bronze 3204 CPU @ 1.90GHz (2019)&lt;br /&gt;
|192 GB&lt;br /&gt;
|5x NVIDIA Corporation TU104GL [Tesla T4]&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu24&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|15&lt;br /&gt;
|24 cores, 4x Six-Core AMD Opteron(tm) Processor 8431 (2009)&lt;br /&gt;
|256 GB&lt;br /&gt;
|N/A&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|2&lt;br /&gt;
|32 cores, 4x Intel(R) Xeon(R) CPU E7-4830 @ 2.13GHz (2015)&lt;br /&gt;
|1024 GB&lt;br /&gt;
|N/A&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Storage===&lt;br /&gt;
{{Message Box&lt;br /&gt;
| title=No Backup Policy!&lt;br /&gt;
| message=You are responsible for your own backups.   Since accounts on TALC and related data are removed shortly after the associated course has finished, you should download anything you need to save to your own computer before the end of the course.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
TALC is connected to a network disk storage system. This storage is split across the &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; file systems.  &lt;br /&gt;
====&amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;: Home file system====&lt;br /&gt;
Each user has a directory under &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;, which is the default working directory when logging in to TALC. Each home directory has a per-user quota of 500 GB. This limit is fixed and cannot be increased.&lt;br /&gt;
&lt;br /&gt;
Note on file sharing: due to security concerns, permissions set using &amp;lt;code&amp;gt;chmod&amp;lt;/code&amp;gt; to allow other users to read or write your home directory will be automatically reverted by an automated system process unless an explicit exception is made. If you need to share files with other researchers on the cluster, please write to support@hpc.ucalgary.ca to ask for such an exception.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt;: Scratch file system for large job-oriented storage====&lt;br /&gt;
For each job, a subdirectory is created under &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; that can be referenced in job scripts as &amp;lt;code&amp;gt;/scratch/${SLURM_JOB_ID}&amp;lt;/code&amp;gt;. You can use that directory for temporary files needed during the course of a job. Up to 30 TB of storage may be used per user (total across all your jobs) in the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; file system.&lt;br /&gt;
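For example, a job script might use the per-job scratch directory like this (a sketch; &amp;lt;code&amp;gt;my_program&amp;lt;/code&amp;gt; and the file names are placeholders):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Work in the scratch directory created for this job.&lt;br /&gt;
cd /scratch/${SLURM_JOB_ID}&lt;br /&gt;
&lt;br /&gt;
# Run a placeholder program that writes temporary output here.&lt;br /&gt;
${HOME}/my_program &amp;gt; output.txt&lt;br /&gt;
&lt;br /&gt;
# Copy anything worth keeping back to /home before the job finishes.&lt;br /&gt;
cp output.txt ${HOME}/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;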
&lt;br /&gt;
Data in &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; associated with a given job will be deleted automatically, without exception, five days after the job finishes.&lt;br /&gt;
&lt;br /&gt;
== Software ==&lt;br /&gt;
{{Message Box&lt;br /&gt;
| title=Software Package Requests&lt;br /&gt;
| message=Course instructors or teaching assistants should write to support@hpc.ucalgary.ca if additional software is required for their course.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
All TALC nodes run the latest version of CentOS 7 with the same set of base software packages. For your convenience, we have packaged commonly used software and its dependencies as modules available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt;. If your software package is not available as a module, you may also try Anaconda, which allows users to manage and install custom packages in an isolated environment.&lt;br /&gt;
&lt;br /&gt;
For a list of the packages that have been made available, please see [[ARC Software pages]].&lt;br /&gt;
&lt;br /&gt;
=== Modules ===&lt;br /&gt;
The environment for using some of the installed software is set up through the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
Software packages bundled as a module will be available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt; and can be listed with the &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module avail&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To enable Python, load the Python module by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module load python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To unload the Python module, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module remove python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To see currently loaded modules, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module list&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Using TALC==&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=Usage subject to [[TALC Terms of Use]]&lt;br /&gt;
|message=Please review the [[TALC Terms of Use]] prior to using TALC.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
===Logging in===&lt;br /&gt;
To log in to TALC, connect using SSH to talc.ucalgary.ca. Connections to TALC are accepted only from the University of Calgary network (on campus) or through the University of Calgary General VPN (off campus).&lt;br /&gt;
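For example, assuming your IT account username is &amp;lt;code&amp;gt;username&amp;lt;/code&amp;gt; (a placeholder), a connection from a terminal looks like:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ ssh username@talc.ucalgary.ca&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;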
&lt;br /&gt;
See [[Connecting to RCS HPC Systems]] for more information.&lt;br /&gt;
&lt;br /&gt;
===Working interactively===&lt;br /&gt;
TALC uses the Linux operating system. The program that responds to your typed commands and allows you to run other programs is called the Linux shell. There are several different shells available, but by default you will use one called bash. It is useful to have some knowledge of the shell and of the various command-line programs you can use to manipulate files. If you are new to Linux systems, we recommend working through one of the many online tutorials available, such as the [http://www.ee.surrey.ac.uk/Teaching/Unix/index.html UNIX Tutorial for Beginners (external link)] provided by the University of Surrey. The tutorial covers fundamental topics such as creating, renaming and deleting files and directories, producing a listing of your files, and determining how much disk space you are using. For a more comprehensive introduction to Linux, see [http://linuxcommand.sourceforge.net/tlcl.php The Linux Command Line (external link)].&lt;br /&gt;
&lt;br /&gt;
The TALC login node may be used for tasks such as editing files, compiling programs and running short tests while developing programs. We suggest that CPU-intensive workloads on the login node be restricted to under 15 minutes, as per [[General Cluster Guidelines and Policies|our cluster guidelines]]. For interactive workloads exceeding 15 minutes, use the &#039;&#039;&#039;[[Running_jobs#Interactive_jobs|salloc command]]&#039;&#039;&#039; to allocate an interactive session on a compute node.&lt;br /&gt;
&lt;br /&gt;
The default &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; allocation is 1 CPU and 1 GB of memory. Adjust this by specifying &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; (number of CPUs) and &amp;lt;code&amp;gt;--mem&amp;lt;/code&amp;gt; (memory in megabytes). You may request up to 5 hours of run time for an interactive job. For example:&lt;br /&gt;
 salloc --time 5:00:00 --partition cpu24 &lt;br /&gt;
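To adjust the CPU count and memory as well, a sketch (the values shown are illustrative, not recommendations):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ salloc --time 2:00:00 --partition cpu24 -n 4 --mem 8192&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;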
&lt;br /&gt;
===Running non-interactive jobs (batch processing)===&lt;br /&gt;
Production runs and longer test runs should be submitted as (non-interactive) batch jobs, in which the commands to be executed are listed in a script (a text file). Batch job scripts are submitted using the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command, part of the Slurm job management and scheduling software. &amp;lt;code&amp;gt;#SBATCH&amp;lt;/code&amp;gt; directive lines at the beginning of the script specify the resources needed for the job (cores, memory, run-time limit and any specialized hardware).&lt;br /&gt;
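As a sketch, a minimal batch job script might look like the following; the job name, resource values and &amp;lt;code&amp;gt;./my_program&amp;lt;/code&amp;gt; are placeholders rather than recommendations:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=my_job     # placeholder job name&lt;br /&gt;
#SBATCH --partition=cpu24     # partition to run on&lt;br /&gt;
#SBATCH --ntasks=1            # number of cores&lt;br /&gt;
#SBATCH --mem=1024            # memory in megabytes&lt;br /&gt;
#SBATCH --time=01:00:00       # run time limit (hh:mm:ss)&lt;br /&gt;
&lt;br /&gt;
# The commands to execute; ./my_program is a placeholder.&lt;br /&gt;
./my_program&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Saved as, for example, &amp;lt;code&amp;gt;my_job.slurm&amp;lt;/code&amp;gt;, the script would be submitted with &amp;lt;code&amp;gt;sbatch my_job.slurm&amp;lt;/code&amp;gt;.&lt;br /&gt;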
&lt;br /&gt;
Most of the information on the Running Jobs page on the Compute Canada web site is also relevant for submitting and managing batch jobs and reserving processors for interactive work on TALC.  One major difference between running jobs on the TALC and Compute Canada clusters is in selecting the type of hardware that should be used for a job. On TALC, you choose the hardware to use primarily by specifying a partition, as described below.&lt;br /&gt;
&lt;br /&gt;
===Using JupyterHub on TALC===&lt;br /&gt;
TALC has a JupyterHub server, which runs a Jupyter server on one of the TALC compute nodes and provides all the necessary encryption and plumbing to deliver the notebook to your computer. To access this service you must have a TALC account. Point your browser at http://talc.ucalgary.ca and log in with your usual University of Calgary account. As of this writing, the job that runs the Jupyter notebook is allocated 1 CPU and 10 GiB of memory on a &amp;lt;code&amp;gt;cpu24&amp;lt;/code&amp;gt; node.&lt;br /&gt;
&lt;br /&gt;
===Selecting a partition===&lt;br /&gt;
TALC currently has the following partitions available for use. The &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cpu12&amp;lt;/code&amp;gt; partitions are backed by the same nodes; the &amp;lt;code&amp;gt;cpu12&amp;lt;/code&amp;gt; partition exists to expose only the CPUs of the GPU nodes for general-purpose use. Each GPU node has 5 Tesla T4 GPUs installed, but you may request only one per job within the TALC environment.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Description&lt;br /&gt;
!Nodes&lt;br /&gt;
!Cores&lt;br /&gt;
!Memory &lt;br /&gt;
!Memory Request Limit&lt;br /&gt;
!Time Limit&lt;br /&gt;
!GPU Request per Job&lt;br /&gt;
!Network&lt;br /&gt;
|-&lt;br /&gt;
|gpu&lt;br /&gt;
|GPU Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores&lt;br /&gt;
|192 GB&lt;br /&gt;
|190 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|1x NVIDIA Corporation TU104GL [Tesla T4]&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu12&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores&lt;br /&gt;
|192 GB&lt;br /&gt;
|190 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu24&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|15&lt;br /&gt;
|24 cores&lt;br /&gt;
|256 GB&lt;br /&gt;
|254 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|2&lt;br /&gt;
|32 cores&lt;br /&gt;
|1024 GB&lt;br /&gt;
|1022 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|}&lt;br /&gt;
There are several aspects to consider when selecting a partition, including:&lt;br /&gt;
* Resource requirements in terms of memory and CPU cores&lt;br /&gt;
* Hardware specific requirements, such as GPU or CPU Instruction Set Extensions&lt;br /&gt;
* Partition resource limits and potential wait time&lt;br /&gt;
* Software support for parallel processing using the Message Passing Interface (MPI), OpenMP, etc.&lt;br /&gt;
** E.g., MPI code can distribute its memory across multiple nodes, so per-node memory requirements may be lower, whereas OpenMP or single-process code restricted to one node would require a higher-memory node.&lt;br /&gt;
** Note: MPI code running on hardware with Omni-Path networking should be compiled with Omni-Path networking support. This is provided by loading the &amp;lt;code&amp;gt;openmpi/2.1.3-opa&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;openmpi/3.1.2-opa&amp;lt;/code&amp;gt; modules prior to compiling.&lt;br /&gt;
&lt;br /&gt;
Since resources that are requested are reserved for your job, please request only as much CPU and memory as your job requires to avoid reducing the cluster efficiency.  If you are unsure which partition to use or the specific resource requests that are appropriate for your jobs, please contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] and we would be happy to work with you.&lt;br /&gt;
&lt;br /&gt;
=== Using a partition ===&lt;br /&gt;
&lt;br /&gt;
==== Bigmem and compute-only jobs ====&lt;br /&gt;
To select the &amp;lt;code&amp;gt;cpu24&amp;lt;/code&amp;gt; partition, include the following line in your batch job script:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --partition=cpu24&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;You may also start an interactive session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt;:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ salloc --time 1:00:00 -p cpu24&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== GPU jobs ====&lt;br /&gt;
On TALC, you are limited to exactly one GPU per job. Jobs that request zero GPUs, or two or more GPUs, will not be scheduled.&lt;br /&gt;
&lt;br /&gt;
To submit a job to the &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition with a one-GPU request, include the following in your batch job script:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --partition=gpu&lt;br /&gt;
#SBATCH --gpus-per-node=1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As in the previous example, you may also request an interactive session on a GPU node using &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt;; just specify the &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition and the number of GPUs required.&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ salloc --time 1:00:00 -p gpu -n 1 --gpus-per-node 1 &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;You may verify that a GPU was assigned to your job or interactive session by running &amp;lt;code&amp;gt;nvidia-smi&amp;lt;/code&amp;gt;. This command will show you the status of the GPU that was assigned to you.&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ nvidia-smi&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| NVIDIA-SMI 450.80.02    Driver Version: 450.80.02    CUDA Version: 11.0     |&lt;br /&gt;
|-------------------------------+----------------------+----------------------+&lt;br /&gt;
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |&lt;br /&gt;
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |&lt;br /&gt;
|                               |                      |               MIG M. |&lt;br /&gt;
|===============================+======================+======================|&lt;br /&gt;
|   0  Tesla T4            Off  | 00000000:3B:00.0 Off |                    0 |&lt;br /&gt;
| N/A   36C    P0    14W /  70W |      0MiB / 15109MiB |      5%      Default |&lt;br /&gt;
|                               |                      |                  N/A |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
                                                                               &lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| Processes:                                                                  |&lt;br /&gt;
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |&lt;br /&gt;
|        ID   ID                                                   Usage      |&lt;br /&gt;
|=============================================================================|&lt;br /&gt;
|  No running processes found                                                 |&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Partition limitations ====&lt;br /&gt;
In addition to the hardware limitations of the nodes within the partition, please be aware that there may also be policy limits imposed on your account for each partition. These limits restrict the number of cores, nodes, or GPUs that can be used at any given time. Since the limits are applied on a partition-by-partition basis, using resources in one partition should not affect the available resources you can use in another partition.&lt;br /&gt;
&lt;br /&gt;
These limits can be listed by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sacctmgr show qos format=Name,MaxWall,MaxTRESPU%20,MaxSubmitJobs&lt;br /&gt;
      Name     MaxWall            MaxTRESPU MaxSubmit&lt;br /&gt;
---------- ----------- -------------------- ---------&lt;br /&gt;
    normal  1-00:00:00          mem=127000M          &lt;br /&gt;
     cpu24  1-00:00:00             mem=127G          &lt;br /&gt;
    bigmem  1-00:00:00                               &lt;br /&gt;
       gpu                       gres/gpu=1          &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Time limits ===&lt;br /&gt;
Use the &amp;lt;code&amp;gt;--time&amp;lt;/code&amp;gt; directive to tell the job scheduler the maximum time that your job might run.  For example:&lt;br /&gt;
 #SBATCH --time=hh:mm:ss&lt;br /&gt;
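For instance, a job expected to run for at most two and a half hours would use:&lt;br /&gt;
 #SBATCH --time=2:30:00&lt;br /&gt;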
&lt;br /&gt;
You can use &amp;lt;code&amp;gt;scontrol show partitions&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; to see the current maximum time that a job can run.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; highlight=&amp;quot;6&amp;quot;&amp;gt;&lt;br /&gt;
$ scontrol show partitions&lt;br /&gt;
PartitionName=single                                                                 &lt;br /&gt;
   AllowGroups=ALL AllowAccounts=ALL AllowQos=ALL                                    &lt;br /&gt;
   AllocNodes=ALL Default=NO QoS=single                                              &lt;br /&gt;
   DefaultTime=NONE DisableRootJobs=NO ExclusiveUser=NO GraceTime=0 Hidden=NO        &lt;br /&gt;
   MaxNodes=UNLIMITED MaxTime=7-00:00:00 MinNodes=1 LLN=NO MaxCPUsPerNode=UNLIMITED  &lt;br /&gt;
   Nodes=cn[001-168]                                                                 &lt;br /&gt;
   PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO        &lt;br /&gt;
   OverTimeLimit=NONE PreemptMode=OFF                                                &lt;br /&gt;
   State=UP TotalCPUs=1344 TotalNodes=168 SelectTypeParameters=NONE                  &lt;br /&gt;
   DefMemPerNode=UNLIMITED MaxMemPerNode=UNLIMITED                                   &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, with &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; under the &amp;lt;code&amp;gt;TIMELIMIT&amp;lt;/code&amp;gt; column:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sinfo                                                     &lt;br /&gt;
PARTITION  AVAIL  TIMELIMIT  NODES  STATE NODELIST               &lt;br /&gt;
single        up 7-00:00:00      1 drain* cn097                  &lt;br /&gt;
single        up 7-00:00:00      1  maint cn002                  &lt;br /&gt;
single        up 7-00:00:00      4 drain* cn[001,061,133,154]    &lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
[[Category:TALC]]&lt;br /&gt;
[[Category:Guides]]&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=TALC_Cluster_Guide&amp;diff=1604</id>
		<title>TALC Cluster Guide</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=TALC_Cluster_Guide&amp;diff=1604"/>
		<updated>2022-01-06T17:13:14Z</updated>

		<summary type="html">&lt;p&gt;Fridman: minor gramatical edits&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Message Box&lt;br /&gt;
|icon=Security Icon.png&lt;br /&gt;
|title=Cybersecurity awareness at the U of C&lt;br /&gt;
|message=Please note that there are typically about 950 phishing attempts targeting University of Calgary accounts each month. This is just a reminder to be careful about computer security issues, both at home and at the University. Please visit https://it.ucalgary.ca/it-security for more information, tips on secure computing, and how to report suspected security problems.}}&lt;br /&gt;
&lt;br /&gt;
This guide gives an overview of the Teaching and Learning Cluster (TALC) at the University of Calgary and is intended for new account holders getting started on TALC. It covers topics such as hardware and performance characteristics, available software, usage policies, and how to log in and run jobs. &lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
TALC is a cluster of computers created by Research Computing Services (RCS) in response to requests for a central computing resource to support academic courses and workshops offered at the University of Calgary. It is a complement to the Advanced Research Computing (ARC) cluster that is used for research, rather than educational purposes. The software environment in the TALC and ARC clusters is very similar and workflows between the two clusters are identical.  What students learn about using TALC will have direct applicability to using ARC should they go on to use ARC for research work. &lt;br /&gt;
&lt;br /&gt;
If you are the instructor for a course that could benefit from using TALC, please review this guide and the [[TALC Terms of Use]] and then contact us at support@hpc.ucalgary.ca to discuss your requirements.  &lt;br /&gt;
&lt;br /&gt;
Please note that in order to ensure that the appropriate software is available, student accounts are in place, and appropriate training has been provided for your teaching assistants, it is best to start this discussion several months prior to the start of the course.&lt;br /&gt;
&lt;br /&gt;
If you are a student in a course using TALC, please review this guide for basic instructions in using the cluster.  Questions should first be directed to the teaching assistants or instructor for your course.&lt;br /&gt;
&lt;br /&gt;
===Obtaining an account===&lt;br /&gt;
TALC account requests are expected to be submitted by the course instructor rather than by individual students. You must have a University of Calgary IT account in order to use TALC. If you do not have a University of Calgary IT account or email address, please register for one at https://itregport.ucalgary.ca/. In order to ensure TALC is provisioned in time for a course start date, the instructor should submit the initial list of @ucalgary.ca accounts needed for the course two weeks before the start date.&lt;br /&gt;
&lt;br /&gt;
=== Getting Support ===&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=Need Help or have other TALC Related Questions?&lt;br /&gt;
|message=&#039;&#039;&#039;Students&#039;&#039;&#039;, please send TALC-related questions to your course instructor or teaching assistants.&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Course instructors and TAs&#039;&#039;&#039;, please report system issues to support@hpc.ucalgary.ca.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
==Hardware==&lt;br /&gt;
The TALC cluster is composed of repurposed research clusters that are a few generations old. As a result, individual processor performance will not be comparable to that of the latest processors, but should be sufficient for educational purposes and course work.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Description&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores, Model, and Year&lt;br /&gt;
!Installed Memory&lt;br /&gt;
!GPU&lt;br /&gt;
!Network&lt;br /&gt;
|-&lt;br /&gt;
|gpu&lt;br /&gt;
|GPU Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores, 2x Intel(R) Xeon(R) Bronze 3204 CPU @ 1.90GHz (2019)&lt;br /&gt;
|192 GB&lt;br /&gt;
|5x NVIDIA Corporation TU104GL [Tesla T4]&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu24&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|15&lt;br /&gt;
|24 cores, 4x Six-Core AMD Opteron(tm) Processor 8431 (2009)&lt;br /&gt;
|256 GB&lt;br /&gt;
|N/A&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|2&lt;br /&gt;
|32 cores, 4x Intel(R) Xeon(R) CPU E7- 4830  @ 2.13GHz (2015)&lt;br /&gt;
|1024 GB&lt;br /&gt;
|N/A&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Storage===&lt;br /&gt;
{{Message Box&lt;br /&gt;
| title=No Backup Policy!&lt;br /&gt;
| message=You are responsible for your own backups.   Since accounts on TALC and related data are removed shortly after the associated course has finished, you should download anything you need to save to your own computer before the end of the course.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
TALC is connected to a network disk storage system. This storage is split across the &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; file systems.  &lt;br /&gt;
====&amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;: Home file system====&lt;br /&gt;
Each user has a directory under /home; it is the default working directory when logging in to TALC. Each home directory has a per-user quota of 500 GB. This limit is fixed and cannot be increased.&lt;br /&gt;
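To see how much space your home directory currently uses, one simple check (using the standard &amp;lt;code&amp;gt;du&amp;lt;/code&amp;gt; utility) is:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Summarize the total disk usage of your home directory&lt;br /&gt;
$ du -sh $HOME&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;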
&lt;br /&gt;
Note on file sharing: Due to security concerns, permissions set using &amp;lt;code&amp;gt;chmod&amp;lt;/code&amp;gt; to allow other users to read or write to your home directory will be reverted automatically by a system process unless an explicit exception is made. If you need to share files with other users on the TALC cluster, please write to support@hpc.ucalgary.ca to ask for such an exception.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt;: Scratch file system for large job-oriented storage====&lt;br /&gt;
Associated with each job, under the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; directory, a subdirectory is created that can be referenced in job scripts as &amp;lt;code&amp;gt;/scratch/${SLURM_JOB_ID}&amp;lt;/code&amp;gt;. You can use that directory for temporary files needed during the course of a job. Up to 30 TB of storage may be used per user (total for all your jobs) in the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; file system.&lt;br /&gt;
&lt;br /&gt;
Data in &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; associated with a given job will be deleted automatically, without exception, five days after the job finishes.&lt;br /&gt;
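&lt;br /&gt;
As an illustration, a batch script might work in the per-job scratch directory and copy any results worth keeping back to &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt; before the job ends. A minimal sketch (the program and result file names here are hypothetical):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=cpu24&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
&lt;br /&gt;
# Work in the per-job scratch directory created by the scheduler&lt;br /&gt;
cd /scratch/${SLURM_JOB_ID}&lt;br /&gt;
&lt;br /&gt;
# Run your program, writing temporary files here (hypothetical command)&lt;br /&gt;
./my_program&lt;br /&gt;
&lt;br /&gt;
# Copy results you want to keep back to /home before automatic cleanup&lt;br /&gt;
cp results.txt $HOME/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;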
&lt;br /&gt;
== Software ==&lt;br /&gt;
{{Message Box&lt;br /&gt;
| title=Software Package Requests&lt;br /&gt;
| message=Course instructors or teaching assistants should write to support@hpc.ucalgary.ca if additional software is required for their course.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
All TALC nodes run the latest version of CentOS 7 with the same set of base software packages. For your convenience, we have packaged commonly used software and dependencies as modules available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt;. If your software package is not available as a module, you may also try Anaconda, which allows users to manage and install custom packages in an isolated environment.&lt;br /&gt;
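For example, after loading the Anaconda module described below, you could create and activate an isolated environment (the environment name and package here are illustrative):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module load python/anaconda-3.6-5.1.0&lt;br /&gt;
# Create an environment named myenv with numpy installed&lt;br /&gt;
$ conda create -n myenv numpy&lt;br /&gt;
# Activate it (source activate is the older conda activation syntax)&lt;br /&gt;
$ source activate myenv&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;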
&lt;br /&gt;
For a list of the packages that have been made available, please see [[ARC Software pages]].&lt;br /&gt;
&lt;br /&gt;
=== Modules ===&lt;br /&gt;
The environment for using some of the installed software is set up through the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
Software packages bundled as a module will be available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt; and can be listed with the &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module avail&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To enable Python, load the Python module by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module load python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To unload the Python module, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module remove python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To see currently loaded modules, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module list&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Using TALC==&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=Usage subject to [[TALC Terms of Use]]&lt;br /&gt;
|message=Please review the [[TALC Terms of Use]] prior to using TALC.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
===Logging in===&lt;br /&gt;
To log in to TALC, connect using SSH to talc.ucalgary.ca. Connections to TALC are accepted only from the University of Calgary network (on campus) or through the University of Calgary General VPN (off campus).&lt;br /&gt;
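For example, from a terminal (replace &amp;lt;code&amp;gt;username&amp;lt;/code&amp;gt; with your IT account username):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ ssh username@talc.ucalgary.ca&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;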
&lt;br /&gt;
See [[Connecting to RCS HPC Systems]] for more information.&lt;br /&gt;
&lt;br /&gt;
===Working interactively===&lt;br /&gt;
TALC uses the Linux operating system. The program that responds to your typed commands and allows you to run other programs is called the Linux shell. There are several different shells available, but by default you will use one called bash. It is useful to have some knowledge of the shell and a variety of other command-line programs that you can use to manipulate files. If you are new to Linux systems, we recommend that you work through one of the many online tutorials that are available, such as the [http://www.ee.surrey.ac.uk/Teaching/Unix/index.html UNIX Tutorial for Beginners (external link)] provided by the University of Surrey. The tutorial covers fundamental topics such as creating, renaming, and deleting files and directories, producing a listing of your files, and telling how much disk space you are using. For a more comprehensive introduction to Linux, see [http://linuxcommand.sourceforge.net/tlcl.php The Linux Command Line (external link)].&lt;br /&gt;
&lt;br /&gt;
The TALC login node may be used for such tasks as editing files, compiling programs and running short tests while developing programs. We suggest that CPU-intensive workloads on the login node be restricted to under 15 minutes, as per [[General Cluster Guidelines and Policies|our cluster guidelines]]. For interactive workloads exceeding 15 minutes, use the &#039;&#039;&#039;[[Running_jobs#Interactive_jobs|salloc command]]&#039;&#039;&#039; to allocate an interactive session on a compute node.&lt;br /&gt;
&lt;br /&gt;
The default &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; allocation is 1 CPU and 1 GB of memory. Adjust this by specifying &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; (number of CPUs) and &amp;lt;code&amp;gt;--mem&amp;lt;/code&amp;gt; (memory in megabytes). You may request up to 5 hours of run time for interactive jobs.&lt;br /&gt;
 salloc --time 5:00:00 --partition cpu24 &lt;br /&gt;
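For example, an interactive session with 4 CPUs and 8 GB (8000 MB) of memory (illustrative values) could be requested with:&lt;br /&gt;
 salloc --time 2:00:00 --partition cpu24 -n 4 --mem 8000&lt;br /&gt;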
&lt;br /&gt;
===Running non-interactive jobs (batch processing)===&lt;br /&gt;
Production runs and longer test runs should be submitted as (non-interactive) batch jobs, in which the commands to be executed are listed in a script (text file). Batch job scripts are submitted using the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command, part of the Slurm job management and scheduling software. &amp;lt;code&amp;gt;#SBATCH&amp;lt;/code&amp;gt; directive lines at the beginning of the script specify the resources needed for the job (cores, memory, run time limit, and any specialized hardware needed).&lt;br /&gt;
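&lt;br /&gt;
As a sketch, a minimal batch job script might look like the following (the job name, resource values, and command are illustrative):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# Illustrative job name&lt;br /&gt;
#SBATCH --job-name=example&lt;br /&gt;
# Partition to run in&lt;br /&gt;
#SBATCH --partition=cpu24&lt;br /&gt;
# Run time limit (hh:mm:ss)&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
# Number of CPU cores&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
# Memory in megabytes&lt;br /&gt;
#SBATCH --mem=4000&lt;br /&gt;
&lt;br /&gt;
# Replace with the commands your job should run&lt;br /&gt;
echo Running on $(hostname)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Saved as, say, &amp;lt;code&amp;gt;myjob.slurm&amp;lt;/code&amp;gt; (a hypothetical file name), it would be submitted with &amp;lt;code&amp;gt;sbatch myjob.slurm&amp;lt;/code&amp;gt;.&lt;br /&gt;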
&lt;br /&gt;
Most of the information on the Running Jobs page on the Compute Canada web site is also relevant for submitting and managing batch jobs and reserving processors for interactive work on TALC.  One major difference between running jobs on the TALC and Compute Canada clusters is in selecting the type of hardware that should be used for a job. On TALC, you choose the hardware to use primarily by specifying a partition, as described below.&lt;br /&gt;
&lt;br /&gt;
===Using JupyterHub on TALC===&lt;br /&gt;
TALC has a JupyterHub server which runs a Jupyter server on one of the TALC compute nodes and provides all the necessary encryption and plumbing to deliver the notebook to your computer. To access this service you must have a TALC account. Point your browser at http://talc.ucalgary.ca and log in with your usual University of Calgary account. As of this writing, the job that runs the Jupyter notebook is allocated 1 CPU and 10 GiB of memory on a cpu24 node.&lt;br /&gt;
&lt;br /&gt;
===Selecting a partition===&lt;br /&gt;
TALC currently has the following partitions available for use. The &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cpu12&amp;lt;/code&amp;gt; partitions are backed by the same nodes. The &amp;lt;code&amp;gt;cpu12&amp;lt;/code&amp;gt; partition was created to expose only the CPUs of the GPU hardware for general-purpose use. Each GPU node has 5 Tesla T4 GPUs installed, but you may only request one per job within the TALC environment.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Description&lt;br /&gt;
!Nodes&lt;br /&gt;
!Cores&lt;br /&gt;
!Memory &lt;br /&gt;
!Memory Request Limit&lt;br /&gt;
!Time Limit&lt;br /&gt;
!GPU Request per Job&lt;br /&gt;
!Network&lt;br /&gt;
|-&lt;br /&gt;
|gpu&lt;br /&gt;
|GPU Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores&lt;br /&gt;
|192 GB&lt;br /&gt;
|190 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|1x NVIDIA Corporation TU104GL [Tesla T4]&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu12&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores&lt;br /&gt;
|192 GB&lt;br /&gt;
|190 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu24&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|15&lt;br /&gt;
|24 cores&lt;br /&gt;
|256 GB&lt;br /&gt;
|254 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|2&lt;br /&gt;
|32 cores&lt;br /&gt;
|1024 GB&lt;br /&gt;
|1022 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|}&lt;br /&gt;
There are several aspects to consider when selecting a partition, including:&lt;br /&gt;
* Resource requirements in terms of memory and CPU cores&lt;br /&gt;
* Hardware specific requirements, such as GPU or CPU Instruction Set Extensions&lt;br /&gt;
* Partition resource limits and potential wait time&lt;br /&gt;
* Software support for parallel processing using the Message Passing Interface (MPI), OpenMP, etc.&lt;br /&gt;
** E.g., since MPI can distribute memory across multiple nodes, per-node memory requirements may be lower, whereas OpenMP or single-process code restricted to one node would require a higher-memory node.&lt;br /&gt;
** Note: MPI code running on hardware with Omni-Path networking should be compiled with Omni-Path networking support. This is provided by loading the &amp;lt;code&amp;gt;openmpi/2.1.3-opa&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;openmpi/3.1.2-opa&amp;lt;/code&amp;gt; modules prior to compiling, as sketched after this list.&lt;br /&gt;
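&lt;br /&gt;
A minimal sketch of that compile workflow, assuming a hypothetical MPI source file &amp;lt;code&amp;gt;mycode.c&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Load the Omni-Path-enabled OpenMPI module before compiling&lt;br /&gt;
$ module load openmpi/2.1.3-opa&lt;br /&gt;
# Compile the MPI program with the OpenMPI compiler wrapper&lt;br /&gt;
$ mpicc -o mycode mycode.c&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;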
&lt;br /&gt;
Since resources that are requested are reserved for your job, please request only as much CPU and memory as your job requires to avoid reducing the cluster efficiency.  If you are unsure which partition to use or the specific resource requests that are appropriate for your jobs, please contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] and we would be happy to work with you.&lt;br /&gt;
&lt;br /&gt;
=== Using a partition ===&lt;br /&gt;
&lt;br /&gt;
==== Bigmem and compute-only jobs ====&lt;br /&gt;
To select the &amp;lt;code&amp;gt;cpu24&amp;lt;/code&amp;gt; partition, include the following line in your batch job script:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --partition=cpu24&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;You may also start an interactive session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt;:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ salloc --time 1:00:00 -p cpu24&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
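&lt;br /&gt;
The &amp;lt;code&amp;gt;cpu12&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;bigmem&amp;lt;/code&amp;gt; partitions are selected the same way; for example:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --partition=bigmem&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;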
&lt;br /&gt;
==== GPU jobs ====&lt;br /&gt;
On TALC, you are limited to exactly one GPU per job. Jobs that request zero GPUs, or two or more GPUs, will not be scheduled.&lt;br /&gt;
&lt;br /&gt;
To submit a job to the &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition requesting one GPU, include the following in your batch job script:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --partition=gpu&lt;br /&gt;
#SBATCH --gpus-per-node=1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As in the previous example, you may also request an interactive session on a GPU node using &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt;; just specify the &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition and the number of GPUs required. &amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ salloc --time 1:00:00 -p gpu -n 1 --gpus-per-node 1 &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;You may verify that a GPU was assigned to your job or interactive session by running &amp;lt;code&amp;gt;nvidia-smi&amp;lt;/code&amp;gt;, which shows the status of the GPU assigned to you.&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ nvidia-smi&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| NVIDIA-SMI 450.80.02    Driver Version: 450.80.02    CUDA Version: 11.0     |&lt;br /&gt;
|-------------------------------+----------------------+----------------------+&lt;br /&gt;
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |&lt;br /&gt;
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |&lt;br /&gt;
|                               |                      |               MIG M. |&lt;br /&gt;
|===============================+======================+======================|&lt;br /&gt;
|   0  Tesla T4            Off  | 00000000:3B:00.0 Off |                    0 |&lt;br /&gt;
| N/A   36C    P0    14W /  70W |      0MiB / 15109MiB |      5%      Default |&lt;br /&gt;
|                               |                      |                  N/A |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
                                                                               &lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| Processes:                                                                  |&lt;br /&gt;
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |&lt;br /&gt;
|        ID   ID                                                   Usage      |&lt;br /&gt;
|=============================================================================|&lt;br /&gt;
|  No running processes found                                                 |&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Partition limitations ====&lt;br /&gt;
In addition to the hardware limitations of the nodes within the partition, please be aware that there may also be policy limits imposed on your account for each partition. These limits restrict the number of cores, nodes, or GPUs that can be used at any given time. Since the limits are applied on a partition-by-partition basis, using resources in one partition should not affect the available resources you can use in another partition.&lt;br /&gt;
&lt;br /&gt;
These limits can be listed by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sacctmgr show qos format=Name,MaxWall,MaxTRESPU%20,MaxSubmitJobs&lt;br /&gt;
      Name     MaxWall            MaxTRESPU MaxSubmit&lt;br /&gt;
---------- ----------- -------------------- ---------&lt;br /&gt;
    normal  1-00:00:00          mem=127000M          &lt;br /&gt;
     cpu24  1-00:00:00             mem=127G          &lt;br /&gt;
    bigmem  1-00:00:00                               &lt;br /&gt;
       gpu                       gres/gpu=1          &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Time limits ===&lt;br /&gt;
Use the &amp;lt;code&amp;gt;--time&amp;lt;/code&amp;gt; directive to tell the job scheduler the maximum time that your job might run.  For example:&lt;br /&gt;
 #SBATCH --time=hh:mm:ss&lt;br /&gt;
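For instance, a job expected to run for at most two and a half hours would use:&lt;br /&gt;
 #SBATCH --time=2:30:00&lt;br /&gt;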
&lt;br /&gt;
You can use &amp;lt;code&amp;gt;scontrol show partitions&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; to see the current maximum time that a job can run.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; highlight=&amp;quot;6&amp;quot;&amp;gt;&lt;br /&gt;
$ scontrol show partitions&lt;br /&gt;
PartitionName=single                                                                 &lt;br /&gt;
   AllowGroups=ALL AllowAccounts=ALL AllowQos=ALL                                    &lt;br /&gt;
   AllocNodes=ALL Default=NO QoS=single                                              &lt;br /&gt;
   DefaultTime=NONE DisableRootJobs=NO ExclusiveUser=NO GraceTime=0 Hidden=NO        &lt;br /&gt;
   MaxNodes=UNLIMITED MaxTime=7-00:00:00 MinNodes=1 LLN=NO MaxCPUsPerNode=UNLIMITED  &lt;br /&gt;
   Nodes=cn[001-168]                                                                 &lt;br /&gt;
   PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO        &lt;br /&gt;
   OverTimeLimit=NONE PreemptMode=OFF                                                &lt;br /&gt;
   State=UP TotalCPUs=1344 TotalNodes=168 SelectTypeParameters=NONE                  &lt;br /&gt;
   DefMemPerNode=UNLIMITED MaxMemPerNode=UNLIMITED                                   &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, with &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; under the &amp;lt;code&amp;gt;TIMELIMIT&amp;lt;/code&amp;gt; column:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sinfo                                                     &lt;br /&gt;
PARTITION  AVAIL  TIMELIMIT  NODES  STATE NODELIST               &lt;br /&gt;
single        up 7-00:00:00      1 drain* cn097                  &lt;br /&gt;
single        up 7-00:00:00      1  maint cn002                  &lt;br /&gt;
single        up 7-00:00:00      4 drain* cn[001,061,133,154]    &lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
[[Category:TALC]]&lt;br /&gt;
[[Category:Guides]]&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=List_of_courses_on_TALC&amp;diff=1484</id>
		<title>List of courses on TALC</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=List_of_courses_on_TALC&amp;diff=1484"/>
		<updated>2021-09-08T20:51:34Z</updated>

		<summary type="html">&lt;p&gt;Fridman: /* Current Courses */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Message Box&lt;br /&gt;
|title=Interested in using TALC for your course?&lt;br /&gt;
|message=&#039;&#039;&#039;If you are the instructor for a course that could benefit from using TALC, please contact us at support@hpc.ucalgary.ca to discuss your requirements.&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br &amp;gt;To ensure that the appropriate software is available, student accounts are in place, and appropriate training has been provided for your teaching assistants, it is best to start this discussion several months prior to the start of the course.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
== Current Courses ==&lt;br /&gt;
&lt;br /&gt;
* ENSF 619.01 - Ethan MacDonald&lt;br /&gt;
* MDSC 523 - David Anderson&lt;br /&gt;
* ENSF 619 - Roberto Medeiros de Souza&lt;br /&gt;
&lt;br /&gt;
== Previous Courses ==&lt;br /&gt;
=== 2021 Winter ===&lt;br /&gt;
* GLGY 605 - Benjamin Tutolo&lt;br /&gt;
* BMEN 415 - Ethan MacDonald&lt;br /&gt;
* MDSC 201 - David Anderson&lt;br /&gt;
&lt;br /&gt;
=== 2020 Spring ===&lt;br /&gt;
* Bioinformatics workshop Q Zhang&lt;br /&gt;
* Bioinformatics workshop Q Zhang&lt;br /&gt;
&lt;br /&gt;
===2020 Winter===&lt;br /&gt;
* DATA 623 R Walker&lt;br /&gt;
* GLGY 605 B Tutolo&lt;br /&gt;
* DATA 608 P Federl&lt;br /&gt;
* ENSF 612 J Kaur&lt;br /&gt;
&lt;br /&gt;
===2020 Winter Block Week===&lt;br /&gt;
* MDSC 395 D Anderson&lt;br /&gt;
&lt;br /&gt;
== See https://www.ucalgary.ca/pubs/calendar/current/academic-schedule.html for the UofC&#039;s academic schedule. ==&lt;br /&gt;
[[Category:TALC]]&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=List_of_courses_on_TALC&amp;diff=1483</id>
		<title>List of courses on TALC</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=List_of_courses_on_TALC&amp;diff=1483"/>
		<updated>2021-09-07T20:40:15Z</updated>

		<summary type="html">&lt;p&gt;Fridman: /* Current Courses */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Message Box&lt;br /&gt;
|title=Interested in using TALC for your course?&lt;br /&gt;
|message=&#039;&#039;&#039;If you are the instructor for a course that could benefit from using TALC, please contact us at support@hpc.ucalgary.ca to discuss your requirements.&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br &amp;gt;To ensure that the appropriate software is available, student accounts are in place, and appropriate training has been provided for your teaching assistants, it is best to start this discussion several months prior to the start of the course.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
== Current Courses ==&lt;br /&gt;
&lt;br /&gt;
* ENSF 619.01 - Ethan MacDonald&lt;br /&gt;
* MDSC 523 - David Anderson&lt;br /&gt;
&lt;br /&gt;
== Previous Courses ==&lt;br /&gt;
=== 2021 Winter ===&lt;br /&gt;
* GLGY 605 - Benjamin Tutolo&lt;br /&gt;
* BMEN 415 - Ethan MacDonald&lt;br /&gt;
* MDSC 201 - David Anderson&lt;br /&gt;
&lt;br /&gt;
=== 2020 Spring ===&lt;br /&gt;
* Bioinformatics workshop Q Zhang&lt;br /&gt;
* Bioinformatics workshop Q Zhang&lt;br /&gt;
&lt;br /&gt;
===2020 Winter===&lt;br /&gt;
* DATA 623 R Walker&lt;br /&gt;
* GLGY 605 B Tutolo&lt;br /&gt;
* DATA 608 P Federl&lt;br /&gt;
* ENSF 612 J Kaur&lt;br /&gt;
&lt;br /&gt;
===2020 Winter Block Week===&lt;br /&gt;
* MDSC 395 D Anderson&lt;br /&gt;
&lt;br /&gt;
== See https://www.ucalgary.ca/pubs/calendar/current/academic-schedule.html for the UofC&#039;s academic schedule. ==&lt;br /&gt;
[[Category:TALC]]&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=List_of_courses_on_TALC&amp;diff=1482</id>
		<title>List of courses on TALC</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=List_of_courses_on_TALC&amp;diff=1482"/>
		<updated>2021-09-07T19:03:25Z</updated>

		<summary type="html">&lt;p&gt;Fridman: /* Previous Courses */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Message Box&lt;br /&gt;
|title=Interested in using TALC for your course?&lt;br /&gt;
|message=&#039;&#039;&#039;If you are the instructor for a course that could benefit from using TALC, please contact us at support@hpc.ucalgary.ca to discuss your requirements.&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br &amp;gt;To ensure that the appropriate software is available, student accounts are in place, and appropriate training has been provided for your teaching assistants, it is best to start this discussion several months prior to the start of the course.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
== Current Courses ==&lt;br /&gt;
&lt;br /&gt;
* ENSF 619.01 - Ethan MacDonald&lt;br /&gt;
&lt;br /&gt;
== Previous Courses ==&lt;br /&gt;
=== 2021 Winter ===&lt;br /&gt;
* GLGY 605 - Benjamin Tutolo&lt;br /&gt;
* BMEN 415 - Ethan MacDonald&lt;br /&gt;
* MDSC 201 - David Anderson&lt;br /&gt;
&lt;br /&gt;
=== 2020 Spring ===&lt;br /&gt;
* Bioinformatics workshop Q Zhang&lt;br /&gt;
* Bioinformatics workshop Q Zhang&lt;br /&gt;
&lt;br /&gt;
===2020 Winter===&lt;br /&gt;
* DATA 623 R Walker&lt;br /&gt;
* GLGY 605 B Tutolo&lt;br /&gt;
* DATA 608 P Federl&lt;br /&gt;
* ENSF 612 J Kaur&lt;br /&gt;
&lt;br /&gt;
===2020 Winter Block Week===&lt;br /&gt;
* MDSC 395 D Anderson&lt;br /&gt;
&lt;br /&gt;
== See https://www.ucalgary.ca/pubs/calendar/current/academic-schedule.html for the UofC&#039;s academic schedule. ==&lt;br /&gt;
[[Category:TALC]]&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=List_of_courses_on_TALC&amp;diff=1308</id>
		<title>List of courses on TALC</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=List_of_courses_on_TALC&amp;diff=1308"/>
		<updated>2021-04-14T16:31:13Z</updated>

		<summary type="html">&lt;p&gt;Fridman: /* 2020 Winter Block Week */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Message Box&lt;br /&gt;
|title=Interested in using TALC for your course?&lt;br /&gt;
|message=&#039;&#039;&#039;If you are the instructor for a course that could benefit from using TALC, please contact us at support@hpc.ucalgary.ca to discuss your requirements.&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br &amp;gt;To ensure that the appropriate software is available, student accounts are in place, and appropriate training has been provided for your teaching assistants, it is best to start this discussion several months prior to the start of the course.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
__NOTOC__&lt;br /&gt;
== Current Courses ==&lt;br /&gt;
* GLGY 605 - Benjamin Tutolo&lt;br /&gt;
* BMEN 415 - Ethan MacDonald&lt;br /&gt;
* MDSC 201 - David Anderson&lt;br /&gt;
&lt;br /&gt;
== Previous Courses ==&lt;br /&gt;
=== 2020 Spring ===&lt;br /&gt;
* Bioinformatics workshop Q Zhang&lt;br /&gt;
* Bioinformatics workshop Q Zhang&lt;br /&gt;
&lt;br /&gt;
===2020 Winter===&lt;br /&gt;
* DATA 623 R Walker&lt;br /&gt;
* GLGY 605 B Tutolo&lt;br /&gt;
* DATA 608 P Federl&lt;br /&gt;
* ENSF 612 J Kaur&lt;br /&gt;
&lt;br /&gt;
===2020 Winter Block Week===&lt;br /&gt;
* MDSC 395 D Anderson&lt;br /&gt;
&lt;br /&gt;
== See https://www.ucalgary.ca/pubs/calendar/current/academic-schedule.html for the UofC&#039;s academic schedule. ==&lt;br /&gt;
[[Category:TALC]]&lt;/div&gt;</summary>
		<author><name>Fridman</name></author>
	</entry>
</feed>