<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://rcs.ucalgary.ca/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Darcy</id>
	<title>RCSWiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://rcs.ucalgary.ca/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Darcy"/>
	<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/Special:Contributions/Darcy"/>
	<updated>2026-04-05T23:12:24Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.3</generator>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=How_to_get_a_MARC_account&amp;diff=3798</id>
		<title>How to get a MARC account</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=How_to_get_a_MARC_account&amp;diff=3798"/>
		<updated>2025-06-17T20:33:43Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* Requesting a MARC account */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the process of obtaining an account and how to login to the MARC Cluster.  If you are looking for information about the MARC cluster hardware, please see the [[Marc Cluster Guide]].&lt;br /&gt;
&lt;br /&gt;
== Obtaining a MARC account ==&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
* Access to MARC will only be granted to those with one or more [https://ucalgary.service-now.com/kb_view.do?sysparm_article=KB0030163 SCDS] project shares, as this is the only way to get data into MARC.&lt;br /&gt;
* Account applicants must have an &#039;&#039;&#039;active UCalgary IT account&#039;&#039;&#039;. For external researchers/collaborators, [[External collaborators|General Associate]] status is required. &lt;br /&gt;
&lt;br /&gt;
=== Requesting a MARC account ===&lt;br /&gt;
# Navigate to the IT Homepage: [https://www.ucalgary.ca/it www.ucalgary.ca/it]&lt;br /&gt;
# Click Login in the upper right corner&lt;br /&gt;
# Click &amp;quot;Order Something&amp;quot;&lt;br /&gt;
# Click &amp;quot;Research Computing&amp;quot;&lt;br /&gt;
# Click &amp;quot;Medical Advanced Research Computing (MARC)&amp;quot;&lt;br /&gt;
# Select &amp;quot;Add Access&amp;quot; from the &amp;quot;What would you like to do?&amp;quot; box&lt;br /&gt;
# Choose your SCDS share from the next box&lt;br /&gt;
# Type a synopsis of the work you plan to do into the &amp;quot;Business Reason&amp;quot; box.&lt;br /&gt;
# You may leave the Additional Information box empty&lt;br /&gt;
# You will receive a reply when your account is ready.&lt;br /&gt;
&lt;br /&gt;
== Logging in to MARC ==&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
In order to access MARC, please ensure that you have:&lt;br /&gt;
# Enabled MFA on your University of Calgary IT account.&lt;br /&gt;
# Requested and received your MARC account&lt;br /&gt;
&lt;br /&gt;
=== Accessing MARC ===&lt;br /&gt;
MARC can only be accessed via the Citrix NetScaler. You can access Citrix directly from both on and off campus without using the IT General VPN.&lt;br /&gt;
# Navigate to https://myappmf.ucalgary.ca&lt;br /&gt;
# You may need to log in with MFA.&lt;br /&gt;
&lt;br /&gt;
==== Add Applications ====&lt;br /&gt;
After logging in to Citrix, you will need to launch PuTTY to connect to MARC.&lt;br /&gt;
# On the Citrix NetScaler page, click on Apps in the top navigation menu.&lt;br /&gt;
# Click the PuTTY icon to start PuTTY [[File:RdhAddPutty.png|thumb|none]]&lt;br /&gt;
# After launching PuTTY, you should see the PuTTY configuration menu: [[File:RdhPuttyConfiguration.png|thumb|none]]&lt;br /&gt;
# Enter &amp;quot;&amp;lt;code&amp;gt;marc.ucalgary.ca&amp;lt;/code&amp;gt;&amp;quot; as the host name. Optionally, you may save this as the default setting (although the settings may not be persistent in this environment).&lt;br /&gt;
# Click Open.  You will be presented with a black terminal screen prompting for a username.[[File:RdhPuttyLogin.PNG|thumb|none]]&lt;br /&gt;
# Enter your University of Calgary IT username and press Enter.&lt;br /&gt;
# Enter your University of Calgary IT password when prompted.&lt;br /&gt;
# If the log in is successful, you should see the MARC banner:&lt;br /&gt;
&amp;lt;PRE&amp;gt;&lt;br /&gt;
login as: myusername&lt;br /&gt;
myusername@marc.ucalgary.ca&#039;s password:&lt;br /&gt;
Last login: Thu Apr  9 13:12:21 2020&lt;br /&gt;
===========================================================================&lt;br /&gt;
                  ___________________________________&lt;br /&gt;
                      _   _    __     ____       __&lt;br /&gt;
                      /  /|    / |    /    )   /    )&lt;br /&gt;
                  ---/| /-|---/__|---/___ /---/------&lt;br /&gt;
                    / |/  |  /   |  /    |   /&lt;br /&gt;
                  _/__/___|_/____|_/_____|__(____/___&lt;br /&gt;
                         marc.ucalgary.ca&lt;br /&gt;
               Problems or deficiencies? Send email to:&lt;br /&gt;
                       support@hpc.ucalgary.ca&lt;br /&gt;
&lt;br /&gt;
===========================================================================&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
SYSTEM NOTICES:&lt;br /&gt;
[myusername@marc ~]$&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/PRE&amp;gt;&lt;br /&gt;
Now you may return to the [[Marc Cluster Guide]] page&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:MARC]]&lt;br /&gt;
[[Category:Guides]]&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Storage_Options&amp;diff=3767</id>
		<title>Storage Options</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Storage_Options&amp;diff=3767"/>
		<updated>2025-03-20T19:25:37Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* National Data Management Infrastructure */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There are a few options researchers can take advantage of when storing their research data. Please take into account the purpose of the storage, appropriate research data management principles, and the data classification when choosing an appropriate storage solution. &lt;br /&gt;
&lt;br /&gt;
= Data Classification =&lt;br /&gt;
Please review the different data classifications that are outlined by the [https://ucalgary.ca/legal-services/sites/default/files/teams/1/Standards-Legal-Information-Security-Classification-Standard.pdf Information Security Classification Standard]. There are 4 levels of data classification which are summarised in the table below.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Level&lt;br /&gt;
! Description&lt;br /&gt;
! Example&lt;br /&gt;
|-&lt;br /&gt;
| Level 1&lt;br /&gt;
| Public&lt;br /&gt;
|&lt;br /&gt;
* Reference data sets&lt;br /&gt;
* Published research data&lt;br /&gt;
|-&lt;br /&gt;
| Level 2&lt;br /&gt;
| Internal&lt;br /&gt;
|&lt;br /&gt;
* Internal memos&lt;br /&gt;
* Unpublished research data&lt;br /&gt;
* Anonymized or de-identified human subject data&lt;br /&gt;
* Library transactions and journals&lt;br /&gt;
|-&lt;br /&gt;
| Level 3&lt;br /&gt;
| Confidential&lt;br /&gt;
|&lt;br /&gt;
* Faculty/staff employment applications, personnel files, contact information&lt;br /&gt;
* Donor or prospective donor information&lt;br /&gt;
* Contracts&lt;br /&gt;
* Intellectual property&lt;br /&gt;
|-&lt;br /&gt;
| Level 4&lt;br /&gt;
| Restricted&lt;br /&gt;
|&lt;br /&gt;
* Patient identifiable health information&lt;br /&gt;
* Identifiable human subject research data&lt;br /&gt;
* Information subject to special government requirements&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
When selecting a storage option, you must use one that meets or exceeds the rated security classification.&lt;br /&gt;
&lt;br /&gt;
* See also the Collaboration, storage and file shares article in Service Now:&lt;br /&gt;
: https://ucalgary.service-now.com/it?id=it_catalog_by_category&amp;amp;sys_id=4dbb82ee13661200c524fc04e144b044&lt;br /&gt;
&lt;br /&gt;
= Research Data Management =&lt;br /&gt;
We recommend you follow good Research Data Management (RDM) practices and ensure you have a Data Management Plan (DMP) created to guide your data&#039;s life-cycle. A DMP helps support the FAIR (Findable, Accessible, Interoperable, and Reusable) principles of data management.&lt;br /&gt;
&lt;br /&gt;
Please consider contacting Libraries and Cultural Resources for assistance.&lt;br /&gt;
&lt;br /&gt;
=== Resources ===&lt;br /&gt;
&lt;br /&gt;
* DMP Assistant is a tool created specifically for Canadian scholars that aims to meet Tri-Agency requirements. It is available at: https://assistant.portagenetwork.ca/&lt;br /&gt;
* For guidance on general data management and developing a DMP, consult https://library.ucalgary.ca/guides/researchdatamanagement or contact research.data@ucalgary.ca.&lt;br /&gt;
* For support using PRISM Dataverse, the University of Calgary&#039;s institutional data repository, contact digitize@ucalgary.ca.&lt;br /&gt;
* If you need to share and preserve your large post-publication data set for a mandated period of time, consider using the national Federated Research Data Repository (FRDR). Learn more at https://www.frdr-dfdr.ca/repo/. FRDR aligns with Tri-Agency Principles as a platform for preservation, retention, and sharing of research data. See: [http://www.science.gc.ca/eic/site/063.nsf/eng/h_83F7624E.html Tri-Agency Statement of Principles on Digital Data Management]&lt;br /&gt;
&lt;br /&gt;
= University of Calgary IT storage services =&lt;br /&gt;
You can learn more about Information Technologies Storage solutions at https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=d785de4e1b3ed41422ba4158dc4bcbf1&lt;br /&gt;
&lt;br /&gt;
== OneDrive for Business ==&lt;br /&gt;
OneDrive for Business is a storage solution provided by Microsoft and is available to all faculty and staff.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Capacity&lt;br /&gt;
| 5 TB with quota increases [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=438e6d8313896a0053f2d7b2e144b0b9 available on request].&lt;br /&gt;
|-&lt;br /&gt;
! Classification&lt;br /&gt;
| Level 1 - 4&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
You may use OneDrive for Business to store your personal and work-related files. Files stored in OneDrive are private to you by default, but you have the option to share and collaborate with others. &lt;br /&gt;
OneDrive for Business cannot be used as a department or project share space. There is no group/lab offering with OneDrive. &lt;br /&gt;
&lt;br /&gt;
While OneDrive provides a secure and compliant location from an IT security standpoint, it is not an adequate location for data that the PI remains accountable for during the 5 years following completion of a study. This is not a security issue, but a data management issue.&lt;br /&gt;
&lt;br /&gt;
For example, if a study stored all of its records in the personal OneDrive of one of the researchers and that researcher left the university, the OneDrive and its contents would be gone in 30 days.&lt;br /&gt;
&lt;br /&gt;
Microsoft provides automation capabilities for its Office 365 products. If you have a Windows machine, you can use the automation product &#039;Power Automate&#039; (formerly &#039;Flow&#039;) to copy a file to a local file system when a new file is created on OneDrive.&lt;br /&gt;
&lt;br /&gt;
To back up data residing on ARC to your personal OneDrive allocation please see:  [[How to transfer data#rclone: rsync for cloud storage]]&lt;br /&gt;
&lt;br /&gt;
OneDrive requires Multi-Factor Authentication (MFA) enabled on your University of Calgary IT account. &lt;br /&gt;
&lt;br /&gt;
More information can be found in the following article: https://ucalgary.service-now.com/kb_view.do?sysparm_article=KB0032351&lt;br /&gt;
&lt;br /&gt;
University of Calgary OneDrive data is reportedly hosted in Canada (Markham, Ontario).&lt;br /&gt;
&lt;br /&gt;
=== Support for OneDrive ===&lt;br /&gt;
If you have questions, please contact the [https://ucalgary.service-now.com/it UService Support Centre].&lt;br /&gt;
&lt;br /&gt;
=== Other Resources ===&lt;br /&gt;
For more information on OneDrive for Business:&lt;br /&gt;
* Operating Level of Agreement KB0032404 (https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=7f57bddcdb56a3047cab5068dc9619b6)&lt;br /&gt;
*OneDrive for Business Getting Started KB0032351 (https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=60994170db2da7487cab5068dc961900)&lt;br /&gt;
*If you are above 90% of your OneDrive quota, you can request an increase here: https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=438e6d8313896a0053f2d7b2e144b0b9 Please note: Microsoft will only increase an allocation while the cloud storage is more than 90% full. Please log into your O365 cloud account to review your usage before making your request.&lt;br /&gt;
&lt;br /&gt;
Any questions regarding whether data hosted on OneDrive is subject to US jurisdiction discovery or access should be directed to:&lt;br /&gt;
*https://cumming.ucalgary.ca/research-institutes/csm-research-services/legal-research-services (CSM researchers)&lt;br /&gt;
*https://research.ucalgary.ca/contact/research-services (non-CSM researchers)&lt;br /&gt;
*https://www.ucalgary.ca/legalservices/ (teaching/learning and other non-research enquiries)&lt;br /&gt;
&lt;br /&gt;
== Office365 SharePoint for research groups ==&lt;br /&gt;
To be determined....&lt;br /&gt;
&lt;br /&gt;
Researchers will, at some point in the future, be able to request an Office 365 SharePoint site for a group, which could serve as a group cloud-sharing platform.&lt;br /&gt;
&lt;br /&gt;
* The official service page:&lt;br /&gt;
: https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=b55f2f72132f5240b5b4df82e144b085&lt;br /&gt;
&lt;br /&gt;
= University of Calgary RCS storage services =&lt;br /&gt;
&lt;br /&gt;
== Secure Compute Data Storage (SCDS) ==&lt;br /&gt;
Secure Compute Data Storage (SCDS) is a service provided by Research Computing Services that allows researchers to store restricted and confidential data. Collaboration with Level 4 data stored in SCDS is possible using ShareFile, a secure file-sharing and collaboration tool by Citrix.&lt;br /&gt;
 &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Capacity&lt;br /&gt;
| 10 GB or more&lt;br /&gt;
|-&lt;br /&gt;
! Classification&lt;br /&gt;
| Level 4&lt;br /&gt;
|-&lt;br /&gt;
! Learn More&lt;br /&gt;
| Visit [https://it.ucalgary.ca/secure-computing-platform The SCDS Website]&lt;br /&gt;
|-&lt;br /&gt;
! Request Access&lt;br /&gt;
| Visit [https://ucalgary.service-now.com/kb_view.do?sysparm_article=KB0030163 ServiceNow to request access]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== ResearchFS ==&lt;br /&gt;
ResearchFS is a University of Calgary-hosted SMB/CIFS storage solution funded and operated by RCS. It is available by request to faculty and staff with active research data.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Capacity&lt;br /&gt;
| 1 TB with quota increases available on request. &lt;br /&gt;
|-&lt;br /&gt;
! Classification&lt;br /&gt;
| Level 1 - 2&lt;br /&gt;
|-&lt;br /&gt;
! Request Access&lt;br /&gt;
| Visit [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=fe66b3a7db297300897e4b8b0b96199d ServiceNow to request access]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Service Description ===&lt;br /&gt;
You may use ResearchFS to store your active research data files. ResearchFS is intended to be used as a research group or project share. ResearchFS is available on campus, or off campus using the IT-supported VPN client. Information on how to download and install the VPN client can be found here: https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=880e71071381ae006f3afbb2e144b05c (IT account login may be required).&lt;br /&gt;
All ResearchFS users must have a University of Calgary IT account.&lt;br /&gt;
&lt;br /&gt;
=== Data recovery ===&lt;br /&gt;
ResearchFS takes daily snapshots shortly after midnight and keeps them for 30 days. You should be able to recover a deleted file for up to 30 days, provided it was in your share overnight. If you create and delete a file within the same day, no snapshot will be available to recover it from. ResearchFS presents backups using the Windows OS &#039;previous versions&#039; functionality. If you are not familiar with using this, or if you are on a Linux or macOS device, you can request a restore through ServiceNow.&lt;br /&gt;
&lt;br /&gt;
For backup, we replicate changes to a distant data center every hour. The storage hardware that hosts your data is located in the basement of the Math Sciences building and our backup is in the HRIC building, so in case of an on-campus disaster, your data should be safe.&lt;br /&gt;
&lt;br /&gt;
=== Support for ResearchFS ===&lt;br /&gt;
If you have questions, please contact the IT Support Centre.&lt;br /&gt;
: Mon – Fri: 8:30 am – 5:00 pm; Sat, Sun &amp;amp; holidays: 10:00 am – 2:00 pm.&lt;br /&gt;
: Live Chat: ucalgary.ca/it&lt;br /&gt;
: Email: itsupport@ucalgary.ca&lt;br /&gt;
: Phone: 403.210.9300&lt;br /&gt;
: In person: 773 Math Science&lt;br /&gt;
&lt;br /&gt;
== ARC Cluster Storage ==&lt;br /&gt;
ARC storage is used to &#039;&#039;&#039;support workflows on the ARC computing cluster&#039;&#039;&#039;. The expectation is that storage on ARC will only be used for active and upcoming computational projects. &lt;br /&gt;
It is not suitable for long-term or archival storage as it is not backed-up and is not guaranteed to be available for the time periods that are typical of archiving.&lt;br /&gt;
ARC is a research cluster, which means it has high performance but can be stopped for required maintenance when needed. &lt;br /&gt;
Thus, ARC cannot be relied on for any kind of service that requires constant availability. &lt;br /&gt;
This means, in turn, that ARC&#039;s storage cannot and &#039;&#039;&#039;should not be used as a main storage facility for research data&#039;&#039;&#039;. &lt;br /&gt;
The &#039;&#039;&#039;master copy&#039;&#039;&#039; of research data should be &#039;&#039;&#039;stored elsewhere&#039;&#039;&#039;, and only the part of that data needed for computational analysis is expected to be copied to ARC.&lt;br /&gt;
&lt;br /&gt;
=== ARC Home Directories ===&lt;br /&gt;
Every user account on ARC has a static 500 GB allocation of storage and a maximum of 1.5 million files (including directories). This cannot be increased or decreased. Home directory storage is connected via a network file system to the rest of the cluster and supports fast data transfer to memory on compute nodes. This also means that basic file system commands (like &amp;lt;code&amp;gt;ls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;find&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;du&amp;lt;/code&amp;gt;) take longer to run as the number of files in your home directory increases. In particular, we strongly encourage users to stay under 100,000 files if at all possible. This can be achieved by combining smaller data files into single larger files, using structured data formats rather than a large number of text files, or combining collections of files that will be used together into archives (tar, dar, etc.). Since top-level permissions on home directories are set to prevent other users from reading or executing, home directories are not suitable for sharing data directly with colleagues working on ARC. A Research Group Allocation is a more appropriate place for storing shared data or very large data sets that will be used as part of active computational projects.&lt;br /&gt;
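As a sketch of the archiving advice above, a directory of many small files can be combined into a single tar archive so it counts as one file against the quota (the directory and file names below are hypothetical examples):

```shell
# Create a hypothetical directory of small result files.
mkdir -p results
for i in 1 2 3; do echo "data $i" > "results/run_$i.txt"; done

# Combine them into one compressed archive, so the file count stays low.
tar -czf results.tar.gz results/

# List the archive contents to confirm what was stored.
tar -tzf results.tar.gz
```

Extracting later with &amp;lt;code&amp;gt;tar -xzf results.tar.gz&amp;lt;/code&amp;gt; restores the original directory.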
&lt;br /&gt;
=== ARC /work and /bulk Group Allocation ===&lt;br /&gt;
Any group member who wants to use the shared storage should send an email to support@hpc.ucalgary.ca to be added to the access group and CC the PI/data owner. &#039;&#039;&#039;This will confirm that the PI approves the group member&#039;s request for access to the shared storage.&#039;&#039;&#039; Please note that the access permissions inside the directory are expected to be managed by the data owners.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
All requests should answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* How much storage is requested and why is that the amount that you need?&lt;br /&gt;
A rationale for a request can be a formal data management plan, or something more informal such as a rough estimate of the size of the primary dataset used for a project and of the outputs expected from the computations you plan to run on ARC over the &#039;&#039;&#039;next year&#039;&#039;&#039;.&lt;br /&gt;
* What is the requested &#039;&#039;&#039;allocation name&#039;&#039;&#039;? (typically &amp;lt;PI name&amp;gt;_lab; for example, &amp;lt;code&amp;gt;smith_lab&amp;lt;/code&amp;gt;)&lt;br /&gt;
* What is the &#039;&#039;&#039;data classification&#039;&#039;&#039; using the University of Calgary data security classification system?&lt;br /&gt;
* Which user or users would be the &#039;&#039;&#039;owner&#039;&#039;&#039; of the allocation? (Full Name and UCalgary Email address, typically the requesting PI but there may be co-PIs)&lt;br /&gt;
* Which members of the allocation should be able to &#039;&#039;&#039;request access&#039;&#039;&#039; for new users? (Full Name and UCalgary Email address for active ARC users)&lt;br /&gt;
* What is the &#039;&#039;&#039;faculty&#039;&#039;&#039; of the owner or owners?&lt;br /&gt;
* Please provide a short description of the lab.&lt;br /&gt;
* Please provide a brief &#039;&#039;&#039;numerical estimate&#039;&#039;&#039; of the required storage space based on the &#039;&#039;&#039;projects&#039;&#039;&#039; that will use the allocation and their storage &#039;&#039;&#039;requirements&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example 1&#039;&#039;&#039;: &amp;quot;We will be processing a 1 TB dataset by performing 100 experimental runs. &lt;br /&gt;
Each experiment will be processed to produce a 6GB output, giving 600GB of the total output data. &lt;br /&gt;
We will also need 400GB additional space for post-processing and data management. &lt;br /&gt;
Thus, we would like to request &#039;&#039;&#039;2TB&#039;&#039;&#039; of shared space in total.&amp;quot;   &lt;br /&gt;
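The estimate in Example 1 can be checked with quick shell arithmetic (all numbers come from the example itself):

```shell
# Storage estimate from Example 1: input dataset + run outputs + working space.
input_gb=1000            # 1 TB input dataset
output_gb=$((100 * 6))   # 100 runs x 6 GB per run = 600 GB
extra_gb=400             # post-processing and data management space
total_gb=$((input_gb + output_gb + extra_gb))
echo "total: ${total_gb} GB"   # 2000 GB, i.e. the 2 TB requested
```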
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example 2&#039;&#039;&#039;: &amp;quot;3 members of our research group need additional shared space on ARC for their independent projects.&lt;br /&gt;
Project 1 starts with 100GB of initial data and is expected to generate 800GB of the output results. &lt;br /&gt;
Project 2 is going to use simulations and does not use any input data but is expected to generate 2TB of the simulated data for further processing.&lt;br /&gt;
The processing will require 200GB of additional space.&lt;br /&gt;
Project 3 will be working on a 1TB dataset and is expected to generate about 1TB of the output data. &lt;br /&gt;
These projects, therefore, will require 5.1TB of storage. &lt;br /&gt;
For convenience of data manipulation and management we would also like to have additional 400GB of extra storage space.&lt;br /&gt;
Therefore, we would like to request &#039;&#039;&#039;5.5TB&#039;&#039;&#039; of shared storage space in total.&amp;quot;   &lt;br /&gt;
&lt;br /&gt;
Work and Bulk storage can be considerably larger than the home directory allocations. However, there are limits on what RCS can provide as ARC storage provides high-speed access and is expensive to purchase. Typically, &#039;&#039;&#039;any request over 10TB&#039;&#039;&#039; will require some discussion. Work and Bulk allocations differ in a few ways that influence how they are used. Work storage is faster to access as part of computational jobs on ARC although the impact is small for jobs that don&#039;t involve enormous numbers of reads. Bulk storage is designed to be a target for instrument data (which is typically processed in a way that reads data a small number of times per job) and is capable of mounting instruments elsewhere on campus using SMB. A number of questions come up frequently about Work and Bulk storage and these are addressed in an [[Group Storage Allocation FAQ | FAQ]].&lt;br /&gt;
&lt;br /&gt;
= Digital Research Alliance of Canada storage services =&lt;br /&gt;
&lt;br /&gt;
== Storage on the Alliance HPC clusters ==&lt;br /&gt;
* Alliance Wiki article &amp;quot;Storage and file management&amp;quot;:&lt;br /&gt;
: https://docs.alliancecan.ca/wiki/Storage_and_file_management&lt;br /&gt;
&lt;br /&gt;
== The Alliance NextCloud ==&lt;br /&gt;
For personal or Level 1 data, you may use an external solution from the Alliance.&lt;br /&gt;
You must have an Alliance account to use this service.&lt;br /&gt;
It offers functionality similar to Dropbox or Google Drive. &lt;br /&gt;
:https://nextcloud.computecanada.ca&lt;br /&gt;
: 100 GB of storage that can be shared between your computers.&lt;br /&gt;
: Alliance documentation: https://docs.alliancecan.ca/wiki/Nextcloud&lt;br /&gt;
&lt;br /&gt;
= National Data Management Infrastructure =&lt;br /&gt;
&lt;br /&gt;
* The Alliance RDM information:&lt;br /&gt;
: https://alliancecan.ca/en/services/research-data-management&lt;br /&gt;
&lt;br /&gt;
* Alliance notes on Research Data Management:&lt;br /&gt;
: https://docs.alliancecan.ca/wiki/Research_Data_Management&lt;br /&gt;
&lt;br /&gt;
== Borealis Dataverse Repository ==&lt;br /&gt;
&lt;br /&gt;
* https://borealisdata.ca/&lt;br /&gt;
&lt;br /&gt;
Borealis, the Canadian Dataverse Repository, is a bilingual, multidisciplinary, secure, Canadian research data repository, supported by academic libraries and research institutions across Canada. Borealis supports open discovery, management, sharing, and preservation of Canadian research data.&lt;br /&gt;
&lt;br /&gt;
== Federated Research Data Repository (FRDR) ==&lt;br /&gt;
The Federated Research Data Repository (FRDR) is a suitable storage solution for long-term archival storage of research datasets used in published research work. FRDR is a bilingual publishing platform for sharing and preserving Canadian research data.&lt;br /&gt;
It is a curated, general-purpose repository, custom built for large datasets.&lt;br /&gt;
FRDR is run by the Digital Research Alliance of Canada.&lt;br /&gt;
&lt;br /&gt;
For more information on FRDR visit their web site:  https://www.frdr-dfdr.ca/repo/&lt;br /&gt;
&lt;br /&gt;
= Commercial Cloud Based Storage Options =&lt;br /&gt;
&lt;br /&gt;
== Amazon Web Services ==&lt;br /&gt;
Provided by &#039;&#039;&#039;Amazon Web Services, Inc.&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
* Pricing calculator: https://calculator.aws&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
AWS provides many different kinds of services, including &#039;&#039;&#039;storage services&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
These options can be a solution for your research needs, but:&lt;br /&gt;
* they &#039;&#039;&#039;can be expensive&#039;&#039;&#039;, depending on your needs and the amount of data;&lt;br /&gt;
* the &#039;&#039;&#039;pricing schema is complex&#039;&#039;&#039; and can be confusing for new users;&lt;br /&gt;
* the &#039;&#039;&#039;number of options can be overwhelming&#039;&#039;&#039;. Much of it is designed to provide pricing flexibility, not to increase functionality.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The key points:&lt;br /&gt;
* &#039;&#039;&#039;Uploading data&#039;&#039;&#039; to AWS is &#039;&#039;&#039;free&#039;&#039;&#039;.&lt;br /&gt;
* &#039;&#039;&#039;Storing data&#039;&#039;&#039; on AWS storage is a &#039;&#039;&#039;paid service&#039;&#039;&#039;.&lt;br /&gt;
* &#039;&#039;&#039;Downloading data&#039;&#039;&#039; from AWS storage to your computer is a &#039;&#039;&#039;paid service&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
[[Category:Administration]]&lt;br /&gt;
{{Navbox Administration}}&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack_End_User_Agreement&amp;diff=3485</id>
		<title>CloudStack End User Agreement</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack_End_User_Agreement&amp;diff=3485"/>
		<updated>2024-08-06T20:00:21Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* Best Practices */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
CloudStack is an Infrastructure as a Service provided to University of Calgary researchers.  It allows them to quickly deploy Virtual Machines to support their research.&lt;br /&gt;
&lt;br /&gt;
Researchers have a great degree of freedom in how they use CloudStack.  They are solely responsible for maintaining any VMs they deploy and for ensuring they back up any data and OS/software configuration needed to recover or rebuild their VMs.&lt;br /&gt;
&lt;br /&gt;
CloudStack is a research environment.  While the system is available 24/7, support is only available during University business hours.&lt;br /&gt;
&lt;br /&gt;
Researchers are asked to follow the best practices listed below to ensure that the system remains available for the campus community.&lt;br /&gt;
&lt;br /&gt;
==Best Practices==&lt;br /&gt;
&lt;br /&gt;
# Please stay abreast of security updates for your OS and apply them.&lt;br /&gt;
# Backup your data and software configuration to non-CloudStack hosted storage (RCS does not provide backups of VMs and their data).&lt;br /&gt;
# Do not run Windows (This infrastructure is not licensed to run Windows).&lt;br /&gt;
# The University&#039;s &amp;quot;Acceptable Use of Electronic Resources and Information Policy&amp;quot; applies to your work using a VM.  Please see [https://www.ucalgary.ca/legal-services/ucalgarys-policies-and-procedures University Policies and Procedures] and search for &amp;quot;Electronic&amp;quot;.&lt;br /&gt;
# This infrastructure is only rated to handle Level 1 and Level 2 data. Please see [https://www.ucalgary.ca/legal-services/university-legal-services/operating-standards-guidelines-forms/ University Legal Services] and select &amp;quot;Information Security Classification Standard&amp;quot; for details.&lt;br /&gt;
# You are responsible for the appropriate use of the VM by any accounts you have created.&lt;br /&gt;
# You should remove/disable accounts that are no longer required.&lt;br /&gt;
# All user accounts on the VM must have good passwords. See [https://it.ucalgary.ca/it-security/passwords-do-i-have-change-them/ here] for details on creating strong passwords.&lt;br /&gt;
# CloudStack is not meant as a High Performance Computing (HPC) number cruncher. If you have HPC needs, please see &amp;quot;[[How to get an account]]&amp;quot; for details on how to apply for an account on ARC.  Not sure what you need?  Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca].&lt;br /&gt;
# If your VM faces the outside world, please consider using appropriate security tools.  Contact [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] for assistance.&lt;br /&gt;
&lt;br /&gt;
==Important Notes==&lt;br /&gt;
In the event of a security incident, IT Operations/Security will shut down affected VMs.&lt;br /&gt;
&lt;br /&gt;
Required non-critical patches/upgrades to CloudStack will happen on Tuesday of each week.  Running VMs should not be affected.&lt;br /&gt;
&lt;br /&gt;
Non-critical patches/upgrades that require a complete restart of CloudStack will occur after one day&#039;s email notice.&lt;br /&gt;
&lt;br /&gt;
Urgent security patches to CloudStack may happen with little to no notice.&lt;br /&gt;
&lt;br /&gt;
CloudStack is provided as-is, with best effort support.  It is not suitable for mission critical, high availability services.&lt;br /&gt;
&lt;br /&gt;
Please check back with this document as changes may occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:CloudStack]]&lt;br /&gt;
{{Navbox CloudStack}}&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=RCS_Summer_School_2024&amp;diff=3361</id>
		<title>RCS Summer School 2024</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=RCS_Summer_School_2024&amp;diff=3361"/>
		<updated>2024-05-17T21:31:43Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* Registration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Research Computing Services&#039; 3rd annual summer school offers a handful of courses with a wide range of topics to help empower your research. We will cover topics including Linux/Slurm, ARC/HPC, Research Data Management (RDM) and Data Management Plans (DMPs), working with research software and workflows, plus much more. The sessions and workshops range from introductory to intermediate levels and are suitable for everyone interested in HPC research.&lt;br /&gt;
&lt;br /&gt;
The summer school will run from Monday, June 10 through to Wednesday, June 12, 2024 from 9AM to 5PM. This 3 day event is completely &#039;&#039;&#039;&#039;&#039;&amp;lt;u&amp;gt;free&amp;lt;/u&amp;gt;&#039;&#039;&#039;&#039;&#039; to all University of Calgary members.[[File:RCS Summer School 2024 Poster.png|border|center|frameless|850x850px|RCS Summer School 2024 Poster]]&lt;br /&gt;
&lt;br /&gt;
== Registration ==&lt;br /&gt;
Registration is required to attend the RCS Summer School sessions. Registration is free but limited to members of the University of Calgary.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;span class=&amp;quot;registerButton&amp;quot;&amp;gt;[https://rcs.ucalgary.ca/registration/summer-2024/ Register now]&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There will be a limit of approximately 100 seats. If you are unable to attend after registering, please cancel/modify your registration or notify us via email.&lt;br /&gt;
&lt;br /&gt;
== Schedule ==&lt;br /&gt;
The summer school sessions will be held in ICT 102 and ICT 114. Refreshments will be available in ICT 114 on all 3 days.&lt;br /&gt;
{| class=&amp;quot;wikitable table-left-aligned&amp;quot;&lt;br /&gt;
! rowspan=&amp;quot;2&amp;quot; |Time&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; |June 10&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; |June 11&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; |June 12&lt;br /&gt;
|-&lt;br /&gt;
!  width=&amp;quot;15%&amp;quot; | Track 1&lt;br /&gt;
!  width=&amp;quot;15%&amp;quot; |Track 2&lt;br /&gt;
!  width=&amp;quot;15%&amp;quot; |Track 1&lt;br /&gt;
!  width=&amp;quot;15%&amp;quot; |Track 2&lt;br /&gt;
!  width=&amp;quot;15%&amp;quot; |Track 1&lt;br /&gt;
!  width=&amp;quot;15%&amp;quot; |Track 2&lt;br /&gt;
|-&lt;br /&gt;
! width=&amp;quot;10%&amp;quot; |8:30 AM&lt;br /&gt;
|  colspan=&amp;quot;2&amp;quot; | &#039;&#039;&#039;Registration &amp;amp; check-in&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102&lt;br /&gt;
|  colspan=&amp;quot;2&amp;quot; | &#039;&#039;&#039;Registration &amp;amp; check-in&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102&lt;br /&gt;
|  colspan=&amp;quot;2&amp;quot; | &#039;&#039;&#039;Registration &amp;amp; check-in&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102&lt;br /&gt;
|-&lt;br /&gt;
!9:00 AM&lt;br /&gt;
|&#039;&#039;&#039;[[RCS Summer School 2024#Introduction to RCS|Introduction to RCS]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 9:00 AM - 9:20 AM&amp;lt;br&amp;gt;Jill Kowalchuk&lt;br /&gt;
|Refreshments &amp;lt;br&amp;gt;ICT 114&lt;br /&gt;
|&#039;&#039;&#039;The Alliance: An Introduction&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 9:00 AM - 9:20 AM&amp;lt;br&amp;gt;Brock Kahanyshyn&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Refreshments&amp;lt;br&amp;gt;ICT 114&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Introduction to containers with Apptainer|Introduction to containers with Apptainer]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 9:00 AM - 9:50 AM&amp;lt;br&amp;gt;Tannistha Nandi&lt;br /&gt;
| rowspan=&amp;quot;6&amp;quot; |Refreshments&amp;lt;br&amp;gt;ICT 114&lt;br /&gt;
|-&lt;br /&gt;
!9:30 AM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Introduction to Linux, Bash,and the command line|Introduction to Linux, Bash,&amp;lt;br&amp;gt;and the command line]]&amp;lt;br&amp;gt;&#039;&#039;&#039;ICT 102, 9:30 AM - 10:30 AM&amp;lt;br&amp;gt;Robert Fridman&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#NVIDIA|Accelerate data science workflows with NVIDIA RAPIDS]]&#039;&#039;&#039; &amp;lt;br&amp;gt;ICT 114, 9:30 AM - 11:50 AM&amp;lt;br&amp;gt;Tarini Bhatnagar &lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Introduction to HPC resources|Introduction to HPC resources]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 9:30 AM - 10:20 AM&amp;lt;br&amp;gt;Robert Fridman, Dave Schulz&lt;br /&gt;
|-&lt;br /&gt;
!10:00 AM&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Prefect for Research Workflow Development|Prefect for Research Workflow Development]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 114, 10:00 AM - 11:30 AM&amp;lt;br&amp;gt;David Deepwell, Pedro Martinez&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Managing scientific software with Conda|Managing scientific software with Conda]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 10:00 AM - 10:50 AM&amp;lt;br&amp;gt;Dmitri Rozmanov &lt;br /&gt;
|-&lt;br /&gt;
!10:30 AM&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Workshop: Hands on with Linux &amp;amp; Slurm|Hands on with Linux &amp;amp; Slurm]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 10:30 AM - 11:50 AM&amp;lt;br&amp;gt;Robert Fridman&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Linux tools &amp;amp; utilities for working with large data sets|Linux tools &amp;amp; utilities for working with large data sets]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 10:30 AM - 11:20 AM&amp;lt;br&amp;gt;Leo Leung, Dave Schulz &lt;br /&gt;
|-&lt;br /&gt;
!11:00 AM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Reproducible Data Management with Datalad|Reproducible Data Management with Datalad: Part I]]&amp;lt;br&amp;gt;&#039;&#039;&#039;ICT 114, 11:00 AM - 11:50 AM&amp;lt;br&amp;gt;David Deepwell, Pedro Martinez&lt;br /&gt;
|-&lt;br /&gt;
!11:30 AM&lt;br /&gt;
|&#039;&#039;&#039;[[RCS Summer School 2024#RCS Q&amp;amp;A period: Ask RCS anything|RCS Q&amp;amp;A period: Ask RCS anything]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 11:30 AM - 12:00 PM&amp;lt;br&amp;gt;RCS Team&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
!12:00 PM&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;Lunch break&#039;&#039;&#039;&amp;lt;br&amp;gt;12:00 PM - 1:00 PM&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;Lunch break&#039;&#039;&#039;&amp;lt;br&amp;gt;12:00 PM - 1:00 PM&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;Lunch break&#039;&#039;&#039;&amp;lt;br&amp;gt;12:00 PM - 1:00 PM&lt;br /&gt;
|-&lt;br /&gt;
! 12:30 PM&lt;br /&gt;
|-&lt;br /&gt;
!1:00 PM&lt;br /&gt;
|&#039;&#039;&#039;[[RCS Summer School 2024#Open OnDemand on ARC|Open OnDemand on ARC]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 1:00 PM - 1:20 PM&amp;lt;br&amp;gt;Leo Leung&lt;br /&gt;
| rowspan=&amp;quot;8&amp;quot; |Refreshments&amp;lt;br&amp;gt;ICT 114&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Research Data Management and Data File Management|Research Data Management and Data File Management]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 1:00 PM - 2:20 PM&amp;lt;br&amp;gt;Ingrid Reiche, Jennifer Abel, Alex Thistlewood&lt;br /&gt;
|Refreshments&amp;lt;br&amp;gt;ICT 114&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Dell|Dell &amp;amp; AMD: Machine learning with Dell &amp;amp; AMD]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 1:00 PM - 1:50 PM&lt;br /&gt;
Rob Lucas&lt;br /&gt;
|Refreshments &amp;lt;br&amp;gt;ICT 114&lt;br /&gt;
|-&lt;br /&gt;
!1:30 PM&lt;br /&gt;
|&#039;&#039;&#039;[[RCS Summer School 2024#AWS|AWS: Inspiring the art of the possible]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 1:30 PM - 1:50 PM&lt;br /&gt;
AWS&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
!2:00 PM&lt;br /&gt;
|[[RCS Summer School 2024#AWS|&#039;&#039;&#039;AWS: How AWS works with Researchers&#039;&#039;&#039;]]&amp;lt;br&amp;gt;ICT 102, 2:00 PM - 2:20 PM&lt;br /&gt;
AWS&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
!2:30 PM&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#AWS|AWS: Machine Learning with low-code workshop]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 2:30 PM - 4:50 PM&amp;lt;br&amp;gt;AWS&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Data in Motion: Navigating Storage Solutions for Active Research Data|Data in Motion: Navigating Storage Solutions for Active Research Data]]&amp;lt;br&amp;gt;&#039;&#039;&#039;ICT 102, 2:30 PM - 4:20 PM&amp;lt;br&amp;gt;Ian Percel, Jennifer Abel, Alex Thistlewood&lt;br /&gt;
|&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Reproducible Data Management with Datalad|Reproducible Data Management with Datalad: Part II]]&amp;lt;br&amp;gt;&#039;&#039;&#039;ICT 102, 2:30 PM - 3:20 PM&amp;lt;br&amp;gt;David Deepwell, Pedro Martinez&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
!3:00 PM&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
!3:30 PM&lt;br /&gt;
|&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; rowspan=&amp;quot;4&amp;quot; |&#039;&#039;&#039;End of day: 3:30 PM&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
!4:00 PM&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
!4:30 PM &lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;End of day: 4:30 PM&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
!5:00 PM &lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;End of day: 5:00 PM&#039;&#039;&#039;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Sessions ==&lt;br /&gt;
{| class=&amp;quot;wikitable table-left-aligned&amp;quot;&lt;br /&gt;
! width=&amp;quot;20%&amp;quot; |Session&lt;br /&gt;
! width=&amp;quot;20%&amp;quot; |Time and Location&lt;br /&gt;
! width=&amp;quot;60%&amp;quot; |Synopsis&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
====Introduction to RCS====&lt;br /&gt;
|June 10, 9:00AM - 9:20AM&lt;br /&gt;
ICT 102&lt;br /&gt;
|We will begin the RCS summer school with a quick introduction by Jill Kowalchuk, the Interim Director of Research Computing Services. We will introduce the RCS team, provide a high-level overview of our services, and explain how to get help and support from our analysts.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Speaker:&#039;&#039;&#039; Jill Kowalchuk&lt;br /&gt;
*&#039;&#039;&#039;Format:&#039;&#039;&#039; Lecture&lt;br /&gt;
*&#039;&#039;&#039;Level:&#039;&#039;&#039; Introductory&lt;br /&gt;
*&#039;&#039;&#039;Prerequisites:&#039;&#039;&#039; None&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
====Introduction to Linux, Bash, and the command line====&lt;br /&gt;
|June 10, 9:30AM - 10:30AM &lt;br /&gt;
ICT 102&lt;br /&gt;
|This course provides you with the essential skills to use the Linux command line effectively. Starting from the ground up, we will cover how to log in to and interact with our HPC cluster, traverse the filesystem, execute programs, and manage files.&lt;br /&gt;
This beginner-friendly session requires no prior Linux experience. We recommend bringing your own device to follow along. By the end of the course, you should be familiar with what is possible with the Linux command line.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Speaker:&#039;&#039;&#039; Robert Fridman&lt;br /&gt;
*&#039;&#039;&#039;Format:&#039;&#039;&#039; Lecture + Follow along&lt;br /&gt;
*&#039;&#039;&#039;Level:&#039;&#039;&#039; Introductory&lt;br /&gt;
*&#039;&#039;&#039;Prerequisites:&#039;&#039;&#039; None&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
====Workshop: Hands on with Linux &amp;amp; Slurm====&lt;br /&gt;
|June 10, 10:30AM - 11:50 AM &lt;br /&gt;
ICT 102&lt;br /&gt;
|This follow-up workshop comes immediately after the Introduction to Linux session. We will build on what we learned there and go into detail on how to use the HPC cluster through the Slurm scheduler.&lt;br /&gt;
This workshop will give you the skills necessary to write a simple Slurm batch script, submit jobs to Slurm, and view and manage your jobs. By the end of the course, you will be familiar with what Slurm is, how it fits into an HPC environment, and how to start using Slurm on our HPC clusters for your research.&lt;br /&gt;
&lt;br /&gt;
This is a beginner-friendly workshop. You should be familiar with the Linux command line. We recommend bringing your own device to follow along.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Speaker:&#039;&#039;&#039; Robert Fridman&lt;br /&gt;
*&#039;&#039;&#039;Format:&#039;&#039;&#039; Workshop + Hands on&lt;br /&gt;
*&#039;&#039;&#039;Level:&#039;&#039;&#039; Introductory&lt;br /&gt;
*&#039;&#039;&#039;Prerequisites:&#039;&#039;&#039; None&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
====Open OnDemand on ARC====&lt;br /&gt;
|June 10, 1:00 PM - 1:20 PM&lt;br /&gt;
ICT 102&lt;br /&gt;
| Did you know you can run a Linux desktop and graphical tools on ARC? This session will cover what ARC Open OnDemand is and how it may help with your research. We will show you how to:&lt;br /&gt;
&lt;br /&gt;
*Connect to Open OnDemand through your browser&lt;br /&gt;
*Start a graphical desktop environment in our ARC HPC cluster environment&lt;br /&gt;
*View and manage files in your home directory via Open OnDemand&lt;br /&gt;
*Connect to ARC through your web browser&lt;br /&gt;
*View the status of your submitted jobs&lt;br /&gt;
&lt;br /&gt;
By the end of this session, you will be familiar with the options available in Open OnDemand and be able to start graphical sessions through this service. This is a beginner-friendly workshop and no prior experience is necessary. We recommend bringing your own device to follow along.&lt;br /&gt;
*&#039;&#039;&#039;Speaker:&#039;&#039;&#039; Leo Leung&lt;br /&gt;
*&#039;&#039;&#039;Format:&#039;&#039;&#039; Lecture + Follow along&lt;br /&gt;
*&#039;&#039;&#039;Level:&#039;&#039;&#039; Introductory&lt;br /&gt;
*&#039;&#039;&#039;Prerequisites:&#039;&#039;&#039; None&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
====Data in Motion: Navigating Storage Solutions for Active Research Data====&lt;br /&gt;
|June 12, 9:00 AM - 10:50 AM&lt;br /&gt;
ICT 114, Track 2&lt;br /&gt;
|Planning for and requesting specialized storage for large research projects can be a daunting proposition. The variety of storage options, and the justifications expected for allocations locally at UCalgary, at national supercomputing sites, and in the public cloud, can quickly become overwhelming. This talk provides an introduction to the cost/benefit tradeoffs of different storage systems, when to reach out to the university's various support services for help with critical decisions, and basic techniques for quantitatively justifying a storage request.&lt;br /&gt;
By the end of the session, you will be familiar with the storage-related questions that should be answered when tackling large research projects and the different types of solutions that the University offers our researchers.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Speaker:&#039;&#039;&#039; Ian Percel, Jennifer Abel, Alex Thistlewood&lt;br /&gt;
*&#039;&#039;&#039;Format:&#039;&#039;&#039; Lecture&lt;br /&gt;
*&#039;&#039;&#039;Level:&#039;&#039;&#039; Introductory&lt;br /&gt;
*&#039;&#039;&#039;Prerequisites:&#039;&#039;&#039; None&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
====Reproducible Data Management with Datalad====&lt;br /&gt;
|June 11, 11:00 AM - 11:50 AM&lt;br /&gt;
ICT 114, Track 2&lt;br /&gt;
June 12, 11:00 AM - 11:50 AM&lt;br /&gt;
&lt;br /&gt;
ICT 102, Track 1&lt;br /&gt;
|Data management is critical to research. This two-part workshop introduces you to DataLad, a digital data management system based on the Git version control system.&lt;br /&gt;
Content to be covered in the two-part session includes: &lt;br /&gt;
&lt;br /&gt;
*Dataset basics,&lt;br /&gt;
*Capturing data-provenance, and&lt;br /&gt;
*Collaborative data analysis.&lt;br /&gt;
Background content will be covered before the primary hands-on training, in which attendees will create a small demonstration research project containing data provenance. Although no Git knowledge is required, familiarity with Git is strongly advised. Command line experience is required.&lt;br /&gt;
*&#039;&#039;&#039;Speaker:&#039;&#039;&#039; David Deepwell and Pedro Martinez&lt;br /&gt;
*&#039;&#039;&#039;Format:&#039;&#039;&#039; Lecture + Hands on&lt;br /&gt;
*&#039;&#039;&#039;Level:&#039;&#039;&#039; Introductory&lt;br /&gt;
*&#039;&#039;&#039;Prerequisites:&#039;&#039;&#039; Command line experience, Familiarity with Git&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
====Introduction to HPC resources====&lt;br /&gt;
|June 11, 9:30AM - 10:20AM&lt;br /&gt;
ICT 102&lt;br /&gt;
|This session is a primer for those new to high performance computing (HPC) or computing on remote resources. We will build on the foundations built from our previous Linux and Slurm introductory sessions and expand on the larger picture, including:&lt;br /&gt;
&lt;br /&gt;
*Motivation for using HPC&lt;br /&gt;
*Finding available resources on HPC systems&lt;br /&gt;
*Issues and pitfalls to avoid (such as incorrect job resource requests)&lt;br /&gt;
*Troubleshooting job failures&lt;br /&gt;
*High level overview of parallel programming with Slurm&lt;br /&gt;
*How to transfer data to/from other institutions&lt;br /&gt;
This is a beginner-friendly workshop. You should be familiar with the Linux command line. We recommend bringing your own device to follow along.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Speaker:&#039;&#039;&#039; Robert Fridman, Dave Schulz&lt;br /&gt;
*&#039;&#039;&#039;Format:&#039;&#039;&#039; Lecture&lt;br /&gt;
*&#039;&#039;&#039;Level:&#039;&#039;&#039; Introductory&lt;br /&gt;
*&#039;&#039;&#039;Prerequisites:&#039;&#039;&#039; Linux command line, Slurm&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
====Linux tools &amp;amp; utilities for working with large data sets====&lt;br /&gt;
|June 11, 10:30AM - 11:20AM&lt;br /&gt;
ICT 102&lt;br /&gt;
|This session introduces intermediate to advanced uses of the Linux environment for handling large data sets. The course will demonstrate the power of shell pipes and how you can work with large datasets using just the standard Linux tools and utilities that are built into the system.&lt;br /&gt;
We will cover some common use cases including:&lt;br /&gt;
&lt;br /&gt;
*How to download large datasets from the Internet&lt;br /&gt;
*How to parse text-based data using tools such as sed, awk, and grep&lt;br /&gt;
*How to build powerful text-mining, conversion, and visualization pipelines with just the command line&lt;br /&gt;
&lt;br /&gt;
This is an intermediate course. You should be familiar with the Linux command line and some common Linux utilities prior to the course. Some understanding of regular expressions may be useful.&lt;br /&gt;
*&#039;&#039;&#039;Speaker:&#039;&#039;&#039; Leo Leung, Dave Schulz&lt;br /&gt;
*&#039;&#039;&#039;Format:&#039;&#039;&#039; Lecture&lt;br /&gt;
*&#039;&#039;&#039;Level:&#039;&#039;&#039; Introductory to Intermediate&lt;br /&gt;
*&#039;&#039;&#039;Prerequisites:&#039;&#039;&#039; Command line experience&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
====RCS Q&amp;amp;A period: Ask RCS anything====&lt;br /&gt;
|June 11, 11:30AM - 12:00PM&lt;br /&gt;
ICT 102&lt;br /&gt;
|This is a general question-and-answer period where you may ask the Research Computing Services team questions related to RCS and HPC. Both technical and non-technical questions are welcome.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Speaker:&#039;&#039;&#039; The RCS team&lt;br /&gt;
*&#039;&#039;&#039;Format:&#039;&#039;&#039; Question &amp;amp; Answer period&lt;br /&gt;
*&#039;&#039;&#039;Level:&#039;&#039;&#039; Introductory&lt;br /&gt;
*&#039;&#039;&#039;Prerequisites:&#039;&#039;&#039; None&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
====&#039;&#039;&#039;Research Data Management and Data File Management&#039;&#039;&#039;====&lt;br /&gt;
|June 11, 1:00PM - 2:20PM&lt;br /&gt;
ICT 102&lt;br /&gt;
|Managing your digital files and research materials is critical for keeping yourself organized, collaborating, and communicating with colleagues. In this session, we will cover Research Data Management (RDM) and Data Management Plans (DMPs). We will also go over best practices in digital file management, depending on your individual and organizational needs.&lt;br /&gt;
This presentation will also discuss best practices, versioning, and how to document and share your file and folder convention using a README file.&lt;br /&gt;
By the end of this session, you should be familiar with RDM and DMP concepts to help keep your research materials organized.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Speaker:&#039;&#039;&#039; Ingrid Reiche, Jennifer Abel, Alex Thistlewood (from The University of Calgary Libraries and Cultural Resources)&lt;br /&gt;
*&#039;&#039;&#039;Format:&#039;&#039;&#039; Lecture&lt;br /&gt;
*&#039;&#039;&#039;Level:&#039;&#039;&#039; Introductory&lt;br /&gt;
*&#039;&#039;&#039;Prerequisites:&#039;&#039;&#039; None&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
====Introduction to containers with Apptainer====&lt;br /&gt;
|June 11, 2:30PM - 3:20PM&lt;br /&gt;
ICT 102&lt;br /&gt;
|Reproducible research workflows are essential for repeatability. This session will cover the basics of using containers with Apptainer, a secure container technology designed for high-performance compute clusters. We will cover:&lt;br /&gt;
&lt;br /&gt;
*How to use Apptainer to run a containerized environment&lt;br /&gt;
*How to build Apptainer containers&lt;br /&gt;
*How to deploy software inside Apptainer containers&lt;br /&gt;
*How to use Apptainer containers with your Slurm job submissions.&lt;br /&gt;
&lt;br /&gt;
The instructor for this session will present remotely and will be streamed into ICT 102. We will provide a Zoom link for those who wish to attend virtually.&lt;br /&gt;
*&#039;&#039;&#039;Speaker:&#039;&#039;&#039; Tannistha Nandi&lt;br /&gt;
*&#039;&#039;&#039;Format:&#039;&#039;&#039; Lecture + Hands on&lt;br /&gt;
*&#039;&#039;&#039;Level:&#039;&#039;&#039; Introductory&lt;br /&gt;
*&#039;&#039;&#039;Prerequisites:&#039;&#039;&#039; None&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
====Managing scientific software with Conda====&lt;br /&gt;
|June 11, 3:30PM - 4:20PM&lt;br /&gt;
ICT 102&lt;br /&gt;
|Running customized scientific software in a shared HPC environment can be challenging. In this session, we will go over how to set up customized software environments using Conda.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Speaker:&#039;&#039;&#039; Dmitri Rozmanov&lt;br /&gt;
*&#039;&#039;&#039;Level:&#039;&#039;&#039; Introductory&lt;br /&gt;
*&#039;&#039;&#039;Prerequisites:&#039;&#039;&#039; None&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
====Prefect for Research Workflow Development====&lt;br /&gt;
|June 12, 2:30PM - 3:50PM&lt;br /&gt;
ICT 102&lt;br /&gt;
|Modernize your research workflows using Prefect, an open source workflow orchestration tool. In this session we will cover some of the fundamentals of building workflows with Prefect, with examples on how to deploy Prefect on local and distributed computing infrastructure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Speaker:&#039;&#039;&#039; David Deepwell and Pedro Martinez&lt;br /&gt;
*&#039;&#039;&#039;Format:&#039;&#039;&#039; Lecture + Hands on&lt;br /&gt;
*&#039;&#039;&#039;Level:&#039;&#039;&#039; Introductory&lt;br /&gt;
*&#039;&#039;&#039;Prerequisites:&#039;&#039;&#039; None&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
====AWS: Inspiring the art of the possible====&lt;br /&gt;
‎&amp;lt;span id=&amp;quot;AWS&amp;quot;&amp;gt;‎&amp;lt;/span&amp;gt;&lt;br /&gt;
|June 11, 1:30PM - 1:50PM&lt;br /&gt;
ICT 102&lt;br /&gt;
|Learn what is possible on AWS Cloud for research.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Speaker:&#039;&#039;&#039; AWS&lt;br /&gt;
*&#039;&#039;&#039;Format:&#039;&#039;&#039; Lecture&lt;br /&gt;
*&#039;&#039;&#039;Level:&#039;&#039;&#039; Introductory&lt;br /&gt;
*&#039;&#039;&#039;Prerequisites:&#039;&#039;&#039; None&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
====AWS: How AWS works with Researchers====&lt;br /&gt;
|June 11, 1:30PM - 1:50PM&lt;br /&gt;
ICT 102&lt;br /&gt;
| AWS has many programs to support researchers, such as credits, letters of support, immersion days, and proof-of-concept work. In this session, we will cover how we engage with researchers and which programs are available to help accelerate your research with the AWS Cloud.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Speaker:&#039;&#039;&#039; AWS&lt;br /&gt;
*&#039;&#039;&#039;Format:&#039;&#039;&#039; Lecture&lt;br /&gt;
*&#039;&#039;&#039;Level:&#039;&#039;&#039; Introductory&lt;br /&gt;
*&#039;&#039;&#039;Prerequisites:&#039;&#039;&#039; None&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
====AWS: Machine learning with low-code workshop====&lt;br /&gt;
|June 11, 1:30 PM - 4:45 PM&lt;br /&gt;
ICT 102&lt;br /&gt;
| The Machine Learning (ML) journey requires continuous experimentation and rapid prototyping to be successful. To create highly accurate and performant models, data scientists must first experiment with feature engineering, model selection, and optimization techniques. These processes are traditionally time-consuming and expensive.&lt;br /&gt;
In this workshop, attendees will learn the following:&lt;br /&gt;
*How the low-code ML capabilities found in Amazon SageMaker Data Wrangler, Autopilot, and JumpStart make it easier to experiment faster and bring highly accurate models to production more quickly and efficiently&lt;br /&gt;
*How to simplify the process of data preparation and feature engineering, and complete each step of the data preparation workflow&lt;br /&gt;
*How to automatically build, train, and tune the best machine learning models based on your data, while maintaining full control and visibility&lt;br /&gt;
*How to get started with ML easily and quickly using pre-built solutions for common financial use cases and open-source models from popular model zoos&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Speaker:&#039;&#039;&#039; AWS&lt;br /&gt;
*&#039;&#039;&#039;Format:&#039;&#039;&#039; Workshop + Hands on&lt;br /&gt;
*&#039;&#039;&#039;Level:&#039;&#039;&#039; Introductory&lt;br /&gt;
*&#039;&#039;&#039;Prerequisites:&#039;&#039;&#039; None&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
====&#039;&#039;&#039;Accelerate data science workflows with NVIDIA RAPIDS&#039;&#039;&#039;====&lt;br /&gt;
‎&amp;lt;span id=&amp;quot;NVIDIA&amp;quot;&amp;gt;‎&amp;lt;/span&amp;gt;&lt;br /&gt;
|June 10, 9:30 AM - 11:50 AM&lt;br /&gt;
ICT 102&lt;br /&gt;
|Unlock the power of GPU acceleration for your data science projects in our hands-on workshop. This session is designed to introduce participants to NVIDIA RAPIDS, a suite of open-source software libraries and APIs built on CUDA. RAPIDS enables data scientists and analysts to execute end-to-end data science and analytics pipelines entirely on GPUs, significantly speeding up workflows.&lt;br /&gt;
In this interactive session, we will:&lt;br /&gt;
&lt;br /&gt;
*Introduce NVIDIA RAPIDS and its possibilities for data scientists&lt;br /&gt;
*Run RAPIDS in a Jupyter notebook environment on ARC&lt;br /&gt;
*Work with sample datasets to perform data manipulation and visualization tasks&lt;br /&gt;
*Explore hands-on coding exercises that illustrate the advantages of GPU accelerated processing&lt;br /&gt;
&lt;br /&gt;
By the end of this session, you will have the basic practical skills necessary to start using RAPIDS for GPU-accelerated research work on our HPC infrastructure.&lt;br /&gt;
*&#039;&#039;&#039;Speaker:&#039;&#039;&#039; Tarini Bhatnagar from NVIDIA&lt;br /&gt;
*&#039;&#039;&#039;Format:&#039;&#039;&#039; Lecture + Follow Along&lt;br /&gt;
*&#039;&#039;&#039;Level:&#039;&#039;&#039; Introductory&lt;br /&gt;
*&#039;&#039;&#039;Prerequisites:&#039;&#039;&#039; Introductory Python and Pandas recommended&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
====Dell Presentation: TBD====&lt;br /&gt;
‎&amp;lt;span id=&amp;quot;Dell&amp;quot;&amp;gt;‎&amp;lt;/span&amp;gt;&lt;br /&gt;
|June 12, 1:00 PM - 1:50 PM&lt;br /&gt;
ICT 102&lt;br /&gt;
|TBD&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Speaker:&#039;&#039;&#039; Rob Lucas from Dell&lt;br /&gt;
*&#039;&#039;&#039;Format:&#039;&#039;&#039; Lecture&lt;br /&gt;
*&#039;&#039;&#039;Level:&#039;&#039;&#039; Introductory&lt;br /&gt;
*&#039;&#039;&#039;Prerequisites:&#039;&#039;&#039; None&lt;br /&gt;
|}&lt;br /&gt;
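To give a flavour of what the &#039;&#039;Hands on with Linux &amp;amp; Slurm&#039;&#039; workshop above builds toward, here is a minimal sketch of a Slurm batch script. The job name and resource values are illustrative placeholders, not ARC-specific settings:&lt;br /&gt;

```shell
# Write a minimal, illustrative Slurm batch script. The resource requests
# below are placeholders; real values depend on your job and cluster policy.
cat > hello_job.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=hello          # label shown by squeue/sacct
#SBATCH --time=00:10:00           # wall-time limit (10 minutes)
#SBATCH --ntasks=1                # one task
#SBATCH --cpus-per-task=1         # one CPU core
#SBATCH --mem=1G                  # 1 GB of memory
echo "Hello from $(hostname)"
EOF
# On a cluster you would submit it with:  sbatch hello_job.sh
# and check its status with:             squeue --me
```

Submitting with sbatch and monitoring with squeue, as sketched in the trailing comments, is exactly the cycle the workshop practices hands-on.&lt;br /&gt;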
&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=RCS_Summer_School_2024&amp;diff=3330</id>
		<title>RCS Summer School 2024</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=RCS_Summer_School_2024&amp;diff=3330"/>
		<updated>2024-05-15T22:22:43Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* Schedule */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Research Computing Services&#039; 3rd annual summer school will run from Monday, June 10 through to Wednesday, June 12, 2024 from 9AM to 5PM. This summer school consists of various sessions and workshops throughout these 3 days and is completely &#039;&#039;&#039;&#039;&#039;&amp;lt;u&amp;gt;free&amp;lt;/u&amp;gt;&#039;&#039;&#039;&#039;&#039; to all University of Calgary members.&lt;br /&gt;
&lt;br /&gt;
Our goal for this year&#039;s summer school is to &#039;&#039;&#039;Empower our researchers:&#039;&#039;&#039; Inspiring what is possible on HPC infrastructure.&lt;br /&gt;
[[File:RCS Summer School 2024 Poster.png|border|center|frameless|850x850px|RCS Summer School 2024 Poster]]&lt;br /&gt;
&lt;br /&gt;
== Registration ==&lt;br /&gt;
Registration is required to attend the RCS Summer School sessions. Registration is free to all members of the University of Calgary. &lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;span class=&amp;quot;registerButton&amp;quot;&amp;gt;[https://rcs.ucalgary.ca/registration/summer-2024/ Register now]&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There will be a limit of approximately 100 seats. If you are unable to attend after registering, please cancel/modify your registration or notify us via email.&lt;br /&gt;
&lt;br /&gt;
== Topics ==&lt;br /&gt;
&lt;br /&gt;
* Introduction to RCS services and HPC resources&lt;br /&gt;
* Introduction to Linux &amp;amp; Bash command line&lt;br /&gt;
* Using Linux utilities for large datasets&lt;br /&gt;
* Hands on with Linux &amp;amp; Slurm: Workshop&lt;br /&gt;
* Using Open OnDemand on ARC&lt;br /&gt;
* Develop a research data management plan&lt;br /&gt;
* Reproducible data management with Datalad&lt;br /&gt;
* Digital File Management&lt;br /&gt;
* Using containers in HPC with Apptainer&lt;br /&gt;
* Managing scientific software with Conda&lt;br /&gt;
* Research workflow development with Prefect&lt;br /&gt;
* AWS: ML in the Cloud, a walkthrough followed by a workshop&lt;br /&gt;
* NVIDIA: Workflow optimization using NVIDIA GPUs&lt;br /&gt;
* Dell &amp;amp; AMD: Machine learning with Dell and AMD&lt;br /&gt;
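As a small taste of the &#039;&#039;Using Linux utilities for large datasets&#039;&#039; topic above, the following sketch (using made-up sample data) shows the kind of pipeline that standard tools such as grep and awk make possible:&lt;br /&gt;

```shell
# Create a small made-up CSV sample, then use standard tools (grep, awk)
# to filter rows and summarize a column; the same pattern scales to huge files.
printf 'name,dept,hours\nalice,bio,12\nbob,phys,7\ncarol,bio,5\n' > sample.csv
grep ',bio,' sample.csv                                            # select the "bio" rows
awk -F, 'NR > 1 { total += $3 } END { print total }' sample.csv    # sum the hours column: 24
```

The same pattern, filtering with grep and aggregating with awk, works on files far too large to open in a spreadsheet.&lt;br /&gt;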
&lt;br /&gt;
== Schedule ==&lt;br /&gt;
The summer school sessions will be held in ICT 102 and ICT 114. Refreshments will be available in ICT 114 on all 3 days.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Time&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; |June 10&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; |June 11&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; |June 12&lt;br /&gt;
|-&lt;br /&gt;
! width=&amp;quot;10%&amp;quot; |8:30 AM&lt;br /&gt;
| width=&amp;quot;15%&amp;quot; |&#039;&#039;&#039;Registration &amp;amp; check-in&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102&lt;br /&gt;
| width=&amp;quot;15%&amp;quot; rowspan=&amp;quot;2&amp;quot; |Refreshments &amp;lt;br&amp;gt;ICT 114&lt;br /&gt;
| width=&amp;quot;15%&amp;quot; |&#039;&#039;&#039;Registration &amp;amp; check-in&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102&lt;br /&gt;
| width=&amp;quot;15%&amp;quot; rowspan=&amp;quot;2&amp;quot; |Refreshments&amp;lt;br&amp;gt;ICT 114&lt;br /&gt;
| width=&amp;quot;15%&amp;quot; |&#039;&#039;&#039;Registration &amp;amp; check-in&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102&lt;br /&gt;
| width=&amp;quot;15%&amp;quot; rowspan=&amp;quot;15&amp;quot; |Refreshments&amp;lt;br&amp;gt;ICT 114&lt;br /&gt;
|-&lt;br /&gt;
!9:00 AM&lt;br /&gt;
|&#039;&#039;&#039;[[RCS Summer School 2024#Introduction to RCS|Introduction to RCS]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 9:00AM - 9:20AM&amp;lt;br&amp;gt;Jill Kowalchuk&lt;br /&gt;
|&#039;&#039;&#039;The Alliance: Introduction&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102&amp;lt;br&amp;gt;Brock Kahanyshyn&lt;br /&gt;
|&#039;&#039;&#039;TBD&#039;&#039;&#039;&lt;br /&gt;
ICT 102&lt;br /&gt;
|-&lt;br /&gt;
!9:30 AM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Introduction to Linux, Bash, and the command line|Introduction to Linux, Bash,&amp;lt;br&amp;gt;and the command line]]&amp;lt;br&amp;gt;&#039;&#039;&#039;ICT 102, 9:30AM - 10:30AM&amp;lt;br&amp;gt;Robert Fridman&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Developing a Research Data Management Plan with technical storage requirements|Developing a Research Data Management Plan with technical storage requirements]]&amp;lt;br&amp;gt;&#039;&#039;&#039;ICT 114, 9:30AM - 11:20AM&amp;lt;br&amp;gt;Ian Percel&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Introduction to HPC resources|Introduction to HPC resources]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 9:30AM - 10:20AM&amp;lt;br&amp;gt;Robert Fridman, Dave Schulz&lt;br /&gt;
|&#039;&#039;&#039;[[RCS Summer School 2024#Reproducible Data Management with Datalad|Reproducible Data Management with Datalad: Part II]]&amp;lt;br&amp;gt;&#039;&#039;&#039;ICT 114, 9:30AM - 10:20AM&amp;lt;br&amp;gt;David Deepwell, Pedro Martinez&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#NVIDIA|NVIDIA: Workflow Optimization with NVIDIA GPUs]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 9:30AM - 12:00PM&amp;lt;br&amp;gt;Jonathan Dursi&lt;br /&gt;
|-&lt;br /&gt;
!10:00 AM&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Refreshments&amp;lt;br&amp;gt;ICT 114&lt;br /&gt;
|-&lt;br /&gt;
!10:30 AM&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Workshop: Hands on with Linux &amp;amp; Slurm|Workshop: Hands on with Linux &amp;amp; Slurm]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 10:30AM - 11:50 AM&amp;lt;br&amp;gt;Robert Fridman&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Linux tools &amp;amp; utilities for working with large data sets|Linux tools &amp;amp; utilities for working with large data sets]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 10:30AM - 11:20AM&amp;lt;br&amp;gt;Leo Leung, Dave Schulz&lt;br /&gt;
|-&lt;br /&gt;
!11:00 AM&lt;br /&gt;
|-&lt;br /&gt;
!11:30 AM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Reproducible Data Management with Datalad|Reproducible Data Management with Datalad: Part I]]&amp;lt;br&amp;gt;&#039;&#039;&#039;ICT 114, 11:30AM - 12:20PM&amp;lt;br&amp;gt;David Deepwell, Pedro Martinez&lt;br /&gt;
|&#039;&#039;&#039;[[RCS Summer School 2024#RCS Q&amp;amp;A period: Ask RCS anything|RCS Q&amp;amp;A period: Ask RCS anything]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 11:30AM - 12:00PM&amp;lt;br&amp;gt;RCS Team&lt;br /&gt;
|-&lt;br /&gt;
!12:00 PM&lt;br /&gt;
|&#039;&#039;&#039;[[RCS Summer School 2024#Open OnDemand on ARC|Open OnDemand on ARC]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 12:00 PM - 12:20 PM&amp;lt;br&amp;gt;Leo Leung&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;Lunch break&#039;&#039;&#039;&amp;lt;br&amp;gt;12:00PM - 1:00PM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;Lunch break&#039;&#039;&#039;&amp;lt;br&amp;gt;12:00PM - 1:00PM&lt;br /&gt;
|-&lt;br /&gt;
!12:30 PM&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;Lunch break&#039;&#039;&#039;&amp;lt;br&amp;gt;12:30PM - 1:30PM&lt;br /&gt;
|-&lt;br /&gt;
!1:00 PM&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Research Data Management and Data File Management|Research Data Management and Data File Management]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 1:00PM - 2:20PM&amp;lt;br&amp;gt;Jennifer Abel, Alex Thistlewood, Ingrid Reiche&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Refreshments&amp;lt;br&amp;gt;ICT 114&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Dell &amp;amp; AMD|Dell &amp;amp; AMD: Machine learning with Dell &amp;amp; AMD]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 1:00PM - 1:50PM&lt;br /&gt;
Rob Lucas&lt;br /&gt;
|-&lt;br /&gt;
!1:30 PM&lt;br /&gt;
|&#039;&#039;&#039;[[RCS Summer School 2024#AWS|AWS: Inspiring the art of the possible]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 1:30PM - 1:50PM&lt;br /&gt;
AWS&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Refreshments&amp;lt;br&amp;gt;ICT 114&lt;br /&gt;
|-&lt;br /&gt;
!2:00 PM&lt;br /&gt;
|[[RCS Summer School 2024#AWS|&#039;&#039;&#039;AWS: How AWS works with Researchers&#039;&#039;&#039;]]&amp;lt;br&amp;gt;ICT 102, 2:00PM - 2:20PM&lt;br /&gt;
AWS&lt;br /&gt;
|-&lt;br /&gt;
!2:30 PM&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#AWS|AWS: Machine Learning with low-code workshop]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 2:30PM - 4:50PM&amp;lt;br&amp;gt;AWS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Introduction to containers with Apptainer|Introduction to containers with Apptainer]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 2:30PM - 3:20PM&amp;lt;br&amp;gt;Tannistha Nandi&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Prefect for Research Workflow Development|Prefect for Research Workflow Development]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 2:30PM - 3:50PM&amp;lt;br&amp;gt;David Deepwell, Pedro Martinez&lt;br /&gt;
|-&lt;br /&gt;
!3:00 PM&lt;br /&gt;
|-&lt;br /&gt;
!3:30 PM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Managing scientific software with Conda|Managing scientific software with Conda]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 3:30PM - 4:20PM&amp;lt;br&amp;gt;Dmitri Rozmanov&lt;br /&gt;
|-&lt;br /&gt;
!4:00 PM&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; rowspan=&amp;quot;3&amp;quot; |&#039;&#039;&#039;End of day: 4:00PM&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
!4:30 PM&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;End of day: 4:30PM&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
!5:00 PM&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;End of day: 5:00PM&#039;&#039;&#039;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Sessions ==&lt;br /&gt;
&lt;br /&gt;
=== Introduction to RCS ===&lt;br /&gt;
ICT 102, 9:00AM - 9:20AM by Jill Kowalchuk&lt;br /&gt;
&lt;br /&gt;
We will begin the summer school with a quick introduction by Jill Kowalchuk, the Interim Director of Research Computing Services. We&#039;ll cover who RCS is and the services that we offer.&lt;br /&gt;
&lt;br /&gt;
=== Introduction to Linux, Bash, and the command line ===&lt;br /&gt;
ICT 102, 9:30AM - 10:30AM by Robert Fridman&lt;br /&gt;
&lt;br /&gt;
A quick crash course on how to use Linux, the bash shell, and the command line in general. This beginner-friendly session requires no prior experience with Linux. We recommend bringing your own device to follow along.&lt;br /&gt;
&lt;br /&gt;
=== Workshop: Hands on with Linux &amp;amp; Slurm ===&lt;br /&gt;
ICT 102, 10:30AM - 11:50 AM by Robert Fridman&lt;br /&gt;
&lt;br /&gt;
A follow-up workshop that builds on the basics covered in the Linux introduction session and goes into depth on how to use Slurm, the scheduler that RCS uses on its high performance computing clusters. We recommend bringing your own device to follow along.&lt;br /&gt;
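As a taste of what the workshop covers, a minimal Slurm batch script might look like the sketch below. The job name, time limit, and resource values are illustrative placeholders, not ARC-specific recommendations.

```shell
# Write a minimal Slurm batch script (all values are illustrative)
cat > hello_job.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=hello       # name shown in squeue
#SBATCH --time=00:10:00        # wall-time limit
#SBATCH --ntasks=1             # a single task
#SBATCH --mem=1G               # memory request
echo "Hello from $(hostname)"
EOF

# On a cluster you would submit it with:  sbatch hello_job.sh
# and monitor it with:                    squeue -u $USER
```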
&lt;br /&gt;
=== Open OnDemand on ARC ===&lt;br /&gt;
ICT 102, 12:00 PM - 12:20 PM by Leo Leung&lt;br /&gt;
&lt;br /&gt;
Did you know you can run a Linux desktop on ARC? In this session, we will do a quick demo of ARC Open OnDemand, a web interface that allows users to submit jobs that need graphical user interfaces. We will also cover how to monitor your jobs through Open OnDemand.&lt;br /&gt;
&lt;br /&gt;
=== Developing a Research Data Management Plan with technical storage requirements ===&lt;br /&gt;
ICT 114, 9:30AM - 11:20AM by Ian Percel&lt;br /&gt;
&lt;br /&gt;
Effective management of your research data is paramount. Join us as we delve into crafting robust data management plans tailored to your specific research needs.&lt;br /&gt;
&lt;br /&gt;
=== Reproducible Data Management with Datalad ===&lt;br /&gt;
Part I: ICT 114, 10:30AM - 11:20AM by David Deepwell and Pedro Martinez&lt;br /&gt;
&lt;br /&gt;
Part II: ICT 114, 9:30AM - 10:20AM by David Deepwell and Pedro Martinez&lt;br /&gt;
&lt;br /&gt;
This workshop provides an introduction to digital data management with DataLad. Background content will be covered before conducting the primary hands-on training where attendees will create a small demonstrative research project containing data provenance. &lt;br /&gt;
&lt;br /&gt;
Content to be covered includes: dataset basics, capturing data-provenance, and collaborative data analysis.&lt;br /&gt;
&lt;br /&gt;
DataLad is a git-based version control system. Although no git knowledge is required, familiarity with git is strongly advised. Command line experience is required.&lt;br /&gt;
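For orientation before the workshop, a typical first encounter with DataLad from the command line looks roughly like the following. The dataset and file names are invented for illustration.

```shell
# Create a new version-controlled dataset
datalad create my-project
cd my-project

# Save a data file together with a provenance message
echo "1,2,3" > data.csv
datalad save -m "Add initial measurements"

# Run a command and record it as provenance in the dataset history
datalad run -m "Count rows" "wc -l data.csv > rowcount.txt"
```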
&lt;br /&gt;
=== Introduction to HPC resources ===&lt;br /&gt;
ICT 102, 9:30AM - 10:20AM by Robert Fridman, Dave Schulz&lt;br /&gt;
&lt;br /&gt;
An introduction to the high performance computing resources offered by RCS. We will go over how our infrastructure ties into your research, how to make the most of Slurm, and how to download data and transfer it to and from other institutions.&lt;br /&gt;
&lt;br /&gt;
=== Linux tools &amp;amp; utilities for working with large data sets ===&lt;br /&gt;
ICT 102, 10:30AM - 11:20AM by Leo Leung&lt;br /&gt;
&lt;br /&gt;
As researchers work with larger and larger datasets, it is imperative to handle and manage these datasets effectively. In this session, we will go through some common methods for working with datasets using standard Linux tools and utilities. We will cover common use cases: downloading large datasets from the Internet, parsing text-based data with tools such as sed, awk, and grep, and then tying everything together with pipes.&lt;br /&gt;
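To give a flavour of the session, here is a small sketch that chains grep and awk with a pipe; the data file and its contents are made up for illustration.

```shell
# Create a small tab-separated sample file: group name, measurement
printf 'ctrl\t4\ncase\t9\nctrl\t6\ncase\t11\n' > measurements.tsv

# Keep only the "case" rows, then sum the second column with awk
grep '^case' measurements.tsv | awk -F'\t' '{sum += $2} END {print sum}'
# prints 20
```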
&lt;br /&gt;
=== RCS Q&amp;amp;A period: Ask RCS anything ===&lt;br /&gt;
ICT 102, 11:30AM - 12:00PM by the RCS team&lt;br /&gt;
&lt;br /&gt;
A general question-and-answer period where you can ask us anything related to RCS and HPC.&lt;br /&gt;
&lt;br /&gt;
=== Research Data Management and Data File Management ===&lt;br /&gt;
ICT 102, 1:00PM - 2:20PM by Jennifer Abel, Alex Thistlewood, and Ingrid Reiche (from The University of Calgary Libraries and Cultural Resources)&lt;br /&gt;
&lt;br /&gt;
Managing your digital files and research materials is critical for keeping yourself organized, collaborating, and communicating with colleagues. In this session, we will cover Research Data Management (RDM) and Data Management Plans (DMPs). We will also go over best practices in digital file management tailored to your individual and organizational needs, including versioning and how to document and share your file and folder conventions using a README file.&lt;br /&gt;
&lt;br /&gt;
=== Introduction to containers with Apptainer ===&lt;br /&gt;
ICT 102, 2:30PM - 3:20PM by Tannistha Nandi&lt;br /&gt;
&lt;br /&gt;
Make your research workflows reproducible through the power of containers. We will go through in detail how to run containers on ARC using Apptainer.&lt;br /&gt;
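As a preview, the basic Apptainer workflow the session walks through looks roughly like this; the container image and command are illustrative choices, not ARC-specific instructions.

```shell
# Pull a container image from Docker Hub into a local .sif file
apptainer pull python_3.12.sif docker://python:3.12-slim

# Run a command inside the container
apptainer exec python_3.12.sif python3 --version
```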
&lt;br /&gt;
=== Managing scientific software with Conda ===&lt;br /&gt;
ICT 102, 3:30PM - 4:20PM by Dmitri Rozmanov&lt;br /&gt;
&lt;br /&gt;
Running customized scientific software in a shared HPC environment can be challenging. In this session, we will go over how to set up customized software environments using Conda.&lt;br /&gt;
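A typical Conda workflow of the kind covered in the session might look like the following; the environment name and package list are illustrative.

```shell
# Create an isolated environment with a pinned Python and some packages
conda create -n myproject python=3.11 numpy scipy

# Activate it (in an interactive shell) and work inside it
conda activate myproject

# Export the environment so collaborators can reproduce it
conda env export > environment.yml
```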
&lt;br /&gt;
=== Prefect for Research Workflow Development ===&lt;br /&gt;
ICT 102, 2:30PM - 3:50PM by David Deepwell and Pedro Martinez&lt;br /&gt;
&lt;br /&gt;
Modernize your research workflows using Prefect, an open source workflow orchestration tool.  We will show how you can build and deploy resilient workflows.&lt;br /&gt;
&lt;br /&gt;
=== AWS ===&lt;br /&gt;
==== AWS: Inspiring the art of the possible ====&lt;br /&gt;
ICT 102, 1:30PM - 1:50PM by AWS&lt;br /&gt;
&lt;br /&gt;
Learn what is possible on AWS Cloud for research.&lt;br /&gt;
&lt;br /&gt;
==== AWS: How AWS works with Researchers ====&lt;br /&gt;
ICT 102, 2:00PM - 2:20PM by AWS&lt;br /&gt;
&lt;br /&gt;
AWS has many programs to support researchers, such as credits, letters of support, immersion days, and proof-of-concept work. In this session, we will cover how we engage with researchers and what programs are available to help accelerate your research with the AWS Cloud.&lt;br /&gt;
&lt;br /&gt;
==== AWS: Machine learning with low-code workshop ====&lt;br /&gt;
ICT 102, 2:30 PM - 4:50 PM by AWS&lt;br /&gt;
&lt;br /&gt;
The Machine Learning (ML) journey requires continuous experimentation and rapid prototyping to be successful. To create highly accurate and performant models, data scientists first have to experiment with feature engineering, model selection, and optimization techniques. These processes are traditionally time-consuming and expensive. In this workshop, attendees will learn the following:&lt;br /&gt;
&lt;br /&gt;
* How the Low-Code ML capabilities found in Amazon SageMaker Data Wrangler, Autopilot and Jumpstart, make it easier to experiment faster and bring highly accurate models to production more quickly and efficiently&lt;br /&gt;
* How to simplify the process of data preparation and feature engineering, and complete each step of the data preparation workflow&lt;br /&gt;
* How to automatically build, train, and tune the best machine learning models based on your data, while maintaining full control and visibility&lt;br /&gt;
* How to get started with ML easily and quickly using pre-built solutions for common financial use cases and open-source models from popular model zoos&lt;br /&gt;
&lt;br /&gt;
=== NVIDIA ===&lt;br /&gt;
&lt;br /&gt;
==== Workflow Optimization with NVIDIA GPUs ====&lt;br /&gt;
ICT 102, 9:30AM - 12:00PM by Jonathan Dursi, NVIDIA&lt;br /&gt;
&lt;br /&gt;
We will discuss how to optimize workflows with NVIDIA-powered GPUs to help accelerate your research.&lt;br /&gt;
&lt;br /&gt;
=== Dell &amp;amp; AMD ===&lt;br /&gt;
&lt;br /&gt;
==== Machine learning with Dell &amp;amp; AMD ====&lt;br /&gt;
ICT 102, 1:00PM - 1:50PM by Rob Lucas&lt;br /&gt;
&lt;br /&gt;
To be announced.&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=RCS_Summer_School_2024&amp;diff=3329</id>
		<title>RCS Summer School 2024</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=RCS_Summer_School_2024&amp;diff=3329"/>
		<updated>2024-05-15T22:17:52Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* Schedule */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Research Computing Services&#039; 3rd annual summer school will run from Monday, June 10 through to Wednesday, June 12, 2024 from 9AM to 5PM. This summer school consists of various sessions and workshops throughout these 3 days and is completely &#039;&#039;&#039;&#039;&#039;&amp;lt;u&amp;gt;free&amp;lt;/u&amp;gt;&#039;&#039;&#039;&#039;&#039; to all University of Calgary members.&lt;br /&gt;
&lt;br /&gt;
Our goal for this year&#039;s summer school is to &#039;&#039;&#039;Empower our researchers:&#039;&#039;&#039; Inspiring what is possible on HPC infrastructure.&lt;br /&gt;
[[File:RCS Summer School 2024 Poster.png|border|center|frameless|850x850px|RCS Summer School 2024 Poster]]&lt;br /&gt;
&lt;br /&gt;
== Registration ==&lt;br /&gt;
Registration is required to attend the RCS Summer School sessions. Registration is free to all members of the University of Calgary. &lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;span class=&amp;quot;registerButton&amp;quot;&amp;gt;[https://rcs.ucalgary.ca/registration/summer-2024/ Register now]&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There will be a limit of approximately 100 seats. If you are unable to attend after registering, please cancel/modify your registration or notify us via email.&lt;br /&gt;
&lt;br /&gt;
== Topics ==&lt;br /&gt;
&lt;br /&gt;
* Introduction to RCS services and HPC resources&lt;br /&gt;
* Introduction to Linux &amp;amp; Bash command line&lt;br /&gt;
* Using Linux utilities for large datasets&lt;br /&gt;
* Hands on with Linux &amp;amp; Slurm: Workshop&lt;br /&gt;
* Using Open OnDemand on ARC&lt;br /&gt;
* Develop a research data management plan&lt;br /&gt;
* Reproducible data management with Datalad&lt;br /&gt;
* Digital File Management&lt;br /&gt;
* Using containers in HPC with Apptainer&lt;br /&gt;
* Managing scientific software with Conda&lt;br /&gt;
* Research workflow development with Prefect&lt;br /&gt;
* AWS: ML in the Cloud, a walkthrough followed by a workshop&lt;br /&gt;
* NVIDIA: Workflow optimization using NVIDIA GPUs&lt;br /&gt;
* Dell &amp;amp; AMD: Machine learning with Dell and AMD&lt;br /&gt;
&lt;br /&gt;
== Schedule ==&lt;br /&gt;
The summer school sessions will be held in ICT 102 and ICT 114. Refreshments will be available in ICT 114 on all 3 days.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Time&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; |June 10&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; |June 11&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; |June 12&lt;br /&gt;
|-&lt;br /&gt;
! width=&amp;quot;10%&amp;quot; |8:30 AM&lt;br /&gt;
| width=&amp;quot;15%&amp;quot; |&#039;&#039;&#039;Registration &amp;amp; check-in&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102&lt;br /&gt;
| width=&amp;quot;15%&amp;quot; rowspan=&amp;quot;2&amp;quot; |Refreshments &amp;lt;br&amp;gt;ICT 114&lt;br /&gt;
| width=&amp;quot;15%&amp;quot; |&#039;&#039;&#039;Registration &amp;amp; check-in&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102&lt;br /&gt;
| width=&amp;quot;15%&amp;quot; rowspan=&amp;quot;2&amp;quot; |Refreshments&amp;lt;br&amp;gt;ICT 114&lt;br /&gt;
| width=&amp;quot;15%&amp;quot; |&#039;&#039;&#039;Registration &amp;amp; check-in&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102&lt;br /&gt;
| width=&amp;quot;15%&amp;quot; rowspan=&amp;quot;15&amp;quot; |Refreshments&amp;lt;br&amp;gt;ICT 114&lt;br /&gt;
|-&lt;br /&gt;
!9:00 AM&lt;br /&gt;
|&#039;&#039;&#039;[[RCS Summer School 2024#Introduction to RCS|Introduction to RCS]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 9:00AM - 9:20AM&amp;lt;br&amp;gt;Jill Kowalchuk&lt;br /&gt;
|&#039;&#039;&#039;The Alliance: Introduction&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102&amp;lt;br&amp;gt;Brock Kahanyshyn&lt;br /&gt;
|&#039;&#039;&#039;TBD&#039;&#039;&#039;&lt;br /&gt;
ICT 102&lt;br /&gt;
|-&lt;br /&gt;
!9:30 AM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Introduction to Linux, Bash, and the command line|Introduction to Linux, Bash,&amp;lt;br&amp;gt;and the command line]]&amp;lt;br&amp;gt;&#039;&#039;&#039;ICT 102, 9:30AM - 10:30AM&amp;lt;br&amp;gt;Robert Fridman&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Developing a Research Data Management Plan with technical storage requirements|Developing a Research Data Management Plan with technical storage requirements]]&amp;lt;br&amp;gt;&#039;&#039;&#039;ICT 114, 9:30AM - 11:20AM&amp;lt;br&amp;gt;Ian Percel&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Introduction to HPC resources|Introduction to HPC resources]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 9:30AM - 10:20AM&amp;lt;br&amp;gt;Robert Fridman, Dave Schulz&lt;br /&gt;
|&#039;&#039;&#039;[[RCS Summer School 2024#Reproducible Data Management with Datalad|Reproducible Data Management with Datalad: Part II]]&amp;lt;br&amp;gt;&#039;&#039;&#039;ICT 114, 9:30AM - 10:20AM&amp;lt;br&amp;gt;David Deepwell, Pedro Martinez&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#NVIDIA|NVIDIA: Workflow Optimization with NVIDIA GPUs]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 9:30AM - 12:00PM&amp;lt;br&amp;gt;Jonathan Dursi&lt;br /&gt;
|-&lt;br /&gt;
!10:00 AM&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Refreshments&amp;lt;br&amp;gt;ICT 114&lt;br /&gt;
|-&lt;br /&gt;
!10:30 AM&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Workshop: Hands on with Linux &amp;amp; Slurm|Workshop: Hands on with Linux &amp;amp; Slurm]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 10:30AM - 11:50 AM&amp;lt;br&amp;gt;Robert Fridman&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Linux tools &amp;amp; utilities for working with large data sets|Linux tools &amp;amp; utilities for working with large data sets]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 10:30AM - 11:20AM&amp;lt;br&amp;gt;Leo Leung&lt;br /&gt;
|-&lt;br /&gt;
!11:00 AM&lt;br /&gt;
|-&lt;br /&gt;
!11:30 AM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Reproducible Data Management with Datalad|Reproducible Data Management with Datalad: Part I]]&amp;lt;br&amp;gt;&#039;&#039;&#039;ICT 114, 11:30AM - 12:20PM&amp;lt;br&amp;gt;David Deepwell, Pedro Martinez&lt;br /&gt;
|&#039;&#039;&#039;[[RCS Summer School 2024#RCS Q&amp;amp;A period: Ask RCS anything|RCS Q&amp;amp;A period: Ask RCS anything]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 11:30AM - 12:00PM&amp;lt;br&amp;gt;RCS Team&lt;br /&gt;
|-&lt;br /&gt;
!12:00 PM&lt;br /&gt;
|&#039;&#039;&#039;[[RCS Summer School 2024#Open OnDemand on ARC|Open OnDemand on ARC]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 12:00 PM - 12:20 PM&amp;lt;br&amp;gt;Leo Leung&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;Lunch break&#039;&#039;&#039;&amp;lt;br&amp;gt;12:00PM - 1:00PM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;Lunch break&#039;&#039;&#039;&amp;lt;br&amp;gt;12:00PM - 1:00PM&lt;br /&gt;
|-&lt;br /&gt;
!12:30 PM&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;Lunch break&#039;&#039;&#039;&amp;lt;br&amp;gt;12:30PM - 1:30PM&lt;br /&gt;
|-&lt;br /&gt;
!1:00 PM&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Research Data Management and Data File Management|Research Data Management and Data File Management]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 1:00PM - 2:20PM&amp;lt;br&amp;gt;Jennifer Abel, Alex Thistlewood, Ingrid Reiche&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Refreshments&amp;lt;br&amp;gt;ICT 114&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Dell &amp;amp; AMD|Dell &amp;amp; AMD: Machine learning with Dell &amp;amp; AMD]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 1:00PM - 1:50PM&lt;br /&gt;
Rob Lucas&lt;br /&gt;
|-&lt;br /&gt;
!1:30 PM&lt;br /&gt;
|&#039;&#039;&#039;[[RCS Summer School 2024#AWS|AWS: Inspiring the art of the possible]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 1:30PM - 1:50PM&lt;br /&gt;
AWS&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Refreshments&amp;lt;br&amp;gt;ICT 114&lt;br /&gt;
|-&lt;br /&gt;
!2:00 PM&lt;br /&gt;
|[[RCS Summer School 2024#AWS|&#039;&#039;&#039;AWS: How AWS works with Researchers&#039;&#039;&#039;]]&amp;lt;br&amp;gt;ICT 102, 2:00PM - 2:20PM&lt;br /&gt;
AWS&lt;br /&gt;
|-&lt;br /&gt;
!2:30 PM&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#AWS|AWS: Machine Learning with low-code workshop]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 2:30PM - 4:50PM&amp;lt;br&amp;gt;AWS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Introduction to containers with Apptainer|Introduction to containers with Apptainer]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 2:30PM - 3:20PM&amp;lt;br&amp;gt;Tannistha Nandi&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Prefect for Research Workflow Development|Prefect for Research Workflow Development]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 2:30PM - 3:50PM&amp;lt;br&amp;gt;David Deepwell, Pedro Martinez&lt;br /&gt;
|-&lt;br /&gt;
!3:00 PM&lt;br /&gt;
|-&lt;br /&gt;
!3:30 PM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Managing scientific software with Conda|Managing scientific software with Conda]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 3:30PM - 4:20PM&amp;lt;br&amp;gt;Dmitri Rozmanov&lt;br /&gt;
|-&lt;br /&gt;
!4:00 PM&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; rowspan=&amp;quot;3&amp;quot; |&#039;&#039;&#039;End of day: 4:00PM&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
!4:30 PM&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;End of day: 4:30PM&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
!5:00 PM&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;End of day: 5:00PM&#039;&#039;&#039;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Sessions ==&lt;br /&gt;
&lt;br /&gt;
=== Introduction to RCS ===&lt;br /&gt;
ICT 102, 9:00AM - 9:20AM by Jill Kowalchuk&lt;br /&gt;
&lt;br /&gt;
We will begin the summer school with a quick introduction by Jill Kowalchuk, the Interim Director of Research Computing Services. We&#039;ll cover who RCS is and the services that we offer.&lt;br /&gt;
&lt;br /&gt;
=== Introduction to Linux, Bash, and the command line ===&lt;br /&gt;
ICT 102, 9:30AM - 10:30AM by Robert Fridman&lt;br /&gt;
&lt;br /&gt;
A quick crash course on how to use Linux, the bash shell, and the command line in general. This beginner-friendly session requires no prior experience with Linux. We recommend bringing your own device to follow along.&lt;br /&gt;
&lt;br /&gt;
=== Workshop: Hands on with Linux &amp;amp; Slurm ===&lt;br /&gt;
ICT 102, 10:30AM - 11:50 AM by Robert Fridman&lt;br /&gt;
&lt;br /&gt;
A follow-up workshop that builds on the basics covered in the Linux introduction session and goes into depth on how to use Slurm, the scheduler that RCS uses on its high performance computing clusters. We recommend bringing your own device to follow along.&lt;br /&gt;
&lt;br /&gt;
=== Open OnDemand on ARC ===&lt;br /&gt;
ICT 102, 12:00 PM - 12:20 PM by Leo Leung&lt;br /&gt;
&lt;br /&gt;
Did you know you can run a Linux desktop on ARC? In this session, we will do a quick demo of ARC Open OnDemand, a web interface that allows users to submit jobs that need graphical user interfaces. We will also cover how to monitor your jobs through Open OnDemand.&lt;br /&gt;
&lt;br /&gt;
=== Developing a Research Data Management Plan with technical storage requirements ===&lt;br /&gt;
ICT 114, 9:30AM - 11:20AM by Ian Percel&lt;br /&gt;
&lt;br /&gt;
Effective management of your research data is paramount. Join us as we delve into crafting robust data management plans tailored to your specific research needs.&lt;br /&gt;
&lt;br /&gt;
=== Reproducible Data Management with Datalad ===&lt;br /&gt;
Part I: ICT 114, 10:30AM - 11:20AM by David Deepwell and Pedro Martinez&lt;br /&gt;
&lt;br /&gt;
Part II: ICT 114, 9:30AM - 10:20AM by David Deepwell and Pedro Martinez&lt;br /&gt;
&lt;br /&gt;
This workshop provides an introduction to digital data management with DataLad. Background content will be covered before conducting the primary hands-on training where attendees will create a small demonstrative research project containing data provenance. &lt;br /&gt;
&lt;br /&gt;
Content to be covered includes: dataset basics, capturing data-provenance, and collaborative data analysis.&lt;br /&gt;
&lt;br /&gt;
DataLad is a git-based version control system. Although no git knowledge is required, familiarity with git is strongly advised. Command line experience is required.&lt;br /&gt;
&lt;br /&gt;
=== Introduction to HPC resources ===&lt;br /&gt;
ICT 102, 9:30AM - 10:20AM by Robert Fridman, Dave Schulz&lt;br /&gt;
&lt;br /&gt;
An introduction to the high performance computing resources offered by RCS. We will go over how our infrastructure ties into your research, how to make the most of Slurm, and how to download data and transfer it to and from other institutions.&lt;br /&gt;
&lt;br /&gt;
=== Linux tools &amp;amp; utilities for working with large data sets ===&lt;br /&gt;
ICT 102, 10:30AM - 11:20AM by Leo Leung&lt;br /&gt;
&lt;br /&gt;
As researchers work with larger and larger datasets, it is imperative to handle and manage these datasets effectively. In this session, we will go through some common methods for working with datasets using standard Linux tools and utilities. We will cover common use cases: downloading large datasets from the Internet, parsing text-based data with tools such as sed, awk, and grep, and then tying everything together with pipes.&lt;br /&gt;
&lt;br /&gt;
=== RCS Q&amp;amp;A period: Ask RCS anything ===&lt;br /&gt;
ICT 102, 11:30AM - 12:00PM by the RCS team&lt;br /&gt;
&lt;br /&gt;
A general question-and-answer period where you can ask us anything related to RCS and HPC.&lt;br /&gt;
&lt;br /&gt;
=== Research Data Management and Data File Management ===&lt;br /&gt;
ICT 102, 1:00PM - 2:20PM by Jennifer Abel, Alex Thistlewood, and Ingrid Reiche (from The University of Calgary Libraries and Cultural Resources)&lt;br /&gt;
&lt;br /&gt;
Managing your digital files and research materials is critical for keeping yourself organized, collaborating, and communicating with colleagues. In this session, we will cover Research Data Management (RDM) and Data Management Plans (DMPs). We will also go over best practices in digital file management tailored to your individual and organizational needs, including versioning and how to document and share your file and folder conventions using a README file.&lt;br /&gt;
&lt;br /&gt;
=== Introduction to containers with Apptainer ===&lt;br /&gt;
ICT 102, 2:30PM - 3:20PM by Tannistha Nandi&lt;br /&gt;
&lt;br /&gt;
Make your research workflows reproducible through the power of containers. We will go through in detail how to run containers on ARC using Apptainer.&lt;br /&gt;
&lt;br /&gt;
=== Managing scientific software with Conda ===&lt;br /&gt;
ICT 102, 3:30PM - 4:20PM by Dmitri Rozmanov&lt;br /&gt;
&lt;br /&gt;
Running customized scientific software in a shared HPC environment can be challenging. In this session, we will go over how to set up customized software environments using Conda.&lt;br /&gt;
&lt;br /&gt;
=== Prefect for Research Workflow Development ===&lt;br /&gt;
ICT 102, 2:30PM - 3:50PM by David Deepwell and Pedro Martinez&lt;br /&gt;
&lt;br /&gt;
Modernize your research workflows using Prefect, an open source workflow orchestration tool.  We will show how you can build and deploy resilient workflows.&lt;br /&gt;
&lt;br /&gt;
=== AWS ===&lt;br /&gt;
==== AWS: Inspiring the art of the possible ====&lt;br /&gt;
ICT 102, 1:30PM - 1:50PM by AWS&lt;br /&gt;
&lt;br /&gt;
Learn what is possible on AWS Cloud for research.&lt;br /&gt;
&lt;br /&gt;
==== AWS: How AWS works with Researchers ====&lt;br /&gt;
ICT 102, 2:00PM - 2:20PM by AWS&lt;br /&gt;
&lt;br /&gt;
AWS has many programs to support researchers, such as credits, letters of support, immersion days, and proof-of-concept work. In this session, we will cover how we engage with researchers and which programs are available to help accelerate your research with the AWS Cloud.&lt;br /&gt;
&lt;br /&gt;
==== AWS: Machine learning with low-code workshop ====&lt;br /&gt;
ICT 102, 2:30PM - 4:50PM by AWS&lt;br /&gt;
&lt;br /&gt;
The machine learning (ML) journey requires continuous experimentation and rapid prototyping to be successful. To create highly accurate and performant models, data scientists first have to experiment with feature engineering, model selection, and optimization techniques. These processes are traditionally time-consuming and expensive. In this workshop, attendees will learn:&lt;br /&gt;
&lt;br /&gt;
* How the low-code ML capabilities in Amazon SageMaker Data Wrangler, Autopilot, and JumpStart make it easier to experiment faster and bring highly accurate models to production more quickly and efficiently&lt;br /&gt;
* How to simplify data preparation and feature engineering, and complete each step of the data preparation workflow&lt;br /&gt;
* How to automatically build, train, and tune the best machine learning models from your data, while maintaining full control and visibility&lt;br /&gt;
* How to get started with ML quickly using pre-built solutions for common financial use cases and open-source models from popular model zoos&lt;br /&gt;
&lt;br /&gt;
=== NVIDIA ===&lt;br /&gt;
&lt;br /&gt;
==== Workflow Optimization with NVIDIA GPUs ====&lt;br /&gt;
ICT 102, 9:30AM - 12:20PM by NVIDIA&lt;br /&gt;
&lt;br /&gt;
We will discuss how to optimize workflows with NVIDIA-powered GPUs to help accelerate your research.&lt;br /&gt;
&lt;br /&gt;
=== Dell &amp;amp; AMD ===&lt;br /&gt;
&lt;br /&gt;
==== Machine learning with Dell &amp;amp; AMD ====&lt;br /&gt;
ICT 102, 1:00PM - 1:50PM by Rob Lucas&lt;br /&gt;
&lt;br /&gt;
To be announced.&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=RCS_Summer_School_2024&amp;diff=3328</id>
		<title>RCS Summer School 2024</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=RCS_Summer_School_2024&amp;diff=3328"/>
		<updated>2024-05-15T22:17:16Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* Schedule */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Research Computing Services&#039; 3rd annual summer school will run from Monday, June 10 through to Wednesday, June 12, 2024 from 9AM to 5PM. This summer school consists of various sessions and workshops throughout these 3 days and is completely &#039;&#039;&#039;&#039;&#039;&amp;lt;u&amp;gt;free&amp;lt;/u&amp;gt;&#039;&#039;&#039;&#039;&#039; to all University of Calgary members.&lt;br /&gt;
&lt;br /&gt;
Our goal for this year&#039;s summer school is to &#039;&#039;&#039;Empower our researchers:&#039;&#039;&#039; Inspiring what is possible on HPC infrastructure.&lt;br /&gt;
[[File:RCS Summer School 2024 Poster.png|border|center|frameless|850x850px|RCS Summer School 2024 Poster]]&lt;br /&gt;
&lt;br /&gt;
== Registration ==&lt;br /&gt;
Registration is required to attend the RCS Summer School sessions. Registration is free to all members of the University of Calgary. &lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;span class=&amp;quot;registerButton&amp;quot;&amp;gt;[https://rcs.ucalgary.ca/registration/summer-2024/ Register now]&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There will be a limit of approximately 100 seats. If you are unable to attend after registering, please cancel/modify your registration or notify us via email.&lt;br /&gt;
&lt;br /&gt;
== Topics ==&lt;br /&gt;
&lt;br /&gt;
* Introduction to RCS services and HPC resources&lt;br /&gt;
* Introduction to Linux &amp;amp; Bash command line&lt;br /&gt;
* Using Linux utilities for large datasets&lt;br /&gt;
* Hands on with Linux &amp;amp; Slurm: Workshop&lt;br /&gt;
* Using Open OnDemand on ARC&lt;br /&gt;
* Develop a research data management plan&lt;br /&gt;
* Reproducible data management with Datalad&lt;br /&gt;
* Digital File Management&lt;br /&gt;
* Using containers in HPC with Apptainer&lt;br /&gt;
* Managing scientific software with Conda&lt;br /&gt;
* Research workflow development with Prefect&lt;br /&gt;
* AWS: ML in the Cloud, a walkthrough followed by a workshop&lt;br /&gt;
* NVIDIA: Workflow optimization using NVIDIA GPUs&lt;br /&gt;
* Dell &amp;amp; AMD: Machine learning with Dell and AMD&lt;br /&gt;
&lt;br /&gt;
== Schedule ==&lt;br /&gt;
The summer school sessions will be held in ICT 102 and ICT 114. Refreshments will be available in ICT 114 on all 3 days.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Time&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; |June 10&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; |June 11&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; |June 12&lt;br /&gt;
|-&lt;br /&gt;
! width=&amp;quot;10%&amp;quot; |8:30 AM&lt;br /&gt;
| width=&amp;quot;15%&amp;quot; |&#039;&#039;&#039;Registration &amp;amp; check-in&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102&lt;br /&gt;
| width=&amp;quot;15%&amp;quot; rowspan=&amp;quot;2&amp;quot; |Refreshments &amp;lt;br&amp;gt;ICT 114&lt;br /&gt;
| width=&amp;quot;15%&amp;quot; |&#039;&#039;&#039;Registration &amp;amp; check-in&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102&lt;br /&gt;
| width=&amp;quot;15%&amp;quot; rowspan=&amp;quot;2&amp;quot; |Refreshments&amp;lt;br&amp;gt;ICT 114&lt;br /&gt;
| width=&amp;quot;15%&amp;quot; |&#039;&#039;&#039;Registration &amp;amp; check-in&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102&lt;br /&gt;
| width=&amp;quot;15%&amp;quot; rowspan=&amp;quot;15&amp;quot; |Refreshments&amp;lt;br&amp;gt;ICT 114&lt;br /&gt;
|-&lt;br /&gt;
!9:00 AM&lt;br /&gt;
|&#039;&#039;&#039;[[RCS Summer School 2024#Introduction to RCS|Introduction to RCS]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 9:00AM - 9:20AM&amp;lt;br&amp;gt;Jill Kowalchuk&lt;br /&gt;
|&#039;&#039;&#039;The Alliance: Introduction&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102&amp;lt;br&amp;gt;Brock Kahanyshyn&lt;br /&gt;
|&#039;&#039;&#039;TBD&#039;&#039;&#039;&lt;br /&gt;
ICT 102&lt;br /&gt;
|-&lt;br /&gt;
!9:30 AM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Introduction to Linux, Bash, and the command line|Introduction to Linux, Bash,&amp;lt;br&amp;gt;and the command line]]&amp;lt;br&amp;gt;&#039;&#039;&#039;ICT 102, 9:30AM - 10:30AM&amp;lt;br&amp;gt;Robert Fridman&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Developing a Research Data Management Plan with technical storage requirements|Developing a Research Data Management Plan with technical storage requirements]]&amp;lt;br&amp;gt;&#039;&#039;&#039;ICT 114, 9:30AM - 11:20AM&amp;lt;br&amp;gt;Ian Percel&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Introduction to HPC resources|Introduction to HPC resources]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 9:30AM - 10:20AM&amp;lt;br&amp;gt;Robert Fridman, Dave Schulz&lt;br /&gt;
|&#039;&#039;&#039;[[RCS Summer School 2024#Reproducible Data Management with Datalad|Reproducible Data Management with Datalad: Part II]]&amp;lt;br&amp;gt;&#039;&#039;&#039;ICT 114, 9:30AM - 10:20AM&amp;lt;br&amp;gt;David Deepwell, Pedro Martinez&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#NVIDIA|NVIDIA: Workflow Optimization with NVIDIA GPUs]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 9:30AM - 12:00PM&amp;lt;br&amp;gt;Jonathan Dursi&lt;br /&gt;
|-&lt;br /&gt;
!10:00 AM&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Refreshments&amp;lt;br&amp;gt;ICT 114&lt;br /&gt;
|-&lt;br /&gt;
!10:30 AM&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Workshop: Hands on with Linux &amp;amp; Slurm|Workshop: Hands on with Linux &amp;amp; Slurm]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 10:30AM - 11:50AM&amp;lt;br&amp;gt;Robert Fridman&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Linux tools &amp;amp; utilities for working with large data sets|Linux tools &amp;amp; utilities for working with large data sets]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 10:30AM - 11:20AM&amp;lt;br&amp;gt;Leo Leung&lt;br /&gt;
|-&lt;br /&gt;
!11:00 AM&lt;br /&gt;
|-&lt;br /&gt;
!11:30 AM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Reproducible Data Management with Datalad|Reproducible Data Management with Datalad: Part I]]&amp;lt;br&amp;gt;&#039;&#039;&#039;ICT 114, 11:30AM - 12:20PM&amp;lt;br&amp;gt;David Deepwell, Pedro Martinez&lt;br /&gt;
|&#039;&#039;&#039;[[RCS Summer School 2024#RCS Q&amp;amp;A period: Ask RCS anything|RCS Q&amp;amp;A period: Ask RCS anything]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 11:30AM - 12:00PM&amp;lt;br&amp;gt;RCS Team&lt;br /&gt;
|-&lt;br /&gt;
!12:00 PM&lt;br /&gt;
|&#039;&#039;&#039;[[RCS Summer School 2024#Open OnDemand on ARC|Open OnDemand on ARC]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 12:00 PM - 12:20 PM&amp;lt;br&amp;gt;Leo Leung&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;Lunch break&#039;&#039;&#039;&amp;lt;br&amp;gt;12:00PM - 1:00PM&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
!12:30 PM&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;Lunch break&#039;&#039;&#039;&amp;lt;br&amp;gt;12:30PM - 1:30PM&lt;br /&gt;
|&#039;&#039;&#039;Lunch break&#039;&#039;&#039;&amp;lt;br&amp;gt;12:00PM - 1:00PM&lt;br /&gt;
|-&lt;br /&gt;
!1:00 PM&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Research Data Management and Data File Management|Research Data Management and Data File Management]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 1:00PM - 2:20PM&amp;lt;br&amp;gt;Jennifer Abel, Alex Thistlewood, Ingrid Reiche&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Refreshments&amp;lt;br&amp;gt;ICT 114&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Dell &amp;amp; AMD|Dell &amp;amp; AMD: Machine learning with Dell &amp;amp; AMD]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 1:00PM - 1:50PM&lt;br /&gt;
Rob Lucas&lt;br /&gt;
|-&lt;br /&gt;
!1:30 PM&lt;br /&gt;
|&#039;&#039;&#039;[[RCS Summer School 2024#AWS|AWS: Inspiring the art of the possible]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 1:30PM - 1:50PM&lt;br /&gt;
AWS&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Refreshments&amp;lt;br&amp;gt;ICT 114&lt;br /&gt;
|-&lt;br /&gt;
!2:00 PM&lt;br /&gt;
|[[RCS Summer School 2024#AWS|&#039;&#039;&#039;AWS: How AWS works with Researchers&#039;&#039;&#039;]]&amp;lt;br&amp;gt;ICT 102, 2:00PM - 2:20PM&lt;br /&gt;
AWS&lt;br /&gt;
|-&lt;br /&gt;
!2:30 PM&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#AWS|AWS: Machine Learning with low-code workshop]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 2:30PM - 4:50PM&amp;lt;br&amp;gt;AWS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Introduction to containers with Apptainer|Introduction to containers with Apptainer]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 2:30PM - 3:20PM&amp;lt;br&amp;gt;Tannistha Nandi&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Prefect for Research Workflow Development|Prefect for Research Workflow Development]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 2:30PM - 3:50PM&amp;lt;br&amp;gt;David Deepwell, Pedro Martinez&lt;br /&gt;
|-&lt;br /&gt;
!3:00 PM&lt;br /&gt;
|-&lt;br /&gt;
!3:30 PM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;[[RCS Summer School 2024#Managing scientific software with Conda|Managing scientific software with Conda]]&#039;&#039;&#039;&amp;lt;br&amp;gt;ICT 102, 3:30PM - 4:20PM&amp;lt;br&amp;gt;Dmitri Rozmanov&lt;br /&gt;
|-&lt;br /&gt;
!4:00 PM&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; rowspan=&amp;quot;3&amp;quot; |&#039;&#039;&#039;End of day: 4:00PM&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
!4:30 PM&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; rowspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;End of day: 4:30PM&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
!5:00 PM&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; |&#039;&#039;&#039;End of day: 5:00PM&#039;&#039;&#039;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Sessions ==&lt;br /&gt;
&lt;br /&gt;
=== Introduction to RCS ===&lt;br /&gt;
ICT 102, 9:00AM - 9:20AM by Jill Kowalchuk&lt;br /&gt;
&lt;br /&gt;
We will begin the summer school with a quick introduction by Jill Kowalchuk, the interim Director of Research Computing Services. We&#039;ll go over who RCS is and the services that we offer.&lt;br /&gt;
&lt;br /&gt;
=== Introduction to Linux, Bash, and the command line ===&lt;br /&gt;
ICT 102, 9:30AM - 10:30AM by Robert Fridman&lt;br /&gt;
&lt;br /&gt;
A quick crash course on how to use Linux, the Bash shell, and the command line in general. This beginner-friendly session requires no prior experience with Linux. We recommend bringing your own device to follow along.&lt;br /&gt;
&lt;br /&gt;
=== Workshop: Hands on with Linux &amp;amp; Slurm ===&lt;br /&gt;
ICT 102, 10:30AM - 11:50AM by Robert Fridman&lt;br /&gt;
&lt;br /&gt;
A follow-up workshop that builds on the basics covered in the Linux introduction session and goes into depth on how to use Slurm, the scheduler that RCS uses on its high performance computing clusters. We recommend bringing your own device to follow along.&lt;br /&gt;
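For reference, a minimal Slurm batch script of the kind covered in this workshop might look like the sketch below; the job name, resource numbers, and partition name are placeholders, not ARC-specific values.&lt;br /&gt;

```shell
#!/bin/bash
#SBATCH --job-name=demo        # job name shown in the queue (placeholder)
#SBATCH --time=01:00:00        # wall-time limit: 1 hour
#SBATCH --nodes=1              # request a single node
#SBATCH --ntasks=1             # a single task
#SBATCH --cpus-per-task=4      # 4 CPU cores for that task
#SBATCH --mem=4G               # 4 GB of memory
#SBATCH --partition=cpu2023    # placeholder partition name

# The commands below run on the allocated compute node.
echo "Running on $(hostname) with ${SLURM_CPUS_PER_TASK:-?} CPUs"
```

A script like this would be submitted with sbatch and monitored with squeue.&lt;br /&gt;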
&lt;br /&gt;
=== Open OnDemand on ARC ===&lt;br /&gt;
ICT 102, 12:00 PM - 12:20 PM by Leo Leung&lt;br /&gt;
&lt;br /&gt;
Did you know you can run a Linux desktop on ARC? In this session, we will do a quick demo of ARC Open OnDemand, a web interface that allows users to submit jobs that need graphical user interfaces. We will also cover how to monitor your jobs through Open OnDemand.&lt;br /&gt;
&lt;br /&gt;
=== Developing a Research Data Management Plan with technical storage requirements ===&lt;br /&gt;
ICT 114, 9:30AM - 11:20AM by Ian Percel&lt;br /&gt;
&lt;br /&gt;
Effective management of your research data is paramount. Join us as we delve into crafting robust data management plans tailored to your specific research needs.&lt;br /&gt;
&lt;br /&gt;
=== Reproducible Data Management with Datalad ===&lt;br /&gt;
Part I: ICT 114, 11:30AM - 12:20PM by David Deepwell and Pedro Martinez&lt;br /&gt;
&lt;br /&gt;
Part II: ICT 114, 9:30AM - 10:20AM by David Deepwell and Pedro Martinez&lt;br /&gt;
&lt;br /&gt;
This workshop provides an introduction to digital data management with DataLad. Background content will be covered before the primary hands-on training, where attendees will create a small demonstration research project containing data provenance.&lt;br /&gt;
&lt;br /&gt;
Content to be covered includes: dataset basics, capturing data-provenance, and collaborative data analysis.&lt;br /&gt;
&lt;br /&gt;
DataLad is a git-based version control system. No git knowledge is required, though familiarity with git is strongly recommended. Command-line experience is required.&lt;br /&gt;
&lt;br /&gt;
=== Introduction to HPC resources ===&lt;br /&gt;
ICT 102, 9:30AM - 10:20AM by Robert Fridman, Dave Schulz&lt;br /&gt;
&lt;br /&gt;
An introduction to the high performance computing resources offered by RCS. We will go over how our infrastructure ties into your research, how to make the most of Slurm, and how to download and transfer data to and from other institutions.&lt;br /&gt;
&lt;br /&gt;
=== Linux tools &amp;amp; utilities for working with large data sets ===&lt;br /&gt;
ICT 102, 10:30AM - 11:20AM by Leo Leung&lt;br /&gt;
&lt;br /&gt;
As researchers work with ever-larger datasets, handling and managing them effectively is essential. In this session, we will go through common methods for working with datasets using standard Linux tools and utilities. We will cover how to download large datasets from the Internet, parse text-based data using tools such as sed, awk, and grep, and then tie everything together with pipes.&lt;br /&gt;
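As a taste of tying these tools together, here is a minimal, self-contained sketch; the CSV file and its fields are hypothetical examples, not session materials.&lt;br /&gt;

```shell
# Create a small sample dataset (hypothetical data, for illustration only).
printf 'name,score\nalice,90\nbob,72\ncarol,88\n' > scores.csv

# Tie grep, awk, and sed together with pipes:
# drop the header row, keep scores of 80 or more, and tag the output.
grep -v '^name,' scores.csv \
  | awk -F, '$2 >= 80 { print $1, $2 }' \
  | sed 's/^/PASS: /'
# prints:
# PASS: alice 90
# PASS: carol 88
```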
&lt;br /&gt;
=== RCS Q&amp;amp;A period: Ask RCS anything ===&lt;br /&gt;
ICT 102, 11:30AM - 12:00PM by the RCS team&lt;br /&gt;
&lt;br /&gt;
A general question and answers period where you can ask us anything related to RCS and HPC.&lt;br /&gt;
&lt;br /&gt;
=== Research Data Management and Data File Management ===&lt;br /&gt;
ICT 102, 1:00PM - 2:20PM by Jennifer Abel, Alex Thistlewood, and Ingrid Reiche (from The University of Calgary Libraries and Cultural Resources)&lt;br /&gt;
&lt;br /&gt;
Managing your digital files and research materials is critical for staying organized and for collaborating and communicating with colleagues. In this session, we will cover Research Data Management (RDM) and Data Management Plans (DMPs). We will also go over best practices in digital file management for your individual and organizational needs, including versioning and how to document and share your file and folder conventions using a README file.&lt;br /&gt;
&lt;br /&gt;
=== Introduction to containers with Apptainer ===&lt;br /&gt;
ICT 102, 2:30PM - 3:20PM by Tannistha Nandi&lt;br /&gt;
&lt;br /&gt;
Make your research workflows reproducible through the power of containers. We will go through in detail how to run containers on ARC using Apptainer.&lt;br /&gt;
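As a rough sketch of the basic Apptainer workflow (the image used here is a public example, not a session requirement):&lt;br /&gt;

```shell
# Pull a container image from a registry into a local SIF file.
apptainer pull lolcow.sif docker://sylabsio/lolcow

# Run a single command inside the container.
apptainer exec lolcow.sif cowsay "hello from a container"

# Or open an interactive shell inside it.
apptainer shell lolcow.sif
```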
&lt;br /&gt;
=== Managing scientific software with Conda ===&lt;br /&gt;
ICT 102, 3:30PM - 4:20PM by Dmitri Rozmanov&lt;br /&gt;
&lt;br /&gt;
Running customized scientific software in a shared HPC environment can be challenging. In this session, we will go over how to set up customized software environments using Conda.&lt;br /&gt;
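A typical Conda workflow looks roughly like the sketch below; the environment name and package list are arbitrary examples, not session requirements.&lt;br /&gt;

```shell
# Create an isolated environment with a chosen Python and packages
# (names here are examples, not recommendations).
conda create --name myproject python=3.11 numpy scipy

# Activate it, run your software, then deactivate.
conda activate myproject
python -c "import numpy; print(numpy.__version__)"
conda deactivate

# Export the environment so it can be reproduced elsewhere.
conda env export --name myproject > environment.yml
```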
&lt;br /&gt;
=== Prefect for Research Workflow Development ===&lt;br /&gt;
ICT 102, 2:30PM - 3:50PM by David Deepwell and Pedro Martinez&lt;br /&gt;
&lt;br /&gt;
Modernize your research workflows using Prefect, an open source workflow orchestration tool.  We will show how you can build and deploy resilient workflows.&lt;br /&gt;
&lt;br /&gt;
=== AWS ===&lt;br /&gt;
==== AWS: Inspiring the art of the possible ====&lt;br /&gt;
ICT 102, 1:30PM - 1:50PM by AWS&lt;br /&gt;
&lt;br /&gt;
Learn what is possible on AWS Cloud for research.&lt;br /&gt;
&lt;br /&gt;
==== AWS: How AWS works with Researchers ====&lt;br /&gt;
ICT 102, 2:00PM - 2:20PM by AWS&lt;br /&gt;
&lt;br /&gt;
AWS has many programs to support researchers, such as credits, letters of support, immersion days, and proof-of-concept work. In this session, we will cover how we engage with researchers and which programs are available to help accelerate your research with the AWS Cloud.&lt;br /&gt;
&lt;br /&gt;
==== AWS: Machine learning with low-code workshop ====&lt;br /&gt;
ICT 102, 2:30PM - 4:50PM by AWS&lt;br /&gt;
&lt;br /&gt;
The machine learning (ML) journey requires continuous experimentation and rapid prototyping to be successful. To create highly accurate and performant models, data scientists first have to experiment with feature engineering, model selection, and optimization techniques. These processes are traditionally time-consuming and expensive. In this workshop, attendees will learn:&lt;br /&gt;
&lt;br /&gt;
* How the low-code ML capabilities in Amazon SageMaker Data Wrangler, Autopilot, and JumpStart make it easier to experiment faster and bring highly accurate models to production more quickly and efficiently&lt;br /&gt;
* How to simplify data preparation and feature engineering, and complete each step of the data preparation workflow&lt;br /&gt;
* How to automatically build, train, and tune the best machine learning models from your data, while maintaining full control and visibility&lt;br /&gt;
* How to get started with ML quickly using pre-built solutions for common financial use cases and open-source models from popular model zoos&lt;br /&gt;
&lt;br /&gt;
=== NVIDIA ===&lt;br /&gt;
&lt;br /&gt;
==== Workflow Optimization with NVIDIA GPUs ====&lt;br /&gt;
ICT 102, 9:30AM - 12:20PM by NVIDIA&lt;br /&gt;
&lt;br /&gt;
We will discuss how to optimize workflows with NVIDIA-powered GPUs to help accelerate your research.&lt;br /&gt;
&lt;br /&gt;
=== Dell &amp;amp; AMD ===&lt;br /&gt;
&lt;br /&gt;
==== Machine learning with Dell &amp;amp; AMD ====&lt;br /&gt;
ICT 102, 1:00PM - 1:50PM by Rob Lucas&lt;br /&gt;
&lt;br /&gt;
To be announced.&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Storage_Options&amp;diff=3316</id>
		<title>Storage Options</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Storage_Options&amp;diff=3316"/>
		<updated>2024-05-14T21:40:20Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* Archive Storage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= General =&lt;br /&gt;
&lt;br /&gt;
There are a few options researchers can take advantage of when storing their research data. &lt;br /&gt;
 &lt;br /&gt;
== Data Classification ==&lt;br /&gt;
Please review the different data classifications that are outlined by the [https://ucalgary.ca/legal-services/sites/default/files/teams/1/Standards-Legal-Information-Security-Classification-Standard.pdf Information Security Classification Standard]. There are 4 levels of data classification which are summarized in the table below.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Level&lt;br /&gt;
! Description&lt;br /&gt;
! Example&lt;br /&gt;
|-&lt;br /&gt;
| Level 1&lt;br /&gt;
| Public&lt;br /&gt;
|&lt;br /&gt;
* Reference data sets&lt;br /&gt;
* Published research data&lt;br /&gt;
|-&lt;br /&gt;
| Level 2&lt;br /&gt;
| Internal&lt;br /&gt;
|&lt;br /&gt;
* Internal memos&lt;br /&gt;
* Unpublished research data&lt;br /&gt;
* Anonymized or de-identified human subject data&lt;br /&gt;
* Library transactions and journals&lt;br /&gt;
|-&lt;br /&gt;
| Level 3&lt;br /&gt;
| Confidential&lt;br /&gt;
|&lt;br /&gt;
* Faculty/staff employment applications, personnel files, contact information&lt;br /&gt;
* Donor or prospective donor information&lt;br /&gt;
* Contracts&lt;br /&gt;
* Intellectual property&lt;br /&gt;
|-&lt;br /&gt;
| Level 4&lt;br /&gt;
| Restricted&lt;br /&gt;
|&lt;br /&gt;
* Patient identifiable health information&lt;br /&gt;
* Identifiable human subject research data&lt;br /&gt;
* Information subject to special government requirements&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
When selecting a storage option, you must choose one that is rated to meet or exceed your data&#039;s security classification.&lt;br /&gt;
&lt;br /&gt;
* See also the Collaboration, storage and file shares article in Service Now:&lt;br /&gt;
: https://ucalgary.service-now.com/it?id=it_catalog_by_category&amp;amp;sys_id=4dbb82ee13661200c524fc04e144b044&lt;br /&gt;
&lt;br /&gt;
== Research Data Management ==&lt;br /&gt;
We recommend that you follow good Research Data Management practices and create a DMP (Data Management Plan) to guide your data&#039;s lifecycle. DMP Assistant was created specifically for Canadian scholars and aims to meet all Tri-Agency requirements. See: https://assistant.portagenetwork.ca/&lt;br /&gt;
&lt;br /&gt;
Your DMP can help us support the FAIR (findable, accessible, interoperable and reusable) principles for data management.&lt;br /&gt;
&lt;br /&gt;
Please consider contacting Libraries and Cultural Resources for assistance. For guidance on general data management and developing a DMP, consult https://library.ucalgary.ca/guides/researchdatamanagement or contact research.data@ucalgary.ca.&lt;br /&gt;
&lt;br /&gt;
For support using PRISM Dataverse, UofC&#039;s institutional data repository, contact digitize@ucalgary.ca.&lt;br /&gt;
&lt;br /&gt;
If you need to share and preserve your large post-publication data set for a mandated period of time, please visit https://www.frdr-dfdr.ca/repo/ in order to learn more about the national Federated Research Data Repository. &lt;br /&gt;
&lt;br /&gt;
FRDR aligns with Tri-Agency principles as a platform for the preservation, retention, and sharing of research data. See: [http://www.science.gc.ca/eic/site/063.nsf/eng/h_83F7624E.html Tri-Agency Statement of Principles on Digital Data Management]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= University of Calgary RCS storage services =&lt;br /&gt;
&lt;br /&gt;
== Secure Compute Data Storage (SCDS) ==&lt;br /&gt;
Secure Compute Data Storage (SCDS) is a service provided by Research Computing Services that allows researchers to store restricted and confidential data. Collaboration on Level 4 data stored in SCDS is possible using ShareFile, a secure file-sharing and collaboration tool from Citrix.&lt;br /&gt;
 &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Capacity&lt;br /&gt;
| 10 GB or more&lt;br /&gt;
|-&lt;br /&gt;
! Classification&lt;br /&gt;
| Level 4&lt;br /&gt;
|-&lt;br /&gt;
! Learn More&lt;br /&gt;
| Visit [https://it.ucalgary.ca/secure-computing-platform The SCDS Website]&lt;br /&gt;
|-&lt;br /&gt;
! Request Access&lt;br /&gt;
| Visit [https://ucalgary.service-now.com/kb_view.do?sysparm_article=KB0030163 ServiceNow to request access]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== ResearchFS ==&lt;br /&gt;
ResearchFS is a UofC hosted SMB/CIFS storage solution funded and operated by RCS. It is available by request to faculty and staff with active research data.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Capacity&lt;br /&gt;
| 1TB with quota increases available on request. &lt;br /&gt;
|-&lt;br /&gt;
! Classification&lt;br /&gt;
| Level 1 - 2&lt;br /&gt;
|-&lt;br /&gt;
! Request Access&lt;br /&gt;
| Visit [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=fe66b3a7db297300897e4b8b0b96199d ServiceNow to request access]&lt;br /&gt;
|}&lt;br /&gt;
=== Service Description ===&lt;br /&gt;
You may use ResearchFS to store your active research data files. ResearchFS is intended to be used as a research group or project share. It is available on campus, or off campus using the IT-supported VPN client. Information on how to download and install the VPN client can be found here: https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=880e71071381ae006f3afbb2e144b05c (IT account login may be required).&lt;br /&gt;
All ResearchFS users must have a UofC IT account.&lt;br /&gt;
&lt;br /&gt;
=== Data recovery ===&lt;br /&gt;
ResearchFS takes daily snapshots shortly after midnight and keeps them for 30 days. You can recover a deleted file for up to 30 days, provided it was in your share overnight; if you create and delete a file within the same day, no snapshot will be available to recover it. ResearchFS presents snapshots through the Windows &#039;Previous Versions&#039; feature. If you are not familiar with this feature, or are on a Linux or macOS device, you can request a restore through ServiceNow.&lt;br /&gt;
&lt;br /&gt;
For backup, we replicate changes to a distant data center every hour. The storage hardware that hosts your data is located in the basement of the Math Sciences building, and the backup copy is in the HRIC building, so in the event of an on-campus disaster your data should be safe.&lt;br /&gt;
 &lt;br /&gt;
=== Support for ResearchFS ===&lt;br /&gt;
If you have questions, please contact the IT Support Centre.&lt;br /&gt;
: Mon – Fri: 8:30 am – 5:00 pm; Sat, Sun &amp;amp; holidays: 10:00 am – 2:00 pm.&lt;br /&gt;
: Live Chat: ucalgary.ca/it&lt;br /&gt;
: Email: itsupport@ucalgary.ca&lt;br /&gt;
: Phone: 403.220.5555&lt;br /&gt;
: In person: 773 Math Science&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= University of Calgary IT storage services =&lt;br /&gt;
&lt;br /&gt;
== OneDrive for Business ==&lt;br /&gt;
OneDrive for Business is a storage solution provided by Microsoft and is available to all faculty and staff.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Capacity&lt;br /&gt;
| 5 TB with quota increases [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=438e6d8313896a0053f2d7b2e144b0b9 available on request].&lt;br /&gt;
|-&lt;br /&gt;
! Classification&lt;br /&gt;
| Level 1 - 4&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
You may use OneDrive for Business to store your personal and work-related files.&lt;br /&gt;
Files stored within OneDrive are by default private to you, but can optionally be shared with others for collaboration.&lt;br /&gt;
OneDrive for Business cannot be used as a department or project share space;&lt;br /&gt;
there is no group/lab offering with OneDrive.&lt;br /&gt;
&lt;br /&gt;
While OneDrive provides a secure and compliant location from an IT security standpoint,&lt;br /&gt;
it is not an appropriate location for data that the PI is accountable for retaining for 5 years after completion of a study.&lt;br /&gt;
This is not a security issue, but a data management issue.&lt;br /&gt;
&lt;br /&gt;
For example, if a study stored all of its records in the personal OneDrive of one of the researchers,&lt;br /&gt;
and that researcher left the university, that OneDrive would be gone in 30 days.&lt;br /&gt;
&lt;br /&gt;
Microsoft provides automation capabilities for its Office 365 products.&lt;br /&gt;
On a Windows machine, you can use Power Automate (formerly &#039;Flow&#039;) to copy a file to a local file system whenever a new file is created on OneDrive.&lt;br /&gt;
&lt;br /&gt;
To back up data residing on ARC to your personal OneDrive allocation please see:  [[How to transfer data#rclone: rsync for cloud storage]]&lt;br /&gt;
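Such a transfer can be sketched with rclone along the following lines; the remote name and paths are hypothetical, and the linked article describes the supported setup.&lt;br /&gt;

```shell
# One-time setup: interactively configure a OneDrive remote
# (we name it "onedrive" here as an example).
rclone config

# Copy a results directory from ARC storage to OneDrive; paths are hypothetical.
rclone copy ~/project/results onedrive:backups/results --progress

# Verify that source and destination contents match.
rclone check ~/project/results onedrive:backups/results
```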
&lt;br /&gt;
OneDrive requires Multi-Factor Authentication (MFA) enabled on your University of Calgary IT account. &lt;br /&gt;
&lt;br /&gt;
More information can be located in the following article https://ucalgary.service-now.com/kb_view.do?sysparm_article=KB0032351&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
UofC OneDrive data is reportedly hosted in Canada (Markham, Ont.).&lt;br /&gt;
&lt;br /&gt;
===Support for OneDrive===&lt;br /&gt;
If you have questions, please contact the [https://ucalgary.service-now.com/it UService Support Centre].&lt;br /&gt;
: Email: it@ucalgary.ca&lt;br /&gt;
: Phone: 403.210.9300 or 1.888.342.3802&lt;br /&gt;
: Mon - Fri: 8:00 a.m. to noon and 1:00 p.m. to 4:30 p.m. (closed over the lunch hour)&lt;br /&gt;
: Walk-in service: Math Sciences 7th floor, Room 773; Tues - Thurs: 1:00 p.m. to 4:30 p.m.&lt;br /&gt;
&lt;br /&gt;
===Data recovery===&lt;br /&gt;
&lt;br /&gt;
===Other Resources===&lt;br /&gt;
For more information on OneDrive for Business:&lt;br /&gt;
* Operating Level of Agreement KB0032404 (https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=7f57bddcdb56a3047cab5068dc9619b6)&lt;br /&gt;
*OneDrive for Business Getting Started KB0032351 (https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=60994170db2da7487cab5068dc961900)&lt;br /&gt;
*If you are above 90% of your OneDrive quota, you can request an increase here: https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=438e6d8313896a0053f2d7b2e144b0b9 PLEASE NOTE: Microsoft will only increase an allocation while the cloud storage is more than 90% full. Please log in to your O365 cloud account to review your usage before making your request.&lt;br /&gt;
&lt;br /&gt;
Any questions about whether data hosted on OneDrive is subject to US-jurisdiction discovery or access should be directed to:&lt;br /&gt;
*https://cumming.ucalgary.ca/research-institutes/csm-research-services/legal-research-services (CSM researchers)&lt;br /&gt;
*https://research.ucalgary.ca/contact/research-services (non-CSM researchers)&lt;br /&gt;
*https://www.ucalgary.ca/legalservices/ (teaching/learning and other non-research enquiries)&lt;br /&gt;
&lt;br /&gt;
==Office365 SharePoint for research groups==&lt;br /&gt;
&lt;br /&gt;
To be determined.&lt;br /&gt;
&lt;br /&gt;
At some point in the future, researchers will be able to request an Office 365 SharePoint site for a group, &lt;br /&gt;
which could serve as a group cloud-sharing platform.&lt;br /&gt;
&lt;br /&gt;
* The official service page:&lt;br /&gt;
: https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=b55f2f72132f5240b5b4df82e144b085&lt;br /&gt;
&lt;br /&gt;
= Digital Research Alliance of Canada storage services =&lt;br /&gt;
&lt;br /&gt;
== Storage on the Alliance HPC clusters ==&lt;br /&gt;
&lt;br /&gt;
* Alliance Wiki article &amp;quot;Storage and file management&amp;quot;:&lt;br /&gt;
: https://docs.alliancecan.ca/wiki/Storage_and_file_management&lt;br /&gt;
&lt;br /&gt;
==Personal storage options==&lt;br /&gt;
For personal or Level 1 data, you may use an external solution from the Alliance.&lt;br /&gt;
An Alliance account is required to use the service.&lt;br /&gt;
The functionality is similar to Dropbox or Google Drive. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;The Alliance NextCloud&#039;&#039;&#039;:&lt;br /&gt;
:https://nextcloud.computecanada.ca&lt;br /&gt;
: 100 GB of storage that can be shared between your computers.&lt;br /&gt;
: Alliance documentation: https://docs.alliancecan.ca/wiki/Nextcloud&lt;br /&gt;
&lt;br /&gt;
= Commercial Cloud Based Storage Options =&lt;br /&gt;
&lt;br /&gt;
== AWS ==&lt;br /&gt;
&lt;br /&gt;
Provided by &#039;&#039;&#039;Amazon Web Services, Inc.&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
* Pricing calculator: https://calculator.aws&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
AWS provides many different kinds of services, including &#039;&#039;&#039;storage services&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
These options can be a solution for your research needs, but&lt;br /&gt;
* they &#039;&#039;&#039;can be expensive&#039;&#039;&#039;, depending on your needs and the amount of data;&lt;br /&gt;
* the &#039;&#039;&#039;pricing scheme is complex&#039;&#039;&#039; and can be confusing for new users;&lt;br /&gt;
* the &#039;&#039;&#039;number of options can be overwhelming&#039;&#039;&#039;; many exist to provide pricing flexibility rather than added functionality.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The key points:&lt;br /&gt;
* &#039;&#039;&#039;Uploading data&#039;&#039;&#039; to AWS is &#039;&#039;&#039;free&#039;&#039;&#039;.&lt;br /&gt;
* &#039;&#039;&#039;Storing data&#039;&#039;&#039; on AWS storage is a &#039;&#039;&#039;paid service&#039;&#039;&#039;.&lt;br /&gt;
* &#039;&#039;&#039;Downloading data&#039;&#039;&#039; from AWS storage to your computer is a &#039;&#039;&#039;paid service&#039;&#039;&#039;.&lt;br /&gt;
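As a back-of-envelope sketch of how the storage charge scales, with an assumed per-GB price (the real rate depends on the service, region, and storage class; use the pricing calculator above):&lt;br /&gt;

```shell
# Rough monthly storage-cost estimate; the price below is an assumption,
# not a quoted AWS rate. Check https://calculator.aws for real prices.
STORAGE_GB=1000
PRICE_PER_GB_MONTH=0.025
awk -v gb="$STORAGE_GB" -v p="$PRICE_PER_GB_MONTH" \
    'BEGIN { printf "approx. $%.2f per month\n", gb * p }'
# prints "approx. $25.00 per month" for these example numbers
```

Remember that downloading (egress) is billed separately, so retrieving a large dataset later adds its own per-GB cost.&lt;br /&gt;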
&lt;br /&gt;
= ARC Cluster Storage =&lt;br /&gt;
&lt;br /&gt;
ARC storage is used to &#039;&#039;&#039;support workflows on the ARC computing cluster&#039;&#039;&#039;. The expectation is that storage on ARC will only be used for active and upcoming computational projects. &lt;br /&gt;
It is not suitable for long-term or archival storage, as it is not backed up and is not guaranteed to be available for the time periods typical of archiving.&lt;br /&gt;
ARC is a research cluster: it offers high performance but can be stopped for required maintenance when needed. &lt;br /&gt;
Thus, ARC cannot be relied on for any service that requires constant availability. &lt;br /&gt;
This means, in turn, that ARC&#039;s storage cannot and &#039;&#039;&#039;should not be used as the main storage facility for research data&#039;&#039;&#039;. &lt;br /&gt;
The &#039;&#039;&#039;master copy&#039;&#039;&#039; of research data should be &#039;&#039;&#039;stored elsewhere&#039;&#039;&#039;, with only the part needed for computational analysis copied to ARC.&lt;br /&gt;
&lt;br /&gt;
== Home Directories ==&lt;br /&gt;
Every user account on ARC has a static 500GB storage allocation and a maximum of 1.5 million files (including directories). This cannot be increased or decreased. Home directory storage is connected to the rest of the cluster via a network file system and supports fast data transfer to memory on compute nodes. This also means that basic file system commands (like &amp;lt;code&amp;gt;ls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;find&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;du&amp;lt;/code&amp;gt;) take longer to run as the number of files in your home directory increases. In particular, we strongly encourage users to stay under 100,000 files if at all possible. This can be achieved by combining smaller data files into single larger files, using structured data formats rather than large numbers of text files, or combining collections of files that will be used together into archives (tar, dar, etc.). Since top-level permissions on home directories prevent other users from reading or executing, home directories are not suitable for sharing data directly with colleagues working on ARC. A Research Group Allocation is a more appropriate place for shared data or very large data sets that will be used as part of active computational projects.&lt;br /&gt;
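The file-count advice above can be sketched as follows; the directory and file names are hypothetical.&lt;br /&gt;

```shell
# Sketch: pack many small result files into one archive so they count
# as a single file against the home-directory file limit.
mkdir -p results
for i in 1 2 3; do echo "run $i" > "results/run_$i.txt"; done
# One .tar.gz file replaces the whole directory of small files.
tar -czf results.tar.gz results
rm -r results
# Listing the archive confirms the files are still recoverable.
tar -tzf results.tar.gz
```

Extract later with tar -xzf results.tar.gz when the files are needed again.&lt;br /&gt;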
&lt;br /&gt;
== Research Group Allocations (&amp;lt;code&amp;gt;/work&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/bulk&amp;lt;/code&amp;gt;) ==&lt;br /&gt;
The principal investigator (PI) for a research group may request an extended shared allocation for the research group by contacting support@hpc.ucalgary.ca with answers to the following questions (please copy the full text of the questions into your email and write answers under it):&lt;br /&gt;
&lt;br /&gt;
* How much storage is requested and why is that the amount that you need? &lt;br /&gt;
A rationale can be a formal data management plan, or something more informal such as a rough estimate of the size of the primary dataset used for a project and of the outputs expected from the computations you plan to run on ARC over the &#039;&#039;&#039;next year.&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* What is the requested &#039;&#039;&#039;allocation name&#039;&#039;&#039;? (typically &amp;lt;PI name&amp;gt;_lab; for example, &amp;lt;code&amp;gt;smith_lab&amp;lt;/code&amp;gt;)&lt;br /&gt;
* What is the &#039;&#039;&#039;data classification&#039;&#039;&#039; using the University of Calgary data security classification system?&lt;br /&gt;
* Which user or users would be the &#039;&#039;&#039;owner&#039;&#039;&#039; of the allocation? (Full Name and UCalgary Email address, typically the requesting PI but there may be co-PIs)&lt;br /&gt;
* Which members of the allocation should be able to &#039;&#039;&#039;request access&#039;&#039;&#039; for new users? (Full Name and UCalgary Email address for active ARC users) &lt;br /&gt;
* What is the &#039;&#039;&#039;faculty&#039;&#039;&#039; of the owner or owners? &lt;br /&gt;
* Please provide a short description of the lab.&lt;br /&gt;
* Please provide a brief &#039;&#039;&#039;numerical estimate&#039;&#039;&#039; of the required storage space based on the &#039;&#039;&#039;projects&#039;&#039;&#039; that will use the allocation and their storage &#039;&#039;&#039;requirements&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example 1&#039;&#039;&#039;: &amp;quot;We will be processing a 1TB dataset by performing 100 experimental runs. &lt;br /&gt;
Each experiment will be processed to produce a 6GB output, giving 600GB of output data in total. &lt;br /&gt;
We will also need 400GB of additional space for post-processing and data management. &lt;br /&gt;
Thus, we would like to request &#039;&#039;&#039;2TB&#039;&#039;&#039; of shared space in total.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example 2&#039;&#039;&#039;: &amp;quot;Three members of our research group need additional shared space on ARC for their independent projects.&lt;br /&gt;
Project 1 starts with 100GB of initial data and is expected to generate 800GB of output results. &lt;br /&gt;
Project 2 uses simulations with no input data but is expected to generate 2TB of simulated data for further processing.&lt;br /&gt;
The processing will require 200GB of additional space.&lt;br /&gt;
Project 3 will work on a 1TB dataset and is expected to generate about 1TB of output data. &lt;br /&gt;
These projects, therefore, will require 5.1TB of storage. &lt;br /&gt;
For convenience of data manipulation and management, we would also like an additional 400GB of storage space.&lt;br /&gt;
Therefore, we would like to request &#039;&#039;&#039;5.5TB&#039;&#039;&#039; of shared storage space in total.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Work and Bulk storage can be considerably larger than the home directory allocations. However, there are limits on what RCS can provide as ARC storage provides high-speed access and is expensive to purchase. Typically, &#039;&#039;&#039;any request over 10TB&#039;&#039;&#039; will require some discussion. Work and Bulk allocations differ in a few ways that influence how they are used. Work storage is faster to access as part of computational jobs on ARC although the impact is small for jobs that don&#039;t involve enormous numbers of reads. Bulk storage is designed to be a target for instrument data (which is typically processed in a way that reads data a small number of times per job) and is capable of mounting instruments elsewhere on campus using SMB. A number of questions come up frequently about Work and Bulk storage and these are addressed in an [[Group Storage Allocation FAQ | FAQ]].&lt;br /&gt;
&lt;br /&gt;
===== How to add a group member to the access list (&amp;lt;code&amp;gt;/work&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/bulk&amp;lt;/code&amp;gt;)? =====&lt;br /&gt;
&lt;br /&gt;
Any group member who wants to use the shared storage should send an email to support@hpc.ucalgary.ca asking to be added to the access group, and CC the PI/data owner. &#039;&#039;&#039;This confirms that the PI approves the group member&#039;s request for access to the shared storage.&#039;&#039;&#039; Please note that access permissions inside the directory are expected to be managed by the data owners.&lt;br /&gt;
&lt;br /&gt;
= Archive Storage =&lt;br /&gt;
&lt;br /&gt;
Archive storage for data sets supporting published research is available through the Federated Research Data Repository (FRDR).&lt;br /&gt;
FRDR is a bilingual publishing platform for sharing and preserving Canadian research data.&lt;br /&gt;
It is a curated, general-purpose repository, custom built for large datasets.&lt;br /&gt;
FRDR is run by the Digital Research Alliance of Canada.&lt;br /&gt;
&lt;br /&gt;
For more information on FRDR visit their web site:  https://www.frdr-dfdr.ca/repo/&lt;br /&gt;
&lt;br /&gt;
[[Category:Administration]]&lt;br /&gt;
{{Navbox Administration}}&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Storage_Options&amp;diff=3315</id>
		<title>Storage Options</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Storage_Options&amp;diff=3315"/>
		<updated>2024-05-14T21:35:19Z</updated>

		<summary type="html">&lt;p&gt;Darcy: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= General =&lt;br /&gt;
&lt;br /&gt;
There are a few options researchers can take advantage of when storing their research data. &lt;br /&gt;
 &lt;br /&gt;
== Data Classification ==&lt;br /&gt;
Please review the different data classifications that are outlined by the [https://ucalgary.ca/legal-services/sites/default/files/teams/1/Standards-Legal-Information-Security-Classification-Standard.pdf Information Security Classification Standard]. There are 4 levels of data classification which are summarized in the table below.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Level&lt;br /&gt;
! Description&lt;br /&gt;
! Example&lt;br /&gt;
|-&lt;br /&gt;
| Level 1&lt;br /&gt;
| Public&lt;br /&gt;
|&lt;br /&gt;
* Reference data sets&lt;br /&gt;
* Published research data&lt;br /&gt;
|-&lt;br /&gt;
| Level 2&lt;br /&gt;
| Internal&lt;br /&gt;
|&lt;br /&gt;
* Internal memos&lt;br /&gt;
* Unpublished research data&lt;br /&gt;
* Anonymized or de-identified human subject data&lt;br /&gt;
* Library transactions and journals&lt;br /&gt;
|-&lt;br /&gt;
| Level 3&lt;br /&gt;
| Confidential&lt;br /&gt;
|&lt;br /&gt;
* Faculty/staff employment applications, personnel files, contact information&lt;br /&gt;
* Donor or prospective donor information&lt;br /&gt;
* Contracts&lt;br /&gt;
* Intellectual property&lt;br /&gt;
|-&lt;br /&gt;
| Level 4&lt;br /&gt;
| Restricted&lt;br /&gt;
|&lt;br /&gt;
* Patient identifiable health information&lt;br /&gt;
* Identifiable human subject research data&lt;br /&gt;
* Information subject to special government requirements&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
When selecting a storage option, you must use one that meets or exceeds the rated security classification.&lt;br /&gt;
&lt;br /&gt;
* See also the Collaboration, storage and file shares article in Service Now:&lt;br /&gt;
: https://ucalgary.service-now.com/it?id=it_catalog_by_category&amp;amp;sys_id=4dbb82ee13661200c524fc04e144b044&lt;br /&gt;
&lt;br /&gt;
== Research Data Management ==&lt;br /&gt;
We recommend you follow good Research Data Management practices and ensure you have a DMP (Data Management Plan) created to guide your data&#039;s lifecycle. DMP Assistant was created specifically for Canadian scholars and aims to meet Tri-Agency requirements. See: https://assistant.portagenetwork.ca/&lt;br /&gt;
&lt;br /&gt;
Your DMP can help us support the FAIR (findable, accessible, interoperable and reusable) principles for data management.&lt;br /&gt;
&lt;br /&gt;
Please consider contacting Libraries and Cultural Resources for assistance. For guidance on general data management and developing a DMP, consult https://library.ucalgary.ca/guides/researchdatamanagement or contact research.data@ucalgary.ca.&lt;br /&gt;
&lt;br /&gt;
For support using PRISM Dataverse, UofC&#039;s institutional data repository, contact digitize@ucalgary.ca.&lt;br /&gt;
&lt;br /&gt;
If you need to share and preserve your large post-publication data set for a mandated period of time, please visit https://www.frdr-dfdr.ca/repo/ in order to learn more about the national Federated Research Data Repository. &lt;br /&gt;
&lt;br /&gt;
FRDR aligns with Tri-Agency Principles as a platform for Preservation, Retention and Sharing of research data. See: [http://www.science.gc.ca/eic/site/063.nsf/eng/h_83F7624E.html Tri-Agency Statement of Principles on Digital Data Management]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= University of Calgary RCS storage services =&lt;br /&gt;
&lt;br /&gt;
== Secure Compute Data Storage (SCDS) ==&lt;br /&gt;
Secure Compute Data Storage (SCDS) is a service provided by Research Computing Services that allows researchers to store restricted and confidential data. Collaboration with Level 4 data stored in SCDS is possible using ShareFile, a secure file sharing and collaboration tool by Citrix.&lt;br /&gt;
 &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Capacity&lt;br /&gt;
| 10 GB or more&lt;br /&gt;
|-&lt;br /&gt;
! Classification&lt;br /&gt;
| Level 4&lt;br /&gt;
|-&lt;br /&gt;
! Learn More&lt;br /&gt;
| Visit [https://it.ucalgary.ca/secure-computing-platform The SCDS Website]&lt;br /&gt;
|-&lt;br /&gt;
! Request Access&lt;br /&gt;
| Visit [https://ucalgary.service-now.com/kb_view.do?sysparm_article=KB0030163 ServiceNow to request access]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== ResearchFS ==&lt;br /&gt;
ResearchFS is a UofC-hosted SMB/CIFS storage solution funded and operated by RCS. It is available on request to faculty and staff with active research data.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Capacity&lt;br /&gt;
| 1TB with quota increases available on request. &lt;br /&gt;
|-&lt;br /&gt;
! Classification&lt;br /&gt;
| Level 1 - 2&lt;br /&gt;
|-&lt;br /&gt;
! Request Access&lt;br /&gt;
| Visit [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=fe66b3a7db297300897e4b8b0b96199d ServiceNow to request access]&lt;br /&gt;
|}&lt;br /&gt;
=== Service Description ===&lt;br /&gt;
You may use ResearchFS to store your active research data files. ResearchFS is intended to be used as a research group or project share. ResearchFS is available on campus or off campus using the IT supported VPN client. Information on how to download and install the VPN client can be found here: https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=880e71071381ae006f3afbb2e144b05c (IT account login may be required).&lt;br /&gt;
All ResearchFS users must have a UofC IT account.&lt;br /&gt;
&lt;br /&gt;
=== Data recovery ===&lt;br /&gt;
ResearchFS takes daily snapshots shortly after midnight and keeps them for 30 days. You should be able to recover a deleted file for up to 30 days, provided it was in your share overnight. If you create and delete a file within the same day, no snapshot will be available to recover it from. ResearchFS presents backups using the Windows &#039;Previous Versions&#039; functionality. If you are not familiar with using this, or if you are on a Linux or macOS device, you can request a restore through ServiceNow.&lt;br /&gt;
&lt;br /&gt;
For backup, we replicate changes to a distant data center every hour. The storage hardware hosting your data is located in the basement of the Math Sciences building and the backup is in the HRIC building, so in the case of an on-campus disaster your data should be safe.&lt;br /&gt;
 &lt;br /&gt;
=== Support for ResearchFS ===&lt;br /&gt;
If you have questions, please contact the IT Support Centre.&lt;br /&gt;
: Mon – Fri: 8:30 am – 5:00 pm; Sat, Sun &amp;amp; holidays: 10:00 am – 2:00 pm.&lt;br /&gt;
: Live Chat: ucalgary.ca/it&lt;br /&gt;
: Email: itsupport@ucalgary.ca&lt;br /&gt;
: Phone: 403.220.5555&lt;br /&gt;
: In person: 773 Math Science&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= University of Calgary IT storage services =&lt;br /&gt;
&lt;br /&gt;
== OneDrive for Business ==&lt;br /&gt;
OneDrive for Business is a storage solution provided by Microsoft and is available to all faculty and staff.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Capacity&lt;br /&gt;
| 5 TB with quota increases [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=438e6d8313896a0053f2d7b2e144b0b9 available on request].&lt;br /&gt;
|-&lt;br /&gt;
! Classification&lt;br /&gt;
| Level 1 - 4&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
You may use OneDrive for Business to store your personal and work-related files. &lt;br /&gt;
Files stored within OneDrive are private to you by default, but you have the option to share them and collaborate with others. &lt;br /&gt;
OneDrive for Business cannot be used as a department or project share space;&lt;br /&gt;
there is no group/lab offering with OneDrive. &lt;br /&gt;
&lt;br /&gt;
While OneDrive provides a secure and compliant location from an IT security standpoint, &lt;br /&gt;
it is not the most suitable location for data that the PI remains accountable for during the five years following completion of a study. &lt;br /&gt;
This is not a security issue, but a data management issue.&lt;br /&gt;
&lt;br /&gt;
For example, if a study were using the personal OneDrive of one of the researchers to store all of its records, &lt;br /&gt;
and that researcher left the university, the OneDrive account would be gone in 30 days.&lt;br /&gt;
&lt;br /&gt;
Microsoft offers automation for its O365 products.&lt;br /&gt;
On a Windows machine, you can use the automation product &#039;Power Automate&#039; (formerly &#039;Flow&#039;) to copy a file to a local file system whenever a new file is created on OneDrive.&lt;br /&gt;
&lt;br /&gt;
To back up data residing on ARC to your personal OneDrive allocation please see:  [[How to transfer data#rclone: rsync for cloud storage]]&lt;br /&gt;
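As a sketch, a one-way backup to OneDrive with rclone could look like the following. The remote name "onedrive" and the paths are assumptions for illustration; configure your own remote first with rclone config.&lt;br /&gt;

```shell
# Hypothetical sketch: copy a results directory from ARC to OneDrive.
# The remote name "onedrive" is an assumption; create it with `rclone config`.
SRC="$HOME/project_results"
DEST="onedrive:arc-backup/project_results"
# Build the command as a string so it can be reviewed before it is run.
CMD="rclone copy --progress $SRC $DEST"
echo "$CMD"
```

Running the echoed command copies new and changed files one way, from ARC to OneDrive, without deleting anything on the remote.&lt;br /&gt;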
&lt;br /&gt;
OneDrive requires that Multi-Factor Authentication (MFA) be enabled on your University of Calgary IT account. &lt;br /&gt;
&lt;br /&gt;
More information is available in the following article: https://ucalgary.service-now.com/kb_view.do?sysparm_article=KB0032351&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
UofC OneDrive data is reportedly hosted in Canada (Markham, Ontario).&lt;br /&gt;
&lt;br /&gt;
===Support for OneDrive===&lt;br /&gt;
If you have questions, please contact the [https://ucalgary.service-now.com/it UService Support Centre].&lt;br /&gt;
: Email: it@ucalgary.ca&lt;br /&gt;
: Phone: 403.210.9300 or 1.888.342.3802&lt;br /&gt;
: Hours: Mon - Fri: 8:00 a.m. to noon and 1:00 p.m. to 4:30 p.m. (closed over the lunch hour)&lt;br /&gt;
: Walk-in service: Math Sciences 7th floor, Room 773, Tues - Thurs: 1:00 p.m. to 4:30 p.m.&lt;br /&gt;
&lt;br /&gt;
===Data recovery===&lt;br /&gt;
&lt;br /&gt;
===Other Resources===&lt;br /&gt;
For more information on OneDrive for Business:&lt;br /&gt;
* Operating Level of Agreement KB0032404 (https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=7f57bddcdb56a3047cab5068dc9619b6)&lt;br /&gt;
*OneDrive for Business Getting Started KB0032351 (https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=60994170db2da7487cab5068dc961900)&lt;br /&gt;
*If you are above 90% of your OneDrive quota, you can request an increase here: https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=438e6d8313896a0053f2d7b2e144b0b9 &#039;&#039;&#039;Please note:&#039;&#039;&#039; Microsoft will only increase an allocation while the cloud storage is more than 90% full, so please log in to your O365 cloud account to check your usage before making the request.&lt;br /&gt;
&lt;br /&gt;
Any questions about whether data hosted on OneDrive is subject to US-jurisdiction discovery or access should be directed to:&lt;br /&gt;
*https://cumming.ucalgary.ca/research-institutes/csm-research-services/legal-research-services (CSM researchers)&lt;br /&gt;
*https://research.ucalgary.ca/contact/research-services (non-CSM researchers)&lt;br /&gt;
*https://www.ucalgary.ca/legalservices/ (teaching/learning and other non-research enquiries)&lt;br /&gt;
&lt;br /&gt;
==Office365 SharePoint for research groups==&lt;br /&gt;
&lt;br /&gt;
To be determined.&lt;br /&gt;
&lt;br /&gt;
At some point in the future, researchers will be able to request an Office 365 SharePoint site for a group, &lt;br /&gt;
which could serve as a group cloud-sharing platform.&lt;br /&gt;
&lt;br /&gt;
* The official service page:&lt;br /&gt;
: https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=b55f2f72132f5240b5b4df82e144b085&lt;br /&gt;
&lt;br /&gt;
= Digital Research Alliance of Canada storage services =&lt;br /&gt;
&lt;br /&gt;
== Storage on the Alliance HPC clusters ==&lt;br /&gt;
&lt;br /&gt;
* Alliance Wiki article &amp;quot;Storage and file management&amp;quot;:&lt;br /&gt;
: https://docs.alliancecan.ca/wiki/Storage_and_file_management&lt;br /&gt;
&lt;br /&gt;
==Personal storage options==&lt;br /&gt;
For personal or Level 1 data, you may use an external solution from the Alliance.&lt;br /&gt;
An Alliance account is required to use the service.&lt;br /&gt;
The functionality is similar to Dropbox or Google Drive. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;The Alliance NextCloud&#039;&#039;&#039;:&lt;br /&gt;
:https://nextcloud.computecanada.ca&lt;br /&gt;
: 100 GB of storage that can be shared between your computers.&lt;br /&gt;
: Alliance documentation: https://docs.alliancecan.ca/wiki/Nextcloud&lt;br /&gt;
&lt;br /&gt;
= Commercial Cloud Based Storage Options =&lt;br /&gt;
&lt;br /&gt;
== AWS ==&lt;br /&gt;
&lt;br /&gt;
Provided by &#039;&#039;&#039;Amazon Web Services, Inc.&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
* Pricing calculator: https://calculator.aws&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
AWS provides many different kinds of services, including &#039;&#039;&#039;storage services&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
These options can be a solution for your research needs, but&lt;br /&gt;
* they &#039;&#039;&#039;can be expensive&#039;&#039;&#039;, depending on your needs and the amount of data;&lt;br /&gt;
* the &#039;&#039;&#039;pricing scheme is complex&#039;&#039;&#039; and can be confusing for new users;&lt;br /&gt;
* the &#039;&#039;&#039;number of options can be overwhelming&#039;&#039;&#039;; many exist to provide pricing flexibility rather than added functionality.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The key points:&lt;br /&gt;
* &#039;&#039;&#039;Uploading data&#039;&#039;&#039; to AWS is &#039;&#039;&#039;free&#039;&#039;&#039;.&lt;br /&gt;
* &#039;&#039;&#039;Storing data&#039;&#039;&#039; on AWS storage is a &#039;&#039;&#039;paid service&#039;&#039;&#039;.&lt;br /&gt;
* &#039;&#039;&#039;Downloading data&#039;&#039;&#039; from AWS storage to your computer is a &#039;&#039;&#039;paid service&#039;&#039;&#039;.&lt;br /&gt;
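As a back-of-envelope sketch of how the storage charge scales, with an assumed per-GB price (the real rate depends on the service, region, and storage class; use the pricing calculator above):&lt;br /&gt;

```shell
# Rough monthly storage-cost estimate; the price below is an assumption,
# not a quoted AWS rate. Check https://calculator.aws for real prices.
STORAGE_GB=1000
PRICE_PER_GB_MONTH=0.025
awk -v gb="$STORAGE_GB" -v p="$PRICE_PER_GB_MONTH" \
    'BEGIN { printf "approx. $%.2f per month\n", gb * p }'
# prints "approx. $25.00 per month" for these example numbers
```

Remember that downloading (egress) is billed separately, so retrieving a large dataset later adds its own per-GB cost.&lt;br /&gt;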
&lt;br /&gt;
= ARC Cluster Storage =&lt;br /&gt;
&lt;br /&gt;
ARC storage is used to &#039;&#039;&#039;support workflows on the ARC computing cluster&#039;&#039;&#039;. The expectation is that storage on ARC will only be used for active and upcoming computational projects. &lt;br /&gt;
It is not suitable for long-term or archival storage, as it is not backed up and is not guaranteed to be available for the time periods typical of archiving.&lt;br /&gt;
ARC is a research cluster: it offers high performance but can be stopped for required maintenance when needed. &lt;br /&gt;
Thus, ARC cannot be relied on for any service that requires constant availability. &lt;br /&gt;
This means, in turn, that ARC&#039;s storage cannot and &#039;&#039;&#039;should not be used as the main storage facility for research data&#039;&#039;&#039;. &lt;br /&gt;
The &#039;&#039;&#039;master copy&#039;&#039;&#039; of research data should be &#039;&#039;&#039;stored elsewhere&#039;&#039;&#039;, with only the part needed for computational analysis copied to ARC.&lt;br /&gt;
&lt;br /&gt;
== Home Directories ==&lt;br /&gt;
Every user account on ARC has a static 500GB storage allocation and a maximum of 1.5 million files (including directories). This cannot be increased or decreased. Home directory storage is connected to the rest of the cluster via a network file system and supports fast data transfer to memory on compute nodes. This also means that basic file system commands (like &amp;lt;code&amp;gt;ls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;find&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;du&amp;lt;/code&amp;gt;) take longer to run as the number of files in your home directory increases. In particular, we strongly encourage users to stay under 100,000 files if at all possible. This can be achieved by combining smaller data files into single larger files, using structured data formats rather than large numbers of text files, or combining collections of files that will be used together into archives (tar, dar, etc.). Since top-level permissions on home directories prevent other users from reading or executing, home directories are not suitable for sharing data directly with colleagues working on ARC. A Research Group Allocation is a more appropriate place for shared data or very large data sets that will be used as part of active computational projects.&lt;br /&gt;
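The file-count advice above can be sketched as follows; the directory and file names are hypothetical.&lt;br /&gt;

```shell
# Sketch: pack many small result files into one archive so they count
# as a single file against the home-directory file limit.
mkdir -p results
for i in 1 2 3; do echo "run $i" > "results/run_$i.txt"; done
# One .tar.gz file replaces the whole directory of small files.
tar -czf results.tar.gz results
rm -r results
# Listing the archive confirms the files are still recoverable.
tar -tzf results.tar.gz
```

Extract later with tar -xzf results.tar.gz when the files are needed again.&lt;br /&gt;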
&lt;br /&gt;
== Research Group Allocations (&amp;lt;code&amp;gt;/work&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/bulk&amp;lt;/code&amp;gt;) ==&lt;br /&gt;
The principal investigator (PI) for a research group may request an extended shared allocation for the research group by contacting support@hpc.ucalgary.ca with answers to the following questions (please copy the full text of the questions into your email and write answers under it):&lt;br /&gt;
&lt;br /&gt;
* How much storage is requested and why is that the amount that you need? &lt;br /&gt;
A rationale can be a formal data management plan, or something more informal such as a rough estimate of the size of the primary dataset used for a project and of the outputs expected from the computations you plan to run on ARC over the &#039;&#039;&#039;next year.&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* What is the requested &#039;&#039;&#039;allocation name&#039;&#039;&#039;? (typically &amp;lt;PI name&amp;gt;_lab; for example, &amp;lt;code&amp;gt;smith_lab&amp;lt;/code&amp;gt;)&lt;br /&gt;
* What is the &#039;&#039;&#039;data classification&#039;&#039;&#039; using the University of Calgary data security classification system?&lt;br /&gt;
* Which user or users would be the &#039;&#039;&#039;owner&#039;&#039;&#039; of the allocation? (Full Name and UCalgary Email address, typically the requesting PI but there may be co-PIs)&lt;br /&gt;
* Which members of the allocation should be able to &#039;&#039;&#039;request access&#039;&#039;&#039; for new users? (Full Name and UCalgary Email address for active ARC users) &lt;br /&gt;
* What is the &#039;&#039;&#039;faculty&#039;&#039;&#039; of the owner or owners? &lt;br /&gt;
* Please provide a short description of the lab.&lt;br /&gt;
* Please provide a brief &#039;&#039;&#039;numerical estimate&#039;&#039;&#039; of the required storage space based on the &#039;&#039;&#039;projects&#039;&#039;&#039; that will use the allocation and their storage &#039;&#039;&#039;requirements&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example 1&#039;&#039;&#039;: &amp;quot;We will be processing a 1TB dataset by performing 100 experimental runs. &lt;br /&gt;
Each experiment will be processed to produce a 6GB output, giving 600GB of output data in total. &lt;br /&gt;
We will also need 400GB of additional space for post-processing and data management. &lt;br /&gt;
Thus, we would like to request &#039;&#039;&#039;2TB&#039;&#039;&#039; of shared space in total.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example 2&#039;&#039;&#039;: &amp;quot;Three members of our research group need additional shared space on ARC for their independent projects.&lt;br /&gt;
Project 1 starts with 100GB of initial data and is expected to generate 800GB of output results. &lt;br /&gt;
Project 2 uses simulations with no input data but is expected to generate 2TB of simulated data for further processing.&lt;br /&gt;
The processing will require 200GB of additional space.&lt;br /&gt;
Project 3 will work on a 1TB dataset and is expected to generate about 1TB of output data. &lt;br /&gt;
These projects, therefore, will require 5.1TB of storage. &lt;br /&gt;
For convenience of data manipulation and management, we would also like an additional 400GB of storage space.&lt;br /&gt;
Therefore, we would like to request &#039;&#039;&#039;5.5TB&#039;&#039;&#039; of shared storage space in total.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Work and Bulk storage can be considerably larger than the home directory allocations. However, there are limits on what RCS can provide as ARC storage provides high-speed access and is expensive to purchase. Typically, &#039;&#039;&#039;any request over 10TB&#039;&#039;&#039; will require some discussion. Work and Bulk allocations differ in a few ways that influence how they are used. Work storage is faster to access as part of computational jobs on ARC although the impact is small for jobs that don&#039;t involve enormous numbers of reads. Bulk storage is designed to be a target for instrument data (which is typically processed in a way that reads data a small number of times per job) and is capable of mounting instruments elsewhere on campus using SMB. A number of questions come up frequently about Work and Bulk storage and these are addressed in an [[Group Storage Allocation FAQ | FAQ]].&lt;br /&gt;
&lt;br /&gt;
===== How to add a group member to the access list (&amp;lt;code&amp;gt;/work&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/bulk&amp;lt;/code&amp;gt;)? =====&lt;br /&gt;
&lt;br /&gt;
Any group member who wants to use the shared storage should send an email to support@hpc.ucalgary.ca asking to be added to the access group, and CC the PI/data owner. &#039;&#039;&#039;This confirms that the PI approves the group member&#039;s request for access to the shared storage.&#039;&#039;&#039; Please note that the access permissions inside the directory are expected to be managed by the data owners.&lt;br /&gt;
&lt;br /&gt;
= Archive Storage =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Administration]]&lt;br /&gt;
{{Navbox Administration}}&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=ARC_Cluster_Guide&amp;diff=2641</id>
		<title>ARC Cluster Guide</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=ARC_Cluster_Guide&amp;diff=2641"/>
		<updated>2023-09-14T23:13:01Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* ARC Cluster Storage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ARC Cluster Status}}&lt;br /&gt;
&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=[[Support|Need Help or have other ARC Related Questions?]]&lt;br /&gt;
|message=For all general RCS related issues, questions, or comments, please contact us at support@hpc.ucalgary.ca.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
This guide gives an overview of the Advanced Research Computing (ARC) cluster at the University of Calgary and is intended to be read by new account holders getting started on ARC. This guide covers topics such as the hardware and performance characteristics, available software, usage policies, and how to log in and run jobs. ARC can be used with data that a Researcher has classified as Lv1 and Lv2 as described in the UCalgary [https://www.ucalgary.ca/legal-services/sites/default/files/teams/1/Standards-Legal-Information-Security-Classification-Standard.pdf Information Security Classification Standard].&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
ARC is a high-performance computing (HPC) cluster available for research projects based at the University of Calgary. The cluster comprises hundreds of servers interconnected by a high-bandwidth network. Special resources within the cluster include large-memory nodes and nodes with GPUs. You may learn more about ARC&#039;s hardware in the [[ARC Cluster Guide#Hardware|hardware section below]]. ARC can be accessed through a [[Linux Introduction|command line interface]] or via a web interface called Open OnDemand.&lt;br /&gt;
&lt;br /&gt;
This cluster can be used for running large numbers (hundreds) of concurrent serial (one core) jobs, OpenMP or other thread-based jobs, shared-memory parallel code using up to 40 or 80 threads per job (depending on the partition), distributed-memory (MPI-based) parallel code using up to hundreds of cores, or jobs that take advantage of Graphics Processing Units (GPUs).&lt;br /&gt;
&lt;br /&gt;
Historically, ARC was assembled primarily from older, disparate Linux-based clusters that were formerly offered to researchers from across Canada, such as Breezy, Lattice, and Parallel.  In addition, a large-memory compute node (Bigbyte) was salvaged from the now-retired local Storm cluster. In January 2019, a major addition of modern hardware was purchased for ARC. In 2020, compute clusters from CHGI were migrated into ARC.&lt;br /&gt;
&lt;br /&gt;
=== How to Get Started ===&lt;br /&gt;
If you have a project you think would be appropriate for ARC, please email support@hpc.ucalgary.ca and mention the intended research and software you plan to use. You must have a University of Calgary IT account in order to use ARC.&lt;br /&gt;
* If you do not have a University of Calgary IT account or email address, please register for one at https://itregport.ucalgary.ca/.&lt;br /&gt;
* If you are external to the University, such as a collaborator on a research project at the University of Calgary, please contact us and mention the project leader you are collaborating with.&lt;br /&gt;
&lt;br /&gt;
Once your access to ARC has been granted, you will be able to immediately make use of the cluster using your University of Calgary IT account by following the [[ARC_Cluster_Guide#Using_ARC|usage guide outlined below]].&lt;br /&gt;
&lt;br /&gt;
== Using ARC ==&lt;br /&gt;
&lt;br /&gt;
{{Message Box&lt;br /&gt;
|icon=Security Icon.png&lt;br /&gt;
|title=Cybersecurity awareness at the U of C&lt;br /&gt;
|message=Please note that there are typically about 950 phishing attempts targeting University of Calgary accounts each month. This is just a reminder to be careful about computer security issues, both at home and at the University. Please visit https://it.ucalgary.ca/it-security for more information, tips on secure computing, and how to report suspected security problems.}}&lt;br /&gt;
&lt;br /&gt;
=== Logging in ===&lt;br /&gt;
To log in to ARC, connect using SSH to &amp;lt;code&amp;gt;arc.ucalgary.ca&amp;lt;/code&amp;gt; on port &amp;lt;code&amp;gt;22&amp;lt;/code&amp;gt;. Connections to ARC are accepted only from the University of Calgary network (on campus) or through the University of Calgary General VPN (off campus).&lt;br /&gt;
&lt;br /&gt;
See [[Connecting to RCS HPC Systems]] for more information.&lt;br /&gt;
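For example, from a terminal on the campus network or VPN (&#039;your_username&#039; is a placeholder for your UCalgary IT username):&lt;br /&gt;

```shell
# Connect to the ARC login node over SSH (default port 22)
ssh your_username@arc.ucalgary.ca
```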
=== How to interact with ARC ===&lt;br /&gt;
&lt;br /&gt;
The ARC cluster is a collection of compute nodes connected by a high-speed network. On ARC, computations are submitted as jobs. Once submitted, jobs are assigned to compute nodes by the job scheduler as resources become available.&lt;br /&gt;
&lt;br /&gt;
[[File:Cluster.png]]&lt;br /&gt;
&lt;br /&gt;
You can access ARC with your UCalgary IT user credentials. Once connected, you will be placed on the ARC login node, which is intended for basic tasks such as submitting jobs, monitoring job status, managing files, and editing text. The login node is a shared resource used by many users at the same time, so intensive tasks are not allowed there: they may prevent other users from connecting or submitting their computations. &lt;br /&gt;
         [tannistha.nandi@arc ~]$ &lt;br /&gt;
The job scheduling system on ARC is called Slurm.  On ARC, there are two Slurm commands that can allocate resources to a job: &#039;salloc&#039; and &#039;sbatch&#039;. They both accept the same set of command line options with respect to resource allocation. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;salloc&#039;&#039;&#039; launches an interactive session, typically for tasks under 5 hours. &lt;br /&gt;
Once an interactive job session is created, you can explore research datasets, start R or Python sessions to test your code, compile software applications, and so on.&lt;br /&gt;
&lt;br /&gt;
a. Example 1: The following command requests 1 CPU on 1 node for 1 task, along with 1 GB of RAM, for one hour. &lt;br /&gt;
          [tannistha.nandi@arc ~]$ salloc --mem=1G -c 1 -N 1 -n 1  -t 01:00:00&lt;br /&gt;
          salloc: Granted job allocation 6758015&lt;br /&gt;
          salloc: Waiting for resource configuration&lt;br /&gt;
          salloc: Nodes fc4 are ready for job&lt;br /&gt;
          [tannistha.nandi@fc4 ~]$ &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
b. Example 2: The following command requests 1 GPU on 1 node in the gpu-v100 partition, along with 1 GB of RAM, for 1 hour. Generic resource scheduling (--gres) is used to request GPU resources.&lt;br /&gt;
         [tannistha.nandi@arc ~]$ salloc --mem=1G -t 01:00:00 -p gpu-v100 --gres=gpu:1&lt;br /&gt;
         salloc: Granted job allocation 6760460&lt;br /&gt;
         salloc: Waiting for resource configuration&lt;br /&gt;
         salloc: Nodes fg3 are ready for job&lt;br /&gt;
         [tannistha.nandi@fg3 ~]$&lt;br /&gt;
&lt;br /&gt;
Once you finish your work, type &#039;exit&#039; at the command prompt to end the interactive session.&lt;br /&gt;
         [tannistha.nandi@fg3 ~]$ exit&lt;br /&gt;
         [tannistha.nandi@fg3 ~]$ salloc: Relinquishing job allocation 6760460&lt;br /&gt;
This ensures that the resources allocated to your job are released and become available to other users.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;sbatch&#039;&#039;&#039; submits computations as batch jobs to run on the cluster. You can submit a job script such as job-script.slurm via &#039;sbatch&#039; for execution.&lt;br /&gt;
         [tannistha.nandi@arc ~]$ sbatch job-script.slurm&lt;br /&gt;
When resources become available, they are allocated to the job. Batch jobs are suited for tasks that run for long periods of time without any user supervision. When the job script terminates, the allocation is released. &lt;br /&gt;
Please review the section on how to prepare job scripts for more information.&lt;br /&gt;
&lt;br /&gt;
=== Prepare job scripts  ===&lt;br /&gt;
Job scripts are text files saved with an extension &#039;.slurm&#039;, for example, &#039;job-script.slurm&#039;. &lt;br /&gt;
A job script looks something like this:&lt;br /&gt;
    #!/bin/bash&lt;br /&gt;
    ####### Reserve computing resources #############&lt;br /&gt;
    #SBATCH --nodes=1&lt;br /&gt;
    #SBATCH --ntasks=1&lt;br /&gt;
    #SBATCH --cpus-per-task=1&lt;br /&gt;
    #SBATCH --time=01:00:00&lt;br /&gt;
    #SBATCH --mem=1G&lt;br /&gt;
    #SBATCH --partition=cpu2019&lt;br /&gt;
    ####### Set environment variables ###############&lt;br /&gt;
    module load python/anaconda3-2018.12&lt;br /&gt;
    ####### Run your script #########################&lt;br /&gt;
    python myscript.py&lt;br /&gt;
&lt;br /&gt;
The first line contains the text &amp;quot;#!/bin/bash&amp;quot; so that the file is interpreted as a bash script.&lt;br /&gt;
&lt;br /&gt;
It is followed by lines that start with &#039;#SBATCH&#039;, which communicate resource requests to Slurm. You may add as many #SBATCH directives as needed to reserve computing resources for your task. The above example requests one CPU on a single node for 1 task, along with 1GB of RAM, for one hour on the cpu2019 partition.&lt;br /&gt;
&lt;br /&gt;
Next, set up the environment, either by loading modules centrally installed on ARC or by exporting the path to software in your home directory. The above example loads an available Python module.&lt;br /&gt;
&lt;br /&gt;
Finally, include the Linux command to execute the local script.&lt;br /&gt;
&lt;br /&gt;
Note that failing to specify part of a resource allocation request (most notably &#039;&#039;&#039;time&#039;&#039;&#039; and &#039;&#039;&#039;memory&#039;&#039;&#039;) will result in bad resource requests, as the defaults are not appropriate for most cases. Please refer to the section &#039;Running non-interactive jobs&#039; for more examples.&lt;br /&gt;
&lt;br /&gt;
== Hardware ==&lt;br /&gt;
Since the ARC cluster is a conglomeration of many different compute clusters, the hardware within ARC can vary widely in terms of performance and capabilities.  To mitigate compatibility issues, we group similar hardware into its own Slurm partition to ensure your workload runs as consistently as possible within a single partition. Please carefully review the hardware specs for each of the partitions below to avoid any surprises.&lt;br /&gt;
&lt;br /&gt;
=== Partition Hardware Specs ===&lt;br /&gt;
When submitting jobs to ARC, you may specify a partition that your job will run on.  Please choose a partition that is most appropriate for your work.&lt;br /&gt;
&lt;br /&gt;
* See also [[How to find available partitions on ARC]].&lt;br /&gt;
&lt;br /&gt;
A few things to keep in mind when choosing a partition:&lt;br /&gt;
* Specific workloads requiring special Intel Instruction Set Extensions may only work on newer Intel CPUs. &lt;br /&gt;
* If working with multi-node parallel processing, ensure your software and libraries support the partition&#039;s interconnect networking.&lt;br /&gt;
* While older partitions may be slower, they may be less busy and have little to no wait times.&lt;br /&gt;
&lt;br /&gt;
If you are unsure which partition to use or need assistance on selecting an appropriate partition, please see [[#Selecting_a_Partition|the Selecting a Partition Section]] below. &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Partition&lt;br /&gt;
! Description&lt;br /&gt;
! Nodes&lt;br /&gt;
! CPU Cores, Model, and Year&lt;br /&gt;
! Memory&lt;br /&gt;
! GPU&lt;br /&gt;
! Network&lt;br /&gt;
|-&lt;br /&gt;
| -&lt;br /&gt;
| ARC Login Node&lt;br /&gt;
| 1&lt;br /&gt;
| 16 cores, 2x Intel(R) Xeon(R) CPU E5620  @ 2.40GHz (Westmere, 2010)&lt;br /&gt;
| 48 GB&lt;br /&gt;
| N/A&lt;br /&gt;
| 40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| gpu-v100&lt;br /&gt;
| GPU Partition&lt;br /&gt;
| 13&lt;br /&gt;
| 80 cores, 2x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (Skylake, 2019)&lt;br /&gt;
| 754 GB&lt;br /&gt;
| 2x Tesla V100-PCIE-16GB&lt;br /&gt;
| 100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|gpu-a100&lt;br /&gt;
|GPU Partition&lt;br /&gt;
|5&lt;br /&gt;
|40 cores, 1x Intel(R) Xeon(R) Silver 4316 CPU @ 2.30GHz (Ice Lake, 2021)&lt;br /&gt;
|512 GB&lt;br /&gt;
|2x GA100 A100 PCIe 80GB&lt;br /&gt;
|100 Gbit/s Mellanox Infiniband&lt;br /&gt;
|-&lt;br /&gt;
|cpu2022&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|52&lt;br /&gt;
|52 cores, 2x Intel(R) Xeon(R) Gold 5320 CPU @ 2.20GHz (Ice Lake)&lt;br /&gt;
|256 GB&lt;br /&gt;
|N/A&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| cpu2021&lt;br /&gt;
| General Purpose Compute&lt;br /&gt;
| 48&lt;br /&gt;
| 48 cores, 2x Intel(R) Xeon(R) Gold 6240R CPU @ 2.40GHz (Cascade Lake, 2021)&lt;br /&gt;
| 185 GB&lt;br /&gt;
| N/A &lt;br /&gt;
| 100 Gbit/s Mellanox Infiniband&lt;br /&gt;
|-&lt;br /&gt;
| cpu2019&lt;br /&gt;
| General Purpose Compute&lt;br /&gt;
| 14&lt;br /&gt;
| 40 cores, 2x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (Skylake, 2019)&lt;br /&gt;
| 190 GB&lt;br /&gt;
| N/A &lt;br /&gt;
| 100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
| apophis&lt;br /&gt;
| General Purpose Compute&lt;br /&gt;
| 21&lt;br /&gt;
| 40 cores, 2x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (Skylake, 2019)&lt;br /&gt;
| 190 GB&lt;br /&gt;
| N/A &lt;br /&gt;
| 100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
| razi&lt;br /&gt;
| General Purpose Compute&lt;br /&gt;
| 41&lt;br /&gt;
| 40 cores, 2x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (Skylake, 2019)&lt;br /&gt;
| 190 GB&lt;br /&gt;
| N/A &lt;br /&gt;
| 100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
| bigmem&lt;br /&gt;
| Big Memory Nodes&lt;br /&gt;
| 2&lt;br /&gt;
| 80 cores, 4x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (Skylake, 2019)&lt;br /&gt;
| 3022 GB&lt;br /&gt;
| N/A &lt;br /&gt;
| 100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
| pawson&lt;br /&gt;
| General Purpose Compute&lt;br /&gt;
| 13&lt;br /&gt;
| 40 cores, 2x Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz (Skylake, 2019)&lt;br /&gt;
| 190 GB&lt;br /&gt;
| N/A&lt;br /&gt;
| 100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|cpu2017&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|14&lt;br /&gt;
|56 cores, 2x Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz (Broadwell, 2016)&lt;br /&gt;
|256 GB&lt;br /&gt;
|N/A&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| theia&lt;br /&gt;
| Former Theia cluster&lt;br /&gt;
| 20&lt;br /&gt;
| 56 cores, 2x Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz (Broadwell, 2016)&lt;br /&gt;
| 188 GB&lt;br /&gt;
| N/A &lt;br /&gt;
| 40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| cpu2013&lt;br /&gt;
| Former hyperion cluster&lt;br /&gt;
| 12&lt;br /&gt;
| 32 cores, 2x Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz (Sandy Bridge, 2012)&lt;br /&gt;
| 126 GB&lt;br /&gt;
| N/A&lt;br /&gt;
| 40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| lattice&lt;br /&gt;
| Former Lattice cluster&lt;br /&gt;
| 307&lt;br /&gt;
| 8 cores, 2x Intel(R) Xeon(R) CPU L5520  @ 2.27GHz (Nehalem, 2009)&lt;br /&gt;
| 12 GB&lt;br /&gt;
| N/A&lt;br /&gt;
| 40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| single&lt;br /&gt;
| Former Lattice cluster&lt;br /&gt;
| 168&lt;br /&gt;
| 8 cores, 2x Intel(R) Xeon(R) CPU L5520  @ 2.27GHz (Nehalem, 2009)&lt;br /&gt;
| 12 GB&lt;br /&gt;
| N/A&lt;br /&gt;
| 40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| parallel&lt;br /&gt;
| Former Parallel Cluster&lt;br /&gt;
| 576&lt;br /&gt;
| 12 cores, 2x Intel(R) Xeon(R) CPU E5649  @ 2.53GHz (Westmere, 2011)&lt;br /&gt;
| 24 GB&lt;br /&gt;
| N/A&lt;br /&gt;
| 40 Gbit/s InfiniBand&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===ARC Cluster Storage===&lt;br /&gt;
Usage of ARC cluster storage is outlined by our [[ARC Storage Terms of Use]] page.&lt;br /&gt;
&lt;br /&gt;
{{Warning Box&lt;br /&gt;
| title=Data Storage&lt;br /&gt;
| message=ARC storage is not suitable for long-term or archival storage.  It is not backed-up and does not have sufficient redundancy to be used as a primary storage system.  It is not guaranteed to be available for the time periods that are typical of archiving.&lt;br /&gt;
&lt;br /&gt;
Please ensure that the only data you keep on ARC is used for active computations.&lt;br /&gt;
&lt;br /&gt;
For information on available campus storage options, please see [[Storage Options]].&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message Box&lt;br /&gt;
| title=No Backup Policy!&lt;br /&gt;
| message=You are responsible for your own backups.  Many researchers will have accounts with Compute Canada and may choose to back up their data there (the Project file system accessible through the Cedar cluster would often be used). &lt;br /&gt;
&lt;br /&gt;
Please contact us at support@hpc.ucalgary.ca if you want more information about this option.&lt;br /&gt;
&lt;br /&gt;
You can also back up data to your UofC OneDrive for business allocation see: https://rcs.ucalgary.ca/How_to_transfer_data#rclone:_rsync_for_cloud_storage This allocation starts at 5TB. Contact the support center for questions regarding OneDrive for Business.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
The ARC cluster has around 2 petabytes of shared disk storage available across the entire cluster, as well as temporary storage local to each of the compute nodes. Please refer to the individual sections below for capacity limitations and usage policies. &lt;br /&gt;
&lt;br /&gt;
Use the &amp;lt;code&amp;gt;arc.quota&amp;lt;/code&amp;gt; command on ARC to determine the available space on your various volumes and home directory.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Description&lt;br /&gt;
!Capacity&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;&lt;br /&gt;
|User home directories&lt;br /&gt;
|500 GB (per user)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;/work&amp;lt;/code&amp;gt;&lt;br /&gt;
|Research project storage&lt;br /&gt;
|Up to hundreds of TB&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt;&lt;br /&gt;
|Scratch space for temporary files&lt;br /&gt;
|Up to 15 TB&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt;&lt;br /&gt;
|Temporary space local to each compute node&lt;br /&gt;
|Dependent on available storage on nodes. Verify with &amp;lt;code&amp;gt;df -h&amp;lt;/code&amp;gt;.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;/dev/shm&amp;lt;/code&amp;gt;&lt;br /&gt;
|Small temporary in-memory disk space local to each compute node&lt;br /&gt;
|Dependent on memory size set in your Slurm job.&lt;br /&gt;
|}&lt;br /&gt;
====&amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;: Home file system====&lt;br /&gt;
Each user has a directory under /home, which is the default working directory when logging in to ARC. Each home directory has a per-user quota of 500 GB. This limit is fixed and cannot be increased. Researchers requiring storage beyond what is available in their home directory may use &amp;lt;code&amp;gt;/work&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note on file sharing: due to security concerns, permissions set using &amp;lt;code&amp;gt;chmod&amp;lt;/code&amp;gt; on your home directory to allow other users to read or write to it will be automatically reverted by an automated system process unless an explicit exception is made.  If you need to share files with other researchers on the ARC cluster, please write to support@hpc.ucalgary.ca to ask for such an exception.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt;: Scratch file system for large job-oriented storage====&lt;br /&gt;
Associated with each job, under the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; directory, a subdirectory is created that can be referenced in job scripts as &amp;lt;code&amp;gt;/scratch/${SLURM_JOB_ID}&amp;lt;/code&amp;gt;. You can use that directory for temporary files needed during the course of a job. Up to 15 TB of storage may be used, per user (total for all your jobs) in the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; file system. &lt;br /&gt;
&lt;br /&gt;
Data in &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; associated with a given job will be deleted automatically, without exception, five days after the job finishes.&lt;br /&gt;
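A job script might use this per-job directory to stage intermediate files, sketched below (the input/output file names and the program are hypothetical):&lt;br /&gt;

```shell
#!/bin/bash
#SBATCH --time=01:00:00
#SBATCH --mem=1G
# Slurm sets SLURM_JOB_ID inside the job; the matching scratch directory is created per job
SCRATCH=/scratch/${SLURM_JOB_ID}
cp input.dat "$SCRATCH"/
myprogram "$SCRATCH"/input.dat "$SCRATCH"/output.dat
# Copy results back before the job ends; scratch is purged 5 days after the job finishes
cp "$SCRATCH"/output.dat "$SLURM_SUBMIT_DIR"/
```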
&lt;br /&gt;
====&amp;lt;code&amp;gt;/work&amp;lt;/code&amp;gt;: Work file system for larger projects====&lt;br /&gt;
If you need more space than provided in &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt; and the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; job-oriented space is not appropriate for your case, please write to support@hpc.ucalgary.ca with an explanation, including an indication of how much storage you expect to need and for how long.  If approved, you will be assigned a directory under &amp;lt;code&amp;gt;/work&amp;lt;/code&amp;gt; with an appropriately large quota.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt;,&amp;lt;code&amp;gt;/var/tmp&amp;lt;/code&amp;gt;: Temporary files====&lt;br /&gt;
You may use &amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;/var/tmp&amp;lt;/code&amp;gt; for storing temporary files generated by your job. &amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt; is stored on a disk local to the compute node and is not shared across the cluster. The files stored here will be removed immediately after your job terminates.&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;code&amp;gt;/dev/shm&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;/run/user/$uid&amp;lt;/code&amp;gt;: In-memory temporary files ====&lt;br /&gt;
&amp;lt;code&amp;gt;/dev/shm&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/run/user/$UID&amp;lt;/code&amp;gt; are writable locations for temporary files backed by virtual memory. They can be used when faster I/O is required and are ideal for workloads that perform many small reads and writes to share data between processes, or that need a fast cache. The amount of data you can write here depends on the amount of free memory available to your job. The files stored at these locations will be removed immediately after your job terminates.&lt;br /&gt;
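As a minimal sketch (the file name is hypothetical), a file under /dev/shm is written and read like any other path, but is backed by memory:&lt;br /&gt;

```shell
# Write, read, and clean up a small file in memory-backed /dev/shm
TMPFILE="/dev/shm/example_$$.txt"
echo "fast in-memory scratch" > "$TMPFILE"
CONTENT=$(cat "$TMPFILE")
rm -f "$TMPFILE"
echo "$CONTENT"
```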
&lt;br /&gt;
== Software ==&lt;br /&gt;
All ARC nodes run the latest version of Rocky Linux 8 with the same set of base software packages. To maintain the stability and consistency of all nodes, any additional dependencies that your software requires must be installed under your account.  For your convenience, we have packaged commonly used software packages and dependencies as modules available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt;. If your software package is not available as a module, you may also try Anaconda which allows users to manage and install custom packages in an isolated environment.&lt;br /&gt;
&lt;br /&gt;
For a list of available packages that have been made available, please see [[ARC Software pages]]. &lt;br /&gt;
&lt;br /&gt;
Please contact us at support@hpc.ucalgary.ca if you need additional software installed.&lt;br /&gt;
&lt;br /&gt;
==== Modules ====&lt;br /&gt;
The setup of the environment for using some of the installed software is through the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command. An overview of [https://www.westgrid.ca//support/modules modules on WestGrid (external link)] is largely applicable to ARC.&lt;br /&gt;
&lt;br /&gt;
Software packages bundled as a module will be available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt; and can be listed with the &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module avail&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To enable Python, load the Python module by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module load python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To unload the Python module, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module remove python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To see currently loaded modules, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module list&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By default, no modules are loaded on ARC. If you wish to use a specific module, such as the Intel compilers or the Open MPI parallel programming packages, you must load the appropriate module.&lt;br /&gt;
&lt;br /&gt;
== Job submission ==&lt;br /&gt;
&lt;br /&gt;
=== Interactive Jobs ===&lt;br /&gt;
The ARC login node may be used for such tasks as editing files, compiling programs and running short tests while developing programs. We suggest CPU intensive workloads on the login node be restricted to under 15 minutes as per [[General Cluster Guidelines and Policies|our cluster guidelines]]. For interactive workloads exceeding 15 minutes, use the &#039;&#039;&#039;[[Running_jobs#Interactive_jobs|salloc command]]&#039;&#039;&#039; to allocate an interactive session on a compute node.&lt;br /&gt;
&lt;br /&gt;
The default salloc allocation is 1 CPU and 1 GB of memory. Adjust this by specifying &amp;lt;code&amp;gt;-n CPU#&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--mem Megabytes&amp;lt;/code&amp;gt;. You may request up to 5 hours of CPU time for interactive jobs.&lt;br /&gt;
 salloc --time=5:00:00 --partition=cpu2019&lt;br /&gt;
&lt;br /&gt;
Always use salloc or srun to start an interactive job. Do not SSH directly to a compute node as SSH sessions will be refused without an active job running.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- This information doesn&#039;t seem that useful or relevant to running interactive jobs. Move to getting started section?&lt;br /&gt;
ARC uses the Linux operating system. The program that responds to your typed commands and allows you to run other programs is called the Linux shell. There are several different shells available, but, by default you will use one called bash. It is useful to have some knowledge of the shell and a variety of other command-line programs that you can use to manipulate files. If you are new to Linux systems, we recommend that you work through one of the many online tutorials that are available, such as the [http://www.ee.surrey.ac.uk/Teaching/Unix/index.html UNIX Tutorial for Beginners (external link)] provided by the University of Surrey. The tutorial covers such fundamental topics, among others, as creating, renaming and deleting files and directories, how to produce a listing of your files and how to tell how much disk space you are using.  For a more comprehensive introduction to Linux, see [http://linuxcommand.sourceforge.net/tlcl.php The Linux Command Line (external link)].&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Running non-interactive jobs (batch processing) ===&lt;br /&gt;
Production runs and longer test runs should be submitted as (non-interactive) batch jobs, in which the commands to be executed are listed in a script (text file). Batch job scripts are submitted using the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command, part of the Slurm job management and scheduling software. #SBATCH directive lines at the beginning of the script specify the resources needed for the job (cores, memory, run time limit, and any specialized hardware).&lt;br /&gt;
&lt;br /&gt;
Most of the information on the [https://docs.computecanada.ca/wiki/Running_jobs Running Jobs (external link)] page on the Compute Canada web site is also relevant for submitting and managing batch jobs and reserving processors for interactive work on ARC.  One major difference between running jobs on the ARC and Compute Canada clusters is in selecting the type of hardware that should be used for a job. On ARC, you choose the hardware to use primarily by specifying a partition, as described below.&lt;br /&gt;
&lt;br /&gt;
=== Selecting a Partition ===&lt;br /&gt;
There are several aspects to consider when selecting a partition, including:&lt;br /&gt;
* Resource requirements in terms of memory and CPU cores&lt;br /&gt;
* Hardware specific requirements, such as GPU or CPU Instruction Set Extensions&lt;br /&gt;
* Partition resource limits and potential wait time&lt;br /&gt;
* Software support for parallel processing using Message Passing Interface (MPI), OpenMP, etc.&lt;br /&gt;
** E.g., since MPI can distribute memory across multiple nodes, per-node memory requirements may be lower, whereas OpenMP or single-process code restricted to one node requires a node with enough memory for the whole job.&lt;br /&gt;
** Note: MPI code running on hardware with Omni-Path networking should be compiled with Omni-Path networking support. This is provided by loading the &amp;lt;code&amp;gt;openmpi/2.1.3-opa&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;openmpi/3.1.2-opa&amp;lt;/code&amp;gt; modules prior to compiling.&lt;br /&gt;
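As a sketch, building an MPI program for an Omni-Path partition might look like the following (the source and binary names are hypothetical):&lt;br /&gt;

```shell
# Load the Omni-Path-enabled Open MPI module before compiling
module load openmpi/3.1.2-opa
mpicc -O2 -o my_mpi_app my_mpi_app.c
# Run inside a job allocation (e.g. under sbatch or salloc):
srun ./my_mpi_app
```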
&lt;br /&gt;
Since resources that are requested are reserved for your job, please request only as much CPU and memory as your job requires to avoid reducing the cluster efficiency.  If you are unsure which partition to use or the specific resource requests that are appropriate for your jobs, please contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] and we would be happy to work with you.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%;&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Description&lt;br /&gt;
!Cores/node&lt;br /&gt;
!Memory Request Limit&lt;br /&gt;
!Time Limit&lt;br /&gt;
!GPU&lt;br /&gt;
!Networking&lt;br /&gt;
|-&lt;br /&gt;
|cpu2021&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|48&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|cpu2019&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|40&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|Big Memory Compute&lt;br /&gt;
|80&lt;br /&gt;
|3,000,000 MB&lt;br /&gt;
|24 hours ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|gpu-v100&lt;br /&gt;
|GPU Compute&lt;br /&gt;
|80&lt;br /&gt;
|753,000 MB&lt;br /&gt;
|24 hours ‡&lt;br /&gt;
|2&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|apophis&amp;amp;dagger;&lt;br /&gt;
|Private Research Partition&lt;br /&gt;
|40&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|razi&amp;amp;dagger;&lt;br /&gt;
|Private Research Partition&lt;br /&gt;
|40&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|pawson&amp;amp;dagger;&lt;br /&gt;
|Private Research Partition&lt;br /&gt;
|40&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|sherlock&amp;amp;dagger;&lt;br /&gt;
|Private Research Partition&lt;br /&gt;
|7&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|theia&amp;amp;dagger;&lt;br /&gt;
|Private Research Partition&lt;br /&gt;
|28&lt;br /&gt;
|188,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|synergy&amp;amp;dagger;&lt;br /&gt;
|Private Research Partition&lt;br /&gt;
|14&lt;br /&gt;
|245,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu2013&lt;br /&gt;
|Legacy General Purpose Compute&lt;br /&gt;
|16&lt;br /&gt;
|120,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|lattice&lt;br /&gt;
|Legacy General Purpose Compute&lt;br /&gt;
|8&lt;br /&gt;
|12,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|parallel&lt;br /&gt;
|Legacy General Purpose Compute&lt;br /&gt;
|12&lt;br /&gt;
|23,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|single&lt;br /&gt;
|Legacy Single-Node Job Compute&lt;br /&gt;
|8&lt;br /&gt;
|12,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu2021-bf24&lt;br /&gt;
|Back-fill Compute (2021-era hardware, 24h)&lt;br /&gt;
|48&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|24 hours ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|cpu2019-bf05&lt;br /&gt;
|Back-fill Compute (2019-era hardware, 5h)&lt;br /&gt;
|40&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|5 hours ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|cpu2017-bf05&lt;br /&gt;
|Back-fill Compute (2017-era hardware, 5h)&lt;br /&gt;
|14&lt;br /&gt;
|245,000 MB&lt;br /&gt;
|5 hours ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|+ style=&amp;quot;caption-side: bottom; text-align: left; font-weight: normal;&amp;quot; | &amp;amp;dagger; These partitions contain hardware contributed to ARC by particular researchers and should normally be used only by members of their research groups. However, the contributors have generously allowed their compute nodes to be shared with others outside their research groups for short jobs: special &#039;back-fill&#039; (-bf) partitions are available to all ARC users for jobs that fit within the back-fill time limits shown above.&amp;lt;br /&amp;gt;‡ As time limits may be changed by administrators to adjust to maintenance schedules or system load, the values given in the tables are not definitive.  See the Time limits section below for commands you can use on ARC itself to determine current limits.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==== Backfill partitions ====&lt;br /&gt;
Backfill partitions can be used by all users on ARC for short-term jobs. The hardware backing these partitions is generously contributed by researchers.  We recommend including the backfill partitions for short-term jobs, as this may reduce your job&#039;s wait time and increase overall cluster throughput.&lt;br /&gt;
&lt;br /&gt;
Previously, each contributing research group had their own backfill partition. Since June 2021, we have merged:&lt;br /&gt;
&lt;br /&gt;
* apophis-bf, pawson-bf, and razi-bf into cpu2019-bf05 &lt;br /&gt;
* theia-bf and synergy-bf into cpu2017-bf05&lt;br /&gt;
&lt;br /&gt;
The naming scheme of the backfill partitions is the CPU generation year, followed by -bf and the time limit in hours.  For example, cpu2017-bf05 represents a backfill partition containing processors from 2017 with a time limit of 5 hours.&lt;br /&gt;
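&lt;br /&gt;
If your job can finish within a backfill partition&#039;s time limit, you can list the backfill partition alongside the corresponding regular partition so that the scheduler uses whichever has free resources first. A minimal sketch (partition names are taken from the table above):&lt;br /&gt;
 #SBATCH --time=04:00:00&lt;br /&gt;
 #SBATCH --partition=cpu2019,cpu2019-bf05&lt;br /&gt;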
&lt;br /&gt;
==== Hardware resource and job policy limits ====&lt;br /&gt;
In addition to the hardware limitations, please be aware that there may also be policy limits imposed on your account for each partition. These limits restrict the number of cores, nodes, or GPUs that can be used at any given time. Since the limits are applied on a partition-by-partition basis, using resources in one partition should not affect the available resources you can use in another partition.&lt;br /&gt;
&lt;br /&gt;
These limits can be listed by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sacctmgr show qos format=Name,MaxWall,MaxTRESPU%20,MaxSubmitJobs&lt;br /&gt;
      Name     MaxWall            MaxTRESPU MaxSubmit&lt;br /&gt;
---------- ----------- -------------------- ---------&lt;br /&gt;
    normal  7-00:00:00                           2000&lt;br /&gt;
    breezy  3-00:00:00              cpu=384      2000&lt;br /&gt;
       gpu  7-00:00:00                          13000&lt;br /&gt;
   cpu2019  7-00:00:00              cpu=240      2000&lt;br /&gt;
  gpu-v100  1-00:00:00    cpu=80,gres/gpu=4      2000&lt;br /&gt;
    single  7-00:00:00      cpu=408,node=75      2000&lt;br /&gt;
      razi  7-00:00:00                           2000&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Specifying a partition in a job ====&lt;br /&gt;
Once you have decided which partition best suits your computation, you can select one or more partitions on a job-by-job basis by including the &amp;lt;code&amp;gt;partition&amp;lt;/code&amp;gt; keyword in an &amp;lt;code&amp;gt;SBATCH&amp;lt;/code&amp;gt; directive in your batch job. Multiple partitions should be comma-separated.  If you omit the partition specification, the system will try to assign your job to appropriate hardware based on other aspects of your request. &lt;br /&gt;
&lt;br /&gt;
In some cases, you really should specify the partition explicitly.  For example, if you are running a single-node job with thread-based parallel processing that requests 8 cores, you could use:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --mem=0              ❶&lt;br /&gt;
#SBATCH --nodes=1            ❷&lt;br /&gt;
#SBATCH --ntasks=1           ❸&lt;br /&gt;
#SBATCH --cpus-per-task=8    ❹&lt;br /&gt;
#SBATCH --partition=single,lattice   ❺ &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A few things to mention in this example:&lt;br /&gt;
# &amp;lt;code&amp;gt;--mem=0&amp;lt;/code&amp;gt; allocates all available memory on the compute node for the job. This effectively allocates the entire node for your job.&lt;br /&gt;
# &amp;lt;code&amp;gt;--nodes=1&amp;lt;/code&amp;gt; allocates 1 node for the job&lt;br /&gt;
# &amp;lt;code&amp;gt;--ntasks=1&amp;lt;/code&amp;gt; your job has a single task&lt;br /&gt;
# &amp;lt;code&amp;gt;--cpus-per-task=8&amp;lt;/code&amp;gt; asks for 8 CPUs per task. This job in total will request 8 * 1, or 8 CPUs.&lt;br /&gt;
# &amp;lt;code&amp;gt;--partition=single,lattice&amp;lt;/code&amp;gt; specifies that this job can run on either single or lattice.&lt;br /&gt;
Suppose that your job requires at most 8 CPU cores and 10 GB of memory. The above Slurm request would be valid and optimal, since your job fits neatly in a single node on the single and lattice partitions.  However, if you failed to specify the partition, Slurm may try to schedule your job to a partition with larger nodes, such as cpu2019, where each node has 40 cores and 190 GB of memory. If your job is scheduled on such a node, it will effectively waste 32 cores and 180 GB of memory, because &amp;lt;code&amp;gt;--mem=0&amp;lt;/code&amp;gt; not only requests all 190 GB on that node, but also prevents other jobs from being scheduled on the same node.&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t specify a partition, please give greater thought to the memory specification to make sure that the scheduler will not assign your job more resources than are needed.&lt;br /&gt;
&lt;br /&gt;
Parameters such as &#039;&#039;&#039;--ntasks-per-node&#039;&#039;&#039;, &#039;&#039;&#039;--cpus-per-task&#039;&#039;&#039;, &#039;&#039;&#039;--mem&#039;&#039;&#039; and &#039;&#039;&#039;--mem-per-cpu&#039;&#039;&#039; must also be adjusted according to the capabilities of the hardware. The product of --ntasks-per-node and --cpus-per-task should be less than or equal to the number given in the &amp;quot;Cores/node&amp;quot; column.  The &#039;&#039;&#039;--mem&#039;&#039;&#039; parameter (or the product of &#039;&#039;&#039;--mem-per-cpu&#039;&#039;&#039; and &#039;&#039;&#039;--cpus-per-task&#039;&#039;&#039;) should be less than the &amp;quot;Memory Request Limit&amp;quot; shown. If using whole nodes, you can specify &#039;&#039;&#039;--mem=0&#039;&#039;&#039; to request the maximum amount of memory per node.&lt;br /&gt;
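&lt;br /&gt;
As a quick sanity check, you can do this arithmetic in the shell before submitting. The limits below are illustrative values copied from the table above for the single partition; substitute the values for your target partition:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Illustrative limits for the single partition, taken from the table above&lt;br /&gt;
cores_per_node=8&lt;br /&gt;
mem_limit_mb=12000&lt;br /&gt;
# The intended request&lt;br /&gt;
ntasks_per_node=1&lt;br /&gt;
cpus_per_task=8&lt;br /&gt;
mem_mb=10000&lt;br /&gt;
# The job fits on one node only if both the core and memory checks pass&lt;br /&gt;
if [ $((ntasks_per_node * cpus_per_task)) -le $cores_per_node -a $mem_mb -le $mem_limit_mb ]; then&lt;br /&gt;
    echo request fits on one node&lt;br /&gt;
else&lt;br /&gt;
    echo request exceeds node limits&lt;br /&gt;
fi&lt;br /&gt;
# prints: request fits on one node&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;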
&lt;br /&gt;
===== Examples =====&lt;br /&gt;
Here are some examples of specifying the various partitions.&lt;br /&gt;
&lt;br /&gt;
As mentioned in the [[#Hardware|Hardware]] section above, the ARC cluster was expanded in January 2019.  To select the 40-core general purpose nodes specify:&lt;br /&gt;
&lt;br /&gt;
 #SBATCH --partition=cpu2019&lt;br /&gt;
&lt;br /&gt;
To run on the Tesla V100 GPU-enabled nodes, use the &#039;&#039;&#039;gpu-v100&#039;&#039;&#039; partition.  You will also need to include an SBATCH directive in the form &#039;&#039;&#039;--gres=gpu:n&#039;&#039;&#039; to specify the number of GPUs, n, that you need.  For example, if the software you are running can make use of both GPUs on a gpu-v100 partition compute node, use:&lt;br /&gt;
&lt;br /&gt;
 #SBATCH --partition=gpu-v100 --gres=gpu:2&lt;br /&gt;
&lt;br /&gt;
For very large memory jobs (more than 185,000 MB), specify the bigmem partition:&lt;br /&gt;
&lt;br /&gt;
 #SBATCH --partition=bigmem&lt;br /&gt;
&lt;br /&gt;
If the more modern computers are too busy or you have a job well-suited to run on the compute nodes described in the legacy hardware section above, choose the cpu2013, Lattice or Parallel compute nodes by specifying the corresponding partition keyword:&lt;br /&gt;
&lt;br /&gt;
 #SBATCH --partition=cpu2013&lt;br /&gt;
 #SBATCH --partition=lattice&lt;br /&gt;
 #SBATCH --partition=parallel&lt;br /&gt;
&lt;br /&gt;
There is an additional partition called &#039;&#039;&#039;single&#039;&#039;&#039; that provides nodes similar to the lattice partition, but is intended for single-node jobs. Select the single partition with&lt;br /&gt;
&lt;br /&gt;
 #SBATCH --partition=single&lt;br /&gt;
&lt;br /&gt;
=== Time limits ===&lt;br /&gt;
Use the &amp;lt;code&amp;gt;--time&amp;lt;/code&amp;gt; directive to tell the job scheduler the maximum time that your job might run.  For example:&lt;br /&gt;
 #SBATCH --time=hh:mm:ss&lt;br /&gt;
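&lt;br /&gt;
The time may also be given in the form days-hours:minutes:seconds. For example, to request 30 minutes, or 2 days:&lt;br /&gt;
 #SBATCH --time=00:30:00&lt;br /&gt;
 #SBATCH --time=2-00:00:00&lt;br /&gt;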
&lt;br /&gt;
You can use &amp;lt;code&amp;gt;scontrol show partitions&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; to see the current maximum time that a job can run.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; highlight=&amp;quot;6&amp;quot;&amp;gt;&lt;br /&gt;
$ scontrol show partitions&lt;br /&gt;
PartitionName=single                                                                 &lt;br /&gt;
   AllowGroups=ALL AllowAccounts=ALL AllowQos=ALL                                    &lt;br /&gt;
   AllocNodes=ALL Default=NO QoS=single                                              &lt;br /&gt;
   DefaultTime=NONE DisableRootJobs=NO ExclusiveUser=NO GraceTime=0 Hidden=NO        &lt;br /&gt;
   MaxNodes=UNLIMITED MaxTime=7-00:00:00 MinNodes=1 LLN=NO MaxCPUsPerNode=UNLIMITED  &lt;br /&gt;
   Nodes=cn[001-168]                                                                 &lt;br /&gt;
   PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO        &lt;br /&gt;
   OverTimeLimit=NONE PreemptMode=OFF                                                &lt;br /&gt;
   State=UP TotalCPUs=1344 TotalNodes=168 SelectTypeParameters=NONE                  &lt;br /&gt;
   DefMemPerNode=UNLIMITED MaxMemPerNode=UNLIMITED                                   &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, with &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; under the &amp;lt;code&amp;gt;TIMELIMIT&amp;lt;/code&amp;gt; column:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sinfo                                                     &lt;br /&gt;
PARTITION  AVAIL  TIMELIMIT  NODES  STATE NODELIST               &lt;br /&gt;
single        up 7-00:00:00      1 drain* cn097                  &lt;br /&gt;
single        up 7-00:00:00      1  maint cn002                  &lt;br /&gt;
single        up 7-00:00:00      4 drain* cn[001,061,133,154]    &lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
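&lt;br /&gt;
To limit the output to particular partitions, &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; accepts a comma-separated partition list, for example:&lt;br /&gt;
 sinfo --partition=single,lattice&lt;br /&gt;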
&lt;br /&gt;
== Support ==&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=[[Support|Need Help or have other ARC Related Questions?]]&lt;br /&gt;
|message=For all general RCS related issues, questions, or comments, please contact us at support@hpc.ucalgary.ca.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t hesitate to [[Support|contact us]] directly by email if you need help using ARC or require guidance on migrating and running your workflows to ARC.&lt;br /&gt;
&lt;br /&gt;
[[Category:ARC]]&lt;br /&gt;
[[Category:Guides]]&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=ARC_Cluster_Guide&amp;diff=2640</id>
		<title>ARC Cluster Guide</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=ARC_Cluster_Guide&amp;diff=2640"/>
		<updated>2023-09-14T23:12:49Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* ARC Cluster Storage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ARC Cluster Status}}&lt;br /&gt;
&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=[[Support|Need Help or have other ARC Related Questions?]]&lt;br /&gt;
|message=For all general RCS related issues, questions, or comments, please contact us at support@hpc.ucalgary.ca.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
This guide gives an overview of the Advanced Research Computing (ARC) cluster at the University of Calgary and is intended to be read by new account holders getting started on ARC. This guide covers topics such as the hardware and performance characteristics, available software, usage policies, and how to log in and run jobs. ARC can be used with data that a researcher has classified as Lv1 or Lv2, as described in the UCalgary [https://www.ucalgary.ca/legal-services/sites/default/files/teams/1/Standards-Legal-Information-Security-Classification-Standard.pdf Information Security Classification Standard].&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
ARC is a high-performance computing (HPC) cluster that is available for research projects based at the University of Calgary. The cluster consists of hundreds of servers connected by a high-bandwidth interconnect. Special resources within the cluster include nodes with large amounts of memory and nodes with GPUs. You may learn more about ARC&#039;s hardware in the [[ARC Cluster Guide#Hardware|hardware section below]]. ARC can be accessed through a [[Linux Introduction|command line interface]] or via a web interface called Open OnDemand.&lt;br /&gt;
&lt;br /&gt;
This cluster can be used for running large numbers (hundreds) of concurrent serial (one core) jobs, OpenMP or other thread-based jobs, shared-memory parallel code using up to 40 or 80 threads per job (depending on the partition), distributed-memory (MPI-based) parallel code using up to hundreds of cores, or jobs that take advantage of Graphics Processing Units (GPUs).&lt;br /&gt;
&lt;br /&gt;
Historically, ARC was assembled primarily from older, disparate Linux-based clusters that were formerly offered to researchers from across Canada, such as Breezy, Lattice, and Parallel.  In addition, a large-memory compute node (Bigbyte) was salvaged from the now-retired local Storm cluster. In January 2019, a major addition of modern hardware was purchased for ARC. In 2020, compute clusters from CHGI were migrated into ARC.&lt;br /&gt;
&lt;br /&gt;
=== How to Get Started ===&lt;br /&gt;
If you have a project you think would be appropriate for ARC, please email support@hpc.ucalgary.ca and mention the intended research and software you plan to use. You must have a University of Calgary IT account in order to use ARC.&lt;br /&gt;
* For users who do not have a University of Calgary IT account or email address, please register for one at https://itregport.ucalgary.ca/.&lt;br /&gt;
* For users external to the University, such as collaborators on a research project at the University of Calgary, please contact us and mention the project leader you are collaborating with.&lt;br /&gt;
&lt;br /&gt;
Once your access to ARC has been granted, you will be able to immediately make use of the cluster using your University of Calgary IT account by following the [[ARC_Cluster_Guide#Using_ARC|usage guide outlined below]].&lt;br /&gt;
&lt;br /&gt;
== Using ARC ==&lt;br /&gt;
&lt;br /&gt;
{{Message Box&lt;br /&gt;
|icon=Security Icon.png&lt;br /&gt;
|title=Cybersecurity awareness at the U of C&lt;br /&gt;
|message=Please note that there are typically about 950 phishing attempts targeting University of Calgary accounts each month. This is just a reminder to be careful about computer security issues, both at home and at the University. Please visit https://it.ucalgary.ca/it-security for more information, tips on secure computing, and how to report suspected security problems.}}&lt;br /&gt;
&lt;br /&gt;
=== Logging in ===&lt;br /&gt;
To log in to ARC, connect using SSH to &amp;lt;code&amp;gt;arc.ucalgary.ca&amp;lt;/code&amp;gt; on port &amp;lt;code&amp;gt;22&amp;lt;/code&amp;gt;. Connections to ARC are accepted only from the University of Calgary network (on campus) or through the University of Calgary General VPN (off campus).&lt;br /&gt;
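&lt;br /&gt;
For example, from a terminal (replace &#039;username&#039; with your UCalgary IT account name):&lt;br /&gt;
 ssh username@arc.ucalgary.ca&lt;br /&gt;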
&lt;br /&gt;
See [[Connecting to RCS HPC Systems]] for more information.&lt;br /&gt;
=== How to interact with ARC ===&lt;br /&gt;
&lt;br /&gt;
The ARC cluster is a collection of many compute nodes connected by a high-speed network. On ARC, computations are submitted as jobs. Once submitted, jobs are assigned to compute nodes by the job scheduler as resources become available.&lt;br /&gt;
&lt;br /&gt;
[[File:Cluster.png]]&lt;br /&gt;
&lt;br /&gt;
You can access ARC with your UCalgary IT user credentials. Once connected, you will be placed on the ARC login node, which is intended for basic tasks such as submitting jobs, monitoring job status, managing files, and editing text. The login node is a shared resource to which multiple users are connected at the same time, so intensive tasks are not allowed on it, as they may prevent other users from connecting or submitting their computations. &lt;br /&gt;
         [tannistha.nandi@arc ~]$ &lt;br /&gt;
The job scheduling system on ARC is called SLURM.  On ARC, there are two SLURM commands that can allocate resources to a job: ‘salloc’ and ‘sbatch’. Both accept the same set of command-line options with respect to resource allocation. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;‘salloc’&#039;&#039;&#039; launches an interactive session, typically for tasks under 5 hours. &lt;br /&gt;
Once an interactive job session is created, you can explore research datasets, start R or Python sessions to test your code, compile software applications, etc.&lt;br /&gt;
&lt;br /&gt;
a. Example 1: The following command requests 1 CPU on 1 node for 1 task, along with 1 GB of RAM, for an hour. &lt;br /&gt;
          [tannistha.nandi@arc ~]$ salloc --mem=1G -c 1 -N 1 -n 1  -t 01:00:00&lt;br /&gt;
          salloc: Granted job allocation 6758015&lt;br /&gt;
          salloc: Waiting for resource configuration&lt;br /&gt;
          salloc: Nodes fc4 are ready for job&lt;br /&gt;
          [tannistha.nandi@fc4 ~]$ &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
b. Example 2:  The following command requests 1 GPU on 1 node in the gpu-v100 partition, along with 1 GB of RAM, for 1 hour.  Generic resource scheduling (--gres) is used to request GPU resources.&lt;br /&gt;
         [tannistha.nandi@arc ~]$ salloc --mem=1G -t 01:00:00 -p gpu-v100 --gres=gpu:1&lt;br /&gt;
         salloc: Granted job allocation 6760460&lt;br /&gt;
         salloc: Waiting for resource configuration&lt;br /&gt;
         salloc: Nodes fg3 are ready for job&lt;br /&gt;
         [tannistha.nandi@fg3 ~]$&lt;br /&gt;
&lt;br /&gt;
Once you finish your work, type &#039;exit&#039; at the command prompt to end the interactive session:&lt;br /&gt;
         [tannistha.nandi@fg3 ~]$ exit&lt;br /&gt;
         [tannistha.nandi@fg3 ~]$ salloc: Relinquishing job allocation 6760460&lt;br /&gt;
This ensures that the allocated resources are released from your job and made available to other users.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;‘sbatch’&#039;&#039;&#039; submits computations as jobs to run on the cluster. For example, you can submit a job script named job-script.slurm via &#039;sbatch&#039; for execution.   &lt;br /&gt;
         [tannistha.nandi@arc ~]$ sbatch job-script.slurm&lt;br /&gt;
When resources become available, they are allocated to the job. Batch jobs are suited for tasks that run for long periods of time without any user supervision. When the job script terminates, the allocation is released. &lt;br /&gt;
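After submitting, you can check the status of your queued and running jobs with the standard SLURM command:&lt;br /&gt;
 squeue -u $USER&lt;br /&gt;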
Please review the section on how to prepare job scripts for more information.&lt;br /&gt;
&lt;br /&gt;
=== Prepare job scripts  ===&lt;br /&gt;
Job scripts are text files saved with an extension &#039;.slurm&#039;, for example, &#039;job-script.slurm&#039;. &lt;br /&gt;
A job script looks something like this:&lt;br /&gt;
    &#039;&#039;#!/bin/bash&#039;&#039;&lt;br /&gt;
    ####### Reserve computing resources #############&lt;br /&gt;
    #SBATCH --nodes=1&lt;br /&gt;
    #SBATCH --ntasks=1&lt;br /&gt;
    #SBATCH --cpus-per-task=1&lt;br /&gt;
    #SBATCH --time=01:00:00&lt;br /&gt;
    #SBATCH --mem=1G&lt;br /&gt;
    #SBATCH --partition=cpu2019&amp;lt;br&amp;gt;&lt;br /&gt;
    ####### Set environment variables ###############&lt;br /&gt;
    module load python/anaconda3-2018.12&amp;lt;br&amp;gt;&lt;br /&gt;
    ####### Run your script #########################&lt;br /&gt;
    python myscript.py&lt;br /&gt;
&lt;br /&gt;
The first line contains the text &amp;quot;#!/bin/bash&amp;quot; so that the file is interpreted as a bash script.&lt;br /&gt;
&lt;br /&gt;
It is followed by lines that start with &#039;#SBATCH&#039;, which communicate with &#039;SLURM&#039;. You may add as many #SBATCH directives as needed to reserve computing resources for your task. The above example requests one CPU on a single node for 1 task, along with 1 GB of RAM, for an hour on the cpu2019 partition.&lt;br /&gt;
&lt;br /&gt;
Next, you have to set up environment variables, either by loading modules centrally installed on ARC or by exporting the path to software in your home directory. The above example loads an available Python module.&lt;br /&gt;
&lt;br /&gt;
Finally, include the Linux command to execute the local script.&lt;br /&gt;
&lt;br /&gt;
Note that failing to specify part of a resource allocation request (most notably &#039;&#039;&#039;time&#039;&#039;&#039; and &#039;&#039;&#039;memory&#039;&#039;&#039;) will result in bad resource requests, as the defaults are not appropriate for most cases. Please refer to the section &#039;Running non-interactive jobs&#039; for more examples.&lt;br /&gt;
&lt;br /&gt;
== Hardware ==&lt;br /&gt;
Since the ARC cluster is a conglomeration of many different compute clusters, the hardware within ARC can vary widely in terms of performance and capabilities.  To mitigate any compatibility issues with different hardware, we combine similar hardware into their own Slurm partition to ensure your workload runs as consistently as possible within one partition. Please carefully review the hardware specs for each of the partitions below to avoid any surprises.&lt;br /&gt;
&lt;br /&gt;
=== Partition Hardware Specs ===&lt;br /&gt;
When submitting jobs to ARC, you may specify a partition that your job will run on.  Please choose a partition that is most appropriate for your work.&lt;br /&gt;
&lt;br /&gt;
* See also [[How to find available partitions on ARC]].&lt;br /&gt;
&lt;br /&gt;
A few things to keep in mind when choosing a partition:&lt;br /&gt;
* Specific workloads requiring special Intel Instruction Set Extensions may only work on newer Intel CPUs. &lt;br /&gt;
* If working with multi-node parallel processing, ensure your software and libraries support the partition&#039;s interconnect networking.&lt;br /&gt;
* While older partitions may be slower, they may be less busy and have little to no wait times.&lt;br /&gt;
&lt;br /&gt;
If you are unsure which partition to use or need assistance on selecting an appropriate partition, please see [[#Selecting_a_Partition|the Selecting a Partition Section]] below. &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Partition&lt;br /&gt;
! Description&lt;br /&gt;
! Nodes&lt;br /&gt;
! CPU Cores, Model, and Year&lt;br /&gt;
! Memory&lt;br /&gt;
! GPU&lt;br /&gt;
! Network&lt;br /&gt;
|-&lt;br /&gt;
| -&lt;br /&gt;
| ARC Login Node&lt;br /&gt;
| 1&lt;br /&gt;
| 16 cores, 2x Intel(R) Xeon(R) CPU E5620  @ 2.40GHz (Westmere, 2010)&lt;br /&gt;
| 48 GB&lt;br /&gt;
| N/A&lt;br /&gt;
| 40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| gpu-v100&lt;br /&gt;
| GPU Partition&lt;br /&gt;
| 13&lt;br /&gt;
| 80 cores, 2x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (Skylake, 2019)&lt;br /&gt;
| 754 GB&lt;br /&gt;
| 2x Tesla V100-PCIE-16GB&lt;br /&gt;
| 100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|gpu-a100&lt;br /&gt;
|GPU Partition&lt;br /&gt;
|5&lt;br /&gt;
|40 cores, 1x Intel(R) Xeon(R) Silver 4316 CPU @ 2.30GHz (Ice Lake, 2021)&lt;br /&gt;
|512 GB&lt;br /&gt;
|2x GA100 A100 PCIe 80GB&lt;br /&gt;
|100 Gbit/s Mellanox InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu2022&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|52&lt;br /&gt;
|52 cores, 2x Intel(R) Xeon(R) Gold 5320 CPU @ 2.20GHz (Ice Lake)&lt;br /&gt;
|256 GB&lt;br /&gt;
|N/A&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| cpu2021&lt;br /&gt;
| General Purpose Compute&lt;br /&gt;
| 48&lt;br /&gt;
| 48 cores, 2x Intel(R) Xeon(R) Gold 6240R CPU @ 2.40GHz (Cascade Lake, 2021)&lt;br /&gt;
| 185 GB&lt;br /&gt;
| N/A &lt;br /&gt;
| 100 Gbit/s Mellanox InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| cpu2019&lt;br /&gt;
| General Purpose Compute&lt;br /&gt;
| 14&lt;br /&gt;
| 40 cores, 2x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (Skylake, 2019)&lt;br /&gt;
| 190 GB&lt;br /&gt;
| N/A &lt;br /&gt;
| 100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
| apophis&lt;br /&gt;
| General Purpose Compute&lt;br /&gt;
| 21&lt;br /&gt;
| 40 cores, 2x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (Skylake, 2019)&lt;br /&gt;
| 190 GB&lt;br /&gt;
| N/A &lt;br /&gt;
| 100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
| razi&lt;br /&gt;
| General Purpose Compute&lt;br /&gt;
| 41&lt;br /&gt;
| 40 cores, 2x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (Skylake, 2019)&lt;br /&gt;
| 190 GB&lt;br /&gt;
| N/A &lt;br /&gt;
| 100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
| bigmem&lt;br /&gt;
| Big Memory Nodes&lt;br /&gt;
| 2&lt;br /&gt;
| 80 cores, 4x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (Skylake, 2019)&lt;br /&gt;
| 3022 GB&lt;br /&gt;
| N/A &lt;br /&gt;
| 100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
| pawson&lt;br /&gt;
| General Purpose Compute&lt;br /&gt;
| 13&lt;br /&gt;
| 40 cores, 2x Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz (Skylake, 2019)&lt;br /&gt;
| 190 GB&lt;br /&gt;
| N/A&lt;br /&gt;
| 100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|cpu2017&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|14&lt;br /&gt;
|56 cores, 2x Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz (Broadwell, 2016)&lt;br /&gt;
|256 GB&lt;br /&gt;
|N/A&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| theia&lt;br /&gt;
| Former Theia cluster&lt;br /&gt;
| 20&lt;br /&gt;
| 56 cores, 2x Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz (Broadwell, 2016)&lt;br /&gt;
| 188 GB&lt;br /&gt;
| N/A &lt;br /&gt;
| 40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| cpu2013&lt;br /&gt;
| Former hyperion cluster&lt;br /&gt;
| 12&lt;br /&gt;
| 32 cores, 2x Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz (Sandy Bridge, 2012)&lt;br /&gt;
| 126 GB&lt;br /&gt;
| N/A&lt;br /&gt;
| 40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| lattice&lt;br /&gt;
| Former Lattice cluster&lt;br /&gt;
| 307&lt;br /&gt;
| 8 cores, 2x Intel(R) Xeon(R) CPU L5520  @ 2.27GHz (Nehalem, 2009)&lt;br /&gt;
| 12 GB&lt;br /&gt;
| N/A&lt;br /&gt;
| 40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| single&lt;br /&gt;
| Former Lattice cluster&lt;br /&gt;
| 168&lt;br /&gt;
| 8 cores, 2x Intel(R) Xeon(R) CPU L5520  @ 2.27GHz (Nehalem, 2009)&lt;br /&gt;
| 12 GB&lt;br /&gt;
| N/A&lt;br /&gt;
| 40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| parallel&lt;br /&gt;
| Former Parallel Cluster&lt;br /&gt;
| 576&lt;br /&gt;
| 12 cores, 2x Intel(R) Xeon(R) CPU E5649  @ 2.53GHz (Westmere, 2011)&lt;br /&gt;
| 24 GB&lt;br /&gt;
| N/A&lt;br /&gt;
| 40 Gbit/s InfiniBand&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===ARC Cluster Storage===&lt;br /&gt;
Usage of ARC cluster storage is outlined by our [[ARC Storage Terms of Use]] page.&lt;br /&gt;
&lt;br /&gt;
{{Warning Box&lt;br /&gt;
| title=Data Storage&lt;br /&gt;
| message=ARC storage is not suitable for long-term or archival storage.  It is not backed-up and does not have sufficient redundancy to be used as a primary storage system.  It is not guaranteed to be available for the time periods that are typical of archiving.&lt;br /&gt;
&lt;br /&gt;
Please ensure that the only data you keep on ARC is used for active computations.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For information on available campus storage options, please see [[Storage Options]].&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message Box&lt;br /&gt;
| title=No Backup Policy!&lt;br /&gt;
| message=You are responsible for your own backups.  Many researchers will have accounts with Compute Canada and may choose to back up their data there (the Project file system accessible through the Cedar cluster would often be used). &lt;br /&gt;
&lt;br /&gt;
Please contact us at support@hpc.ucalgary.ca if you want more information about this option.&lt;br /&gt;
&lt;br /&gt;
You can also back up data to your UofC OneDrive for Business allocation; see: https://rcs.ucalgary.ca/How_to_transfer_data#rclone:_rsync_for_cloud_storage This allocation starts at 5 TB. Contact the support center for questions regarding OneDrive for Business.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
The ARC cluster has around 2 petabytes of shared disk storage available across the entire cluster, as well as temporary storage local to each of the compute nodes. Please refer to the individual sections below for capacity limitations and usage policies. &lt;br /&gt;
&lt;br /&gt;
Use the &amp;lt;code&amp;gt;arc.quota&amp;lt;/code&amp;gt; command on ARC to determine the available space on your various volumes and home directory.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!File system&lt;br /&gt;
!Description&lt;br /&gt;
!Capacity&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;&lt;br /&gt;
|User home directories&lt;br /&gt;
|500 GB (per user)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;/work&amp;lt;/code&amp;gt;&lt;br /&gt;
|Research project storage&lt;br /&gt;
|Up to 100&#039;s of TB&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt;&lt;br /&gt;
|Scratch space for temporary files&lt;br /&gt;
|Up to 15 TB&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt;&lt;br /&gt;
|Temporary space local to the compute cluster&lt;br /&gt;
|Dependent on available storage on nodes. Verify with &amp;lt;code&amp;gt;df -h&amp;lt;/code&amp;gt;.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;/dev/shm&amp;lt;/code&amp;gt;&lt;br /&gt;
|Small temporary in-memory disk space local to the compute cluster&lt;br /&gt;
|Dependent on memory size set in your Slurm job.&lt;br /&gt;
|}&lt;br /&gt;
====&amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;: Home file system====&lt;br /&gt;
Each user has a directory under &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;, which is the default working directory when logging in to ARC. Each home directory has a per-user quota of 500 GB. This limit is fixed and cannot be increased. Researchers requiring storage beyond what is available in their home directory may use &amp;lt;code&amp;gt;/work&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note on file sharing: due to security concerns, permissions set using &amp;lt;code&amp;gt;chmod&amp;lt;/code&amp;gt; to allow other users to read or write to your home directory will be automatically reverted by an automated system process unless an explicit exception is made.  If you need to share files with other researchers on the ARC cluster, please write to support@hpc.ucalgary.ca to ask for such an exception.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt;: Scratch file system for large job-oriented storage====&lt;br /&gt;
Associated with each job, under the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; directory, a subdirectory is created that can be referenced in job scripts as &amp;lt;code&amp;gt;/scratch/${SLURM_JOB_ID}&amp;lt;/code&amp;gt;. You can use that directory for temporary files needed during the course of a job. Up to 15 TB of storage may be used, per user (total for all your jobs) in the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; file system. &lt;br /&gt;
&lt;br /&gt;
Data in &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; associated with a given job will be deleted automatically, without exception, five days after the job finishes.&lt;br /&gt;
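A minimal sketch of how the per-job scratch directory might be used from a batch script (the program name, output file, and resource values are placeholders, not ARC-specific recommendations):&lt;br /&gt;

```shell
# Write a sketch of a batch script that stages temporary files in the
# per-job scratch directory. "my_program" and "results.dat" are placeholders.
cat > scratch-example.slurm <<'EOF'
#!/bin/bash
#SBATCH --time=02:00:00
#SBATCH --mem=4000
# The per-job scratch directory is created automatically by ARC:
SCRATCH_DIR=/scratch/${SLURM_JOB_ID}
./my_program --tmpdir "${SCRATCH_DIR}"
# Copy anything you need to keep before the automatic 5-day cleanup.
cp "${SCRATCH_DIR}/results.dat" "${HOME}/"
EOF
```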
&lt;br /&gt;
====&amp;lt;code&amp;gt;/work&amp;lt;/code&amp;gt;: Work file system for larger projects====&lt;br /&gt;
If you need more space than provided in &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt; and the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; job-oriented space is not appropriate for your case, please write to support@hpc.ucalgary.ca with an explanation, including an indication of how much storage you expect to need and for how long.  If approved, you will be assigned a directory under &amp;lt;code&amp;gt;/work&amp;lt;/code&amp;gt; with an appropriately large quota.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt;,&amp;lt;code&amp;gt;/var/tmp&amp;lt;/code&amp;gt;: Temporary files====&lt;br /&gt;
You may use &amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;/var/tmp&amp;lt;/code&amp;gt; for storing temporary files generated by your job. &amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt; is stored on a disk local to each compute node and is not shared across the cluster. Files stored here are removed immediately after your job terminates.&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;code&amp;gt;/dev/shm&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;/run/user/$UID&amp;lt;/code&amp;gt;: In-memory temporary files ====&lt;br /&gt;
&amp;lt;code&amp;gt;/dev/shm&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/run/user/$UID&amp;lt;/code&amp;gt; are writable locations for temporary files backed by virtual memory and can be used when faster I/O is required. They are ideal for workloads that perform many small reads and writes to share data between processes, or that need a fast cache. The amount of data you can write here depends on the amount of free memory available to your job. Files stored at these locations are removed immediately after your job terminates.&lt;br /&gt;
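A minimal sketch of using an in-memory directory as a fast cache (the directory name is an example):&lt;br /&gt;

```shell
# Create a private in-memory directory, use it for fast temporary I/O,
# then clean up. Files here consume the memory allocated to your Slurm job.
SHM_DIR=$(mktemp -d /dev/shm/myjob.XXXXXX)
echo "intermediate result" > "${SHM_DIR}/cache.txt"
cat "${SHM_DIR}/cache.txt"
rm -rf "${SHM_DIR}"
```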
&lt;br /&gt;
== Software ==&lt;br /&gt;
All ARC nodes run the latest version of Rocky Linux 8 with the same set of base software packages. To maintain the stability and consistency of all nodes, any additional dependencies that your software requires must be installed under your account.  For your convenience, we have packaged commonly used software packages and dependencies as modules available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt;. If your software package is not available as a module, you may also try Anaconda which allows users to manage and install custom packages in an isolated environment.&lt;br /&gt;
&lt;br /&gt;
For a list of available software packages, please see [[ARC Software pages]]. &lt;br /&gt;
&lt;br /&gt;
Please contact us at support@hpc.ucalgary.ca if you need additional software installed.&lt;br /&gt;
&lt;br /&gt;
==== Modules ====&lt;br /&gt;
The environment for using much of the installed software is set up through the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command. The overview of [https://www.westgrid.ca//support/modules modules on WestGrid (external link)] is largely applicable to ARC.&lt;br /&gt;
&lt;br /&gt;
Software packages bundled as a module will be available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt; and can be listed with the &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module avail&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To enable Python, load the Python module by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module load python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To unload the Python module, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module remove python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To see currently loaded modules, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module list&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By default, no modules are loaded on ARC. If you wish to use a specific module, such as the Intel compilers or the Open MPI parallel programming packages, you must load the appropriate module.&lt;br /&gt;
&lt;br /&gt;
== Job submission ==&lt;br /&gt;
&lt;br /&gt;
=== Interactive Jobs ===&lt;br /&gt;
The ARC login node may be used for tasks such as editing files, compiling programs and running short tests while developing programs. We suggest that CPU-intensive workloads on the login node be restricted to under 15 minutes, as per [[General Cluster Guidelines and Policies|our cluster guidelines]]. For interactive workloads exceeding 15 minutes, use the &#039;&#039;&#039;[[Running_jobs#Interactive_jobs|salloc command]]&#039;&#039;&#039; to allocate an interactive session on a compute node.&lt;br /&gt;
&lt;br /&gt;
The default salloc allocation is 1 CPU and 1 GB of memory. Adjust this by specifying &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; (number of CPUs) and &amp;lt;code&amp;gt;--mem&amp;lt;/code&amp;gt; (memory in megabytes). You may request up to 5 hours of run time for interactive jobs.&lt;br /&gt;
 salloc --time=5:00:00 --partition=cpu2019&lt;br /&gt;
&lt;br /&gt;
Always use salloc or srun to start an interactive job. Do not SSH directly to a compute node as SSH sessions will be refused without an active job running.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- This information doesn&#039;t seem that useful or relevant to running interactive jobs. Move to getting started section?&lt;br /&gt;
ARC uses the Linux operating system. The program that responds to your typed commands and allows you to run other programs is called the Linux shell. There are several different shells available, but, by default you will use one called bash. It is useful to have some knowledge of the shell and a variety of other command-line programs that you can use to manipulate files. If you are new to Linux systems, we recommend that you work through one of the many online tutorials that are available, such as the [http://www.ee.surrey.ac.uk/Teaching/Unix/index.html UNIX Tutorial for Beginners (external link)] provided by the University of Surrey. The tutorial covers such fundamental topics, among others, as creating, renaming and deleting files and directories, how to produce a listing of your files and how to tell how much disk space you are using.  For a more comprehensive introduction to Linux, see [http://linuxcommand.sourceforge.net/tlcl.php The Linux Command Line (external link)].&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Running non-interactive jobs (batch processing) ===&lt;br /&gt;
Production runs and longer test runs should be submitted as (non-interactive) batch jobs, in which the commands to be executed are listed in a script (text file). Batch job scripts are submitted using the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command, part of the Slurm job management and scheduling software. &amp;lt;code&amp;gt;#SBATCH&amp;lt;/code&amp;gt; directive lines at the beginning of the script specify the resources needed for the job (cores, memory, run time limit and any specialized hardware needed).&lt;br /&gt;
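The structure described above can be sketched as a minimal batch script (the program name and resource values are placeholders; adjust them to your job):&lt;br /&gt;

```shell
# Write a minimal sketch of a Slurm batch script. "my_program" and
# "input.dat" are placeholders for your actual computation.
cat > job-script.slurm <<'EOF'
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=16000
#SBATCH --time=04:00:00
./my_program input.dat
EOF
# Submit with: sbatch job-script.slurm
```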
&lt;br /&gt;
Most of the information on the [https://docs.computecanada.ca/wiki/Running_jobs Running Jobs (external link)] page on the Compute Canada web site is also relevant for submitting and managing batch jobs and reserving processors for interactive work on ARC.  One major difference between running jobs on the ARC and Compute Canada clusters is in selecting the type of hardware that should be used for a job. On ARC, you choose the hardware to use primarily by specifying a partition, as described below.&lt;br /&gt;
&lt;br /&gt;
=== Selecting a Partition ===&lt;br /&gt;
There are several aspects to consider when selecting a partition, including:&lt;br /&gt;
* Resource requirements in terms of memory and CPU cores&lt;br /&gt;
* Hardware specific requirements, such as GPU or CPU Instruction Set Extensions&lt;br /&gt;
* Partition resource limits and potential wait time&lt;br /&gt;
* Software support for parallel processing using the Message Passing Interface (MPI), OpenMP, etc.&lt;br /&gt;
** E.g., MPI code can distribute its memory across multiple nodes, so per-node memory requirements may be lower, whereas OpenMP or single-process code restricted to one node may require a higher-memory node.&lt;br /&gt;
** Note: MPI code running on hardware with Omni-Path networking should be compiled with Omni-Path networking support. This is provided by loading the &amp;lt;code&amp;gt;openmpi/2.1.3-opa&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;openmpi/3.1.2-opa&amp;lt;/code&amp;gt; modules prior to compiling.&lt;br /&gt;
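The Omni-Path compilation note above might look like this in practice (the source and output file names are hypothetical):&lt;br /&gt;

```shell
# Load an Omni-Path-enabled Open MPI module before compiling MPI code.
module load openmpi/2.1.3-opa
# Compile a hypothetical MPI source file with the module's MPI compiler wrapper.
mpicc -o my_mpi_program my_mpi_program.c
```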
&lt;br /&gt;
Since resources that are requested are reserved for your job, please request only as much CPU and memory as your job requires to avoid reducing the cluster efficiency.  If you are unsure which partition to use or the specific resource requests that are appropriate for your jobs, please contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] and we would be happy to work with you.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%;&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Description&lt;br /&gt;
!Cores/node&lt;br /&gt;
!Memory Request Limit&lt;br /&gt;
!Time Limit&lt;br /&gt;
!GPU&lt;br /&gt;
!Networking&lt;br /&gt;
|-&lt;br /&gt;
|cpu2021&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|48&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|cpu2019&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|40&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|Big Memory Compute&lt;br /&gt;
|80&lt;br /&gt;
|3,000,000 MB&lt;br /&gt;
|24 hours ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|gpu-v100&lt;br /&gt;
|GPU Compute&lt;br /&gt;
|80&lt;br /&gt;
|753,000 MB&lt;br /&gt;
|24 hours ‡&lt;br /&gt;
|2&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|apophis&amp;amp;dagger;&lt;br /&gt;
|Private Research Partition&lt;br /&gt;
|40&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|razi&amp;amp;dagger;&lt;br /&gt;
|Private Research Partition&lt;br /&gt;
|40&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|pawson&amp;amp;dagger;&lt;br /&gt;
|Private Research Partition&lt;br /&gt;
|40&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|sherlock&amp;amp;dagger;&lt;br /&gt;
|Private Research Partition&lt;br /&gt;
|7&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|theia&amp;amp;dagger;&lt;br /&gt;
|Private Research Partition&lt;br /&gt;
|28&lt;br /&gt;
|188,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|synergy&amp;amp;dagger;&lt;br /&gt;
|Private Research Partition&lt;br /&gt;
|14&lt;br /&gt;
|245,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu2013&lt;br /&gt;
|Legacy General Purpose Compute&lt;br /&gt;
|16&lt;br /&gt;
|120,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|lattice&lt;br /&gt;
|Legacy General Purpose Compute&lt;br /&gt;
|8&lt;br /&gt;
|12,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|parallel&lt;br /&gt;
|Legacy General Purpose Compute&lt;br /&gt;
|12&lt;br /&gt;
|23,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|single&lt;br /&gt;
|Legacy Single-Node Job Compute&lt;br /&gt;
|8&lt;br /&gt;
|12,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu2021-bf24&lt;br /&gt;
|Back-fill Compute (2021-era hardware, 24h)&lt;br /&gt;
|48&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|24 hours ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|cpu2019-bf05&lt;br /&gt;
|Back-fill Compute (2019-era hardware, 5h)&lt;br /&gt;
|40&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|5 hours ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|cpu2017-bf05&lt;br /&gt;
|Back-fill Compute (2017-era hardware, 5h)&lt;br /&gt;
|14&lt;br /&gt;
|245,000 MB&lt;br /&gt;
|5 hours ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|+ style=&amp;quot;caption-side: bottom; text-align: left; font-weight: normal;&amp;quot; | &amp;amp;dagger; These partitions contain hardware contributed to ARC by particular researchers and should only be used by members of their research groups. However, they have generously allowed their compute nodes to be shared with others outside their research groups for short jobs.  A special &#039;back-fill&#039; or -bf partition is available for use by all ARC users for jobs shorter than 5 hours.&amp;lt;br /&amp;gt;‡ As time limits may be changed by administrators to adjust to maintenance schedules or system load, the values given in the tables are not definitive.  See the Time limits section below for commands you can use on ARC itself to determine current limits.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==== Backfill partitions ====&lt;br /&gt;
Backfill partitions can be used by all users on ARC for short-term jobs. The hardware backing these partitions is generously contributed by researchers.  We recommend including the backfill partitions for short-term jobs, as doing so may reduce your job&#039;s wait time and increase overall cluster throughput.&lt;br /&gt;
&lt;br /&gt;
Previously, each contributing research group had their own backfill partition. Since June 2021, we have merged:&lt;br /&gt;
&lt;br /&gt;
* apophis-bf, pawson-bf, and razi-bf into cpu2019-bf05 &lt;br /&gt;
* theia-bf and synergy-bf into cpu2017-bf05&lt;br /&gt;
&lt;br /&gt;
The naming scheme of the backfill partitions is the CPU generation year, followed by -bf and the time limit in hours.  For example, cpu2017-bf05 would represent a backfill partition containing processors from 2017 with a time limit of 5 hours.&lt;br /&gt;
&lt;br /&gt;
==== Hardware resource and job policy limits ====&lt;br /&gt;
In addition to the hardware limitations, please be aware that there may also be policy limits imposed on your account for each partition. These limits restrict the number of cores, nodes, or GPUs that can be used at any given time. Since the limits are applied on a partition-by-partition basis, using resources in one partition should not affect the available resources you can use in another partition.&lt;br /&gt;
&lt;br /&gt;
These limits can be listed by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sacctmgr show qos format=Name,MaxWall,MaxTRESPU%20,MaxSubmitJobs&lt;br /&gt;
      Name     MaxWall            MaxTRESPU MaxSubmit&lt;br /&gt;
---------- ----------- -------------------- ---------&lt;br /&gt;
    normal  7-00:00:00                           2000&lt;br /&gt;
    breezy  3-00:00:00              cpu=384      2000&lt;br /&gt;
       gpu  7-00:00:00                          13000&lt;br /&gt;
   cpu2019  7-00:00:00              cpu=240      2000&lt;br /&gt;
  gpu-v100  1-00:00:00    cpu=80,gres/gpu=4      2000&lt;br /&gt;
    single  7-00:00:00      cpu=408,node=75      2000&lt;br /&gt;
      razi  7-00:00:00                           2000&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Specifying a partition in a job ====&lt;br /&gt;
Once you have decided which partition best suits your computation, you can select one or more partitions on a job-by-job basis by including the &amp;lt;code&amp;gt;partition&amp;lt;/code&amp;gt; keyword in an &amp;lt;code&amp;gt;SBATCH&amp;lt;/code&amp;gt; directive in your batch job. Multiple partitions should be comma-separated.  If you omit the partition specification, the system will try to assign your job to appropriate hardware based on other aspects of your request. &lt;br /&gt;
&lt;br /&gt;
In some cases, you really should specify the partition explicitly.  For example, if you are running single-node jobs with thread-based parallel processing requesting 8 cores you could use:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --mem=0              ❶&lt;br /&gt;
#SBATCH --nodes=1            ❷&lt;br /&gt;
#SBATCH --ntasks=1           ❸&lt;br /&gt;
#SBATCH --cpus-per-task=8    ❹&lt;br /&gt;
#SBATCH --partition=single,lattice   ❺ &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A few things to mention in this example:&lt;br /&gt;
# &amp;lt;code&amp;gt;--mem=0&amp;lt;/code&amp;gt; allocates all available memory on the compute node for the job. This effectively allocates the entire node for your job.&lt;br /&gt;
# &amp;lt;code&amp;gt;--nodes=1&amp;lt;/code&amp;gt; allocates 1 node for the job&lt;br /&gt;
# &amp;lt;code&amp;gt;--ntasks=1&amp;lt;/code&amp;gt; your job has a single task&lt;br /&gt;
# &amp;lt;code&amp;gt;--cpus-per-task=8&amp;lt;/code&amp;gt; asks for 8 CPUs per task. This job in total will request 8 * 1, or 8 CPUs.&lt;br /&gt;
# &amp;lt;code&amp;gt;--partition=single,lattice&amp;lt;/code&amp;gt; specifies that this job can run on either single or lattice.&lt;br /&gt;
Suppose that your job requires at most 8 CPU cores and 10 GB of memory. The above Slurm request would be valid and optimal, since your job fits neatly on a single node in the single and lattice partitions.  However, if you fail to specify the partition, Slurm may schedule your job to a partition with larger nodes, such as cpu2019, where each node has 40 cores and 190 GB of memory. If your job is scheduled on such a node, it will effectively waste 32 cores and 180 GB of memory, because &amp;lt;code&amp;gt;--mem=0&amp;lt;/code&amp;gt; not only requests all 190 GB on that node but also prevents other jobs from being scheduled on the same node.&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t specify a partition, please give greater thought to the memory specification to make sure that the scheduler will not assign your job more resources than are needed.&lt;br /&gt;
&lt;br /&gt;
Parameters such as &#039;&#039;&#039;--ntasks-per-node&#039;&#039;&#039;, &#039;&#039;&#039;--cpus-per-task&#039;&#039;&#039;, &#039;&#039;&#039;--mem&#039;&#039;&#039; and &#039;&#039;&#039;--mem-per-cpu&#039;&#039;&#039; also have to be adjusted according to the capabilities of the hardware. The product of --ntasks-per-node and --cpus-per-task should be less than or equal to the number given in the &amp;quot;Cores/node&amp;quot; column.  The &#039;&#039;&#039;--mem&#039;&#039;&#039; parameter (or the product of &#039;&#039;&#039;--mem-per-cpu&#039;&#039;&#039; and &#039;&#039;&#039;--cpus-per-task&#039;&#039;&#039;) should be less than the &amp;quot;Memory Request Limit&amp;quot; shown. If using whole nodes, you can specify &#039;&#039;&#039;--mem=0&#039;&#039;&#039; to request the maximum amount of memory per node.&lt;br /&gt;
&lt;br /&gt;
===== Examples =====&lt;br /&gt;
Here are some examples of specifying the various partitions.&lt;br /&gt;
&lt;br /&gt;
As mentioned in the [[#Hardware|Hardware]] section above, the ARC cluster was expanded in January 2019.  To select the 40-core general purpose nodes specify:&lt;br /&gt;
&lt;br /&gt;
 #SBATCH --partition=cpu2019&lt;br /&gt;
&lt;br /&gt;
To run on the Tesla V100 GPU-enabled nodes, use the &#039;&#039;&#039;gpu-v100&#039;&#039;&#039; partition.  You will also need to include an SBATCH directive in the form &#039;&#039;&#039;--gres=gpu:n&#039;&#039;&#039; to specify the number of GPUs, n, that you need.  For example, if the software you are running can make use of both GPUs on a gpu-v100 partition compute node, use:&lt;br /&gt;
&lt;br /&gt;
 #SBATCH --partition=gpu-v100 --gres=gpu:2&lt;br /&gt;
&lt;br /&gt;
For very large memory jobs (more than 185,000 MB), specify the bigmem partition:&lt;br /&gt;
&lt;br /&gt;
 #SBATCH --partition=bigmem&lt;br /&gt;
&lt;br /&gt;
If the more modern computers are too busy or you have a job well-suited to run on the compute nodes described in the legacy hardware section above, choose the cpu2013, Lattice or Parallel compute nodes by specifying the corresponding partition keyword:&lt;br /&gt;
&lt;br /&gt;
 #SBATCH --partition=cpu2013&lt;br /&gt;
 #SBATCH --partition=lattice&lt;br /&gt;
 #SBATCH --partition=parallel&lt;br /&gt;
&lt;br /&gt;
There is an additional partition called &#039;&#039;&#039;single&#039;&#039;&#039; that provides nodes similar to the lattice partition but is intended for single-node jobs. Select the single partition with:&lt;br /&gt;
&lt;br /&gt;
 #SBATCH --partition=single&lt;br /&gt;
&lt;br /&gt;
=== Time limits ===&lt;br /&gt;
Use the &amp;lt;code&amp;gt;--time&amp;lt;/code&amp;gt; directive to tell the job scheduler the maximum time that your job might run.  For example:&lt;br /&gt;
 #SBATCH --time=hh:mm:ss&lt;br /&gt;
&lt;br /&gt;
You can use &amp;lt;code&amp;gt;scontrol show partitions&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; to see the current maximum time that a job can run.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; highlight=&amp;quot;6&amp;quot;&amp;gt;&lt;br /&gt;
$ scontrol show partitions&lt;br /&gt;
PartitionName=single                                                                 &lt;br /&gt;
   AllowGroups=ALL AllowAccounts=ALL AllowQos=ALL                                    &lt;br /&gt;
   AllocNodes=ALL Default=NO QoS=single                                              &lt;br /&gt;
   DefaultTime=NONE DisableRootJobs=NO ExclusiveUser=NO GraceTime=0 Hidden=NO        &lt;br /&gt;
   MaxNodes=UNLIMITED MaxTime=7-00:00:00 MinNodes=1 LLN=NO MaxCPUsPerNode=UNLIMITED  &lt;br /&gt;
   Nodes=cn[001-168]                                                                 &lt;br /&gt;
   PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO        &lt;br /&gt;
   OverTimeLimit=NONE PreemptMode=OFF                                                &lt;br /&gt;
   State=UP TotalCPUs=1344 TotalNodes=168 SelectTypeParameters=NONE                  &lt;br /&gt;
   DefMemPerNode=UNLIMITED MaxMemPerNode=UNLIMITED                                   &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, with &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; under the &amp;lt;code&amp;gt;TIMELIMIT&amp;lt;/code&amp;gt; column:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sinfo                                                     &lt;br /&gt;
PARTITION  AVAIL  TIMELIMIT  NODES  STATE NODELIST               &lt;br /&gt;
single        up 7-00:00:00      1 drain* cn097                  &lt;br /&gt;
single        up 7-00:00:00      1  maint cn002                  &lt;br /&gt;
single        up 7-00:00:00      4 drain* cn[001,061,133,154]    &lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Support ==&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=[[Support|Need Help or have other ARC Related Questions?]]&lt;br /&gt;
|message=For all general RCS related issues, questions, or comments, please contact us at support@hpc.ucalgary.ca.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t hesitate to [[Support|contact us]] directly by email if you need help using ARC or require guidance on migrating and running your workflows to ARC.&lt;br /&gt;
&lt;br /&gt;
[[Category:ARC]]&lt;br /&gt;
[[Category:Guides]]&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=ARC_Cluster_Guide&amp;diff=2639</id>
		<title>ARC Cluster Guide</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=ARC_Cluster_Guide&amp;diff=2639"/>
		<updated>2023-09-14T23:12:20Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* ARC Cluster Storage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ARC Cluster Status}}&lt;br /&gt;
&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=[[Support|Need Help or have other ARC Related Questions?]]&lt;br /&gt;
|message=For all general RCS related issues, questions, or comments, please contact us at support@hpc.ucalgary.ca.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
This guide gives an overview of the Advanced Research Computing (ARC) cluster at the University of Calgary and is intended for new account holders getting started on ARC. It covers topics such as hardware and performance characteristics, available software, usage policies, and how to log in and run jobs. ARC can be used with data that a researcher has classified as Level 1 or Level 2 as described in the UCalgary [https://www.ucalgary.ca/legal-services/sites/default/files/teams/1/Standards-Legal-Information-Security-Classification-Standard.pdf Information Security Classification Standard].&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
ARC is a high-performance computing (HPC) cluster available for research projects based at the University of Calgary. The cluster comprises hundreds of servers interconnected with a high-bandwidth interconnect. Special resources within the cluster include nodes with large amounts of memory and nodes with GPUs. You may learn more about ARC&#039;s hardware in the [[ARC Cluster Guide#Hardware|hardware section below]]. ARC can be accessed through a [[Linux Introduction|command line interface]] or via a web interface called Open OnDemand.&lt;br /&gt;
&lt;br /&gt;
This cluster can be used for running large numbers (hundreds) of concurrent serial (one core) jobs, OpenMP or other thread-based jobs, shared-memory parallel code using up to 40 or 80 threads per job (depending on the partition), distributed-memory (MPI-based) parallel code using up to hundreds of cores, or jobs that take advantage of Graphics Processing Units (GPUs).&lt;br /&gt;
&lt;br /&gt;
Historically, ARC was assembled primarily from older, disparate Linux-based clusters that were formerly offered to researchers from across Canada, such as Breezy, Lattice, and Parallel.  In addition, a large-memory compute node (Bigbyte) was salvaged from the now-retired local Storm cluster. In January 2019, a major addition of modern hardware was purchased for ARC. In 2020, compute clusters from CHGI were migrated into ARC.&lt;br /&gt;
&lt;br /&gt;
=== How to Get Started ===&lt;br /&gt;
If you have a project you think would be appropriate for ARC, please email support@hpc.ucalgary.ca and mention the intended research and software you plan to use. You must have a University of Calgary IT account in order to use ARC.&lt;br /&gt;
* For users who do not have a University of Calgary IT account or email address, please register for one at https://itregport.ucalgary.ca/.&lt;br /&gt;
* For users external to the University, such as collaborators on a research project at the University of Calgary, please contact us and mention the project leader you are collaborating with.&lt;br /&gt;
&lt;br /&gt;
Once your access to ARC has been granted, you will be able to immediately make use of the cluster using your University of Calgary IT account by following the [[ARC_Cluster_Guide#Using_ARC|usage guide outlined below]].&lt;br /&gt;
&lt;br /&gt;
== Using ARC ==&lt;br /&gt;
&lt;br /&gt;
{{Message Box&lt;br /&gt;
|icon=Security Icon.png&lt;br /&gt;
|title=Cybersecurity awareness at the U of C&lt;br /&gt;
|message=Please note that there are typically about 950 phishing attempts targeting University of Calgary accounts each month. This is just a reminder to be careful about computer security issues, both at home and at the University. Please visit https://it.ucalgary.ca/it-security for more information, tips on secure computing, and how to report suspected security problems.}}&lt;br /&gt;
&lt;br /&gt;
=== Logging in ===&lt;br /&gt;
To log in to ARC, connect using SSH to &amp;lt;code&amp;gt;arc.ucalgary.ca&amp;lt;/code&amp;gt; on port &amp;lt;code&amp;gt;22&amp;lt;/code&amp;gt;. Connections to ARC are accepted only from the University of Calgary network (on campus) or through the University of Calgary General VPN (off campus).&lt;br /&gt;
&lt;br /&gt;
See [[Connecting to RCS HPC Systems]] for more information.&lt;br /&gt;
=== How to interact with ARC ===&lt;br /&gt;
&lt;br /&gt;
The ARC cluster is a collection of many compute nodes connected by a high-speed network. On ARC, computations are submitted as jobs. Once submitted, jobs are assigned to compute nodes by the job scheduler as resources become available.&lt;br /&gt;
&lt;br /&gt;
[[File:Cluster.png]]&lt;br /&gt;
&lt;br /&gt;
You can access ARC with your UCalgary IT user credentials. Once connected, you will be placed on the ARC login node, which is intended for basic tasks such as submitting jobs, monitoring job status, managing files, and editing text. It is a shared resource to which multiple users are connected at the same time. Thus, intensive tasks are not allowed on the login node, as they may prevent other users from connecting or submitting their computations. &lt;br /&gt;
         [tannistha.nandi@arc ~]$ &lt;br /&gt;
The job scheduling system on ARC is called Slurm.  On ARC, there are two Slurm commands that can allocate resources to a job under appropriate conditions: ‘salloc’ and ‘sbatch’. They both accept the same set of command-line options with respect to resource allocation. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;salloc&#039;&#039;&#039; is used to launch an interactive session, typically for tasks under 5 hours. &lt;br /&gt;
Once an interactive job session is created, you can explore research datasets, start R or Python sessions to test your code, compile software applications, etc.&lt;br /&gt;
&lt;br /&gt;
a. Example 1: The following command requests 1 CPU on 1 node for 1 task, along with 1 GB of RAM, for one hour. &lt;br /&gt;
          [tannistha.nandi@arc ~]$ salloc --mem=1G -c 1 -N 1 -n 1  -t 01:00:00&lt;br /&gt;
          salloc: Granted job allocation 6758015&lt;br /&gt;
          salloc: Waiting for resource configuration&lt;br /&gt;
          salloc: Nodes fc4 are ready for job&lt;br /&gt;
          [tannistha.nandi@fc4 ~]$ &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
b. Example 2:  The following command requests 1 GPU on 1 node in the gpu-v100 partition, along with 1 GB of RAM, for 1 hour.  Generic resource scheduling (--gres) is used to request GPU resources.&lt;br /&gt;
         [tannistha.nandi@arc ~]$ salloc --mem=1G -t 01:00:00 -p gpu-v100 --gres=gpu:1&lt;br /&gt;
         salloc: Granted job allocation 6760460&lt;br /&gt;
         salloc: Waiting for resource configuration&lt;br /&gt;
         salloc: Nodes fg3 are ready for job&lt;br /&gt;
         [tannistha.nandi@fg3 ~]$&lt;br /&gt;
&lt;br /&gt;
Once you finish your work, type &#039;exit&#039; at the command prompt to end the interactive session:&lt;br /&gt;
         [tannistha.nandi@fg3 ~]$ exit&lt;br /&gt;
         [tannistha.nandi@fg3 ~]$ salloc: Relinquishing job allocation 6760460&lt;br /&gt;
This ensures that the allocated resources are released from your job and made available to other users.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;sbatch&#039;&#039;&#039; is used to submit computations as batch jobs to run on the cluster. For example, you can submit a script named job-script.slurm via &#039;sbatch&#039; for execution:   &lt;br /&gt;
         [tannistha.nandi@arc ~]$ sbatch job-script.slurm&lt;br /&gt;
When resources become available, they are allocated to the job. Batch jobs are suited for tasks that run for long periods of time without user supervision. When the job script terminates, the allocation is released. &lt;br /&gt;
Please review the section on how to prepare job scripts for more information.&lt;br /&gt;
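As a sketch of the overall batch workflow (the job ID below is illustrative), you can submit a script and then monitor or cancel it with the standard Slurm commands:&lt;br /&gt;

```shell
# Submit the batch script; sbatch prints the assigned job ID.
sbatch job-script.slurm

# List your own pending and running jobs.
squeue -u $USER

# Cancel a job if necessary, using the job ID that sbatch reported.
scancel 6758015
```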
&lt;br /&gt;
=== Prepare job scripts  ===&lt;br /&gt;
Job scripts are text files saved with the extension &#039;.slurm&#039;, for example &#039;job-script.slurm&#039;. &lt;br /&gt;
A job script looks something like this:&lt;br /&gt;
    #!/bin/bash&lt;br /&gt;
    ####### Reserve computing resources #############&lt;br /&gt;
    #SBATCH --nodes=1&lt;br /&gt;
    #SBATCH --ntasks=1&lt;br /&gt;
    #SBATCH --cpus-per-task=1&lt;br /&gt;
    #SBATCH --time=01:00:00&lt;br /&gt;
    #SBATCH --mem=1G&lt;br /&gt;
    #SBATCH --partition=cpu2019&lt;br /&gt;
    ####### Set environment variables ###############&lt;br /&gt;
    module load python/anaconda3-2018.12&lt;br /&gt;
    ####### Run your script #########################&lt;br /&gt;
    python myscript.py&lt;br /&gt;
&lt;br /&gt;
The first line contains the text &amp;quot;#!/bin/bash&amp;quot; so that the file is interpreted as a bash script.&lt;br /&gt;
&lt;br /&gt;
It is followed by lines that start with &#039;#SBATCH&#039;, which communicate with &#039;SLURM&#039;. You may add as many #SBATCH directives as needed to reserve computing resources for your task. The above example requests one CPU on a single node for 1 task, along with 1 GB of RAM, for an hour on the cpu2019 partition.&lt;br /&gt;
&lt;br /&gt;
Next, you set up environment variables, either by loading modules centrally installed on ARC or by exporting the path to software installed in your home directory. The above example loads an available Python module.&lt;br /&gt;
&lt;br /&gt;
Finally, include the Linux command that executes your script.&lt;br /&gt;
&lt;br /&gt;
Note that failing to specify part of a resource allocation request (most notably &#039;&#039;&#039;time&#039;&#039;&#039; and &#039;&#039;&#039;memory&#039;&#039;&#039;) will result in a poor resource request, as the defaults are not appropriate for most cases. Please refer to the section &#039;Running non-interactive jobs&#039; for more examples.&lt;br /&gt;
&lt;br /&gt;
== Hardware ==&lt;br /&gt;
Since the ARC cluster is a conglomeration of many different compute clusters, the hardware within ARC can vary widely in terms of performance and capabilities.  To mitigate any compatibility issues with different hardware, we combine similar hardware into their own Slurm partition to ensure your workload runs as consistently as possible within one partition. Please carefully review the hardware specs for each of the partitions below to avoid any surprises.&lt;br /&gt;
&lt;br /&gt;
=== Partition Hardware Specs ===&lt;br /&gt;
When submitting jobs to ARC, you may specify a partition that your job will run on.  Please choose a partition that is most appropriate for your work.&lt;br /&gt;
&lt;br /&gt;
* See also [[How to find available partitions on ARC]].&lt;br /&gt;
&lt;br /&gt;
A few things to keep in mind when choosing a partition:&lt;br /&gt;
* Specific workloads requiring special Intel Instruction Set Extensions may only work on newer Intel CPUs. &lt;br /&gt;
* If working with multi-node parallel processing, ensure your software and libraries support the partition&#039;s interconnect networking.&lt;br /&gt;
* While older partitions may be slower, they may be less busy and have little to no wait times.&lt;br /&gt;
&lt;br /&gt;
If you are unsure which partition to use or need assistance on selecting an appropriate partition, please see [[#Selecting_a_Partition|the Selecting a Partition Section]] below. &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Partition&lt;br /&gt;
! Description&lt;br /&gt;
! Nodes&lt;br /&gt;
! CPU Cores, Model, and Year&lt;br /&gt;
! Memory&lt;br /&gt;
! GPU&lt;br /&gt;
! Network&lt;br /&gt;
|-&lt;br /&gt;
| -&lt;br /&gt;
| ARC Login Node&lt;br /&gt;
| 1&lt;br /&gt;
| 16 cores, 2x Intel(R) Xeon(R) CPU E5620  @ 2.40GHz (Westmere, 2010)&lt;br /&gt;
| 48 GB&lt;br /&gt;
| N/A&lt;br /&gt;
| 40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| gpu-v100&lt;br /&gt;
| GPU Partition&lt;br /&gt;
| 13&lt;br /&gt;
| 80 cores, 2x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (Skylake, 2019)&lt;br /&gt;
| 754 GB&lt;br /&gt;
| 2x Tesla V100-PCIE-16GB&lt;br /&gt;
| 100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|gpu-a100&lt;br /&gt;
|GPU Partition&lt;br /&gt;
|5&lt;br /&gt;
|40 cores, 1x Intel(R) Xeon(R) Silver 4316 CPU @ 2.30GHz (Ice Lake, 2021)&lt;br /&gt;
|512 GB&lt;br /&gt;
|2x GA100 A100 PCIe 80GB&lt;br /&gt;
|100 Gbit/s Mellanox Infiniband&lt;br /&gt;
|-&lt;br /&gt;
|cpu2022&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|52&lt;br /&gt;
|52 cores, 2x Intel(R) Xeon(R) Gold 5320 CPU @ 2.20GHz (Ice Lake)&lt;br /&gt;
|256 GB&lt;br /&gt;
|N/A&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| cpu2021&lt;br /&gt;
| General Purpose Compute&lt;br /&gt;
| 48&lt;br /&gt;
| 48 cores, 2x Intel(R) Xeon(R) Gold 6240R CPU @ 2.40GHz (Cascade Lake, 2021)&lt;br /&gt;
| 185 GB&lt;br /&gt;
| N/A &lt;br /&gt;
| 100 Gbit/s Mellanox Infiniband&lt;br /&gt;
|-&lt;br /&gt;
| cpu2019&lt;br /&gt;
| General Purpose Compute&lt;br /&gt;
| 14&lt;br /&gt;
| 40 cores, 2x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (Skylake, 2019)&lt;br /&gt;
| 190 GB&lt;br /&gt;
| N/A &lt;br /&gt;
| 100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
| apophis&lt;br /&gt;
| General Purpose Compute&lt;br /&gt;
| 21&lt;br /&gt;
| 40 cores, 2x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (Skylake, 2019)&lt;br /&gt;
| 190 GB&lt;br /&gt;
| N/A &lt;br /&gt;
| 100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
| razi&lt;br /&gt;
| General Purpose Compute&lt;br /&gt;
| 41&lt;br /&gt;
| 40 cores, 2x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (Skylake, 2019)&lt;br /&gt;
| 190 GB&lt;br /&gt;
| N/A &lt;br /&gt;
| 100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
| bigmem&lt;br /&gt;
| Big Memory Nodes&lt;br /&gt;
| 2&lt;br /&gt;
| 80 cores, 4x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (Skylake, 2019)&lt;br /&gt;
| 3022 GB&lt;br /&gt;
| N/A &lt;br /&gt;
| 100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
| pawson&lt;br /&gt;
| General Purpose Compute&lt;br /&gt;
| 13&lt;br /&gt;
| 40 cores, 2x Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz (Skylake, 2019)&lt;br /&gt;
| 190 GB&lt;br /&gt;
| N/A&lt;br /&gt;
| 100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|cpu2017&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|14&lt;br /&gt;
|56 cores, 2x Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz (Broadwell, 2016)&lt;br /&gt;
|256 GB&lt;br /&gt;
|N/A&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| theia&lt;br /&gt;
| Former Theia cluster&lt;br /&gt;
| 20&lt;br /&gt;
| 56 cores, 2x Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz (Broadwell, 2016)&lt;br /&gt;
| 188 GB&lt;br /&gt;
| N/A &lt;br /&gt;
| 40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| cpu2013&lt;br /&gt;
| Former hyperion cluster&lt;br /&gt;
| 12&lt;br /&gt;
| 32 cores, 2x Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz (Sandy Bridge, 2012)&lt;br /&gt;
| 126 GB&lt;br /&gt;
| N/A&lt;br /&gt;
| 40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| lattice&lt;br /&gt;
| Former Lattice cluster&lt;br /&gt;
| 307&lt;br /&gt;
| 8 cores, 2x Intel(R) Xeon(R) CPU L5520  @ 2.27GHz (Nehalem, 2009)&lt;br /&gt;
| 12 GB&lt;br /&gt;
| N/A&lt;br /&gt;
| 40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| single&lt;br /&gt;
| Former Lattice cluster&lt;br /&gt;
| 168&lt;br /&gt;
| 8 cores, 2x Intel(R) Xeon(R) CPU L5520  @ 2.27GHz (Nehalem, 2009)&lt;br /&gt;
| 12 GB&lt;br /&gt;
| N/A&lt;br /&gt;
| 40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| parallel&lt;br /&gt;
| Former Parallel Cluster&lt;br /&gt;
| 576&lt;br /&gt;
| 12 cores, 2x Intel(R) Xeon(R) CPU E5649  @ 2.53GHz (Westmere, 2011)&lt;br /&gt;
| 24 GB&lt;br /&gt;
| N/A&lt;br /&gt;
| 40 Gbit/s InfiniBand&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===ARC Cluster Storage===&lt;br /&gt;
Usage of ARC cluster storage is outlined by our [[ARC Storage Terms of Use]] page.&lt;br /&gt;
&lt;br /&gt;
{{Warning Box&lt;br /&gt;
| title=Data Storage&lt;br /&gt;
| message=ARC storage is not suitable for long-term or archival storage.  It is not backed-up and does not have sufficient redundancy to be used as a primary storage system.  It is not guaranteed to be available for the time periods that are typical of archiving.  For information on available campus storage options, please see [[Storage Options]].&lt;br /&gt;
&lt;br /&gt;
Please ensure that the only data you keep on ARC is used for active computations.&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message Box&lt;br /&gt;
| title=No Backup Policy!&lt;br /&gt;
| message=You are responsible for your own backups.  Many researchers will have accounts with Compute Canada and may choose to back up their data there (the Project file system accessible through the Cedar cluster would often be used). &lt;br /&gt;
&lt;br /&gt;
Please contact us at support@hpc.ucalgary.ca if you want more information about this option.&lt;br /&gt;
&lt;br /&gt;
You can also back up data to your UofC OneDrive for business allocation see: https://rcs.ucalgary.ca/How_to_transfer_data#rclone:_rsync_for_cloud_storage This allocation starts at 5TB. Contact the support center for questions regarding OneDrive for Business.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
The ARC cluster has around 2 petabytes of shared disk storage available across the entire cluster, as well as temporary storage local to each of the compute nodes. Please refer to the individual sections below for capacity limitations and usage policies. &lt;br /&gt;
&lt;br /&gt;
Use the &amp;lt;code&amp;gt;arc.quota&amp;lt;/code&amp;gt; command on ARC to determine the available space on your various volumes and home directory.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Description&lt;br /&gt;
!Capacity&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;&lt;br /&gt;
|User home directories&lt;br /&gt;
|500 GB (per user)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;/work&amp;lt;/code&amp;gt;&lt;br /&gt;
|Research project storage&lt;br /&gt;
|Up to hundreds of TB&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt;&lt;br /&gt;
|Scratch space for temporary files&lt;br /&gt;
|Up to 15 TB&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt;&lt;br /&gt;
|Temporary space local to the compute cluster&lt;br /&gt;
|Dependent on available storage on nodes. Verify with &amp;lt;code&amp;gt;df -h&amp;lt;/code&amp;gt;.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;/dev/shm&amp;lt;/code&amp;gt;&lt;br /&gt;
|Small temporary in-memory disk space local to the compute cluster&lt;br /&gt;
|Dependent on memory size set in your Slurm job.&lt;br /&gt;
|}&lt;br /&gt;
====&amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;: Home file system====&lt;br /&gt;
Each user has a directory under /home, which is the default working directory when logging in to ARC. Each home directory has a per-user quota of 500 GB. This limit is fixed and cannot be increased. Researchers requiring storage beyond what is available in their home directory may use &amp;lt;code&amp;gt;/work&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note on file sharing: due to security concerns, permissions set with &amp;lt;code&amp;gt;chmod&amp;lt;/code&amp;gt; to allow other users to read or write your home directory will be automatically reverted by a system process unless an explicit exception is made.  If you need to share files with other researchers on the ARC cluster, please write to support@hpc.ucalgary.ca to request such an exception.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt;: Scratch file system for large job-oriented storage====&lt;br /&gt;
Associated with each job, under the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; directory, a subdirectory is created that can be referenced in job scripts as &amp;lt;code&amp;gt;/scratch/${SLURM_JOB_ID}&amp;lt;/code&amp;gt;. You can use that directory for temporary files needed during the course of a job. Up to 15 TB of storage may be used, per user (total for all your jobs) in the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; file system. &lt;br /&gt;
&lt;br /&gt;
Data in &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; associated with a given job will be deleted automatically, without exception, five days after the job finishes.&lt;br /&gt;
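As an illustration, a job script could stage data through the per-job scratch directory like the following sketch (the program and file names are placeholders, not part of the ARC documentation):&lt;br /&gt;

```shell
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --mem=1G
#SBATCH --time=01:00:00

# Per-job scratch directory created by the system, named after the job ID.
SCRATCH=/scratch/${SLURM_JOB_ID}

# Stage input into scratch, run there, then copy results back before the
# directory is deleted (five days after the job finishes).
cp ~/input.dat "${SCRATCH}/"
cd "${SCRATCH}"
./my_analysis input.dat > results.txt
cp results.txt ~/
```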
&lt;br /&gt;
====&amp;lt;code&amp;gt;/work&amp;lt;/code&amp;gt;: Work file system for larger projects====&lt;br /&gt;
If you need more space than provided in &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt; and the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; job-oriented space is not appropriate for your case, please write to support@hpc.ucalgary.ca with an explanation, including how much storage you expect to need and for how long.  If approved, you will be assigned a directory under &amp;lt;code&amp;gt;/work&amp;lt;/code&amp;gt; with an appropriately large quota.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt;,&amp;lt;code&amp;gt;/var/tmp&amp;lt;/code&amp;gt;: Temporary files====&lt;br /&gt;
You may use &amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;/var/tmp&amp;lt;/code&amp;gt; for temporary files generated by your job. &amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt; is stored on a disk local to the compute node and is not shared across the cluster. Files stored here are removed immediately after your job terminates.&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;code&amp;gt;/dev/shm&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;/run/user/$uid&amp;lt;/code&amp;gt;: In-memory temporary files ====&lt;br /&gt;
&amp;lt;code&amp;gt;/dev/shm&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/run/user/$UID&amp;lt;/code&amp;gt; are writable locations for temporary files backed by virtual memory, which can be used when faster I/O is required. They are ideal for workloads that perform many small reads and writes to share data between processes, or that need a fast cache. The amount of data you can write here depends on the amount of free memory available to your job. Files stored in these locations are removed immediately after your job terminates.&lt;br /&gt;
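As a sketch (the file names and the sort invocation are only an illustration), a job could use an in-memory temporary directory like this:&lt;br /&gt;

```shell
# Create a private in-memory temp directory; its contents count against
# the memory limit of your Slurm job.
SHMDIR=$(mktemp -d /dev/shm/demo_XXXXXX)

# Example: give sort a fast location for its temporary files.
printf 'b\na\nc\n' > "$SHMDIR/input.txt"
sort -T "$SHMDIR" "$SHMDIR/input.txt"

# Files here vanish when the job ends, but cleaning up early frees memory.
rm -rf "$SHMDIR"
```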
&lt;br /&gt;
== Software ==&lt;br /&gt;
All ARC nodes run the latest version of Rocky Linux 8 with the same set of base software packages. To maintain the stability and consistency of all nodes, any additional dependencies that your software requires must be installed under your account.  For convenience, we have packaged commonly used software and dependencies as modules, available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt;. If your software is not available as a module, you may also try Anaconda, which allows users to manage and install custom packages in an isolated environment.&lt;br /&gt;
&lt;br /&gt;
For a list of packages that have been made available, please see [[ARC Software pages]]. &lt;br /&gt;
&lt;br /&gt;
Please contact us at support@hpc.ucalgary.ca if you need additional software installed.&lt;br /&gt;
&lt;br /&gt;
==== Modules ====&lt;br /&gt;
The setup of the environment for using some of the installed software is through the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command. An overview of [https://www.westgrid.ca//support/modules modules on WestGrid (external link)] is largely applicable to ARC.&lt;br /&gt;
&lt;br /&gt;
Software packages bundled as a module will be available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt; and can be listed with the &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module avail&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To enable Python, load the Python module by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module load python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To unload the Python module, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module remove python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To see currently loaded modules, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module list&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By default, no modules are loaded on ARC. If you wish to use a specific module, such as the Intel compilers or the Open MPI parallel programming packages, you must load the appropriate module.&lt;br /&gt;
&lt;br /&gt;
== Job submission ==&lt;br /&gt;
&lt;br /&gt;
=== Interactive Jobs ===&lt;br /&gt;
The ARC login node may be used for such tasks as editing files, compiling programs and running short tests while developing programs. We suggest CPU intensive workloads on the login node be restricted to under 15 minutes as per [[General Cluster Guidelines and Policies|our cluster guidelines]]. For interactive workloads exceeding 15 minutes, use the &#039;&#039;&#039;[[Running_jobs#Interactive_jobs|salloc command]]&#039;&#039;&#039; to allocate an interactive session on a compute node.&lt;br /&gt;
&lt;br /&gt;
The default salloc allocation is 1 CPU and 1 GB of memory. Adjust this by specifying &amp;lt;code&amp;gt;-n CPU#&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--mem Megabytes&amp;lt;/code&amp;gt;. You may request up to 5 hours of CPU time for interactive jobs.&lt;br /&gt;
 salloc --time=5:00:00 --partition=cpu2019&lt;br /&gt;
&lt;br /&gt;
Always use salloc or srun to start an interactive job. Do not SSH directly to a compute node: SSH sessions are refused unless you have an active job running on that node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- This information doesn&#039;t seem that useful or relevant to running interactive jobs. Move to getting started section?&lt;br /&gt;
ARC uses the Linux operating system. The program that responds to your typed commands and allows you to run other programs is called the Linux shell. There are several different shells available, but, by default you will use one called bash. It is useful to have some knowledge of the shell and a variety of other command-line programs that you can use to manipulate files. If you are new to Linux systems, we recommend that you work through one of the many online tutorials that are available, such as the [http://www.ee.surrey.ac.uk/Teaching/Unix/index.html UNIX Tutorial for Beginners (external link)] provided by the University of Surrey. The tutorial covers such fundamental topics, among others, as creating, renaming and deleting files and directories, how to produce a listing of your files and how to tell how much disk space you are using.  For a more comprehensive introduction to Linux, see [http://linuxcommand.sourceforge.net/tlcl.php The Linux Command Line (external link)].&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Running non-interactive jobs (batch processing) ===&lt;br /&gt;
Production runs and longer test runs should be submitted as (non-interactive) batch jobs, in which the commands to be executed are listed in a script (text file). Batch job scripts are submitted using the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command, part of the Slurm job management and scheduling software. #SBATCH directive lines at the beginning of the script specify the resources needed for the job (cores, memory, run-time limit and any specialized hardware).&lt;br /&gt;
&lt;br /&gt;
Most of the information on the [https://docs.computecanada.ca/wiki/Running_jobs Running Jobs (external link)] page on the Compute Canada web site is also relevant for submitting and managing batch jobs and reserving processors for interactive work on ARC.  One major difference between running jobs on the ARC and Compute Canada clusters is in selecting the type of hardware that should be used for a job. On ARC, you choose the hardware to use primarily by specifying a partition, as described below.&lt;br /&gt;
&lt;br /&gt;
=== Selecting a Partition ===&lt;br /&gt;
There are several aspects to consider when selecting a partition, including:&lt;br /&gt;
* Resource requirements in terms of memory and CPU cores&lt;br /&gt;
* Hardware specific requirements, such as GPU or CPU Instruction Set Extensions&lt;br /&gt;
* Partition resource limits and potential wait time&lt;br /&gt;
* Software support for parallel processing using Message Passing Interface (MPI), OpenMP, etc.&lt;br /&gt;
** E.g., since MPI can distribute memory across multiple nodes, per-node memory requirements could be lower; whereas OpenMP or single-process code restricted to one node would require a higher-memory node.&lt;br /&gt;
** Note: MPI code running on hardware with Omni-Path networking should be compiled with Omni-Path networking support. This is provided by loading the &amp;lt;code&amp;gt;openmpi/2.1.3-opa&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;openmpi/3.1.2-opa&amp;lt;/code&amp;gt; modules prior to compiling.&lt;br /&gt;
&lt;br /&gt;
Since resources that are requested are reserved for your job, please request only as much CPU and memory as your job requires to avoid reducing the cluster efficiency.  If you are unsure which partition to use or the specific resource requests that are appropriate for your jobs, please contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] and we would be happy to work with you.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%;&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Description&lt;br /&gt;
!Cores/node&lt;br /&gt;
!Memory Request Limit&lt;br /&gt;
!Time Limit&lt;br /&gt;
!GPU&lt;br /&gt;
!Networking&lt;br /&gt;
|-&lt;br /&gt;
|cpu2021&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|48&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|cpu2019&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|40&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|Big Memory Compute&lt;br /&gt;
|80&lt;br /&gt;
|3,000,000 MB&lt;br /&gt;
|24 hours ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|gpu-v100&lt;br /&gt;
|GPU Compute&lt;br /&gt;
|80&lt;br /&gt;
|753,000 MB&lt;br /&gt;
|24 hours ‡&lt;br /&gt;
|2&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|apophis&amp;amp;dagger;&lt;br /&gt;
|Private Research Partition&lt;br /&gt;
|40&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|razi&amp;amp;dagger;&lt;br /&gt;
|Private Research Partition&lt;br /&gt;
|40&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|pawson&amp;amp;dagger;&lt;br /&gt;
|Private Research Partition&lt;br /&gt;
|40&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|sherlock&amp;amp;dagger;&lt;br /&gt;
|Private Research Partition&lt;br /&gt;
|7&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|theia&amp;amp;dagger;&lt;br /&gt;
|Private Research Partition&lt;br /&gt;
|28&lt;br /&gt;
|188,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|synergy&amp;amp;dagger;&lt;br /&gt;
|Private Research Partition&lt;br /&gt;
|14&lt;br /&gt;
|245,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu2013&lt;br /&gt;
|Legacy General Purpose Compute&lt;br /&gt;
|16&lt;br /&gt;
|120000&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|lattice&lt;br /&gt;
|Legacy General Purpose Compute&lt;br /&gt;
|8&lt;br /&gt;
|12000&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|parallel&lt;br /&gt;
|Legacy General Purpose Compute&lt;br /&gt;
|12&lt;br /&gt;
|23000&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|single&lt;br /&gt;
|Legacy Single-Node Job Compute&lt;br /&gt;
|8&lt;br /&gt;
|12000&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu2021-bf24&lt;br /&gt;
|Back-fill Compute (2021-era hardware, 24h)&lt;br /&gt;
|48&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|24 hours ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|cpu2019-bf05&lt;br /&gt;
|Back-fill Compute (2019-era hardware, 5h)&lt;br /&gt;
|40&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|5 hours ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|cpu2017-bf05&lt;br /&gt;
|Back-fill Compute (2017-era hardware, 5h)&lt;br /&gt;
|14&lt;br /&gt;
|245,000 MB&lt;br /&gt;
|5 hours ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|+ style=&amp;quot;caption-side: bottom; text-align: left; font-weight: normal;&amp;quot; | &amp;amp;dagger; These partitions contain hardware contributed to ARC by particular researchers and should only be used by members of their research groups. However, they have generously allowed their compute nodes to be shared with others outside their research groups for short jobs.  A special &#039;back-fill&#039; or -bf partition is available for use by all ARC users for jobs shorter than 5 hours.&amp;lt;br /&amp;gt;‡ As time limits may be changed by administrators to adjust to maintenance schedules or system load, the values given in the tables are not definitive.  See the Time limits section below for commands you can use on ARC itself to determine current limits.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==== Backfill partitions ====&lt;br /&gt;
Backfill partitions can be used by all users on ARC for short-term jobs. The hardware backing these partitions is generously contributed by researchers.  We recommend including the backfill partitions for short-term jobs, as this may reduce your job&#039;s wait time and increase overall cluster throughput.&lt;br /&gt;
&lt;br /&gt;
Previously, each contributing research group had their own backfill partition. Since June 2021, we have merged:&lt;br /&gt;
&lt;br /&gt;
* apophis-bf, pawson-bf, and razi-bf into cpu2019-bf05 &lt;br /&gt;
* theia-bf and synergy-bf into cpu2017-bf05&lt;br /&gt;
&lt;br /&gt;
The naming scheme of the backfill partitions is the CPU generation year, followed by -bf and the time limit in hours.  For example, cpu2017-bf05 would represent a backfill partition containing processors from 2017 with a time limit of 5 hours.&lt;br /&gt;
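For example, a short job could list a regular partition together with its backfill counterpart, letting Slurm start it wherever resources free up first (a sketch, assuming the job fits within the backfill partition's 5-hour limit):&lt;br /&gt;

```shell
# Request up to 4 hours; eligible for both the regular and backfill partitions.
#SBATCH --time=04:00:00
#SBATCH --partition=cpu2019,cpu2019-bf05
```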
&lt;br /&gt;
==== Hardware resource and job policy limits ====&lt;br /&gt;
In addition to the hardware limitations, please be aware that there may also be policy limits imposed on your account for each partition. These limits restrict the number of cores, nodes, or GPUs that can be used at any given time. Since the limits are applied on a partition-by-partition basis, using resources in one partition should not affect the available resources you can use in another partition.&lt;br /&gt;
&lt;br /&gt;
These limits can be listed by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sacctmgr show qos format=Name,MaxWall,MaxTRESPU%20,MaxSubmitJobs&lt;br /&gt;
      Name     MaxWall            MaxTRESPU MaxSubmit&lt;br /&gt;
---------- ----------- -------------------- ---------&lt;br /&gt;
    normal  7-00:00:00                           2000&lt;br /&gt;
    breezy  3-00:00:00              cpu=384      2000&lt;br /&gt;
       gpu  7-00:00:00                          13000&lt;br /&gt;
   cpu2019  7-00:00:00              cpu=240      2000&lt;br /&gt;
  gpu-v100  1-00:00:00    cpu=80,gres/gpu=4      2000&lt;br /&gt;
    single  7-00:00:00      cpu=408,node=75      2000&lt;br /&gt;
      razi  7-00:00:00                           2000&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Specifying a partition in a job ====&lt;br /&gt;
Once you have decided which partition best suits your computation, you can select one or more partitions on a job-by-job basis by including the &amp;lt;code&amp;gt;partition&amp;lt;/code&amp;gt; keyword in an &amp;lt;code&amp;gt;SBATCH&amp;lt;/code&amp;gt; directive in your batch job. Multiple partitions should be comma-separated.  If you omit the partition specification, the system will try to assign your job to appropriate hardware based on other aspects of your request. &lt;br /&gt;
&lt;br /&gt;
In some cases you really should specify the partition explicitly.  For example, if you are running single-node jobs with thread-based parallel processing requesting 8 cores, you could use:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --mem=0              ❶&lt;br /&gt;
#SBATCH --nodes=1            ❷&lt;br /&gt;
#SBATCH --ntasks=1           ❸&lt;br /&gt;
#SBATCH --cpus-per-task=8    ❹&lt;br /&gt;
#SBATCH --partition=single,lattice   ❺ &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A few things to mention in this example:&lt;br /&gt;
# &amp;lt;code&amp;gt;--mem=0&amp;lt;/code&amp;gt; allocates all available memory on the compute node for the job. This effectively allocates the entire node for your job.&lt;br /&gt;
# &amp;lt;code&amp;gt;--nodes=1&amp;lt;/code&amp;gt; allocates 1 node for the job&lt;br /&gt;
# &amp;lt;code&amp;gt;--ntasks=1&amp;lt;/code&amp;gt; your job has a single task&lt;br /&gt;
# &amp;lt;code&amp;gt;--cpus-per-task=8&amp;lt;/code&amp;gt; asks for 8 CPUs per task. This job in total will request 8 * 1, or 8 CPUs.&lt;br /&gt;
# &amp;lt;code&amp;gt;--partition=single,lattice&amp;lt;/code&amp;gt; specifies that this job can run on either single or lattice.&lt;br /&gt;
Suppose that your job requires at most 8 CPU cores and 10 GB of memory. The above Slurm request would be valid and optimal, since your job fits neatly on a single node in the single and lattice partitions.  However, if you fail to specify the partition, Slurm may schedule your job to a partition with larger nodes, such as cpu2019, where each node has 40 cores and 190 GB of memory. If your job lands on such a node, it will effectively waste 32 cores and 180 GB of memory, because &amp;lt;code&amp;gt;--mem=0&amp;lt;/code&amp;gt; not only requests all 190 GB on that node but also prevents other jobs from being scheduled on the same node.&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t specify a partition, please give greater thought to the memory specification to make sure that the scheduler will not assign your job more resources than are needed.&lt;br /&gt;
&lt;br /&gt;
Parameters such as &#039;&#039;&#039;--ntasks-per-node&#039;&#039;&#039;, &#039;&#039;&#039;--cpus-per-task&#039;&#039;&#039;, &#039;&#039;&#039;--mem&#039;&#039;&#039; and &#039;&#039;&#039;--mem-per-cpu&#039;&#039;&#039; also have to be adjusted according to the capabilities of the hardware. The product of --ntasks-per-node and --cpus-per-task should be less than or equal to the number given in the &amp;quot;Cores/node&amp;quot; column.  The &#039;&#039;&#039;--mem&#039;&#039;&#039; parameter (or the product of &#039;&#039;&#039;--mem-per-cpu&#039;&#039;&#039; and &#039;&#039;&#039;--cpus-per-task&#039;&#039;&#039;) should be less than the &amp;quot;Memory limit&amp;quot; shown. If using whole nodes, you can specify &#039;&#039;&#039;--mem=0&#039;&#039;&#039; to request the maximum amount of memory per node.&lt;br /&gt;
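These checks are simple arithmetic, and it can be worth doing them before submitting. A minimal sketch, assuming the cpu2019 limits from the partition table below (40 cores, 190 GB per node); the request numbers are illustrative:&lt;br /&gt;

```shell
#!/bin/bash
# Illustrative request: 4 tasks per node, 8 CPUs per task, 4 GB per CPU.
ntasks_per_node=4
cpus_per_task=8
mem_per_cpu_gb=4

# cpu2019 node limits from the partition table in this guide.
cores_per_node=40
mem_limit_gb=190

total_cores=$((ntasks_per_node * cpus_per_task))
total_mem=$((total_cores * mem_per_cpu_gb))

echo "cores requested per node: $total_cores (limit $cores_per_node)"
echo "memory requested per node: ${total_mem} GB (limit ${mem_limit_gb} GB)"
# prints: cores requested per node: 32 (limit 40)
# prints: memory requested per node: 128 GB (limit 190 GB)
```

Here 32 cores and 128 GB both fit within a single cpu2019 node, so the request is valid.&lt;br /&gt;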
&lt;br /&gt;
===== Examples =====&lt;br /&gt;
Here are some examples of specifying the various partitions.&lt;br /&gt;
&lt;br /&gt;
As mentioned in the [[#Hardware|Hardware]] section above, the ARC cluster was expanded in January 2019.  To select the 40-core general purpose nodes specify:&lt;br /&gt;
&lt;br /&gt;
 #SBATCH --partition=cpu2019&lt;br /&gt;
&lt;br /&gt;
To run on the Tesla V100 GPU-enabled nodes, use the &#039;&#039;&#039;gpu-v100&#039;&#039;&#039; partition.  You will also need to include an SBATCH directive in the form &#039;&#039;&#039;--gres=gpu:n&#039;&#039;&#039; to specify the number of GPUs, n, that you need.  For example, if the software you are running can make use of both GPUs on a gpu-v100 partition compute node, use:&lt;br /&gt;
&lt;br /&gt;
 #SBATCH --partition=gpu-v100 --gres=gpu:2&lt;br /&gt;
&lt;br /&gt;
For very large memory jobs (more than 185000 MB), specify the bigmem partition:&lt;br /&gt;
&lt;br /&gt;
 #SBATCH --partition=bigmem&lt;br /&gt;
&lt;br /&gt;
If the more modern computers are too busy or you have a job well-suited to run on the compute nodes described in the legacy hardware section above, choose the cpu2013, Lattice or Parallel compute nodes by specifying the corresponding partition keyword:&lt;br /&gt;
&lt;br /&gt;
 #SBATCH --partition=cpu2013&lt;br /&gt;
 #SBATCH --partition=lattice&lt;br /&gt;
 #SBATCH --partition=parallel&lt;br /&gt;
&lt;br /&gt;
There is an additional partition called &#039;&#039;&#039;single&#039;&#039;&#039; that provides nodes similar to the lattice partition, but is intended for single-node jobs. Select the single partition with&lt;br /&gt;
&lt;br /&gt;
 #SBATCH --partition=single&lt;br /&gt;
&lt;br /&gt;
=== Time limits ===&lt;br /&gt;
Use the &amp;lt;code&amp;gt;--time&amp;lt;/code&amp;gt; directive to tell the job scheduler the maximum time that your job might run.  For example:&lt;br /&gt;
 #SBATCH --time=hh:mm:ss&lt;br /&gt;
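The time can also be given with a day component, as &amp;lt;code&amp;gt;D-HH:MM:SS&amp;lt;/code&amp;gt; (the format used in the partition limits shown below). For example, a limit of two and a half days would be:&lt;br /&gt;

```bash
#SBATCH --time=2-12:00:00   # 2 days, 12 hours
```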
&lt;br /&gt;
You can use &amp;lt;code&amp;gt;scontrol show partitions&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; to see the current maximum time that a job can run.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; highlight=&amp;quot;6&amp;quot;&amp;gt;&lt;br /&gt;
$ scontrol show partitions&lt;br /&gt;
PartitionName=single                                                                 &lt;br /&gt;
   AllowGroups=ALL AllowAccounts=ALL AllowQos=ALL                                    &lt;br /&gt;
   AllocNodes=ALL Default=NO QoS=single                                              &lt;br /&gt;
   DefaultTime=NONE DisableRootJobs=NO ExclusiveUser=NO GraceTime=0 Hidden=NO        &lt;br /&gt;
   MaxNodes=UNLIMITED MaxTime=7-00:00:00 MinNodes=1 LLN=NO MaxCPUsPerNode=UNLIMITED  &lt;br /&gt;
   Nodes=cn[001-168]                                                                 &lt;br /&gt;
   PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO        &lt;br /&gt;
   OverTimeLimit=NONE PreemptMode=OFF                                                &lt;br /&gt;
   State=UP TotalCPUs=1344 TotalNodes=168 SelectTypeParameters=NONE                  &lt;br /&gt;
   DefMemPerNode=UNLIMITED MaxMemPerNode=UNLIMITED                                   &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, with &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; under the &amp;lt;code&amp;gt;TIMELIMIT&amp;lt;/code&amp;gt; column:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sinfo                                                     &lt;br /&gt;
PARTITION  AVAIL  TIMELIMIT  NODES  STATE NODELIST               &lt;br /&gt;
single        up 7-00:00:00      1 drain* cn097                  &lt;br /&gt;
single        up 7-00:00:00      1  maint cn002                  &lt;br /&gt;
single        up 7-00:00:00      4 drain* cn[001,061,133,154]    &lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Support ==&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=[[Support|Need Help or have other ARC Related Questions?]]&lt;br /&gt;
|message=For all general RCS related issues, questions, or comments, please contact us at support@hpc.ucalgary.ca.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t hesitate to [[Support|contact us]] directly by email if you need help using ARC or require guidance on migrating your workflows to ARC and running them.&lt;br /&gt;
&lt;br /&gt;
[[Category:ARC]]&lt;br /&gt;
[[Category:Guides]]&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=ARC_Cluster_Guide&amp;diff=2638</id>
		<title>ARC Cluster Guide</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=ARC_Cluster_Guide&amp;diff=2638"/>
		<updated>2023-09-14T23:07:50Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* ARC Cluster Storage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ARC Cluster Status}}&lt;br /&gt;
&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=[[Support|Need Help or have other ARC Related Questions?]]&lt;br /&gt;
|message=For all general RCS related issues, questions, or comments, please contact us at support@hpc.ucalgary.ca.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
This guide gives an overview of the Advanced Research Computing (ARC) cluster at the University of Calgary and is intended to be read by new account holders getting started on ARC. This guide covers topics such as the hardware and performance characteristics, available software, usage policies and how to log in and run jobs. ARC can be used with data that a Researcher has classified as Lv1 and Lv2 as described in the UCalgary [https://www.ucalgary.ca/legal-services/sites/default/files/teams/1/Standards-Legal-Information-Security-Classification-Standard.pdf Information Security Classification Standard] &lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
ARC is a high-performance computing (HPC) cluster available for research projects based at the University of Calgary. The cluster comprises hundreds of servers interconnected with a high-bandwidth network. Special resources within the cluster include large-memory nodes and GPU-equipped nodes. You may learn more about ARC&#039;s hardware in the [[ARC Cluster Guide#Hardware|hardware section below]]. ARC can be accessed through a [[Linux Introduction|command line interface]] or via a web interface called Open OnDemand.&lt;br /&gt;
&lt;br /&gt;
This cluster can be used for running large numbers (hundreds) of concurrent serial (one core) jobs, OpenMP or other thread-based jobs, shared-memory parallel code using up to 40 or 80 threads per job (depending on the partition), distributed-memory (MPI-based) parallel code using up to hundreds of cores, or jobs that take advantage of Graphics Processing Units (GPUs).&lt;br /&gt;
&lt;br /&gt;
Historically, ARC was assembled primarily from older, disparate Linux-based clusters that were formerly offered to researchers from across Canada, such as Breezy, Lattice, and Parallel.  In addition, a large-memory compute node (Bigbyte) was salvaged from the now-retired local Storm cluster. In January 2019, a major addition of modern hardware was purchased for ARC. In 2020, compute clusters from CHGI were migrated into ARC.&lt;br /&gt;
&lt;br /&gt;
=== How to Get Started ===&lt;br /&gt;
If you have a project you think would be appropriate for ARC, please email support@hpc.ucalgary.ca and mention the intended research and software you plan to use. You must have a University of Calgary IT account in order to use ARC.&lt;br /&gt;
* For users that do not have a University of Calgary IT account or email address, please register for one at https://itregport.ucalgary.ca/.&lt;br /&gt;
* For users external to the University, such as collaborators on a research project at the University of Calgary, please contact us and mention the project leader you are collaborating with.&lt;br /&gt;
&lt;br /&gt;
Once your access to ARC has been granted, you will be able to immediately make use of the cluster using your University of Calgary IT account by following the [[ARC_Cluster_Guide#Using_ARC|usage guide outlined below]].&lt;br /&gt;
&lt;br /&gt;
== Using ARC ==&lt;br /&gt;
&lt;br /&gt;
{{Message Box&lt;br /&gt;
|icon=Security Icon.png&lt;br /&gt;
|title=Cybersecurity awareness at the U of C&lt;br /&gt;
|message=Please note that there are typically about 950 phishing attempts targeting University of Calgary accounts each month. This is just a reminder to be careful about computer security issues, both at home and at the University. Please visit https://it.ucalgary.ca/it-security for more information, tips on secure computing, and how to report suspected security problems.}}&lt;br /&gt;
&lt;br /&gt;
=== Logging in ===&lt;br /&gt;
To log in to ARC, connect using SSH to &amp;lt;code&amp;gt;arc.ucalgary.ca&amp;lt;/code&amp;gt; on port &amp;lt;code&amp;gt;22&amp;lt;/code&amp;gt;. Connections to ARC are accepted only from the University of Calgary network (on campus) or through the University of Calgary General VPN (off campus).&lt;br /&gt;
&lt;br /&gt;
See [[Connecting to RCS HPC Systems]] for more information.&lt;br /&gt;
=== How to interact with ARC ===&lt;br /&gt;
&lt;br /&gt;
The ARC cluster is a collection of compute nodes connected by a high-speed network. On ARC, computations are submitted as jobs, which the job scheduler assigns to compute nodes as resources become available.&lt;br /&gt;
&lt;br /&gt;
[[File:Cluster.png]]&lt;br /&gt;
&lt;br /&gt;
You can access ARC with your UCalgary IT user credentials. Once connected, you will be placed on the ARC login node, which is intended for basic tasks such as submitting jobs, monitoring job status, managing files, and editing text. It is a shared resource to which multiple users connect at the same time, so intensive tasks are not allowed on the login node: they may prevent other users from connecting or submitting their computations. &lt;br /&gt;
         [tannistha.nandi@arc ~]$ &lt;br /&gt;
The job scheduling system on ARC is called SLURM.  On ARC, there are two SLURM commands that can allocate resources to a job under appropriate conditions: &#039;salloc&#039; and &#039;sbatch&#039;. They both accept the same set of command-line options with respect to resource allocation. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;salloc&#039;&#039;&#039; is used to launch an interactive session, typically for tasks under 5 hours. &lt;br /&gt;
Once an interactive job session is created, you can do things like explore research datasets, start R or python sessions to test your code, compile software applications etc.&lt;br /&gt;
&lt;br /&gt;
a. Example 1: The following command requests 1 CPU on 1 node for 1 task, along with 1 GB of RAM, for an hour. &lt;br /&gt;
          [tannistha.nandi@arc ~]$ salloc --mem=1G -c 1 -N 1 -n 1  -t 01:00:00&lt;br /&gt;
          salloc: Granted job allocation 6758015&lt;br /&gt;
          salloc: Waiting for resource configuration&lt;br /&gt;
          salloc: Nodes fc4 are ready for job&lt;br /&gt;
          [tannistha.nandi@fc4 ~]$ &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
b. Example 2:  The following command requests 1 GPU on 1 node in the gpu-v100 partition, along with 1 GB of RAM, for 1 hour.  Generic resource scheduling (--gres) is used to request GPU resources.&lt;br /&gt;
         [tannistha.nandi@arc ~]$ salloc --mem=1G -t 01:00:00 -p gpu-v100 --gres=gpu:1&lt;br /&gt;
         salloc: Granted job allocation 6760460&lt;br /&gt;
         salloc: Waiting for resource configuration&lt;br /&gt;
         salloc: Nodes fg3 are ready for job&lt;br /&gt;
         [tannistha.nandi@fg3 ~]$&lt;br /&gt;
&lt;br /&gt;
Once you finish the work, type &#039;exit&#039; at the command prompt to end the interactive session:&lt;br /&gt;
         [tannistha.nandi@fg3 ~]$ exit&lt;br /&gt;
         [tannistha.nandi@fg3 ~]$ salloc: Relinquishing job allocation 6760460&lt;br /&gt;
This ensures that the allocated resources are released from your job and made available to other users.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;sbatch&#039;&#039;&#039; is used to submit computations as jobs to run on the cluster. You can submit a job script, for example job-script.slurm, via &#039;sbatch&#039; for execution:&lt;br /&gt;
         [tannistha.nandi@arc ~]$ sbatch job-script.slurm&lt;br /&gt;
When resources become available, they are allocated to the job. Batch jobs are suited to tasks that run for long periods of time without user supervision. When the job script terminates, the allocation is released. &lt;br /&gt;
Please review the section on how to prepare job scripts for more information.&lt;br /&gt;
&lt;br /&gt;
=== Prepare job scripts  ===&lt;br /&gt;
Job scripts are text files saved with an extension &#039;.slurm&#039;, for example, &#039;job-script.slurm&#039;. &lt;br /&gt;
A job script looks something like this:&lt;br /&gt;
    &#039;&#039;#!/bin/bash&#039;&#039;&lt;br /&gt;
    ####### Reserve computing resources #############&lt;br /&gt;
    #SBATCH --nodes=1&lt;br /&gt;
    #SBATCH --ntasks=1&lt;br /&gt;
    #SBATCH --cpus-per-task=1&lt;br /&gt;
    #SBATCH --time=01:00:00&lt;br /&gt;
    #SBATCH --mem=1G&lt;br /&gt;
    #SBATCH --partition=cpu2019&lt;br /&gt;
    ####### Set environment variables ###############&lt;br /&gt;
    module load python/anaconda3-2018.12&lt;br /&gt;
    ####### Run your script #########################&lt;br /&gt;
    python myscript.py&lt;br /&gt;
&lt;br /&gt;
The first line contains the text &amp;quot;#!/bin/bash&amp;quot; so that the file is interpreted as a bash script.&lt;br /&gt;
&lt;br /&gt;
It is followed by lines that start with &#039;#SBATCH&#039;, which communicate with &#039;SLURM&#039;. You may add as many #SBATCH directives as needed to reserve computing resources for your task. The above example requests one CPU on a single node for 1 task, along with 1 GB of RAM, for an hour on the cpu2019 partition.&lt;br /&gt;
&lt;br /&gt;
Next, you have to set up environment variables, either by loading modules centrally installed on ARC or by exporting the path to software in your home directory. The above example loads an available Python module.&lt;br /&gt;
&lt;br /&gt;
Finally, include the Linux command to execute the local script.&lt;br /&gt;
&lt;br /&gt;
Note that failing to specify part of a resource allocation request (most notably &#039;&#039;&#039;time&#039;&#039;&#039; and &#039;&#039;&#039;memory&#039;&#039;&#039;) will result in bad resource requests, as the defaults are not appropriate for most cases. Please refer to the section &#039;Running non-interactive jobs&#039; for more examples.&lt;br /&gt;
&lt;br /&gt;
== Hardware ==&lt;br /&gt;
Since the ARC cluster is a conglomeration of many different compute clusters, the hardware within ARC can vary widely in terms of performance and capabilities.  To mitigate compatibility issues, we group similar hardware into its own Slurm partition so that your workload runs as consistently as possible within a single partition. Please carefully review the hardware specs for each of the partitions below to avoid any surprises.&lt;br /&gt;
&lt;br /&gt;
=== Partition Hardware Specs ===&lt;br /&gt;
When submitting jobs to ARC, you may specify a partition that your job will run on.  Please choose a partition that is most appropriate for your work.&lt;br /&gt;
&lt;br /&gt;
* See also [[How to find available partitions on ARC]].&lt;br /&gt;
&lt;br /&gt;
A few things to keep in mind when choosing a partition:&lt;br /&gt;
* Specific workloads requiring special Intel Instruction Set Extensions may only work on newer Intel CPUs. &lt;br /&gt;
* If working with multi-node parallel processing, ensure your software and libraries support the partition&#039;s interconnect networking.&lt;br /&gt;
* While older partitions may be slower, they may be less busy and have little to no wait times.&lt;br /&gt;
&lt;br /&gt;
If you are unsure which partition to use or need assistance on selecting an appropriate partition, please see [[#Selecting_a_Partition|the Selecting a Partition Section]] below. &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Partition&lt;br /&gt;
! Description&lt;br /&gt;
! Nodes&lt;br /&gt;
! CPU Cores, Model, and Year&lt;br /&gt;
! Memory&lt;br /&gt;
! GPU&lt;br /&gt;
! Network&lt;br /&gt;
|-&lt;br /&gt;
| -&lt;br /&gt;
| ARC Login Node&lt;br /&gt;
| 1&lt;br /&gt;
| 16 cores, 2x Intel(R) Xeon(R) CPU E5620  @ 2.40GHz (Westmere, 2010)&lt;br /&gt;
| 48 GB&lt;br /&gt;
| N/A&lt;br /&gt;
| 40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| gpu-v100&lt;br /&gt;
| GPU Partition&lt;br /&gt;
| 13&lt;br /&gt;
| 80 cores, 2x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (Skylake, 2019)&lt;br /&gt;
| 754 GB&lt;br /&gt;
| 2x Tesla V100-PCIE-16GB&lt;br /&gt;
| 100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|gpu-a100&lt;br /&gt;
|GPU Partition&lt;br /&gt;
|5&lt;br /&gt;
|40 cores, 1x Intel(R) Xeon(R) Silver 4316 CPU @ 2.30GHz (Ice Lake, 2021)&lt;br /&gt;
|512 GB&lt;br /&gt;
|2x GA100 A100 PCIe 80GB&lt;br /&gt;
|100 Gbit/s Mellanox Infiniband&lt;br /&gt;
|-&lt;br /&gt;
|cpu2022&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|52&lt;br /&gt;
|52 cores, 2x Intel(R) Xeon(R) Gold 5320 CPU @ 2.20GHz (Ice Lake)&lt;br /&gt;
|256 GB&lt;br /&gt;
|N/A&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| cpu2021&lt;br /&gt;
| General Purpose Compute&lt;br /&gt;
| 48&lt;br /&gt;
| 48 cores, 2x Intel(R) Xeon(R) Gold 6240R CPU @ 2.40GHz (Cascade Lake, 2021)&lt;br /&gt;
| 185 GB&lt;br /&gt;
| N/A &lt;br /&gt;
| 100 Gbit/s Mellanox Infiniband&lt;br /&gt;
|-&lt;br /&gt;
| cpu2019&lt;br /&gt;
| General Purpose Compute&lt;br /&gt;
| 14&lt;br /&gt;
| 40 cores, 2x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (Skylake, 2019)&lt;br /&gt;
| 190 GB&lt;br /&gt;
| N/A &lt;br /&gt;
| 100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
| apophis&lt;br /&gt;
| General Purpose Compute&lt;br /&gt;
| 21&lt;br /&gt;
| 40 cores, 2x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (Skylake, 2019)&lt;br /&gt;
| 190 GB&lt;br /&gt;
| N/A &lt;br /&gt;
| 100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
| razi&lt;br /&gt;
| General Purpose Compute&lt;br /&gt;
| 41&lt;br /&gt;
| 40 cores, 2x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (Skylake, 2019)&lt;br /&gt;
| 190 GB&lt;br /&gt;
| N/A &lt;br /&gt;
| 100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
| bigmem&lt;br /&gt;
| Big Memory Nodes&lt;br /&gt;
| 2&lt;br /&gt;
| 80 cores, 4x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (Skylake, 2019)&lt;br /&gt;
| 3022 GB&lt;br /&gt;
| N/A &lt;br /&gt;
| 100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
| pawson&lt;br /&gt;
| General Purpose Compute&lt;br /&gt;
| 13&lt;br /&gt;
| 40 cores, 2x Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz (Skylake, 2019)&lt;br /&gt;
| 190 GB&lt;br /&gt;
| N/A&lt;br /&gt;
| 100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|cpu2017&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|14&lt;br /&gt;
|56 cores, 2x Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz (Broadwell, 2016)&lt;br /&gt;
|256 GB&lt;br /&gt;
|N/A&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| theia&lt;br /&gt;
| Former Theia cluster&lt;br /&gt;
| 20&lt;br /&gt;
| 56 cores, 2x Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz (Broadwell, 2016)&lt;br /&gt;
| 188 GB&lt;br /&gt;
| N/A &lt;br /&gt;
| 40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| cpu2013&lt;br /&gt;
| Former hyperion cluster&lt;br /&gt;
| 12&lt;br /&gt;
| 32 cores, 2x Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz (Sandy Bridge, 2012)&lt;br /&gt;
| 126 GB&lt;br /&gt;
| N/A&lt;br /&gt;
| 40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| lattice&lt;br /&gt;
| Former Lattice cluster&lt;br /&gt;
| 307&lt;br /&gt;
| 8 cores, 2x Intel(R) Xeon(R) CPU L5520  @ 2.27GHz (Nehalem, 2009)&lt;br /&gt;
| 12 GB&lt;br /&gt;
| N/A&lt;br /&gt;
| 40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| single&lt;br /&gt;
| Former Lattice cluster&lt;br /&gt;
| 168&lt;br /&gt;
| 8 cores, 2x Intel(R) Xeon(R) CPU L5520  @ 2.27GHz (Nehalem, 2009)&lt;br /&gt;
| 12 GB&lt;br /&gt;
| N/A&lt;br /&gt;
| 40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| parallel&lt;br /&gt;
| Former Parallel Cluster&lt;br /&gt;
| 576&lt;br /&gt;
| 12 cores, 2x Intel(R) Xeon(R) CPU E5649  @ 2.53GHz (Westmere, 2011)&lt;br /&gt;
| 24 GB&lt;br /&gt;
| N/A&lt;br /&gt;
| 40 Gbit/s InfiniBand&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===ARC Cluster Storage===&lt;br /&gt;
Usage of ARC cluster storage is outlined by our [[ARC Storage Terms of Use]] page.&lt;br /&gt;
&lt;br /&gt;
{{Warning Box&lt;br /&gt;
| title=Data Storage&lt;br /&gt;
| message=ARC storage is not suitable for long-term or archival storage.  It is not backed-up and does not have sufficient redundancy to be used as a primary storage system.  It is not guaranteed to be available for the time periods that are typical of archiving.&lt;br /&gt;
&lt;br /&gt;
Please ensure that the only data you keep on ARC is used for active computations.&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message Box&lt;br /&gt;
| title=No Backup Policy!&lt;br /&gt;
| message=You are responsible for your own backups.  Many researchers will have accounts with Compute Canada and may choose to back up their data there (the Project file system accessible through the Cedar cluster would often be used). &lt;br /&gt;
&lt;br /&gt;
Please contact us at support@hpc.ucalgary.ca if you want more information about this option.&lt;br /&gt;
&lt;br /&gt;
You can also back up data to your UofC OneDrive for Business allocation; see: https://rcs.ucalgary.ca/How_to_transfer_data#rclone:_rsync_for_cloud_storage This allocation starts at 5 TB. Contact the support center for questions regarding OneDrive for Business.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
The ARC cluster has around 2 petabytes of shared disk storage available across the entire cluster, as well as temporary storage local to each of the compute nodes. Please refer to the individual sections below for capacity limitations and usage policies. &lt;br /&gt;
&lt;br /&gt;
Use the &amp;lt;code&amp;gt;arc.quota&amp;lt;/code&amp;gt; command on ARC to determine the available space on your various volumes and home directory.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Description&lt;br /&gt;
!Capacity&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;&lt;br /&gt;
|User home directories&lt;br /&gt;
|500 GB (per user)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;/work&amp;lt;/code&amp;gt;&lt;br /&gt;
|Research project storage&lt;br /&gt;
|Up to 100&#039;s of TB&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt;&lt;br /&gt;
|Scratch space for temporary files&lt;br /&gt;
|Up to 15 TB&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt;&lt;br /&gt;
|Temporary space local to the compute cluster&lt;br /&gt;
|Dependent on available storage on nodes. Verify with &amp;lt;code&amp;gt;df -h&amp;lt;/code&amp;gt;.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;/dev/shm&amp;lt;/code&amp;gt;&lt;br /&gt;
|Small temporary in-memory disk space local to the compute cluster&lt;br /&gt;
|Dependent on memory size set in your Slurm job.&lt;br /&gt;
|}&lt;br /&gt;
====&amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;: Home file system====&lt;br /&gt;
Each user has a directory under /home, which is the default working directory when logging in to ARC. Each home directory has a per-user quota of 500 GB. This limit is fixed and cannot be increased. Researchers requiring additional storage beyond what is available in their home directory may use &amp;lt;code&amp;gt;/work&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note on file sharing: due to security concerns, permissions set using &amp;lt;code&amp;gt;chmod&amp;lt;/code&amp;gt; on your home directory to allow other users to read or write to it will be automatically reverted by an automated system process unless an explicit exception is made.  If you need to share files with other researchers on the ARC cluster, please write to support@hpc.ucalgary.ca to ask for such an exception.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt;: Scratch file system for large job-oriented storage====&lt;br /&gt;
Associated with each job, under the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; directory, a subdirectory is created that can be referenced in job scripts as &amp;lt;code&amp;gt;/scratch/${SLURM_JOB_ID}&amp;lt;/code&amp;gt;. You can use that directory for temporary files needed during the course of a job. Up to 15 TB of storage may be used, per user (total for all your jobs) in the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; file system. &lt;br /&gt;
&lt;br /&gt;
Data in &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; associated with a given job will be deleted automatically, without exception, five days after the job finishes.&lt;br /&gt;
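As a sketch of the staging pattern this enables (the input, output, and program names are placeholders, not real paths on ARC), a job script can keep its working files in the per-job scratch directory and copy results home before finishing:&lt;br /&gt;

```bash
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=1G
#SBATCH --time=01:00:00

# Per-job directory created by the scheduler; its contents are deleted
# automatically five days after the job finishes.
SCRATCH="/scratch/${SLURM_JOB_ID}"

cp "$HOME/input.dat" "$SCRATCH/"            # input.dat is a placeholder
cd "$SCRATCH"
"$HOME/my_program" input.dat > output.dat   # my_program is a placeholder
cp output.dat "$HOME/"                      # copy results back before the job ends
```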
&lt;br /&gt;
====&amp;lt;code&amp;gt;/work&amp;lt;/code&amp;gt;: Work file system for larger projects====&lt;br /&gt;
If you need more space than provided in &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt; and the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; job-oriented space is not appropriate for your case, please write to support@hpc.ucalgary.ca with an explanation, including an indication of how much storage you expect to need and for how long.  If approved, you will then be assigned a directory under &amp;lt;code&amp;gt;/work&amp;lt;/code&amp;gt; with an appropriately large quota.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt;,&amp;lt;code&amp;gt;/var/tmp&amp;lt;/code&amp;gt;: Temporary files====&lt;br /&gt;
You may use &amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;/var/tmp&amp;lt;/code&amp;gt; for storing temporary files generated by your job. &amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt; is stored on a disk local to the compute node and is not shared across the cluster. Files stored here will be removed immediately after your job terminates.&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;code&amp;gt;/dev/shm&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;/run/user/$uid&amp;lt;/code&amp;gt;: In-memory temporary files ====&lt;br /&gt;
&amp;lt;code&amp;gt;/dev/shm&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/run/user/$UID&amp;lt;/code&amp;gt; are writable locations for temporary files backed by virtual memory. They can be used if faster I/O is required, and are ideal for workloads that make many small reads and writes to share data between processes, or that need a fast cache. The amount of data you can write here depends on the amount of free memory available to your job. Files stored at these locations will be removed immediately after your job terminates.&lt;br /&gt;
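As a small sketch (the directory and file names are illustrative), a job can use &amp;lt;code&amp;gt;/dev/shm&amp;lt;/code&amp;gt; as a fast cache for many small files:&lt;br /&gt;

```shell
#!/bin/bash
# Create a private cache directory backed by memory, not the shared file system.
CACHE="/dev/shm/${USER:-demo}_cache"
mkdir -p "$CACHE"

# Many small writes like these hit memory and avoid shared-file-system overhead.
for i in $(seq 1 100); do
    echo "record $i" > "$CACHE/part_$i.txt"
done

count=$(cat "$CACHE"/part_*.txt | wc -l)
echo "$count records cached"   # prints: 100 records cached

rm -rf "$CACHE"   # free the memory; these files vanish when the job ends anyway
```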
&lt;br /&gt;
== Software ==&lt;br /&gt;
All ARC nodes run the latest version of Rocky Linux 8 with the same set of base software packages. To maintain the stability and consistency of all nodes, any additional dependencies that your software requires must be installed under your account.  For your convenience, we have packaged commonly used software packages and dependencies as modules available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt;. If your software package is not available as a module, you may also try Anaconda which allows users to manage and install custom packages in an isolated environment.&lt;br /&gt;
&lt;br /&gt;
For a list of available software packages, please see [[ARC Software pages]]. &lt;br /&gt;
&lt;br /&gt;
Please contact us at support@hpc.ucalgary.ca if you need additional software installed.&lt;br /&gt;
&lt;br /&gt;
==== Modules ====&lt;br /&gt;
The setup of the environment for using some of the installed software is through the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command. An overview of [https://www.westgrid.ca//support/modules modules on WestGrid (external link)] is largely applicable to ARC.&lt;br /&gt;
&lt;br /&gt;
Software packages bundled as a module will be available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt; and can be listed with the &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module avail&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To enable Python, load the Python module by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module load python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To unload the Python module, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module remove python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To see currently loaded modules, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module list&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By default, no modules are loaded on ARC. If you wish to use a specific module, such as the Intel compilers or the Open MPI parallel programming packages, you must load the appropriate module.&lt;br /&gt;
&lt;br /&gt;
== Job submission ==&lt;br /&gt;
&lt;br /&gt;
=== Interactive Jobs ===&lt;br /&gt;
The ARC login node may be used for such tasks as editing files, compiling programs and running short tests while developing programs. We suggest CPU intensive workloads on the login node be restricted to under 15 minutes as per [[General Cluster Guidelines and Policies|our cluster guidelines]]. For interactive workloads exceeding 15 minutes, use the &#039;&#039;&#039;[[Running_jobs#Interactive_jobs|salloc command]]&#039;&#039;&#039; to allocate an interactive session on a compute node.&lt;br /&gt;
&lt;br /&gt;
The default salloc allocation is 1 CPU and 1 GB of memory. Adjust this by specifying &amp;lt;code&amp;gt;-n CPU#&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--mem Megabytes&amp;lt;/code&amp;gt;. You may request up to 5 hours of run time for interactive jobs.&lt;br /&gt;
 salloc --time=5:00:00 --partition=cpu2019&lt;br /&gt;
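&lt;br /&gt;
As a fuller sketch (the CPU, memory and time values here are illustrative, not recommendations), you could request 4 CPUs and 16 GB of memory for 2 hours on the cpu2019 partition with:&lt;br /&gt;
 salloc -n 4 --mem=16384 --time=2:00:00 --partition=cpu2019&lt;br /&gt;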
&lt;br /&gt;
Always use salloc or srun to start an interactive job. Do not SSH directly to a compute node, as SSH sessions are refused unless you have an active job running on that node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- This information doesn&#039;t seem that useful or relevant to running interactive jobs. Move to getting started section?&lt;br /&gt;
ARC uses the Linux operating system. The program that responds to your typed commands and allows you to run other programs is called the Linux shell. There are several different shells available, but, by default you will use one called bash. It is useful to have some knowledge of the shell and a variety of other command-line programs that you can use to manipulate files. If you are new to Linux systems, we recommend that you work through one of the many online tutorials that are available, such as the [http://www.ee.surrey.ac.uk/Teaching/Unix/index.html UNIX Tutorial for Beginners (external link)] provided by the University of Surrey. The tutorial covers such fundamental topics, among others, as creating, renaming and deleting files and directories, how to produce a listing of your files and how to tell how much disk space you are using.  For a more comprehensive introduction to Linux, see [http://linuxcommand.sourceforge.net/tlcl.php The Linux Command Line (external link)].&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Running non-interactive jobs (batch processing) ===&lt;br /&gt;
Production runs and longer test runs should be submitted as (non-interactive) batch jobs, in which the commands to be executed are listed in a script (text file). Batch job scripts are submitted using the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command, part of the Slurm job management and scheduling software. &amp;lt;code&amp;gt;#SBATCH&amp;lt;/code&amp;gt; directive lines at the beginning of the script specify the resources needed for the job (cores, memory, run time limit and any specialized hardware needed).&lt;br /&gt;
&lt;br /&gt;
Most of the information on the [https://docs.computecanada.ca/wiki/Running_jobs Running Jobs (external link)] page on the Compute Canada web site is also relevant for submitting and managing batch jobs and reserving processors for interactive work on ARC.  One major difference between running jobs on the ARC and Compute Canada clusters is in selecting the type of hardware that should be used for a job. On ARC, you choose the hardware to use primarily by specifying a partition, as described below.&lt;br /&gt;
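&lt;br /&gt;
As a minimal sketch of such a script (the job name, resource values and echo command here are hypothetical placeholders; adjust them for your own work):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=my_job        # hypothetical job name&lt;br /&gt;
#SBATCH --ntasks=1               # one task&lt;br /&gt;
#SBATCH --cpus-per-task=1        # one CPU core&lt;br /&gt;
#SBATCH --mem=1000               # memory in MB&lt;br /&gt;
#SBATCH --time=0-01:00:00        # run time limit (d-hh:mm:ss)&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Running on $(hostname)&amp;quot;    # replace with your own program&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Submit the script with &amp;lt;code&amp;gt;sbatch myscript.slurm&amp;lt;/code&amp;gt; (the file name is arbitrary).&lt;br /&gt;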
&lt;br /&gt;
=== Selecting a Partition ===&lt;br /&gt;
There are several aspects to consider when selecting a partition, including:&lt;br /&gt;
* Resource requirements in terms of memory and CPU cores&lt;br /&gt;
* Hardware specific requirements, such as GPU or CPU Instruction Set Extensions&lt;br /&gt;
* Partition resource limits and potential wait time&lt;br /&gt;
* Software support for parallel processing using Message Passing Interface (MPI), OpenMP, etc.&lt;br /&gt;
** E.g., since MPI code can distribute its memory across multiple nodes, its per-node memory requirements may be lower. In contrast, OpenMP or single-process code that is restricted to one node would require a higher-memory node.&lt;br /&gt;
** Note: MPI code running on hardware with Omni-Path networking should be compiled with Omni-Path networking support. This is provided by loading the &amp;lt;code&amp;gt;openmpi/2.1.3-opa&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;openmpi/3.1.2-opa&amp;lt;/code&amp;gt; modules prior to compiling.&lt;br /&gt;
&lt;br /&gt;
Since resources that are requested are reserved for your job, please request only as much CPU and memory as your job requires to avoid reducing the cluster efficiency.  If you are unsure which partition to use or the specific resource requests that are appropriate for your jobs, please contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] and we would be happy to work with you.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%;&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Description&lt;br /&gt;
!Cores/node&lt;br /&gt;
!Memory Request Limit&lt;br /&gt;
!Time Limit&lt;br /&gt;
!GPU&lt;br /&gt;
!Networking&lt;br /&gt;
|-&lt;br /&gt;
|cpu2021&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|48&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|cpu2019&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|40&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|Big Memory Compute&lt;br /&gt;
|80&lt;br /&gt;
|3,000,000 MB&lt;br /&gt;
|24 hours ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|gpu-v100&lt;br /&gt;
|GPU Compute&lt;br /&gt;
|80&lt;br /&gt;
|753,000 MB&lt;br /&gt;
|24 hours ‡&lt;br /&gt;
|2&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|apophis&amp;amp;dagger;&lt;br /&gt;
|Private Research Partition&lt;br /&gt;
|40&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|razi&amp;amp;dagger;&lt;br /&gt;
|Private Research Partition&lt;br /&gt;
|40&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|pawson&amp;amp;dagger;&lt;br /&gt;
|Private Research Partition&lt;br /&gt;
|40&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|sherlock&amp;amp;dagger;&lt;br /&gt;
|Private Research Partition&lt;br /&gt;
|7&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|theia&amp;amp;dagger;&lt;br /&gt;
|Private Research Partition&lt;br /&gt;
|28&lt;br /&gt;
|188,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|synergy&amp;amp;dagger;&lt;br /&gt;
|Private Research Partition&lt;br /&gt;
|14&lt;br /&gt;
|245,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu2013&lt;br /&gt;
|Legacy General Purpose Compute&lt;br /&gt;
|16&lt;br /&gt;
|120,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|lattice&lt;br /&gt;
|Legacy General Purpose Compute&lt;br /&gt;
|8&lt;br /&gt;
|12,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|parallel&lt;br /&gt;
|Legacy General Purpose Compute&lt;br /&gt;
|12&lt;br /&gt;
|23,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|single&lt;br /&gt;
|Legacy Single-Node Job Compute&lt;br /&gt;
|8&lt;br /&gt;
|12,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu2021-bf24&lt;br /&gt;
|Back-fill Compute (2021-era hardware, 24h)&lt;br /&gt;
|48&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|24 hours ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|cpu2019-bf05&lt;br /&gt;
|Back-fill Compute (2019-era hardware, 5h)&lt;br /&gt;
|40&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|5 hours ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|cpu2017-bf05&lt;br /&gt;
|Back-fill Compute (2017-era hardware, 5h)&lt;br /&gt;
|14&lt;br /&gt;
|245,000 MB&lt;br /&gt;
|5 hours ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|+ style=&amp;quot;caption-side: bottom; text-align: left; font-weight: normal;&amp;quot; | &amp;amp;dagger; These partitions contain hardware contributed to ARC by particular researchers and should only be used by members of their research groups. However, these researchers have generously allowed their compute nodes to be shared with others outside their research groups for short jobs: special &#039;back-fill&#039; (-bf) partitions are available for use by all ARC users, subject to the time limits shown above.&amp;lt;br /&amp;gt;‡ As time limits may be changed by administrators to accommodate maintenance schedules or system load, the values given in the table are not definitive. See the Time limits section below for commands you can use on ARC itself to determine the current limits.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==== Backfill partitions ====&lt;br /&gt;
Backfill partitions can be used by all users on ARC for short-term jobs. The hardware backing these partitions is generously contributed by researchers. We recommend including the backfill partitions for short-term jobs, as this may reduce your job&#039;s wait time and increase overall cluster throughput.&lt;br /&gt;
&lt;br /&gt;
Previously, each contributing research group had their own backfill partition. Since June 2021, we have merged:&lt;br /&gt;
&lt;br /&gt;
* apophis-bf, pawson-bf, and razi-bf into cpu2019-bf05 &lt;br /&gt;
* theia-bf and synergy-bf into cpu2017-bf05&lt;br /&gt;
&lt;br /&gt;
The naming scheme of the backfill partitions is the CPU generation year, followed by -bf and the time limit in hours.  For example, cpu2017-bf05 would represent a backfill partition containing processors from 2017 with a time limit of 5 hours.&lt;br /&gt;
&lt;br /&gt;
==== Hardware resource and job policy limits ====&lt;br /&gt;
In addition to the hardware limitations, please be aware that there may also be policy limits imposed on your account for each partition. These limits restrict the number of cores, nodes, or GPUs that can be used at any given time. Since the limits are applied on a partition-by-partition basis, using resources in one partition should not affect the available resources you can use in another partition.&lt;br /&gt;
&lt;br /&gt;
These limits can be listed by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sacctmgr show qos format=Name,MaxWall,MaxTRESPU%20,MaxSubmitJobs&lt;br /&gt;
      Name     MaxWall            MaxTRESPU MaxSubmit&lt;br /&gt;
---------- ----------- -------------------- ---------&lt;br /&gt;
    normal  7-00:00:00                           2000&lt;br /&gt;
    breezy  3-00:00:00              cpu=384      2000&lt;br /&gt;
       gpu  7-00:00:00                          13000&lt;br /&gt;
   cpu2019  7-00:00:00              cpu=240      2000&lt;br /&gt;
  gpu-v100  1-00:00:00    cpu=80,gres/gpu=4      2000&lt;br /&gt;
    single  7-00:00:00      cpu=408,node=75      2000&lt;br /&gt;
      razi  7-00:00:00                           2000&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Specifying a partition in a job ====&lt;br /&gt;
Once you have decided which partition best suits your computation, you can select one or more partitions on a job-by-job basis by including the &amp;lt;code&amp;gt;partition&amp;lt;/code&amp;gt; keyword in an &amp;lt;code&amp;gt;#SBATCH&amp;lt;/code&amp;gt; directive in your batch job. Multiple partitions should be comma separated. If you omit the partition specification, the system will try to assign your job to appropriate hardware based on other aspects of your request.&lt;br /&gt;
&lt;br /&gt;
In some cases, you should specify the partition explicitly. For example, if you are running single-node jobs with thread-based parallel processing that request 8 cores, you could use:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --mem=0              ❶&lt;br /&gt;
#SBATCH --nodes=1            ❷&lt;br /&gt;
#SBATCH --ntasks=1           ❸&lt;br /&gt;
#SBATCH --cpus-per-task=8    ❹&lt;br /&gt;
#SBATCH --partition=single,lattice   ❺ &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A few things to mention in this example:&lt;br /&gt;
# &amp;lt;code&amp;gt;--mem=0&amp;lt;/code&amp;gt; allocates all available memory on the compute node for the job. This effectively allocates the entire node for your job.&lt;br /&gt;
# &amp;lt;code&amp;gt;--nodes=1&amp;lt;/code&amp;gt; allocates 1 node for the job&lt;br /&gt;
# &amp;lt;code&amp;gt;--ntasks=1&amp;lt;/code&amp;gt; specifies that your job has a single task&lt;br /&gt;
# &amp;lt;code&amp;gt;--cpus-per-task=8&amp;lt;/code&amp;gt; asks for 8 CPUs per task. In total, this job requests 8 * 1 = 8 CPUs.&lt;br /&gt;
# &amp;lt;code&amp;gt;--partition=single,lattice&amp;lt;/code&amp;gt; specifies that this job can run on either the single or the lattice partition.&lt;br /&gt;
Suppose that your job requires at most 8 CPU cores and 10 GB of memory. The above Slurm request would be valid and optimal, since your job fits neatly on a single node in the single and lattice partitions. However, if you failed to specify the partition, Slurm might schedule your job to a partition with larger nodes, such as cpu2019, where each node has 40 cores and 190 GB of memory. If your job is scheduled on such a node, it will effectively waste 32 cores and 180 GB of memory, because &amp;lt;code&amp;gt;--mem=0&amp;lt;/code&amp;gt; not only requests all 190 GB on that node, but also prevents other jobs from being scheduled on the same node.&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t specify a partition, please give greater thought to the memory specification to make sure that the scheduler will not assign your job more resources than are needed.&lt;br /&gt;
&lt;br /&gt;
Parameters such as &#039;&#039;&#039;--ntasks-per-node&#039;&#039;&#039;, &#039;&#039;&#039;--cpus-per-task&#039;&#039;&#039;, &#039;&#039;&#039;--mem&#039;&#039;&#039; and &#039;&#039;&#039;--mem-per-cpu&#039;&#039;&#039; also have to be adjusted according to the capabilities of the hardware. The product of --ntasks-per-node and --cpus-per-task should be less than or equal to the number given in the &amp;quot;Cores/node&amp;quot; column. The &#039;&#039;&#039;--mem&#039;&#039;&#039; parameter (or the product of &#039;&#039;&#039;--mem-per-cpu&#039;&#039;&#039; and the number of CPUs allocated per node) should be less than the &amp;quot;Memory Request Limit&amp;quot; shown. If using whole nodes, you can specify &#039;&#039;&#039;--mem=0&#039;&#039;&#039; to request the maximum amount of memory per node.&lt;br /&gt;
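&lt;br /&gt;
As a sketch using the cpu2019 values from the table above (40 cores and 185,000 MB per node; the task and memory numbers below are illustrative, not recommendations):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --partition=cpu2019&lt;br /&gt;
#SBATCH --ntasks-per-node=4      # 4 tasks per node&lt;br /&gt;
#SBATCH --cpus-per-task=10       # 4 * 10 = 40 CPUs, equal to Cores/node&lt;br /&gt;
#SBATCH --mem-per-cpu=4000       # 4,000 MB * 40 CPUs = 160,000 MB, under the 185,000 MB limit&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;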
&lt;br /&gt;
===== Examples =====&lt;br /&gt;
Here are some examples of specifying the various partitions.&lt;br /&gt;
&lt;br /&gt;
As mentioned in the [[#Hardware|Hardware]] section above, the ARC cluster was expanded in January 2019.  To select the 40-core general purpose nodes specify:&lt;br /&gt;
&lt;br /&gt;
 #SBATCH --partition=cpu2019&lt;br /&gt;
&lt;br /&gt;
To run on the Tesla V100 GPU-enabled nodes, use the &#039;&#039;&#039;gpu-v100&#039;&#039;&#039; partition.  You will also need to include an SBATCH directive in the form &#039;&#039;&#039;--gres=gpu:n&#039;&#039;&#039; to specify the number of GPUs, n, that you need.  For example, if the software you are running can make use of both GPUs on a gpu-v100 partition compute node, use:&lt;br /&gt;
&lt;br /&gt;
 #SBATCH --partition=gpu-v100 --gres=gpu:2&lt;br /&gt;
&lt;br /&gt;
For very large memory jobs (more than 185,000 MB), specify the bigmem partition:&lt;br /&gt;
&lt;br /&gt;
 #SBATCH --partition=bigmem&lt;br /&gt;
&lt;br /&gt;
If the more modern computers are too busy, or you have a job well-suited to the compute nodes described in the legacy hardware section above, choose the cpu2013, lattice or parallel compute nodes by specifying the corresponding partition keyword:&lt;br /&gt;
&lt;br /&gt;
 #SBATCH --partition=cpu2013&lt;br /&gt;
 #SBATCH --partition=lattice&lt;br /&gt;
 #SBATCH --partition=parallel&lt;br /&gt;
&lt;br /&gt;
There is an additional partition called &#039;&#039;&#039;single&#039;&#039;&#039; that provides nodes similar to the lattice partition, but is intended for single-node jobs. Select the single partition with:&lt;br /&gt;
&lt;br /&gt;
 #SBATCH --partition=single&lt;br /&gt;
&lt;br /&gt;
=== Time limits ===&lt;br /&gt;
Use the &amp;lt;code&amp;gt;--time&amp;lt;/code&amp;gt; directive to tell the job scheduler the maximum time that your job might run.  For example:&lt;br /&gt;
 #SBATCH --time=hh:mm:ss&lt;br /&gt;
&lt;br /&gt;
You can use &amp;lt;code&amp;gt;scontrol show partitions&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; to see the current maximum time that a job can run.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; highlight=&amp;quot;6&amp;quot;&amp;gt;&lt;br /&gt;
$ scontrol show partitions&lt;br /&gt;
PartitionName=single                                                                 &lt;br /&gt;
   AllowGroups=ALL AllowAccounts=ALL AllowQos=ALL                                    &lt;br /&gt;
   AllocNodes=ALL Default=NO QoS=single                                              &lt;br /&gt;
   DefaultTime=NONE DisableRootJobs=NO ExclusiveUser=NO GraceTime=0 Hidden=NO        &lt;br /&gt;
   MaxNodes=UNLIMITED MaxTime=7-00:00:00 MinNodes=1 LLN=NO MaxCPUsPerNode=UNLIMITED  &lt;br /&gt;
   Nodes=cn[001-168]                                                                 &lt;br /&gt;
   PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO        &lt;br /&gt;
   OverTimeLimit=NONE PreemptMode=OFF                                                &lt;br /&gt;
   State=UP TotalCPUs=1344 TotalNodes=168 SelectTypeParameters=NONE                  &lt;br /&gt;
   DefMemPerNode=UNLIMITED MaxMemPerNode=UNLIMITED                                   &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, the limits appear in the &amp;lt;code&amp;gt;TIMELIMIT&amp;lt;/code&amp;gt; column of the &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; output:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sinfo                                                     &lt;br /&gt;
PARTITION  AVAIL  TIMELIMIT  NODES  STATE NODELIST               &lt;br /&gt;
single        up 7-00:00:00      1 drain* cn097                  &lt;br /&gt;
single        up 7-00:00:00      1  maint cn002                  &lt;br /&gt;
single        up 7-00:00:00      4 drain* cn[001,061,133,154]    &lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Support ==&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=[[Support|Need Help or have other ARC Related Questions?]]&lt;br /&gt;
|message=For all general RCS related issues, questions, or comments, please contact us at support@hpc.ucalgary.ca.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t hesitate to [[Support|contact us]] directly by email if you need help using ARC or require guidance on migrating your workflows to ARC and running them there.&lt;br /&gt;
&lt;br /&gt;
[[Category:ARC]]&lt;br /&gt;
[[Category:Guides]]&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Template:Warning_Box&amp;diff=2637</id>
		<title>Template:Warning Box</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Template:Warning_Box&amp;diff=2637"/>
		<updated>2023-09-14T23:05:22Z</updated>

		<summary type="html">&lt;p&gt;Darcy: Created page with &amp;quot;&amp;lt;table role=&amp;quot;presentation&amp;quot; style=&amp;quot;border: 1px solid #a2a9b1; background-color: #f8f9fa; width: 80%; text-align: left; margin: 1em auto 1em auto; padding-right: 15px;&amp;quot;&amp;gt; &amp;lt;tr&amp;gt;  &amp;lt;td style=&amp;quot;width: 5em; text-align: center;&amp;quot;&amp;gt; 40px &amp;lt;/td&amp;gt;  &amp;lt;td&amp;gt; &amp;lt;div style=&amp;quot;padding-top: 6px; padding-bottom: 5px;&amp;quot;&amp;gt; &amp;lt;b&amp;gt;{{{title|{{{1|Default Title}}}}}}&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt; &amp;lt;div style=&amp;quot;font-size: 90%;&amp;quot;&amp;gt;{{{message|{{{2|Default message}}}}}}&amp;lt;/div&amp;gt; &amp;lt;/div&amp;gt;  &amp;lt;/td&amp;gt; &amp;lt;/table&amp;gt;&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;table role=&amp;quot;presentation&amp;quot; style=&amp;quot;border: 1px solid #a2a9b1; background-color: #f8f9fa; width: 80%; text-align: left; margin: 1em auto 1em auto; padding-right: 15px;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;td style=&amp;quot;width: 5em; text-align: center;&amp;quot;&amp;gt;&lt;br /&gt;
[[File:{{{icon|{{{3|Attention Icon.png}}}}}}|40px]]&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;padding-top: 6px; padding-bottom: 5px;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;{{{title|{{{1|Default Title}}}}}}&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size: 90%;&amp;quot;&amp;gt;{{{message|{{{2|Default message}}}}}}&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=File:Attention_Icon.png&amp;diff=2636</id>
		<title>File:Attention Icon.png</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=File:Attention_Icon.png&amp;diff=2636"/>
		<updated>2023-09-14T22:51:14Z</updated>

		<summary type="html">&lt;p&gt;Darcy: Direct the user to pay attention to this because it is important&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Direct the user to pay attention to this because it is important&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=RCS_Data_Sheet&amp;diff=2462</id>
		<title>RCS Data Sheet</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=RCS_Data_Sheet&amp;diff=2462"/>
		<updated>2023-05-01T21:00:29Z</updated>

		<summary type="html">&lt;p&gt;Darcy: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Specifications&lt;br /&gt;
&lt;br /&gt;
HPC CPU/GPU Service&lt;br /&gt;
&lt;br /&gt;
L4 HPC CPU/GPU/VM Service&lt;br /&gt;
&lt;br /&gt;
HPC Desktop Service&lt;br /&gt;
&lt;br /&gt;
L4 HPC Storage Service&lt;br /&gt;
&lt;br /&gt;
CloudStack VM Service&lt;br /&gt;
&lt;br /&gt;
OneFS Storage Service&lt;br /&gt;
&lt;br /&gt;
Instrument Storage Service&lt;br /&gt;
&lt;br /&gt;
HPC Access Methods&lt;br /&gt;
&lt;br /&gt;
CloudStack Access Methods&lt;br /&gt;
&lt;br /&gt;
HPC Desktop Access Methods&lt;br /&gt;
&lt;br /&gt;
L4 Services Access Methods&lt;br /&gt;
&lt;br /&gt;
OneFS Access Methods&lt;br /&gt;
&lt;br /&gt;
Instrument Storage Access Methods&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=RCS_Data_Sheet&amp;diff=2461</id>
		<title>RCS Data Sheet</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=RCS_Data_Sheet&amp;diff=2461"/>
		<updated>2023-05-01T21:00:03Z</updated>

		<summary type="html">&lt;p&gt;Darcy: Created page with &amp;quot;Specifications  HPC CPU/GPU Service L4 HPC CPU/GPU/VM Service HPC Desktop Service L4 HPC Storage Service CloudStack VM Service OneFS Storage Service Instrument Storage Service  HPC Access Methods CloudStack Access Methods HPC Desktop Access Methods L4 Services Access Methods OneFS Access Methods Instrument Storage Access Methods&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Specifications&lt;br /&gt;
&lt;br /&gt;
HPC CPU/GPU Service&lt;br /&gt;
L4 HPC CPU/GPU/VM Service&lt;br /&gt;
HPC Desktop Service&lt;br /&gt;
L4 HPC Storage Service&lt;br /&gt;
CloudStack VM Service&lt;br /&gt;
OneFS Storage Service&lt;br /&gt;
Instrument Storage Service&lt;br /&gt;
&lt;br /&gt;
HPC Access Methods&lt;br /&gt;
CloudStack Access Methods&lt;br /&gt;
HPC Desktop Access Methods&lt;br /&gt;
L4 Services Access Methods&lt;br /&gt;
OneFS Access Methods&lt;br /&gt;
Instrument Storage Access Methods&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack_End_User_Agreement&amp;diff=2089</id>
		<title>CloudStack End User Agreement</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack_End_User_Agreement&amp;diff=2089"/>
		<updated>2022-08-30T22:12:57Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* Important Notes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
CloudStack is an Infrastructure as a Service provided to University of Calgary researchers.  It allows them to quickly deploy Virtual Machines to support their research.&lt;br /&gt;
&lt;br /&gt;
Researchers have a great degree of freedom in how they use CloudStack.  They are solely responsible for maintaining any VMs they deploy.&lt;br /&gt;
&lt;br /&gt;
CloudStack is a research environment.  While the system is available 24/7, support is only available during University business hours.&lt;br /&gt;
&lt;br /&gt;
Researchers are asked to follow the best practices listed below to ensure that the system remains available for the campus community.&lt;br /&gt;
&lt;br /&gt;
==Best Practices==&lt;br /&gt;
&lt;br /&gt;
# Please stay abreast of security updates for your OS and apply them.&lt;br /&gt;
# Backup your data to non-CloudStack hosted storage (RCS does not provide data backups of VMs).&lt;br /&gt;
# Do not run Windows (This infrastructure is not licensed to run Windows).&lt;br /&gt;
# The University&#039;s &amp;quot;Acceptable Use of Electronic Resources and Information Policy&amp;quot; applies to your work using a VM.  Please see [https://www.ucalgary.ca/legal-services/university-policies-procedures/ University Policies and Procedures] and search for &amp;quot;Electronic&amp;quot;.&lt;br /&gt;
# This infrastructure is only rated to handle Level 1 and Level 2 data. Please see [https://www.ucalgary.ca/legal-services/university-legal-services/operating-standards-guidelines-forms/ University Legal Services] and select &amp;quot;Information Security Classification Standard&amp;quot; for details.&lt;br /&gt;
# You are responsible for the appropriate use of the VM by any accounts you have created.&lt;br /&gt;
# You should remove/disable accounts that are no longer required.&lt;br /&gt;
# All user accounts on the VM must have good passwords. See [https://it.ucalgary.ca/it-security/passwords-do-i-have-change-them/ here] for details on creating strong passwords.&lt;br /&gt;
# CloudStack is not meant as a High Performance Computing (HPC) number cruncher. If you have HPC needs, please see &amp;quot;[[How to get an account]]&amp;quot; for details on how to apply for an account on ARC.  Not sure what you need?  Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca].&lt;br /&gt;
# If your VM faces the outside world, please consider using appropriate security tools.  Contact [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] for assistance.&lt;br /&gt;
&lt;br /&gt;
==Important Notes==&lt;br /&gt;
In the event of a security incident, IT Operations/Security will shut down affected VMs.&lt;br /&gt;
&lt;br /&gt;
Non-critical patches/upgrades to CloudStack that are required will happen on Tuesday of each week.  Running VMs should not be affected.&lt;br /&gt;
&lt;br /&gt;
Non-critical patches/upgrades that require a complete restart of CloudStack will occur after 1 day email notice.&lt;br /&gt;
&lt;br /&gt;
Urgent security patches to CloudStack may happen with little to no notice.&lt;br /&gt;
&lt;br /&gt;
CloudStack is provided as-is, with best effort support.  It is not suitable for mission critical, high availability services.&lt;br /&gt;
&lt;br /&gt;
Please check back with this document as changes may occur.&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack_End_User_Agreement&amp;diff=2082</id>
		<title>CloudStack End User Agreement</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack_End_User_Agreement&amp;diff=2082"/>
		<updated>2022-08-24T17:15:41Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* Best Practices */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
CloudStack is an Infrastructure as a Service provided to University of Calgary researchers.  It allows them to quickly deploy Virtual Machines to support their research.&lt;br /&gt;
&lt;br /&gt;
Researchers have a great degree of freedom in how they use CloudStack.  They are solely responsible for maintaining any VMs they deploy.&lt;br /&gt;
&lt;br /&gt;
CloudStack is a research environment.  While the system is available 24/7, support is only available during University business hours.&lt;br /&gt;
&lt;br /&gt;
Researchers are asked to follow the best practices listed below to ensure that the system remains available for the campus community.&lt;br /&gt;
&lt;br /&gt;
==Best Practices==&lt;br /&gt;
&lt;br /&gt;
# Please stay abreast of security updates for your OS and apply them.&lt;br /&gt;
# Backup your data to non-CloudStack hosted storage (RCS does not provide data backups of VMs).&lt;br /&gt;
# Do not run Windows (This infrastructure is not licensed to run Windows).&lt;br /&gt;
# The University&#039;s &amp;quot;Acceptable Use of Electronic Resources and Information Policy&amp;quot; applies to your work using a VM.  Please see [https://www.ucalgary.ca/legal-services/university-policies-procedures/ University Policies and Procedures] and search for &amp;quot;Electronic&amp;quot;.&lt;br /&gt;
# This infrastructure is only rated to handle Level 1 and Level 2 data. Please see [https://www.ucalgary.ca/legal-services/university-legal-services/operating-standards-guidelines-forms/ University Legal Services] and select &amp;quot;Information Security Classification Standard&amp;quot; for details.&lt;br /&gt;
# You are responsible for the appropriate use of the VM by any accounts you have created.&lt;br /&gt;
# You should remove/disable accounts that are no longer required.&lt;br /&gt;
# All user accounts on the VM must have good passwords. See [https://it.ucalgary.ca/it-security/passwords-do-i-have-change-them/ here] for details on creating strong passwords.&lt;br /&gt;
# CloudStack is not meant as a High Performance Computing (HPC) number cruncher. If you have HPC needs, please see &amp;quot;[[How to get an account]]&amp;quot; for details on how to apply for an account on ARC.  Not sure what you need?  Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca].&lt;br /&gt;
# If your VM faces the outside world, please consider using appropriate security tools.  Contact [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] for assistance.&lt;br /&gt;
&lt;br /&gt;
==Important Notes==&lt;br /&gt;
In the event of a security incident, IT Operations/Security will shut down affected VMs.&lt;br /&gt;
&lt;br /&gt;
Non-critical patches/upgrades to CloudStack that are required will happen on Tuesday of each week.  Running VMs should not be affected.&lt;br /&gt;
&lt;br /&gt;
Non-critical patches/upgrades that require a complete restart of CloudStack will occur after 1 day email notice.&lt;br /&gt;
&lt;br /&gt;
Urgent security patches to CloudStack may happen with little to no notice.&lt;br /&gt;
&lt;br /&gt;
Please check back with this document as changes may occur.&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack_End_User_Agreement&amp;diff=2081</id>
		<title>CloudStack End User Agreement</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack_End_User_Agreement&amp;diff=2081"/>
		<updated>2022-08-24T17:14:46Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* Best Practices */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
CloudStack is an Infrastructure as a Service provided to University of Calgary researchers.  It allows them to quickly deploy Virtual Machines to support their research.&lt;br /&gt;
&lt;br /&gt;
Researchers have a great degree of freedom in how they use CloudStack.  They are solely responsible for maintaining any VMs they deploy.&lt;br /&gt;
&lt;br /&gt;
CloudStack is a research environment.  While the system is available 24/7, support is only available during University business hours.&lt;br /&gt;
&lt;br /&gt;
Researchers are asked to follow the best practices listed below to ensure that the system remains available for the campus community.&lt;br /&gt;
&lt;br /&gt;
==Best Practices==&lt;br /&gt;
&lt;br /&gt;
# Please stay abreast of security updates for your OS and apply them.&lt;br /&gt;
# Back up your data to some other destination (RCS does not provide data backups of VMs).&lt;br /&gt;
# Do not run Windows (This infrastructure is not licensed to run Windows).&lt;br /&gt;
# The University&#039;s &amp;quot;Acceptable Use of Electronic Resources and Information Policy&amp;quot; applies to your work using a VM.  Please see [https://www.ucalgary.ca/legal-services/university-policies-procedures/ University Policies and Procedures] and search for &amp;quot;Electronic&amp;quot;.&lt;br /&gt;
# This infrastructure is only rated to handle Level 1 and Level 2 data. Please see [https://www.ucalgary.ca/legal-services/university-legal-services/operating-standards-guidelines-forms/ University Legal Services] and select &amp;quot;Information Security Classification Standard&amp;quot; for details.&lt;br /&gt;
# You are responsible for the appropriate use of the VM by any accounts you have created.&lt;br /&gt;
# You should remove/disable accounts that are no longer required.&lt;br /&gt;
# All user accounts on the VM must have good passwords. See [https://it.ucalgary.ca/it-security/passwords-do-i-have-change-them/ here] for details on creating strong passwords.&lt;br /&gt;
# CloudStack is not meant as a High Performance Computing (HPC) number cruncher. If you have HPC needs, please see &amp;quot;[[How to get an account]]&amp;quot; for details on how to apply for an account on ARC. Not sure what you need? Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca].&lt;br /&gt;
# If your VM faces the outside world, please consider using appropriate security tools.  Contact [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] for assistance.&lt;br /&gt;
&lt;br /&gt;
==Important Notes==&lt;br /&gt;
In the event of a security incident, IT Operations/Security will shut down affected VMs.&lt;br /&gt;
&lt;br /&gt;
Non-critical patches/upgrades to CloudStack that are required will happen on Tuesday of each week.  Running VMs should not be affected.&lt;br /&gt;
&lt;br /&gt;
Non-critical patches/upgrades that require a complete restart of CloudStack will occur after one day&#039;s email notice.&lt;br /&gt;
&lt;br /&gt;
Urgent security patches to CloudStack may happen with little to no notice.&lt;br /&gt;
&lt;br /&gt;
Please check back with this document as changes may occur.&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack_End_User_Agreement&amp;diff=2080</id>
		<title>CloudStack End User Agreement</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack_End_User_Agreement&amp;diff=2080"/>
		<updated>2022-08-24T17:12:51Z</updated>

		<summary type="html">&lt;p&gt;Darcy: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
CloudStack is an Infrastructure as a Service provided to University of Calgary researchers.  It allows them to quickly deploy Virtual Machines to support their research.&lt;br /&gt;
&lt;br /&gt;
Researchers have a great degree of freedom in how they use CloudStack.  They are solely responsible for maintaining any VMs they deploy.&lt;br /&gt;
&lt;br /&gt;
CloudStack is a research environment.  While the system is available 24/7, support is only available during University business hours.&lt;br /&gt;
&lt;br /&gt;
Researchers are asked to follow the best practices listed below to ensure that the system remains available for the campus community.&lt;br /&gt;
&lt;br /&gt;
==Best Practices==&lt;br /&gt;
&lt;br /&gt;
# Please stay abreast of security updates for your OS and apply them.&lt;br /&gt;
# Back up your data (RCS does not provide data backups of VMs).&lt;br /&gt;
# Do not run Windows (This infrastructure is not licensed to run Windows).&lt;br /&gt;
# The University&#039;s &amp;quot;Acceptable Use of Electronic Resources and Information Policy&amp;quot; applies to your work using a VM.  Please see [https://www.ucalgary.ca/legal-services/university-policies-procedures/ University Policies and Procedures] and search for &amp;quot;Electronic&amp;quot;.&lt;br /&gt;
# This infrastructure is only rated to handle Level 1 and Level 2 data. Please see [https://www.ucalgary.ca/legal-services/university-legal-services/operating-standards-guidelines-forms/ University Legal Services] and select &amp;quot;Information Security Classification Standard&amp;quot; for details.&lt;br /&gt;
# You are responsible for the appropriate use of the VM by any accounts you have created.&lt;br /&gt;
# You should remove/disable accounts that are no longer required.&lt;br /&gt;
# All user accounts on the VM must have good passwords. See [https://it.ucalgary.ca/it-security/passwords-do-i-have-change-them/ here] for details on creating strong passwords.&lt;br /&gt;
# CloudStack is not meant as a High Performance Computing (HPC) number cruncher. If you have HPC needs, please see &amp;quot;[[How to get an account]]&amp;quot; for details on how to apply for an account on ARC. Not sure what you need? Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca].&lt;br /&gt;
# If your VM faces the outside world, please consider using appropriate security tools.  Contact [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] for assistance.&lt;br /&gt;
&lt;br /&gt;
==Important Notes==&lt;br /&gt;
In the event of a security incident, IT Operations/Security will shut down affected VMs.&lt;br /&gt;
&lt;br /&gt;
Non-critical patches/upgrades to CloudStack that are required will happen on Tuesday of each week.  Running VMs should not be affected.&lt;br /&gt;
&lt;br /&gt;
Non-critical patches/upgrades that require a complete restart of CloudStack will occur after one day&#039;s email notice.&lt;br /&gt;
&lt;br /&gt;
Urgent security patches to CloudStack may happen with little to no notice.&lt;br /&gt;
&lt;br /&gt;
Please check back with this document as changes may occur.&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=ARC_Cluster_Guide&amp;diff=1905</id>
		<title>ARC Cluster Guide</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=ARC_Cluster_Guide&amp;diff=1905"/>
		<updated>2022-06-10T21:18:01Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* Storage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Message Box&lt;br /&gt;
|icon=Security Icon.png&lt;br /&gt;
|title=Cybersecurity awareness at the U of C&lt;br /&gt;
|message=Please note that there are typically about 950 phishing attempts targeting University of Calgary accounts each month. This is just a reminder to be careful about computer security issues, both at home and at the University. Please visit https://it.ucalgary.ca/it-security for more information, tips on secure computing, and how to report suspected security problems.}}&lt;br /&gt;
&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=[[Support|Need Help or have other ARC Related Questions?]]&lt;br /&gt;
|message=For all general RCS related issues, questions, or comments, please contact us at support@hpc.ucalgary.ca.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
This guide gives an overview of the Advanced Research Computing (ARC) cluster at the University of Calgary and is intended to be read by new account holders getting started on ARC. This guide covers topics such as the hardware and performance characteristics, available software, usage policies and how to log in and run jobs. &lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
The ARC compute cluster can be used for running large numbers (hundreds) of concurrent serial (one core) jobs, OpenMP or other thread-based jobs, shared-memory parallel code using up to 40 or 80 threads per job (depending on the partition), distributed-memory (MPI-based) parallel code using up to hundreds of cores, or jobs that take advantage of Graphics Processing Units (GPUs). Almost all work on ARC is done through a [[Linux Introduction|command line interface]]. This computational resource is available for research projects based at the University of Calgary and is meant to supplement the resources available to researchers through Compute Canada.&lt;br /&gt;
&lt;br /&gt;
Historically, ARC has primarily been composed of older, disparate Linux-based clusters that were formerly offered to researchers from across Canada, such as Breezy, Lattice, and Parallel. In addition, a large-memory compute node (Bigbyte) was salvaged from the now-retired local Storm cluster. In January 2019, a major addition to ARC with modern hardware was purchased. In 2020, compute clusters from CHGI were migrated into ARC.&lt;br /&gt;
&lt;br /&gt;
=== How to Get Started ===&lt;br /&gt;
If you have a project you think would be appropriate for ARC, please write to support@hpc.ucalgary.ca and mention the intended research and software you plan to use. You must have a University of Calgary IT account in order to use ARC.&lt;br /&gt;
* For users who do not have a University of Calgary IT account or email address, please register for one at https://itregport.ucalgary.ca/.&lt;br /&gt;
* For users external to the University, such as for users collaborating on a research project at the University of Calgary, please contact us and mention the project leader you are collaborating with.&lt;br /&gt;
&lt;br /&gt;
Once your access to ARC has been granted, you will be able to immediately make use of the cluster using your University of Calgary IT account by following the [[ARC_Cluster_Guide#Using_ARC|usage guide outlined below]].&lt;br /&gt;
&lt;br /&gt;
== Hardware ==&lt;br /&gt;
Since the ARC cluster is a conglomeration of many different compute clusters, the hardware within ARC can vary widely in terms of performance and capabilities.  To mitigate any compatibility issues with different hardware, we combine similar hardware into their own Slurm partition to ensure your workload runs as consistently as possible within one partition. Please carefully review the hardware specs for each of the partitions below to avoid any surprises.&lt;br /&gt;
&lt;br /&gt;
=== Partition Hardware Specs ===&lt;br /&gt;
When submitting jobs to ARC, you may specify a partition that your job will run on.  Please choose a partition that is most appropriate for your work.&lt;br /&gt;
&lt;br /&gt;
A few things to keep in mind when choosing a partition:&lt;br /&gt;
* Specific workloads requiring special Intel Instruction Set Extensions may only work on newer Intel CPUs. &lt;br /&gt;
* If working with multi-node parallel processing, ensure your software and libraries support the partition&#039;s interconnect networking.&lt;br /&gt;
* While older partitions may be slower, they may be less busy and have little to no wait times.&lt;br /&gt;
&lt;br /&gt;
If you are unsure which partition to use or need assistance on selecting an appropriate partition, please see [[#Selecting_a_Partition|the Selecting a Partition Section]] below. &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Partition&lt;br /&gt;
! Description&lt;br /&gt;
! Nodes&lt;br /&gt;
! CPU Cores, Model, and Year&lt;br /&gt;
! Memory&lt;br /&gt;
! GPU&lt;br /&gt;
! Network&lt;br /&gt;
|-&lt;br /&gt;
| -&lt;br /&gt;
| ARC Login Node&lt;br /&gt;
| 1&lt;br /&gt;
| 16 cores, 2x Intel(R) Xeon(R) CPU E5620  @ 2.40GHz (Westmere, 2010)&lt;br /&gt;
| 48 GB&lt;br /&gt;
| N/A&lt;br /&gt;
| 40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| gpu-v100&lt;br /&gt;
| GPU Partition&lt;br /&gt;
| 13&lt;br /&gt;
| 80 cores, 2x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (Skylake, 2019)&lt;br /&gt;
| 754 GB&lt;br /&gt;
| 2x Tesla V100-PCIE-16GB&lt;br /&gt;
| 100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
| cpu2021&lt;br /&gt;
| General Purpose Compute&lt;br /&gt;
| 48&lt;br /&gt;
| 48 cores, 2x Intel(R) Xeon(R) Gold 6240R CPU @ 2.40GHz (Cascade Lake, 2021)&lt;br /&gt;
| 185 GB&lt;br /&gt;
| N/A &lt;br /&gt;
| 100 Gbit/s Mellanox Infiniband&lt;br /&gt;
|-&lt;br /&gt;
| cpu2019&lt;br /&gt;
| General Purpose Compute&lt;br /&gt;
| 14&lt;br /&gt;
| 40 cores, 2x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (Skylake, 2019)&lt;br /&gt;
| 190 GB&lt;br /&gt;
| N/A &lt;br /&gt;
| 100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
| apophis&lt;br /&gt;
| General Purpose Compute&lt;br /&gt;
| 21&lt;br /&gt;
| 40 cores, 2x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (Skylake, 2019)&lt;br /&gt;
| 190 GB&lt;br /&gt;
| N/A &lt;br /&gt;
| 100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
| razi&lt;br /&gt;
| General Purpose Compute&lt;br /&gt;
| 41&lt;br /&gt;
| 40 cores, 2x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (Skylake, 2019)&lt;br /&gt;
| 190 GB&lt;br /&gt;
| N/A &lt;br /&gt;
| 100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
| bigmem&lt;br /&gt;
| Big Memory Nodes&lt;br /&gt;
| 2&lt;br /&gt;
| 80 cores, 4x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (Skylake, 2019)&lt;br /&gt;
| 3022 GB&lt;br /&gt;
| N/A &lt;br /&gt;
| 100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
| pawson&lt;br /&gt;
| General Purpose Compute&lt;br /&gt;
| 13&lt;br /&gt;
| 40 cores, 2x Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz (Skylake, 2019)&lt;br /&gt;
| 190 GB&lt;br /&gt;
| N/A&lt;br /&gt;
| 100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|cpu2017&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|14&lt;br /&gt;
|56 cores, 2x Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz (Broadwell, 2016)&lt;br /&gt;
|256 GB&lt;br /&gt;
|N/A&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| theia&lt;br /&gt;
| Former Theia cluster&lt;br /&gt;
| 20&lt;br /&gt;
| 56 cores, 2x Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz (Broadwell, 2016)&lt;br /&gt;
| 188 GB&lt;br /&gt;
| N/A &lt;br /&gt;
| 40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| cpu2013&lt;br /&gt;
| Former hyperion cluster&lt;br /&gt;
| 12&lt;br /&gt;
| 32 cores, 2x Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz (Sandy Bridge, 2012)&lt;br /&gt;
| 126 GB&lt;br /&gt;
| N/A&lt;br /&gt;
| 40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| lattice&lt;br /&gt;
| Former Lattice cluster&lt;br /&gt;
| 307&lt;br /&gt;
| 8 cores, 2x Intel(R) Xeon(R) CPU L5520  @ 2.27GHz (Nehalem, 2009)&lt;br /&gt;
| 12 GB&lt;br /&gt;
| N/A&lt;br /&gt;
| 40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| single&lt;br /&gt;
| Former Lattice cluster&lt;br /&gt;
| 168&lt;br /&gt;
| 8 cores, 2x Intel(R) Xeon(R) CPU L5520  @ 2.27GHz (Nehalem, 2009)&lt;br /&gt;
| 12 GB&lt;br /&gt;
| N/A&lt;br /&gt;
| 40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
| parallel&lt;br /&gt;
| Former Parallel Cluster&lt;br /&gt;
| 576&lt;br /&gt;
| 12 cores, 2x Intel(R) Xeon(R) CPU E5649  @ 2.53GHz (Westmere, 2011)&lt;br /&gt;
| 24 GB&lt;br /&gt;
| N/A&lt;br /&gt;
| 40 Gbit/s InfiniBand&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===ARC Cluster Storage===&lt;br /&gt;
Usage of ARC cluster storage is outlined by our [[ARC Storage Terms of Use]] page.&lt;br /&gt;
&lt;br /&gt;
{{Message Box&lt;br /&gt;
| title=No Backup Policy!&lt;br /&gt;
| message=You are responsible for your own backups.  Many researchers will have accounts with Compute Canada and may choose to back up their data there (the Project file system accessible through the Cedar cluster would often be used). &lt;br /&gt;
&lt;br /&gt;
Please contact us at support@hpc.ucalgary.ca if you want more information about this option.&lt;br /&gt;
&lt;br /&gt;
You can also back up data to your UofC OneDrive for Business allocation; see https://rcs.ucalgary.ca/How_to_transfer_data#rclone:_rsync_for_cloud_storage. This allocation starts at 5 TB. Contact the support center for questions regarding OneDrive for Business.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
The ARC cluster has around 2 petabytes of shared disk storage available across the entire cluster, as well as temporary storage local to each of the compute nodes. Please refer to the individual sections below for capacity limitations and usage policies. &lt;br /&gt;
&lt;br /&gt;
Use the &amp;lt;code&amp;gt;arc.quota&amp;lt;/code&amp;gt; command on ARC to determine the available space on your various volumes and home directory.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Description&lt;br /&gt;
!Capacity&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;&lt;br /&gt;
|User home directories&lt;br /&gt;
|500 GB (per user)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;/work&amp;lt;/code&amp;gt;&lt;br /&gt;
|Research project storage&lt;br /&gt;
|Up to 100&#039;s of TB&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt;&lt;br /&gt;
|Scratch space for temporary files&lt;br /&gt;
|Up to 15 TB&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt;&lt;br /&gt;
|Temporary space local to the compute cluster&lt;br /&gt;
|Dependent on nodes, use &amp;lt;code&amp;gt;df -h&amp;lt;/code&amp;gt;.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;/dev/shm&amp;lt;/code&amp;gt;&lt;br /&gt;
|Small temporary in-memory disk space local to the compute cluster&lt;br /&gt;
|Dependent on nodes, use &amp;lt;code&amp;gt;df -h&amp;lt;/code&amp;gt;.&lt;br /&gt;
|}&lt;br /&gt;
====&amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;: Home file system====&lt;br /&gt;
Each user has a directory under /home, which is the default working directory when logging in to ARC. Each home directory has a per-user quota of 500 GB. This limit is fixed and cannot be increased. Researchers requiring additional storage beyond what is available in their home directory may use &amp;lt;code&amp;gt;/work&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note on file sharing: due to security concerns, permissions set using &amp;lt;code&amp;gt;chmod&amp;lt;/code&amp;gt; to allow other users to read/write your home directory will be automatically reverted by an automated system process unless an explicit exception is made. If you need to share files with other researchers on the ARC cluster, please write to support@hpc.ucalgary.ca to ask for such an exception.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt;: Scratch file system for large job-oriented storage====&lt;br /&gt;
Associated with each job, under the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; directory, a subdirectory is created that can be referenced in job scripts as &amp;lt;code&amp;gt;/scratch/${SLURM_JOB_ID}&amp;lt;/code&amp;gt;. You can use that directory for temporary files needed during the course of a job. Up to 15 TB of storage may be used, per user (total for all your jobs) in the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; file system. &lt;br /&gt;
&lt;br /&gt;
Data in &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; associated with a given job will be deleted automatically, without exception, five days after the job finishes.&lt;br /&gt;
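&lt;br /&gt;
As an illustrative sketch (the program and file names here are hypothetical), a job script might stage its temporary output in the per-job scratch directory and copy the final results home before the automatic clean-up:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
#SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
# Per-job scratch directory created for this job&lt;br /&gt;
SCRATCH=/scratch/${SLURM_JOB_ID}&lt;br /&gt;
&lt;br /&gt;
# Write intermediate output to scratch (my_analysis is a hypothetical program)&lt;br /&gt;
my_analysis --tmpdir ${SCRATCH} --out ${SCRATCH}/result.dat&lt;br /&gt;
&lt;br /&gt;
# Copy final results back to /home before the automatic deletion&lt;br /&gt;
cp ${SCRATCH}/result.dat ${HOME}/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;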
&lt;br /&gt;
====&amp;lt;code&amp;gt;/work&amp;lt;/code&amp;gt;: Work file system for larger projects====&lt;br /&gt;
If you need more space than provided in &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt; and the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; job-oriented space is not appropriate for your case, please write to support@hpc.ucalgary.ca with an explanation, including an indication of how much storage you expect to need and for how long. If approved, you will then be assigned a directory under &amp;lt;code&amp;gt;/work&amp;lt;/code&amp;gt; with an appropriately large quota.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;/dev/shm&amp;lt;/code&amp;gt;: Temporary files====&lt;br /&gt;
You may use &amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt; for temporary files generated by your job. &amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt; is stored on a disk local to the compute node and is not shared across the cluster. The files stored here may be removed immediately after your job terminates.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;/dev/shm&amp;lt;/code&amp;gt; is similar to &amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt; but the storage is backed by virtual memory for higher IOPS.  This is ideal for workloads that require many small read/writes to share data between processes or as a fast cache. The files stored here may be removed immediately after your job terminates.&lt;br /&gt;
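&lt;br /&gt;
As a minimal sketch (the directory name is illustrative), a job could use &amp;lt;code&amp;gt;/dev/shm&amp;lt;/code&amp;gt; as a fast cache and clean up after itself so the memory-backed space is freed:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Create a private in-memory work area on the node&lt;br /&gt;
FASTDIR=/dev/shm/${USER}_${SLURM_JOB_ID}&lt;br /&gt;
mkdir -p ${FASTDIR}&lt;br /&gt;
&lt;br /&gt;
# ... perform many small reads/writes in ${FASTDIR} ...&lt;br /&gt;
&lt;br /&gt;
# Free the memory-backed space when done&lt;br /&gt;
rm -rf ${FASTDIR}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;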
&lt;br /&gt;
== Using ARC ==&lt;br /&gt;
=== Logging in ===&lt;br /&gt;
To log in to ARC, connect using SSH to &amp;lt;code&amp;gt;arc.ucalgary.ca&amp;lt;/code&amp;gt; on port &amp;lt;code&amp;gt;22&amp;lt;/code&amp;gt;. Connections to ARC are accepted only from the University of Calgary network (on campus) or through the University of Calgary General VPN (off campus).&lt;br /&gt;
&lt;br /&gt;
See [[Connecting to RCS HPC Systems]] for more information.&lt;br /&gt;
=== How to interact with ARC ===&lt;br /&gt;
&lt;br /&gt;
The ARC cluster is a collection of several compute nodes connected by a high-speed network. On ARC, computations are submitted as jobs. Once submitted, jobs are assigned to compute nodes by the job scheduler as resources become available.&lt;br /&gt;
[[File:Cluster.png]]&lt;br /&gt;
&lt;br /&gt;
You can access ARC with your UCalgary IT user credentials. Once connected, you will be placed on the ARC login node, which is intended for basic tasks such as submitting jobs, monitoring job status, managing files, and editing text. It is a shared resource to which multiple users are connected at the same time; intensive tasks are therefore not allowed on the login node, as they may prevent other users from connecting or submitting their computations. &lt;br /&gt;
         [tannistha.nandi@arc ~]$ &lt;br /&gt;
The job scheduling system on ARC is called SLURM. On ARC, there are two SLURM commands that can allocate resources to a job: ‘salloc’ and ‘sbatch’. Both accept the same set of command-line options with respect to resource allocation. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;‘salloc’&#039;&#039;&#039; is used to launch an interactive session, typically for tasks under 5 hours. &lt;br /&gt;
Once an interactive job session is created, you can do things like explore research datasets, start R or python sessions to test your code, compile software applications etc.&lt;br /&gt;
&lt;br /&gt;
a. Example 1: The following command requests 1 CPU on 1 node for 1 task, along with 1 GB of RAM, for one hour. &lt;br /&gt;
          [tannistha.nandi@arc ~]$ salloc --mem=1G -c 1 -N 1 -n 1  -t 01:00:00&lt;br /&gt;
          salloc: Granted job allocation 6758015&lt;br /&gt;
          salloc: Waiting for resource configuration&lt;br /&gt;
          salloc: Nodes fc4 are ready for job&lt;br /&gt;
          [tannistha.nandi@fc4 ~]$ &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
b. Example 2: The following command requests 1 GPU from 1 node belonging to the gpu-v100 partition, along with 1 GB of RAM, for 1 hour. Generic resource scheduling (--gres) is used to request GPU resources.&lt;br /&gt;
         [tannistha.nandi@arc ~]$ salloc --mem=1G -t 01:00:00 -p gpu-v100 --gres=gpu:1&lt;br /&gt;
         salloc: Granted job allocation 6760460&lt;br /&gt;
         salloc: Waiting for resource configuration&lt;br /&gt;
         salloc: Nodes fg3 are ready for job&lt;br /&gt;
         [tannistha.nandi@fg3 ~]$&lt;br /&gt;
&lt;br /&gt;
Once you finish the work, type &#039;exit&#039; at the command prompt to end the interactive session.&lt;br /&gt;
         [tannistha.nandi@fg3 ~]$ exit&lt;br /&gt;
         [tannistha.nandi@fg3 ~]$ salloc: Relinquishing job allocation 6760460&lt;br /&gt;
This ensures that the allocated resources are released from your job and made available to other users.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;‘sbatch’&#039;&#039;&#039; is used to submit computations as jobs to run on the cluster. You can submit a job script, e.g. job-script.slurm, via &#039;sbatch&#039; for execution.&lt;br /&gt;
         [tannistha.nandi@arc ~]$ sbatch job-script.slurm&lt;br /&gt;
When resources become available, they are allocated to this job. Batch jobs are suited for tasks that run for long periods of time without any user supervision. When the job script terminates, the allocation is released. &lt;br /&gt;
Please review the section on how to prepare job scripts for more information.&lt;br /&gt;
&lt;br /&gt;
=== Prepare job scripts  ===&lt;br /&gt;
Job scripts are text files saved with an extension &#039;.slurm&#039;, for example, &#039;job-script.slurm&#039;. &lt;br /&gt;
A job script looks something like this:&lt;br /&gt;
    &#039;&#039;#!/bin/bash&#039;&#039;&lt;br /&gt;
    ####### Reserve computing resources #############&lt;br /&gt;
    #SBATCH --nodes=1&lt;br /&gt;
    #SBATCH --ntasks=1&lt;br /&gt;
    #SBATCH --cpus-per-task=1&lt;br /&gt;
    #SBATCH --time=01:00:00&lt;br /&gt;
    #SBATCH --mem=1G&lt;br /&gt;
    #SBATCH --partition=cpu2019&amp;lt;br&amp;gt;&lt;br /&gt;
    ####### Set environment variables ###############&lt;br /&gt;
    module load python/anaconda3-2018.12&amp;lt;br&amp;gt;&lt;br /&gt;
    ####### Run your script #########################&lt;br /&gt;
    python myscript.py&lt;br /&gt;
&lt;br /&gt;
The first line contains the text &amp;quot;#!/bin/bash&amp;quot; so that the file is interpreted as a bash script.&lt;br /&gt;
&lt;br /&gt;
It is followed by lines that start with &#039;#SBATCH&#039; to communicate with &#039;SLURM&#039;. You may add as many #SBATCH directives as needed to reserve computing resources for your task. The above example requests one CPU on a single node for one task, along with 1 GB of RAM, for an hour on the cpu2019 partition.&lt;br /&gt;
&lt;br /&gt;
Next, you have to set up environment variables, either by loading modules centrally installed on ARC or by exporting the path to software in your home directory. The above example loads an available Python module.&lt;br /&gt;
&lt;br /&gt;
Finally, include the Linux command to execute the local script.&lt;br /&gt;
&lt;br /&gt;
Note that failing to specify part of a resource allocation request (most notably &#039;&#039;&#039;time&#039;&#039;&#039; and &#039;&#039;&#039;memory&#039;&#039;&#039;) will result in bad resource requests, as the defaults are not appropriate for most cases. Please refer to the section &#039;Running non-interactive jobs&#039; for more examples.&lt;br /&gt;
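&lt;br /&gt;
Once a script is prepared, it can be submitted and then tracked with standard SLURM commands: &#039;squeue&#039; lists your pending and running jobs, &#039;sacct&#039; shows accounting information for a job, and &#039;scancel&#039; cancels a job. A typical session might look like this (the job ID shown is illustrative):&lt;br /&gt;
         [tannistha.nandi@arc ~]$ sbatch job-script.slurm&lt;br /&gt;
         Submitted batch job 6758015&lt;br /&gt;
         [tannistha.nandi@arc ~]$ squeue -u $USER&lt;br /&gt;
         [tannistha.nandi@arc ~]$ sacct -j 6758015&lt;br /&gt;
         [tannistha.nandi@arc ~]$ scancel 6758015&lt;br /&gt;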
&lt;br /&gt;
=== Software ===&lt;br /&gt;
All ARC nodes run the latest version of CentOS 7 with the same set of base software packages. To maintain the stability and consistency of all nodes, any additional dependencies that your software requires must be installed under your account. For your convenience, we have packaged commonly used software packages and dependencies as modules available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt;. If your software package is not available as a module, you may also try Anaconda, which allows users to manage and install custom packages in an isolated environment.&lt;br /&gt;
&lt;br /&gt;
For a list of available packages, please see [[ARC Software pages]]. &lt;br /&gt;
&lt;br /&gt;
Please contact us at support@hpc.ucalgary.ca if you need additional software installed.&lt;br /&gt;
&lt;br /&gt;
==== Modules ====&lt;br /&gt;
The setup of the environment for using some of the installed software is through the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command. An overview of [https://www.westgrid.ca//support/modules modules on WestGrid (external link)] is largely applicable to ARC.&lt;br /&gt;
&lt;br /&gt;
Software packages bundled as a module will be available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt; and can be listed with the &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module avail&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To enable Python, load the Python module by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module load python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To unload the Python module, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module remove python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To see currently loaded modules, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module list&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By default, no modules are loaded on ARC. If you wish to use a specific module, such as the Intel compilers or the Open MPI parallel programming packages, you must load the appropriate module.&lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
Please review the [[#ARC Cluster Storage|ARC Cluster Storage]] section above for important policies and advice regarding file storage and file sharing.&lt;br /&gt;
&lt;br /&gt;
=== Interactive Jobs ===&lt;br /&gt;
The ARC login node may be used for such tasks as editing files, compiling programs and running short tests while developing programs. We suggest CPU intensive workloads on the login node be restricted to under 15 minutes as per [[General Cluster Guidelines and Policies|our cluster guidelines]]. For interactive workloads exceeding 15 minutes, use the &#039;&#039;&#039;[[Running_jobs#Interactive_jobs|salloc command]]&#039;&#039;&#039; to allocate an interactive session on a compute node.&lt;br /&gt;
&lt;br /&gt;
The default salloc allocation is 1 CPU and 1 GB of memory. Adjust this by specifying &amp;lt;code&amp;gt;-n CPU#&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--mem Megabytes&amp;lt;/code&amp;gt;. You may request up to 5 hours of CPU time for interactive jobs.&lt;br /&gt;
 salloc --time 5:00:00 --partition cpu2019&lt;br /&gt;
&lt;br /&gt;
Always use salloc or srun to start an interactive job. Do not SSH directly to a compute node as SSH sessions will be refused without an active job running.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- This information doesn&#039;t seem that useful or relevant to running interactive jobs. Move to getting started section?&lt;br /&gt;
ARC uses the Linux operating system. The program that responds to your typed commands and allows you to run other programs is called the Linux shell. There are several different shells available, but, by default you will use one called bash. It is useful to have some knowledge of the shell and a variety of other command-line programs that you can use to manipulate files. If you are new to Linux systems, we recommend that you work through one of the many online tutorials that are available, such as the [http://www.ee.surrey.ac.uk/Teaching/Unix/index.html UNIX Tutorial for Beginners (external link)] provided by the University of Surrey. The tutorial covers such fundamental topics, among others, as creating, renaming and deleting files and directories, how to produce a listing of your files and how to tell how much disk space you are using.  For a more comprehensive introduction to Linux, see [http://linuxcommand.sourceforge.net/tlcl.php The Linux Command Line (external link)].&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Running non-interactive jobs (batch processing) ===&lt;br /&gt;
Production runs and longer test runs should be submitted as (non-interactive) batch jobs, in which the commands to be executed are listed in a script (text file). Batch job scripts are submitted using the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command, part of the Slurm job management and scheduling software. &amp;lt;code&amp;gt;#SBATCH&amp;lt;/code&amp;gt; directive lines at the beginning of the script specify the resources needed for the job (cores, memory, run-time limit and any specialized hardware).&lt;br /&gt;
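&lt;br /&gt;
As a minimal sketch (the resource values and program name are illustrative), a batch script might look like:&lt;br /&gt;

```shell
#!/bin/bash
# ---- resource requests (illustrative values) ----
#SBATCH --job-name=my-test       # a name for the job
#SBATCH --ntasks=1               # one task
#SBATCH --cpus-per-task=4        # 4 CPU cores for that task
#SBATCH --mem=16000              # memory in megabytes
#SBATCH --time=01:00:00          # run-time limit of 1 hour

# ---- commands to run on the compute node ----
echo "Running on $(hostname)"
./my_program                     # hypothetical program; replace with your own
```

The script is then submitted with &amp;lt;code&amp;gt;sbatch myjob.slurm&amp;lt;/code&amp;gt; (the file name is arbitrary), after which Slurm prints the assigned job ID.&lt;br /&gt;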
&lt;br /&gt;
Most of the information on the [https://docs.computecanada.ca/wiki/Running_jobs Running Jobs (external link)] page on the Compute Canada web site is also relevant for submitting and managing batch jobs and reserving processors for interactive work on ARC.  One major difference between running jobs on the ARC and Compute Canada clusters is in selecting the type of hardware that should be used for a job. On ARC, you choose the hardware to use primarily by specifying a partition, as described below.&lt;br /&gt;
&lt;br /&gt;
=== Selecting a Partition ===&lt;br /&gt;
There are several aspects to consider when selecting a partition, including:&lt;br /&gt;
* Resource requirements in terms of memory and CPU cores&lt;br /&gt;
* Hardware-specific requirements, such as GPUs or CPU instruction set extensions&lt;br /&gt;
* Partition resource limits and potential wait time&lt;br /&gt;
* Software support for parallel processing using Message Passing Interface (MPI), OpenMP, etc.&lt;br /&gt;
** E.g., MPI code can distribute its memory across multiple nodes, so per-node memory requirements could be lower, whereas OpenMP or single-process code restricted to one node would require a higher-memory node.&lt;br /&gt;
** Note: MPI code running on hardware with Omni-Path networking should be compiled with Omni-Path networking support. This is provided by loading the &amp;lt;code&amp;gt;openmpi/2.1.3-opa&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;openmpi/3.1.2-opa&amp;lt;/code&amp;gt; modules prior to compiling.&lt;br /&gt;
&lt;br /&gt;
Since resources that are requested are reserved for your job, please request only as much CPU and memory as your job requires to avoid reducing the cluster efficiency.  If you are unsure which partition to use or the specific resource requests that are appropriate for your jobs, please contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] and we would be happy to work with you.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%;&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Description&lt;br /&gt;
!Cores/node&lt;br /&gt;
!Memory Request Limit&lt;br /&gt;
!Time Limit&lt;br /&gt;
!GPU&lt;br /&gt;
!Networking&lt;br /&gt;
|-&lt;br /&gt;
|cpu2021&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|48&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|cpu2019&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|40&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|Big Memory Compute&lt;br /&gt;
|80&lt;br /&gt;
|3,000,000 MB&lt;br /&gt;
|24 hours ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|gpu-v100&lt;br /&gt;
|GPU Compute&lt;br /&gt;
|80&lt;br /&gt;
|753,000 MB&lt;br /&gt;
|24 hours ‡&lt;br /&gt;
|2&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|apophis&amp;amp;dagger;&lt;br /&gt;
|Private Research Partition&lt;br /&gt;
|40&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|razi&amp;amp;dagger;&lt;br /&gt;
|Private Research Partition&lt;br /&gt;
|40&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|pawson&amp;amp;dagger;&lt;br /&gt;
|Private Research Partition&lt;br /&gt;
|40&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|sherlock&amp;amp;dagger;&lt;br /&gt;
|Private Research Partition&lt;br /&gt;
|7&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|theia&amp;amp;dagger;&lt;br /&gt;
|Private Research Partition&lt;br /&gt;
|28&lt;br /&gt;
|188,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|synergy&amp;amp;dagger;&lt;br /&gt;
|Private Research Partition&lt;br /&gt;
|14&lt;br /&gt;
|245,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu2013&lt;br /&gt;
|Legacy General Purpose Compute&lt;br /&gt;
|16&lt;br /&gt;
|120,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|lattice&lt;br /&gt;
|Legacy General Purpose Compute&lt;br /&gt;
|8&lt;br /&gt;
|12,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|parallel&lt;br /&gt;
|Legacy General Purpose Compute&lt;br /&gt;
|12&lt;br /&gt;
|23,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|single&lt;br /&gt;
|Legacy Single-Node Job Compute&lt;br /&gt;
|8&lt;br /&gt;
|12,000 MB&lt;br /&gt;
|7 days ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu2021-bf24&lt;br /&gt;
|Back-fill Compute (2021-era hardware, 24h)&lt;br /&gt;
|48&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|24 hours ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|cpu2019-bf05&lt;br /&gt;
|Back-fill Compute (2019-era hardware, 5h)&lt;br /&gt;
|40&lt;br /&gt;
|185,000 MB&lt;br /&gt;
|5 hours ‡&lt;br /&gt;
|&lt;br /&gt;
|100 Gbit/s Omni-Path&lt;br /&gt;
|-&lt;br /&gt;
|cpu2017-bf05&lt;br /&gt;
|Back-fill Compute (2017-era hardware, 5h)&lt;br /&gt;
|14&lt;br /&gt;
|245,000 MB&lt;br /&gt;
|5 hours ‡&lt;br /&gt;
|&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|+ style=&amp;quot;caption-side: bottom; text-align: left; font-weight: normal;&amp;quot; | &amp;amp;dagger; These partitions contain hardware contributed to ARC by particular researchers and should only be used by members of their research groups. However, they have generously allowed their compute nodes to be shared with others outside their research groups for short jobs. Special &#039;back-fill&#039; (-bf) partitions are available for use by all ARC users for short jobs (see the -bf rows above for each partition&#039;s time limit).&amp;lt;br /&amp;gt;‡ As time limits may be changed by administrators to adjust to maintenance schedules or system load, the values given in the tables are not definitive.  See the Time limits section below for commands you can use on ARC itself to determine current limits.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==== Backfill partitions ====&lt;br /&gt;
Backfill partitions can be used by all users on ARC for short-term jobs. The hardware backing these partitions is generously contributed by researchers. We recommend including the backfill partitions for short-term jobs, as this may reduce your job&#039;s wait time and increase overall cluster throughput.&lt;br /&gt;
&lt;br /&gt;
Previously, each contributing research group had their own backfill partition. Since June 2021, we have merged:&lt;br /&gt;
&lt;br /&gt;
* apophis-bf, pawson-bf, and razi-bf into cpu2019-bf05 &lt;br /&gt;
* theia-bf and synergy-bf into cpu2017-bf05&lt;br /&gt;
&lt;br /&gt;
The naming scheme of the backfill partitions is the CPU generation year, followed by -bf and the time limit in hours.  For example, cpu2017-bf05 would represent a backfill partition containing processors from 2017 with a time limit of 5 hours.&lt;br /&gt;
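&lt;br /&gt;
For example, a job that fits within the 5-hour backfill limit could list both the general-purpose partition and its backfill counterpart, letting the scheduler use whichever has free resources first (a sketch; adjust the time and partitions to your job):&lt;br /&gt;

```shell
#SBATCH --time=04:00:00                    # within the 5-hour backfill limit
#SBATCH --partition=cpu2019,cpu2019-bf05   # general-purpose or backfill, whichever frees up first
```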
&lt;br /&gt;
==== Hardware resource and job policy limits ====&lt;br /&gt;
In addition to the hardware limitations, please be aware that there may also be policy limits imposed on your account for each partition. These limits restrict the number of cores, nodes, or GPUs that can be used at any given time. Since the limits are applied on a partition-by-partition basis, using resources in one partition should not affect the available resources you can use in another partition.&lt;br /&gt;
&lt;br /&gt;
These limits can be listed by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sacctmgr show qos format=Name,MaxWall,MaxTRESPU%20,MaxSubmitJobs&lt;br /&gt;
      Name     MaxWall            MaxTRESPU MaxSubmit&lt;br /&gt;
---------- ----------- -------------------- ---------&lt;br /&gt;
    normal  7-00:00:00                           2000&lt;br /&gt;
    breezy  3-00:00:00              cpu=384      2000&lt;br /&gt;
       gpu  7-00:00:00                          13000&lt;br /&gt;
   cpu2019  7-00:00:00              cpu=240      2000&lt;br /&gt;
  gpu-v100  1-00:00:00    cpu=80,gres/gpu=4      2000&lt;br /&gt;
    single  7-00:00:00      cpu=408,node=75      2000&lt;br /&gt;
      razi  7-00:00:00                           2000&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Specifying a partition in a job ====&lt;br /&gt;
Once you have decided which partition best suits your computation, you can select one or more partitions on a job-by-job basis by including the &amp;lt;code&amp;gt;partition&amp;lt;/code&amp;gt; keyword in an &amp;lt;code&amp;gt;SBATCH&amp;lt;/code&amp;gt; directive in your batch job. Multiple partitions should be comma-separated. If you omit the partition specification, the system will try to assign your job to appropriate hardware based on other aspects of your request. &lt;br /&gt;
&lt;br /&gt;
In some cases, you should specify the partition explicitly. For example, if you are running single-node jobs with thread-based parallel processing that request 8 cores, you could use:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --mem=0              ❶&lt;br /&gt;
#SBATCH --nodes=1            ❷&lt;br /&gt;
#SBATCH --ntasks=1           ❸&lt;br /&gt;
#SBATCH --cpus-per-task=8    ❹&lt;br /&gt;
#SBATCH --partition=single,lattice   ❺ &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A few things to mention in this example:&lt;br /&gt;
# &amp;lt;code&amp;gt;--mem=0&amp;lt;/code&amp;gt; allocates all available memory on the compute node, effectively reserving the entire node for your job.&lt;br /&gt;
# &amp;lt;code&amp;gt;--nodes=1&amp;lt;/code&amp;gt; allocates 1 node for the job.&lt;br /&gt;
# &amp;lt;code&amp;gt;--ntasks=1&amp;lt;/code&amp;gt; specifies that your job has a single task.&lt;br /&gt;
# &amp;lt;code&amp;gt;--cpus-per-task=8&amp;lt;/code&amp;gt; asks for 8 CPUs per task, so in total this job requests 8 * 1, or 8 CPUs.&lt;br /&gt;
# &amp;lt;code&amp;gt;--partition=single,lattice&amp;lt;/code&amp;gt; specifies that this job can run on either the single or lattice partition.&lt;br /&gt;
Suppose that your job requires at most 8 CPU cores and 10 GB of memory. The above Slurm request would be valid and optimal, since your job fits neatly on a single node in the single and lattice partitions. However, if you failed to specify the partition, Slurm may try to schedule your job to a partition with larger nodes, such as cpu2019, where each node has 40 cores and 190 GB of memory. If your job is scheduled on such a node, it will effectively waste 32 cores and 180 GB of memory, because &amp;lt;code&amp;gt;--mem=0&amp;lt;/code&amp;gt; not only requests all 190 GB on that node, but also prevents other jobs from being scheduled on the same node.&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t specify a partition, please give greater thought to the memory specification to make sure that the scheduler will not assign your job more resources than are needed.&lt;br /&gt;
&lt;br /&gt;
Parameters such as &#039;&#039;&#039;--ntasks-per-node&#039;&#039;&#039;, &#039;&#039;&#039;--cpus-per-task&#039;&#039;&#039;, &#039;&#039;&#039;--mem&#039;&#039;&#039; and &#039;&#039;&#039;--mem-per-cpu&#039;&#039;&#039; also have to be adjusted according to the capabilities of the hardware. The product of --ntasks-per-node and --cpus-per-task should be less than or equal to the number given in the &amp;quot;Cores/node&amp;quot; column. The &#039;&#039;&#039;--mem&#039;&#039;&#039; parameter (or the product of &#039;&#039;&#039;--mem-per-cpu&#039;&#039;&#039; and &#039;&#039;&#039;--cpus-per-task&#039;&#039;&#039;) should be less than the &amp;quot;Memory Request Limit&amp;quot; shown. If using whole nodes, you can specify &#039;&#039;&#039;--mem=0&#039;&#039;&#039; to request the maximum amount of memory per node.&lt;br /&gt;
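&lt;br /&gt;
As a quick sanity check on these products, the numbers from the partition table can be worked through with shell arithmetic; this sketch uses the single partition&#039;s limits (8 cores and 12,000 MB per node):&lt;br /&gt;

```shell
# Per-node limits for the 'single' partition (from the partition table above)
cores_per_node=8
mem_limit_mb=12000

# Largest --mem-per-cpu that still fits when all 8 cores are used
mem_per_cpu=$(( mem_limit_mb / cores_per_node ))
echo "$mem_per_cpu"                       # 1500 MB per CPU

# Check: --cpus-per-task * --mem-per-cpu must not exceed the node's memory
cpus_per_task=8
echo $(( cpus_per_task * mem_per_cpu ))   # 12000 MB, exactly at the limit
```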
&lt;br /&gt;
===== Examples =====&lt;br /&gt;
Here are some examples of specifying the various partitions.&lt;br /&gt;
&lt;br /&gt;
As mentioned in the [[#Hardware|Hardware]] section above, the ARC cluster was expanded in January 2019.  To select the 40-core general purpose nodes specify:&lt;br /&gt;
&lt;br /&gt;
 #SBATCH --partition=cpu2019&lt;br /&gt;
&lt;br /&gt;
To run on the Tesla V100 GPU-enabled nodes, use the &#039;&#039;&#039;gpu-v100&#039;&#039;&#039; partition.  You will also need to include an SBATCH directive in the form &#039;&#039;&#039;--gres=gpu:n&#039;&#039;&#039; to specify the number of GPUs, n, that you need.  For example, if the software you are running can make use of both GPUs on a gpu-v100 partition compute node, use:&lt;br /&gt;
&lt;br /&gt;
 #SBATCH --partition=gpu-v100 --gres=gpu:2&lt;br /&gt;
&lt;br /&gt;
For very large memory jobs (more than 185,000 MB), specify the bigmem partition:&lt;br /&gt;
&lt;br /&gt;
 #SBATCH --partition=bigmem&lt;br /&gt;
&lt;br /&gt;
If the more modern computers are too busy or you have a job well-suited to run on the compute nodes described in the legacy hardware section above, choose the cpu2013, Lattice or Parallel compute nodes by specifying the corresponding partition keyword:&lt;br /&gt;
&lt;br /&gt;
 #SBATCH --partition=cpu2013&lt;br /&gt;
 #SBATCH --partition=lattice&lt;br /&gt;
 #SBATCH --partition=parallel&lt;br /&gt;
&lt;br /&gt;
There is an additional partition called &#039;&#039;&#039;single&#039;&#039;&#039; that provides nodes similar to the lattice partition, but is intended for single-node jobs. Select the single partition with&lt;br /&gt;
&lt;br /&gt;
 #SBATCH --partition=single&lt;br /&gt;
&lt;br /&gt;
=== Time limits ===&lt;br /&gt;
Use the &amp;lt;code&amp;gt;--time&amp;lt;/code&amp;gt; directive to tell the job scheduler the maximum time that your job might run.  For example:&lt;br /&gt;
 #SBATCH --time=hh:mm:ss&lt;br /&gt;
&lt;br /&gt;
You can use &amp;lt;code&amp;gt;scontrol show partitions&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; to see the current maximum time that a job can run.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; highlight=&amp;quot;6&amp;quot;&amp;gt;&lt;br /&gt;
$ scontrol show partitions&lt;br /&gt;
PartitionName=single                                                                 &lt;br /&gt;
   AllowGroups=ALL AllowAccounts=ALL AllowQos=ALL                                    &lt;br /&gt;
   AllocNodes=ALL Default=NO QoS=single                                              &lt;br /&gt;
   DefaultTime=NONE DisableRootJobs=NO ExclusiveUser=NO GraceTime=0 Hidden=NO        &lt;br /&gt;
   MaxNodes=UNLIMITED MaxTime=7-00:00:00 MinNodes=1 LLN=NO MaxCPUsPerNode=UNLIMITED  &lt;br /&gt;
   Nodes=cn[001-168]                                                                 &lt;br /&gt;
   PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO        &lt;br /&gt;
   OverTimeLimit=NONE PreemptMode=OFF                                                &lt;br /&gt;
   State=UP TotalCPUs=1344 TotalNodes=168 SelectTypeParameters=NONE                  &lt;br /&gt;
   DefMemPerNode=UNLIMITED MaxMemPerNode=UNLIMITED                                   &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, with &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; under the &amp;lt;code&amp;gt;TIMELIMIT&amp;lt;/code&amp;gt; column:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sinfo                                                     &lt;br /&gt;
PARTITION  AVAIL  TIMELIMIT  NODES  STATE NODELIST               &lt;br /&gt;
single        up 7-00:00:00      1 drain* cn097                  &lt;br /&gt;
single        up 7-00:00:00      1  maint cn002                  &lt;br /&gt;
single        up 7-00:00:00      4 drain* cn[001,061,133,154]    &lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
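&lt;br /&gt;
To check a single partition without scanning the full listing, the &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; output can be restricted and formatted (cpu2019 here is just an example):&lt;br /&gt;

```shell
# Show only the partition name (%P) and its time limit (%l)
sinfo --partition=cpu2019 --format="%P %l"
```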
&lt;br /&gt;
== Support ==&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=[[Support|Need Help or have other ARC Related Questions?]]&lt;br /&gt;
|message=For all general RCS related issues, questions, or comments, please contact us at support@hpc.ucalgary.ca.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t hesitate to [[Support|contact us]] directly by email if you need help using ARC or require guidance on migrating and running your workflows to ARC.&lt;br /&gt;
&lt;br /&gt;
[[Category:ARC]]&lt;br /&gt;
[[Category:Guides]]&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack_User_Guide&amp;diff=1904</id>
		<title>CloudStack User Guide</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack_User_Guide&amp;diff=1904"/>
		<updated>2022-06-09T20:35:15Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is a user&#039;s guide on using CloudStack provided by Research Computing Services.&lt;br /&gt;
&lt;br /&gt;
== Introduction==&lt;br /&gt;
Apache CloudStack is an Infrastructure as a Service (IaaS) platform that allows users to quickly spin up Linux/Non-Windows based virtual machines. RCS is providing this service to help researchers quickly set up and prototype research related software on premises. CloudStack is not appropriate for workloads that depend on Windows. Services set up on CloudStack virtual machines can be accessed from the campus network and also the internet if required.&lt;br /&gt;
&lt;br /&gt;
Access to CloudStack can be requested via [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c ServiceNow].&lt;br /&gt;
&lt;br /&gt;
Please refer to our [[CloudStack End User Agreement]] for acceptable uses and requirements.&lt;br /&gt;
&lt;br /&gt;
== Using your virtual machine ==&lt;br /&gt;
You will be able to run whatever virtual machine you wish (with the exception of Windows).  Clearly we cannot provide specific management advice on each and every operating system available.  We can provide you with some suggestions on important considerations to be aware of.&lt;br /&gt;
&lt;br /&gt;
=== Educate yourself ===&lt;br /&gt;
All operating systems (OSs) have user groups, web sites, wikis, or mailing lists somewhere on the internet. They can be a valuable resource. Most OS providers have online documentation that describes using their product. For example, Rocky Linux, used by RCS, has a [https://docs.rockylinux.org/ documentation site]. These are excellent resources and can help you understand how to manage your virtual machine.&lt;br /&gt;
=== Keep security in mind===&lt;br /&gt;
To help keep our network and infrastructure safe from cyber attacks, it is critical that your VMs are properly configured to reduce the number of ways that attackers could exploit them. Here are some common tasks that can help harden your VM:&lt;br /&gt;
*Ensure that the only services running on your VM are the ones you must run.  Each OS has a way of managing what services are running (sysinit, systemd etc).  Please ensure that unnecessary services have been disabled.&lt;br /&gt;
&lt;br /&gt;
*Disable or delete any unused accounts. Many OSs will have pre-configured accounts, and many applications will have pre-configured accounts.  Make sure they are either disabled or not allowed to login.&lt;br /&gt;
&lt;br /&gt;
*All accounts should have strong [https://it.ucalgary.ca/it-security/passwords-do-i-have-change-them/ passwords].&lt;br /&gt;
&lt;br /&gt;
*Many OSs can automatically update themselves. If possible, please consider enabling this. Updates can also be configured to skip certain software if it would interfere with your research, but please be advised that doing so could place your system at risk.&lt;br /&gt;
&lt;br /&gt;
*If your VM must be exposed to the internet, consider using some kind of end-point security tool to help monitor for and block cyber attacks.&lt;br /&gt;
&lt;br /&gt;
== Accessing CloudStack ==&lt;br /&gt;
===Accessing the CloudStack management console===&lt;br /&gt;
&lt;br /&gt;
The CloudStack management console is a web-based portal that allows you to view and manage your cloud infrastructure, including virtual machines, storage, and networks. Any modern web browser, including Chrome, Firefox, Edge, and Safari, is supported. &lt;br /&gt;
&lt;br /&gt;
Accessing the CloudStack management console is possible only from an IT-managed computer or through the IT General VPN when working on unmanaged machines (e.g. on AirUC) or when working off campus (e.g. at home). Please review the IT [https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=52a169d6dbe5bc506ad32637059619cd knowledge base article on connecting to the General VPN] or contact IT support if you need assistance connecting to the General VPN. &lt;br /&gt;
[[File:CloudStack VPN Connection.png|alt=CloudStack VPN Connection|none|thumb|CloudStack VPN Connection]]&lt;br /&gt;
=== Login to CloudStack===&lt;br /&gt;
&lt;br /&gt;
To log in to CloudStack, navigate to https://cloudstack.rcs.ucalgary.ca/. If this site fails to load, please make sure you are either on an IT-managed computer or connected to the General VPN.&lt;br /&gt;
&lt;br /&gt;
Sign in to CloudStack using the Single Sign-On option as shown in the image below. This method will require you to authenticate through our central authentication service using your University of Calgary IT credentials and will require multi-factor authentication. You must have multi-factor authentication set up either via your phone or with the Microsoft Authenticator app.&lt;br /&gt;
[[File:CloudStack Login Page.png|alt=CloudStack Login Page|none|thumb|CloudStack Login Page]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; Due to a bug with the UI, if the Single Sign-On option is disabled, please refresh the login page and try again. This issue should be addressed in our next update for CloudStack.&lt;br /&gt;
&lt;br /&gt;
=== CloudStack Dashboard===&lt;br /&gt;
&lt;br /&gt;
After logging in, you will be presented with your CloudStack management console. The dashboard shows you a general overview of your account&#039;s status.&lt;br /&gt;
[[File:CloudStack Dashboard.png|alt=CloudStack Dashboard|none|thumb|CloudStack Dashboard]]On the right-hand side of the dashboard, you will also see recent activity and events that occurred within your CloudStack account.&lt;br /&gt;
&lt;br /&gt;
If you wish to see your CloudStack account resource quota and allocation, navigate to: &amp;lt;code&amp;gt;Accounts -&amp;gt; Click on your account -&amp;gt; Resources&amp;lt;/code&amp;gt;. &lt;br /&gt;
[[File:CloudStack Resource Quota.png|alt=CloudStack Resource Quota|none|thumb|CloudStack Resource Quota]]&lt;br /&gt;
&lt;br /&gt;
== Working with virtual machines==&lt;br /&gt;
&lt;br /&gt;
CloudStack allows you to control the lifecycle of virtual machines within your cloud account. VMs may be started, stopped, rebooted, or destroyed within your management console.&lt;br /&gt;
&lt;br /&gt;
===Create a VM===&lt;br /&gt;
&lt;br /&gt;
To create a new VM, enter the CloudStack management console and navigate to:  &amp;lt;code&amp;gt;Compute -&amp;gt; Instances -&amp;gt; Add Instance&amp;lt;/code&amp;gt;[[File:CloudStack Instance Summary.png|alt=CloudStack Instance Summary|thumb|CloudStack Instance Summary|493x493px]]&lt;br /&gt;
&lt;br /&gt;
Virtual Machines require the following details:&lt;br /&gt;
&lt;br /&gt;
# &#039;&#039;&#039;Deployment zone&#039;&#039;&#039;. Your account will already be placed in the appropriate zone.&lt;br /&gt;
# &#039;&#039;&#039;Boot template or ISO&#039;&#039;&#039;. You may choose either a pre-created template or boot from a custom CD-ROM ISO file.&lt;br /&gt;
# &#039;&#039;&#039;Compute offering&#039;&#039;&#039;. You may select an appropriate size for your new VM. Resources will be counted against your account&#039;s quota.&lt;br /&gt;
# &#039;&#039;&#039;Data Disk&#039;&#039;&#039;. You may choose to add an additional virtual disk to your VM to store your data. Alternatively, if you wish to use a single virtual disk for your VM, you may choose to override the size of your root disk in step 2 and select &#039;No thanks&#039; in this step.&lt;br /&gt;
# &#039;&#039;&#039;Networks&#039;&#039;&#039;. You may choose one or more networks your VM should connect to. All CloudStack accounts come with a default network already created and ready to be used.&lt;br /&gt;
# &#039;&#039;&#039;SSH keypairs&#039;&#039;&#039;. For templates that support custom SSH key pairs, you may choose to use a custom SSH keypair to be installed as part of the deployment process.&lt;br /&gt;
# &#039;&#039;&#039;Advanced settings&#039;&#039;&#039;. For templates that support custom user-data (Cloud-Init), you may choose to enable the advanced settings and provide your own Cloud-Init user-data payload. More on this in the advanced tasks section below.&lt;br /&gt;
# &#039;&#039;&#039;Other VM details&#039;&#039;&#039;. You may give your new VM a friendly name and make it part of a group. Groups allow you to group related VMs together for better organization. You may change these details at a later time.&lt;br /&gt;
&lt;br /&gt;
When you are done, review the instance summary on the right hand side and then click on the &#039;Launch Virtual Machine&#039; button.&lt;br /&gt;
&lt;br /&gt;
=== Selecting your VM Operating system ===&lt;br /&gt;
Many OSs provide various editions tailored to specific use cases. A desktop VM may not be appropriate when you need to run a database server. The OS provider will have guides on how to choose an edition.&lt;br /&gt;
&lt;br /&gt;
You may choose to install the operating system to your virtual machine using either pre-built templates or from scratch using an ISO image.&lt;br /&gt;
&lt;br /&gt;
====Install from a virtual machine template====&lt;br /&gt;
&lt;br /&gt;
We provide a Rocky Linux 8.5 and an Ubuntu Server 22.04 LTS template for your convenience. These templates are pre-built images with the operating system installed and ready for use. Our templates also support further automated setup via Cloud-Init configuration data that can be provided when deploying a new VM. Currently, we offer the following templates: &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Template&lt;br /&gt;
!Cloud-Init Support&lt;br /&gt;
!Password Support&lt;br /&gt;
!Default Username&lt;br /&gt;
|-&lt;br /&gt;
|Rocky Linux 8.5&lt;br /&gt;
|Yes&lt;br /&gt;
|Yes&lt;br /&gt;
|rocky&lt;br /&gt;
|-&lt;br /&gt;
|Ubuntu Server 22.04&lt;br /&gt;
|Yes&lt;br /&gt;
|Yes&lt;br /&gt;
|ubuntu&lt;br /&gt;
|}&lt;br /&gt;
Rocky Linux is an open source Linux distribution that is binary-compatible with Red Hat Enterprise Linux and is what RCS recommends.&lt;br /&gt;
&lt;br /&gt;
For templates that support passwords, the generated password that appears after a VM is created is applied to the default username.&lt;br /&gt;
&lt;br /&gt;
Security note: All VM templates are configured with SSH password authentication enabled. You should be able to SSH to your VM from another system connected to the same guest network. Do not expose port 22 unless required, and we highly recommend using key-based authentication.&lt;br /&gt;
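&lt;br /&gt;
For key-based authentication, a keypair can be generated on your own computer and the public key supplied under &#039;SSH keypairs&#039; when deploying the VM; the key file name below is just an example:&lt;br /&gt;

```shell
# Generate an ed25519 keypair (no passphrase here for brevity; using one is safer)
mkdir -p ~/.ssh
ssh-keygen -t ed25519 -f ~/.ssh/cloudstack_vm -N "" -q

# The .pub file is what you register in CloudStack; connect later with:
#   ssh -i ~/.ssh/cloudstack_vm rocky@<vm-ip-address>
cat ~/.ssh/cloudstack_vm.pub
```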
&lt;br /&gt;
===== Virtual machine credentials =====&lt;br /&gt;
VM templates with password support will have a randomly generated password set when the VM is first created or when a password reset request is made (available only when the VM is powered off). The randomly generated 6-character password is displayed as a notification in your CloudStack management console whenever a new password is set. &lt;br /&gt;
[[File:CloudStack VM Password.png|alt=CloudStack VM Password|none|thumb|CloudStack VM Password]]&lt;br /&gt;
This password is set on the default username for your template. For example, the Rocky Linux VM template will set this password on the &#039;&#039;&#039;rocky&#039;&#039;&#039; user account. You may become the super user by logging in as the &amp;lt;code&amp;gt;rocky&amp;lt;/code&amp;gt; user and then running &amp;lt;code&amp;gt;sudo su&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note: If you specify a custom Cloud-Init config that creates additional users or sets account passwords, the displayed password will be overridden and have no effect.&lt;br /&gt;
&lt;br /&gt;
==== Install from an ISO image====&lt;br /&gt;
We provide various ISO images for popular Linux distributions. You may select one of these ISO images instead of using a pre-built template when deploying a new virtual machine. We currently provide:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Distribution&lt;br /&gt;
!ISO&lt;br /&gt;
|-&lt;br /&gt;
|Ubuntu 20.04&lt;br /&gt;
|ubuntu-20.04.4-desktop-amd64.iso&lt;br /&gt;
&lt;br /&gt;
ubuntu-20.04.4-live-server-amd64.iso&lt;br /&gt;
|-&lt;br /&gt;
|Ubuntu 21.10&lt;br /&gt;
|ubuntu-21.10-desktop-amd64.iso&lt;br /&gt;
&lt;br /&gt;
ubuntu-21.10-live-server-amd64.iso&lt;br /&gt;
|-&lt;br /&gt;
|Ubuntu 22.04&lt;br /&gt;
|ubuntu-22.04-live-server-amd64.iso&lt;br /&gt;
|-&lt;br /&gt;
|Rocky Linux 8.5&lt;br /&gt;
|Rocky-8.5-x86_64-minimal.iso&lt;br /&gt;
|-&lt;br /&gt;
|Fedora 35&lt;br /&gt;
|Fedora-Workstation-Live-x86_64-35-1.2.iso&lt;br /&gt;
|}&lt;br /&gt;
You may install a custom ISO file into your CloudStack account either by directly uploading the ISO through the web console or by providing a URL to the ISO file on the internet.&lt;br /&gt;
&lt;br /&gt;
Please do not install Windows on our CloudStack infrastructure. It is against our user agreement to run Windows based systems in this infrastructure. If you need a Windows VM, please contact us for alternative solutions.&lt;br /&gt;
&lt;br /&gt;
===== Register an ISO with a URL=====&lt;br /&gt;
[[File:CloudStack Download ISO.png|alt=CloudStack Download ISO|thumb|CloudStack Download ISO|190x190px]]&lt;br /&gt;
&lt;br /&gt;
To add a custom ISO file from the internet, enter the CloudStack management console and navigate to:  &amp;lt;code&amp;gt;Images -&amp;gt; ISOs -&amp;gt; Register ISO&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You may check the state of the ISO file by clicking on it. If the file has downloaded successfully, its ready state should become ‘true’. The ISO file will only appear in the selection list once it has downloaded successfully.&lt;br /&gt;
[[File:CloudStack ISO Ready.png|alt=CloudStack ISO Ready|none|thumb|172x172px|CloudStack ISO Ready]]&lt;br /&gt;
&lt;br /&gt;
=====Upload a custom ISO=====&lt;br /&gt;
&lt;br /&gt;
To upload an ISO file, enter the CloudStack management console and navigate to:  &amp;lt;code&amp;gt;Images -&amp;gt; ISOs -&amp;gt; Upload ISO from Local (icon)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:CloudStack Upload ISO.png|alt=CloudStack Upload ISO|none|thumb|CloudStack Upload ISO|217x217px]]&lt;br /&gt;
&lt;br /&gt;
===Connecting to your VM console===&lt;br /&gt;
The CloudStack management console has a KVM (keyboard, video, mouse) feature built-in, allowing you to remotely connect to and interact with your virtual machine. To connect to your virtual machine&#039;s console, navigate to: &amp;lt;code&amp;gt;Compute -&amp;gt; Instances -&amp;gt; Your Instance -&amp;gt; View console&amp;lt;/code&amp;gt;.&lt;br /&gt;
[[File:CloudStack View Console.png|alt=CloudStack View Console|none|thumb|CloudStack View Console]]&lt;br /&gt;
&lt;br /&gt;
=== Expanding a VM disk ===&lt;br /&gt;
[[File:CloudStack Expand Volume.png|alt=CloudStack Expand Volume|thumb|CloudStack Expand Volume]]&lt;br /&gt;
Virtual machine disks can be expanded after they are created within CloudStack. However, you will need to expand the partitions and filesystems manually.&lt;br /&gt;
&lt;br /&gt;
To grow an existing disk:&lt;br /&gt;
&lt;br /&gt;
# Go into your VM details page and click on ‘Volumes’.&lt;br /&gt;
# Click on the volume you wish to expand.&lt;br /&gt;
# Click on the ‘Resize Volume’ icon in the top right.&lt;br /&gt;
Once the volume has been expanded, you can verify that the disk has grown with &amp;lt;code&amp;gt;lsblk&amp;lt;/code&amp;gt;; the kernel also logs messages when this occurs. However, you will still need to expand any partitions, volumes, and filesystems on your system manually.&lt;br /&gt;
&lt;br /&gt;
To expand your partition, use the &amp;lt;code&amp;gt;growpart&amp;lt;/code&amp;gt; command followed by your disk device and partition number, for example: &amp;lt;code&amp;gt;/usr/bin/growpart /dev/vda 3&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For LVM volume sets, you can expand the volume using the &amp;lt;code&amp;gt;pvresize&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;lvresize&amp;lt;/code&amp;gt; commands:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/sbin/pvresize -y -q /dev/vda3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/sbin/lvresize -y -q -r -l +100%FREE /dev/mapper/&amp;lt;volume-name&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To expand your filesystem:&lt;br /&gt;
&lt;br /&gt;
* XFS: &amp;lt;code&amp;gt;/usr/sbin/xfs_growfs &amp;lt;volume&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
* EXT: &amp;lt;code&amp;gt;resize2fs &amp;lt;volume&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
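Putting the steps above together, the following is a dry-run sketch (it only prints the commands) for a typical LVM layout. The device name &amp;lt;code&amp;gt;/dev/vda&amp;lt;/code&amp;gt;, partition number 3, and logical volume path &amp;lt;code&amp;gt;/dev/mapper/rl-root&amp;lt;/code&amp;gt; are assumptions; confirm yours with &amp;lt;code&amp;gt;lsblk&amp;lt;/code&amp;gt; first, and remove the &amp;lt;code&amp;gt;echo&amp;lt;/code&amp;gt; to actually run each step.&lt;br /&gt;

```shell
# Dry-run sketch of a full disk expansion: partition -> LVM PV -> LV + filesystem.
# /dev/vda, partition 3, and /dev/mapper/rl-root are example values only.
DISK=/dev/vda
PART=3
LV=/dev/mapper/rl-root
echo "/usr/bin/growpart $DISK $PART"                # 1. grow the partition
echo "/usr/sbin/pvresize -y -q ${DISK}${PART}"      # 2. grow the LVM physical volume
echo "/usr/sbin/lvresize -y -q -r -l +100%FREE $LV" # 3. grow the LV and its filesystem (-r)
```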
&lt;br /&gt;
=== Destroying a VM ===&lt;br /&gt;
If you need to delete a VM, click on the red garbage bin icon in the VM instance page. All deletions are irreversible, so please make sure you have a copy of any data you need before proceeding.&lt;br /&gt;
[[File:CloudStack Delete VM.png|alt=CloudStack Delete VM|none|thumb|CloudStack Delete VM]]&lt;br /&gt;
The VM root volume can be deleted immediately by enabling the &#039;Expunge&#039; option in the dialog box. If left disabled, the VM root volume will linger for a day before it is deleted by the system. You may wish to expunge a volume if you are running low on space or volume quota.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Virtual machine networking ==&lt;br /&gt;
The CloudStack platform allows you to define a custom virtual private cloud (VPC) network, which can contain any number of guest networks that your virtual machines connect to. Each guest network has its own private network address space and is not directly routable from campus or the internet. For a virtual machine to reach the internet, the VPC or guest network it is connected to must have a NAT IP address associated with it. The following diagram shows how a guest network connects to the internet and the campus network.&lt;br /&gt;
[[File:CloudStack Guest Networking.png|alt=CloudStack Guest Networking|none|thumb|CloudStack Guest Networking]]&lt;br /&gt;
To expose a virtual machine&#039;s services to campus or the internet, the appropriate port forwarding rules must be set up on the VPC containing the guest network. This is covered in the sections below.&lt;br /&gt;
&lt;br /&gt;
Having multiple guest networks allows for more advanced network setups but is not required. We recommend using a single flat network for most workloads. &lt;br /&gt;
&lt;br /&gt;
By default, all CloudStack accounts come with a default VPC and guest network set up with a NAT IP assigned.&lt;br /&gt;
&lt;br /&gt;
=== IP addresses ===&lt;br /&gt;
Due to design decisions made during the setup of the CloudStack platform, only internal 10.44.12X.X IP addresses can be assigned to your VPC. These addresses are accessible from the university campus network; in addition, a special range of addresses can also be accessed from the internet.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!IP address range&lt;br /&gt;
!Accessible from&lt;br /&gt;
!Internet IP mapping&lt;br /&gt;
|-&lt;br /&gt;
|10.44.120.3-128&lt;br /&gt;
|Campus, Internet&lt;br /&gt;
|10.44.120.X maps to 136.159.140.X (ports 80 and 443 only)&lt;br /&gt;
|-&lt;br /&gt;
|10.44.120.129-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|10.44.121.0-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|10.44.122.0-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|10.44.123.0-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|}&lt;br /&gt;
If you need a service exposed to the internet, please request a public IP address using our [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c ServiceNow request form]. Additionally, if your service does not use port 80 or 443, you must also submit a firewall change request to allow the additional port through. &lt;br /&gt;
&lt;br /&gt;
=== Exposing a network service to campus ===&lt;br /&gt;
To make a virtual machine visible to the campus network, you must first set up a port forwarding rule from a campus IP address to the virtual machine.&lt;br /&gt;
&lt;br /&gt;
To create a port forwarding rule, navigate to &amp;lt;code&amp;gt;Network -&amp;gt; VPC -&amp;gt; Select your VPC -&amp;gt; Public IP Addresses&amp;lt;/code&amp;gt;. If you do not have any available IP addresses, click &#039;Acquire New IP&#039; and select an available IP address. Click on the IP address you wish to forward from, then open the &#039;Port Forwarding&#039; tab. Enter the private port range, the public port range, and the protocol, then select the target VM. &lt;br /&gt;
&lt;br /&gt;
For example, to port forward only HTTP (tcp/80) traffic, you would enter the following:&lt;br /&gt;
[[File:CloudStack Port Forwarding.png|alt=CloudStack Port Forwarding|none|thumb|CloudStack Port Forwarding]]Once the port forwarding is created, you should be able to access the service from on campus. If for some reason access to your service does not work, there may be a firewall restriction on IT&#039;s network. In such circumstances, please contact us for assistance.&lt;br /&gt;
&lt;br /&gt;
=== Exposing a network to the internet ===&lt;br /&gt;
Exposing a service to the internet works the same way as exposing it to campus, except that you must create the port forwarding rule on an IP address that maps to an internet IP address, as outlined in the IP address table above. If your account does not have one of these IP addresses available, please request one through the [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c ServiceNow request form].&lt;br /&gt;
&lt;br /&gt;
By default, only ports 80 and 443 are allowed through the Internet IP address. For all other ports, please [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=47cd16d113153a00b5b4ff82e144b0bf create a firewall rule change request in ServiceNow].&lt;br /&gt;
&lt;br /&gt;
== Cloud-Init Automation ==&lt;br /&gt;
&lt;br /&gt;
=== Ubuntu ===&lt;br /&gt;
The following Cloud-Init configs apply to Ubuntu VM templates.&lt;br /&gt;
&lt;br /&gt;
==== Ubuntu desktop ====&lt;br /&gt;
Use the following Cloud-Init config with the Ubuntu Server template to set up an Ubuntu desktop environment. The setup takes up to 15 minutes to complete, and you should see a login screen when it finishes. &lt;br /&gt;
&lt;br /&gt;
Adjust the root and user passwords as desired. Below, the test user&#039;s password is set to blank, allowing you to log in to Gnome without a password.&lt;br /&gt;
{{Highlight|code=#cloud-config&lt;br /&gt;
disable_root: false&lt;br /&gt;
&lt;br /&gt;
users:&lt;br /&gt;
  - name: user&lt;br /&gt;
    lock_passwd: false&lt;br /&gt;
    inactive: false&lt;br /&gt;
    gecos: Test User&lt;br /&gt;
    primary_group: user&lt;br /&gt;
    groups: wheel&lt;br /&gt;
    passwd: $1$ADUODeAy$eCJ1lPSxhSGmSvrmWxjLC1&lt;br /&gt;
      &lt;br /&gt;
chpasswd:&lt;br /&gt;
  list: {{!}}&lt;br /&gt;
    root:password&lt;br /&gt;
  expire: false&lt;br /&gt;
    &lt;br /&gt;
# Install a graphical desktop&lt;br /&gt;
runcmd:&lt;br /&gt;
  - DEBIAN_FRONTEND=noninteractive apt -y upgrade&lt;br /&gt;
  - DEBIAN_FRONTEND=noninteractive apt -y install tasksel&lt;br /&gt;
  - tasksel install gnome-desktop&lt;br /&gt;
  - systemctl set-default graphical.target&lt;br /&gt;
  - systemctl isolate graphical.target|style=max-height: 300px;overflow:scroll;}}&lt;br /&gt;
&lt;br /&gt;
=== Rocky Linux ===&lt;br /&gt;
The following Cloud-Init configs apply to Rocky Linux templates.&lt;br /&gt;
&lt;br /&gt;
==== Rocky Linux Desktop ====&lt;br /&gt;
Use the following Cloud-Init config with the Rocky Linux template to set up a Rocky Linux desktop environment. The setup takes up to 10 minutes to complete, and you should see a login screen when it finishes. &lt;br /&gt;
&lt;br /&gt;
Adjust the root and user passwords as desired. Below, the test user&#039;s password is set to blank, allowing you to log in to Gnome without a password.&lt;br /&gt;
{{Highlight|code=#cloud-config&lt;br /&gt;
disable_root: false&lt;br /&gt;
&lt;br /&gt;
users:&lt;br /&gt;
  - name: user&lt;br /&gt;
    lock_passwd: false&lt;br /&gt;
    inactive: false&lt;br /&gt;
    gecos: Test User&lt;br /&gt;
    primary_group: user&lt;br /&gt;
    groups: wheel&lt;br /&gt;
    passwd: $1$ADUODeAy$eCJ1lPSxhSGmSvrmWxjLC1&lt;br /&gt;
      &lt;br /&gt;
chpasswd:&lt;br /&gt;
  list: {{!}}&lt;br /&gt;
    root:password&lt;br /&gt;
  expire: false&lt;br /&gt;
    &lt;br /&gt;
# Install a graphical desktop&lt;br /&gt;
runcmd:&lt;br /&gt;
  - yum -y install &amp;quot;@Workstation&amp;quot;&lt;br /&gt;
  - systemctl set-default graphical.target&lt;br /&gt;
  - systemctl isolate graphical.target}}&lt;br /&gt;
&lt;br /&gt;
==== Rocky Linux Docker host ====&lt;br /&gt;
Use the following Cloud-Init config with the Rocky Linux template to set up a new Docker host, which can then be used to run Docker containers. Also included are:&lt;br /&gt;
&lt;br /&gt;
# The docker-compose utility to help deploy container stacks more easily&lt;br /&gt;
# A helper script to expand the &amp;lt;code&amp;gt;/var&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; filesystems on first startup based on the available space in the ROOT volume. &lt;br /&gt;
&lt;br /&gt;
Log in with the CloudStack-generated password and the default &#039;&amp;lt;code&amp;gt;rocky&amp;lt;/code&amp;gt;&#039; user account.{{Highlight|code=#cloud-config&lt;br /&gt;
&lt;br /&gt;
write_files:&lt;br /&gt;
  - path: /usr/bin/expand_lvm_root&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      /usr/bin/growpart /dev/vda 3&lt;br /&gt;
      /usr/sbin/pvresize -y -q /dev/vda3&lt;br /&gt;
      /usr/sbin/lvresize -y -q -r -l +50%FREE /dev/mapper/*root&lt;br /&gt;
      /usr/sbin/lvresize -y -q -r -l +100%FREE /dev/mapper/*var&lt;br /&gt;
      /usr/sbin/xfs_growfs /&lt;br /&gt;
      /usr/sbin/xfs_growfs /var&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
&lt;br /&gt;
  - path: /usr/bin/setup_docker&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      yum install -y yum-utils&lt;br /&gt;
      yum-config-manager \&lt;br /&gt;
         --add-repo \&lt;br /&gt;
         https://download.docker.com/linux/centos/docker-ce.repo&lt;br /&gt;
      yum install -y docker-ce docker-ce-cli containerd.io&lt;br /&gt;
      systemctl start docker&lt;br /&gt;
      systemctl enable docker&lt;br /&gt;
      &lt;br /&gt;
      curl -L &amp;quot;https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)&amp;quot; -o /usr/local/bin/docker-compose&lt;br /&gt;
      chmod +x /usr/local/bin/docker-compose&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
&lt;br /&gt;
  - path: /root/docker-compose.yml&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      version: &#039;3.3&#039;&lt;br /&gt;
      services:&lt;br /&gt;
        web:&lt;br /&gt;
          image: php:7.4-apache&lt;br /&gt;
          restart: always&lt;br /&gt;
          user: &amp;quot;0:0&amp;quot;&lt;br /&gt;
          volumes:&lt;br /&gt;
            - /var/www/html:/var/www/html&lt;br /&gt;
          ports:&lt;br /&gt;
            - &amp;quot;80:80&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  - path: /var/www/html/index.php&lt;br /&gt;
    permissions: &#039;0644&#039;&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      &amp;lt;h1&amp;gt;Hello there!&amp;lt;/h1&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;I see you from &amp;lt;?php echo $_SERVER[&#039;REMOTE_ADDR&#039;]; ?&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;&amp;lt;nowiki&amp;gt;pre&amp;lt;/nowiki&amp;gt;&amp;gt;&amp;lt;?php print_r($_SERVER); ?&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
    &lt;br /&gt;
# Ensure VM has the largest / possible&lt;br /&gt;
runcmd:&lt;br /&gt;
  - /usr/bin/expand_lvm_root&lt;br /&gt;
  - /usr/bin/setup_docker&lt;br /&gt;
  - cd /root; docker-compose up -d|style=max-height: 300px;overflow:scroll;}}&lt;br /&gt;
&lt;br /&gt;
==== UC Authentication ====&lt;br /&gt;
Use the following Cloud-Init config to allow UC-based authentication. A local account with the same username as an IT account can then use the IT credentials to log in.&lt;br /&gt;
{{Highlight|code=#cloud-config&lt;br /&gt;
&lt;br /&gt;
write_files:&lt;br /&gt;
  - path: /usr/bin/setup_uc_auth&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      yum install -y sssd sssd-dbus sssd-krb5 krb5-workstation authselect-compat&lt;br /&gt;
      &lt;br /&gt;
      cat &amp;lt;&amp;lt;EOF &amp;gt; /etc/sssd/sssd.conf&lt;br /&gt;
      [sssd]&lt;br /&gt;
      config_file_version = 2&lt;br /&gt;
      services = nss, pam, ifp&lt;br /&gt;
      domains = uc.ucalgary.ca&lt;br /&gt;
      &lt;br /&gt;
      [domain/uc.ucalgary.ca]&lt;br /&gt;
      id_provider = files&lt;br /&gt;
      debug_level = 5&lt;br /&gt;
      auth_provider = krb5&lt;br /&gt;
      chpass_provider = krb5&lt;br /&gt;
      &lt;br /&gt;
      krb5_realm = UC.UCALGARY.CA&lt;br /&gt;
      krb5_server = ITSODCSRV14.UC.UCALGARY.CA:88&lt;br /&gt;
      krb5_validate = false&lt;br /&gt;
      EOF&lt;br /&gt;
      chmod 600 /etc/sssd/sssd.conf&lt;br /&gt;
      &lt;br /&gt;
      cat &amp;lt;&amp;lt;EOF &amp;gt; /etc/krb5.conf&lt;br /&gt;
      [logging]&lt;br /&gt;
       default = FILE:/var/log/krb5libs.log&lt;br /&gt;
       kdc = FILE:/var/log/krb5kdc.log&lt;br /&gt;
       admin_server = FILE:/var/log/kadmind.log&lt;br /&gt;
      &lt;br /&gt;
      [libdefaults]&lt;br /&gt;
       default_realm = UC.UCALGARY.CA&lt;br /&gt;
       dns_lookup_realm = false&lt;br /&gt;
       dns_lookup_kdc = false&lt;br /&gt;
       ticket_lifetime = 24h&lt;br /&gt;
       renew_lifetime = 7d&lt;br /&gt;
       forwardable = true&lt;br /&gt;
      &lt;br /&gt;
      [realms]&lt;br /&gt;
       UC.UCALGARY.CA = {&lt;br /&gt;
        kdc = itsodcsrv14.uc.ucalgary.ca&lt;br /&gt;
       }&lt;br /&gt;
      &lt;br /&gt;
      [domain_realm]&lt;br /&gt;
       uc.ucalgary.ca = UC.UCALGARY.CA&lt;br /&gt;
       .uc.ucalgary.ca = UC.UCALGARY.CA&lt;br /&gt;
      EOF&lt;br /&gt;
      &lt;br /&gt;
      mkdir -p /etc/authselect/custom/rcs&lt;br /&gt;
      cd /etc/authselect/custom/rcs&lt;br /&gt;
      echo H4sIAMYsbGIAA+0ca3PbuDGf+StQ34dLMpEtybLdUevO6BKl56mTuJGv7Uwmo4FISMKJBHQAaEfNub+9uwBJkXrLshxnjjuTyCKJfWF3sQsuFPhS9CtB79keoQpw2mjYT4DZz1r1uPGs1qhX6yf12vFx/Vm1Vqs1jp+R6j6ZSiHWhipCnikpzarn1t3/TuErj8bhhBzccjOs6Igq41MVHBDen71WUey3mCsWHNx5W4wKpT+qSAGjI3lDQxj8SarB0UDIiB2FcsBFRfuKMfHZY4L2QpYbS2MzZMJwnxouxfnXefQHTaNi9nufhprdpQj6XAyYGisuzHIUuYcWIhlTrW/lRkxMNdO0OH5HdHdeXlDNjAGCuhJQ0IM4GjPFx0Om4PGjDM9n8pULP4wDtpkik78q1Lec/WifcMr8kSyE7fB/a9ssYf8Q2PiPc6/3RmNN/K826mez8f/07LSM/48BR4uD8dGaWOytGbc8BK8buSTu5odtFEyPitGxDH2LYXai9kFjjf8fn4HPT/3/GPy/fnZ6Uvr/Y8BXiP/gTHHOL/KZ0Z2HRpGmD2meszi5mIExjbpM3BxquRuOgIkJICE/kE7qsKQYGwjXRbSrfD2Xxe7EVp/yEOMGsjZWzOLSPASuCHJ83iCxwPtdwyN2XqtXqwvYSpHMMKPjfp/7HHFtyIydrWBnXeeFQkT4fak4ZImulwh134n3POr7Mk50sTUWGMy0RoE2hnmRAJHDgzLtwk1ewffmJq/gXZiJBf9iTSaHZGvTA0ZoGGumdsakY1AxC7q8j9qJeUD+SmqwPhAQi5kC7k8B69M4NOc9GhA7TutzOSLIB4g1EvJWnHPIFhT77HBbE9hNXZBmRNxYi0yTlHvhyexaA9cYvhDkGCMZDTfGMmITLiw7wMGNHLECuq2ZCjnIZnVUyeHZmis90YZFNhDtJJ0Mgl9lrxuNhpDwBVxZi4ioHp1Xq2dni0bNu0k2GPwkz8yn1GBqJDWj1FSWCFUwTDCxG+4DHUF8JUXgzBNNrws2u9sspB65k+4SW99s/RdgyNz4w0OsA/eUY6yr/05rtdn8r35S1n+PAjaSBU1rPGA4pA8pjCaJG6NbsS8zbuWDxmTk6jTMogZKxmOHYDsMdiAgEMxkOKYIcp69DEE60CUaMsLQ3twSRzbQBgnr2XprRtKBiCMOJFMWRaoJXcAxn5rCiG9XcRbK7T3RWOP/pyen+fqvgf5/XC3rv0eBb17eYTIbsJDaGs/+cV53dlGWjfsuG+O6zWh8qP43hYW1UQUQ7aqYHC9CYh4P6fx9eanU+3SGn6xiqBGX7CU5X656mKZ/i6uRv52T+rQa2QV3sWbaaQZdwkjwrZSQxqlBxqYiYrCQ0UHTfd4RoybdPlfadDHiz0+V5mYjI1ivmm4i9W6CuSSW9KW6BfdbwnS5tVFubSxmJvUL0oM4wkRXD2kgb70fcnojt0CKUCB3S5IrYG3EZ8pQqO1sZqkJ+JLig6H5C/CgMUaPmQjgsiA6jFW0247Hg0yg5WNXfVkkXdDR2GzKzmacPLH9pPr3sZ20TTQe3/4W05Abm+0UwzyxyuyiGLorRTgpULr3YgPOdFKrE+dT2y49dqMEg6eRo93YSZaIZfjKfbkcmnJf7jvbl3ssGEtt7Gv4PdJYt/93Um/M1P81iC9l/f8YsKFFWydAP7eeZuvVTX13uaf+6X9kEEQv8991/DJZnwvYswJrLhTYGEmtDSNqWKlNNCa5Jj3LayV55KDpvv+uh/IWc00W3G3q1DOUkqIdatVxQGG1nmJ8Yh6+Gj62W2/etfdLY43/n1VPT3P+XwP/rzaOy/3/R4G27boinU7njS1/XKIwu1P2nIauGHcZpU2MsQgKJy+8813B8zosZD62dBEz5JqMlcStc1egsRyDVMMD4GoyVhAuZJ/wAJk0E4+Kud09wHID95U+BAI4OrmgCYWIY3C46x8DpJJEVNABSwtBg8lbJMGtIU0B1
qTiTC8iEjF/SAXXkca6YYgcXr5pXb0i/2Cqx5QEJSnyVjF2cdU6JBcmYwKQkfedDkGcV613kLUYpvoUxDISdzycoMlkIF0yDuPBwOqiR32oaYPktgfM+lIIYNMKAnGSj0PkvN9nCmNUWqY4vaE+3uHuFBcwoZGTg/Ygc3dK9oGzHoPJjoEE7pIaOyO/Iv4x6KjpDY0Z6+bREXyLFTvk8ggHHkF+E3jeT5N0BXhVMBaK+2EQ6CHr6isZOVqKgpgKZRXJw+6dCYRvvO+BMbj5D8hziK/sxj08yU8DIzecZlnai0NyjTY0pDcMlQZKRSGFz7weE6zPQQ2QkbJUVn/InFE5wtgNhSqDJLIPsR4+KxEYgpq4R+1UmGGsPcgeY9AlEXEE82yNiesRrB9MWY2iln92HL9CcSYyJoG0hdItFXamRoyNEzXEQqD1g7bhOQXG6A+5YK/sKJgPzz5a8A1tXQb0ghz9GuPWBNdTT0mWVFQGw50LNAox8PAlL4dJc5Nu/Usb/B+KphEIr9DYwwkBtvG2GTpsCWpHLDep1iOSjRKLrIfEaDLD+Ql1bhROUl98/eH924u///KxdX3x4b1XmQPPa8EMOyFBb6hoiUpDH0MEP9qpmspySK5CRjWozNJ3j3iB9OMIptOJazDzYATsiDiXscPBDwx6P9TQwB75ADGtIGIqF72Bxd0qGFFNnRPmFm2Y4dsEIQn7wrWNZM6+CjyCUP9qXVy2frpskw9XKHnrkrxtt65/+djuLFBCThuFzbFm08OEJAndqXfjHSSM9Q9oAk3SSBvYJsiHZn5s+A2zQ2eiGCIGTYDN4k2vWKXNUMP3pTDKhyKLudFACB/Nh0q7VKSLBOqHK4vEbgMQW28cJkJlr1jm6OQ5tLaYPYoeC1Y3cLaQzj2aCITPITUWUfY0RObxWCqYZXQTnEpczNiXcch9bq09CTFIhAs7eKGNzXI82zecSHAJV4lraia3GNlojhf7zgmeZsE8urQsTfBs8t7qkLyXRNogGjEKSxlMxkzjNKJ67opwNI90i+RF5rGJ+Ck/ufbP9VOSe9h6no3Pbmpe5hoTX6bIkxc0KxFjOIdnwO3FIGRFfHB9Fhe+7Cniq0NUgqUUTPG+qItVQ4L9jQvfEIEx9OfrAdC91pg+BLFCHRcNPA5kgqFlN5vxAkYg8I8sraFpUmNzMHxAxXYhBOsJAp762REz/lHSYpBTg8taEiKvhwzsL9laRgNOuUoDBeoEvPS/eaOe7tsVhQXixF0HeRWNmMEZltNNEc97c9GxEa1z1X598fbiNaY1/764fv0zedO6bv3U6qyObdMY9x4XagjCr0jalAQJmqE9cG1tM7bexAbedAkEJ1K3ihu4C84XJw+BA2PgqRQam9w62Jeof7vQ2jJP21wHNO47G0yWGyO95P2dRbuINK57OH14TQqWECbzhD0uYJWgmXMVmmdA1ReO0EHSTZMJvIDoDArXMjPF4DphtkCQtd1McWQNNVugmXbeTPFMm2q2QJR130zxZH0126BJGnByWNyVdUja/2m9u7rMGavnvST58sitQeibOP1zy4vnkbwVpR+QFJMsDpBixF9OIV2ykJJdaNkmayzxCkvsOo6yJX7KIKRn7TZpXXY+TNXwEnMIO8ya9PM/v/iudjhKWAUf2//85eJj+137/XVnXzTWnv85qc7s/+C3cv/nMeAdHWGfinKJc6F4xDwzq5Jc7ZemiqTDkpBVLLEwJEUzewuuqNgPrOjYOrjzKjPhFgVK6+ZXENRSyfWKRDutDXguAjbX0CWkYpeYA8ySsJ/Avp8k5+RaxewAUX2CO5/hGXc08aHk3ZuaFzVqFA6JJZrOVwNLdJ0UBOttbEPKjynztNMtkTe5sExW61E2yaCRLTsiqBxCKzOmhZDrzr+CXEUV7cot+Ji32k2z6S7GiE3w3brbRLUkYZDfH+AmwNp3lKup7gvW9e8VtYwXH0rT6yh/A01nlB9P2/kX8ZXZlHCJphc1A
aTavg9dQuymJtN2oxp3nC3yQoAoRoSHkdfGaLuD55vsJUNC+zClvQc9F+li7mEehuwausvW/+IB//3kGOvyv2L//wn2/zfqtTL/ewy45+//zJ8azyciT+JQwZPpsk/61GynaTfiGleOrqBQza/+LaHkr2kG+YBHD8oO6D9IB/RTbsZ9ooe7n37751Zsfdv2z7L/c8ruU+z/dJaxz9Of6/K/WrXWaMzkf7Wzk7My/3sMuF/+9yQSvH2cGt1n4vhA8U1IyLzdC57mIIiaX+DfCD9dy4GmN0w17Q+lVfJXRrkvxdB4n3OvmSwBvv4snnpMhQt4IlkuCZ7LaZOx9m/cNIZQazttyL2O4+7ww0kbmQhZyNbyH8wqT+R+hydyN1LQol+DWFYCZ9zU98jNyjcDy1131u9S1la4dM6d8WjX1JXvw0154jmjUZ54XoulrPeXMlOeeC5PPJcnnssTz+WJ52+95VXueGXsPsUdrxJKKKGEEkoooYQSSiihhBJKKKGEEkr448D/AT019aAAeAAA {{!}} base64 -d {{!}} tar -xzpf -&lt;br /&gt;
      &lt;br /&gt;
      if [[ &amp;quot;`authselect current -r`&amp;quot; != &amp;quot;custom/rcs&amp;quot; ]] ; then&lt;br /&gt;
        authselect select custom/rcs --force&lt;br /&gt;
        systemctl restart sssd&lt;br /&gt;
      fi&lt;br /&gt;
      &lt;br /&gt;
      systemctl enable sssd&lt;br /&gt;
&lt;br /&gt;
runcmd:&lt;br /&gt;
  - /usr/bin/setup_uc_auth|style=max-height: 300px;overflow:scroll;}}&lt;br /&gt;
&lt;br /&gt;
==== Rocky Linux NFS server ====&lt;br /&gt;
Because there is no built-in way to share storage among multiple VMs, a local NFS server can be useful when you need to share data between VMs.{{Highlight|code=#cloud-config&lt;br /&gt;
&lt;br /&gt;
write_files:&lt;br /&gt;
  - path: /usr/bin/expand_lvm_root&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      /usr/bin/growpart /dev/vda 3&lt;br /&gt;
      /usr/sbin/pvresize -y -q /dev/vda3&lt;br /&gt;
      /usr/sbin/lvresize -y -q -r -l +50%FREE /dev/mapper/*root&lt;br /&gt;
      /usr/sbin/lvresize -y -q -r -l +100%FREE /dev/mapper/*var&lt;br /&gt;
      /usr/sbin/xfs_growfs /&lt;br /&gt;
      /usr/sbin/xfs_growfs /var&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
&lt;br /&gt;
  - path: /usr/bin/setup_nfs&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      yum install -y nfs-utils&lt;br /&gt;
&lt;br /&gt;
      mkdir /export&lt;br /&gt;
      if [ -b /dev/vdb ] ; then&lt;br /&gt;
        mkfs.xfs /dev/vdb&lt;br /&gt;
        echo &amp;quot;/dev/vdb  /export     xfs    defaults    1 2&amp;quot; &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
        mount -a&lt;br /&gt;
      fi&lt;br /&gt;
&lt;br /&gt;
      ip a {{!}} grep -w inet {{!}} awk &#039;{print $2}&#039; {{!}} while read subnet ; do&lt;br /&gt;
        echo &amp;quot;/export     $subnet(rw,no_subtree_check,no_root_squash,async)&amp;quot; &amp;gt;&amp;gt; /etc/exports&lt;br /&gt;
      done&lt;br /&gt;
&lt;br /&gt;
      systemctl start nfs-server&lt;br /&gt;
      systemctl enable nfs-server&lt;br /&gt;
     &lt;br /&gt;
      exportfs -ra&lt;br /&gt;
&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
&lt;br /&gt;
runcmd:&lt;br /&gt;
  - /usr/bin/expand_lvm_root&lt;br /&gt;
  - /usr/bin/setup_nfs|style=max-height: 300px;overflow:scroll;}}NFS clients connected to the same network as the NFS server can then mount &amp;lt;code&amp;gt;/export&amp;lt;/code&amp;gt; using a command similar to: &amp;lt;code&amp;gt;mount -t nfs nfs-server:/export /mnt&amp;lt;/code&amp;gt;.&lt;br /&gt;
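To make the client mount persist across reboots, an &amp;lt;code&amp;gt;/etc/fstab&amp;lt;/code&amp;gt; entry can be used instead of a manual mount. A minimal sketch, assuming the server is reachable as &amp;lt;code&amp;gt;nfs-server&amp;lt;/code&amp;gt; on the guest network (replace it with your server&#039;s actual IP or hostname):&lt;br /&gt;

```shell
# Build the fstab entry for the NFS export; _netdev delays mounting until
# networking is up. "nfs-server" is a placeholder for your server's address.
FSTAB_LINE="nfs-server:/export  /mnt  nfs  defaults,_netdev  0 0"
echo "$FSTAB_LINE"   # append this line to /etc/fstab on the client, then run: mount -a
```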
&lt;br /&gt;
== Infrastructure tools ==&lt;br /&gt;
&lt;br /&gt;
=== Generating a CloudStack API key ===&lt;br /&gt;
You can generate a CloudStack API key to automate infrastructure deployment using Terraform or CloudMonkey. To generate a new API key, navigate to your profile page (top right) and click the &#039;Generate keys&#039; button.&lt;br /&gt;
[[File:CloudStack API Key.png|alt=CloudStack API Key|none|thumb|CloudStack API Key]]&lt;br /&gt;
&lt;br /&gt;
=== CloudMonkey ===&lt;br /&gt;
CloudMonkey is a utility that makes it easier to interact with the CloudStack API. This tool may be used to help automate VM actions (such as start/stop/reboot), or infrastructure tasks (such as creating/destroying VMs, networks, or firewall rules). &lt;br /&gt;
&lt;br /&gt;
To get started with CloudMonkey, refer to the following resources:&lt;br /&gt;
&lt;br /&gt;
* Download from: &amp;lt;nowiki&amp;gt;https://github.com/apache/cloudstack-cloudmonkey/releases/tag/6.1.0&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
* Documentation at: &amp;lt;nowiki&amp;gt;https://cwiki.apache.org/confluence/display/CLOUDSTACK/CloudStack+cloudmonkey+CLI&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
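A first-time setup might look like the following sketch. The API endpoint URL is a placeholder (use the address of your CloudStack management console), and the keys are the ones generated on your profile page; the commands run only if &amp;lt;code&amp;gt;cmk&amp;lt;/code&amp;gt; is installed.&lt;br /&gt;

```shell
# Sketch of configuring cmk (CloudMonkey 6.x) and listing your VMs.
# The endpoint URL and key values are placeholders, not real credentials.
API_URL="https://cloudstack.example.ucalgary.ca/client/api"
if command -v cmk >/dev/null; then
  cmk set url "$API_URL"               # point cmk at the management console API
  cmk set apikey "YOUR_API_KEY"        # from your profile page
  cmk set secretkey "YOUR_SECRET_KEY"
  cmk sync                             # download the API definitions
  cmk list virtualmachines filter=name,state
fi
```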
&lt;br /&gt;
=== Terraform Integration ===&lt;br /&gt;
Terraform allows you to define infrastructure as code and can be used in conjunction with CloudStack to configure your virtual machines and guest networks. Use the official CloudStack provider.&lt;br /&gt;
&lt;br /&gt;
The following is an example Terraform file for reference. Specify your CloudStack API keys as variables, for example in a separate &amp;lt;code&amp;gt;vars.tf&amp;lt;/code&amp;gt; file.&lt;br /&gt;
{{Highlight|code=# Configure the CloudStack Provider&lt;br /&gt;
terraform {&lt;br /&gt;
  required_providers {&lt;br /&gt;
    cloudstack = {&lt;br /&gt;
      source = &amp;quot;cloudstack/cloudstack&amp;quot;&lt;br /&gt;
      version = &amp;quot;0.4.0&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
provider &amp;quot;cloudstack&amp;quot; {&lt;br /&gt;
  api_url    = &amp;quot;${var.cloudstack_api_url}&amp;quot;&lt;br /&gt;
  api_key    = &amp;quot;${var.cloudstack_api_key}&amp;quot;&lt;br /&gt;
  secret_key = &amp;quot;${var.cloudstack_secret_key}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create a new VPC&lt;br /&gt;
resource &amp;quot;cloudstack_vpc&amp;quot; &amp;quot;default&amp;quot; {&lt;br /&gt;
  name = &amp;quot;wedgenet-vpc&amp;quot;&lt;br /&gt;
  display_text = &amp;quot;wedgenet-vpc&amp;quot;&lt;br /&gt;
  cidr = &amp;quot;100.64.0.0/20&amp;quot;&lt;br /&gt;
  vpc_offering = &amp;quot;Default VPC offering&amp;quot;&lt;br /&gt;
  zone = &amp;quot;zone1&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create a new ACL&lt;br /&gt;
resource &amp;quot;cloudstack_network_acl&amp;quot; &amp;quot;default&amp;quot; {&lt;br /&gt;
    name  = &amp;quot;vpc-acl&amp;quot;&lt;br /&gt;
    vpc_id = &amp;quot;${cloudstack_vpc.default.id}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# One ingress and one egress rule for the ACL&lt;br /&gt;
resource &amp;quot;cloudstack_network_acl_rule&amp;quot; &amp;quot;ingress&amp;quot; {&lt;br /&gt;
    acl_id = &amp;quot;${cloudstack_network_acl.default.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    rule {&lt;br /&gt;
        action       = &amp;quot;allow&amp;quot;&lt;br /&gt;
        cidr_list    = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
        protocol     = &amp;quot;tcp&amp;quot;&lt;br /&gt;
        ports        = [&amp;quot;22&amp;quot;, &amp;quot;80&amp;quot;, &amp;quot;443&amp;quot;]&lt;br /&gt;
        traffic_type = &amp;quot;ingress&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;cloudstack_network_acl_rule&amp;quot; &amp;quot;egress&amp;quot; {&lt;br /&gt;
    acl_id = &amp;quot;${cloudstack_network_acl.default.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    rule {&lt;br /&gt;
        action       = &amp;quot;allow&amp;quot;&lt;br /&gt;
        cidr_list    = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
        protocol     = &amp;quot;all&amp;quot;&lt;br /&gt;
        traffic_type = &amp;quot;egress&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Create a new network in the VPC&lt;br /&gt;
resource &amp;quot;cloudstack_network&amp;quot; &amp;quot;primary&amp;quot; {&lt;br /&gt;
    name = &amp;quot;primary&amp;quot;&lt;br /&gt;
    display_text = &amp;quot;primary&amp;quot;&lt;br /&gt;
    cidr = &amp;quot;100.64.1.0/24&amp;quot;&lt;br /&gt;
    network_offering = &amp;quot;DefaultIsolatedNetworkOfferingForVpcNetworks&amp;quot;&lt;br /&gt;
    acl_id = &amp;quot;${cloudstack_network_acl.default.id}&amp;quot;&lt;br /&gt;
    vpc_id = &amp;quot;${cloudstack_vpc.default.id}&amp;quot;&lt;br /&gt;
    zone = &amp;quot;zone1&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create a new public IP address for this network&lt;br /&gt;
resource &amp;quot;cloudstack_ipaddress&amp;quot; &amp;quot;public_ip&amp;quot; {&lt;br /&gt;
    vpc_id = &amp;quot;${cloudstack_vpc.default.id}&amp;quot;&lt;br /&gt;
    network_id = &amp;quot;${cloudstack_network.primary.id}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create VMs. &lt;br /&gt;
resource &amp;quot;cloudstack_instance&amp;quot; &amp;quot;vm&amp;quot; {&lt;br /&gt;
  count = 1&lt;br /&gt;
  name = &amp;quot;vm${count.index+1}&amp;quot;&lt;br /&gt;
  zone = &amp;quot;zone1&amp;quot;&lt;br /&gt;
  service_offering = &amp;quot;rcs.c4&amp;quot;&lt;br /&gt;
  template = &amp;quot;RockyLinux 8.5&amp;quot;&lt;br /&gt;
  network_id = &amp;quot;${cloudstack_network.primary.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  # Cloud Init data can be used to configure your VM on first startup if your template supports Cloud Init&lt;br /&gt;
  user_data = &amp;lt;&amp;lt;EOF&lt;br /&gt;
#cloud-config&lt;br /&gt;
&lt;br /&gt;
# Require specific packages&lt;br /&gt;
packages:&lt;br /&gt;
 - tmux&lt;br /&gt;
 - git&lt;br /&gt;
 - tcpdump&lt;br /&gt;
&lt;br /&gt;
EOF&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|style=max-height: 300px;overflow:scroll;}}&lt;br /&gt;
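To apply the example above, declare the three variables the provider block references and run the standard Terraform workflow. A sketch, assuming the key values are supplied via &amp;lt;code&amp;gt;TF_VAR_*&amp;lt;/code&amp;gt; environment variables rather than written to disk:&lt;br /&gt;

```shell
# Write a minimal vars.tf declaring the variables used by the provider block.
# Values come from TF_VAR_cloudstack_api_url, TF_VAR_cloudstack_api_key, etc.
printf 'variable "cloudstack_api_url" {}\n'                      >  vars.tf
printf 'variable "cloudstack_api_key" { sensitive = true }\n'    >> vars.tf
printf 'variable "cloudstack_secret_key" { sensitive = true }\n' >> vars.tf
# terraform init    # downloads the cloudstack/cloudstack provider
# terraform plan    # preview the VPC, ACL, network, IP address, and VM
# terraform apply   # create the resources
```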
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
=== Cannot create a volume snapshot ===&lt;br /&gt;
Volume snapshots can only be taken on VMs that are powered off.&lt;br /&gt;
&lt;br /&gt;
=== Cannot create a VM snapshot ===&lt;br /&gt;
Disk-only VM snapshots cannot be taken when the VM is running. If you intend to snapshot a running system, you must also snapshot its memory.&lt;br /&gt;
&lt;br /&gt;
=== VM state is still running after shutdown ===&lt;br /&gt;
After running &#039;shutdown&#039; inside a VM, CloudStack still reports the VM state as running. &lt;br /&gt;
&lt;br /&gt;
Please try a force shutdown from the CloudStack management console. CloudStack does not detect power state changes made from inside the guest, so a VM&#039;s reported state is not updated when its power state changes outside of CloudStack&#039;s control (likely a bug).&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack&amp;diff=1901</id>
		<title>CloudStack</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack&amp;diff=1901"/>
		<updated>2022-06-09T16:57:44Z</updated>

		<summary type="html">&lt;p&gt;Darcy: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Apache CloudStack is an Infrastructure as a Service (IaaS) offering that allows researchers to quickly deploy virtual machines for research projects. This service is part of Research Computing Services&#039; Digital Research Infrastructure (DRI) and is free for all University of Calgary researchers and principal investigators.&lt;br /&gt;
&lt;br /&gt;
== Use cases ==&lt;br /&gt;
CloudStack allows you to create virtual machines for a wide range of workloads and use cases, including:&lt;br /&gt;
&lt;br /&gt;
* Running an internal or public-facing web site&lt;br /&gt;
* Running a database&lt;br /&gt;
* Experimenting with new software tools&lt;br /&gt;
* Testing the latest release of a software package&lt;br /&gt;
Please note that CloudStack is offered as a research environment and is supported as such; it may not be the appropriate choice for workloads that demand high availability and uptime. Researchers are solely responsible for maintaining any VMs they deploy, and are expected to patch them and perform other maintenance work on a regular basis. We will be more than happy to provide guidance on this process. &lt;br /&gt;
&lt;br /&gt;
Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] for details.&lt;br /&gt;
&lt;br /&gt;
=== Differences between RCS HPC and CloudStack ===&lt;br /&gt;
There are some overlaps between the CloudStack offering and our existing High Performance Computing (HPC) cluster environment.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
!RCS HPC Cluster&lt;br /&gt;
!CloudStack&lt;br /&gt;
|-&lt;br /&gt;
|CPU intensive workloads&lt;br /&gt;
|Yes; 48 CPUs per node, 100&#039;s of nodes&lt;br /&gt;
|No; 1-8 CPUs per VM&lt;br /&gt;
|-&lt;br /&gt;
|Memory intensive workloads&lt;br /&gt;
|Yes; up to 2TB memory per node&lt;br /&gt;
|No; up to 32GB memory per VM&lt;br /&gt;
|-&lt;br /&gt;
|High storage requirement workloads&lt;br /&gt;
|Yes; shared multi-petabyte storage&lt;br /&gt;
|No; up to 1TB per account&lt;br /&gt;
|-&lt;br /&gt;
|Data classification&lt;br /&gt;
|Level 1 &amp;amp; 2 (ARC), Level 3 &amp;amp; 4 (MARC)&lt;br /&gt;
|Level 1 &amp;amp; 2 only&lt;br /&gt;
|-&lt;br /&gt;
|Customized software requirements&lt;br /&gt;
|Yes; use singularity containers&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Custom OS configuration&lt;br /&gt;
|No&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Persistent software or services&lt;br /&gt;
|No; time limited jobs only&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Managed environment&lt;br /&gt;
|Yes&lt;br /&gt;
|No; self managed VMs only&lt;br /&gt;
|-&lt;br /&gt;
|Research support by analysts&lt;br /&gt;
|Yes&lt;br /&gt;
|Limited&lt;br /&gt;
|}&lt;br /&gt;
Not sure whether you need a virtual machine or a compute cluster? Wondering whether this is the &amp;quot;Cloud&amp;quot;? Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] and we will assist you in using this service to support your research goals.&lt;br /&gt;
&lt;br /&gt;
== Getting access to CloudStack ==&lt;br /&gt;
If you are a researcher or principal investigator, please review our [[CloudStack End User Agreement]] and then request a CloudStack account through [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c Service Now].&lt;br /&gt;
&lt;br /&gt;
Once your account is ready, please refer to the [[CloudStack User Guide]] for details on using CloudStack.&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack_User_Guide&amp;diff=1870</id>
		<title>CloudStack User Guide</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack_User_Guide&amp;diff=1870"/>
		<updated>2022-06-03T14:04:48Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* Keep security in mind */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is a user&#039;s guide for the CloudStack service provided by Research Computing Services.&lt;br /&gt;
&lt;br /&gt;
== Introduction==&lt;br /&gt;
Apache CloudStack is an Infrastructure as a Service (IaaS) platform that allows users to quickly spin up Linux-based virtual machines. RCS provides this service to help researchers quickly set up and prototype short-term research-related software on premises. CloudStack is not appropriate for workloads that depend on Windows. Services set up on CloudStack virtual machines can be accessed from the campus network, and from the internet if required.&lt;br /&gt;
&lt;br /&gt;
Access to CloudStack can be requested via [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c ServiceNow].&lt;br /&gt;
&lt;br /&gt;
Please refer to our [[CloudStack End User Agreement]] for acceptable uses and requirements.&lt;br /&gt;
&lt;br /&gt;
== Using your virtual machine ==&lt;br /&gt;
You may run whatever operating system you wish on your virtual machine (with the exception of Windows).  We cannot provide specific management advice for each and every operating system available, but we can offer some suggestions on important considerations to be aware of.&lt;br /&gt;
&lt;br /&gt;
=== Educate yourself ===&lt;br /&gt;
All operating systems (OS) have user groups, web sites, wikis, or mailing lists somewhere on the internet.  These can be a valuable resource.  Most OS providers also have online documentation describing how to use their product.  For example, Rocky Linux, used by RCS, has a [https://docs.rockylinux.org/ documentation site].  These are excellent resources and can help you understand how to manage your virtual machine.&lt;br /&gt;
=== Keep security in mind===&lt;br /&gt;
To help keep our network and infrastructure safe from cyber attacks, it is critical that your VMs are properly configured to reduce the number of ways that attackers could exploit them. Here are some common tasks that you can do to help harden your VM:&lt;br /&gt;
*Ensure that the only services running on your VM are the ones you must run.  Each OS has a way of managing which services are running (SysVinit, systemd, etc.).  Please ensure that unnecessary services have been disabled.&lt;br /&gt;
&lt;br /&gt;
*Disable or delete any unused accounts. Many operating systems and applications ship with pre-configured accounts.  Make sure these are either disabled or not allowed to log in.&lt;br /&gt;
&lt;br /&gt;
*All accounts should have strong [https://it.ucalgary.ca/it-security/passwords-do-i-have-change-them/ passwords].&lt;br /&gt;
&lt;br /&gt;
*Many OSs can update themselves automatically.  If possible, please consider enabling this. Updates can also be configured to skip certain software if it would interfere with your research, but please be advised that doing so could place your system at risk.&lt;br /&gt;
&lt;br /&gt;
*If your VM must be exposed to the internet, consider using some kind of end-point security tool to help monitor for and block cyber attacks.&lt;br /&gt;
&lt;br /&gt;
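For example, on the Rocky Linux template, automatic updates can be enabled with the &amp;lt;code&amp;gt;dnf-automatic&amp;lt;/code&amp;gt; package. A minimal sketch (package, config path, and timer name are the standard ones on RHEL-family distributions; run as root):&lt;br /&gt;

```shell
# Install the automatic update tooling (Rocky/RHEL-family)
dnf install -y dnf-automatic

# Switch the default from "download only" to actually applying updates
sed -i 's/^apply_updates = no/apply_updates = yes/' /etc/dnf/automatic.conf

# Enable the systemd timer that runs the updater on a schedule
systemctl enable --now dnf-automatic.timer
```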
== Accessing CloudStack ==&lt;br /&gt;
===Accessing the CloudStack management console===&lt;br /&gt;
&lt;br /&gt;
The CloudStack management console is a web-based portal that allows you to view and manage your cloud infrastructure, including virtual machines, storage, and networks. Any modern web browser, including Chrome, Firefox, Edge, and Safari, is supported. &lt;br /&gt;
&lt;br /&gt;
Accessing the CloudStack management console is possible only from an IT-managed computer or through the IT General VPN when working on unmanaged machines (e.g. AirUC) or off campus (e.g. at home). Please review the IT [https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=52a169d6dbe5bc506ad32637059619cd knowledge base article on connecting to the General VPN] or contact IT support if you need assistance connecting to the General VPN. &lt;br /&gt;
[[File:CloudStack VPN Connection.png|alt=CloudStack VPN Connection|none|thumb|CloudStack VPN Connection]]&lt;br /&gt;
=== Login to CloudStack===&lt;br /&gt;
&lt;br /&gt;
To log in to CloudStack, navigate to https://cloudstack.rcs.ucalgary.ca/. If this site fails to load, please make sure you are either on an IT-managed computer or connected to the General VPN.&lt;br /&gt;
&lt;br /&gt;
Sign in to CloudStack using the Single Sign-On option as shown in the image below. This method will require you to authenticate through our central authentication service using your University of Calgary IT credentials and will require multi-factor authentication. You must have multi-factor authentication set up either via your phone or with the Microsoft Authenticator app.&lt;br /&gt;
[[File:CloudStack Login Page.png|alt=CloudStack Login Page|none|thumb|CloudStack Login Page]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; Due to a bug with the UI, if the Single Sign-On option is disabled, please refresh the login page and try again. This issue should be addressed in our next update for CloudStack.&lt;br /&gt;
&lt;br /&gt;
=== CloudStack Dashboard===&lt;br /&gt;
&lt;br /&gt;
After logging in, you will be presented with your CloudStack management console. The dashboard shows you a general overview of your account&#039;s status.&lt;br /&gt;
[[File:CloudStack Dashboard.png|alt=CloudStack Dashboard|none|thumb|CloudStack Dashboard]]On the right-hand side of the dashboard, you will also see recent activity and events that occurred within your CloudStack account.&lt;br /&gt;
&lt;br /&gt;
If you wish to see your CloudStack account resource quota and allocation, navigate to: &amp;lt;code&amp;gt;Accounts -&amp;gt; Click on your account -&amp;gt; Resources&amp;lt;/code&amp;gt;. &lt;br /&gt;
[[File:CloudStack Resource Quota.png|alt=CloudStack Resource Quota|none|thumb|CloudStack Resource Quota]]&lt;br /&gt;
&lt;br /&gt;
== Working with virtual machines==&lt;br /&gt;
&lt;br /&gt;
CloudStack allows you to control the lifecycle of virtual machines within your cloud account. VMs may be started, stopped, rebooted, or destroyed within your management console.&lt;br /&gt;
&lt;br /&gt;
===Create a VM===&lt;br /&gt;
&lt;br /&gt;
To create a new VM, enter the CloudStack management console and navigate to:  &amp;lt;code&amp;gt;Compute -&amp;gt; Instances -&amp;gt; Add Instance&amp;lt;/code&amp;gt;[[File:CloudStack Instance Summary.png|alt=CloudStack Instance Summary|thumb|CloudStack Instance Summary|493x493px]]&lt;br /&gt;
&lt;br /&gt;
Virtual Machines require the following details:&lt;br /&gt;
&lt;br /&gt;
# &#039;&#039;&#039;Deployment zone&#039;&#039;&#039;. Your account will already be placed in the appropriate zone.&lt;br /&gt;
# &#039;&#039;&#039;Boot template or ISO&#039;&#039;&#039;. You may choose either a pre-created template or boot from a custom CD-ROM ISO file.&lt;br /&gt;
# &#039;&#039;&#039;Compute offering&#039;&#039;&#039;. You may select an appropriate size for your new VM. Resources will be counted against your account&#039;s quota.&lt;br /&gt;
# &#039;&#039;&#039;Data Disk&#039;&#039;&#039;. You may choose to add an additional virtual disk to your VM to store your data. Alternatively, if you wish to use a single virtual disk for your VM, you may choose to override the size of your root disk in step 2 and select &#039;No thanks&#039; in this step.&lt;br /&gt;
# &#039;&#039;&#039;Networks&#039;&#039;&#039;. You may choose one or more networks your VM should connect to. All CloudStack accounts come with a default network already created and ready to be used.&lt;br /&gt;
# &#039;&#039;&#039;SSH keypairs&#039;&#039;&#039;. For templates that support custom SSH key pairs, you may choose to use a custom SSH keypair to be installed as part of the deployment process.&lt;br /&gt;
# &#039;&#039;&#039;Advanced settings&#039;&#039;&#039;. For templates that support custom user-data (Cloud-Init), you may choose to enable the advanced settings and provide your own Cloud-Init user-data payload. More on this in the advanced tasks section below.&lt;br /&gt;
# &#039;&#039;&#039;Other VM details&#039;&#039;&#039;. You may give your new VM a friendly name and make it part of a group. Groups allow you to group related VMs together for better organization. You may change these details at a later time.&lt;br /&gt;
&lt;br /&gt;
When you are done, review the instance summary on the right hand side and then click on the &#039;Launch Virtual Machine&#039; button.&lt;br /&gt;
&lt;br /&gt;
=== Selecting your VM Operating system ===&lt;br /&gt;
Many OSs will provide various editions that are tailored to a specific use case.  A desktop VM may not be appropriate when you need to run a database server.  The OS provider will have guides on how to choose an edition.&lt;br /&gt;
&lt;br /&gt;
You may choose to install the operating system to your virtual machine using either pre-built templates or from scratch using an ISO image.&lt;br /&gt;
&lt;br /&gt;
====Install from a virtual machine template====&lt;br /&gt;
&lt;br /&gt;
We provide a Rocky Linux 8.5 and an Ubuntu Server 22.04 LTS template for your convenience. These templates are pre-built images with the operating system installed and ready for use. Our templates also support further automated setup using Cloud-Init configuration data that can be provided when deploying a new VM. Currently, we offer the following templates: &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Template&lt;br /&gt;
!Cloud-Init Support&lt;br /&gt;
!Password Support&lt;br /&gt;
!Default Username&lt;br /&gt;
|-&lt;br /&gt;
|Rocky Linux 8.5&lt;br /&gt;
|Yes&lt;br /&gt;
|Yes&lt;br /&gt;
|rocky&lt;br /&gt;
|-&lt;br /&gt;
|Ubuntu Server 22.04&lt;br /&gt;
|Yes&lt;br /&gt;
|Yes&lt;br /&gt;
|ubuntu&lt;br /&gt;
|}&lt;br /&gt;
Rocky Linux is an open source Linux distribution that is binary-compatible with Red Hat Enterprise Linux and is what RCS recommends.&lt;br /&gt;
&lt;br /&gt;
For templates that support passwords, the generated password that appears after a VM is created is applied to the default username.&lt;br /&gt;
&lt;br /&gt;
Security note: All VM templates are configured with SSH password authentication enabled, so you should be able to SSH to your VM from another system connected to the same guest network. Do not expose port 22 unless required, and we highly recommend using key-based authentication.&lt;br /&gt;
&lt;br /&gt;
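As a sketch of that key-based setup (the address &amp;lt;code&amp;gt;10.44.120.10&amp;lt;/code&amp;gt; is a hypothetical guest-network IP; substitute your VM&#039;s address), you can install your public key for the default template user and then disable password logins:&lt;br /&gt;

```shell
# From your workstation: install your SSH public key for the default user
# (10.44.120.10 is a placeholder; use your VM's guest-network address)
ssh-copy-id rocky@10.44.120.10

# On the VM (as root): disable SSH password authentication entirely
sed -i 's/^PasswordAuthentication yes/PasswordAuthentication no/' /etc/ssh/sshd_config
systemctl restart sshd
```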
===== Virtual machine credentials =====&lt;br /&gt;
VM templates that have password support will have a randomly generated password set when the VM is first created or when a password reset request is made (available only when the VM is powered off). A randomly generated 6 character password will be displayed when a new password is set and appears as a notification in your CloudStack management console. &lt;br /&gt;
[[File:CloudStack VM Password.png|alt=CloudStack VM Password|none|thumb|CloudStack VM Password]]&lt;br /&gt;
This password is set on the default username for your template. For example, the Rocky Linux VM template will set this password on the &amp;lt;code&amp;gt;rocky&amp;lt;/code&amp;gt; user account. You may become the super user by logging in as the &amp;lt;code&amp;gt;rocky&amp;lt;/code&amp;gt; user and then running &amp;lt;code&amp;gt;sudo su&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note: If you specify a custom Cloud-Init config that creates additional users or sets account passwords, the displayed password will be overridden and have no effect.&lt;br /&gt;
&lt;br /&gt;
==== Install from an ISO image====&lt;br /&gt;
We provide various ISO images for popular Linux distributions. You may select one of these ISO images instead of using a pre-built template when deploying a new virtual machine. We currently provide:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Distribution&lt;br /&gt;
!ISO&lt;br /&gt;
|-&lt;br /&gt;
|Ubuntu 20.04&lt;br /&gt;
|ubuntu-20.04.4-desktop-amd64.iso&lt;br /&gt;
&lt;br /&gt;
ubuntu-20.04.4-live-server-amd64.iso&lt;br /&gt;
|-&lt;br /&gt;
|Ubuntu 21.10&lt;br /&gt;
|ubuntu-21.10-desktop-amd64.iso&lt;br /&gt;
&lt;br /&gt;
ubuntu-21.10-live-server-amd64.iso&lt;br /&gt;
|-&lt;br /&gt;
|Ubuntu 22.04&lt;br /&gt;
|ubuntu-22.04-live-server-amd64.iso&lt;br /&gt;
|-&lt;br /&gt;
|Rocky Linux 8.5&lt;br /&gt;
|Rocky-8.5-x86_64-minimal.iso&lt;br /&gt;
|-&lt;br /&gt;
|Fedora 35&lt;br /&gt;
|Fedora-Workstation-Live-x86_64-35-1.2.iso&lt;br /&gt;
|}&lt;br /&gt;
You may add a custom ISO file to your CloudStack account either by uploading the ISO directly through the web console or by providing a URL to the ISO file on the internet.&lt;br /&gt;
&lt;br /&gt;
Please do not install Windows on our CloudStack infrastructure; running Windows-based systems is against our user agreement. If you need a Windows VM, please contact us for alternative solutions.&lt;br /&gt;
&lt;br /&gt;
===== Register an ISO with a URL=====&lt;br /&gt;
[[File:CloudStack Download ISO.png|alt=CloudStack Download ISO|thumb|CloudStack Download ISO|190x190px]]&lt;br /&gt;
&lt;br /&gt;
To add a custom ISO file from the internet, enter the CloudStack management console and navigate to:  &amp;lt;code&amp;gt;Images -&amp;gt; ISOs -&amp;gt; Register ISO&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You may check the state of the ISO file by clicking on it. If the file is downloaded successfully, its ready state becomes ‘true’, and the ISO file will then appear in the selection list.&lt;br /&gt;
[[File:CloudStack ISO Ready.png|alt=CloudStack ISO Ready|none|thumb|172x172px|CloudStack ISO Ready]]&lt;br /&gt;
&lt;br /&gt;
=====Upload a custom ISO=====&lt;br /&gt;
&lt;br /&gt;
To upload an ISO file, enter the CloudStack management console and navigate to:  &amp;lt;code&amp;gt;Images -&amp;gt; ISOs -&amp;gt; Upload ISO from Local (icon)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:CloudStack Upload ISO.png|alt=CloudStack Upload ISO|none|thumb|CloudStack Upload ISO|217x217px]]&lt;br /&gt;
&lt;br /&gt;
===Connecting to your VM console===&lt;br /&gt;
The CloudStack management console has a KVM (keyboard, video, mouse) feature built-in, allowing you to remotely connect to and interact with your virtual machine. To connect to your virtual machine&#039;s console, navigate to: &amp;lt;code&amp;gt;Compute -&amp;gt; Instances -&amp;gt; Your Instance -&amp;gt; View console&amp;lt;/code&amp;gt;.&lt;br /&gt;
[[File:CloudStack View Console.png|alt=CloudStack View Console|none|thumb|CloudStack View Console]]&lt;br /&gt;
&lt;br /&gt;
=== Expanding a VM disk ===&lt;br /&gt;
[[File:CloudStack Expand Volume.png|alt=CloudStack Expand Volume|thumb|CloudStack Expand Volume]]&lt;br /&gt;
Virtual machine disks can be expanded after they are created within CloudStack. However, you will need to expand the partitions and filesystems manually.&lt;br /&gt;
&lt;br /&gt;
To grow an existing disk:&lt;br /&gt;
&lt;br /&gt;
# Go into your VM details page and click on ‘Volumes’.&lt;br /&gt;
# Click on the volume you wish to expand.&lt;br /&gt;
# Click on the ‘Resize Volume’ icon in the top right.&lt;br /&gt;
Once the volume has been expanded, you should be able to verify with &amp;lt;code&amp;gt;lsblk&amp;lt;/code&amp;gt; that the disk has grown; the kernel will also log messages when this occurs. However, you will still need to expand any partitions, volumes, and filesystems on your system manually.&lt;br /&gt;
&lt;br /&gt;
To expand your partition, use the &amp;lt;code&amp;gt;growpart&amp;lt;/code&amp;gt; command followed by your disk device and partition number. For example: &amp;lt;code&amp;gt;/usr/bin/growpart /dev/vda 3&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For LVM volume sets, you can expand the volume using the &amp;lt;code&amp;gt;pvresize&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;lvresize&amp;lt;/code&amp;gt; commands:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/sbin/pvresize -y -q /dev/vda3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/sbin/lvresize -y -q -r -l +100%FREE /dev/mapper/&amp;lt;volume-name&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To expand your filesystem:&lt;br /&gt;
&lt;br /&gt;
* XFS: &amp;lt;code&amp;gt;/usr/sbin/xfs_growfs &amp;lt;volume&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
* EXT: &amp;lt;code&amp;gt;resize2fs &amp;lt;volume&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
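Putting the steps above together for the common case of an XFS filesystem on LVM, a sketch might look like the following (assumptions: the disk is &amp;lt;code&amp;gt;/dev/vda&amp;lt;/code&amp;gt;, the LVM partition is number 3, and &amp;lt;code&amp;gt;VOLNAME&amp;lt;/code&amp;gt; must be set to your logical volume name as reported by &amp;lt;code&amp;gt;lvs&amp;lt;/code&amp;gt;):&lt;br /&gt;

```shell
# Set this to your logical volume name (see 'lvs'); 'rocky-root' is a placeholder
VOLNAME=rocky-root

# 1. Grow the partition to fill the enlarged virtual disk
/usr/bin/growpart /dev/vda 3

# 2. Tell LVM that the physical volume is now larger
/usr/sbin/pvresize -y -q /dev/vda3

# 3. Extend the logical volume into the free space; -r also grows the filesystem
/usr/sbin/lvresize -y -q -r -l +100%FREE /dev/mapper/$VOLNAME
```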
=== Destroying a VM ===&lt;br /&gt;
If you need to delete a VM, click on the red garbage bin icon in the VM instance page. All deletions are irreversible, so please make sure you have a copy of any data you need before proceeding.&lt;br /&gt;
[[File:CloudStack Delete VM.png|alt=CloudStack Delete VM|none|thumb|CloudStack Delete VM]]&lt;br /&gt;
The VM root volume can be deleted immediately by enabling the &#039;Expunge&#039; option in the dialog box. If left disabled, the VM root volume will linger for a day before it is deleted by the system. You may wish to expunge a volume if you are running low on space or volume quota.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Virtual machine networking ==&lt;br /&gt;
The CloudStack platform allows you to define a custom virtual private cloud (VPC) network, which can contain any number of guest networks for your virtual machines to connect to. Each guest network has its own private network address space and is not directly routable from campus or the internet. For a virtual machine to have internet access, the VPC or guest network it is connected to must have a NAT IP address associated with it. The following diagram shows how a guest network connects to the internet and campus network.&lt;br /&gt;
[[File:CloudStack Guest Networking.png|alt=CloudStack Guest Networking|none|thumb|CloudStack Guest Networking]]&lt;br /&gt;
In order to expose a virtual machine&#039;s services to campus or the internet, the appropriate port forwardings must be set up on the VPC containing the guest network. More on this will be discussed in the next section.&lt;br /&gt;
&lt;br /&gt;
Having multiple guest networks allows for more advanced network setups but is not required. We recommend using a single flat network for most workloads. &lt;br /&gt;
&lt;br /&gt;
By default, all CloudStack accounts come with a default VPC and guest network set up with a NAT IP assigned.&lt;br /&gt;
&lt;br /&gt;
=== IP addresses ===&lt;br /&gt;
Due to design decisions made during the setup of the CloudStack platform, only internal 10.44.12X.X IP addresses can be assigned to your VPC. These IP addresses are accessible from the university campus network; however, a special range of them can also be accessed from the internet.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!IP address range&lt;br /&gt;
!Accessible from&lt;br /&gt;
!Internet IP mapping&lt;br /&gt;
|-&lt;br /&gt;
|10.44.120.3-128&lt;br /&gt;
|Campus, Internet&lt;br /&gt;
|10.44.120.X maps to 136.159.140.X (ports 80 and 443 only)&lt;br /&gt;
|-&lt;br /&gt;
|10.44.120.129-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|10.44.121.0-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|10.44.122.0-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|10.44.123.0-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|}&lt;br /&gt;
If you need a service exposed to the internet, please request a public IP address using our [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c ServiceNow request form]. Additionally, if your service does not run on port 80 or 443, you must also submit a firewall change request to allow the additional port through. &lt;br /&gt;
&lt;br /&gt;
=== Exposing a network service to campus ===&lt;br /&gt;
In order to make a virtual machine visible to the campus network, you must first set up a port forwarding from a campus IP address to your virtual machine.&lt;br /&gt;
&lt;br /&gt;
To create a port forwarding, navigate to &amp;lt;code&amp;gt;Network -&amp;gt; VPC -&amp;gt; Select your VPC -&amp;gt; Public IP Addresses&amp;lt;/code&amp;gt;. If you do not have any available IP addresses, you will need to click on &#039;Acquire New IP&#039; and select an available IP address. Click on the IP address you wish to use to create a port forwarding on and then navigate to the &#039;Port Forwarding&#039; tab. Enter the private port range, the public port range, the protocol, and select the target VM. &lt;br /&gt;
&lt;br /&gt;
For example, to port forward only HTTP (tcp/80) traffic, you would enter the following:&lt;br /&gt;
[[File:CloudStack Port Forwarding.png|alt=CloudStack Port Forwarding|none|thumb|CloudStack Port Forwarding]]Once the port forwarding is created, you should be able to access the service from on campus. If for some reason access to your service does not work, there may be a firewall restriction on IT&#039;s network. In such circumstances, please contact us for assistance.&lt;br /&gt;
&lt;br /&gt;
=== Exposing a network to the internet ===&lt;br /&gt;
Exposing a service to the internet works the same way as exposing it to campus; however, you must create the port forwarding on an IP address that maps to an internet IP address, as outlined in the IP address table above. If your account does not have one of these IP addresses available, please request one via the [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c ServiceNow request form].&lt;br /&gt;
&lt;br /&gt;
By default, only ports 80 and 443 are allowed through the Internet IP address. For all other ports, please [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=47cd16d113153a00b5b4ff82e144b0bf create a firewall rule change request in ServiceNow].&lt;br /&gt;
&lt;br /&gt;
== Cloud-Init Automation ==&lt;br /&gt;
&lt;br /&gt;
=== Ubuntu ===&lt;br /&gt;
The following Cloud-Init configs apply to Ubuntu VM templates.&lt;br /&gt;
&lt;br /&gt;
==== Ubuntu desktop ====&lt;br /&gt;
Use the following Cloud-Init config with the Ubuntu Server template to set up an Ubuntu desktop environment. The setup takes up to 15 minutes to complete, and you should see a login screen when it finishes. &lt;br /&gt;
&lt;br /&gt;
Adjust the root and user password as desired. Below, the test user password is set to blank, allowing you to login to Gnome without a password.&lt;br /&gt;
{{Highlight|code=#cloud-config&lt;br /&gt;
disable_root: false&lt;br /&gt;
&lt;br /&gt;
users:&lt;br /&gt;
  - name: user&lt;br /&gt;
    lock_passwd: false&lt;br /&gt;
    inactive: false&lt;br /&gt;
    gecos: Test User&lt;br /&gt;
    primary_group: user&lt;br /&gt;
    groups: wheel&lt;br /&gt;
    passwd: $1$ADUODeAy$eCJ1lPSxhSGmSvrmWxjLC1&lt;br /&gt;
      &lt;br /&gt;
chpasswd:&lt;br /&gt;
  list: {{!}}&lt;br /&gt;
    root:password&lt;br /&gt;
  expire: false&lt;br /&gt;
    &lt;br /&gt;
# Install a graphical desktop&lt;br /&gt;
runcmd:&lt;br /&gt;
  - DEBIAN_FRONTEND=noninteractive apt -y upgrade&lt;br /&gt;
  - DEBIAN_FRONTEND=noninteractive apt -y install tasksel&lt;br /&gt;
  - tasksel install gnome-desktop&lt;br /&gt;
  - systemctl set-default graphical.target&lt;br /&gt;
  - systemctl isolate graphical.target|style=max-height: 300px;overflow:scroll;}}&lt;br /&gt;
&lt;br /&gt;
=== Rocky Linux ===&lt;br /&gt;
The following Cloud-Init configs apply to Rocky Linux templates.&lt;br /&gt;
&lt;br /&gt;
==== Rocky Linux Desktop ====&lt;br /&gt;
Use the following Cloud-Init config with the Rocky Linux template to set up a Rocky Linux desktop environment. The setup takes up to 10 minutes to complete, and you should see a login screen when it finishes. &lt;br /&gt;
&lt;br /&gt;
Adjust the root and user password as desired. Below, the test user password is set to blank, allowing you to login to Gnome without a password.&lt;br /&gt;
{{Highlight|code=#cloud-config&lt;br /&gt;
disable_root: false&lt;br /&gt;
&lt;br /&gt;
users:&lt;br /&gt;
  - name: user&lt;br /&gt;
    lock_passwd: false&lt;br /&gt;
    inactive: false&lt;br /&gt;
    gecos: Test User&lt;br /&gt;
    primary_group: user&lt;br /&gt;
    groups: wheel&lt;br /&gt;
    passwd: $1$ADUODeAy$eCJ1lPSxhSGmSvrmWxjLC1&lt;br /&gt;
      &lt;br /&gt;
chpasswd:&lt;br /&gt;
  list: {{!}}&lt;br /&gt;
    root:password&lt;br /&gt;
  expire: false&lt;br /&gt;
    &lt;br /&gt;
# Install a graphical desktop&lt;br /&gt;
runcmd:&lt;br /&gt;
  - yum -y install &amp;quot;@Workstation&amp;quot;&lt;br /&gt;
  - systemctl set-default graphical.target&lt;br /&gt;
  - systemctl isolate graphical.target}}&lt;br /&gt;
&lt;br /&gt;
==== Rocky Linux Docker host ====&lt;br /&gt;
Use the following Cloud-Init config with the Rocky Linux template to set up a new Docker host. This server can then be used to run Docker containers. Also included are:&lt;br /&gt;
&lt;br /&gt;
# The docker-compose utility to help deploy container stacks more easily&lt;br /&gt;
# A helper script to expand the &amp;lt;code&amp;gt;/var&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; filesystems on first startup based on the available space in the ROOT volume. &lt;br /&gt;
&lt;br /&gt;
Use the CloudStack-generated password with the default &#039;&amp;lt;code&amp;gt;rocky&amp;lt;/code&amp;gt;&#039; user account to log in.{{Highlight|code=#cloud-config&lt;br /&gt;
&lt;br /&gt;
write_files:&lt;br /&gt;
  - path: /usr/bin/expand_lvm_root&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      /usr/bin/growpart /dev/vda 3&lt;br /&gt;
      /usr/sbin/pvresize -y -q /dev/vda3&lt;br /&gt;
      /usr/sbin/lvresize -y -q -r -l +50%FREE /dev/mapper/*root&lt;br /&gt;
      /usr/sbin/lvresize -y -q -r -l +100%FREE /dev/mapper/*var&lt;br /&gt;
      /usr/sbin/xfs_growfs /&lt;br /&gt;
      /usr/sbin/xfs_growfs /var&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
&lt;br /&gt;
  - path: /usr/bin/setup_docker&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      yum install -y yum-utils&lt;br /&gt;
      yum-config-manager \&lt;br /&gt;
         --add-repo \&lt;br /&gt;
         https://download.docker.com/linux/centos/docker-ce.repo&lt;br /&gt;
      yum install -y docker-ce docker-ce-cli containerd.io&lt;br /&gt;
      systemctl start docker&lt;br /&gt;
      systemctl enable docker&lt;br /&gt;
      &lt;br /&gt;
      curl -L &amp;quot;https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)&amp;quot; -o /usr/local/bin/docker-compose&lt;br /&gt;
      chmod +x /usr/local/bin/docker-compose&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
&lt;br /&gt;
  - path: /root/docker-compose.yml&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      version: &#039;3.3&#039;&lt;br /&gt;
      services:&lt;br /&gt;
        web:&lt;br /&gt;
          image: php:7.4-apache&lt;br /&gt;
          restart: always&lt;br /&gt;
          user: &amp;quot;0:0&amp;quot;&lt;br /&gt;
          volumes:&lt;br /&gt;
            - /var/www/html:/var/www/html&lt;br /&gt;
          ports:&lt;br /&gt;
            - &amp;quot;80:80&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  - path: /var/www/html/index.php&lt;br /&gt;
    permissions: &#039;0644&#039;&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      &amp;lt;h1&amp;gt;Hello there!&amp;lt;/h1&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;I see you from &amp;lt;?php echo $_SERVER[&#039;REMOTE_ADDR&#039;]; ?&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;&amp;lt;nowiki&amp;gt;pre&amp;lt;/nowiki&amp;gt;&amp;gt;&amp;lt;?php print_r($_SERVER); ?&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
    &lt;br /&gt;
# Ensure VM has the largest / possible&lt;br /&gt;
runcmd:&lt;br /&gt;
  - /usr/bin/expand_lvm_root&lt;br /&gt;
  - /usr/bin/setup_docker&lt;br /&gt;
  - cd /root; docker-compose up -d|style=max-height: 300px;overflow:scroll;}}&lt;br /&gt;
&lt;br /&gt;
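After a VM deployed with the Docker host config above finishes its first boot, you can verify the container stack from inside the VM (standard Docker CLI and curl commands):&lt;br /&gt;

```shell
# List running containers; the php:7.4-apache 'web' service should be listed
docker ps

# Fetch the test page served by the container on port 80
curl -s http://localhost/
```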
==== UC Authentication ====&lt;br /&gt;
Use the following Cloud-Init config to enable UC-based authentication. Local accounts with the same username as an IT account may then use their IT credentials to log in.&lt;br /&gt;
{{Highlight|code=#cloud-config&lt;br /&gt;
&lt;br /&gt;
write_files:&lt;br /&gt;
  - path: /usr/bin/setup_uc_auth&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      yum install -y sssd sssd-dbus sssd-krb5 krb5-workstation authselect-compat&lt;br /&gt;
      &lt;br /&gt;
      cat &amp;lt;&amp;lt;EOF &amp;gt; /etc/sssd/sssd.conf&lt;br /&gt;
      [sssd]&lt;br /&gt;
      config_file_version = 2&lt;br /&gt;
      services = nss, pam, ifp&lt;br /&gt;
      domains = uc.ucalgary.ca&lt;br /&gt;
      &lt;br /&gt;
      [domain/uc.ucalgary.ca]&lt;br /&gt;
      id_provider = files&lt;br /&gt;
      debug_level = 5&lt;br /&gt;
      auth_provider = krb5&lt;br /&gt;
      chpass_provider = krb5&lt;br /&gt;
      &lt;br /&gt;
      krb5_realm = UC.UCALGARY.CA&lt;br /&gt;
      krb5_server = ITSODCSRV14.UC.UCALGARY.CA:88&lt;br /&gt;
      krb5_validate = false&lt;br /&gt;
      EOF&lt;br /&gt;
      chmod 600 /etc/sssd/sssd.conf&lt;br /&gt;
      &lt;br /&gt;
      cat &amp;lt;&amp;lt;EOF &amp;gt; /etc/krb5.conf&lt;br /&gt;
      [logging]&lt;br /&gt;
       default = FILE:/var/log/krb5libs.log&lt;br /&gt;
       kdc = FILE:/var/log/krb5kdc.log&lt;br /&gt;
       admin_server = FILE:/var/log/kadmind.log&lt;br /&gt;
      &lt;br /&gt;
      [libdefaults]&lt;br /&gt;
       default_realm = UC.UCALGARY.CA&lt;br /&gt;
       dns_lookup_realm = false&lt;br /&gt;
       dns_lookup_kdc = false&lt;br /&gt;
       ticket_lifetime = 24h&lt;br /&gt;
       renew_lifetime = 7d&lt;br /&gt;
       forwardable = true&lt;br /&gt;
      &lt;br /&gt;
      [realms]&lt;br /&gt;
       UC.UCALGARY.CA = {&lt;br /&gt;
        kdc = itsodcsrv14.uc.ucalgary.ca&lt;br /&gt;
       }&lt;br /&gt;
      &lt;br /&gt;
      [domain_realm]&lt;br /&gt;
       uc.ucalgary.ca = UC.UCALGARY.CA&lt;br /&gt;
       .uc.ucalgary.ca = UC.UCALGARY.CA&lt;br /&gt;
      EOF&lt;br /&gt;
      &lt;br /&gt;
      mkdir -p /etc/authselect/custom/rcs&lt;br /&gt;
      cd /etc/authselect/custom/rcs&lt;br /&gt;
      echo H4sIAMYsbGIAA+0ca3PbuDGf+StQ34dLMpEtybLdUevO6BKl56mTuJGv7Uwmo4FISMKJBHQAaEfNub+9uwBJkXrLshxnjjuTyCKJfWF3sQsuFPhS9CtB79keoQpw2mjYT4DZz1r1uPGs1qhX6yf12vFx/Vm1Vqs1jp+R6j6ZSiHWhipCnikpzarn1t3/TuErj8bhhBzccjOs6Igq41MVHBDen71WUey3mCsWHNx5W4wKpT+qSAGjI3lDQxj8SarB0UDIiB2FcsBFRfuKMfHZY4L2QpYbS2MzZMJwnxouxfnXefQHTaNi9nufhprdpQj6XAyYGisuzHIUuYcWIhlTrW/lRkxMNdO0OH5HdHdeXlDNjAGCuhJQ0IM4GjPFx0Om4PGjDM9n8pULP4wDtpkik78q1Lec/WifcMr8kSyE7fB/a9ssYf8Q2PiPc6/3RmNN/K826mez8f/07LSM/48BR4uD8dGaWOytGbc8BK8buSTu5odtFEyPitGxDH2LYXai9kFjjf8fn4HPT/3/GPy/fnZ6Uvr/Y8BXiP/gTHHOL/KZ0Z2HRpGmD2meszi5mIExjbpM3BxquRuOgIkJICE/kE7qsKQYGwjXRbSrfD2Xxe7EVp/yEOMGsjZWzOLSPASuCHJ83iCxwPtdwyN2XqtXqwvYSpHMMKPjfp/7HHFtyIydrWBnXeeFQkT4fak4ZImulwh134n3POr7Mk50sTUWGMy0RoE2hnmRAJHDgzLtwk1ewffmJq/gXZiJBf9iTSaHZGvTA0ZoGGumdsakY1AxC7q8j9qJeUD+SmqwPhAQi5kC7k8B69M4NOc9GhA7TutzOSLIB4g1EvJWnHPIFhT77HBbE9hNXZBmRNxYi0yTlHvhyexaA9cYvhDkGCMZDTfGMmITLiw7wMGNHLECuq2ZCjnIZnVUyeHZmis90YZFNhDtJJ0Mgl9lrxuNhpDwBVxZi4ioHp1Xq2dni0bNu0k2GPwkz8yn1GBqJDWj1FSWCFUwTDCxG+4DHUF8JUXgzBNNrws2u9sspB65k+4SW99s/RdgyNz4w0OsA/eUY6yr/05rtdn8r35S1n+PAjaSBU1rPGA4pA8pjCaJG6NbsS8zbuWDxmTk6jTMogZKxmOHYDsMdiAgEMxkOKYIcp69DEE60CUaMsLQ3twSRzbQBgnr2XprRtKBiCMOJFMWRaoJXcAxn5rCiG9XcRbK7T3RWOP/pyen+fqvgf5/XC3rv0eBb17eYTIbsJDaGs/+cV53dlGWjfsuG+O6zWh8qP43hYW1UQUQ7aqYHC9CYh4P6fx9eanU+3SGn6xiqBGX7CU5X656mKZ/i6uRv52T+rQa2QV3sWbaaQZdwkjwrZSQxqlBxqYiYrCQ0UHTfd4RoybdPlfadDHiz0+V5mYjI1ivmm4i9W6CuSSW9KW6BfdbwnS5tVFubSxmJvUL0oM4wkRXD2kgb70fcnojt0CKUCB3S5IrYG3EZ8pQqO1sZqkJ+JLig6H5C/CgMUaPmQjgsiA6jFW0247Hg0yg5WNXfVkkXdDR2GzKzmacPLH9pPr3sZ20TTQe3/4W05Abm+0UwzyxyuyiGLorRTgpULr3YgPOdFKrE+dT2y49dqMEg6eRo93YSZaIZfjKfbkcmnJf7jvbl3ssGEtt7Gv4PdJYt/93Um/M1P81iC9l/f8YsKFFWydAP7eeZuvVTX13uaf+6X9kEEQv8991/DJZnwvYswJrLhTYGEmtDSNqWKlNNCa5Jj3LayV55KDpvv+uh/IWc00W3G3q1DOUkqIdatVxQGG1nmJ8Yh6+Gj62W2/etfdLY43/n1VPT3P+XwP/rzaOy/3/R4G27boinU7njS1/XKIwu1P2nIauGHcZpU2MsQgKJy+8813B8zosZD62dBEz5JqMlcStc1egsRyDVMMD4GoyVhAuZJ/wAJk0E4+Kud09wHID95U+BAI4OrmgCYWIY3C46x8DpJJEVNABSwtBg8lbJMGtIU0B1
qTiTC8iEjF/SAXXkca6YYgcXr5pXb0i/2Cqx5QEJSnyVjF2cdU6JBcmYwKQkfedDkGcV613kLUYpvoUxDISdzycoMlkIF0yDuPBwOqiR32oaYPktgfM+lIIYNMKAnGSj0PkvN9nCmNUWqY4vaE+3uHuFBcwoZGTg/Ygc3dK9oGzHoPJjoEE7pIaOyO/Iv4x6KjpDY0Z6+bREXyLFTvk8ggHHkF+E3jeT5N0BXhVMBaK+2EQ6CHr6isZOVqKgpgKZRXJw+6dCYRvvO+BMbj5D8hziK/sxj08yU8DIzecZlnai0NyjTY0pDcMlQZKRSGFz7weE6zPQQ2QkbJUVn/InFE5wtgNhSqDJLIPsR4+KxEYgpq4R+1UmGGsPcgeY9AlEXEE82yNiesRrB9MWY2iln92HL9CcSYyJoG0hdItFXamRoyNEzXEQqD1g7bhOQXG6A+5YK/sKJgPzz5a8A1tXQb0ghz9GuPWBNdTT0mWVFQGw50LNAox8PAlL4dJc5Nu/Usb/B+KphEIr9DYwwkBtvG2GTpsCWpHLDep1iOSjRKLrIfEaDLD+Ql1bhROUl98/eH924u///KxdX3x4b1XmQPPa8EMOyFBb6hoiUpDH0MEP9qpmspySK5CRjWozNJ3j3iB9OMIptOJazDzYATsiDiXscPBDwx6P9TQwB75ADGtIGIqF72Bxd0qGFFNnRPmFm2Y4dsEIQn7wrWNZM6+CjyCUP9qXVy2frpskw9XKHnrkrxtt65/+djuLFBCThuFzbFm08OEJAndqXfjHSSM9Q9oAk3SSBvYJsiHZn5s+A2zQ2eiGCIGTYDN4k2vWKXNUMP3pTDKhyKLudFACB/Nh0q7VKSLBOqHK4vEbgMQW28cJkJlr1jm6OQ5tLaYPYoeC1Y3cLaQzj2aCITPITUWUfY0RObxWCqYZXQTnEpczNiXcch9bq09CTFIhAs7eKGNzXI82zecSHAJV4lraia3GNlojhf7zgmeZsE8urQsTfBs8t7qkLyXRNogGjEKSxlMxkzjNKJ67opwNI90i+RF5rGJ+Ck/ufbP9VOSe9h6no3Pbmpe5hoTX6bIkxc0KxFjOIdnwO3FIGRFfHB9Fhe+7Cniq0NUgqUUTPG+qItVQ4L9jQvfEIEx9OfrAdC91pg+BLFCHRcNPA5kgqFlN5vxAkYg8I8sraFpUmNzMHxAxXYhBOsJAp762REz/lHSYpBTg8taEiKvhwzsL9laRgNOuUoDBeoEvPS/eaOe7tsVhQXixF0HeRWNmMEZltNNEc97c9GxEa1z1X598fbiNaY1/764fv0zedO6bv3U6qyObdMY9x4XagjCr0jalAQJmqE9cG1tM7bexAbedAkEJ1K3ihu4C84XJw+BA2PgqRQam9w62Jeof7vQ2jJP21wHNO47G0yWGyO95P2dRbuINK57OH14TQqWECbzhD0uYJWgmXMVmmdA1ReO0EHSTZMJvIDoDArXMjPF4DphtkCQtd1McWQNNVugmXbeTPFMm2q2QJR130zxZH0126BJGnByWNyVdUja/2m9u7rMGavnvST58sitQeibOP1zy4vnkbwVpR+QFJMsDpBixF9OIV2ykJJdaNkmayzxCkvsOo6yJX7KIKRn7TZpXXY+TNXwEnMIO8ya9PM/v/iudjhKWAUf2//85eJj+137/XVnXzTWnv85qc7s/+C3cv/nMeAdHWGfinKJc6F4xDwzq5Jc7ZemiqTDkpBVLLEwJEUzewuuqNgPrOjYOrjzKjPhFgVK6+ZXENRSyfWKRDutDXguAjbX0CWkYpeYA8ySsJ/Avp8k5+RaxewAUX2CO5/hGXc08aHk3ZuaFzVqFA6JJZrOVwNLdJ0UBOttbEPKjynztNMtkTe5sExW61E2yaCRLTsiqBxCKzOmhZDrzr+CXEUV7cot+Ji32k2z6S7GiE3w3brbRLUkYZDfH+AmwNp3lKup7gvW9e8VtYwXH0rT6yh/A01nlB9P2/kX8ZXZlHCJphc1A
aTavg9dQuymJtN2oxp3nC3yQoAoRoSHkdfGaLuD55vsJUNC+zClvQc9F+li7mEehuwausvW/+IB//3kGOvyv2L//wn2/zfqtTL/ewy45+//zJ8azyciT+JQwZPpsk/61GynaTfiGleOrqBQza/+LaHkr2kG+YBHD8oO6D9IB/RTbsZ9ooe7n37751Zsfdv2z7L/c8ruU+z/dJaxz9Of6/K/WrXWaMzkf7Wzk7My/3sMuF/+9yQSvH2cGt1n4vhA8U1IyLzdC57mIIiaX+DfCD9dy4GmN0w17Q+lVfJXRrkvxdB4n3OvmSwBvv4snnpMhQt4IlkuCZ7LaZOx9m/cNIZQazttyL2O4+7ww0kbmQhZyNbyH8wqT+R+hydyN1LQol+DWFYCZ9zU98jNyjcDy1131u9S1la4dM6d8WjX1JXvw0154jmjUZ54XoulrPeXMlOeeC5PPJcnnssTz+WJ52+95VXueGXsPsUdrxJKKKGEEkoooYQSSiihhBJKKKGEEkr448D/AT019aAAeAAA {{!}} base64 -d {{!}} tar -xzpf -&lt;br /&gt;
      &lt;br /&gt;
      if [[ &amp;quot;`authselect current -r`&amp;quot; != &amp;quot;custom/rcs&amp;quot; ]] ; then&lt;br /&gt;
        authselect select custom/rcs --force&lt;br /&gt;
        systemctl restart sssd&lt;br /&gt;
      fi&lt;br /&gt;
      &lt;br /&gt;
      systemctl enable sssd&lt;br /&gt;
&lt;br /&gt;
runcmd:&lt;br /&gt;
  - /usr/bin/setup_uc_auth|style=max-height: 300px;overflow:scroll;}}&lt;br /&gt;
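After the config above runs, Kerberos authentication can be checked from a shell on the VM. The following is a sketch only; the username is a placeholder for an actual IT account:

```shell
# Confirm the custom authselect profile is active and sssd is running
authselect current
systemctl status sssd --no-pager

# Obtain a Kerberos ticket using the IT account password;
# klist shows the ticket if authentication succeeded
kinit username@UC.UCALGARY.CA
klist
```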
&lt;br /&gt;
==== Rocky Linux NFS server ====&lt;br /&gt;
CloudStack does not provide shared storage between VMs, so if multiple VMs need access to the same data, running a local NFS server on one of them can be a practical workaround.{{Highlight|code=#cloud-config&lt;br /&gt;
&lt;br /&gt;
write_files:&lt;br /&gt;
  - path: /usr/bin/expand_lvm_root&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      /usr/bin/growpart /dev/vda 3&lt;br /&gt;
      /usr/sbin/pvresize -y -q /dev/vda3&lt;br /&gt;
      /usr/sbin/lvresize -y -q -r -l +50%FREE /dev/mapper/*root&lt;br /&gt;
      /usr/sbin/lvresize -y -q -r -l +100%FREE /dev/mapper/*var&lt;br /&gt;
      /usr/sbin/xfs_growfs /&lt;br /&gt;
      /usr/sbin/xfs_growfs /var&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
&lt;br /&gt;
  - path: /usr/bin/setup_nfs&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      yum install -y nfs-utils&lt;br /&gt;
&lt;br /&gt;
      mkdir /export&lt;br /&gt;
      if [ -b /dev/vdb ] ; then&lt;br /&gt;
        mkfs.xfs /dev/vdb&lt;br /&gt;
        echo &amp;quot;/dev/vdb  /export     xfs    defaults    1 2&amp;quot; &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
        mount -a&lt;br /&gt;
      fi&lt;br /&gt;
&lt;br /&gt;
      ip a {{!}} grep -w inet {{!}} awk &#039;{print $2}&#039; {{!}} while read subnet ; do&lt;br /&gt;
        echo &amp;quot;/export     $subnet(rw,no_subtree_check,no_root_squash,async)&amp;quot; &amp;gt;&amp;gt; /etc/exports&lt;br /&gt;
      done&lt;br /&gt;
&lt;br /&gt;
      systemctl start nfs-server&lt;br /&gt;
      systemctl enable nfs-server&lt;br /&gt;
     &lt;br /&gt;
      exportfs -ra&lt;br /&gt;
&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
&lt;br /&gt;
runcmd:&lt;br /&gt;
  - /usr/bin/expand_lvm_root&lt;br /&gt;
  - /usr/bin/setup_nfs|style=max-height: 300px;overflow:scroll;}}NFS clients connected to the same network as the NFS server can then mount &amp;lt;code&amp;gt;/export&amp;lt;/code&amp;gt; using a command similar to: &amp;lt;code&amp;gt;mount -t nfs nfs-server:/export /mnt&amp;lt;/code&amp;gt;.&lt;br /&gt;
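The client side of the mount command above can be sketched as follows (the hostname &amp;lt;code&amp;gt;nfs-server&amp;lt;/code&amp;gt; is a placeholder; substitute your server VM's name or IP on the shared guest network):

```shell
#!/bin/bash
# Client-side setup for the NFS export above
yum install -y nfs-utils

mkdir -p /mnt/shared
mount -t nfs nfs-server:/export /mnt/shared

# Make the mount persistent across reboots
echo "nfs-server:/export  /mnt/shared  nfs  defaults,_netdev  0 0" >> /etc/fstab
```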
&lt;br /&gt;
== Infrastructure tools ==&lt;br /&gt;
&lt;br /&gt;
=== Generating a CloudStack API key ===&lt;br /&gt;
You can generate a CloudStack API key to automate infrastructure deployment with tools such as Terraform or CloudMonkey. To generate a new key pair, navigate to your profile page (top right) and click the &#039;Generate keys&#039; button.&lt;br /&gt;
[[File:CloudStack API Key.png|alt=CloudStack API Key|none|thumb|CloudStack API Key]]&lt;br /&gt;
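If you script against the API directly rather than through Terraform or CloudMonkey, each request must be signed with the secret key. As a minimal sketch (the keys below are placeholders), CloudStack's signing scheme sorts the query parameters, lowercases the resulting string, and computes a Base64-encoded HMAC-SHA1 over it:

```shell
#!/bin/bash
# Sketch of CloudStack API request signing (placeholder keys)
API_KEY="exampleapikey"
SECRET_KEY="examplesecret"

# Query parameters, sorted alphabetically by name
PARAMS="apikey=${API_KEY}&command=listVirtualMachines&response=json"

# Sign the lowercased, sorted query string with HMAC-SHA1 and Base64-encode it.
# The actual request keeps the original parameter case; only the signed
# string is lowercased.
SORTED=$(printf '%s' "$PARAMS" | tr '&' '\n' | tr 'A-Z' 'a-z' | sort | paste -sd '&' -)
SIGNATURE=$(printf '%s' "$SORTED" | openssl dgst -sha1 -hmac "$SECRET_KEY" -binary | base64)

# The signed request is then: <api-url>?${PARAMS}&signature=<url-encoded SIGNATURE>
echo "$SIGNATURE"
```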
&lt;br /&gt;
=== CloudMonkey ===&lt;br /&gt;
CloudMonkey is a utility that makes it easier to interact with the CloudStack API. This tool may be used to help automate VM actions (such as start/stop/reboot), or infrastructure tasks (such as creating/destroying VMs, networks, or firewall rules). &lt;br /&gt;
&lt;br /&gt;
To get started with CloudMonkey, refer to the following resources:&lt;br /&gt;
&lt;br /&gt;
* Download from: &amp;lt;nowiki&amp;gt;https://github.com/apache/cloudstack-cloudmonkey/releases/tag/6.1.0&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
* Documentation at: &amp;lt;nowiki&amp;gt;https://cwiki.apache.org/confluence/display/CLOUDSTACK/CloudStack+cloudmonkey+CLI&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
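A typical first session with the modern CloudMonkey binary (&amp;lt;code&amp;gt;cmk&amp;lt;/code&amp;gt;) looks roughly like the following; the API URL is a placeholder for the RCS CloudStack endpoint, and the VM id comes from your own account:

```shell
# Configure cmk with your endpoint and API keys
cmk set url https://cloudstack.example.org/client/api
cmk set apikey YOUR_API_KEY
cmk set secretkey YOUR_SECRET_KEY
cmk sync                      # cache the API definitions from the server

# Examples: list your VMs, then stop/start one by its id
cmk list virtualmachines
cmk stop virtualmachine id=<vm-uuid>
cmk start virtualmachine id=<vm-uuid>
```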
&lt;br /&gt;
=== Terraform Integration ===&lt;br /&gt;
Terraform allows you to define infrastructure as code and can be used in conjunction with CloudStack to configure your virtual machines and guest networks. Use the official CloudStack provider.&lt;br /&gt;
&lt;br /&gt;
The following is an example Terraform file for reference. Specify your CloudStack API keys in a separate &amp;lt;code&amp;gt;vars.tf&amp;lt;/code&amp;gt; file rather than hard-coding them.&lt;br /&gt;
{{Highlight|code=# Configure the CloudStack Provider&lt;br /&gt;
terraform {&lt;br /&gt;
  required_providers {&lt;br /&gt;
    cloudstack = {&lt;br /&gt;
      source = &amp;quot;cloudstack/cloudstack&amp;quot;&lt;br /&gt;
      version = &amp;quot;0.4.0&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
provider &amp;quot;cloudstack&amp;quot; {&lt;br /&gt;
  api_url    = &amp;quot;${var.cloudstack_api_url}&amp;quot;&lt;br /&gt;
  api_key    = &amp;quot;${var.cloudstack_api_key}&amp;quot;&lt;br /&gt;
  secret_key = &amp;quot;${var.cloudstack_secret_key}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create a new VPC&lt;br /&gt;
resource &amp;quot;cloudstack_vpc&amp;quot; &amp;quot;default&amp;quot; {&lt;br /&gt;
  name = &amp;quot;wedgenet-vpc&amp;quot;&lt;br /&gt;
  display_text = &amp;quot;wedgenet-vpc&amp;quot;&lt;br /&gt;
  cidr = &amp;quot;100.64.0.0/20&amp;quot;&lt;br /&gt;
  vpc_offering = &amp;quot;Default VPC offering&amp;quot;&lt;br /&gt;
  zone = &amp;quot;zone1&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create a new ACL&lt;br /&gt;
resource &amp;quot;cloudstack_network_acl&amp;quot; &amp;quot;default&amp;quot; {&lt;br /&gt;
    name  = &amp;quot;vpc-acl&amp;quot;&lt;br /&gt;
    vpc_id = &amp;quot;${cloudstack_vpc.default.id}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# One ingress and one egress rule for the ACL&lt;br /&gt;
resource &amp;quot;cloudstack_network_acl_rule&amp;quot; &amp;quot;ingress&amp;quot; {&lt;br /&gt;
    acl_id = &amp;quot;${cloudstack_network_acl.default.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    rule {&lt;br /&gt;
        action       = &amp;quot;allow&amp;quot;&lt;br /&gt;
        cidr_list    = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
        protocol     = &amp;quot;tcp&amp;quot;&lt;br /&gt;
        ports        = [&amp;quot;22&amp;quot;, &amp;quot;80&amp;quot;, &amp;quot;443&amp;quot;]&lt;br /&gt;
        traffic_type = &amp;quot;ingress&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;cloudstack_network_acl_rule&amp;quot; &amp;quot;egress&amp;quot; {&lt;br /&gt;
    acl_id = &amp;quot;${cloudstack_network_acl.default.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    rule {&lt;br /&gt;
        action       = &amp;quot;allow&amp;quot;&lt;br /&gt;
        cidr_list    = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
        protocol     = &amp;quot;all&amp;quot;&lt;br /&gt;
        traffic_type = &amp;quot;egress&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Create a new network in the VPC&lt;br /&gt;
resource &amp;quot;cloudstack_network&amp;quot; &amp;quot;primary&amp;quot; {&lt;br /&gt;
    name = &amp;quot;primary&amp;quot;&lt;br /&gt;
    display_text = &amp;quot;primary&amp;quot;&lt;br /&gt;
    cidr = &amp;quot;100.64.1.0/24&amp;quot;&lt;br /&gt;
    network_offering = &amp;quot;DefaultIsolatedNetworkOfferingForVpcNetworks&amp;quot;&lt;br /&gt;
    acl_id = &amp;quot;${cloudstack_network_acl.default.id}&amp;quot;&lt;br /&gt;
    vpc_id = &amp;quot;${cloudstack_vpc.default.id}&amp;quot;&lt;br /&gt;
    zone = &amp;quot;zone1&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create a new public IP address for this network&lt;br /&gt;
resource &amp;quot;cloudstack_ipaddress&amp;quot; &amp;quot;public_ip&amp;quot; {&lt;br /&gt;
    vpc_id = &amp;quot;${cloudstack_vpc.default.id}&amp;quot;&lt;br /&gt;
    network_id = &amp;quot;${cloudstack_network.primary.id}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create VMs. &lt;br /&gt;
resource &amp;quot;cloudstack_instance&amp;quot; &amp;quot;vm&amp;quot; {&lt;br /&gt;
  count = 1&lt;br /&gt;
  name = &amp;quot;vm${count.index+1}&amp;quot;&lt;br /&gt;
  zone = &amp;quot;zone1&amp;quot;&lt;br /&gt;
  service_offering = &amp;quot;rcs.c4&amp;quot;&lt;br /&gt;
  template = &amp;quot;RockyLinux 8.5&amp;quot;&lt;br /&gt;
  network_id = &amp;quot;${cloudstack_network.primary.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  # Cloud Init data can be used to configure your VM on first startup if your template supports Cloud Init&lt;br /&gt;
  user_data = &amp;lt;&amp;lt;EOF&lt;br /&gt;
#cloud-config&lt;br /&gt;
&lt;br /&gt;
# Require specific packages&lt;br /&gt;
packages:&lt;br /&gt;
 - tmux&lt;br /&gt;
 - git&lt;br /&gt;
 - tcpdump&lt;br /&gt;
&lt;br /&gt;
EOF&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|style=max-height: 300px;overflow:scroll;}}&lt;br /&gt;
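The referenced &amp;lt;code&amp;gt;vars.tf&amp;lt;/code&amp;gt; might look like the sketch below (the variable names match the provider block above; values are supplied at plan/apply time), followed by the usual Terraform workflow:

```shell
# Hypothetical vars.tf declaring the credentials used by the provider block
cat > vars.tf <<'EOF'
variable "cloudstack_api_url" {}
variable "cloudstack_api_key" {}
variable "cloudstack_secret_key" {}
EOF

# Initialize the CloudStack provider plugin, preview the changes, then apply
terraform init
terraform plan
terraform apply
```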
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
=== Cannot create a volume snapshot ===&lt;br /&gt;
Volume snapshots can only be taken on VMs that are powered off.&lt;br /&gt;
&lt;br /&gt;
=== Cannot create a VM snapshot ===&lt;br /&gt;
Disk-only VM snapshots cannot be taken when the VM is running. If you intend to snapshot a running system, you must also snapshot its memory.&lt;br /&gt;
&lt;br /&gt;
=== VM state is still running after shutdown ===&lt;br /&gt;
After running &#039;shutdown&#039; inside a VM, the VM state reported by CloudStack may remain Running.&lt;br /&gt;
&lt;br /&gt;
Try a forced shutdown from the CloudStack management console. CloudStack does not always detect power-state changes made from inside the guest, so the reported state may not reflect the actual state (likely a bug).&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=RCS_Home_Page&amp;diff=1869</id>
		<title>RCS Home Page</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=RCS_Home_Page&amp;diff=1869"/>
		<updated>2022-06-02T21:09:40Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* General information */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Research Computing Services (RCS) is a group within the wider University of Calgary Information Technologies team that plans, manages, and supports high performance computing (HPC) systems in use by researchers throughout the University of Calgary.  Our primary focus is to meet the increasing demand for engineering and scientific computation by offering a wide range of specialized services to help researchers solve highly complex real-world problems or run large scale computationally intensive workloads on our high-end HPC resources.&lt;br /&gt;
&lt;br /&gt;
This RCS Wiki contains technical documentation for users of the HPC systems operated by RCS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- &lt;br /&gt;
In case cluster status changes:&lt;br /&gt;
    *  set the status to yellow or red &lt;br /&gt;
    *  provide a custom &#039;title&#039; and &#039;message&#039;&lt;br /&gt;
&lt;br /&gt;
{{Cluster Status&lt;br /&gt;
|status=green&lt;br /&gt;
}}&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Contact us for support ===&lt;br /&gt;
[[File:Map HSC G204Z.png|150px|thumb|right|Find us at G204Z]]&lt;br /&gt;
* For general RCS/HPC inquiries, please email: [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca]&lt;br /&gt;
* For IT related issues (networking, VPN, email), please email: [mailto:it@ucalgary.ca it@ucalgary.ca]&lt;br /&gt;
* For Compute Canada specific questions: [mailto:support@computecanada.ca support@computecanada.ca]&lt;br /&gt;
&lt;br /&gt;
RCS has an office at the Foothills campus located at [http://ucmapspro.ucalgary.ca/RoomFinder/?Building=HSC&amp;amp;Room=B200D HSC B200D] and can be reached via the IT reception on the main floor at [https://ucmapspro.ucalgary.ca/RoomFinder/?Building=HSC&amp;amp;Room=G204Z G204Z], next to the University bookstore. If you would like a face-to-face meeting with an analyst, please contact us by email to arrange an appointment.&lt;br /&gt;
&lt;br /&gt;
{{Clear}}&lt;br /&gt;
&amp;lt;div class=&amp;quot;row&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div class=&amp;quot;col-md-6&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== General information ==&lt;br /&gt;
* [[General Cluster Guidelines and Policies]]&lt;br /&gt;
* [[How to get an account]]&lt;br /&gt;
* [[Data ownership]]&lt;br /&gt;
* [[Connecting to RCS HPC Systems]]&lt;br /&gt;
* [[External collaborators]]&lt;br /&gt;
&lt;br /&gt;
* [[CloudStack|Cloud/Virtual Machine Infrastructure (CloudStack)]]&lt;br /&gt;
&lt;br /&gt;
* [[On-line resources for new Linux and ARC users]]&lt;br /&gt;
* [[Acknowledging Research Computing Services Group]]&lt;br /&gt;
&lt;br /&gt;
== Cluster Guides ==&lt;br /&gt;
* [[ARC Cluster Guide]] - ARC is a general purpose cluster for University of Calgary researchers.&lt;br /&gt;
* [[Helix Cluster Guide]] - Helix is a specialized cluster mainly provided for Cumming School of Medicine projects.&lt;br /&gt;
* [[GLaDOS Cluster Guide]] - GLaDOS is a researcher-owned cluster maintained by Research Computing Services.&lt;br /&gt;
* [[TALC Cluster Guide]] - The Teaching and Learning Cluster (TALC) is a cluster created by Research Computing Services to support academic courses and workshops.&lt;br /&gt;
* [[MARC Cluster Guide]] - The Medical Advanced Research Computing (MARC) cluster was created at the University of Calgary by Research Computing Services in 2020.&lt;br /&gt;
&lt;br /&gt;
== Other services ==&lt;br /&gt;
&lt;br /&gt;
* [[Jupyter Notebooks]]&lt;br /&gt;
* [[Open OnDemand | Open OnDemand portal]]&lt;br /&gt;
&lt;br /&gt;
== Software pages ==&lt;br /&gt;
* [[Managing software on ARC]]&lt;br /&gt;
* [https://hpc.ucalgary.ca/arc/software/conda Using Conda (external link)]&lt;br /&gt;
* [[Gaussian on ARC]] -- How to use Gaussian 16 on ARC.&lt;br /&gt;
* [[Apache Spark on ARC]]&lt;br /&gt;
* [[ARC Software pages]]&lt;br /&gt;
* [[Bioinformatics applications]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div class=&amp;quot;col-md-6&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Running courses on HPC resources ==&lt;br /&gt;
* [[TALC Cluster|TALC]] - Teaching and Learning Cluster (TALC) is a cluster created by Research Computing Services to support academic courses and workshops.&lt;br /&gt;
* [[TALC Terms of Use]] - Terms of use to which TALC account holders must agree to use the cluster.&lt;br /&gt;
* [[List of courses on TALC]] - A list of current and historical courses taught using TALC.&lt;br /&gt;
&lt;br /&gt;
== Training ==&lt;br /&gt;
* Our [[HPC Systems]]&lt;br /&gt;
* [[HPC Linux topics]] - A list of topics on which RCS technical support staff can provide one-on-one or group training&lt;br /&gt;
* [[Courses]]&lt;br /&gt;
* [[Linux Introduction]]&lt;br /&gt;
* [[What is a scheduler?]]&lt;br /&gt;
* [[Running jobs]]&lt;br /&gt;
* [[Data storage options for UofC researchers]]&lt;br /&gt;
* [[Security and privacy]]&lt;br /&gt;
* [[How to transfer data]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Clear}}&lt;br /&gt;
&lt;br /&gt;
==What&#039;s New==&lt;br /&gt;
* [[CHGI Transition]] - Information on the current CHGI Transition&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack&amp;diff=1868</id>
		<title>CloudStack</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack&amp;diff=1868"/>
		<updated>2022-06-02T20:54:45Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* Differences between RCS HPC and CloudStack */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Apache CloudStack is an Infrastructure as a Service (IaaS) offering that allows researchers to quickly deploy virtual machines for research projects. This service is part of Research Computing Services&#039; Digital Research Infrastructure (DRI) and is free for all University of Calgary researchers and principal investigators.&lt;br /&gt;
&lt;br /&gt;
== Use cases ==&lt;br /&gt;
CloudStack allows you to create virtual machines for a wide range of workloads and use cases, including:&lt;br /&gt;
&lt;br /&gt;
* Running an internal or public-facing web site&lt;br /&gt;
* Running a database&lt;br /&gt;
* Experimenting with new software tools&lt;br /&gt;
* Testing the latest release of a software package&lt;br /&gt;
Please note that CloudStack is offered, and supported, as a research environment; it may not be the appropriate choice for workloads that demand high availability and uptime. Researchers are solely responsible for maintaining any VMs they deploy, including patching and other regular maintenance. We are happy to provide guidance on this process.&lt;br /&gt;
&lt;br /&gt;
Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] for details.&lt;br /&gt;
&lt;br /&gt;
=== Differences between RCS HPC and CloudStack ===&lt;br /&gt;
There are some overlaps between the CloudStack offering and our existing High Performance Computing (HPC) cluster environment.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
!RCS HPC Cluster&lt;br /&gt;
!CloudStack&lt;br /&gt;
|-&lt;br /&gt;
|CPU intensive workloads&lt;br /&gt;
|Yes; 48 CPUs per node, 100&#039;s of nodes&lt;br /&gt;
|No; 1-8 CPUs per VM&lt;br /&gt;
|-&lt;br /&gt;
|Memory intensive workloads&lt;br /&gt;
|Yes; up to 2TB memory per node&lt;br /&gt;
|No; up to 32GB memory per VM&lt;br /&gt;
|-&lt;br /&gt;
|High storage requirement workloads&lt;br /&gt;
|Yes; shared multi-petabyte storage&lt;br /&gt;
|No; up to 1TB per account&lt;br /&gt;
|-&lt;br /&gt;
|Data classification&lt;br /&gt;
|Level 1 &amp;amp; 2 (ARC), Level 3 &amp;amp; 4 (MARC)&lt;br /&gt;
|Level 1 &amp;amp; 2 only&lt;br /&gt;
|-&lt;br /&gt;
|Customized software requirements&lt;br /&gt;
|Yes; use singularity containers&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Custom OS configuration&lt;br /&gt;
|No&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Persistent software or services&lt;br /&gt;
|No; time limited jobs only&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Managed environment&lt;br /&gt;
|Yes&lt;br /&gt;
|No; self managed VMs only&lt;br /&gt;
|-&lt;br /&gt;
|Research support by analysts&lt;br /&gt;
|Yes&lt;br /&gt;
|Limited&lt;br /&gt;
|}&lt;br /&gt;
Not sure whether you need a virtual machine or a compute cluster? Wondering whether this is the &amp;quot;Cloud&amp;quot;? Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] and we will help you use this service to support your research goals.&lt;br /&gt;
&lt;br /&gt;
== Getting access to CloudStack ==&lt;br /&gt;
If you are a researcher or principal investigator, please review our [[CloudStack End User Agreement]] and then request a CloudStack account through [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c Service Now].&lt;br /&gt;
&lt;br /&gt;
Once your account is ready, please refer to the [[CloudStack User Guide]] for details on using CloudStack.&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack&amp;diff=1867</id>
		<title>CloudStack</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack&amp;diff=1867"/>
		<updated>2022-06-02T20:33:39Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* Differences between RCS HPC and CloudStack */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Apache CloudStack is an Infrastructure as a Service (IaaS) offering that allows researchers to quickly deploy virtual machines for research projects. This service is part of Research Computing Services&#039; Digital Research Infrastructure (DRI) and is free for all University of Calgary researchers and principal investigators.&lt;br /&gt;
&lt;br /&gt;
== Use cases ==&lt;br /&gt;
CloudStack allows you to create virtual machines for a wide range of workloads and use cases, including:&lt;br /&gt;
&lt;br /&gt;
* Running an internal or public-facing web site&lt;br /&gt;
* Running a database&lt;br /&gt;
* Experimenting with new software tools&lt;br /&gt;
* Testing the latest release of a software package&lt;br /&gt;
Please note that CloudStack is offered, and supported, as a research environment; it may not be the appropriate choice for workloads that demand high availability and uptime. Researchers are solely responsible for maintaining any VMs they deploy, including patching and other regular maintenance. We are happy to provide guidance on this process.&lt;br /&gt;
&lt;br /&gt;
Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] for details.&lt;br /&gt;
&lt;br /&gt;
=== Differences between RCS HPC and CloudStack ===&lt;br /&gt;
There are some overlaps between the CloudStack offering and our existing High Performance Computing (HPC) cluster environment.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
!RCS HPC Cluster&lt;br /&gt;
!CloudStack&lt;br /&gt;
|-&lt;br /&gt;
|CPU intensive workloads&lt;br /&gt;
|Yes; 48 CPUs per node, 100&#039;s of nodes&lt;br /&gt;
|No; 1-8 CPUs per VM&lt;br /&gt;
|-&lt;br /&gt;
|Memory intensive workloads&lt;br /&gt;
|Yes; up to 2TB memory per node&lt;br /&gt;
|No; up to 16GB memory per VM&lt;br /&gt;
|-&lt;br /&gt;
|High storage requirement workloads&lt;br /&gt;
|Yes; shared multi-petabyte storage&lt;br /&gt;
|No; up to 1TB per account&lt;br /&gt;
|-&lt;br /&gt;
|Data classification&lt;br /&gt;
|Level 1 &amp;amp; 2 (ARC), Level 3 &amp;amp; 4 (MARC)&lt;br /&gt;
|Level 1 &amp;amp; 2 only&lt;br /&gt;
|-&lt;br /&gt;
|Customized software requirements&lt;br /&gt;
|Yes; use singularity containers&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Custom OS configuration&lt;br /&gt;
|No&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Persistent software or services&lt;br /&gt;
|No; time limited jobs only&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Managed environment&lt;br /&gt;
|Yes&lt;br /&gt;
|No; self managed VMs only&lt;br /&gt;
|-&lt;br /&gt;
|Research support by analysts&lt;br /&gt;
|Yes&lt;br /&gt;
|Limited&lt;br /&gt;
|}&lt;br /&gt;
Not sure whether you need a virtual machine or a compute cluster? Wondering whether this is the &amp;quot;Cloud&amp;quot;? Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] and we will help you use this service to support your research goals.&lt;br /&gt;
&lt;br /&gt;
== Getting access to CloudStack ==&lt;br /&gt;
If you are a researcher or principal investigator, please review our [[CloudStack End User Agreement]] and then request a CloudStack account through [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c Service Now].&lt;br /&gt;
&lt;br /&gt;
Once your account is ready, please refer to the [[CloudStack User Guide]] for details on using CloudStack.&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack&amp;diff=1866</id>
		<title>CloudStack</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack&amp;diff=1866"/>
		<updated>2022-06-02T20:32:23Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* Differences between RCS HPC and CloudStack */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Apache CloudStack is an Infrastructure as a Service (IaaS) offering that allows researchers to quickly deploy virtual machines for research projects. This service is part of Research Computing Services&#039; Digital Research Infrastructure (DRI) and is free for all University of Calgary researchers and principal investigators.&lt;br /&gt;
&lt;br /&gt;
== Use cases ==&lt;br /&gt;
CloudStack allows you to create virtual machines for a wide range of workloads and use cases, including:&lt;br /&gt;
&lt;br /&gt;
* Running an internal or public-facing web site&lt;br /&gt;
* Running a database&lt;br /&gt;
* Experimenting with new software tools&lt;br /&gt;
* Testing the latest release of a software package&lt;br /&gt;
Please note that CloudStack is offered, and supported, as a research environment; it may not be the appropriate choice for workloads that demand high availability and uptime. Researchers are solely responsible for maintaining any VMs they deploy, including patching and other regular maintenance. We are happy to provide guidance on this process.&lt;br /&gt;
&lt;br /&gt;
Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] for details.&lt;br /&gt;
&lt;br /&gt;
=== Differences between RCS HPC and CloudStack ===&lt;br /&gt;
There are some overlaps between the CloudStack offering and our existing High Performance Computing (HPC) cluster environment.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
!RCS HPC Cluster&lt;br /&gt;
!CloudStack&lt;br /&gt;
|-&lt;br /&gt;
|CPU intensive workloads&lt;br /&gt;
|Yes; 48 CPUs per node, 100&#039;s of nodes&lt;br /&gt;
|No; 1-8 CPUs per VM&lt;br /&gt;
|-&lt;br /&gt;
|Memory intensive workloads&lt;br /&gt;
|Yes; up to 2TB memory per node&lt;br /&gt;
|No; up to 16GB memory per VM&lt;br /&gt;
|-&lt;br /&gt;
|High storage requirement workloads&lt;br /&gt;
|Yes; shared multi-petabyte storage&lt;br /&gt;
|No; up to 1TB per account&lt;br /&gt;
|-&lt;br /&gt;
|Data classification&lt;br /&gt;
|Level 1 &amp;amp; 2 (ARC), Level 3 &amp;amp; 4 (MARC)&lt;br /&gt;
|Level 1 &amp;amp; 2 only&lt;br /&gt;
|-&lt;br /&gt;
|Customized software requirements&lt;br /&gt;
|Yes; use singularity containers&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Custom OS configuration&lt;br /&gt;
|No&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Persistent software or services&lt;br /&gt;
|No; time limited jobs only&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Managed environment&lt;br /&gt;
|Yes&lt;br /&gt;
|No; self managed VMs only&lt;br /&gt;
|-&lt;br /&gt;
|Research support by analysts&lt;br /&gt;
|Yes&lt;br /&gt;
|Limited&lt;br /&gt;
|}&lt;br /&gt;
Not sure whether you need a virtual machine or a compute cluster? Wondering whether this is the &amp;quot;Cloud&amp;quot;? Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] and we will help you use this service to support your research goals.&lt;br /&gt;
&lt;br /&gt;
== Getting access to CloudStack ==&lt;br /&gt;
If you are a researcher or principal investigator, please review our [[CloudStack End User Agreement]] and then request a CloudStack account through [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c Service Now].&lt;br /&gt;
&lt;br /&gt;
Once your account is ready, please refer to the [[CloudStack User Guide]] for details on using CloudStack.&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack_User_Guide&amp;diff=1859</id>
		<title>CloudStack User Guide</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack_User_Guide&amp;diff=1859"/>
		<updated>2022-06-02T19:25:03Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* Keep security in mind */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is a user&#039;s guide to the CloudStack service provided by Research Computing Services.&lt;br /&gt;
&lt;br /&gt;
== Introduction==&lt;br /&gt;
Apache CloudStack is an Infrastructure as a Service (IaaS) platform that allows users to quickly spin up Linux-based (non-Windows) virtual machines. RCS provides this service to help researchers quickly set up and prototype short-term, research-related software on premises. CloudStack is not appropriate for workloads that depend on Windows. Services set up on CloudStack virtual machines can be accessed from the campus network, and from the internet if required.&lt;br /&gt;
&lt;br /&gt;
Access to CloudStack can be requested via [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c ServiceNow].&lt;br /&gt;
&lt;br /&gt;
Please refer to our [[CloudStack End User Agreement]] for acceptable uses and requirements.&lt;br /&gt;
&lt;br /&gt;
== Using your virtual machine ==&lt;br /&gt;
You may run whatever operating system you wish (with the exception of Windows).  We cannot provide specific management advice for each and every operating system available, but we can offer some suggestions on important considerations to be aware of.&lt;br /&gt;
&lt;br /&gt;
=== Educate yourself ===&lt;br /&gt;
All operating systems (OS) have user groups, web sites, wikis, or mailing lists somewhere on the internet.  They can be a valuable resource.  Most OS providers have online documentation that describes using their product.  For example, Rocky Linux, used by RCS, has a [https://docs.rockylinux.org/ documentation site].  These are excellent resources and can help you understand how to manage your virtual machine.&lt;br /&gt;
=== Keep security in mind===&lt;br /&gt;
To help keep our network and infrastructure safe from cyber attacks, it is critical that your VMs are properly configured to reduce the number of ways that attackers could exploit them. Here are some common tasks that you can do to help harden your VM:&lt;br /&gt;
*Ensure that the only services running on your VM are the ones you must run.  Each OS has a way of managing which services are running (sysvinit, systemd, etc.).  Please ensure that unnecessary services have been disabled.&lt;br /&gt;
&lt;br /&gt;
*Disable or delete any unused accounts. Many operating systems and applications ship with pre-configured accounts.  Make sure these are either disabled or not allowed to log in.&lt;br /&gt;
&lt;br /&gt;
*All accounts should have strong [https://it.ucalgary.ca/it-security/passwords-do-i-have-change-them/ passwords].&lt;br /&gt;
&lt;br /&gt;
*Many operating systems can update themselves automatically.  If possible, please enable this. Updates can also be configured to skip certain software if it would interfere with your research, but please be advised that doing so could place your system at risk.&lt;br /&gt;
&lt;br /&gt;
*If your VM must be exposed to the internet, consider using Trend Micro Cloud One Workload Security from IT security to help monitor for and block cyber attacks.&lt;br /&gt;
&lt;br /&gt;
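The hardening advice above can be partially sketched for an OpenSSH-based Linux VM. The following &amp;lt;code&amp;gt;sshd_config&amp;lt;/code&amp;gt; fragment (an illustrative sketch, not a required configuration) disables password logins in favour of key-based authentication:&lt;br /&gt;

```
# /etc/ssh/sshd_config (fragment) -- disable password logins, keep key auth
PasswordAuthentication no
PubkeyAuthentication yes
PermitRootLogin prohibit-password
```

After editing, reload the SSH daemon (e.g. &amp;lt;code&amp;gt;systemctl reload sshd&amp;lt;/code&amp;gt;) and keep your current console session open until you have confirmed that key-based login still works.&lt;br /&gt;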
== Accessing CloudStack ==&lt;br /&gt;
===Accessing the CloudStack management console===&lt;br /&gt;
&lt;br /&gt;
The CloudStack management console is a web-based portal that allows you to view and manage your cloud infrastructure including virtual machines, storage, and network. Any modern web browser, including Chrome, Firefox, Edge, and Safari, is supported. &lt;br /&gt;
&lt;br /&gt;
Accessing the CloudStack management console is possible only from an IT-managed computer or through the IT General VPN when working on unmanaged machines (e.g. AirUC) or when working off campus (e.g. at home). Please review the IT [https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=52a169d6dbe5bc506ad32637059619cd knowledge base article on connecting to the General VPN] or contact IT support if you need assistance connecting to the General VPN. &lt;br /&gt;
[[File:CloudStack VPN Connection.png|alt=CloudStack VPN Connection|none|thumb|CloudStack VPN Connection]]&lt;br /&gt;
=== Login to CloudStack===&lt;br /&gt;
&lt;br /&gt;
To log in to CloudStack, navigate to https://cloudstack.rcs.ucalgary.ca/. If this site fails to load, please make sure you are either on an IT-managed computer or connected to the General VPN.&lt;br /&gt;
&lt;br /&gt;
Sign in to CloudStack using the Single Sign-On option as shown in the image below. This method will require you to authenticate through our central authentication service using your University of Calgary IT credentials and will require multi-factor authentication. You must have multi-factor authentication set up either via your phone or with the Microsoft Authenticator app.&lt;br /&gt;
[[File:CloudStack Login Page.png|alt=CloudStack Login Page|none|thumb|CloudStack Login Page]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; Due to a bug with the UI, if the Single Sign-On option is disabled, please refresh the login page and try again. This issue should be addressed in our next update for CloudStack.&lt;br /&gt;
&lt;br /&gt;
=== CloudStack Dashboard===&lt;br /&gt;
&lt;br /&gt;
After logging in, you will be presented with your CloudStack management console. The dashboard shows you a general overview of your account&#039;s status.&lt;br /&gt;
[[File:CloudStack Dashboard.png|alt=CloudStack Dashboard|none|thumb|CloudStack Dashboard]]On the right hand side of the dashboard, you will also see recent activity and events that occurred within your CloudStack account.&lt;br /&gt;
&lt;br /&gt;
If you wish to see your CloudStack account resource quota and allocation, navigate to: &amp;lt;code&amp;gt;Accounts -&amp;gt; Click on your account -&amp;gt; Resources&amp;lt;/code&amp;gt;. &lt;br /&gt;
[[File:CloudStack Resource Quota.png|alt=CloudStack Resource Quota|none|thumb|CloudStack Resource Quota]]&lt;br /&gt;
&lt;br /&gt;
== Working with virtual machines==&lt;br /&gt;
&lt;br /&gt;
CloudStack allows you to control the lifecycle of virtual machines within your cloud account. VMs may be started, stopped, rebooted, or destroyed within your management console.&lt;br /&gt;
&lt;br /&gt;
===Create a VM===&lt;br /&gt;
&lt;br /&gt;
To create a new VM, enter the CloudStack management console and navigate to:  &amp;lt;code&amp;gt;Compute -&amp;gt; Instances -&amp;gt; Add Instance&amp;lt;/code&amp;gt;[[File:CloudStack Instance Summary.png|alt=CloudStack Instance Summary|thumb|CloudStack Instance Summary|493x493px]]&lt;br /&gt;
&lt;br /&gt;
Virtual Machines require the following details:&lt;br /&gt;
&lt;br /&gt;
# &#039;&#039;&#039;Deployment zone&#039;&#039;&#039;. Your account will already be placed in the appropriate zone.&lt;br /&gt;
# &#039;&#039;&#039;Boot template or ISO&#039;&#039;&#039;. You may choose either a pre-created template or boot from a custom CD-ROM ISO file.&lt;br /&gt;
# &#039;&#039;&#039;Compute offering&#039;&#039;&#039;. You may select an appropriate size for your new VM. Resources will be counted against your account&#039;s quota.&lt;br /&gt;
# &#039;&#039;&#039;Data Disk&#039;&#039;&#039;. You may choose to add an additional virtual disk to your VM to store your data. Alternatively, if you wish to use a single virtual disk for your VM, you may choose to override the size of your root disk in step 2 and select &#039;No thanks&#039; in this step.&lt;br /&gt;
# &#039;&#039;&#039;Networks&#039;&#039;&#039;. You may choose one or more networks your VM should connect to. All CloudStack accounts come with a default network already created and ready to be used.&lt;br /&gt;
# &#039;&#039;&#039;SSH keypairs&#039;&#039;&#039;. For templates that support custom SSH key pairs, you may choose to use a custom SSH keypair to be installed as part of the deployment process.&lt;br /&gt;
# &#039;&#039;&#039;Advanced settings&#039;&#039;&#039;. For templates that support custom user-data (Cloud-Init), you may choose to enable the advanced settings and provide your own Cloud-Init user-data payload. More on this in the advanced tasks section below.&lt;br /&gt;
# &#039;&#039;&#039;Other VM details&#039;&#039;&#039;. You may give your new VM a friendly name and make it part of a group. Groups allow you to group related VMs together for better organization. You may change these details at a later time.&lt;br /&gt;
&lt;br /&gt;
When you are done, review the instance summary on the right hand side and then click on the &#039;Launch Virtual Machine&#039; button.&lt;br /&gt;
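The advanced user-data option in step 7 accepts a standard Cloud-Init configuration. As a minimal, hypothetical sketch, the fragment below authorizes an SSH public key (the key shown is a placeholder) and installs a package on first boot:

```yaml
#cloud-config
# Minimal user-data sketch: authorize an SSH key and install a package on first boot
ssh_authorized_keys:
  - ssh-ed25519 AAAA... user@example   # placeholder public key
packages:
  - git
```

Larger, ready-to-use Cloud-Init examples are provided in the Cloud-Init Automation section below.&lt;br /&gt;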
&lt;br /&gt;
=== Selecting your VM Operating system ===&lt;br /&gt;
Many OSs provide various editions tailored to specific use cases.  A desktop VM may not be appropriate when you need to run a database server.  The OS provider will have guides on how to choose an edition.&lt;br /&gt;
&lt;br /&gt;
You may choose to install the operating system to your virtual machine using either pre-built templates or from scratch using an ISO image.&lt;br /&gt;
&lt;br /&gt;
====Install from a virtual machine template====&lt;br /&gt;
&lt;br /&gt;
We provide a Rocky Linux 8.5 and an Ubuntu Server 22.04 LTS template for your convenience. These templates are pre-built images with the operating system installed and ready for use. Our templates also support further automated setup via Cloud-Init configuration data that can be provided when deploying a new VM. Currently, we offer the following templates: &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Template&lt;br /&gt;
!Cloud-Init Support&lt;br /&gt;
!Password Support&lt;br /&gt;
!Default Username&lt;br /&gt;
|-&lt;br /&gt;
|Rocky Linux 8.5&lt;br /&gt;
|Yes&lt;br /&gt;
|Yes&lt;br /&gt;
|rocky&lt;br /&gt;
|-&lt;br /&gt;
|Ubuntu Server 22.04&lt;br /&gt;
|Yes&lt;br /&gt;
|Yes&lt;br /&gt;
|ubuntu&lt;br /&gt;
|}&lt;br /&gt;
Rocky Linux is an open source Linux distribution that is binary-compatible with Red Hat Enterprise Linux and is what RCS recommends.&lt;br /&gt;
&lt;br /&gt;
For templates that support passwords, the generated password that appears after a VM is created is applied to the default username.&lt;br /&gt;
&lt;br /&gt;
Security note: All VM templates are configured with SSH password authentication enabled. You should be able to SSH to your VM from another system connected to the same guest network. Do not expose port 22 unless required, and we highly recommend using key-based authentication.&lt;br /&gt;
&lt;br /&gt;
===== Virtual machine credentials =====&lt;br /&gt;
VM templates that have password support will have a randomly generated 6-character password set when the VM is first created or when a password reset is requested (available only while the VM is powered off). The new password is displayed as a notification in your CloudStack management console. &lt;br /&gt;
[[File:CloudStack VM Password.png|alt=CloudStack VM Password|none|thumb|CloudStack VM Password]]&lt;br /&gt;
This password is set on the default username for your template. For example, the Rocky Linux VM template will set this password on the &amp;lt;code&amp;gt;rocky&amp;lt;/code&amp;gt; user account. You may become the super user by logging in as the &amp;lt;code&amp;gt;rocky&amp;lt;/code&amp;gt; user and then running &amp;lt;code&amp;gt;sudo su&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note: If you specify a custom Cloud-Init config that creates additional users or sets account passwords, the displayed password will be overridden and have no effect.&lt;br /&gt;
&lt;br /&gt;
==== Install from an ISO image====&lt;br /&gt;
We provide various ISO images for popular Linux distributions. You may select one of these ISO images instead of using a pre-built template when deploying a new virtual machine. We currently provide:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Distribution&lt;br /&gt;
!ISO&lt;br /&gt;
|-&lt;br /&gt;
|Ubuntu 20.04&lt;br /&gt;
|ubuntu-20.04.4-desktop-amd64.iso&lt;br /&gt;
&lt;br /&gt;
ubuntu-20.04.4-live-server-amd64.iso&lt;br /&gt;
|-&lt;br /&gt;
|Ubuntu 21.10&lt;br /&gt;
|ubuntu-21.10-desktop-amd64.iso&lt;br /&gt;
&lt;br /&gt;
ubuntu-21.10-live-server-amd64.iso&lt;br /&gt;
|-&lt;br /&gt;
|Ubuntu 22.04&lt;br /&gt;
|ubuntu-22.04-live-server-amd64.iso&lt;br /&gt;
|-&lt;br /&gt;
|Rocky Linux 8.5&lt;br /&gt;
|Rocky-8.5-x86_64-minimal.iso&lt;br /&gt;
|-&lt;br /&gt;
|Fedora 35&lt;br /&gt;
|Fedora-Workstation-Live-x86_64-35-1.2.iso&lt;br /&gt;
|}&lt;br /&gt;
You may add a custom ISO file to your CloudStack account either by uploading the ISO directly through the web console or by providing a URL to the ISO file on the internet.&lt;br /&gt;
&lt;br /&gt;
Please do not install Windows on our CloudStack infrastructure. It is against our user agreement to run Windows-based systems on this infrastructure. If you need a Windows VM, please contact us for alternative solutions.&lt;br /&gt;
&lt;br /&gt;
===== Register an ISO with a URL=====&lt;br /&gt;
[[File:CloudStack Download ISO.png|alt=CloudStack Download ISO|thumb|CloudStack Download ISO|190x190px]]&lt;br /&gt;
&lt;br /&gt;
To add a custom ISO file from the internet, enter the CloudStack management console and navigate to:  &amp;lt;code&amp;gt;Images -&amp;gt; ISOs -&amp;gt; Register ISO&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You may check the state of the ISO file by clicking on it. If the file is downloaded successfully, its ready state becomes &#039;true&#039;; the ISO file will only appear in the selection list once the download has completed successfully.&lt;br /&gt;
[[File:CloudStack ISO Ready.png|alt=CloudStack ISO Ready|none|thumb|172x172px|CloudStack ISO Ready]]&lt;br /&gt;
&lt;br /&gt;
=====Upload a custom ISO=====&lt;br /&gt;
&lt;br /&gt;
To upload an ISO file, enter the CloudStack management console and navigate to:  &amp;lt;code&amp;gt;Images -&amp;gt; ISOs -&amp;gt; Upload ISO from Local (icon)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:CloudStack Upload ISO.png|alt=CloudStack Upload ISO|none|thumb|CloudStack Upload ISO|217x217px]]&lt;br /&gt;
&lt;br /&gt;
===Connecting to your VM console===&lt;br /&gt;
The CloudStack management console has a KVM (keyboard, video, mouse) feature built-in, allowing you to remotely connect to and interact with your virtual machine. To connect to your virtual machine&#039;s console, navigate to: &amp;lt;code&amp;gt;Compute -&amp;gt; Instances -&amp;gt; Your Instance -&amp;gt; View console&amp;lt;/code&amp;gt;.&lt;br /&gt;
[[File:CloudStack View Console.png|alt=CloudStack View Console|none|thumb|CloudStack View Console]]&lt;br /&gt;
&lt;br /&gt;
=== Expanding a VM disk ===&lt;br /&gt;
[[File:CloudStack Expand Volume.png|alt=CloudStack Expand Volume|thumb|CloudStack Expand Volume]]&lt;br /&gt;
Virtual machine disks can be expanded after they are created within CloudStack. However, you will need to expand the partitions and filesystems manually.&lt;br /&gt;
&lt;br /&gt;
To grow an existing disk:&lt;br /&gt;
&lt;br /&gt;
# Go into your VM details page and click on ‘Volumes’.&lt;br /&gt;
# Click on the volume you wish to expand.&lt;br /&gt;
# Click on the ‘Resize Volume’ icon in the top right.&lt;br /&gt;
Once the volume has been expanded, you should be able to verify that the disk has grown with &amp;lt;code&amp;gt;lsblk&amp;lt;/code&amp;gt;. The kernel should also log some messages when this occurs. However, you will still need to expand any partitions, volumes, and filesystems on your system manually.&lt;br /&gt;
&lt;br /&gt;
To expand your partition, use the &amp;lt;code&amp;gt;growpart&amp;lt;/code&amp;gt; command followed by your disk device and partition number, e.g. &amp;lt;code&amp;gt;/usr/bin/growpart /dev/vda 3&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For LVM volume sets, you can expand the volume using the &amp;lt;code&amp;gt;pvresize&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;lvresize&amp;lt;/code&amp;gt; commands:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/sbin/pvresize -y -q /dev/vda3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/sbin/lvresize -y -q -r -l +100%FREE /dev/mapper/&amp;lt;volume-name&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To expand your filesystem:&lt;br /&gt;
&lt;br /&gt;
* XFS: &amp;lt;code&amp;gt;/usr/sbin/xfs_growfs &amp;lt;volume&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
* EXT: &amp;lt;code&amp;gt;resize2fs &amp;lt;volume&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
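Once the partition, volume, and filesystem have been expanded, you can confirm that the mounted filesystem reports the new size (a quick sanity check; the mount point &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; here is just an example):&lt;br /&gt;

```shell
# Report the size and usage of the root filesystem after expansion
df -h /
```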
&lt;br /&gt;
=== Destroying a VM ===&lt;br /&gt;
If you need to delete a VM, click on the red garbage bin icon in the VM instance page. All deletions are irreversible, so please make sure you have a copy of any data you need before proceeding.&lt;br /&gt;
[[File:CloudStack Delete VM.png|alt=CloudStack Delete VM|none|thumb|CloudStack Delete VM]]&lt;br /&gt;
The VM root volume can be deleted immediately by enabling the &#039;Expunge&#039; option in the dialog box. If left disabled, the VM root volume will linger for a day before it is deleted by the system. You may wish to expunge a volume if you are running low on space or volume quota.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Virtual machine networking ==&lt;br /&gt;
The CloudStack platform allows you to define a custom virtual private cloud (VPC) network, which can contain any number of guest networks for your virtual machines to connect to. Each guest network has its own private network address space and is not directly routable from campus or the internet. For virtual machines that require internet access, the VPC or guest network they are connected to must have a NAT IP address associated with it. The following diagram shows how a guest network connects to the internet and the campus network.&lt;br /&gt;
[[File:CloudStack Guest Networking.png|alt=CloudStack Guest Networking|none|thumb|CloudStack Guest Networking]]&lt;br /&gt;
In order to expose a virtual machine&#039;s services to campus or the internet, the appropriate port forwarding rules must be set up on the VPC containing the guest network. More on this will be discussed in the next section.&lt;br /&gt;
&lt;br /&gt;
Having multiple guest networks allows for more advanced network setups but is not required. We recommend using a single flat network for most workloads. &lt;br /&gt;
&lt;br /&gt;
By default, all CloudStack accounts come with a default VPC and guest network set up with a NAT IP assigned.&lt;br /&gt;
&lt;br /&gt;
=== IP addresses ===&lt;br /&gt;
Due to the design decisions made during the setup of the CloudStack platform, only internal 10.44.12X.X IPs can be assigned to your VPC. These IP addresses are accessible from the university campus network. However, there is a special section of IP addresses that can be accessed from the internet.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!IP address range&lt;br /&gt;
!Accessible from&lt;br /&gt;
!Internet IP mapping&lt;br /&gt;
|-&lt;br /&gt;
|10.44.120.3-128&lt;br /&gt;
|Campus, Internet&lt;br /&gt;
|10.44.120.X maps to 136.159.140.X (ports 80 and 443 only)&lt;br /&gt;
|-&lt;br /&gt;
|10.44.120.129-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|10.44.121.0-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|10.44.122.0-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|10.44.123.0-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|}&lt;br /&gt;
If you need a service exposed to the internet, please request a public IP address using our [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c ServiceNow request form]. Additionally, if your service does not use port 80 or 443, you must also submit a firewall change request to allow the additional port through. &lt;br /&gt;
&lt;br /&gt;
=== Exposing a network service to campus ===&lt;br /&gt;
In order to make a virtual machine visible to the campus network, you must first set up port forwarding from a campus IP address to your virtual machine.&lt;br /&gt;
&lt;br /&gt;
To create a port forwarding rule, navigate to &amp;lt;code&amp;gt;Network -&amp;gt; VPC -&amp;gt; Select your VPC -&amp;gt; Public IP Addresses&amp;lt;/code&amp;gt;. If you do not have any available IP addresses, you will need to click on &#039;Acquire New IP&#039; and select an available IP address. Click on the IP address you wish to create the port forwarding on, then navigate to the &#039;Port Forwarding&#039; tab. Enter the private port range, the public port range, the protocol, and select the target VM. &lt;br /&gt;
&lt;br /&gt;
For example, to port forward only HTTP (tcp/80) traffic, you would enter the following:&lt;br /&gt;
[[File:CloudStack Port Forwarding.png|alt=CloudStack Port Forwarding|none|thumb|CloudStack Port Forwarding]]Once the port forwarding is created, you should be able to access the service from on campus. If for some reason access to your service does not work, there may be a firewall restriction on IT&#039;s network. In such circumstances, please contact us for assistance.&lt;br /&gt;
&lt;br /&gt;
=== Exposing a network to the internet ===&lt;br /&gt;
Exposing a service to the internet is the same as exposing it to campus. However, you must create the port forwarding on an IP address that maps to an internet IP address, as outlined in the IP address table above. If your account does not have one of these IP addresses available, please request one via the [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c ServiceNow request form].&lt;br /&gt;
&lt;br /&gt;
By default, only ports 80 and 443 are allowed through the Internet IP address. For all other ports, please [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=47cd16d113153a00b5b4ff82e144b0bf create a firewall rule change request in ServiceNow].&lt;br /&gt;
&lt;br /&gt;
== Cloud-Init Automation ==&lt;br /&gt;
&lt;br /&gt;
=== Ubuntu ===&lt;br /&gt;
The following Cloud-Init configs apply to Ubuntu VM templates.&lt;br /&gt;
&lt;br /&gt;
==== Ubuntu desktop ====&lt;br /&gt;
Use the following Cloud-Init config with the Ubuntu Server template to set up an Ubuntu desktop environment. The setup step takes up to 15 minutes to complete, and you should see a login screen when the setup finishes. &lt;br /&gt;
&lt;br /&gt;
Adjust the root and user passwords as desired. Below, the test user&#039;s password is set to blank, allowing you to log in to GNOME without a password.&lt;br /&gt;
{{Highlight|code=#cloud-config&lt;br /&gt;
disable_root: false&lt;br /&gt;
&lt;br /&gt;
users:&lt;br /&gt;
  - name: user&lt;br /&gt;
    lock_passwd: false&lt;br /&gt;
    inactive: false&lt;br /&gt;
    gecos: Test User&lt;br /&gt;
    primary_group: user&lt;br /&gt;
    groups: wheel&lt;br /&gt;
    passwd: $1$ADUODeAy$eCJ1lPSxhSGmSvrmWxjLC1&lt;br /&gt;
      &lt;br /&gt;
chpasswd:&lt;br /&gt;
  list: {{!}}&lt;br /&gt;
    root:password&lt;br /&gt;
  expire: false&lt;br /&gt;
    &lt;br /&gt;
# Install a graphical desktop&lt;br /&gt;
runcmd:&lt;br /&gt;
  - DEBIAN_FRONTEND=noninteractive apt -y upgrade&lt;br /&gt;
  - DEBIAN_FRONTEND=noninteractive apt -y install tasksel&lt;br /&gt;
  - tasksel install gnome-desktop&lt;br /&gt;
  - systemctl set-default graphical.target&lt;br /&gt;
  - systemctl isolate graphical.target|style=max-height: 300px;overflow:scroll;}}&lt;br /&gt;
&lt;br /&gt;
=== Rocky Linux ===&lt;br /&gt;
The following Cloud-Init configs apply to Rocky Linux templates.&lt;br /&gt;
&lt;br /&gt;
==== Rocky Linux Desktop ====&lt;br /&gt;
Use the following Cloud-Init config with the Rocky Linux template to set up a Rocky Linux desktop environment. The setup step takes up to 10 minutes to complete, and you should see a login screen when the setup finishes. &lt;br /&gt;
&lt;br /&gt;
Adjust the root and user passwords as desired. Below, the test user&#039;s password is set to blank, allowing you to log in to GNOME without a password.&lt;br /&gt;
{{Highlight|code=#cloud-config&lt;br /&gt;
disable_root: false&lt;br /&gt;
&lt;br /&gt;
users:&lt;br /&gt;
  - name: user&lt;br /&gt;
    lock_passwd: false&lt;br /&gt;
    inactive: false&lt;br /&gt;
    gecos: Test User&lt;br /&gt;
    primary_group: user&lt;br /&gt;
    groups: wheel&lt;br /&gt;
    passwd: $1$ADUODeAy$eCJ1lPSxhSGmSvrmWxjLC1&lt;br /&gt;
      &lt;br /&gt;
chpasswd:&lt;br /&gt;
  list: {{!}}&lt;br /&gt;
    root:password&lt;br /&gt;
  expire: false&lt;br /&gt;
    &lt;br /&gt;
# Install a graphical desktop&lt;br /&gt;
runcmd:&lt;br /&gt;
  - yum -y install &amp;quot;@Workstation&amp;quot;&lt;br /&gt;
  - systemctl set-default graphical.target&lt;br /&gt;
  - systemctl isolate graphical.target}}&lt;br /&gt;
&lt;br /&gt;
==== Rocky Linux Docker host ====&lt;br /&gt;
Use the following Cloud-Init config with the Rocky Linux template to set up a new Docker host. This server can then be used to run Docker containers. Also included are:&lt;br /&gt;
&lt;br /&gt;
# The docker-compose utility to help deploy container stacks more easily&lt;br /&gt;
# A helper script to expand the &amp;lt;code&amp;gt;/var&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; filesystems on first startup based on the available space in the ROOT volume. &lt;br /&gt;
&lt;br /&gt;
Use the CloudStack generated password with the &#039;&amp;lt;code&amp;gt;rocky&amp;lt;/code&amp;gt;&#039; default user account to log in.{{Highlight|code=#cloud-config&lt;br /&gt;
&lt;br /&gt;
write_files:&lt;br /&gt;
  - path: /usr/bin/expand_lvm_root&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      /usr/bin/growpart /dev/vda 3&lt;br /&gt;
      /usr/sbin/pvresize -y -q /dev/vda3&lt;br /&gt;
      /usr/sbin/lvresize -y -q -r -l +50%FREE /dev/mapper/*root&lt;br /&gt;
      /usr/sbin/lvresize -y -q -r -l +100%FREE /dev/mapper/*var&lt;br /&gt;
      /usr/sbin/xfs_growfs /&lt;br /&gt;
      /usr/sbin/xfs_growfs /var&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
&lt;br /&gt;
  - path: /usr/bin/setup_docker&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      yum install -y yum-utils&lt;br /&gt;
      yum-config-manager \&lt;br /&gt;
         --add-repo \&lt;br /&gt;
         https://download.docker.com/linux/centos/docker-ce.repo&lt;br /&gt;
      yum install -y docker-ce docker-ce-cli containerd.io&lt;br /&gt;
      systemctl start docker&lt;br /&gt;
      systemctl enable docker&lt;br /&gt;
      &lt;br /&gt;
      curl -L &amp;quot;https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)&amp;quot; -o /usr/local/bin/docker-compose&lt;br /&gt;
      chmod +x /usr/local/bin/docker-compose&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
&lt;br /&gt;
  - path: /root/docker-compose.yml&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      version: &#039;3.3&#039;&lt;br /&gt;
      services:&lt;br /&gt;
        web:&lt;br /&gt;
          image: php:7.4-apache&lt;br /&gt;
          restart: always&lt;br /&gt;
          user: &amp;quot;0:0&amp;quot;&lt;br /&gt;
          volumes:&lt;br /&gt;
            - /var/www/html:/var/www/html&lt;br /&gt;
          ports:&lt;br /&gt;
            - &amp;quot;80:80&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  - path: /var/www/html/index.php&lt;br /&gt;
    permissions: &#039;0644&#039;&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      &amp;lt;h1&amp;gt;Hello there!&amp;lt;/h1&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;I see you from &amp;lt;?php echo $_SERVER[&#039;REMOTE_ADDR&#039;]; ?&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;&amp;lt;nowiki&amp;gt;pre&amp;lt;/nowiki&amp;gt;&amp;gt;&amp;lt;?php print_r($_SERVER); ?&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
    &lt;br /&gt;
# Ensure VM has the largest / possible&lt;br /&gt;
runcmd:&lt;br /&gt;
  - /usr/bin/expand_lvm_root&lt;br /&gt;
  - /usr/bin/setup_docker&lt;br /&gt;
  - cd /root; docker-compose up -d|style=max-height: 300px;overflow:scroll;}}&lt;br /&gt;
&lt;br /&gt;
==== UC Authentication ====&lt;br /&gt;
Use the following Cloud-Init config to enable UC-based authentication. Local accounts with the same username as an IT account may then use the IT credentials to log in.&lt;br /&gt;
{{Highlight|code=#cloud-config&lt;br /&gt;
&lt;br /&gt;
write_files:&lt;br /&gt;
  - path: /usr/bin/setup_uc_auth&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      yum install -y sssd sssd-dbus sssd-krb5 krb5-workstation authselect-compat&lt;br /&gt;
      &lt;br /&gt;
      cat &amp;lt;&amp;lt;EOF &amp;gt; /etc/sssd/sssd.conf&lt;br /&gt;
      [sssd]&lt;br /&gt;
      config_file_version = 2&lt;br /&gt;
      services = nss, pam, ifp&lt;br /&gt;
      domains = uc.ucalgary.ca&lt;br /&gt;
      &lt;br /&gt;
      [domain/uc.ucalgary.ca]&lt;br /&gt;
      id_provider = files&lt;br /&gt;
      debug_level = 5&lt;br /&gt;
      auth_provider = krb5&lt;br /&gt;
      chpass_provider = krb5&lt;br /&gt;
      &lt;br /&gt;
      krb5_realm = UC.UCALGARY.CA&lt;br /&gt;
      krb5_server = ITSODCSRV14.UC.UCALGARY.CA:88&lt;br /&gt;
      krb5_validate = false&lt;br /&gt;
      EOF&lt;br /&gt;
      chmod 600 /etc/sssd/sssd.conf&lt;br /&gt;
      &lt;br /&gt;
      cat &amp;lt;&amp;lt;EOF &amp;gt; /etc/krb5.conf&lt;br /&gt;
      [logging]&lt;br /&gt;
       default = FILE:/var/log/krb5libs.log&lt;br /&gt;
       kdc = FILE:/var/log/krb5kdc.log&lt;br /&gt;
       admin_server = FILE:/var/log/kadmind.log&lt;br /&gt;
      &lt;br /&gt;
      [libdefaults]&lt;br /&gt;
       default_realm = UC.UCALGARY.CA&lt;br /&gt;
       dns_lookup_realm = false&lt;br /&gt;
       dns_lookup_kdc = false&lt;br /&gt;
       ticket_lifetime = 24h&lt;br /&gt;
       renew_lifetime = 7d&lt;br /&gt;
       forwardable = true&lt;br /&gt;
      &lt;br /&gt;
      [realms]&lt;br /&gt;
       UC.UCALGARY.CA = {&lt;br /&gt;
        kdc = itsodcsrv14.uc.ucalgary.ca&lt;br /&gt;
       }&lt;br /&gt;
      &lt;br /&gt;
      [domain_realm]&lt;br /&gt;
       uc.ucalgary.ca = UC.UCALGARY.CA&lt;br /&gt;
       .uc.ucalgary.ca = UC.UCALGARY.CA&lt;br /&gt;
      EOF&lt;br /&gt;
      &lt;br /&gt;
      mkdir -p /etc/authselect/custom/rcs&lt;br /&gt;
      cd /etc/authselect/custom/rcs&lt;br /&gt;
      echo H4sIAMYsbGIAA+0ca3PbuDGf+StQ34dLMpEtybLdUevO6BKl56mTuJGv7Uwmo4FISMKJBHQAaEfNub+9uwBJkXrLshxnjjuTyCKJfWF3sQsuFPhS9CtB79keoQpw2mjYT4DZz1r1uPGs1qhX6yf12vFx/Vm1Vqs1jp+R6j6ZSiHWhipCnikpzarn1t3/TuErj8bhhBzccjOs6Igq41MVHBDen71WUey3mCsWHNx5W4wKpT+qSAGjI3lDQxj8SarB0UDIiB2FcsBFRfuKMfHZY4L2QpYbS2MzZMJwnxouxfnXefQHTaNi9nufhprdpQj6XAyYGisuzHIUuYcWIhlTrW/lRkxMNdO0OH5HdHdeXlDNjAGCuhJQ0IM4GjPFx0Om4PGjDM9n8pULP4wDtpkik78q1Lec/WifcMr8kSyE7fB/a9ssYf8Q2PiPc6/3RmNN/K826mez8f/07LSM/48BR4uD8dGaWOytGbc8BK8buSTu5odtFEyPitGxDH2LYXai9kFjjf8fn4HPT/3/GPy/fnZ6Uvr/Y8BXiP/gTHHOL/KZ0Z2HRpGmD2meszi5mIExjbpM3BxquRuOgIkJICE/kE7qsKQYGwjXRbSrfD2Xxe7EVp/yEOMGsjZWzOLSPASuCHJ83iCxwPtdwyN2XqtXqwvYSpHMMKPjfp/7HHFtyIydrWBnXeeFQkT4fak4ZImulwh134n3POr7Mk50sTUWGMy0RoE2hnmRAJHDgzLtwk1ewffmJq/gXZiJBf9iTSaHZGvTA0ZoGGumdsakY1AxC7q8j9qJeUD+SmqwPhAQi5kC7k8B69M4NOc9GhA7TutzOSLIB4g1EvJWnHPIFhT77HBbE9hNXZBmRNxYi0yTlHvhyexaA9cYvhDkGCMZDTfGMmITLiw7wMGNHLECuq2ZCjnIZnVUyeHZmis90YZFNhDtJJ0Mgl9lrxuNhpDwBVxZi4ioHp1Xq2dni0bNu0k2GPwkz8yn1GBqJDWj1FSWCFUwTDCxG+4DHUF8JUXgzBNNrws2u9sspB65k+4SW99s/RdgyNz4w0OsA/eUY6yr/05rtdn8r35S1n+PAjaSBU1rPGA4pA8pjCaJG6NbsS8zbuWDxmTk6jTMogZKxmOHYDsMdiAgEMxkOKYIcp69DEE60CUaMsLQ3twSRzbQBgnr2XprRtKBiCMOJFMWRaoJXcAxn5rCiG9XcRbK7T3RWOP/pyen+fqvgf5/XC3rv0eBb17eYTIbsJDaGs/+cV53dlGWjfsuG+O6zWh8qP43hYW1UQUQ7aqYHC9CYh4P6fx9eanU+3SGn6xiqBGX7CU5X656mKZ/i6uRv52T+rQa2QV3sWbaaQZdwkjwrZSQxqlBxqYiYrCQ0UHTfd4RoybdPlfadDHiz0+V5mYjI1ivmm4i9W6CuSSW9KW6BfdbwnS5tVFubSxmJvUL0oM4wkRXD2kgb70fcnojt0CKUCB3S5IrYG3EZ8pQqO1sZqkJ+JLig6H5C/CgMUaPmQjgsiA6jFW0247Hg0yg5WNXfVkkXdDR2GzKzmacPLH9pPr3sZ20TTQe3/4W05Abm+0UwzyxyuyiGLorRTgpULr3YgPOdFKrE+dT2y49dqMEg6eRo93YSZaIZfjKfbkcmnJf7jvbl3ssGEtt7Gv4PdJYt/93Um/M1P81iC9l/f8YsKFFWydAP7eeZuvVTX13uaf+6X9kEEQv8991/DJZnwvYswJrLhTYGEmtDSNqWKlNNCa5Jj3LayV55KDpvv+uh/IWc00W3G3q1DOUkqIdatVxQGG1nmJ8Yh6+Gj62W2/etfdLY43/n1VPT3P+XwP/rzaOy/3/R4G27boinU7njS1/XKIwu1P2nIauGHcZpU2MsQgKJy+8813B8zosZD62dBEz5JqMlcStc1egsRyDVMMD4GoyVhAuZJ/wAJk0E4+Kud09wHID95U+BAI4OrmgCYWIY3C46x8DpJJEVNABSwtBg8lbJMGtIU0B1
qTiTC8iEjF/SAXXkca6YYgcXr5pXb0i/2Cqx5QEJSnyVjF2cdU6JBcmYwKQkfedDkGcV613kLUYpvoUxDISdzycoMlkIF0yDuPBwOqiR32oaYPktgfM+lIIYNMKAnGSj0PkvN9nCmNUWqY4vaE+3uHuFBcwoZGTg/Ygc3dK9oGzHoPJjoEE7pIaOyO/Iv4x6KjpDY0Z6+bREXyLFTvk8ggHHkF+E3jeT5N0BXhVMBaK+2EQ6CHr6isZOVqKgpgKZRXJw+6dCYRvvO+BMbj5D8hziK/sxj08yU8DIzecZlnai0NyjTY0pDcMlQZKRSGFz7weE6zPQQ2QkbJUVn/InFE5wtgNhSqDJLIPsR4+KxEYgpq4R+1UmGGsPcgeY9AlEXEE82yNiesRrB9MWY2iln92HL9CcSYyJoG0hdItFXamRoyNEzXEQqD1g7bhOQXG6A+5YK/sKJgPzz5a8A1tXQb0ghz9GuPWBNdTT0mWVFQGw50LNAox8PAlL4dJc5Nu/Usb/B+KphEIr9DYwwkBtvG2GTpsCWpHLDep1iOSjRKLrIfEaDLD+Ql1bhROUl98/eH924u///KxdX3x4b1XmQPPa8EMOyFBb6hoiUpDH0MEP9qpmspySK5CRjWozNJ3j3iB9OMIptOJazDzYATsiDiXscPBDwx6P9TQwB75ADGtIGIqF72Bxd0qGFFNnRPmFm2Y4dsEIQn7wrWNZM6+CjyCUP9qXVy2frpskw9XKHnrkrxtt65/+djuLFBCThuFzbFm08OEJAndqXfjHSSM9Q9oAk3SSBvYJsiHZn5s+A2zQ2eiGCIGTYDN4k2vWKXNUMP3pTDKhyKLudFACB/Nh0q7VKSLBOqHK4vEbgMQW28cJkJlr1jm6OQ5tLaYPYoeC1Y3cLaQzj2aCITPITUWUfY0RObxWCqYZXQTnEpczNiXcch9bq09CTFIhAs7eKGNzXI82zecSHAJV4lraia3GNlojhf7zgmeZsE8urQsTfBs8t7qkLyXRNogGjEKSxlMxkzjNKJ67opwNI90i+RF5rGJ+Ck/ufbP9VOSe9h6no3Pbmpe5hoTX6bIkxc0KxFjOIdnwO3FIGRFfHB9Fhe+7Cniq0NUgqUUTPG+qItVQ4L9jQvfEIEx9OfrAdC91pg+BLFCHRcNPA5kgqFlN5vxAkYg8I8sraFpUmNzMHxAxXYhBOsJAp762REz/lHSYpBTg8taEiKvhwzsL9laRgNOuUoDBeoEvPS/eaOe7tsVhQXixF0HeRWNmMEZltNNEc97c9GxEa1z1X598fbiNaY1/764fv0zedO6bv3U6qyObdMY9x4XagjCr0jalAQJmqE9cG1tM7bexAbedAkEJ1K3ihu4C84XJw+BA2PgqRQam9w62Jeof7vQ2jJP21wHNO47G0yWGyO95P2dRbuINK57OH14TQqWECbzhD0uYJWgmXMVmmdA1ReO0EHSTZMJvIDoDArXMjPF4DphtkCQtd1McWQNNVugmXbeTPFMm2q2QJR130zxZH0126BJGnByWNyVdUja/2m9u7rMGavnvST58sitQeibOP1zy4vnkbwVpR+QFJMsDpBixF9OIV2ykJJdaNkmayzxCkvsOo6yJX7KIKRn7TZpXXY+TNXwEnMIO8ya9PM/v/iudjhKWAUf2//85eJj+137/XVnXzTWnv85qc7s/+C3cv/nMeAdHWGfinKJc6F4xDwzq5Jc7ZemiqTDkpBVLLEwJEUzewuuqNgPrOjYOrjzKjPhFgVK6+ZXENRSyfWKRDutDXguAjbX0CWkYpeYA8ySsJ/Avp8k5+RaxewAUX2CO5/hGXc08aHk3ZuaFzVqFA6JJZrOVwNLdJ0UBOttbEPKjynztNMtkTe5sExW61E2yaCRLTsiqBxCKzOmhZDrzr+CXEUV7cot+Ji32k2z6S7GiE3w3brbRLUkYZDfH+AmwNp3lKup7gvW9e8VtYwXH0rT6yh/A01nlB9P2/kX8ZXZlHCJphc1A
aTavg9dQuymJtN2oxp3nC3yQoAoRoSHkdfGaLuD55vsJUNC+zClvQc9F+li7mEehuwausvW/+IB//3kGOvyv2L//wn2/zfqtTL/ewy45+//zJ8azyciT+JQwZPpsk/61GynaTfiGleOrqBQza/+LaHkr2kG+YBHD8oO6D9IB/RTbsZ9ooe7n37751Zsfdv2z7L/c8ruU+z/dJaxz9Of6/K/WrXWaMzkf7Wzk7My/3sMuF/+9yQSvH2cGt1n4vhA8U1IyLzdC57mIIiaX+DfCD9dy4GmN0w17Q+lVfJXRrkvxdB4n3OvmSwBvv4snnpMhQt4IlkuCZ7LaZOx9m/cNIZQazttyL2O4+7ww0kbmQhZyNbyH8wqT+R+hydyN1LQol+DWFYCZ9zU98jNyjcDy1131u9S1la4dM6d8WjX1JXvw0154jmjUZ54XoulrPeXMlOeeC5PPJcnnssTz+WJ52+95VXueGXsPsUdrxJKKKGEEkoooYQSSiihhBJKKKGEEkr448D/AT019aAAeAAA {{!}} base64 -d {{!}} tar -xzpf -&lt;br /&gt;
      &lt;br /&gt;
      if [[ &amp;quot;`authselect current -r`&amp;quot; != &amp;quot;custom/rcs&amp;quot; ]] ; then&lt;br /&gt;
        authselect select custom/rcs --force&lt;br /&gt;
        systemctl restart sssd&lt;br /&gt;
      fi&lt;br /&gt;
      &lt;br /&gt;
      systemctl enable sssd&lt;br /&gt;
&lt;br /&gt;
runcmd:&lt;br /&gt;
  - /usr/bin/setup_uc_auth|style=max-height: 300px;overflow:scroll;}}&lt;br /&gt;
&lt;br /&gt;
==== Rocky Linux NFS server ====&lt;br /&gt;
CloudStack provides no shared storage between VMs, so running a local NFS server on one VM can be useful if you need to share data among several of them.{{Highlight|code=#cloud-config&lt;br /&gt;
&lt;br /&gt;
write_files:&lt;br /&gt;
  - path: /usr/bin/expand_lvm_root&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      /usr/bin/growpart /dev/vda 3&lt;br /&gt;
      /usr/sbin/pvresize -y -q /dev/vda3&lt;br /&gt;
      /usr/sbin/lvresize -y -q -r -l +50%FREE /dev/mapper/*root&lt;br /&gt;
      /usr/sbin/lvresize -y -q -r -l +100%FREE /dev/mapper/*var&lt;br /&gt;
      /usr/sbin/xfs_growfs /&lt;br /&gt;
      /usr/sbin/xfs_growfs /var&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
&lt;br /&gt;
  - path: /usr/bin/setup_nfs&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      yum install -y nfs-utils&lt;br /&gt;
&lt;br /&gt;
      mkdir /export&lt;br /&gt;
      if [ -b /dev/vdb ] ; then&lt;br /&gt;
        mkfs.xfs /dev/vdb&lt;br /&gt;
        echo &amp;quot;/dev/vdb  /export     xfs    defaults    1 2&amp;quot; &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
        mount -a&lt;br /&gt;
      fi&lt;br /&gt;
&lt;br /&gt;
      ip a {{!}} grep -w inet {{!}} awk &#039;{print $2}&#039; {{!}} while read subnet ; do&lt;br /&gt;
        echo &amp;quot;/export     $subnet(rw,no_subtree_check,no_root_squash,async)&amp;quot; &amp;gt;&amp;gt; /etc/exports&lt;br /&gt;
      done&lt;br /&gt;
&lt;br /&gt;
      systemctl start nfs-server&lt;br /&gt;
      systemctl enable nfs-server&lt;br /&gt;
     &lt;br /&gt;
      exportfs -ra&lt;br /&gt;
&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
&lt;br /&gt;
runcmd:&lt;br /&gt;
  - /usr/bin/expand_lvm_root&lt;br /&gt;
  - /usr/bin/setup_nfs|style=max-height: 300px;overflow:scroll;}}NFS clients connected to the same network as the NFS server can then mount &amp;lt;code&amp;gt;/export&amp;lt;/code&amp;gt; using a command similar to: &amp;lt;code&amp;gt;mount -t nfs nfs-server:/export /mnt&amp;lt;/code&amp;gt;.&lt;br /&gt;
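&lt;br /&gt;
To make the mount persist across reboots, a client can instead add an entry to &amp;lt;code&amp;gt;/etc/fstab&amp;lt;/code&amp;gt;. This is a sketch; the &amp;lt;code&amp;gt;nfs-server&amp;lt;/code&amp;gt; hostname and &amp;lt;code&amp;gt;/mnt/export&amp;lt;/code&amp;gt; mount point are examples to be adjusted for your setup.&lt;br /&gt;

```text
# /etc/fstab entry on the NFS client (hostname and mount point are examples)
# _netdev delays mounting until the network is up
nfs-server:/export  /mnt/export  nfs  defaults,_netdev  0 0
```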
&lt;br /&gt;
== Infrastructure tools ==&lt;br /&gt;
&lt;br /&gt;
=== Generating a CloudStack API key ===&lt;br /&gt;
You can generate a CloudStack API key to automate infrastructure deployment using Terraform or CloudMonkey. To create a new key pair, navigate to your profile page (top right) and click the &#039;Generate keys&#039; button.&lt;br /&gt;
[[File:CloudStack API Key.png|alt=CloudStack API Key|none|thumb|CloudStack API Key]]&lt;br /&gt;
&lt;br /&gt;
=== CloudMonkey ===&lt;br /&gt;
CloudMonkey is a utility that makes it easier to interact with the CloudStack API. This tool may be used to help automate VM actions (such as start/stop/reboot), or infrastructure tasks (such as creating/destroying VMs, networks, or firewall rules). &lt;br /&gt;
&lt;br /&gt;
To get started with CloudMonkey, refer to the following resources:&lt;br /&gt;
&lt;br /&gt;
* Download from: &amp;lt;nowiki&amp;gt;https://github.com/apache/cloudstack-cloudmonkey/releases/tag/6.1.0&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
* Documentation at: &amp;lt;nowiki&amp;gt;https://cwiki.apache.org/confluence/display/CLOUDSTACK/CloudStack+cloudmonkey+CLI&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
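&lt;br /&gt;
If you prefer to script against the CloudStack API directly instead of using CloudMonkey, each request must be signed with your secret key. The following is a minimal sketch of the CloudStack signing scheme (both keys below are placeholders, and the endpoint path shown is the CloudStack default, which may differ on our deployment): sort the query parameters by name, lowercase the query string, and append a base64-encoded HMAC-SHA1 of it as the &amp;lt;code&amp;gt;signature&amp;lt;/code&amp;gt; parameter.&lt;br /&gt;

```shell
# Sign a CloudStack API request by hand (sketch; keys are placeholders).
APIKEY="EXAMPLE-KEY"
SECRET="EXAMPLE-SECRET"

# Parameters must be sorted by name before signing; this string already is.
QUERY="apikey=${APIKEY}&amp;command=listVirtualMachines&amp;response=json"

# Lowercase the query string, HMAC-SHA1 it with the secret key, base64-encode.
SIGNATURE=$(printf '%s' "$QUERY" | tr 'A-Z' 'a-z' \
  | openssl dgst -sha1 -hmac "$SECRET" -binary | base64)

# The signature is then URL-encoded and appended to the request
# (endpoint path below is the CloudStack default):
echo "https://cloudstack.rcs.ucalgary.ca/client/api?${QUERY}"
```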
&lt;br /&gt;
=== Terraform Integration ===&lt;br /&gt;
Terraform allows you to define infrastructure as code and can be used in conjunction with CloudStack to configure your virtual machines and guest networks. Use the official CloudStack provider.&lt;br /&gt;
&lt;br /&gt;
The following is an example Terraform file for reference. Specify your CloudStack API keys in a separate &amp;lt;code&amp;gt;vars.tf&amp;lt;/code&amp;gt; file or via &amp;lt;code&amp;gt;TF_VAR_*&amp;lt;/code&amp;gt; environment variables rather than hard-coding them.&lt;br /&gt;
{{Highlight|code=# Configure the CloudStack Provider&lt;br /&gt;
terraform {&lt;br /&gt;
  required_providers {&lt;br /&gt;
    cloudstack = {&lt;br /&gt;
      source = &amp;quot;cloudstack/cloudstack&amp;quot;&lt;br /&gt;
      version = &amp;quot;0.4.0&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
provider &amp;quot;cloudstack&amp;quot; {&lt;br /&gt;
  api_url    = &amp;quot;${var.cloudstack_api_url}&amp;quot;&lt;br /&gt;
  api_key    = &amp;quot;${var.cloudstack_api_key}&amp;quot;&lt;br /&gt;
  secret_key = &amp;quot;${var.cloudstack_secret_key}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create a new VPC&lt;br /&gt;
resource &amp;quot;cloudstack_vpc&amp;quot; &amp;quot;default&amp;quot; {&lt;br /&gt;
  name = &amp;quot;wedgenet-vpc&amp;quot;&lt;br /&gt;
  display_text = &amp;quot;wedgenet-vpc&amp;quot;&lt;br /&gt;
  cidr = &amp;quot;100.64.0.0/20&amp;quot;&lt;br /&gt;
  vpc_offering = &amp;quot;Default VPC offering&amp;quot;&lt;br /&gt;
  zone = &amp;quot;zone1&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create a new ACL&lt;br /&gt;
resource &amp;quot;cloudstack_network_acl&amp;quot; &amp;quot;default&amp;quot; {&lt;br /&gt;
    name  = &amp;quot;vpc-acl&amp;quot;&lt;br /&gt;
    vpc_id = &amp;quot;${cloudstack_vpc.default.id}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# One ingress and one egress rule for the ACL&lt;br /&gt;
resource &amp;quot;cloudstack_network_acl_rule&amp;quot; &amp;quot;ingress&amp;quot; {&lt;br /&gt;
    acl_id = &amp;quot;${cloudstack_network_acl.default.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    rule {&lt;br /&gt;
        action       = &amp;quot;allow&amp;quot;&lt;br /&gt;
        cidr_list    = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
        protocol     = &amp;quot;tcp&amp;quot;&lt;br /&gt;
        ports        = [&amp;quot;22&amp;quot;, &amp;quot;80&amp;quot;, &amp;quot;443&amp;quot;]&lt;br /&gt;
        traffic_type = &amp;quot;ingress&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;cloudstack_network_acl_rule&amp;quot; &amp;quot;egress&amp;quot; {&lt;br /&gt;
    acl_id = &amp;quot;${cloudstack_network_acl.default.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    rule {&lt;br /&gt;
        action       = &amp;quot;allow&amp;quot;&lt;br /&gt;
        cidr_list    = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
        protocol     = &amp;quot;all&amp;quot;&lt;br /&gt;
        traffic_type = &amp;quot;egress&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Create a new network in the VPC&lt;br /&gt;
resource &amp;quot;cloudstack_network&amp;quot; &amp;quot;primary&amp;quot; {&lt;br /&gt;
    name = &amp;quot;primary&amp;quot;&lt;br /&gt;
    display_text = &amp;quot;primary&amp;quot;&lt;br /&gt;
    cidr = &amp;quot;100.64.1.0/24&amp;quot;&lt;br /&gt;
    network_offering = &amp;quot;DefaultIsolatedNetworkOfferingForVpcNetworks&amp;quot;&lt;br /&gt;
    acl_id = &amp;quot;${cloudstack_network_acl.default.id}&amp;quot;&lt;br /&gt;
    vpc_id = &amp;quot;${cloudstack_vpc.default.id}&amp;quot;&lt;br /&gt;
    zone = &amp;quot;zone1&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create a new public IP address for this network&lt;br /&gt;
resource &amp;quot;cloudstack_ipaddress&amp;quot; &amp;quot;public_ip&amp;quot; {&lt;br /&gt;
    vpc_id = &amp;quot;${cloudstack_vpc.default.id}&amp;quot;&lt;br /&gt;
    network_id = &amp;quot;${cloudstack_network.primary.id}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create VMs. &lt;br /&gt;
resource &amp;quot;cloudstack_instance&amp;quot; &amp;quot;vm&amp;quot; {&lt;br /&gt;
  count = 1&lt;br /&gt;
  name = &amp;quot;vm${count.index+1}&amp;quot;&lt;br /&gt;
  zone = &amp;quot;zone1&amp;quot;&lt;br /&gt;
  service_offering = &amp;quot;rcs.c4&amp;quot;&lt;br /&gt;
  template = &amp;quot;RockyLinux 8.5&amp;quot;&lt;br /&gt;
  network_id = &amp;quot;${cloudstack_network.primary.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  # Cloud Init data can be used to configure your VM on first startup if your template supports Cloud Init&lt;br /&gt;
  user_data = &amp;lt;&amp;lt;EOF&lt;br /&gt;
#cloud-config&lt;br /&gt;
&lt;br /&gt;
# Require specific packages&lt;br /&gt;
packages:&lt;br /&gt;
 - tmux&lt;br /&gt;
 - git&lt;br /&gt;
 - tcpdump&lt;br /&gt;
&lt;br /&gt;
EOF&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|style=max-height: 300px;overflow:scroll;}}&lt;br /&gt;
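&lt;br /&gt;
The API credentials referenced above can be declared in a separate &amp;lt;code&amp;gt;vars.tf&amp;lt;/code&amp;gt;, for example. This is a sketch; the default API URL shown is the standard CloudStack endpoint path under the management console address and may differ for your deployment.&lt;br /&gt;

```hcl
# vars.tf -- variable declarations for the CloudStack provider (sketch)
variable "cloudstack_api_url" {
  # Standard CloudStack API endpoint path; adjust if your deployment differs.
  default = "https://cloudstack.rcs.ucalgary.ca/client/api"
}

variable "cloudstack_api_key" {
  sensitive = true
}

variable "cloudstack_secret_key" {
  sensitive = true
}
```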
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
=== Cannot create a volume snapshot ===&lt;br /&gt;
Volume snapshots can only be taken on VMs that are powered off.&lt;br /&gt;
&lt;br /&gt;
=== Cannot create a VM snapshot ===&lt;br /&gt;
Disk-only VM snapshots cannot be taken when the VM is running. If you intend to snapshot a running system, you must also snapshot its memory.&lt;br /&gt;
&lt;br /&gt;
=== VM state is still running after shutdown ===&lt;br /&gt;
After running &#039;shutdown&#039; inside a VM, the VM state reported by CloudStack may still show as running. &lt;br /&gt;
&lt;br /&gt;
CloudStack does not detect power-state changes made from within the guest, so the reported state is not updated when a VM shuts itself down (likely a bug). If this happens, perform a force shutdown from the CloudStack management console.&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack_User_Guide&amp;diff=1853</id>
		<title>CloudStack User Guide</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack_User_Guide&amp;diff=1853"/>
		<updated>2022-06-02T17:25:54Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* Managing your virtual machine */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is a user&#039;s guide on using CloudStack provided by Research Computing Services.&lt;br /&gt;
&lt;br /&gt;
== Introduction==&lt;br /&gt;
Apache CloudStack is an Infrastructure as a Service (IaaS) platform that allows users to quickly spin up Linux and other non-Windows virtual machines. RCS provides this service to help researchers quickly set up and prototype short-term, research-related software on premises. CloudStack is not appropriate for workloads that depend on Windows. Services set up on CloudStack virtual machines can be accessed from the campus network, and also from the internet if required.&lt;br /&gt;
&lt;br /&gt;
Access to CloudStack can be requested via [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c ServiceNow].&lt;br /&gt;
&lt;br /&gt;
Please refer to our [[CloudStack End User Agreement]] for acceptable uses and requirements.&lt;br /&gt;
&lt;br /&gt;
== Using your virtual machine ==&lt;br /&gt;
You can run whatever operating system you wish (with the exception of Windows).  We cannot provide specific management advice for each and every operating system available, but we can offer some suggestions on important considerations to be aware of.&lt;br /&gt;
&lt;br /&gt;
=== Educate yourself ===&lt;br /&gt;
All operating systems (OSs) have user groups, web sites, wikis, or mailing lists somewhere on the internet, and these can be a valuable resource.  Most OS providers also have online documentation describing how to use their product.  For example, Rocky Linux, used by RCS, has a [https://docs.rockylinux.org/ documentation site].  These are excellent resources that can help you understand how to manage your virtual machine.&lt;br /&gt;
&lt;br /&gt;
=== Configure your VM&#039;s OS ===&lt;br /&gt;
It is critical to ensure that the only services running on your VM are the ones you actually need.  Each OS has a way of managing which services run (SysVinit, systemd, etc.).  Please ensure that unnecessary services have been disabled.&lt;br /&gt;
&lt;br /&gt;
Many OSs and applications ship with pre-configured accounts.  Make sure these are either disabled or not allowed to log in.&lt;br /&gt;
&lt;br /&gt;
All un-used accounts should be disabled or preferably deleted.&lt;br /&gt;
&lt;br /&gt;
All accounts should have strong [https://it.ucalgary.ca/it-security/passwords-do-i-have-change-them/ passwords].&lt;br /&gt;
&lt;br /&gt;
Many OSs can update themselves automatically.  If possible, please consider enabling this.&lt;br /&gt;
&lt;br /&gt;
Updates can also be configured to skip certain software if it will interfere with your research, but please be advised that doing so could place your system at risk.&lt;br /&gt;
&lt;br /&gt;
=== Exposed to the Internet ===&lt;br /&gt;
Not everyone is a computer security expert.  If your VM must be exposed to the internet, please consider using Trend Micro Cloud One Workload Security from IT security to enhance your security posture.&lt;br /&gt;
&lt;br /&gt;
== Accessing CloudStack ==&lt;br /&gt;
===Accessing the CloudStack management console===&lt;br /&gt;
&lt;br /&gt;
The CloudStack management console is a web-based portal that allows you to view and manage your cloud infrastructure, including virtual machines, storage, and networks. Any modern web browser, including Chrome, Firefox, Edge, and Safari, is supported. &lt;br /&gt;
&lt;br /&gt;
Accessing the CloudStack management console is possible only from an IT-managed computer or through the IT General VPN when working on unmanaged machines (e.g. AirUC) or off campus (e.g. at home). Please review the IT [https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=52a169d6dbe5bc506ad32637059619cd knowledge base article on connecting to the General VPN] or contact IT support if you need assistance connecting to the General VPN. &lt;br /&gt;
[[File:CloudStack VPN Connection.png|alt=CloudStack VPN Connection|none|thumb|CloudStack VPN Connection]]&lt;br /&gt;
=== Login to CloudStack===&lt;br /&gt;
&lt;br /&gt;
To log in to CloudStack, navigate to https://cloudstack.rcs.ucalgary.ca/. If this site fails to load, please make sure you are either on an IT-managed computer or connected to the General VPN.&lt;br /&gt;
&lt;br /&gt;
Sign in to CloudStack using the Single Sign-On option as shown in the image below. This method will require you to authenticate through our central authentication service using your University of Calgary IT credentials and will require multi-factor authentication. You must have multi-factor authentication set up either via your phone or with the Microsoft Authenticator app.&lt;br /&gt;
[[File:CloudStack Login Page.png|alt=CloudStack Login Page|none|thumb|CloudStack Login Page]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; Due to a bug with the UI, if the Single Sign-On option is disabled, please refresh the login page and try again. This issue should be addressed in our next update for CloudStack.&lt;br /&gt;
&lt;br /&gt;
=== CloudStack Dashboard===&lt;br /&gt;
&lt;br /&gt;
After logging in, you will be presented with your CloudStack management console. The dashboard shows you a general overview of your account&#039;s status.&lt;br /&gt;
[[File:CloudStack Dashboard.png|alt=CloudStack Dashboard|none|thumb|CloudStack Dashboard]]On the right-hand side of the dashboard, you will also see recent activity and events within your CloudStack account.&lt;br /&gt;
&lt;br /&gt;
If you wish to see your CloudStack account resource quota and allocation, navigate to: &amp;lt;code&amp;gt;Accounts -&amp;gt; Click on your account -&amp;gt; Resources&amp;lt;/code&amp;gt;. &lt;br /&gt;
[[File:CloudStack Resource Quota.png|alt=CloudStack Resource Quota|none|thumb|CloudStack Resource Quota]]&lt;br /&gt;
&lt;br /&gt;
== Working with virtual machines==&lt;br /&gt;
&lt;br /&gt;
CloudStack allows you to control the lifecycle of virtual machines within your cloud account. VMs may be started, stopped, rebooted, or destroyed within your management console.&lt;br /&gt;
&lt;br /&gt;
===Create a VM===&lt;br /&gt;
&lt;br /&gt;
To create a new VM, enter the CloudStack management console and navigate to:  &amp;lt;code&amp;gt;Compute -&amp;gt; Instances -&amp;gt; Add Instance&amp;lt;/code&amp;gt;[[File:CloudStack Instance Summary.png|alt=CloudStack Instance Summary|thumb|CloudStack Instance Summary|493x493px]]&lt;br /&gt;
&lt;br /&gt;
Virtual Machines require the following details:&lt;br /&gt;
&lt;br /&gt;
# &#039;&#039;&#039;Deployment zone&#039;&#039;&#039;. Your account will already be placed in the appropriate zone.&lt;br /&gt;
# &#039;&#039;&#039;Boot template or ISO&#039;&#039;&#039;. You may choose either a pre-created template or boot from a custom CD-ROM ISO file.&lt;br /&gt;
# &#039;&#039;&#039;Compute offering&#039;&#039;&#039;. You may select an appropriate size for your new VM. Resources will be counted against your account&#039;s quota.&lt;br /&gt;
# &#039;&#039;&#039;Data Disk&#039;&#039;&#039;. You may choose to add an additional virtual disk to your VM to store your data. Alternatively, if you wish to use a single virtual disk for your VM, you may choose to override the size of your root disk in step 2 and select &#039;No thanks&#039; in this step.&lt;br /&gt;
# &#039;&#039;&#039;Networks&#039;&#039;&#039;. You may choose one or more networks your VM should connect to. All CloudStack accounts come with a default network already created and ready to be used.&lt;br /&gt;
# &#039;&#039;&#039;SSH keypairs&#039;&#039;&#039;. For templates that support custom SSH key pairs, you may choose to use a custom SSH keypair to be installed as part of the deployment process.&lt;br /&gt;
# &#039;&#039;&#039;Advanced settings&#039;&#039;&#039;. For templates that support custom user-data (Cloud-Init), you may choose to enable the advanced settings and provide your own Cloud-Init user-data payload. More on this in the advanced tasks section below.&lt;br /&gt;
# &#039;&#039;&#039;Other VM details&#039;&#039;&#039;. You may give your new VM a friendly name and make it part of a group. Groups allow you to group related VMs together for better organization. You may change these details at a later time.&lt;br /&gt;
&lt;br /&gt;
When you are done, review the instance summary on the right hand side and then click on the &#039;Launch Virtual Machine&#039; button.&lt;br /&gt;
&lt;br /&gt;
=== Selecting your VM Operating system ===&lt;br /&gt;
Many OSs will provide various editions that are tailored to a specific use case.  A desktop VM may not be appropriate when you need to run a database server.  The OS provider will have guides on how to choose an edition.&lt;br /&gt;
&lt;br /&gt;
You may choose to install the operating system to your virtual machine using either pre-built templates or from scratch using an ISO image.&lt;br /&gt;
&lt;br /&gt;
====Install from a virtual machine template====&lt;br /&gt;
&lt;br /&gt;
We provide a Rocky Linux 8.5 and an Ubuntu Server 22.04 LTS template for your convenience. These templates are pre-built images with the operating system installed and ready for use. Our templates also support further automated setup via Cloud-Init configuration data that can be provided when deploying a new VM. Currently, we offer the following templates: &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Template&lt;br /&gt;
!Cloud-Init Support&lt;br /&gt;
!Password Support&lt;br /&gt;
!Default Username&lt;br /&gt;
|-&lt;br /&gt;
|Rocky Linux 8.5&lt;br /&gt;
|Yes&lt;br /&gt;
|Yes&lt;br /&gt;
|rocky&lt;br /&gt;
|-&lt;br /&gt;
|Ubuntu Server 22.04&lt;br /&gt;
|Yes&lt;br /&gt;
|Yes&lt;br /&gt;
|ubuntu&lt;br /&gt;
|}&lt;br /&gt;
Rocky Linux is an open source Linux distribution that is binary-compatible with Red Hat Enterprise Linux and is what RCS recommends.&lt;br /&gt;
&lt;br /&gt;
For templates that support passwords, the generated password that appears after a VM is created is applied to the default username.&lt;br /&gt;
&lt;br /&gt;
Security note: All VM templates are configured with SSH password authentication enabled, so you can SSH to your VM from another system connected to the same guest network. Do not expose port 22 unless required, and we highly recommend using key-based authentication.&lt;br /&gt;
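&lt;br /&gt;
Once your key is installed, password logins can be turned off in &amp;lt;code&amp;gt;sshd_config&amp;lt;/code&amp;gt;. The following is a minimal sketch; the helper function name is ours, not part of any template.&lt;br /&gt;

```shell
# Sketch: disable SSH password authentication in an sshd_config-style file.
disable_ssh_passwords() {
  # Replace any (possibly commented-out) PasswordAuthentication line.
  sed -i 's/^#\?PasswordAuthentication .*/PasswordAuthentication no/' "$1"
}

# On a real VM you would run (and then restart sshd):
#   disable_ssh_passwords /etc/ssh/sshd_config
#   systemctl restart sshd
```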
&lt;br /&gt;
===== Virtual machine credentials =====&lt;br /&gt;
VM templates that support passwords will have a randomly generated password set when the VM is first created or when a password reset is requested (available only while the VM is powered off). The randomly generated 6-character password is displayed when a new password is set and appears as a notification in your CloudStack management console. &lt;br /&gt;
[[File:CloudStack VM Password.png|alt=CloudStack VM Password|none|thumb|CloudStack VM Password]]&lt;br /&gt;
This password is set on the default username for your template. For example, the Rocky Linux VM template sets this password on the &#039;&#039;&#039;rocky&#039;&#039;&#039; user account. You may become the super user by logging in as the &amp;lt;code&amp;gt;rocky&amp;lt;/code&amp;gt; user and then running &amp;lt;code&amp;gt;sudo su&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note: If you specify a custom Cloud-Init config that creates additional users or sets account passwords, the displayed password will be overridden and have no effect.&lt;br /&gt;
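&lt;br /&gt;
For example, a user-data payload like the following (a sketch with a placeholder password) sets its own password on the default account, so the password CloudStack displays would no longer be valid:&lt;br /&gt;

```yaml
#cloud-config
# Setting a password via Cloud-Init overrides the CloudStack-generated one.
chpasswd:
  expire: false
  list: |
    rocky:ExamplePassword-ChangeMe
```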
&lt;br /&gt;
==== Install from an ISO image====&lt;br /&gt;
We provide various ISO images for popular Linux distributions. You may select one of these ISO images instead of using a pre-built template when deploying a new virtual machine. We currently provide:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Distribution&lt;br /&gt;
!ISO&lt;br /&gt;
|-&lt;br /&gt;
|Ubuntu 20.04&lt;br /&gt;
|ubuntu-20.04.4-desktop-amd64.iso&lt;br /&gt;
&lt;br /&gt;
ubuntu-20.04.4-live-server-amd64.iso&lt;br /&gt;
|-&lt;br /&gt;
|Ubuntu 21.10&lt;br /&gt;
|ubuntu-21.10-desktop-amd64.iso&lt;br /&gt;
&lt;br /&gt;
ubuntu-21.10-live-server-amd64.iso&lt;br /&gt;
|-&lt;br /&gt;
|Ubuntu 22.04&lt;br /&gt;
|ubuntu-22.04-live-server-amd64.iso&lt;br /&gt;
|-&lt;br /&gt;
|Rocky Linux 8.5&lt;br /&gt;
|Rocky-8.5-x86_64-minimal.iso&lt;br /&gt;
|-&lt;br /&gt;
|Fedora 35&lt;br /&gt;
|Fedora-Workstation-Live-x86_64-35-1.2.iso&lt;br /&gt;
|}&lt;br /&gt;
You may add a custom ISO file to your CloudStack account either by uploading the ISO directly through the web console or by providing a URL to an ISO file on the internet.&lt;br /&gt;
&lt;br /&gt;
Please do not install Windows on our CloudStack infrastructure. It is against our user agreement to run Windows based systems in this infrastructure. If you need a Windows VM, please contact us for alternative solutions.&lt;br /&gt;
&lt;br /&gt;
=====Register an ISO with a URL=====&lt;br /&gt;
[[File:CloudStack Download ISO.png|alt=CloudStack Download ISO|thumb|CloudStack Download ISO|190x190px]]&lt;br /&gt;
&lt;br /&gt;
To add a custom ISO file from the internet, enter the CloudStack management console and navigate to:  &amp;lt;code&amp;gt;Images -&amp;gt; ISOs -&amp;gt; Register ISO&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You may check the state of an ISO file by clicking on it. If the file downloaded successfully, its ready state becomes ‘true’, and the ISO will then appear in the selection list.&lt;br /&gt;
[[File:CloudStack ISO Ready.png|alt=CloudStack ISO Ready|none|thumb|172x172px|CloudStack ISO Ready]]&lt;br /&gt;
&lt;br /&gt;
=====Upload a custom ISO=====&lt;br /&gt;
&lt;br /&gt;
To upload an ISO file, enter the CloudStack management console and navigate to:  &amp;lt;code&amp;gt;Images -&amp;gt; ISOs -&amp;gt; Upload ISO from Local (icon)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:CloudStack Upload ISO.png|alt=CloudStack Upload ISO|none|thumb|CloudStack Upload ISO|217x217px]]&lt;br /&gt;
&lt;br /&gt;
===Connecting to your VM console===&lt;br /&gt;
The CloudStack management console has a KVM (keyboard, video, mouse) feature built-in, allowing you to remotely connect to and interact with your virtual machine. To connect to your virtual machine&#039;s console, navigate to: &amp;lt;code&amp;gt;Compute -&amp;gt; Instances -&amp;gt; Your Instance -&amp;gt; View console&amp;lt;/code&amp;gt;.&lt;br /&gt;
[[File:CloudStack View Console.png|alt=CloudStack View Console|none|thumb|CloudStack View Console]]&lt;br /&gt;
&lt;br /&gt;
=== Expanding a VM disk ===&lt;br /&gt;
[[File:CloudStack Expand Volume.png|alt=CloudStack Expand Volume|thumb|CloudStack Expand Volume]]&lt;br /&gt;
Virtual machine disks can be expanded after they are created within CloudStack. However, you will need to expand the partitions and filesystems manually.&lt;br /&gt;
&lt;br /&gt;
To grow an existing disk:&lt;br /&gt;
&lt;br /&gt;
# Go into your VM details page and click on ‘Volumes’.&lt;br /&gt;
# Click on the volume you wish to expand.&lt;br /&gt;
# Click on the ‘Resize Volume’ icon in the top right.&lt;br /&gt;
Once the volume has been expanded, you can verify that the disk has grown with &amp;lt;code&amp;gt;lsblk&amp;lt;/code&amp;gt;; the kernel will also log messages when this occurs. However, you will still need to expand any partitions, volumes, and filesystems on your system manually.&lt;br /&gt;
&lt;br /&gt;
To expand your partition, use the &amp;lt;code&amp;gt;growpart&amp;lt;/code&amp;gt; command followed by your disk device and partition number. Eg: &amp;lt;code&amp;gt;/usr/bin/growpart /dev/vda 3&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For LVM volume sets, you can expand the volume using the &amp;lt;code&amp;gt;pvresize&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;lvresize&amp;lt;/code&amp;gt; commands:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/sbin/pvresize -y -q /dev/vda3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/sbin/lvresize -y -q -r -l +100%FREE /dev/mapper/&amp;lt;volume-name&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To expand your filesystem:&lt;br /&gt;
&lt;br /&gt;
* XFS: &amp;lt;code&amp;gt;/usr/sbin/xfs_growfs &amp;lt;volume&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
* EXT: &amp;lt;code&amp;gt;resize2fs &amp;lt;volume&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Destroying a VM ===&lt;br /&gt;
If you need to delete a VM, click on the red garbage bin icon in the VM instance page. All deletions are irreversible, so please make sure you have a copy of any data you need before proceeding.&lt;br /&gt;
[[File:CloudStack Delete VM.png|alt=CloudStack Delete VM|none|thumb|CloudStack Delete VM]]&lt;br /&gt;
The VM root volume can be deleted immediately by enabling the &#039;Expunge&#039; option in the dialog box. If left disabled, the VM root volume will linger for a day before it is deleted by the system. You may wish to expunge a volume if you are running low on space or volume quota.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Virtual machine networking ==&lt;br /&gt;
The CloudStack platform allows you to define custom virtual private cloud (VPC) networks, each of which can contain any number of guest networks for your virtual machines to connect to. Each guest network has its own private network address space and is not directly routable from campus or the internet. For a virtual machine to access the internet, the VPC or guest network it is connected to must have a NAT IP address associated with it. The following diagram shows how a guest network connects to the internet and the campus network.&lt;br /&gt;
[[File:CloudStack Guest Networking.png|alt=CloudStack Guest Networking|none|thumb|CloudStack Guest Networking]]&lt;br /&gt;
In order to expose a virtual machine&#039;s services to campus or the internet, the appropriate port forwardings must be set up on the VPC containing the guest network. More on this will be discussed in the next section.&lt;br /&gt;
&lt;br /&gt;
Having multiple guest networks allows for more advanced network setups but is not required. We recommend using a single flat network for most workloads. &lt;br /&gt;
&lt;br /&gt;
By default, all CloudStack accounts come with a default VPC and guest network set up with a NAT IP assigned.&lt;br /&gt;
&lt;br /&gt;
=== IP addresses ===&lt;br /&gt;
Due to design decisions made during the setup of the CloudStack platform, only internal 10.44.12X.X IPs can be assigned to your VPC. These IP addresses are accessible from the university campus network; however, a special range of IP addresses can also be accessed from the internet.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!IP address range&lt;br /&gt;
!Accessible from&lt;br /&gt;
!Internet IP mapping&lt;br /&gt;
|-&lt;br /&gt;
|10.44.120.3-128&lt;br /&gt;
|Campus, Internet&lt;br /&gt;
|10.44.120.X maps to 136.159.140.X (ports 80 and 443 only)&lt;br /&gt;
|-&lt;br /&gt;
|10.44.120.129-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|10.44.121.0-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|10.44.122.0-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|10.44.123.0-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|}&lt;br /&gt;
If you need a service exposed to the internet, please request a public IP address using our [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c ServiceNow request form]. Additionally, if your service does not run on port 80 or 443, you must also submit a firewall change request to allow the additional port through. &lt;br /&gt;
&lt;br /&gt;
=== Exposing a network service to campus ===&lt;br /&gt;
To make a virtual machine visible to the campus network, you must first set up a port forwarding rule from a campus IP address to your virtual machine.&lt;br /&gt;
&lt;br /&gt;
To create a port forwarding rule, navigate to &amp;lt;code&amp;gt;Network -&amp;gt; VPC -&amp;gt; Select your VPC -&amp;gt; Public IP Addresses&amp;lt;/code&amp;gt;. If you do not have any available IP addresses, click on &#039;Acquire New IP&#039; and select an available IP address. Click on the IP address you wish to forward from, navigate to the &#039;Port Forwarding&#039; tab, and enter the private port range, the public port range, the protocol, and the target VM. &lt;br /&gt;
&lt;br /&gt;
For example, to port forward only HTTP (tcp/80) traffic, you would enter the following:&lt;br /&gt;
[[File:CloudStack Port Forwarding.png|alt=CloudStack Port Forwarding|none|thumb|CloudStack Port Forwarding]]Once the port forwarding is created, you should be able to access the service from on campus. If for some reason access to your service does not work, there may be a firewall restriction on IT&#039;s network. In such circumstances, please contact us for assistance.&lt;br /&gt;
&lt;br /&gt;
=== Exposing a network to the internet ===&lt;br /&gt;
Exposing a service to the internet works the same way as exposing it to campus. However, you must create the port forwarding rule on an IP address that maps to an internet IP address, as outlined in the IP address table above. If your account does not have one of these IP addresses available, please request one using the [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c ServiceNow request form].&lt;br /&gt;
&lt;br /&gt;
By default, only ports 80 and 443 are allowed through the Internet IP address. For all other ports, please [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=47cd16d113153a00b5b4ff82e144b0bf create a firewall rule change request in ServiceNow].&lt;br /&gt;
&lt;br /&gt;
== Cloud-Init Automation ==&lt;br /&gt;
&lt;br /&gt;
=== Ubuntu ===&lt;br /&gt;
The following Cloud-Init configs apply to Ubuntu VM templates.&lt;br /&gt;
&lt;br /&gt;
==== Ubuntu desktop ====&lt;br /&gt;
Use the following Cloud-Init config with the Ubuntu Server template to set up an Ubuntu desktop environment. The setup takes up to 15 minutes to complete, and you should see a login screen when it finishes. &lt;br /&gt;
&lt;br /&gt;
Adjust the root and user passwords as desired. Below, the test user password is set to blank, allowing you to log in to GNOME without a password.&lt;br /&gt;
{{Highlight|code=#cloud-config&lt;br /&gt;
disable_root: false&lt;br /&gt;
&lt;br /&gt;
users:&lt;br /&gt;
  - name: user&lt;br /&gt;
    lock_passwd: false&lt;br /&gt;
    inactive: false&lt;br /&gt;
    gecos: Test User&lt;br /&gt;
    primary_group: user&lt;br /&gt;
    groups: wheel&lt;br /&gt;
    passwd: $1$ADUODeAy$eCJ1lPSxhSGmSvrmWxjLC1&lt;br /&gt;
      &lt;br /&gt;
chpasswd:&lt;br /&gt;
  list: {{!}}&lt;br /&gt;
    root:password&lt;br /&gt;
  expire: false&lt;br /&gt;
    &lt;br /&gt;
# Install a graphical desktop&lt;br /&gt;
runcmd:&lt;br /&gt;
  - DEBIAN_FRONTEND=noninteractive apt -y upgrade&lt;br /&gt;
  - DEBIAN_FRONTEND=noninteractive apt -y install tasksel&lt;br /&gt;
  - tasksel install gnome-desktop&lt;br /&gt;
  - systemctl set-default graphical.target&lt;br /&gt;
  - systemctl isolate graphical.target}}&lt;br /&gt;
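The &amp;lt;code&amp;gt;passwd&amp;lt;/code&amp;gt; field in the config above takes a crypt-style hash (the example shown is an MD5-crypt string). A hash of your own, e.g. the stronger SHA-512 variant, can be generated with &amp;lt;code&amp;gt;openssl&amp;lt;/code&amp;gt;; the password and salt below are placeholders, so substitute your own:&lt;br /&gt;

```shell
# Generate a SHA-512 crypt hash for the cloud-init 'passwd' field.
# 'examplepass' and 'examplesalt' are placeholders -- choose your own.
openssl passwd -6 -salt examplesalt 'examplepass'
# The output starts with $6$examplesalt$ and can be pasted into the config.
```

Paste the resulting string into the &amp;lt;code&amp;gt;passwd&amp;lt;/code&amp;gt; field in place of the example hash.&lt;br /&gt;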
&lt;br /&gt;
=== Rocky Linux ===&lt;br /&gt;
The following Cloud-Init configs apply to Rocky Linux templates.&lt;br /&gt;
&lt;br /&gt;
==== Rocky Linux Desktop ====&lt;br /&gt;
Use the following Cloud-Init config with the Rocky Linux template to set up a Rocky Linux desktop environment. The setup takes up to 10 minutes to complete, and you should see a login screen when it finishes. &lt;br /&gt;
&lt;br /&gt;
Adjust the root and user passwords as desired. Below, the test user password is set to blank, allowing you to log in to GNOME without a password.&lt;br /&gt;
{{Highlight|code=#cloud-config&lt;br /&gt;
disable_root: false&lt;br /&gt;
&lt;br /&gt;
users:&lt;br /&gt;
  - name: user&lt;br /&gt;
    lock_passwd: false&lt;br /&gt;
    inactive: false&lt;br /&gt;
    gecos: Test User&lt;br /&gt;
    primary_group: user&lt;br /&gt;
    groups: wheel&lt;br /&gt;
    passwd: $1$ADUODeAy$eCJ1lPSxhSGmSvrmWxjLC1&lt;br /&gt;
      &lt;br /&gt;
chpasswd:&lt;br /&gt;
  list: {{!}}&lt;br /&gt;
    root:password&lt;br /&gt;
  expire: false&lt;br /&gt;
    &lt;br /&gt;
# Install a graphical desktop&lt;br /&gt;
runcmd:&lt;br /&gt;
  - yum -y install &amp;quot;@Workstation&amp;quot;&lt;br /&gt;
  - systemctl set-default graphical.target&lt;br /&gt;
  - systemctl isolate graphical.target}}&lt;br /&gt;
&lt;br /&gt;
==== Rocky Linux Docker host ====&lt;br /&gt;
Use the following Cloud-Init config with the Rocky Linux template to set up a new Docker host. This server can then be used to run Docker containers. Also included are:&lt;br /&gt;
&lt;br /&gt;
# The docker-compose utility to help deploy container stacks more easily&lt;br /&gt;
# A helper script to expand the &amp;lt;code&amp;gt;/var&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; filesystems on first startup based on the available space in the ROOT volume. &lt;br /&gt;
&lt;br /&gt;
Use the CloudStack generated password with the &#039;&amp;lt;code&amp;gt;rocky&amp;lt;/code&amp;gt;&#039; default user account to log in.{{Highlight|code=#cloud-config&lt;br /&gt;
&lt;br /&gt;
write_files:&lt;br /&gt;
  - path: /usr/bin/expand_lvm_root&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      /usr/bin/growpart /dev/vda 3&lt;br /&gt;
      /usr/sbin/pvresize -y -q /dev/vda3&lt;br /&gt;
      /usr/sbin/lvresize -y -q -r -l +50%FREE /dev/mapper/*root&lt;br /&gt;
      /usr/sbin/lvresize -y -q -r -l +100%FREE /dev/mapper/*var&lt;br /&gt;
      /usr/sbin/xfs_growfs /&lt;br /&gt;
      /usr/sbin/xfs_growfs /var&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
&lt;br /&gt;
  - path: /usr/bin/setup_docker&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      yum install -y yum-utils&lt;br /&gt;
      yum-config-manager \&lt;br /&gt;
         --add-repo \&lt;br /&gt;
         https://download.docker.com/linux/centos/docker-ce.repo&lt;br /&gt;
      yum install -y docker-ce docker-ce-cli containerd.io&lt;br /&gt;
      systemctl start docker&lt;br /&gt;
      systemctl enable docker&lt;br /&gt;
      &lt;br /&gt;
      curl -L &amp;quot;https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)&amp;quot; -o /usr/local/bin/docker-compose&lt;br /&gt;
      chmod +x /usr/local/bin/docker-compose&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
&lt;br /&gt;
  - path: /root/docker-compose.yml&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      version: &#039;3.3&#039;&lt;br /&gt;
      services:&lt;br /&gt;
        web:&lt;br /&gt;
          image: php:7.4-apache&lt;br /&gt;
          restart: always&lt;br /&gt;
          user: &amp;quot;0:0&amp;quot;&lt;br /&gt;
          volumes:&lt;br /&gt;
            - /var/www/html:/var/www/html&lt;br /&gt;
          ports:&lt;br /&gt;
            - &amp;quot;80:80&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  - path: /var/www/html/index.php&lt;br /&gt;
    permissions: &#039;0644&#039;&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      &amp;lt;h1&amp;gt;Hello there!&amp;lt;/h1&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;I see you from &amp;lt;?php echo $_SERVER[&#039;REMOTE_ADDR&#039;]; ?&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;&amp;lt;nowiki&amp;gt;pre&amp;lt;/nowiki&amp;gt;&amp;gt;&amp;lt;?php print_r($_SERVER); ?&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
    &lt;br /&gt;
# Ensure VM has the largest / possible&lt;br /&gt;
runcmd:&lt;br /&gt;
  - /usr/bin/expand_lvm_root&lt;br /&gt;
  - /usr/bin/setup_docker&lt;br /&gt;
  - cd /root; docker-compose up -d}}&lt;br /&gt;
&lt;br /&gt;
==== UC Authentication ====&lt;br /&gt;
Use the following Cloud-Init config to enable UC-based authentication. Local accounts with the same username as a UCalgary IT account can then log in with their IT credentials.&lt;br /&gt;
{{Highlight|code=#cloud-config&lt;br /&gt;
&lt;br /&gt;
write_files:&lt;br /&gt;
  - path: /usr/bin/setup_uc_auth&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      yum install -y sssd sssd-dbus sssd-krb5 krb5-workstation authselect-compat&lt;br /&gt;
      &lt;br /&gt;
      cat &amp;lt;&amp;lt;EOF &amp;gt; /etc/sssd/sssd.conf&lt;br /&gt;
      [sssd]&lt;br /&gt;
      config_file_version = 2&lt;br /&gt;
      services = nss, pam, ifp&lt;br /&gt;
      domains = uc.ucalgary.ca&lt;br /&gt;
      &lt;br /&gt;
      [domain/uc.ucalgary.ca]&lt;br /&gt;
      id_provider = files&lt;br /&gt;
      debug_level = 5&lt;br /&gt;
      auth_provider = krb5&lt;br /&gt;
      chpass_provider = krb5&lt;br /&gt;
      &lt;br /&gt;
      krb5_realm = UC.UCALGARY.CA&lt;br /&gt;
      krb5_server = ITSODCSRV14.UC.UCALGARY.CA:88&lt;br /&gt;
      krb5_validate = false&lt;br /&gt;
      EOF&lt;br /&gt;
      chmod 600 /etc/sssd/sssd.conf&lt;br /&gt;
      &lt;br /&gt;
      cat &amp;lt;&amp;lt;EOF &amp;gt; /etc/krb5.conf&lt;br /&gt;
      [logging]&lt;br /&gt;
       default = FILE:/var/log/krb5libs.log&lt;br /&gt;
       kdc = FILE:/var/log/krb5kdc.log&lt;br /&gt;
       admin_server = FILE:/var/log/kadmind.log&lt;br /&gt;
      &lt;br /&gt;
      [libdefaults]&lt;br /&gt;
       default_realm = UC.UCALGARY.CA&lt;br /&gt;
       dns_lookup_realm = false&lt;br /&gt;
       dns_lookup_kdc = false&lt;br /&gt;
       ticket_lifetime = 24h&lt;br /&gt;
       renew_lifetime = 7d&lt;br /&gt;
       forwardable = true&lt;br /&gt;
      &lt;br /&gt;
      [realms]&lt;br /&gt;
       UC.UCALGARY.CA = {&lt;br /&gt;
        kdc = itsodcsrv14.uc.ucalgary.ca&lt;br /&gt;
       }&lt;br /&gt;
      &lt;br /&gt;
      [domain_realm]&lt;br /&gt;
       uc.ucalgary.ca = UC.UCALGARY.CA&lt;br /&gt;
       .uc.ucalgary.ca = UC.UCALGARY.CA&lt;br /&gt;
      EOF&lt;br /&gt;
      &lt;br /&gt;
      mkdir -p /etc/authselect/custom/rcs&lt;br /&gt;
      cd /etc/authselect/custom/rcs&lt;br /&gt;
      echo H4sIAMYsbGIAA+0ca3PbuDGf+StQ34dLMpEtybLdUevO6BKl56mTuJGv7Uwmo4FISMKJBHQAaEfNub+9uwBJkXrLshxnjjuTyCKJfWF3sQsuFPhS9CtB79keoQpw2mjYT4DZz1r1uPGs1qhX6yf12vFx/Vm1Vqs1jp+R6j6ZSiHWhipCnikpzarn1t3/TuErj8bhhBzccjOs6Igq41MVHBDen71WUey3mCsWHNx5W4wKpT+qSAGjI3lDQxj8SarB0UDIiB2FcsBFRfuKMfHZY4L2QpYbS2MzZMJwnxouxfnXefQHTaNi9nufhprdpQj6XAyYGisuzHIUuYcWIhlTrW/lRkxMNdO0OH5HdHdeXlDNjAGCuhJQ0IM4GjPFx0Om4PGjDM9n8pULP4wDtpkik78q1Lec/WifcMr8kSyE7fB/a9ssYf8Q2PiPc6/3RmNN/K826mez8f/07LSM/48BR4uD8dGaWOytGbc8BK8buSTu5odtFEyPitGxDH2LYXai9kFjjf8fn4HPT/3/GPy/fnZ6Uvr/Y8BXiP/gTHHOL/KZ0Z2HRpGmD2meszi5mIExjbpM3BxquRuOgIkJICE/kE7qsKQYGwjXRbSrfD2Xxe7EVp/yEOMGsjZWzOLSPASuCHJ83iCxwPtdwyN2XqtXqwvYSpHMMKPjfp/7HHFtyIydrWBnXeeFQkT4fak4ZImulwh134n3POr7Mk50sTUWGMy0RoE2hnmRAJHDgzLtwk1ewffmJq/gXZiJBf9iTSaHZGvTA0ZoGGumdsakY1AxC7q8j9qJeUD+SmqwPhAQi5kC7k8B69M4NOc9GhA7TutzOSLIB4g1EvJWnHPIFhT77HBbE9hNXZBmRNxYi0yTlHvhyexaA9cYvhDkGCMZDTfGMmITLiw7wMGNHLECuq2ZCjnIZnVUyeHZmis90YZFNhDtJJ0Mgl9lrxuNhpDwBVxZi4ioHp1Xq2dni0bNu0k2GPwkz8yn1GBqJDWj1FSWCFUwTDCxG+4DHUF8JUXgzBNNrws2u9sspB65k+4SW99s/RdgyNz4w0OsA/eUY6yr/05rtdn8r35S1n+PAjaSBU1rPGA4pA8pjCaJG6NbsS8zbuWDxmTk6jTMogZKxmOHYDsMdiAgEMxkOKYIcp69DEE60CUaMsLQ3twSRzbQBgnr2XprRtKBiCMOJFMWRaoJXcAxn5rCiG9XcRbK7T3RWOP/pyen+fqvgf5/XC3rv0eBb17eYTIbsJDaGs/+cV53dlGWjfsuG+O6zWh8qP43hYW1UQUQ7aqYHC9CYh4P6fx9eanU+3SGn6xiqBGX7CU5X656mKZ/i6uRv52T+rQa2QV3sWbaaQZdwkjwrZSQxqlBxqYiYrCQ0UHTfd4RoybdPlfadDHiz0+V5mYjI1ivmm4i9W6CuSSW9KW6BfdbwnS5tVFubSxmJvUL0oM4wkRXD2kgb70fcnojt0CKUCB3S5IrYG3EZ8pQqO1sZqkJ+JLig6H5C/CgMUaPmQjgsiA6jFW0247Hg0yg5WNXfVkkXdDR2GzKzmacPLH9pPr3sZ20TTQe3/4W05Abm+0UwzyxyuyiGLorRTgpULr3YgPOdFKrE+dT2y49dqMEg6eRo93YSZaIZfjKfbkcmnJf7jvbl3ssGEtt7Gv4PdJYt/93Um/M1P81iC9l/f8YsKFFWydAP7eeZuvVTX13uaf+6X9kEEQv8991/DJZnwvYswJrLhTYGEmtDSNqWKlNNCa5Jj3LayV55KDpvv+uh/IWc00W3G3q1DOUkqIdatVxQGG1nmJ8Yh6+Gj62W2/etfdLY43/n1VPT3P+XwP/rzaOy/3/R4G27boinU7njS1/XKIwu1P2nIauGHcZpU2MsQgKJy+8813B8zosZD62dBEz5JqMlcStc1egsRyDVMMD4GoyVhAuZJ/wAJk0E4+Kud09wHID95U+BAI4OrmgCYWIY3C46x8DpJJEVNABSwtBg8lbJMGtIU0B1
qTiTC8iEjF/SAXXkca6YYgcXr5pXb0i/2Cqx5QEJSnyVjF2cdU6JBcmYwKQkfedDkGcV613kLUYpvoUxDISdzycoMlkIF0yDuPBwOqiR32oaYPktgfM+lIIYNMKAnGSj0PkvN9nCmNUWqY4vaE+3uHuFBcwoZGTg/Ygc3dK9oGzHoPJjoEE7pIaOyO/Iv4x6KjpDY0Z6+bREXyLFTvk8ggHHkF+E3jeT5N0BXhVMBaK+2EQ6CHr6isZOVqKgpgKZRXJw+6dCYRvvO+BMbj5D8hziK/sxj08yU8DIzecZlnai0NyjTY0pDcMlQZKRSGFz7weE6zPQQ2QkbJUVn/InFE5wtgNhSqDJLIPsR4+KxEYgpq4R+1UmGGsPcgeY9AlEXEE82yNiesRrB9MWY2iln92HL9CcSYyJoG0hdItFXamRoyNEzXEQqD1g7bhOQXG6A+5YK/sKJgPzz5a8A1tXQb0ghz9GuPWBNdTT0mWVFQGw50LNAox8PAlL4dJc5Nu/Usb/B+KphEIr9DYwwkBtvG2GTpsCWpHLDep1iOSjRKLrIfEaDLD+Ql1bhROUl98/eH924u///KxdX3x4b1XmQPPa8EMOyFBb6hoiUpDH0MEP9qpmspySK5CRjWozNJ3j3iB9OMIptOJazDzYATsiDiXscPBDwx6P9TQwB75ADGtIGIqF72Bxd0qGFFNnRPmFm2Y4dsEIQn7wrWNZM6+CjyCUP9qXVy2frpskw9XKHnrkrxtt65/+djuLFBCThuFzbFm08OEJAndqXfjHSSM9Q9oAk3SSBvYJsiHZn5s+A2zQ2eiGCIGTYDN4k2vWKXNUMP3pTDKhyKLudFACB/Nh0q7VKSLBOqHK4vEbgMQW28cJkJlr1jm6OQ5tLaYPYoeC1Y3cLaQzj2aCITPITUWUfY0RObxWCqYZXQTnEpczNiXcch9bq09CTFIhAs7eKGNzXI82zecSHAJV4lraia3GNlojhf7zgmeZsE8urQsTfBs8t7qkLyXRNogGjEKSxlMxkzjNKJ67opwNI90i+RF5rGJ+Ck/ufbP9VOSe9h6no3Pbmpe5hoTX6bIkxc0KxFjOIdnwO3FIGRFfHB9Fhe+7Cniq0NUgqUUTPG+qItVQ4L9jQvfEIEx9OfrAdC91pg+BLFCHRcNPA5kgqFlN5vxAkYg8I8sraFpUmNzMHxAxXYhBOsJAp762REz/lHSYpBTg8taEiKvhwzsL9laRgNOuUoDBeoEvPS/eaOe7tsVhQXixF0HeRWNmMEZltNNEc97c9GxEa1z1X598fbiNaY1/764fv0zedO6bv3U6qyObdMY9x4XagjCr0jalAQJmqE9cG1tM7bexAbedAkEJ1K3ihu4C84XJw+BA2PgqRQam9w62Jeof7vQ2jJP21wHNO47G0yWGyO95P2dRbuINK57OH14TQqWECbzhD0uYJWgmXMVmmdA1ReO0EHSTZMJvIDoDArXMjPF4DphtkCQtd1McWQNNVugmXbeTPFMm2q2QJR130zxZH0126BJGnByWNyVdUja/2m9u7rMGavnvST58sitQeibOP1zy4vnkbwVpR+QFJMsDpBixF9OIV2ykJJdaNkmayzxCkvsOo6yJX7KIKRn7TZpXXY+TNXwEnMIO8ya9PM/v/iudjhKWAUf2//85eJj+137/XVnXzTWnv85qc7s/+C3cv/nMeAdHWGfinKJc6F4xDwzq5Jc7ZemiqTDkpBVLLEwJEUzewuuqNgPrOjYOrjzKjPhFgVK6+ZXENRSyfWKRDutDXguAjbX0CWkYpeYA8ySsJ/Avp8k5+RaxewAUX2CO5/hGXc08aHk3ZuaFzVqFA6JJZrOVwNLdJ0UBOttbEPKjynztNMtkTe5sExW61E2yaCRLTsiqBxCKzOmhZDrzr+CXEUV7cot+Ji32k2z6S7GiE3w3brbRLUkYZDfH+AmwNp3lKup7gvW9e8VtYwXH0rT6yh/A01nlB9P2/kX8ZXZlHCJphc1A
aTavg9dQuymJtN2oxp3nC3yQoAoRoSHkdfGaLuD55vsJUNC+zClvQc9F+li7mEehuwausvW/+IB//3kGOvyv2L//wn2/zfqtTL/ewy45+//zJ8azyciT+JQwZPpsk/61GynaTfiGleOrqBQza/+LaHkr2kG+YBHD8oO6D9IB/RTbsZ9ooe7n37751Zsfdv2z7L/c8ruU+z/dJaxz9Of6/K/WrXWaMzkf7Wzk7My/3sMuF/+9yQSvH2cGt1n4vhA8U1IyLzdC57mIIiaX+DfCD9dy4GmN0w17Q+lVfJXRrkvxdB4n3OvmSwBvv4snnpMhQt4IlkuCZ7LaZOx9m/cNIZQazttyL2O4+7ww0kbmQhZyNbyH8wqT+R+hydyN1LQol+DWFYCZ9zU98jNyjcDy1131u9S1la4dM6d8WjX1JXvw0154jmjUZ54XoulrPeXMlOeeC5PPJcnnssTz+WJ52+95VXueGXsPsUdrxJKKKGEEkoooYQSSiihhBJKKKGEEkr448D/AT019aAAeAAA {{!}} base64 -d {{!}} tar -xzpf -&lt;br /&gt;
      &lt;br /&gt;
      if [[ &amp;quot;`authselect current -r`&amp;quot; != &amp;quot;custom/rcs&amp;quot; ]] ; then&lt;br /&gt;
        authselect select custom/rcs --force&lt;br /&gt;
        systemctl restart sssd&lt;br /&gt;
      fi&lt;br /&gt;
      &lt;br /&gt;
      systemctl enable sssd&lt;br /&gt;
&lt;br /&gt;
runcmd:&lt;br /&gt;
  - /usr/bin/expand_lvm_root}}&lt;br /&gt;
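For reference, the long base64 string in the script above is simply a gzipped tar archive of the custom authselect profile, decoded and unpacked on the VM by the trailing &amp;lt;code&amp;gt;base64 -d {{!}} tar -xzpf -&amp;lt;/code&amp;gt; step. A payload of the same shape can be built and verified like this (the directory and file contents are illustrative, not the real profile):&lt;br /&gt;

```shell
# Build a base64-encoded tarball suitable for a 'base64 -d | tar -xzpf -' step.
# 'rcs-profile' and its contents are placeholders for a real authselect profile.
mkdir -p rcs-profile
echo 'auth sufficient pam_sss.so' > rcs-profile/system-auth
payload=$(tar -czf - rcs-profile | base64 | tr -d '\n')

# Round-trip check: decode the payload and list the archive contents.
printf '%s' "$payload" | base64 -d | tar -tzf -
# prints rcs-profile/ and rcs-profile/system-auth
```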
&lt;br /&gt;
== Infrastructure tools ==&lt;br /&gt;
&lt;br /&gt;
=== Generating a CloudStack API key ===&lt;br /&gt;
You can request a CloudStack API key to automate infrastructure deployment using Terraform or CloudMonkey. A new API key can be generated by navigating to your profile page (top right) and then clicking on the &#039;Generate keys&#039; button.&lt;br /&gt;
[[File:CloudStack API Key.png|alt=CloudStack API Key|none|thumb|CloudStack API Key]]&lt;br /&gt;
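If you script against the API directly rather than through CloudMonkey or Terraform, each request must carry a signature: the query string is sorted by parameter name, lowercased, HMAC-SHA1-signed with your secret key, and base64-encoded. A minimal sketch, where the keys are placeholders for those generated above:&lt;br /&gt;

```shell
# Placeholder credentials -- substitute the keys generated on your profile page.
APIKEY='ExampleApiKey'
SECRET='ExampleSecretKey'

# Parameters must be sorted alphabetically by name before signing.
QUERY="apikey=${APIKEY}&command=listVirtualMachines&response=json"

# CloudStack signs the lowercased query string with HMAC-SHA1, base64-encoded.
SIG=$(printf '%s' "$QUERY" | tr '[:upper:]' '[:lower:]' \
  | openssl dgst -sha1 -hmac "$SECRET" -binary | base64)
echo "$SIG"

# URL-encode SIG, then append it to the request:
#   https://cloudstack.rcs.ucalgary.ca/client/api?${QUERY}&signature=...
```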
&lt;br /&gt;
=== CloudMonkey ===&lt;br /&gt;
CloudMonkey is a utility that makes it easier to interact with the CloudStack API. This tool may be used to help automate VM actions (such as start/stop/reboot), or infrastructure tasks (such as creating/destroying VMs, networks, or firewall rules). &lt;br /&gt;
&lt;br /&gt;
To get started with CloudMonkey, refer to the following resources:&lt;br /&gt;
&lt;br /&gt;
* Download from: &amp;lt;nowiki&amp;gt;https://github.com/apache/cloudstack-cloudmonkey/releases/tag/6.1.0&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
* Documentation at: &amp;lt;nowiki&amp;gt;https://cwiki.apache.org/confluence/display/CLOUDSTACK/CloudStack+cloudmonkey+CLI&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Terraform Integration ===&lt;br /&gt;
Terraform allows you to define infrastructure as code and can be used in conjunction with CloudStack to configure your virtual machines and guest networks. Use the official CloudStack provider.&lt;br /&gt;
&lt;br /&gt;
The following is an example Terraform file for reference. Specify your CloudStack API keys in a separate &amp;lt;code&amp;gt;vars.tf&amp;lt;/code&amp;gt; file rather than hard-coding them.&lt;br /&gt;
{{Highlight|code=# Configure the CloudStack Provider&lt;br /&gt;
terraform {&lt;br /&gt;
  required_providers {&lt;br /&gt;
    cloudstack = {&lt;br /&gt;
      source = &amp;quot;cloudstack/cloudstack&amp;quot;&lt;br /&gt;
      version = &amp;quot;0.4.0&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
provider &amp;quot;cloudstack&amp;quot; {&lt;br /&gt;
  api_url    = &amp;quot;${var.cloudstack_api_url}&amp;quot;&lt;br /&gt;
  api_key    = &amp;quot;${var.cloudstack_api_key}&amp;quot;&lt;br /&gt;
  secret_key = &amp;quot;${var.cloudstack_secret_key}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create a new VPC&lt;br /&gt;
resource &amp;quot;cloudstack_vpc&amp;quot; &amp;quot;default&amp;quot; {&lt;br /&gt;
  name = &amp;quot;wedgenet-vpc&amp;quot;&lt;br /&gt;
  display_text = &amp;quot;wedgenet-vpc&amp;quot;&lt;br /&gt;
  cidr = &amp;quot;100.64.0.0/20&amp;quot;&lt;br /&gt;
  vpc_offering = &amp;quot;Default VPC offering&amp;quot;&lt;br /&gt;
  zone = &amp;quot;zone1&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create a new ACL&lt;br /&gt;
resource &amp;quot;cloudstack_network_acl&amp;quot; &amp;quot;default&amp;quot; {&lt;br /&gt;
    name  = &amp;quot;vpc-acl&amp;quot;&lt;br /&gt;
    vpc_id = &amp;quot;${cloudstack_vpc.default.id}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# One ingress and one egress rule for the ACL&lt;br /&gt;
resource &amp;quot;cloudstack_network_acl_rule&amp;quot; &amp;quot;ingress&amp;quot; {&lt;br /&gt;
    acl_id = &amp;quot;${cloudstack_network_acl.default.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    rule {&lt;br /&gt;
        action       = &amp;quot;allow&amp;quot;&lt;br /&gt;
        cidr_list    = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
        protocol     = &amp;quot;tcp&amp;quot;&lt;br /&gt;
        ports        = [&amp;quot;22&amp;quot;, &amp;quot;80&amp;quot;, &amp;quot;443&amp;quot;]&lt;br /&gt;
        traffic_type = &amp;quot;ingress&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;cloudstack_network_acl_rule&amp;quot; &amp;quot;egress&amp;quot; {&lt;br /&gt;
    acl_id = &amp;quot;${cloudstack_network_acl.default.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    rule {&lt;br /&gt;
        action       = &amp;quot;allow&amp;quot;&lt;br /&gt;
        cidr_list    = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
        protocol     = &amp;quot;all&amp;quot;&lt;br /&gt;
        traffic_type = &amp;quot;egress&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Create a new network in the VPC&lt;br /&gt;
resource &amp;quot;cloudstack_network&amp;quot; &amp;quot;primary&amp;quot; {&lt;br /&gt;
    name = &amp;quot;primary&amp;quot;&lt;br /&gt;
    display_text = &amp;quot;primary&amp;quot;&lt;br /&gt;
    cidr = &amp;quot;100.64.1.0/24&amp;quot;&lt;br /&gt;
    network_offering = &amp;quot;DefaultIsolatedNetworkOfferingForVpcNetworks&amp;quot;&lt;br /&gt;
    acl_id = &amp;quot;${cloudstack_network_acl.default.id}&amp;quot;&lt;br /&gt;
    vpc_id = &amp;quot;${cloudstack_vpc.default.id}&amp;quot;&lt;br /&gt;
    zone = &amp;quot;zone1&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create a new public IP address for this network&lt;br /&gt;
resource &amp;quot;cloudstack_ipaddress&amp;quot; &amp;quot;public_ip&amp;quot; {&lt;br /&gt;
    vpc_id = &amp;quot;${cloudstack_vpc.default.id}&amp;quot;&lt;br /&gt;
    network_id = &amp;quot;${cloudstack_network.primary.id}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create VMs. &lt;br /&gt;
resource &amp;quot;cloudstack_instance&amp;quot; &amp;quot;vm&amp;quot; {&lt;br /&gt;
  count = 1&lt;br /&gt;
  name = &amp;quot;vm${count.index+1}&amp;quot;&lt;br /&gt;
  zone = &amp;quot;zone1&amp;quot;&lt;br /&gt;
  service_offering = &amp;quot;rcs.c4&amp;quot;&lt;br /&gt;
  template = &amp;quot;RockyLinux 8.5&amp;quot;&lt;br /&gt;
  network_id = &amp;quot;${cloudstack_network.primary.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  # Cloud Init data can be used to configure your VM on first startup if your template supports Cloud Init&lt;br /&gt;
  user_data = &amp;lt;&amp;lt;EOF&lt;br /&gt;
#cloud-config&lt;br /&gt;
&lt;br /&gt;
# Require specific packages&lt;br /&gt;
packages:&lt;br /&gt;
 - tmux&lt;br /&gt;
 - git&lt;br /&gt;
 - tcpdump&lt;br /&gt;
&lt;br /&gt;
EOF&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
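With the API keys in place (e.g. defined in &amp;lt;code&amp;gt;vars.tf&amp;lt;/code&amp;gt;), the usual Terraform workflow applies:&lt;br /&gt;

```shell
terraform init      # download the cloudstack/cloudstack provider
terraform plan      # preview the VPC, ACL, network, IP, and instance changes
terraform apply     # create the resources in CloudStack
terraform destroy   # tear everything down when no longer needed
```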
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
=== Cannot create a volume snapshot ===&lt;br /&gt;
Volume snapshots can only be taken on VMs that are powered off.&lt;br /&gt;
&lt;br /&gt;
=== Cannot create a VM snapshot ===&lt;br /&gt;
Disk-only VM snapshots cannot be taken when the VM is running. If you intend to snapshot a running system, you must also snapshot its memory.&lt;br /&gt;
&lt;br /&gt;
=== VM state is still running after shutdown ===&lt;br /&gt;
After running &#039;shutdown&#039; inside a VM, the VM state reported by CloudStack may still show as Running. &lt;br /&gt;
&lt;br /&gt;
Please perform a forced shutdown from the CloudStack management console. CloudStack does not detect power state changes made from within the VM, so the reported state is not updated when the power state changes outside of CloudStack&#039;s control (likely a bug).&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack_User_Guide&amp;diff=1852</id>
		<title>CloudStack User Guide</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack_User_Guide&amp;diff=1852"/>
		<updated>2022-06-02T17:25:17Z</updated>

		<summary type="html">&lt;p&gt;Darcy: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is a user&#039;s guide on using CloudStack provided by Research Computing Services.&lt;br /&gt;
&lt;br /&gt;
== Introduction==&lt;br /&gt;
Apache CloudStack is an Infrastructure as a Service (IaaS) platform that allows users to quickly spin up Linux and other non-Windows virtual machines. RCS provides this service to help researchers quickly set up and prototype short-term research-related software on premises. CloudStack is not appropriate for workloads that depend on Windows. Services set up on CloudStack virtual machines can be accessed from the campus network and, if required, from the internet.&lt;br /&gt;
&lt;br /&gt;
Access to CloudStack can be requested via [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c ServiceNow].&lt;br /&gt;
&lt;br /&gt;
Please refer to our [[CloudStack End User Agreement]] for acceptable uses and requirements.&lt;br /&gt;
&lt;br /&gt;
== Managing your virtual machine ==&lt;br /&gt;
You may run whatever operating system you wish on your virtual machine (with the exception of Windows). We cannot provide specific management advice for every operating system available, but we can offer some suggestions on important considerations.&lt;br /&gt;
&lt;br /&gt;
=== Educate yourself ===&lt;br /&gt;
All operating systems (OS) have user groups, web sites, wikis, or mailing lists somewhere on the internet.  They can be a valuable resource.  Most OS providers have on-line documentation that describes using their product.  For example Rocky Linux, used by RCS, has a [https://docs.rockylinux.org/ documentation site].  These are excellent resources and can help you understand how to manage your virtual machine.&lt;br /&gt;
&lt;br /&gt;
=== Configure your VM&#039;s OS ===&lt;br /&gt;
It is critical to ensure that the only services running on your VM are the ones you must run.  Each OS has a way of managing what services are running (sysinit, systemd etc).  Please ensure that unnecessary services have been disabled.&lt;br /&gt;
&lt;br /&gt;
Many OSs and applications ship with pre-configured accounts. Make sure these are either disabled or not allowed to log in.&lt;br /&gt;
&lt;br /&gt;
All unused accounts should be disabled or, preferably, deleted.&lt;br /&gt;
&lt;br /&gt;
All accounts should have strong [https://it.ucalgary.ca/it-security/passwords-do-i-have-change-them/ passwords].&lt;br /&gt;
&lt;br /&gt;
Many OSs can update themselves automatically. If possible, please consider enabling this.&lt;br /&gt;
&lt;br /&gt;
Updates can also be configured to skip specific software that would interfere with your research, but be advised that doing so could place your system at risk.&lt;br /&gt;
&lt;br /&gt;
=== Exposed to the Internet ===&lt;br /&gt;
Not everyone is a computer security expert.  If your VM must be exposed to the internet, please consider using Trend Micro Cloud One Workload Security from IT security to enhance your security posture.&lt;br /&gt;
&lt;br /&gt;
== Accessing CloudStack ==&lt;br /&gt;
===Accessing the CloudStack management console===&lt;br /&gt;
&lt;br /&gt;
The CloudStack management console is a web-based portal that allows you to view and manage your cloud infrastructure, including virtual machines, storage, and networks. Any modern web browser, including Chrome, Firefox, Edge, and Safari, is supported. &lt;br /&gt;
&lt;br /&gt;
Accessing the CloudStack management console is possible only from an IT-managed computer or through the IT General VPN when working on unmanaged machines (e.g. on AirUC) or off campus (e.g. at home). Please review the IT [https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=52a169d6dbe5bc506ad32637059619cd knowledge base article on connecting to the General VPN] or contact IT support if you need assistance connecting to the General VPN. &lt;br /&gt;
[[File:CloudStack VPN Connection.png|alt=CloudStack VPN Connection|none|thumb|CloudStack VPN Connection]]&lt;br /&gt;
=== Login to CloudStack===&lt;br /&gt;
&lt;br /&gt;
To log in to CloudStack, navigate to https://cloudstack.rcs.ucalgary.ca/. If this site fails to load, please make sure you are either on an IT-managed computer or connected to the General VPN.&lt;br /&gt;
&lt;br /&gt;
Sign in to CloudStack using the Single Sign-On option as shown in the image below. This method will require you to authenticate through our central authentication service using your University of Calgary IT credentials and will require multi-factor authentication. You must have multi-factor authentication set up either via your phone or with the Microsoft Authenticator app.&lt;br /&gt;
[[File:CloudStack Login Page.png|alt=CloudStack Login Page|none|thumb|CloudStack Login Page]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; Due to a UI bug, if the Single Sign-On option appears disabled, please refresh the login page and try again. This issue should be addressed in our next CloudStack update.&lt;br /&gt;
&lt;br /&gt;
=== CloudStack Dashboard===&lt;br /&gt;
&lt;br /&gt;
After logging in, you will be presented with your CloudStack management console. The dashboard shows you a general overview of your account&#039;s status.&lt;br /&gt;
[[File:CloudStack Dashboard.png|alt=CloudStack Dashboard|none|thumb|CloudStack Dashboard]]On the right-hand side of the dashboard, you will also see recent activity and events that occurred within your CloudStack account.&lt;br /&gt;
&lt;br /&gt;
If you wish to see your CloudStack account resource quota and allocation, navigate to: &amp;lt;code&amp;gt;Accounts -&amp;gt; Click on your account -&amp;gt; Resources&amp;lt;/code&amp;gt;. &lt;br /&gt;
[[File:CloudStack Resource Quota.png|alt=CloudStack Resource Quota|none|thumb|CloudStack Resource Quota]]&lt;br /&gt;
&lt;br /&gt;
== Working with virtual machines==&lt;br /&gt;
&lt;br /&gt;
CloudStack allows you to control the lifecycle of virtual machines within your cloud account. VMs may be started, stopped, rebooted, or destroyed within your management console.&lt;br /&gt;
&lt;br /&gt;
===Create a VM===&lt;br /&gt;
&lt;br /&gt;
To create a new VM, enter the CloudStack management console and navigate to:  &amp;lt;code&amp;gt;Compute -&amp;gt; Instances -&amp;gt; Add Instance&amp;lt;/code&amp;gt;[[File:CloudStack Instance Summary.png|alt=CloudStack Instance Summary|thumb|CloudStack Instance Summary|493x493px]]&lt;br /&gt;
&lt;br /&gt;
Virtual Machines require the following details:&lt;br /&gt;
&lt;br /&gt;
# &#039;&#039;&#039;Deployment zone&#039;&#039;&#039;. Your account will already be placed in the appropriate zone.&lt;br /&gt;
# &#039;&#039;&#039;Boot template or ISO&#039;&#039;&#039;. You may choose either a pre-created template or boot from a custom CD-ROM ISO file.&lt;br /&gt;
# &#039;&#039;&#039;Compute offering&#039;&#039;&#039;. You may select an appropriate size for your new VM. Resources will be counted against your account&#039;s quota.&lt;br /&gt;
# &#039;&#039;&#039;Data Disk&#039;&#039;&#039;. You may choose to add an additional virtual disk to your VM to store your data. Alternatively, if you wish to use a single virtual disk for your VM, you may choose to override the size of your root disk in step 2 and select &#039;No thanks&#039; in this step.&lt;br /&gt;
# &#039;&#039;&#039;Networks&#039;&#039;&#039;. You may choose one or more networks your VM should connect to. All CloudStack accounts come with a default network already created and ready to be used.&lt;br /&gt;
# &#039;&#039;&#039;SSH keypairs&#039;&#039;&#039;. For templates that support custom SSH key pairs, you may choose to use a custom SSH keypair to be installed as part of the deployment process.&lt;br /&gt;
# &#039;&#039;&#039;Advanced settings&#039;&#039;&#039;. For templates that support custom user-data (Cloud-Init), you may choose to enable the advanced settings and provide your own Cloud-Init user-data payload. More on this in the advanced tasks section below.&lt;br /&gt;
# &#039;&#039;&#039;Other VM details&#039;&#039;&#039;. You may give your new VM a friendly name and make it part of a group. Groups allow you to group related VMs together for better organization. You may change these details at a later time.&lt;br /&gt;
&lt;br /&gt;
When you are done, review the instance summary on the right hand side and then click on the &#039;Launch Virtual Machine&#039; button.&lt;br /&gt;
&lt;br /&gt;
=== Selecting your VM Operating system ===&lt;br /&gt;
Many OSs will provide various editions that are tailored to a specific use case.  A desktop VM may not be appropriate when you need to run a database server.  The OS provider will have guides on how to choose an edition.&lt;br /&gt;
&lt;br /&gt;
You may choose to install the operating system to your virtual machine using either pre-built templates or from scratch using an ISO image.&lt;br /&gt;
&lt;br /&gt;
====Install from a virtual machine template====&lt;br /&gt;
&lt;br /&gt;
We provide Rocky Linux 8.5 and Ubuntu Server 22.04 LTS templates for your convenience. These templates are pre-built images with the operating system installed and ready for use. They also support further automated setup via Cloud-Init configuration data provided when deploying a new VM. Currently, we offer the following templates: &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Template&lt;br /&gt;
!Cloud-Init Support&lt;br /&gt;
!Password Support&lt;br /&gt;
!Default Username&lt;br /&gt;
|-&lt;br /&gt;
|Rocky Linux 8.5&lt;br /&gt;
|Yes&lt;br /&gt;
|Yes&lt;br /&gt;
|rocky&lt;br /&gt;
|-&lt;br /&gt;
|Ubuntu Server 22.04&lt;br /&gt;
|Yes&lt;br /&gt;
|Yes&lt;br /&gt;
|ubuntu&lt;br /&gt;
|}&lt;br /&gt;
Rocky Linux is an open-source Linux distribution that is binary-compatible with Red Hat Enterprise Linux; it is the distribution RCS recommends.&lt;br /&gt;
&lt;br /&gt;
For templates that support passwords, the generated password that appears after a VM is created is applied to the default username.&lt;br /&gt;
&lt;br /&gt;
Security note: All VM templates are configured with SSH password authentication enabled. You should be able to SSH to your VM from another system connected to the same guest network. Do not expose port 22 unless required, and we highly recommend using key-based authentication instead of passwords.&lt;br /&gt;
&lt;br /&gt;
===== Virtual machine credentials =====&lt;br /&gt;
VM templates that support passwords will have a randomly generated password set when the VM is first created or when a password reset is requested (available only when the VM is powered off). The randomly generated 6-character password is displayed as a notification in your CloudStack management console whenever a new password is set. &lt;br /&gt;
[[File:CloudStack VM Password.png|alt=CloudStack VM Password|none|thumb|CloudStack VM Password]]&lt;br /&gt;
This password is set on the default username for your template. For example, the Rocky Linux VM template sets this password on the &amp;lt;code&amp;gt;rocky&amp;lt;/code&amp;gt; user account. You may become the super user by logging in as the &amp;lt;code&amp;gt;rocky&amp;lt;/code&amp;gt; user and then running &amp;lt;code&amp;gt;sudo su&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note: If you specify a custom Cloud-Init config that creates additional users or sets account passwords, the displayed password will be overridden and have no effect.&lt;br /&gt;
&lt;br /&gt;
==== Install from an ISO image====&lt;br /&gt;
We provide various ISO images for popular Linux distributions. You may select one of these ISO images instead of using a pre-built template when deploying a new virtual machine. We currently provide:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Distribution&lt;br /&gt;
!ISO&lt;br /&gt;
|-&lt;br /&gt;
|Ubuntu 20.04&lt;br /&gt;
|ubuntu-20.04.4-desktop-amd64.iso&lt;br /&gt;
&lt;br /&gt;
ubuntu-20.04.4-live-server-amd64.iso&lt;br /&gt;
|-&lt;br /&gt;
|Ubuntu 21.10&lt;br /&gt;
|ubuntu-21.10-desktop-amd64.iso&lt;br /&gt;
&lt;br /&gt;
ubuntu-21.10-live-server-amd64.iso&lt;br /&gt;
|-&lt;br /&gt;
|Ubuntu 22.04&lt;br /&gt;
|ubuntu-22.04-live-server-amd64.iso&lt;br /&gt;
|-&lt;br /&gt;
|Rocky Linux 8.5&lt;br /&gt;
|Rocky-8.5-x86_64-minimal.iso&lt;br /&gt;
|-&lt;br /&gt;
|Fedora 35&lt;br /&gt;
|Fedora-Workstation-Live-x86_64-35-1.2.iso&lt;br /&gt;
|}&lt;br /&gt;
You may install a custom ISO file into your CloudStack account either by uploading the ISO directly through the web console or by providing a URL to the ISO file on the internet.&lt;br /&gt;
&lt;br /&gt;
Please do not install Windows on our CloudStack infrastructure. It is against our user agreement to run Windows based systems in this infrastructure. If you need a Windows VM, please contact us for alternative solutions.&lt;br /&gt;
&lt;br /&gt;
===== Register an ISO with a URL=====&lt;br /&gt;
[[File:CloudStack Download ISO.png|alt=CloudStack Download ISO|thumb|CloudStack Download ISO|190x190px]]&lt;br /&gt;
&lt;br /&gt;
To add a custom ISO file from the internet, enter the CloudStack management console and navigate to:  &amp;lt;code&amp;gt;Images -&amp;gt; ISOs -&amp;gt; Register ISO&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You may check the state of the ISO file by clicking on it. Once the file has downloaded successfully, its ready state becomes ‘true’ and the ISO appears in the selection list.&lt;br /&gt;
[[File:CloudStack ISO Ready.png|alt=CloudStack ISO Ready|none|thumb|172x172px|CloudStack ISO Ready]]&lt;br /&gt;
&lt;br /&gt;
=====Upload a custom ISO=====&lt;br /&gt;
&lt;br /&gt;
To upload an ISO file, enter the CloudStack management console and navigate to:  &amp;lt;code&amp;gt;Images -&amp;gt; ISOs -&amp;gt; Upload ISO from Local (icon)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:CloudStack Upload ISO.png|alt=CloudStack Upload ISO|none|thumb|CloudStack Upload ISO|217x217px]]&lt;br /&gt;
&lt;br /&gt;
===Connecting to your VM console===&lt;br /&gt;
The CloudStack management console has a KVM (keyboard, video, mouse) feature built-in, allowing you to remotely connect to and interact with your virtual machine. To connect to your virtual machine&#039;s console, navigate to: &amp;lt;code&amp;gt;Compute -&amp;gt; Instances -&amp;gt; Your Instance -&amp;gt; View console&amp;lt;/code&amp;gt;.&lt;br /&gt;
[[File:CloudStack View Console.png|alt=CloudStack View Console|none|thumb|CloudStack View Console]]&lt;br /&gt;
&lt;br /&gt;
=== Expanding a VM disk ===&lt;br /&gt;
[[File:CloudStack Expand Volume.png|alt=CloudStack Expand Volume|thumb|CloudStack Expand Volume]]&lt;br /&gt;
Virtual machine disks can be expanded after they are created within CloudStack. However, you will need to expand the partitions and filesystems manually.&lt;br /&gt;
&lt;br /&gt;
To grow an existing disk:&lt;br /&gt;
&lt;br /&gt;
# Go into your VM details page and click on ‘Volumes’.&lt;br /&gt;
# Click on the volume you wish to expand.&lt;br /&gt;
# Click on the ‘Resize Volume’ icon in the top right.&lt;br /&gt;
Once the volume has been expanded, you should be able to verify that the disk has grown with &amp;lt;code&amp;gt;lsblk&amp;lt;/code&amp;gt;; the kernel will also log messages when this occurs. However, you will still need to expand any partitions, volumes, and filesystems on your system manually.&lt;br /&gt;
&lt;br /&gt;
To expand your partition, use the &amp;lt;code&amp;gt;growpart&amp;lt;/code&amp;gt; command followed by your disk device and partition number, e.g. &amp;lt;code&amp;gt;/usr/bin/growpart /dev/vda 3&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For LVM volume sets, you can expand the volume using the &amp;lt;code&amp;gt;pvresize&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;lvresize&amp;lt;/code&amp;gt; commands:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/sbin/pvresize -y -q /dev/vda3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/sbin/lvresize -y -q -r -l +100%FREE /dev/mapper/&amp;lt;volume-name&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To expand your filesystem:&lt;br /&gt;
&lt;br /&gt;
* XFS: &amp;lt;code&amp;gt;/usr/sbin/xfs_growfs &amp;lt;volume&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
* EXT: &amp;lt;code&amp;gt;resize2fs &amp;lt;volume&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
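The expansion steps above can be scripted. Below is a minimal Python sketch that only builds the command list rather than running anything; the device name, partition number, and logical volume path are assumptions taken from the examples above and must be adjusted to your VM&#039;s actual layout (check with &amp;lt;code&amp;gt;lsblk&amp;lt;/code&amp;gt; first).&lt;br /&gt;

```python
# Hypothetical helper: builds (but does not run) the disk-expansion command
# sequence described above. The default paths below are assumptions; verify
# your own layout with lsblk before running anything.
def build_expand_commands(disk="/dev/vda", part=3,
                          lv="/dev/mapper/rocky-root",
                          fstype="xfs", mountpoint="/"):
    cmds = [
        f"/usr/bin/growpart {disk} {part}",        # grow the partition
        f"/usr/sbin/pvresize -y -q {disk}{part}",  # grow the LVM physical volume
        f"/usr/sbin/lvresize -y -q -r -l +100%FREE {lv}",  # grow the logical volume
    ]
    if fstype == "xfs":
        cmds.append(f"/usr/sbin/xfs_growfs {mountpoint}")  # XFS grows by mountpoint
    else:
        cmds.append(f"resize2fs {lv}")             # EXT grows by volume
    return cmds
```

Printing the list and running each command by hand keeps the process auditable.&lt;br /&gt;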
&lt;br /&gt;
=== Destroying a VM ===&lt;br /&gt;
If you need to delete a VM, click on the red garbage bin icon in the VM instance page. All deletions are irreversible, so please make sure you have a copy of any data you need before proceeding.&lt;br /&gt;
[[File:CloudStack Delete VM.png|alt=CloudStack Delete VM|none|thumb|CloudStack Delete VM]]&lt;br /&gt;
The VM root volume can be deleted immediately by enabling the &#039;Expunge&#039; option in the dialog box. If left disabled, the VM root volume will linger for a day before it is deleted by the system. You may wish to expunge a volume if you are running low on space or volume quota.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Virtual machine networking ==&lt;br /&gt;
The CloudStack platform allows you to define custom virtual private cloud (VPC) networks, each of which can contain any number of guest networks that your virtual machines connect to. Each guest network has its own private network address space and is not directly routable from campus or the internet. For virtual machines that require internet access, the VPC or guest network they are connected to must have a NAT IP address associated with it. The following diagram shows how a guest network connects to the internet and the campus network.&lt;br /&gt;
[[File:CloudStack Guest Networking.png|alt=CloudStack Guest Networking|none|thumb|CloudStack Guest Networking]]&lt;br /&gt;
In order to expose a virtual machine&#039;s services to campus or the internet, the appropriate port forwardings must be set up on the VPC containing the guest network. More on this will be discussed in the next section.&lt;br /&gt;
&lt;br /&gt;
Having multiple guest networks allows for more advanced network setups but is not required. We recommend using a single flat network for most workloads. &lt;br /&gt;
&lt;br /&gt;
By default, all CloudStack accounts come with a default VPC and guest network set up with a NAT IP assigned.&lt;br /&gt;
&lt;br /&gt;
=== IP addresses ===&lt;br /&gt;
Due to design decisions made during the setup of the CloudStack platform, only internal 10.44.12X.X IP addresses can be assigned to your VPC. These addresses are reachable from the university campus network, and a special range of them can also be reached from the internet.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!IP address range&lt;br /&gt;
!Accessible from&lt;br /&gt;
!Internet IP mapping&lt;br /&gt;
|-&lt;br /&gt;
|10.44.120.3-128&lt;br /&gt;
|Campus, Internet&lt;br /&gt;
|10.44.120.X maps to 136.159.140.X (ports 80 and 443 only)&lt;br /&gt;
|-&lt;br /&gt;
|10.44.120.129-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|10.44.121.0-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|10.44.122.0-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|10.44.123.0-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|}&lt;br /&gt;
If you need a service exposed to the internet, please request a public IP address using our [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c ServiceNow request form]. Additionally, if your service does not use port 80 or 443, you must also submit a firewall change request to allow the additional port through. &lt;br /&gt;
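The campus-to-internet address mapping in the table above can be expressed as a small function. This is an illustrative Python sketch (the function name is ours; the address ranges come from the table):&lt;br /&gt;

```python
def internet_ip_for(campus_ip):
    """Return the public 136.159.140.X address for an internet-facing
    campus IP (10.44.120.3-128), or None for campus-only addresses."""
    octets = campus_ip.split(".")
    if octets[:3] == ["10", "44", "120"] and int(octets[3]) in range(3, 129):
        return "136.159.140." + octets[3]
    return None  # 10.44.120.129-255 and 10.44.121-123.0-255 are campus-only
```

For example, &amp;lt;code&amp;gt;internet_ip_for(&#039;10.44.120.45&#039;)&amp;lt;/code&amp;gt; returns the public address 136.159.140.45, which is reachable from the internet on ports 80 and 443 only.&lt;br /&gt;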
&lt;br /&gt;
=== Exposing a network service to campus ===&lt;br /&gt;
In order to make a virtual machine&#039;s services reachable from the campus network, you must first set up port forwarding from a campus IP address to your virtual machine.&lt;br /&gt;
&lt;br /&gt;
To create a port forwarding rule, navigate to &amp;lt;code&amp;gt;Network -&amp;gt; VPC -&amp;gt; Select your VPC -&amp;gt; Public IP Addresses&amp;lt;/code&amp;gt;. If you do not have any available IP addresses, click on &#039;Acquire New IP&#039; and select one. Click on the IP address you wish to forward from, then navigate to the &#039;Port Forwarding&#039; tab. Enter the private port range, the public port range, and the protocol, and select the target VM. &lt;br /&gt;
&lt;br /&gt;
For example, to port forward only HTTP (tcp/80) traffic, you would enter the following:&lt;br /&gt;
[[File:CloudStack Port Forwarding.png|alt=CloudStack Port Forwarding|none|thumb|CloudStack Port Forwarding]]Once the port forwarding is created, you should be able to access the service from on campus. If for some reason access to your service does not work, there may be a firewall restriction on IT&#039;s network. In such circumstances, please contact us for assistance.&lt;br /&gt;
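Once a forwarding rule is in place, you can check reachability from another campus machine without a browser. A minimal Python sketch using only the standard library (the host and port in the example comment are placeholders):&lt;br /&gt;

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

# Example (placeholder address from the campus-accessible range):
# port_open("10.44.120.45", 80)
```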
&lt;br /&gt;
=== Exposing a network to the internet ===&lt;br /&gt;
Exposing a service to the internet is the same as exposing it to campus, except that you must create the port forwarding on an IP address that maps to an internet IP address, as outlined in the IP address table above. If your account does not have one of these IP addresses available, please request one via the [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c ServiceNow request form].&lt;br /&gt;
&lt;br /&gt;
By default, only ports 80 and 443 are allowed through the Internet IP address. For all other ports, please [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=47cd16d113153a00b5b4ff82e144b0bf create a firewall rule change request in ServiceNow].&lt;br /&gt;
&lt;br /&gt;
== Cloud-Init Automation ==&lt;br /&gt;
&lt;br /&gt;
=== Ubuntu ===&lt;br /&gt;
The following Cloud-Init configs apply to Ubuntu VM templates.&lt;br /&gt;
&lt;br /&gt;
==== Ubuntu desktop ====&lt;br /&gt;
Use the following Cloud-Init config with the Ubuntu Server template to set up an Ubuntu desktop environment. The setup step takes up to 15 minutes to complete, and you should see a login screen when it finishes. &lt;br /&gt;
&lt;br /&gt;
Adjust the root and user passwords as desired. Below, the test user&#039;s password is set to blank, allowing you to log in to GNOME without a password.&lt;br /&gt;
{{Highlight|code=#cloud-config&lt;br /&gt;
disable_root: false&lt;br /&gt;
&lt;br /&gt;
users:&lt;br /&gt;
  - name: user&lt;br /&gt;
    lock_passwd: false&lt;br /&gt;
    inactive: false&lt;br /&gt;
    gecos: Test User&lt;br /&gt;
    primary_group: user&lt;br /&gt;
    groups: wheel&lt;br /&gt;
    passwd: $1$ADUODeAy$eCJ1lPSxhSGmSvrmWxjLC1&lt;br /&gt;
      &lt;br /&gt;
chpasswd:&lt;br /&gt;
  list: {{!}}&lt;br /&gt;
    root:password&lt;br /&gt;
  expire: false&lt;br /&gt;
    &lt;br /&gt;
# Install a graphical desktop&lt;br /&gt;
runcmd:&lt;br /&gt;
  - DEBIAN_FRONTEND=noninteractive apt -y upgrade&lt;br /&gt;
  - DEBIAN_FRONTEND=noninteractive apt -y install tasksel&lt;br /&gt;
  - tasksel install gnome-desktop&lt;br /&gt;
  - systemctl set-default graphical.target&lt;br /&gt;
  - systemctl isolate graphical.target}}&lt;br /&gt;
&lt;br /&gt;
=== Rocky Linux ===&lt;br /&gt;
The following Cloud-Init configs apply to Rocky Linux templates.&lt;br /&gt;
&lt;br /&gt;
==== Rocky Linux Desktop ====&lt;br /&gt;
Use the following Cloud-Init config with the Rocky Linux template to set up a Rocky Linux desktop environment. The setup step takes up to 10 minutes to complete, and you should see a login screen when it finishes. &lt;br /&gt;
&lt;br /&gt;
Adjust the root and user passwords as desired. Below, the test user&#039;s password is set to blank, allowing you to log in to GNOME without a password.&lt;br /&gt;
{{Highlight|code=#cloud-config&lt;br /&gt;
disable_root: false&lt;br /&gt;
&lt;br /&gt;
users:&lt;br /&gt;
  - name: user&lt;br /&gt;
    lock_passwd: false&lt;br /&gt;
    inactive: false&lt;br /&gt;
    gecos: Test User&lt;br /&gt;
    primary_group: user&lt;br /&gt;
    groups: wheel&lt;br /&gt;
    passwd: $1$ADUODeAy$eCJ1lPSxhSGmSvrmWxjLC1&lt;br /&gt;
      &lt;br /&gt;
chpasswd:&lt;br /&gt;
  list: {{!}}&lt;br /&gt;
    root:password&lt;br /&gt;
  expire: false&lt;br /&gt;
    &lt;br /&gt;
# Install a graphical desktop&lt;br /&gt;
runcmd:&lt;br /&gt;
  - yum -y install &amp;quot;@Workstation&amp;quot;&lt;br /&gt;
  - systemctl set-default graphical.target&lt;br /&gt;
  - systemctl isolate graphical.target}}&lt;br /&gt;
&lt;br /&gt;
==== Rocky Linux Docker host ====&lt;br /&gt;
Use the following Cloud-Init config with the Rocky Linux template to set up a new Docker host, which can then be used to run Docker containers. Also included are:&lt;br /&gt;
&lt;br /&gt;
# The docker-compose utility to help deploy container stacks more easily&lt;br /&gt;
# A helper script to expand the &amp;lt;code&amp;gt;/var&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; filesystems on first startup based on the available space in the ROOT volume. &lt;br /&gt;
&lt;br /&gt;
Use the CloudStack generated password with the &#039;&amp;lt;code&amp;gt;rocky&amp;lt;/code&amp;gt;&#039; default user account to log in.{{Highlight|code=#cloud-config&lt;br /&gt;
&lt;br /&gt;
write_files:&lt;br /&gt;
  - path: /usr/bin/expand_lvm_root&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      /usr/bin/growpart /dev/vda 3&lt;br /&gt;
      /usr/sbin/pvresize -y -q /dev/vda3&lt;br /&gt;
      /usr/sbin/lvresize -y -q -r -l +50%FREE /dev/mapper/*root&lt;br /&gt;
      /usr/sbin/lvresize -y -q -r -l +100%FREE /dev/mapper/*var&lt;br /&gt;
      /usr/sbin/xfs_growfs /&lt;br /&gt;
      /usr/sbin/xfs_growfs /var&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
&lt;br /&gt;
  - path: /usr/bin/setup_docker&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      yum install -y yum-utils&lt;br /&gt;
      yum-config-manager \&lt;br /&gt;
         --add-repo \&lt;br /&gt;
         https://download.docker.com/linux/centos/docker-ce.repo&lt;br /&gt;
      yum install -y docker-ce docker-ce-cli containerd.io&lt;br /&gt;
      systemctl start docker&lt;br /&gt;
      systemctl enable docker&lt;br /&gt;
      &lt;br /&gt;
      curl -L &amp;quot;https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)&amp;quot; -o /usr/local/bin/docker-compose&lt;br /&gt;
      chmod +x /usr/local/bin/docker-compose&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
&lt;br /&gt;
  - path: /root/docker-compose.yml&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      version: &#039;3.3&#039;&lt;br /&gt;
      services:&lt;br /&gt;
        web:&lt;br /&gt;
          image: php:7.4-apache&lt;br /&gt;
          restart: always&lt;br /&gt;
          user: &amp;quot;0:0&amp;quot;&lt;br /&gt;
          volumes:&lt;br /&gt;
            - /var/www/html:/var/www/html&lt;br /&gt;
          ports:&lt;br /&gt;
            - &amp;quot;80:80&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  - path: /var/www/html/index.php&lt;br /&gt;
    permissions: &#039;0644&#039;&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      &amp;lt;h1&amp;gt;Hello there!&amp;lt;/h1&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;I see you from &amp;lt;?php echo $_SERVER[&#039;REMOTE_ADDR&#039;]; ?&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;&amp;lt;nowiki&amp;gt;pre&amp;lt;/nowiki&amp;gt;&amp;gt;&amp;lt;?php print_r($_SERVER); ?&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
    &lt;br /&gt;
# Ensure VM has the largest / possible&lt;br /&gt;
runcmd:&lt;br /&gt;
  - /usr/bin/expand_lvm_root&lt;br /&gt;
  - /usr/bin/setup_docker&lt;br /&gt;
  - cd /root; docker-compose up -d}}&lt;br /&gt;
&lt;br /&gt;
==== UC Authentication ====&lt;br /&gt;
Use the following Cloud-Init config to allow UC-based authentication. Local accounts with the same username as the IT account may use the IT credential to log in.&lt;br /&gt;
{{Highlight|code=#cloud-config&lt;br /&gt;
&lt;br /&gt;
write_files:&lt;br /&gt;
  - path: /usr/bin/setup_uc_auth&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      yum install -y sssd sssd-dbus sssd-krb5 krb5-workstation authselect-compat&lt;br /&gt;
      &lt;br /&gt;
      cat &amp;lt;&amp;lt;EOF &amp;gt; /etc/sssd/sssd.conf&lt;br /&gt;
      [sssd]&lt;br /&gt;
      config_file_version = 2&lt;br /&gt;
      services = nss, pam, ifp&lt;br /&gt;
      domains = uc.ucalgary.ca&lt;br /&gt;
      &lt;br /&gt;
      [domain/uc.ucalgary.ca]&lt;br /&gt;
      id_provider = files&lt;br /&gt;
      debug_level = 5&lt;br /&gt;
      auth_provider = krb5&lt;br /&gt;
      chpass_provider = krb5&lt;br /&gt;
      &lt;br /&gt;
      krb5_realm = UC.UCALGARY.CA&lt;br /&gt;
      krb5_server = ITSODCSRV14.UC.UCALGARY.CA:88&lt;br /&gt;
      krb5_validate = false&lt;br /&gt;
      EOF&lt;br /&gt;
      chmod 600 /etc/sssd/sssd.conf&lt;br /&gt;
      &lt;br /&gt;
      cat &amp;lt;&amp;lt;EOF &amp;gt; /etc/krb5.conf&lt;br /&gt;
      [logging]&lt;br /&gt;
       default = FILE:/var/log/krb5libs.log&lt;br /&gt;
       kdc = FILE:/var/log/krb5kdc.log&lt;br /&gt;
       admin_server = FILE:/var/log/kadmind.log&lt;br /&gt;
      &lt;br /&gt;
      [libdefaults]&lt;br /&gt;
       default_realm = UC.UCALGARY.CA&lt;br /&gt;
       dns_lookup_realm = false&lt;br /&gt;
       dns_lookup_kdc = false&lt;br /&gt;
       ticket_lifetime = 24h&lt;br /&gt;
       renew_lifetime = 7d&lt;br /&gt;
       forwardable = true&lt;br /&gt;
      &lt;br /&gt;
      [realms]&lt;br /&gt;
       UC.UCALGARY.CA = {&lt;br /&gt;
        kdc = itsodcsrv14.uc.ucalgary.ca&lt;br /&gt;
       }&lt;br /&gt;
      &lt;br /&gt;
      [domain_realm]&lt;br /&gt;
       uc.ucalgary.ca = UC.UCALGARY.CA&lt;br /&gt;
       .uc.ucalgary.ca = UC.UCALGARY.CA&lt;br /&gt;
      EOF&lt;br /&gt;
      &lt;br /&gt;
      mkdir -p /etc/authselect/custom/rcs&lt;br /&gt;
      cd /etc/authselect/custom/rcs&lt;br /&gt;
      echo H4sIAMYsbGIAA+0ca3PbuDGf+StQ34dLMpEtybLdUevO6BKl56mTuJGv7Uwmo4FISMKJBHQAaEfNub+9uwBJkXrLshxnjjuTyCKJfWF3sQsuFPhS9CtB79keoQpw2mjYT4DZz1r1uPGs1qhX6yf12vFx/Vm1Vqs1jp+R6j6ZSiHWhipCnikpzarn1t3/TuErj8bhhBzccjOs6Igq41MVHBDen71WUey3mCsWHNx5W4wKpT+qSAGjI3lDQxj8SarB0UDIiB2FcsBFRfuKMfHZY4L2QpYbS2MzZMJwnxouxfnXefQHTaNi9nufhprdpQj6XAyYGisuzHIUuYcWIhlTrW/lRkxMNdO0OH5HdHdeXlDNjAGCuhJQ0IM4GjPFx0Om4PGjDM9n8pULP4wDtpkik78q1Lec/WifcMr8kSyE7fB/a9ssYf8Q2PiPc6/3RmNN/K826mez8f/07LSM/48BR4uD8dGaWOytGbc8BK8buSTu5odtFEyPitGxDH2LYXai9kFjjf8fn4HPT/3/GPy/fnZ6Uvr/Y8BXiP/gTHHOL/KZ0Z2HRpGmD2meszi5mIExjbpM3BxquRuOgIkJICE/kE7qsKQYGwjXRbSrfD2Xxe7EVp/yEOMGsjZWzOLSPASuCHJ83iCxwPtdwyN2XqtXqwvYSpHMMKPjfp/7HHFtyIydrWBnXeeFQkT4fak4ZImulwh134n3POr7Mk50sTUWGMy0RoE2hnmRAJHDgzLtwk1ewffmJq/gXZiJBf9iTSaHZGvTA0ZoGGumdsakY1AxC7q8j9qJeUD+SmqwPhAQi5kC7k8B69M4NOc9GhA7TutzOSLIB4g1EvJWnHPIFhT77HBbE9hNXZBmRNxYi0yTlHvhyexaA9cYvhDkGCMZDTfGMmITLiw7wMGNHLECuq2ZCjnIZnVUyeHZmis90YZFNhDtJJ0Mgl9lrxuNhpDwBVxZi4ioHp1Xq2dni0bNu0k2GPwkz8yn1GBqJDWj1FSWCFUwTDCxG+4DHUF8JUXgzBNNrws2u9sspB65k+4SW99s/RdgyNz4w0OsA/eUY6yr/05rtdn8r35S1n+PAjaSBU1rPGA4pA8pjCaJG6NbsS8zbuWDxmTk6jTMogZKxmOHYDsMdiAgEMxkOKYIcp69DEE60CUaMsLQ3twSRzbQBgnr2XprRtKBiCMOJFMWRaoJXcAxn5rCiG9XcRbK7T3RWOP/pyen+fqvgf5/XC3rv0eBb17eYTIbsJDaGs/+cV53dlGWjfsuG+O6zWh8qP43hYW1UQUQ7aqYHC9CYh4P6fx9eanU+3SGn6xiqBGX7CU5X656mKZ/i6uRv52T+rQa2QV3sWbaaQZdwkjwrZSQxqlBxqYiYrCQ0UHTfd4RoybdPlfadDHiz0+V5mYjI1ivmm4i9W6CuSSW9KW6BfdbwnS5tVFubSxmJvUL0oM4wkRXD2kgb70fcnojt0CKUCB3S5IrYG3EZ8pQqO1sZqkJ+JLig6H5C/CgMUaPmQjgsiA6jFW0247Hg0yg5WNXfVkkXdDR2GzKzmacPLH9pPr3sZ20TTQe3/4W05Abm+0UwzyxyuyiGLorRTgpULr3YgPOdFKrE+dT2y49dqMEg6eRo93YSZaIZfjKfbkcmnJf7jvbl3ssGEtt7Gv4PdJYt/93Um/M1P81iC9l/f8YsKFFWydAP7eeZuvVTX13uaf+6X9kEEQv8991/DJZnwvYswJrLhTYGEmtDSNqWKlNNCa5Jj3LayV55KDpvv+uh/IWc00W3G3q1DOUkqIdatVxQGG1nmJ8Yh6+Gj62W2/etfdLY43/n1VPT3P+XwP/rzaOy/3/R4G27boinU7njS1/XKIwu1P2nIauGHcZpU2MsQgKJy+8813B8zosZD62dBEz5JqMlcStc1egsRyDVMMD4GoyVhAuZJ/wAJk0E4+Kud09wHID95U+BAI4OrmgCYWIY3C46x8DpJJEVNABSwtBg8lbJMGtIU0B1
qTiTC8iEjF/SAXXkca6YYgcXr5pXb0i/2Cqx5QEJSnyVjF2cdU6JBcmYwKQkfedDkGcV613kLUYpvoUxDISdzycoMlkIF0yDuPBwOqiR32oaYPktgfM+lIIYNMKAnGSj0PkvN9nCmNUWqY4vaE+3uHuFBcwoZGTg/Ygc3dK9oGzHoPJjoEE7pIaOyO/Iv4x6KjpDY0Z6+bREXyLFTvk8ggHHkF+E3jeT5N0BXhVMBaK+2EQ6CHr6isZOVqKgpgKZRXJw+6dCYRvvO+BMbj5D8hziK/sxj08yU8DIzecZlnai0NyjTY0pDcMlQZKRSGFz7weE6zPQQ2QkbJUVn/InFE5wtgNhSqDJLIPsR4+KxEYgpq4R+1UmGGsPcgeY9AlEXEE82yNiesRrB9MWY2iln92HL9CcSYyJoG0hdItFXamRoyNEzXEQqD1g7bhOQXG6A+5YK/sKJgPzz5a8A1tXQb0ghz9GuPWBNdTT0mWVFQGw50LNAox8PAlL4dJc5Nu/Usb/B+KphEIr9DYwwkBtvG2GTpsCWpHLDep1iOSjRKLrIfEaDLD+Ql1bhROUl98/eH924u///KxdX3x4b1XmQPPa8EMOyFBb6hoiUpDH0MEP9qpmspySK5CRjWozNJ3j3iB9OMIptOJazDzYATsiDiXscPBDwx6P9TQwB75ADGtIGIqF72Bxd0qGFFNnRPmFm2Y4dsEIQn7wrWNZM6+CjyCUP9qXVy2frpskw9XKHnrkrxtt65/+djuLFBCThuFzbFm08OEJAndqXfjHSSM9Q9oAk3SSBvYJsiHZn5s+A2zQ2eiGCIGTYDN4k2vWKXNUMP3pTDKhyKLudFACB/Nh0q7VKSLBOqHK4vEbgMQW28cJkJlr1jm6OQ5tLaYPYoeC1Y3cLaQzj2aCITPITUWUfY0RObxWCqYZXQTnEpczNiXcch9bq09CTFIhAs7eKGNzXI82zecSHAJV4lraia3GNlojhf7zgmeZsE8urQsTfBs8t7qkLyXRNogGjEKSxlMxkzjNKJ67opwNI90i+RF5rGJ+Ck/ufbP9VOSe9h6no3Pbmpe5hoTX6bIkxc0KxFjOIdnwO3FIGRFfHB9Fhe+7Cniq0NUgqUUTPG+qItVQ4L9jQvfEIEx9OfrAdC91pg+BLFCHRcNPA5kgqFlN5vxAkYg8I8sraFpUmNzMHxAxXYhBOsJAp762REz/lHSYpBTg8taEiKvhwzsL9laRgNOuUoDBeoEvPS/eaOe7tsVhQXixF0HeRWNmMEZltNNEc97c9GxEa1z1X598fbiNaY1/764fv0zedO6bv3U6qyObdMY9x4XagjCr0jalAQJmqE9cG1tM7bexAbedAkEJ1K3ihu4C84XJw+BA2PgqRQam9w62Jeof7vQ2jJP21wHNO47G0yWGyO95P2dRbuINK57OH14TQqWECbzhD0uYJWgmXMVmmdA1ReO0EHSTZMJvIDoDArXMjPF4DphtkCQtd1McWQNNVugmXbeTPFMm2q2QJR130zxZH0126BJGnByWNyVdUja/2m9u7rMGavnvST58sitQeibOP1zy4vnkbwVpR+QFJMsDpBixF9OIV2ykJJdaNkmayzxCkvsOo6yJX7KIKRn7TZpXXY+TNXwEnMIO8ya9PM/v/iudjhKWAUf2//85eJj+137/XVnXzTWnv85qc7s/+C3cv/nMeAdHWGfinKJc6F4xDwzq5Jc7ZemiqTDkpBVLLEwJEUzewuuqNgPrOjYOrjzKjPhFgVK6+ZXENRSyfWKRDutDXguAjbX0CWkYpeYA8ySsJ/Avp8k5+RaxewAUX2CO5/hGXc08aHk3ZuaFzVqFA6JJZrOVwNLdJ0UBOttbEPKjynztNMtkTe5sExW61E2yaCRLTsiqBxCKzOmhZDrzr+CXEUV7cot+Ji32k2z6S7GiE3w3brbRLUkYZDfH+AmwNp3lKup7gvW9e8VtYwXH0rT6yh/A01nlB9P2/kX8ZXZlHCJphc1A
aTavg9dQuymJtN2oxp3nC3yQoAoRoSHkdfGaLuD55vsJUNC+zClvQc9F+li7mEehuwausvW/+IB//3kGOvyv2L//wn2/zfqtTL/ewy45+//zJ8azyciT+JQwZPpsk/61GynaTfiGleOrqBQza/+LaHkr2kG+YBHD8oO6D9IB/RTbsZ9ooe7n37751Zsfdv2z7L/c8ruU+z/dJaxz9Of6/K/WrXWaMzkf7Wzk7My/3sMuF/+9yQSvH2cGt1n4vhA8U1IyLzdC57mIIiaX+DfCD9dy4GmN0w17Q+lVfJXRrkvxdB4n3OvmSwBvv4snnpMhQt4IlkuCZ7LaZOx9m/cNIZQazttyL2O4+7ww0kbmQhZyNbyH8wqT+R+hydyN1LQol+DWFYCZ9zU98jNyjcDy1131u9S1la4dM6d8WjX1JXvw0154jmjUZ54XoulrPeXMlOeeC5PPJcnnssTz+WJ52+95VXueGXsPsUdrxJKKKGEEkoooYQSSiihhBJKKKGEEkr448D/AT019aAAeAAA {{!}} base64 -d {{!}} tar -xzpf -&lt;br /&gt;
      &lt;br /&gt;
      if [[ &amp;quot;`authselect current -r`&amp;quot; != &amp;quot;custom/rcs&amp;quot; ]] ; then&lt;br /&gt;
        authselect select custom/rcs --force&lt;br /&gt;
        systemctl restart sssd&lt;br /&gt;
      fi&lt;br /&gt;
      &lt;br /&gt;
      systemctl enable sssd&lt;br /&gt;
&lt;br /&gt;
runcmd:&lt;br /&gt;
  - /usr/bin/expand_lvm_root}}&lt;br /&gt;
&lt;br /&gt;
== Infrastructure tools ==&lt;br /&gt;
&lt;br /&gt;
=== Generating a CloudStack API key ===&lt;br /&gt;
You can generate a CloudStack API key to automate infrastructure deployment using Terraform or CloudMonkey. A new API key can be generated by navigating to your profile page (top right) and then clicking on the &#039;Generate keys&#039; button.&lt;br /&gt;
[[File:CloudStack API Key.png|alt=CloudStack API Key|none|thumb|CloudStack API Key]]&lt;br /&gt;
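Requests made directly against the CloudStack API must be signed with your secret key. The usual scheme is: sort the parameters, URL-encode them into a query string, lowercase it, compute an HMAC-SHA1 with the secret key, and base64-encode the digest. Below is an illustrative Python sketch (the function name is ours; consult the CloudStack API documentation for the authoritative details):&lt;br /&gt;

```python
import base64
import hashlib
import hmac
import urllib.parse

def cloudstack_signature(params, api_key, secret_key):
    """Compute the base64 HMAC-SHA1 signature for a CloudStack API call.
    Append it (URL-encoded) as the 'signature' query parameter."""
    full = dict(params, apiKey=api_key)
    # Sort parameters and URL-encode them into a query string
    query = urllib.parse.urlencode(sorted(full.items()),
                                   quote_via=urllib.parse.quote)
    # Lowercase the query string, then HMAC-SHA1 it with the secret key
    digest = hmac.new(secret_key.encode(), query.lower().encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

The original (mixed-case) query string plus the signature parameter forms the final request URL.&lt;br /&gt;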
&lt;br /&gt;
=== CloudMonkey ===&lt;br /&gt;
CloudMonkey is a utility that makes it easier to interact with the CloudStack API. It can be used to automate VM actions (such as start, stop, and reboot) or infrastructure tasks (such as creating and destroying VMs, networks, and firewall rules). &lt;br /&gt;
&lt;br /&gt;
To get started with CloudMonkey, refer to the following resources:&lt;br /&gt;
&lt;br /&gt;
* Download from: &amp;lt;nowiki&amp;gt;https://github.com/apache/cloudstack-cloudmonkey/releases/tag/6.1.0&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
* Documentation at: &amp;lt;nowiki&amp;gt;https://cwiki.apache.org/confluence/display/CLOUDSTACK/CloudStack+cloudmonkey+CLI&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Terraform Integration ===&lt;br /&gt;
Terraform allows you to define infrastructure as code and can be used with CloudStack to configure your virtual machines and guest networks. Use the official CloudStack provider.&lt;br /&gt;
&lt;br /&gt;
The following is an example Terraform file for reference. Specify your CloudStack API keys in a separate &amp;lt;code&amp;gt;vars.tf&amp;lt;/code&amp;gt; file.&lt;br /&gt;
{{Highlight|code=# Configure the CloudStack Provider&lt;br /&gt;
terraform {&lt;br /&gt;
  required_providers {&lt;br /&gt;
    cloudstack = {&lt;br /&gt;
      source = &amp;quot;cloudstack/cloudstack&amp;quot;&lt;br /&gt;
      version = &amp;quot;0.4.0&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
provider &amp;quot;cloudstack&amp;quot; {&lt;br /&gt;
  api_url    = &amp;quot;${var.cloudstack_api_url}&amp;quot;&lt;br /&gt;
  api_key    = &amp;quot;${var.cloudstack_api_key}&amp;quot;&lt;br /&gt;
  secret_key = &amp;quot;${var.cloudstack_secret_key}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create a new VPC&lt;br /&gt;
resource &amp;quot;cloudstack_vpc&amp;quot; &amp;quot;default&amp;quot; {&lt;br /&gt;
  name = &amp;quot;wedgenet-vpc&amp;quot;&lt;br /&gt;
  display_text = &amp;quot;wedgenet-vpc&amp;quot;&lt;br /&gt;
  cidr = &amp;quot;100.64.0.0/20&amp;quot;&lt;br /&gt;
  vpc_offering = &amp;quot;Default VPC offering&amp;quot;&lt;br /&gt;
  zone = &amp;quot;zone1&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create a new ACL&lt;br /&gt;
resource &amp;quot;cloudstack_network_acl&amp;quot; &amp;quot;default&amp;quot; {&lt;br /&gt;
    name  = &amp;quot;vpc-acl&amp;quot;&lt;br /&gt;
    vpc_id = &amp;quot;${cloudstack_vpc.default.id}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# One ingress and one egress rule for the ACL&lt;br /&gt;
resource &amp;quot;cloudstack_network_acl_rule&amp;quot; &amp;quot;ingress&amp;quot; {&lt;br /&gt;
    acl_id = &amp;quot;${cloudstack_network_acl.default.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    rule {&lt;br /&gt;
        action       = &amp;quot;allow&amp;quot;&lt;br /&gt;
        cidr_list    = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
        protocol     = &amp;quot;tcp&amp;quot;&lt;br /&gt;
        ports        = [&amp;quot;22&amp;quot;, &amp;quot;80&amp;quot;, &amp;quot;443&amp;quot;]&lt;br /&gt;
        traffic_type = &amp;quot;ingress&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;cloudstack_network_acl_rule&amp;quot; &amp;quot;egress&amp;quot; {&lt;br /&gt;
    acl_id = &amp;quot;${cloudstack_network_acl.default.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    rule {&lt;br /&gt;
        action       = &amp;quot;allow&amp;quot;&lt;br /&gt;
        cidr_list    = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
        protocol     = &amp;quot;all&amp;quot;&lt;br /&gt;
        traffic_type = &amp;quot;egress&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Create a new network in the VPC&lt;br /&gt;
resource &amp;quot;cloudstack_network&amp;quot; &amp;quot;primary&amp;quot; {&lt;br /&gt;
    name = &amp;quot;primary&amp;quot;&lt;br /&gt;
    display_text = &amp;quot;primary&amp;quot;&lt;br /&gt;
    cidr = &amp;quot;100.64.1.0/24&amp;quot;&lt;br /&gt;
    network_offering = &amp;quot;DefaultIsolatedNetworkOfferingForVpcNetworks&amp;quot;&lt;br /&gt;
    acl_id = &amp;quot;${cloudstack_network_acl.default.id}&amp;quot;&lt;br /&gt;
    vpc_id = &amp;quot;${cloudstack_vpc.default.id}&amp;quot;&lt;br /&gt;
    zone = &amp;quot;zone1&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create a new public IP address for this network&lt;br /&gt;
resource &amp;quot;cloudstack_ipaddress&amp;quot; &amp;quot;public_ip&amp;quot; {&lt;br /&gt;
    vpc_id = &amp;quot;${cloudstack_vpc.default.id}&amp;quot;&lt;br /&gt;
    network_id = &amp;quot;${cloudstack_network.primary.id}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create VMs. &lt;br /&gt;
resource &amp;quot;cloudstack_instance&amp;quot; &amp;quot;vm&amp;quot; {&lt;br /&gt;
  count = 1&lt;br /&gt;
  name = &amp;quot;vm${count.index+1}&amp;quot;&lt;br /&gt;
  zone = &amp;quot;zone1&amp;quot;&lt;br /&gt;
  service_offering = &amp;quot;rcs.c4&amp;quot;&lt;br /&gt;
  template = &amp;quot;RockyLinux 8.5&amp;quot;&lt;br /&gt;
  network_id = &amp;quot;${cloudstack_network.primary.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  # Cloud Init data can be used to configure your VM on first startup if your template supports Cloud Init&lt;br /&gt;
  user_data = &amp;lt;&amp;lt;EOF&lt;br /&gt;
#cloud-config&lt;br /&gt;
&lt;br /&gt;
# Require specific packages&lt;br /&gt;
packages:&lt;br /&gt;
 - tmux&lt;br /&gt;
 - git&lt;br /&gt;
 - tcpdump&lt;br /&gt;
&lt;br /&gt;
EOF&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
=== Cannot create a volume snapshot ===&lt;br /&gt;
Volume snapshots can only be taken on VMs that are powered off.&lt;br /&gt;
&lt;br /&gt;
=== Cannot create a VM snapshot ===&lt;br /&gt;
Disk-only VM snapshots cannot be taken when the VM is running. If you intend to snapshot a running system, you must also snapshot its memory.&lt;br /&gt;
&lt;br /&gt;
=== VM state is still running after shutdown ===&lt;br /&gt;
After running &#039;shutdown&#039; inside a VM, the VM state reported by CloudStack may still be Running. &lt;br /&gt;
&lt;br /&gt;
Please do a force shutdown from the CloudStack management console. CloudStack does not detect power state changes made from inside the guest, so the reported VM state is not updated when a VM is powered off outside of CloudStack (likely a bug).&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack_User_Guide&amp;diff=1849</id>
		<title>CloudStack User Guide</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack_User_Guide&amp;diff=1849"/>
		<updated>2022-06-02T16:56:52Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* Exposed to the Internet */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is a user&#039;s guide on using CloudStack provided by Research Computing Services.&lt;br /&gt;
&lt;br /&gt;
== Introduction==&lt;br /&gt;
Apache CloudStack is an Infrastructure as a Service (IaaS) platform that allows users to quickly spin up Linux and other non-Windows virtual machines. RCS provides this service to help researchers quickly set up and prototype short-term research-related software on premises. CloudStack is not appropriate for workloads that depend on Windows. Services set up on CloudStack virtual machines can be accessed from the campus network and also the internet if required.&lt;br /&gt;
&lt;br /&gt;
Access to CloudStack can be requested via [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c ServiceNow].&lt;br /&gt;
&lt;br /&gt;
Please refer to our [[CloudStack End User Agreement]] for acceptable uses and requirements.&lt;br /&gt;
&lt;br /&gt;
===Accessing the CloudStack management console===&lt;br /&gt;
&lt;br /&gt;
The CloudStack management console is a web-based portal that allows you to view and manage your cloud infrastructure, including virtual machines, storage, and networks. Any modern web browser, including Chrome, Firefox, Edge, and Safari, is supported. &lt;br /&gt;
&lt;br /&gt;
Accessing the CloudStack management console is possible only from an IT-managed computer or through the IT General VPN when working on unmanaged machines (e.g. AirUC) or off campus (e.g. at home). Please review the IT [https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=52a169d6dbe5bc506ad32637059619cd knowledge base article on connecting to the General VPN] or contact IT support if you need assistance connecting to the General VPN. &lt;br /&gt;
[[File:CloudStack VPN Connection.png|alt=CloudStack VPN Connection|none|thumb|CloudStack VPN Connection]]&lt;br /&gt;
=== Login to CloudStack===&lt;br /&gt;
&lt;br /&gt;
To log in to CloudStack, navigate to https://cloudstack.rcs.ucalgary.ca/. If this site fails to load, please make sure you are either on an IT-managed computer or connected to the General VPN.&lt;br /&gt;
&lt;br /&gt;
Sign in to CloudStack using the Single Sign-On option as shown in the image below. This method will require you to authenticate through our central authentication service using your University of Calgary IT credentials and will require multi-factor authentication. You must have multi-factor authentication set up either via your phone or with the Microsoft Authenticator app.&lt;br /&gt;
[[File:CloudStack Login Page.png|alt=CloudStack Login Page|none|thumb|CloudStack Login Page]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; Due to a bug with the UI, if the Single Sign-On option is disabled, please refresh the login page and try again. This issue should be addressed in our next update for CloudStack.&lt;br /&gt;
&lt;br /&gt;
=== CloudStack Dashboard===&lt;br /&gt;
&lt;br /&gt;
After logging in, you will be presented with your CloudStack management console. The dashboard shows you a general overview of your account&#039;s status.&lt;br /&gt;
[[File:CloudStack Dashboard.png|alt=CloudStack Dashboard|none|thumb|CloudStack Dashboard]]On the right hand side of the dashboard, you will also see recent activity and events performed within your CloudStack account.&lt;br /&gt;
&lt;br /&gt;
If you wish to see your CloudStack account resource quota and allocation, navigate to: &amp;lt;code&amp;gt;Accounts -&amp;gt; Click on your account -&amp;gt; Resources&amp;lt;/code&amp;gt;. &lt;br /&gt;
[[File:CloudStack Resource Quota.png|alt=CloudStack Resource Quota|none|thumb|CloudStack Resource Quota]]&lt;br /&gt;
&lt;br /&gt;
== Working with virtual machines==&lt;br /&gt;
&lt;br /&gt;
CloudStack allows you to control the lifecycle of virtual machines within your cloud account. VMs may be started, stopped, rebooted, or destroyed within your management console.&lt;br /&gt;
&lt;br /&gt;
===Create a VM===&lt;br /&gt;
&lt;br /&gt;
To create a new VM, enter the CloudStack management console and navigate to:  &amp;lt;code&amp;gt;Compute -&amp;gt; Instances -&amp;gt; Add Instance&amp;lt;/code&amp;gt;[[File:CloudStack Instance Summary.png|alt=CloudStack Instance Summary|thumb|CloudStack Instance Summary|493x493px]]&lt;br /&gt;
&lt;br /&gt;
Virtual Machines require the following details:&lt;br /&gt;
&lt;br /&gt;
# &#039;&#039;&#039;Deployment zone&#039;&#039;&#039;. Your account will already be placed in the appropriate zone.&lt;br /&gt;
# &#039;&#039;&#039;Boot template or ISO&#039;&#039;&#039;. You may choose either a pre-created template or boot from a custom CD-ROM ISO file.&lt;br /&gt;
# &#039;&#039;&#039;Compute offering&#039;&#039;&#039;. You may select an appropriate size for your new VM. Resources will be counted against your account&#039;s quota.&lt;br /&gt;
# &#039;&#039;&#039;Data Disk&#039;&#039;&#039;. You may choose to add an additional virtual disk to your VM to store your data. Alternatively, if you wish to use a single virtual disk for your VM, you may choose to override the size of your root disk in step 2 and select &#039;No thanks&#039; in this step.&lt;br /&gt;
# &#039;&#039;&#039;Networks&#039;&#039;&#039;. You may choose one or more networks your VM should connect to. All CloudStack accounts come with a default network already created and ready to be used.&lt;br /&gt;
# &#039;&#039;&#039;SSH keypairs&#039;&#039;&#039;. For templates that support custom SSH key pairs, you may choose to use a custom SSH keypair to be installed as part of the deployment process.&lt;br /&gt;
# &#039;&#039;&#039;Advanced settings&#039;&#039;&#039;. For templates that support custom user-data (Cloud-Init), you may choose to enable the advanced settings and provide your own Cloud-Init user-data payload. More on this in the advanced tasks section below.&lt;br /&gt;
# &#039;&#039;&#039;Other VM details&#039;&#039;&#039;. You may give your new VM a friendly name and make it part of a group. Groups allow you to group related VMs together for better organization. You may change these details at a later time.&lt;br /&gt;
&lt;br /&gt;
When you are done, review the instance summary on the right hand side and then click on the &#039;Launch Virtual Machine&#039; button.&lt;br /&gt;
&lt;br /&gt;
====Choosing a virtual machine template====&lt;br /&gt;
&lt;br /&gt;
We provide Rocky Linux 8.5 and Ubuntu Server 22.04 LTS templates for your convenience. These templates are pre-built images with the operating system installed and ready for use. Our templates also support further automated setup using Cloud-Init configuration data that can be provided when deploying a new VM. Currently, we offer the following templates: &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Template&lt;br /&gt;
!Cloud-Init Support&lt;br /&gt;
!Password Support&lt;br /&gt;
!Default Username&lt;br /&gt;
|-&lt;br /&gt;
|Rocky Linux 8.5&lt;br /&gt;
|Yes&lt;br /&gt;
|Yes&lt;br /&gt;
|rocky&lt;br /&gt;
|-&lt;br /&gt;
|Ubuntu Server 22.04&lt;br /&gt;
|Yes&lt;br /&gt;
|Yes&lt;br /&gt;
|ubuntu&lt;br /&gt;
|}&lt;br /&gt;
Rocky Linux is an open source Linux distribution that is binary-compatible with Red Hat Enterprise Linux and is what RCS recommends.&lt;br /&gt;
&lt;br /&gt;
For templates that support passwords, the generated password that appears after a VM is created is applied to the default username.&lt;br /&gt;
&lt;br /&gt;
Security note: All VM templates are configured with SSH password authentication enabled. You should be able to SSH to your VM from another system connected to the same guest network. Do not expose port 22 unless required; we highly recommend key-based authentication.&lt;br /&gt;
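Once key-based login is working, you may wish to turn password authentication off. The following is a minimal sketch; on a real VM you would edit &amp;lt;code&amp;gt;/etc/ssh/sshd_config&amp;lt;/code&amp;gt; as root and restart sshd, but here the edit is demonstrated on a temporary copy of the file so it can be previewed safely:&lt;br /&gt;

```shell
# Sketch: disable SSH password authentication once key-based login works.
# On the VM itself, run the sed against /etc/ssh/sshd_config as root and
# then restart sshd; here we operate on a temporary copy for illustration.
cfg=$(mktemp)
printf 'PermitRootLogin no\nPasswordAuthentication yes\n' > "$cfg"

# Flip PasswordAuthentication from yes to no
sed -i 's/^PasswordAuthentication yes$/PasswordAuthentication no/' "$cfg"

grep '^PasswordAuthentication' "$cfg"   # prints: PasswordAuthentication no
# On the VM itself: sudo systemctl restart sshd
```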
&lt;br /&gt;
==== Virtual machine credentials ====&lt;br /&gt;
VM templates that support passwords will have a randomly generated password set when the VM is first created or when a password reset is requested (available only while the VM is powered off). The randomly generated 6-character password is displayed as a notification in your CloudStack management console when a new password is set. &lt;br /&gt;
[[File:CloudStack VM Password.png|alt=CloudStack VM Password|none|thumb|CloudStack VM Password]]&lt;br /&gt;
This password is set on the default username for your template. For example, the Rocky Linux VM template sets this password on the &#039;&#039;&#039;rocky&#039;&#039;&#039; user account. You may become the super user by logging in as the &amp;lt;code&amp;gt;rocky&amp;lt;/code&amp;gt; user and then running &amp;lt;code&amp;gt;sudo su&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note: If you specify a custom Cloud-Init config that creates additional users or sets account passwords, the displayed password will be overridden and have no effect.&lt;br /&gt;
&lt;br /&gt;
=== Creating a custom template===&lt;br /&gt;
&lt;br /&gt;
Alternatively, you may decide to install a custom OS, such as a different Linux distribution or another UNIX-based operating system, and create a template from it. To create a custom template:&lt;br /&gt;
&lt;br /&gt;
# Create a new virtual machine and select your custom ISO media. If you wish to upload your own ISO, see the &#039;register ISO&#039; section below.&lt;br /&gt;
# Start the virtual machine and proceed through the OS setup process&lt;br /&gt;
# Once the system has been set up, prepare the VM to be templated by removing any host-specific files such as SSH host keys, static network configuration settings, temporary files and caches.&lt;br /&gt;
# Power off the virtual machine&lt;br /&gt;
# Navigate to the virtual machine page and click on the &#039;create template&#039; button&lt;br /&gt;
[[File:CloudStack Instance Controls.png|alt=CloudStack Instance Controls|none|thumb|CloudStack Instance Controls]]&lt;br /&gt;
&lt;br /&gt;
===Registering a custom ISO===&lt;br /&gt;
&lt;br /&gt;
You may add a custom ISO file to your CloudStack account either by uploading the ISO directly through the web console or by providing a URL to the ISO file on the internet.&lt;br /&gt;
&lt;br /&gt;
Please do not install Windows on our CloudStack infrastructure. If you need a Windows VM, please contact us as we have alternative solutions.&lt;br /&gt;
&lt;br /&gt;
==== Download an ISO from the internet====&lt;br /&gt;
[[File:CloudStack Download ISO.png|alt=CloudStack Download ISO|thumb|CloudStack Download ISO|190x190px]]&lt;br /&gt;
&lt;br /&gt;
To add a custom ISO file from the internet, enter the CloudStack management console and navigate to:  &amp;lt;code&amp;gt;Images -&amp;gt; ISOs -&amp;gt; Register ISO&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You may check the state of the ISO file by clicking on it. If the file downloads successfully, its ready state should become ‘true’. The ISO file will only appear in the selection list once it has been downloaded successfully.&lt;br /&gt;
[[File:CloudStack ISO Ready.png|alt=CloudStack ISO Ready|none|thumb|172x172px|CloudStack ISO Ready]]&lt;br /&gt;
&lt;br /&gt;
====Upload a custom ISO====&lt;br /&gt;
&lt;br /&gt;
To upload an ISO file, enter the CloudStack management console and navigate to:  &amp;lt;code&amp;gt;Images -&amp;gt; ISOs -&amp;gt; Upload ISO from Local (icon)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:CloudStack Upload ISO.png|alt=CloudStack Upload ISO|none|thumb|CloudStack Upload ISO|217x217px]]&lt;br /&gt;
&lt;br /&gt;
===Connecting to your VM console===&lt;br /&gt;
The CloudStack management console has a KVM (keyboard, video, mouse) feature built-in, allowing you to remotely connect to and interact with your virtual machine. To connect to your virtual machine&#039;s console, navigate to: &amp;lt;code&amp;gt;Compute -&amp;gt; Instances -&amp;gt; Your Instance -&amp;gt; View console&amp;lt;/code&amp;gt;.&lt;br /&gt;
[[File:CloudStack View Console.png|alt=CloudStack View Console|none|thumb|CloudStack View Console]]&lt;br /&gt;
&lt;br /&gt;
=== Expanding a VM disk ===&lt;br /&gt;
[[File:CloudStack Expand Volume.png|alt=CloudStack Expand Volume|thumb|CloudStack Expand Volume]]&lt;br /&gt;
Virtual machine disks can be expanded after they are created within CloudStack. However, you will need to expand the partitions and filesystems manually.&lt;br /&gt;
&lt;br /&gt;
To grow an existing disk:&lt;br /&gt;
&lt;br /&gt;
# Go into your VM details page and click on ‘Volumes’.&lt;br /&gt;
# Click on the volume you wish to expand.&lt;br /&gt;
# Click on the ‘Resize Volume’ icon in the top right.&lt;br /&gt;
Once the volume has been expanded, you should be able to verify that the disk has grown with &amp;lt;code&amp;gt;lsblk&amp;lt;/code&amp;gt;. The kernel should also log some messages when this occurs. However, you will still need to expand any partitions, volumes, and filesystems on your system manually.&lt;br /&gt;
&lt;br /&gt;
To expand your partition, use the &amp;lt;code&amp;gt;growpart&amp;lt;/code&amp;gt; command followed by your disk device and partition number. Eg: &amp;lt;code&amp;gt;/usr/bin/growpart /dev/vda 3&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For LVM volume sets, you can expand the volume using the &amp;lt;code&amp;gt;pvresize&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;lvresize&amp;lt;/code&amp;gt; commands:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/sbin/pvresize -y -q /dev/vda3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/sbin/lvresize -y -q -r -l +100%FREE /dev/mapper/&amp;lt;volume-name&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To expand your filesystem:&lt;br /&gt;
&lt;br /&gt;
* XFS: &amp;lt;code&amp;gt;/usr/sbin/xfs_growfs &amp;lt;volume&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
* EXT: &amp;lt;code&amp;gt;resize2fs &amp;lt;volume&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Destroying a VM ===&lt;br /&gt;
If you need to delete a VM, click on the red garbage bin icon in the VM instance page. All deletions are irreversible, so please make sure you have a copy of any data you need before proceeding.&lt;br /&gt;
[[File:CloudStack Delete VM.png|alt=CloudStack Delete VM|none|thumb|CloudStack Delete VM]]&lt;br /&gt;
The VM root volume can be deleted immediately by enabling the &#039;Expunge&#039; option in the dialog box. If left disabled, the VM root volume will linger for a day before it is deleted by the system. You may wish to expunge a volume if you are running low on space or volume quota.&lt;br /&gt;
&lt;br /&gt;
== Managing your virtual machine ==&lt;br /&gt;
You will be able to run whatever virtual machine you wish (with the exception of Windows).  We cannot provide specific management advice for each and every operating system available, but we can offer some suggestions on important considerations to be aware of.&lt;br /&gt;
&lt;br /&gt;
=== Educate yourself ===&lt;br /&gt;
All operating systems (OS) have user groups, web sites, wikis, or mailing lists somewhere on the internet.  They can be a valuable resource.  Most OS providers have on-line documentation that describes using their product.  For example Rocky Linux, used by RCS, has a [https://docs.rockylinux.org/ documentation site].  These are excellent resources and can help you understand how to manage your virtual machine.&lt;br /&gt;
&lt;br /&gt;
=== Choose an appropriate OS edition ===&lt;br /&gt;
Many OSs will provide various editions that are tailored to a specific use case.  A desktop VM may not be appropriate when you need to run a database server.  The OS provider will have guides on how to choose an edition.&lt;br /&gt;
&lt;br /&gt;
=== Configure your VM&#039;s OS ===&lt;br /&gt;
It is critical to ensure that the only services running on your VM are the ones you must run.  Each OS has a way of managing what services are running (sysinit, systemd etc).  Please ensure that unnecessary services have been disabled.&lt;br /&gt;
&lt;br /&gt;
Many OSs, and many applications, come with pre-configured accounts.  Make sure they are either disabled or not allowed to log in.&lt;br /&gt;
&lt;br /&gt;
All un-used accounts should be disabled or preferably deleted.&lt;br /&gt;
&lt;br /&gt;
All accounts should have strong [https://it.ucalgary.ca/it-security/passwords-do-i-have-change-them/ passwords].&lt;br /&gt;
&lt;br /&gt;
Many OSs have the ability to update themselves automatically.  If possible, please consider enabling this.&lt;br /&gt;
&lt;br /&gt;
Updates can also be configured to skip certain software if it will interfere with your research, but please be advised that doing so could place your system at risk.&lt;br /&gt;
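On the Rocky Linux template, one way to enable automatic updates (a sketch, assuming the &amp;lt;code&amp;gt;dnf-automatic&amp;lt;/code&amp;gt; package has been installed with &amp;lt;code&amp;gt;yum -y install dnf-automatic&amp;lt;/code&amp;gt; and its timer enabled with &amp;lt;code&amp;gt;systemctl enable --now dnf-automatic.timer&amp;lt;/code&amp;gt;) is to set &amp;lt;code&amp;gt;apply_updates&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;/etc/dnf/automatic.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;

```ini
# /etc/dnf/automatic.conf (excerpt) -- sketch, assuming dnf-automatic is installed
[commands]
# "security" limits automatic updates to security errata; "default" applies all updates.
upgrade_type = security
# Actually apply the downloaded updates instead of only notifying.
apply_updates = yes
```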
&lt;br /&gt;
=== Exposed to the Internet ===&lt;br /&gt;
Not everyone is a computer security expert.  If your VM must be exposed to the internet, please consider using Trend Micro Cloud One Workload Security from IT security to enhance your security posture.&lt;br /&gt;
&lt;br /&gt;
== Virtual machine networking ==&lt;br /&gt;
The CloudStack platform allows you to define custom virtual private cloud (VPC) networks, each of which can contain any number of guest networks that your virtual machines connect to. Each guest network has its own private network address space and is not directly routable from campus or the internet. For virtual machines that require internet access, the VPC or guest network they are connected to must have a NAT IP address associated. The following diagram shows how a guest network connects to the internet and campus network.&lt;br /&gt;
[[File:CloudStack Guest Networking.png|alt=CloudStack Guest Networking|none|thumb|CloudStack Guest Networking]]&lt;br /&gt;
In order to expose a virtual machine&#039;s services to campus or the internet, the appropriate port forwardings must be set up on the VPC containing the guest network. More on this will be discussed in the next section.&lt;br /&gt;
&lt;br /&gt;
Having multiple guest networks allows for more advanced network setups but is not required. We recommend using a single flat network for most workloads. &lt;br /&gt;
&lt;br /&gt;
By default, all CloudStack accounts come with a default VPC and guest network set up with a NAT IP assigned.&lt;br /&gt;
&lt;br /&gt;
=== IP addresses ===&lt;br /&gt;
Due to design decisions made during the setup of the CloudStack platform, only internal 10.44.12X.X IP addresses can be assigned to your VPC. These IP addresses are accessible from the university campus network. However, a special range of IP addresses can also be accessed from the internet.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!IP address range&lt;br /&gt;
!Accessible from&lt;br /&gt;
!Internet IP mapping&lt;br /&gt;
|-&lt;br /&gt;
|10.44.120.3-128&lt;br /&gt;
|Campus, Internet&lt;br /&gt;
|10.44.120.X maps to 136.159.140.X (ports 80 and 443 only)&lt;br /&gt;
|-&lt;br /&gt;
|10.44.120.129-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|10.44.121.0-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|10.44.122.0-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|10.44.123.0-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|}&lt;br /&gt;
If you need a service exposed to the internet, please request a public IP address using our [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c ServiceNow request form]. Additionally, if your service does not use port 80 or 443, you must also submit a firewall change request to allow the additional port through. &lt;br /&gt;
&lt;br /&gt;
=== Exposing a network service to campus ===&lt;br /&gt;
In order to make a virtual machine visible to the campus network, you must first set up port forwarding from a campus IP address to your virtual machine.&lt;br /&gt;
&lt;br /&gt;
To create a port forwarding rule, navigate to &amp;lt;code&amp;gt;Network -&amp;gt; VPC -&amp;gt; Select your VPC -&amp;gt; Public IP Addresses&amp;lt;/code&amp;gt;. If you do not have any available IP addresses, click on &#039;Acquire New IP&#039; and select an available IP address. Click on the IP address you wish to use and then navigate to the &#039;Port Forwarding&#039; tab. Enter the private port range, the public port range, the protocol, and select the target VM. &lt;br /&gt;
&lt;br /&gt;
For example, to port forward only HTTP (tcp/80) traffic, you would enter the following:&lt;br /&gt;
[[File:CloudStack Port Forwarding.png|alt=CloudStack Port Forwarding|none|thumb|CloudStack Port Forwarding]]Once the port forwarding is created, you should be able to access the service from on campus. If for some reason access to your service does not work, there may be a firewall restriction on IT&#039;s network. In such circumstances, please contact us for assistance.&lt;br /&gt;
&lt;br /&gt;
=== Exposing a network to the internet ===&lt;br /&gt;
Exposing a service to the internet works the same way as exposing it to campus. However, you must create the port forwarding rule on an IP address that maps to an internet IP address, as outlined in the IP address table above. If your account does not have one of these IP addresses available, please request one using the [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c ServiceNow request form].&lt;br /&gt;
&lt;br /&gt;
By default, only ports 80 and 443 are allowed through the Internet IP address. For all other ports, please [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=47cd16d113153a00b5b4ff82e144b0bf create a firewall rule change request in ServiceNow].&lt;br /&gt;
&lt;br /&gt;
== Cloud-Init Automation ==&lt;br /&gt;
&lt;br /&gt;
=== Ubuntu ===&lt;br /&gt;
The following Cloud-Init configs apply to Ubuntu VM templates.&lt;br /&gt;
&lt;br /&gt;
==== Ubuntu desktop ====&lt;br /&gt;
Use the following Cloud-Init config with the Ubuntu Server template to set up an Ubuntu desktop environment. The setup step takes up to 15 minutes to complete, and you should see a login screen when the setup finishes. &lt;br /&gt;
&lt;br /&gt;
Adjust the root and user password as desired. Below, the test user password is set to blank, allowing you to log in to Gnome without a password.&lt;br /&gt;
{{Highlight|code=#cloud-config&lt;br /&gt;
disable_root: false&lt;br /&gt;
&lt;br /&gt;
users:&lt;br /&gt;
  - name: user&lt;br /&gt;
    lock_passwd: false&lt;br /&gt;
    inactive: false&lt;br /&gt;
    gecos: Test User&lt;br /&gt;
    primary_group: user&lt;br /&gt;
    groups: wheel&lt;br /&gt;
    passwd: $1$ADUODeAy$eCJ1lPSxhSGmSvrmWxjLC1&lt;br /&gt;
      &lt;br /&gt;
chpasswd:&lt;br /&gt;
  list: {{!}}&lt;br /&gt;
    root:password&lt;br /&gt;
  expire: false&lt;br /&gt;
    &lt;br /&gt;
# Install a graphical desktop&lt;br /&gt;
runcmd:&lt;br /&gt;
  - DEBIAN_FRONTEND=noninteractive apt -y upgrade&lt;br /&gt;
  - DEBIAN_FRONTEND=noninteractive apt -y install tasksel&lt;br /&gt;
  - tasksel install gnome-desktop&lt;br /&gt;
  - systemctl set-default graphical.target&lt;br /&gt;
  - systemctl isolate graphical.target}}&lt;br /&gt;
&lt;br /&gt;
=== Rocky Linux ===&lt;br /&gt;
The following Cloud-Init configs apply to Rocky Linux templates.&lt;br /&gt;
&lt;br /&gt;
==== Rocky Linux Desktop ====&lt;br /&gt;
Use the following Cloud-Init config with the Rocky Linux template to set up a Rocky Linux desktop environment. The setup step takes up to 10 minutes to complete, and you should see a login screen when the setup finishes. &lt;br /&gt;
&lt;br /&gt;
Adjust the root and user password as desired. Below, the test user password is set to blank, allowing you to log in to Gnome without a password.&lt;br /&gt;
{{Highlight|code=#cloud-config&lt;br /&gt;
disable_root: false&lt;br /&gt;
&lt;br /&gt;
users:&lt;br /&gt;
  - name: user&lt;br /&gt;
    lock_passwd: false&lt;br /&gt;
    inactive: false&lt;br /&gt;
    gecos: Test User&lt;br /&gt;
    primary_group: user&lt;br /&gt;
    groups: wheel&lt;br /&gt;
    passwd: $1$ADUODeAy$eCJ1lPSxhSGmSvrmWxjLC1&lt;br /&gt;
      &lt;br /&gt;
chpasswd:&lt;br /&gt;
  list: {{!}}&lt;br /&gt;
    root:password&lt;br /&gt;
  expire: false&lt;br /&gt;
    &lt;br /&gt;
# Install a graphical desktop&lt;br /&gt;
runcmd:&lt;br /&gt;
  - yum -y install &amp;quot;@Workstation&amp;quot;&lt;br /&gt;
  - systemctl set-default graphical.target&lt;br /&gt;
  - systemctl isolate graphical.target}}&lt;br /&gt;
&lt;br /&gt;
==== Rocky Linux Docker host ====&lt;br /&gt;
Use the following Cloud-Init config with the Rocky Linux template to set up a new Docker host. This server can then be used to run Docker containers. Also included are:&lt;br /&gt;
&lt;br /&gt;
# The docker-compose utility to help deploy container stacks more easily&lt;br /&gt;
# A helper script to expand the &amp;lt;code&amp;gt;/var&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; filesystems on first startup based on the available space in the ROOT volume. &lt;br /&gt;
&lt;br /&gt;
Use the CloudStack generated password with the &#039;&amp;lt;code&amp;gt;rocky&amp;lt;/code&amp;gt;&#039; default user account to log in.{{Highlight|code=#cloud-config&lt;br /&gt;
&lt;br /&gt;
write_files:&lt;br /&gt;
  - path: /usr/bin/expand_lvm_root&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      /usr/bin/growpart /dev/vda 3&lt;br /&gt;
      /usr/sbin/pvresize -y -q /dev/vda3&lt;br /&gt;
      /usr/sbin/lvresize -y -q -r -l +50%FREE /dev/mapper/*root&lt;br /&gt;
      /usr/sbin/lvresize -y -q -r -l +100%FREE /dev/mapper/*var&lt;br /&gt;
      /usr/sbin/xfs_growfs /&lt;br /&gt;
      /usr/sbin/xfs_growfs /var&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
&lt;br /&gt;
  - path: /usr/bin/setup_docker&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      yum install -y yum-utils&lt;br /&gt;
      yum-config-manager \&lt;br /&gt;
         --add-repo \&lt;br /&gt;
         https://download.docker.com/linux/centos/docker-ce.repo&lt;br /&gt;
      yum install -y docker-ce docker-ce-cli containerd.io&lt;br /&gt;
      systemctl start docker&lt;br /&gt;
      systemctl enable docker&lt;br /&gt;
      &lt;br /&gt;
      curl -L &amp;quot;https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)&amp;quot; -o /usr/local/bin/docker-compose&lt;br /&gt;
      chmod +x /usr/local/bin/docker-compose&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
&lt;br /&gt;
  - path: /root/docker-compose.yml&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      version: &#039;3.3&#039;&lt;br /&gt;
      services:&lt;br /&gt;
        web:&lt;br /&gt;
          image: php:7.4-apache&lt;br /&gt;
          restart: always&lt;br /&gt;
          user: &amp;quot;0:0&amp;quot;&lt;br /&gt;
          volumes:&lt;br /&gt;
            - /var/www/html:/var/www/html&lt;br /&gt;
          ports:&lt;br /&gt;
            - &amp;quot;80:80&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  - path: /var/www/html/index.php&lt;br /&gt;
    permissions: &#039;0644&#039;&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      &amp;lt;h1&amp;gt;Hello there!&amp;lt;/h1&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;I see you from &amp;lt;?php echo $_SERVER[&#039;REMOTE_ADDR&#039;]; ?&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;&amp;lt;nowiki&amp;gt;pre&amp;lt;/nowiki&amp;gt;&amp;gt;&amp;lt;?php print_r($_SERVER); ?&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
    &lt;br /&gt;
# Ensure VM has the largest / possible&lt;br /&gt;
runcmd:&lt;br /&gt;
  - /usr/bin/expand_lvm_root&lt;br /&gt;
  - /usr/bin/setup_docker&lt;br /&gt;
  - cd /root; docker-compose up -d}}&lt;br /&gt;
&lt;br /&gt;
==== UC Authentication ====&lt;br /&gt;
Use the following Cloud-Init config to allow UC-based authentication. Local accounts with the same username as an IT account may use the IT credentials to log in.&lt;br /&gt;
{{Highlight|code=#cloud-config&lt;br /&gt;
&lt;br /&gt;
write_files:&lt;br /&gt;
  - path: /usr/bin/setup_uc_auth&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      yum install -y sssd sssd-dbus sssd-krb5 krb5-workstation authselect-compat&lt;br /&gt;
      &lt;br /&gt;
      cat &amp;lt;&amp;lt;EOF &amp;gt; /etc/sssd/sssd.conf&lt;br /&gt;
      [sssd]&lt;br /&gt;
      config_file_version = 2&lt;br /&gt;
      services = nss, pam, ifp&lt;br /&gt;
      domains = uc.ucalgary.ca&lt;br /&gt;
      &lt;br /&gt;
      [domain/uc.ucalgary.ca]&lt;br /&gt;
      id_provider = files&lt;br /&gt;
      debug_level = 5&lt;br /&gt;
      auth_provider = krb5&lt;br /&gt;
      chpass_provider = krb5&lt;br /&gt;
      &lt;br /&gt;
      krb5_realm = UC.UCALGARY.CA&lt;br /&gt;
      krb5_server = ITSODCSRV14.UC.UCALGARY.CA:88&lt;br /&gt;
      krb5_validate = false&lt;br /&gt;
      EOF&lt;br /&gt;
      chmod 600 /etc/sssd/sssd.conf&lt;br /&gt;
      &lt;br /&gt;
      cat &amp;lt;&amp;lt;EOF &amp;gt; /etc/krb5.conf&lt;br /&gt;
      [logging]&lt;br /&gt;
       default = FILE:/var/log/krb5libs.log&lt;br /&gt;
       kdc = FILE:/var/log/krb5kdc.log&lt;br /&gt;
       admin_server = FILE:/var/log/kadmind.log&lt;br /&gt;
      &lt;br /&gt;
      [libdefaults]&lt;br /&gt;
       default_realm = UC.UCALGARY.CA&lt;br /&gt;
       dns_lookup_realm = false&lt;br /&gt;
       dns_lookup_kdc = false&lt;br /&gt;
       ticket_lifetime = 24h&lt;br /&gt;
       renew_lifetime = 7d&lt;br /&gt;
       forwardable = true&lt;br /&gt;
      &lt;br /&gt;
      [realms]&lt;br /&gt;
       UC.UCALGARY.CA = {&lt;br /&gt;
        kdc = itsodcsrv14.uc.ucalgary.ca&lt;br /&gt;
       }&lt;br /&gt;
      &lt;br /&gt;
      [domain_realm]&lt;br /&gt;
       uc.ucalgary.ca = UC.UCALGARY.CA&lt;br /&gt;
       .uc.ucalgary.ca = UC.UCALGARY.CA&lt;br /&gt;
      EOF&lt;br /&gt;
      &lt;br /&gt;
      mkdir -p /etc/authselect/custom/rcs&lt;br /&gt;
      cd /etc/authselect/custom/rcs&lt;br /&gt;
      echo H4sIAMYsbGIAA+0ca3PbuDGf+StQ34dLMpEtybLdUevO6BKl56mTuJGv7Uwmo4FISMKJBHQAaEfNub+9uwBJkXrLshxnjjuTyCKJfWF3sQsuFPhS9CtB79keoQpw2mjYT4DZz1r1uPGs1qhX6yf12vFx/Vm1Vqs1jp+R6j6ZSiHWhipCnikpzarn1t3/TuErj8bhhBzccjOs6Igq41MVHBDen71WUey3mCsWHNx5W4wKpT+qSAGjI3lDQxj8SarB0UDIiB2FcsBFRfuKMfHZY4L2QpYbS2MzZMJwnxouxfnXefQHTaNi9nufhprdpQj6XAyYGisuzHIUuYcWIhlTrW/lRkxMNdO0OH5HdHdeXlDNjAGCuhJQ0IM4GjPFx0Om4PGjDM9n8pULP4wDtpkik78q1Lec/WifcMr8kSyE7fB/a9ssYf8Q2PiPc6/3RmNN/K826mez8f/07LSM/48BR4uD8dGaWOytGbc8BK8buSTu5odtFEyPitGxDH2LYXai9kFjjf8fn4HPT/3/GPy/fnZ6Uvr/Y8BXiP/gTHHOL/KZ0Z2HRpGmD2meszi5mIExjbpM3BxquRuOgIkJICE/kE7qsKQYGwjXRbSrfD2Xxe7EVp/yEOMGsjZWzOLSPASuCHJ83iCxwPtdwyN2XqtXqwvYSpHMMKPjfp/7HHFtyIydrWBnXeeFQkT4fak4ZImulwh134n3POr7Mk50sTUWGMy0RoE2hnmRAJHDgzLtwk1ewffmJq/gXZiJBf9iTSaHZGvTA0ZoGGumdsakY1AxC7q8j9qJeUD+SmqwPhAQi5kC7k8B69M4NOc9GhA7TutzOSLIB4g1EvJWnHPIFhT77HBbE9hNXZBmRNxYi0yTlHvhyexaA9cYvhDkGCMZDTfGMmITLiw7wMGNHLECuq2ZCjnIZnVUyeHZmis90YZFNhDtJJ0Mgl9lrxuNhpDwBVxZi4ioHp1Xq2dni0bNu0k2GPwkz8yn1GBqJDWj1FSWCFUwTDCxG+4DHUF8JUXgzBNNrws2u9sspB65k+4SW99s/RdgyNz4w0OsA/eUY6yr/05rtdn8r35S1n+PAjaSBU1rPGA4pA8pjCaJG6NbsS8zbuWDxmTk6jTMogZKxmOHYDsMdiAgEMxkOKYIcp69DEE60CUaMsLQ3twSRzbQBgnr2XprRtKBiCMOJFMWRaoJXcAxn5rCiG9XcRbK7T3RWOP/pyen+fqvgf5/XC3rv0eBb17eYTIbsJDaGs/+cV53dlGWjfsuG+O6zWh8qP43hYW1UQUQ7aqYHC9CYh4P6fx9eanU+3SGn6xiqBGX7CU5X656mKZ/i6uRv52T+rQa2QV3sWbaaQZdwkjwrZSQxqlBxqYiYrCQ0UHTfd4RoybdPlfadDHiz0+V5mYjI1ivmm4i9W6CuSSW9KW6BfdbwnS5tVFubSxmJvUL0oM4wkRXD2kgb70fcnojt0CKUCB3S5IrYG3EZ8pQqO1sZqkJ+JLig6H5C/CgMUaPmQjgsiA6jFW0247Hg0yg5WNXfVkkXdDR2GzKzmacPLH9pPr3sZ20TTQe3/4W05Abm+0UwzyxyuyiGLorRTgpULr3YgPOdFKrE+dT2y49dqMEg6eRo93YSZaIZfjKfbkcmnJf7jvbl3ssGEtt7Gv4PdJYt/93Um/M1P81iC9l/f8YsKFFWydAP7eeZuvVTX13uaf+6X9kEEQv8991/DJZnwvYswJrLhTYGEmtDSNqWKlNNCa5Jj3LayV55KDpvv+uh/IWc00W3G3q1DOUkqIdatVxQGG1nmJ8Yh6+Gj62W2/etfdLY43/n1VPT3P+XwP/rzaOy/3/R4G27boinU7njS1/XKIwu1P2nIauGHcZpU2MsQgKJy+8813B8zosZD62dBEz5JqMlcStc1egsRyDVMMD4GoyVhAuZJ/wAJk0E4+Kud09wHID95U+BAI4OrmgCYWIY3C46x8DpJJEVNABSwtBg8lbJMGtIU0B1
qTiTC8iEjF/SAXXkca6YYgcXr5pXb0i/2Cqx5QEJSnyVjF2cdU6JBcmYwKQkfedDkGcV613kLUYpvoUxDISdzycoMlkIF0yDuPBwOqiR32oaYPktgfM+lIIYNMKAnGSj0PkvN9nCmNUWqY4vaE+3uHuFBcwoZGTg/Ygc3dK9oGzHoPJjoEE7pIaOyO/Iv4x6KjpDY0Z6+bREXyLFTvk8ggHHkF+E3jeT5N0BXhVMBaK+2EQ6CHr6isZOVqKgpgKZRXJw+6dCYRvvO+BMbj5D8hziK/sxj08yU8DIzecZlnai0NyjTY0pDcMlQZKRSGFz7weE6zPQQ2QkbJUVn/InFE5wtgNhSqDJLIPsR4+KxEYgpq4R+1UmGGsPcgeY9AlEXEE82yNiesRrB9MWY2iln92HL9CcSYyJoG0hdItFXamRoyNEzXEQqD1g7bhOQXG6A+5YK/sKJgPzz5a8A1tXQb0ghz9GuPWBNdTT0mWVFQGw50LNAox8PAlL4dJc5Nu/Usb/B+KphEIr9DYwwkBtvG2GTpsCWpHLDep1iOSjRKLrIfEaDLD+Ql1bhROUl98/eH924u///KxdX3x4b1XmQPPa8EMOyFBb6hoiUpDH0MEP9qpmspySK5CRjWozNJ3j3iB9OMIptOJazDzYATsiDiXscPBDwx6P9TQwB75ADGtIGIqF72Bxd0qGFFNnRPmFm2Y4dsEIQn7wrWNZM6+CjyCUP9qXVy2frpskw9XKHnrkrxtt65/+djuLFBCThuFzbFm08OEJAndqXfjHSSM9Q9oAk3SSBvYJsiHZn5s+A2zQ2eiGCIGTYDN4k2vWKXNUMP3pTDKhyKLudFACB/Nh0q7VKSLBOqHK4vEbgMQW28cJkJlr1jm6OQ5tLaYPYoeC1Y3cLaQzj2aCITPITUWUfY0RObxWCqYZXQTnEpczNiXcch9bq09CTFIhAs7eKGNzXI82zecSHAJV4lraia3GNlojhf7zgmeZsE8urQsTfBs8t7qkLyXRNogGjEKSxlMxkzjNKJ67opwNI90i+RF5rGJ+Ck/ufbP9VOSe9h6no3Pbmpe5hoTX6bIkxc0KxFjOIdnwO3FIGRFfHB9Fhe+7Cniq0NUgqUUTPG+qItVQ4L9jQvfEIEx9OfrAdC91pg+BLFCHRcNPA5kgqFlN5vxAkYg8I8sraFpUmNzMHxAxXYhBOsJAp762REz/lHSYpBTg8taEiKvhwzsL9laRgNOuUoDBeoEvPS/eaOe7tsVhQXixF0HeRWNmMEZltNNEc97c9GxEa1z1X598fbiNaY1/764fv0zedO6bv3U6qyObdMY9x4XagjCr0jalAQJmqE9cG1tM7bexAbedAkEJ1K3ihu4C84XJw+BA2PgqRQam9w62Jeof7vQ2jJP21wHNO47G0yWGyO95P2dRbuINK57OH14TQqWECbzhD0uYJWgmXMVmmdA1ReO0EHSTZMJvIDoDArXMjPF4DphtkCQtd1McWQNNVugmXbeTPFMm2q2QJR130zxZH0126BJGnByWNyVdUja/2m9u7rMGavnvST58sitQeibOP1zy4vnkbwVpR+QFJMsDpBixF9OIV2ykJJdaNkmayzxCkvsOo6yJX7KIKRn7TZpXXY+TNXwEnMIO8ya9PM/v/iudjhKWAUf2//85eJj+137/XVnXzTWnv85qc7s/+C3cv/nMeAdHWGfinKJc6F4xDwzq5Jc7ZemiqTDkpBVLLEwJEUzewuuqNgPrOjYOrjzKjPhFgVK6+ZXENRSyfWKRDutDXguAjbX0CWkYpeYA8ySsJ/Avp8k5+RaxewAUX2CO5/hGXc08aHk3ZuaFzVqFA6JJZrOVwNLdJ0UBOttbEPKjynztNMtkTe5sExW61E2yaCRLTsiqBxCKzOmhZDrzr+CXEUV7cot+Ji32k2z6S7GiE3w3brbRLUkYZDfH+AmwNp3lKup7gvW9e8VtYwXH0rT6yh/A01nlB9P2/kX8ZXZlHCJphc1A
aTavg9dQuymJtN2oxp3nC3yQoAoRoSHkdfGaLuD55vsJUNC+zClvQc9F+li7mEehuwausvW/+IB//3kGOvyv2L//wn2/zfqtTL/ewy45+//zJ8azyciT+JQwZPpsk/61GynaTfiGleOrqBQza/+LaHkr2kG+YBHD8oO6D9IB/RTbsZ9ooe7n37751Zsfdv2z7L/c8ruU+z/dJaxz9Of6/K/WrXWaMzkf7Wzk7My/3sMuF/+9yQSvH2cGt1n4vhA8U1IyLzdC57mIIiaX+DfCD9dy4GmN0w17Q+lVfJXRrkvxdB4n3OvmSwBvv4snnpMhQt4IlkuCZ7LaZOx9m/cNIZQazttyL2O4+7ww0kbmQhZyNbyH8wqT+R+hydyN1LQol+DWFYCZ9zU98jNyjcDy1131u9S1la4dM6d8WjX1JXvw0154jmjUZ54XoulrPeXMlOeeC5PPJcnnssTz+WJ52+95VXueGXsPsUdrxJKKKGEEkoooYQSSiihhBJKKKGEEkr448D/AT019aAAeAAA {{!}} base64 -d {{!}} tar -xzpf -&lt;br /&gt;
      &lt;br /&gt;
      if [[ &amp;quot;`authselect current -r`&amp;quot; != &amp;quot;custom/rcs&amp;quot; ]] ; then&lt;br /&gt;
        authselect select custom/rcs --force&lt;br /&gt;
        systemctl restart sssd&lt;br /&gt;
      fi&lt;br /&gt;
      &lt;br /&gt;
      systemctl enable sssd&lt;br /&gt;
&lt;br /&gt;
runcmd:&lt;br /&gt;
  - /usr/bin/expand_lvm_root}}&lt;br /&gt;
&lt;br /&gt;
== Infrastructure tools ==&lt;br /&gt;
&lt;br /&gt;
=== Generating a CloudStack API key ===&lt;br /&gt;
You can request a CloudStack API key to automate infrastructure deployment using Terraform or CloudMonkey. Generate a new API key by navigating to your profile page (top right) and clicking the &#039;Generate keys&#039; button.&lt;br /&gt;
[[File:CloudStack API Key.png|alt=CloudStack API Key|none|thumb|CloudStack API Key]]&lt;br /&gt;
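The generated key pair signs every direct CloudStack API call: the signature is an HMAC-SHA1 of the lowercased, alphabetically sorted query string, computed with your secret key and base64-encoded. A minimal sketch with openssl (the key values are placeholders, and real requests must also URL-encode parameter values and the final signature):&lt;br /&gt;

```shell
#!/bin/sh
# Sketch of CloudStack API request signing; APIKEY and SECRET are placeholders.
APIKEY='your-api-key'
SECRET='your-secret-key'
# Parameters must be sorted alphabetically by name (they already are here).
QUERY="apikey=${APIKEY}&command=listVirtualMachines&response=json"
# Lowercase the query string, HMAC-SHA1 it with the secret key, base64-encode.
SIGNATURE=$(printf '%s' "$QUERY" | tr '[:upper:]' '[:lower:]' \
  | openssl dgst -sha1 -hmac "$SECRET" -binary | openssl base64)
echo "signature=${SIGNATURE}"
```

The result is appended to the request as a &amp;lt;code&amp;gt;signature=&amp;lt;/code&amp;gt; parameter.&lt;br /&gt;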
&lt;br /&gt;
=== CloudMonkey ===&lt;br /&gt;
CloudMonkey is a utility that makes it easier to interact with the CloudStack API. This tool may be used to help automate VM actions (such as start/stop/reboot), or infrastructure tasks (such as creating/destroying VMs, networks, or firewall rules). &lt;br /&gt;
&lt;br /&gt;
To get started with CloudMonkey, refer to the following resources:&lt;br /&gt;
&lt;br /&gt;
* Download from: &amp;lt;nowiki&amp;gt;https://github.com/apache/cloudstack-cloudmonkey/releases/tag/6.1.0&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
* Documentation at: &amp;lt;nowiki&amp;gt;https://cwiki.apache.org/confluence/display/CLOUDSTACK/CloudStack+cloudmonkey+CLI&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Terraform Integration ===&lt;br /&gt;
Terraform allows you to define infrastructure as code and can be used in conjunction with CloudStack to configure your virtual machines and guest networks. Use the official CloudStack provider.&lt;br /&gt;
&lt;br /&gt;
The following is an example Terraform file for reference. Specify your CloudStack API keys in a separate &amp;lt;code&amp;gt;vars.tf&amp;lt;/code&amp;gt; file.&lt;br /&gt;
{{Highlight|code=# Configure the CloudStack Provider&lt;br /&gt;
terraform {&lt;br /&gt;
  required_providers {&lt;br /&gt;
    cloudstack = {&lt;br /&gt;
      source = &amp;quot;cloudstack/cloudstack&amp;quot;&lt;br /&gt;
      version = &amp;quot;0.4.0&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
provider &amp;quot;cloudstack&amp;quot; {&lt;br /&gt;
  api_url    = &amp;quot;${var.cloudstack_api_url}&amp;quot;&lt;br /&gt;
  api_key    = &amp;quot;${var.cloudstack_api_key}&amp;quot;&lt;br /&gt;
  secret_key = &amp;quot;${var.cloudstack_secret_key}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create a new VPC&lt;br /&gt;
resource &amp;quot;cloudstack_vpc&amp;quot; &amp;quot;default&amp;quot; {&lt;br /&gt;
  name = &amp;quot;wedgenet-vpc&amp;quot;&lt;br /&gt;
  display_text = &amp;quot;wedgenet-vpc&amp;quot;&lt;br /&gt;
  cidr = &amp;quot;100.64.0.0/20&amp;quot;&lt;br /&gt;
  vpc_offering = &amp;quot;Default VPC offering&amp;quot;&lt;br /&gt;
  zone = &amp;quot;zone1&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create a new ACL&lt;br /&gt;
resource &amp;quot;cloudstack_network_acl&amp;quot; &amp;quot;default&amp;quot; {&lt;br /&gt;
    name  = &amp;quot;vpc-acl&amp;quot;&lt;br /&gt;
    vpc_id = &amp;quot;${cloudstack_vpc.default.id}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# One ingress and one egress rule for the ACL&lt;br /&gt;
resource &amp;quot;cloudstack_network_acl_rule&amp;quot; &amp;quot;ingress&amp;quot; {&lt;br /&gt;
    acl_id = &amp;quot;${cloudstack_network_acl.default.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    rule {&lt;br /&gt;
        action       = &amp;quot;allow&amp;quot;&lt;br /&gt;
        cidr_list    = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
        protocol     = &amp;quot;tcp&amp;quot;&lt;br /&gt;
        ports        = [&amp;quot;22&amp;quot;, &amp;quot;80&amp;quot;, &amp;quot;443&amp;quot;]&lt;br /&gt;
        traffic_type = &amp;quot;ingress&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;cloudstack_network_acl_rule&amp;quot; &amp;quot;egress&amp;quot; {&lt;br /&gt;
    acl_id = &amp;quot;${cloudstack_network_acl.default.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    rule {&lt;br /&gt;
        action       = &amp;quot;allow&amp;quot;&lt;br /&gt;
        cidr_list    = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
        protocol     = &amp;quot;all&amp;quot;&lt;br /&gt;
        traffic_type = &amp;quot;egress&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Create a new network in the VPC&lt;br /&gt;
resource &amp;quot;cloudstack_network&amp;quot; &amp;quot;primary&amp;quot; {&lt;br /&gt;
    name = &amp;quot;primary&amp;quot;&lt;br /&gt;
    display_text = &amp;quot;primary&amp;quot;&lt;br /&gt;
    cidr = &amp;quot;100.64.1.0/24&amp;quot;&lt;br /&gt;
    network_offering = &amp;quot;DefaultIsolatedNetworkOfferingForVpcNetworks&amp;quot;&lt;br /&gt;
    acl_id = &amp;quot;${cloudstack_network_acl.default.id}&amp;quot;&lt;br /&gt;
    vpc_id = &amp;quot;${cloudstack_vpc.default.id}&amp;quot;&lt;br /&gt;
    zone = &amp;quot;zone1&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create a new public IP address for this network&lt;br /&gt;
resource &amp;quot;cloudstack_ipaddress&amp;quot; &amp;quot;public_ip&amp;quot; {&lt;br /&gt;
    vpc_id = &amp;quot;${cloudstack_vpc.default.id}&amp;quot;&lt;br /&gt;
    network_id = &amp;quot;${cloudstack_network.primary.id}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create VMs. &lt;br /&gt;
resource &amp;quot;cloudstack_instance&amp;quot; &amp;quot;vm&amp;quot; {&lt;br /&gt;
  count = 1&lt;br /&gt;
  name = &amp;quot;vm${count.index+1}&amp;quot;&lt;br /&gt;
  zone = &amp;quot;zone1&amp;quot;&lt;br /&gt;
  service_offering = &amp;quot;rcs.c4&amp;quot;&lt;br /&gt;
  template = &amp;quot;RockyLinux 8.5&amp;quot;&lt;br /&gt;
  network_id = &amp;quot;${cloudstack_network.primary.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  # Cloud Init data can be used to configure your VM on first startup if your template supports Cloud Init&lt;br /&gt;
  user_data = &amp;lt;&amp;lt;EOF&lt;br /&gt;
#cloud-config&lt;br /&gt;
&lt;br /&gt;
# Require specific packages&lt;br /&gt;
packages:&lt;br /&gt;
 - tmux&lt;br /&gt;
 - git&lt;br /&gt;
 - tcpdump&lt;br /&gt;
&lt;br /&gt;
EOF&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
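The variables referenced in the provider block above can be declared in a separate &amp;lt;code&amp;gt;vars.tf&amp;lt;/code&amp;gt;; a minimal sketch (the default URL is a placeholder, substitute your own endpoint):&lt;br /&gt;

```hcl
# vars.tf -- connection settings for the CloudStack provider.
# The URL below is a placeholder; use your CloudStack endpoint.
variable "cloudstack_api_url" {
  default = "https://cloudstack.example.org/client/api"
}

# Generate these under your profile page in the management console.
variable "cloudstack_api_key" {}
variable "cloudstack_secret_key" {}
```

Keeping the keys themselves out of version control (for example, supplying them in &amp;lt;code&amp;gt;terraform.tfvars&amp;lt;/code&amp;gt; or via environment variables) is strongly recommended.&lt;br /&gt;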
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&lt;br /&gt;
=== Cannot create a volume snapshot ===&lt;br /&gt;
Volume snapshots can only be taken on VMs that are powered off.&lt;br /&gt;
&lt;br /&gt;
=== Cannot create a VM snapshot ===&lt;br /&gt;
Disk-only VM snapshots cannot be taken when the VM is running. If you intend to snapshot a running system, you must also snapshot its memory.&lt;br /&gt;
&lt;br /&gt;
=== VM state is still running after shutdown ===&lt;br /&gt;
After running &#039;shutdown&#039; inside a VM, the VM state reported by CloudStack may still show as Running. &lt;br /&gt;
&lt;br /&gt;
Try a forced stop from the CloudStack management console. CloudStack does not detect power-state changes made from inside the guest, so a VM&#039;s reported state is not updated when its power state changes outside of CloudStack (likely a bug).&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack&amp;diff=1848</id>
		<title>CloudStack</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack&amp;diff=1848"/>
		<updated>2022-06-02T16:48:29Z</updated>

		<summary type="html">&lt;p&gt;Darcy: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Apache CloudStack is an Infrastructure as a Service (IaaS) offering that provides Virtual Machines (VMs) for University of Calgary researchers.  It is part of Research Computing Services Digital Research Infrastructure (DRI).  This service will allow you to quickly deploy VMs to support your research projects.&lt;br /&gt;
&lt;br /&gt;
If you need a web site or a database, wish to experiment with new software tools, or want to test out the latest release of a software package, then CloudStack can provide you with an environment to support your work.&lt;br /&gt;
&lt;br /&gt;
There is no charge for the use of CloudStack.&lt;br /&gt;
&lt;br /&gt;
Not sure if you need a VM or a compute cluster? Is this the &amp;quot;Cloud&amp;quot;? Don&#039;t understand what we&#039;re talking about? Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] and we will assist you in using this service to support your research goals.&lt;br /&gt;
&lt;br /&gt;
CloudStack is available now to any PI at the University of Calgary.&lt;br /&gt;
&lt;br /&gt;
Requests to use CloudStack are done through [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c Service Now].&lt;br /&gt;
&lt;br /&gt;
Please read the [[CloudStack End User Agreement]] before using this service.&lt;br /&gt;
&lt;br /&gt;
Please refer to the [[CloudStack User Guide]] for details on using CloudStack.&lt;br /&gt;
&lt;br /&gt;
==Note==&lt;br /&gt;
&lt;br /&gt;
CloudStack is a research environment and is supported as such.&lt;br /&gt;
&lt;br /&gt;
Researchers have a great degree of freedom in how they use CloudStack and they are solely responsible for maintaining any VMs they deploy. The expectation is that the owners of the VMs will be patching and doing other maintenance work on a regular basis.  We will be more than happy to provide guidance on this process.  Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] for details.&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack&amp;diff=1847</id>
		<title>CloudStack</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack&amp;diff=1847"/>
		<updated>2022-06-02T16:43:11Z</updated>

		<summary type="html">&lt;p&gt;Darcy: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Apache CloudStack is an Infrastructure as a Service (IaaS) offering that provides Virtual Machines (VMs) for University of Calgary researchers.  It is part of Research Computing Services Digital Research Infrastructure (DRI).  This service will allow you to quickly deploy VMs to support your research projects.&lt;br /&gt;
&lt;br /&gt;
If you need a web site or a database, wish to experiment with new software tools, or want to test out the latest release of a software package, then CloudStack can provide you with an environment to support your work.&lt;br /&gt;
&lt;br /&gt;
There is no charge for the use of CloudStack.&lt;br /&gt;
&lt;br /&gt;
Not sure if this is what you need?  Please contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca].  We will be more than happy to discuss your needs with you.&lt;br /&gt;
&lt;br /&gt;
==Getting Started==&lt;br /&gt;
Not sure if you need a VM or a compute cluster? Is this the &amp;quot;Cloud&amp;quot;? Don&#039;t understand what we&#039;re talking about? Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] and we will assist you in using this service to support your research goals.&lt;br /&gt;
&lt;br /&gt;
Please refer to the [[CloudStack User Guide]] for details on using CloudStack.&lt;br /&gt;
&lt;br /&gt;
==Availability==&lt;br /&gt;
CloudStack is available now to any PI at the University of Calgary.&lt;br /&gt;
&lt;br /&gt;
==Requesting Access==&lt;br /&gt;
Requests to use CloudStack are done through [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c Service Now].&lt;br /&gt;
&lt;br /&gt;
==End User Agreement==&lt;br /&gt;
Please read the [[CloudStack End User Agreement]] before using this service.&lt;br /&gt;
&lt;br /&gt;
=Please Note=&lt;br /&gt;
&lt;br /&gt;
CloudStack is a research environment and is supported as such.&lt;br /&gt;
&lt;br /&gt;
Researchers have a great degree of freedom in how they use CloudStack and they are solely responsible for maintaining any VMs they deploy. The expectation is that the owners of the VMs will be patching and doing other maintenance work on a regular basis.  We will be more than happy to provide guidance on this process.  Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] for details.&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack&amp;diff=1846</id>
		<title>CloudStack</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack&amp;diff=1846"/>
		<updated>2022-06-02T16:42:22Z</updated>

		<summary type="html">&lt;p&gt;Darcy: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Apache CloudStack is an Infrastructure as a Service (IaaS) offering that provides Virtual Machines (VMs) for University of Calgary researchers.  It is part of Research Computing Services Digital Research Infrastructure (DRI).  This service will allow you to quickly deploy VMs to support your research projects.&lt;br /&gt;
&lt;br /&gt;
If you need a web site or a database, wish to experiment with new software tools, or want to test out the latest release of a software package, then CloudStack can provide you with an environment to support your work.&lt;br /&gt;
&lt;br /&gt;
There is no charge for the use of CloudStack.&lt;br /&gt;
&lt;br /&gt;
Not sure if this is what you need?  Please contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca].  We will be more than happy to discuss your needs with you.&lt;br /&gt;
&lt;br /&gt;
=Getting Started=&lt;br /&gt;
Not sure if you need a VM or a compute cluster? Is this the &amp;quot;Cloud&amp;quot;? Don&#039;t understand what we&#039;re talking about? Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] and we will assist you in using this service to support your research goals.&lt;br /&gt;
&lt;br /&gt;
Please refer to the [[CloudStack User Guide]] for details on using CloudStack.&lt;br /&gt;
&lt;br /&gt;
=Availability=&lt;br /&gt;
CloudStack is available now to any PI at the University of Calgary.&lt;br /&gt;
&lt;br /&gt;
=Requesting Access=&lt;br /&gt;
Requests to use CloudStack are done through [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c Service Now].&lt;br /&gt;
&lt;br /&gt;
=End User Agreement=&lt;br /&gt;
Please read the [[CloudStack End User Agreement]] before using this service.&lt;br /&gt;
&lt;br /&gt;
=Please Note=&lt;br /&gt;
&lt;br /&gt;
CloudStack is a research environment and is supported as such.&lt;br /&gt;
&lt;br /&gt;
Researchers have a great degree of freedom in how they use CloudStack and they are solely responsible for maintaining any VMs they deploy. The expectation is that the owners of the VMs will be patching and doing other maintenance work on a regular basis.  We will be more than happy to provide guidance on this process.  Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] for details.&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack&amp;diff=1845</id>
		<title>CloudStack</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack&amp;diff=1845"/>
		<updated>2022-06-02T16:20:24Z</updated>

		<summary type="html">&lt;p&gt;Darcy: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Apache CloudStack is an Infrastructure as a Service (IaaS) offering that provides Virtual Machines (VMs) for University of Calgary researchers.  It is part of Research Computing Services Digital Research Infrastructure (DRI).  This service will allow you to quickly deploy VMs to support your research projects.&lt;br /&gt;
&lt;br /&gt;
If you need a web site or a database, wish to experiment with new software tools, or want to test out the latest release of a software package, then CloudStack can provide you with an environment to support your work.&lt;br /&gt;
&lt;br /&gt;
There is no charge for the use of CloudStack.&lt;br /&gt;
&lt;br /&gt;
Not sure if this is what you need?  Please contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca].  We will be more than happy to discuss your needs with you.&lt;br /&gt;
&lt;br /&gt;
=Getting Started=&lt;br /&gt;
Not sure if you need a VM or a compute cluster? Is this the &amp;quot;Cloud&amp;quot;? Don&#039;t understand what we&#039;re talking about? Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] and we will assist you in using this service to support your research goals.&lt;br /&gt;
&lt;br /&gt;
Please refer to the [[CloudStack User Guide]] for details on using CloudStack.&lt;br /&gt;
&lt;br /&gt;
==Availability==&lt;br /&gt;
CloudStack is available now to any PI at the University of Calgary.&lt;br /&gt;
==Requesting Access==&lt;br /&gt;
Requests to use CloudStack are done through [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c Service Now].&lt;br /&gt;
&lt;br /&gt;
==End User Agreement==&lt;br /&gt;
Please read the [[CloudStack End User Agreement]].&lt;br /&gt;
&lt;br /&gt;
=Please Note=&lt;br /&gt;
&lt;br /&gt;
CloudStack is a research environment and is supported as such.&lt;br /&gt;
&lt;br /&gt;
Researchers have a great degree of freedom in how they use CloudStack and they are solely responsible for maintaining any VMs they deploy. The expectation is that the owners of the VMs will be patching and doing other maintenance work on a regular basis.  We will be more than happy to provide guidance on this process.  Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] for details.&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack&amp;diff=1844</id>
		<title>CloudStack</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack&amp;diff=1844"/>
		<updated>2022-06-02T16:19:14Z</updated>

		<summary type="html">&lt;p&gt;Darcy: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Apache CloudStack is an Infrastructure as a Service (IaaS) offering that provides Virtual Machines (VMs) for University of Calgary researchers.  It is part of Research Computing Services Digital Research Infrastructure (DRI).  This service will allow you to quickly deploy VMs to support your research projects.&lt;br /&gt;
&lt;br /&gt;
If you need a web site or a database, wish to experiment with new software tools, or want to test out the latest release of a software package, then CloudStack can provide you with an environment to support your work.&lt;br /&gt;
&lt;br /&gt;
There is no charge for the use of CloudStack.&lt;br /&gt;
&lt;br /&gt;
Not sure if this is what you need?  Please contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca].  We will be more than happy to discuss your needs with you.&lt;br /&gt;
&lt;br /&gt;
=Getting Started=&lt;br /&gt;
Not sure if you need a VM or a compute cluster? Is this the &amp;quot;Cloud&amp;quot;? Don&#039;t understand what we&#039;re talking about? Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] and we will assist you in using this service to support your research goals.&lt;br /&gt;
&lt;br /&gt;
Please refer to the [[CloudStack User Guide]] for details on using CloudStack.&lt;br /&gt;
&lt;br /&gt;
==Availability==&lt;br /&gt;
CloudStack is available now to any PI at the University of Calgary.&lt;br /&gt;
==Requesting Access==&lt;br /&gt;
Requests to use CloudStack are done through [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c Service Now].&lt;br /&gt;
&lt;br /&gt;
==End User Agreement==&lt;br /&gt;
Please read the [[CloudStack End User Agreement]].&lt;br /&gt;
&lt;br /&gt;
=Please Note=&lt;br /&gt;
&lt;br /&gt;
CloudStack is a research environment and is supported as such.&lt;br /&gt;
&lt;br /&gt;
Researchers have a great degree of freedom in how they use CloudStack and they are solely responsible for maintaining any VMs they deploy. The expectation is that the owners of the VMs will be patching and doing other maintenance work on a regular basis.  We will be more than happy to provide guidance on this process.  Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] for details.&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack_User_Guide&amp;diff=1843</id>
		<title>CloudStack User Guide</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack_User_Guide&amp;diff=1843"/>
		<updated>2022-06-02T15:49:06Z</updated>

		<summary type="html">&lt;p&gt;Darcy: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is a user&#039;s guide to the CloudStack service provided by Research Computing Services.&lt;br /&gt;
&lt;br /&gt;
== Introduction==&lt;br /&gt;
Apache CloudStack is an Infrastructure as a Service (IaaS) platform that allows users to quickly spin up Linux and other non-Windows virtual machines. RCS provides this service to help researchers quickly set up and prototype short-term, research-related software on premises. CloudStack is not appropriate for workloads that depend on Windows. Services set up on CloudStack virtual machines can be accessed from the campus network, and also from the internet if required.&lt;br /&gt;
&lt;br /&gt;
Access to CloudStack can be requested via [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c ServiceNow].&lt;br /&gt;
&lt;br /&gt;
Please refer to our [[CloudStack End User Agreement]] for acceptable uses and requirements.&lt;br /&gt;
&lt;br /&gt;
===Accessing the CloudStack management console===&lt;br /&gt;
&lt;br /&gt;
The CloudStack management console is a web-based portal that allows you to view and manage your cloud infrastructure, including virtual machines, storage, and networks. Any modern web browser, including Chrome, Firefox, Edge, and Safari, is supported. &lt;br /&gt;
&lt;br /&gt;
Accessing the CloudStack management console is possible only from an IT-managed computer or through the IT General VPN when working on unmanaged machines (e.g. AirUC) or off campus (e.g. at home). Please review the IT [https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=52a169d6dbe5bc506ad32637059619cd knowledge base article on connecting to the General VPN] or contact IT support if you need assistance connecting to the General VPN. &lt;br /&gt;
[[File:CloudStack VPN Connection.png|alt=CloudStack VPN Connection|none|thumb|CloudStack VPN Connection]]&lt;br /&gt;
=== Login to CloudStack===&lt;br /&gt;
&lt;br /&gt;
To log in to CloudStack, navigate to https://cloudstack.rcs.ucalgary.ca/. If this site fails to load, please make sure you are either on an IT-managed computer or connected to the General VPN.&lt;br /&gt;
&lt;br /&gt;
Sign in to CloudStack using the Single Sign-On option as shown in the image below. This method will require you to authenticate through our central authentication service using your University of Calgary IT credentials and will require multi-factor authentication. You must have multi-factor authentication set up either via your phone or with the Microsoft Authenticator app.&lt;br /&gt;
[[File:CloudStack Login Page.png|alt=CloudStack Login Page|none|thumb|CloudStack Login Page]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; Due to a bug in the UI, if the Single Sign-On option is disabled, please refresh the login page and try again. This issue should be addressed in our next CloudStack update.&lt;br /&gt;
&lt;br /&gt;
=== CloudStack Dashboard===&lt;br /&gt;
&lt;br /&gt;
After logging in, you will be presented with your CloudStack management console. The dashboard shows you a general overview of your account&#039;s status.&lt;br /&gt;
[[File:CloudStack Dashboard.png|alt=CloudStack Dashboard|none|thumb|CloudStack Dashboard]]On the right-hand side of the dashboard, you will also see recent activity and events that occurred within your CloudStack account.&lt;br /&gt;
&lt;br /&gt;
If you wish to see your CloudStack account resource quota and allocation, navigate to: &amp;lt;code&amp;gt;Accounts -&amp;gt; Click on your account -&amp;gt; Resources&amp;lt;/code&amp;gt;. &lt;br /&gt;
[[File:CloudStack Resource Quota.png|alt=CloudStack Resource Quota|none|thumb|CloudStack Resource Quota]]&lt;br /&gt;
&lt;br /&gt;
== Working with virtual machines==&lt;br /&gt;
&lt;br /&gt;
CloudStack allows you to control the lifecycle of virtual machines within your cloud account. VMs may be started, stopped, rebooted, or destroyed within your management console.&lt;br /&gt;
&lt;br /&gt;
===Create a VM===&lt;br /&gt;
&lt;br /&gt;
To create a new VM, enter the CloudStack management console and navigate to:  &amp;lt;code&amp;gt;Compute -&amp;gt; Instances -&amp;gt; Add Instance&amp;lt;/code&amp;gt;[[File:CloudStack Instance Summary.png|alt=CloudStack Instance Summary|thumb|CloudStack Instance Summary|493x493px]]&lt;br /&gt;
&lt;br /&gt;
Virtual Machines require the following details:&lt;br /&gt;
&lt;br /&gt;
# &#039;&#039;&#039;Deployment zone&#039;&#039;&#039;. Your account will already be placed in the appropriate zone.&lt;br /&gt;
# &#039;&#039;&#039;Boot template or ISO&#039;&#039;&#039;. You may choose either a pre-created template or boot from a custom CD-ROM ISO file.&lt;br /&gt;
# &#039;&#039;&#039;Compute offering&#039;&#039;&#039;. You may select an appropriate size for your new VM. Resources will be counted against your account&#039;s quota.&lt;br /&gt;
# &#039;&#039;&#039;Data Disk&#039;&#039;&#039;. You may choose to add an additional virtual disk to your VM to store your data. Alternatively, if you wish to use a single virtual disk for your VM, you may choose to override the size of your root disk in step 2 and select &#039;No thanks&#039; in this step.&lt;br /&gt;
# &#039;&#039;&#039;Networks&#039;&#039;&#039;. You may choose one or more networks your VM should connect to. All CloudStack accounts come with a default network already created and ready to be used.&lt;br /&gt;
# &#039;&#039;&#039;SSH keypairs&#039;&#039;&#039;. For templates that support custom SSH key pairs, you may choose to use a custom SSH keypair to be installed as part of the deployment process.&lt;br /&gt;
# &#039;&#039;&#039;Advanced settings&#039;&#039;&#039;. For templates that support custom user-data (Cloud-Init), you may choose to enable the advanced settings and provide your own Cloud-Init user-data payload. More on this in the advanced tasks section below.&lt;br /&gt;
# &#039;&#039;&#039;Other VM details&#039;&#039;&#039;. You may give your new VM a friendly name and make it part of a group. Groups allow you to group related VMs together for better organization. You may change these details at a later time.&lt;br /&gt;
&lt;br /&gt;
When you are done, review the instance summary on the right hand side and then click on the &#039;Launch Virtual Machine&#039; button.&lt;br /&gt;
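For templates with Cloud-Init support, the user-data payload from the advanced settings step is a &amp;lt;code&amp;gt;#cloud-config&amp;lt;/code&amp;gt; YAML document. A minimal sketch (the username and SSH key below are placeholders):&lt;br /&gt;

```yaml
#cloud-config
# Minimal user-data sketch; the username and key below are placeholders.
users:
  - name: myuser
    groups: wheel
    sudo: "ALL=(ALL) NOPASSWD:ALL"
    ssh_authorized_keys:
      - ssh-ed25519 AAAAC3Nza... user@example
packages:
  - git
  - tmux
```

Note that creating users or setting passwords this way overrides the password that CloudStack would otherwise display for the VM.&lt;br /&gt;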
&lt;br /&gt;
====Choosing a virtual machine template====&lt;br /&gt;
&lt;br /&gt;
We provide Rocky Linux 8.5 and Ubuntu Server 22.04 LTS templates for your convenience. These templates are pre-built images with the operating system installed and ready for use. Our templates also support further automated setup via Cloud-Init configuration data that can be provided when deploying a new VM. Currently, we offer the following templates: &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Template&lt;br /&gt;
!Cloud-Init Support&lt;br /&gt;
!Password Support&lt;br /&gt;
!Default Username&lt;br /&gt;
|-&lt;br /&gt;
|Rocky Linux 8.5&lt;br /&gt;
|Yes&lt;br /&gt;
|Yes&lt;br /&gt;
|rocky&lt;br /&gt;
|-&lt;br /&gt;
|Ubuntu Server 22.04&lt;br /&gt;
|Yes&lt;br /&gt;
|Yes&lt;br /&gt;
|ubuntu&lt;br /&gt;
|}&lt;br /&gt;
Rocky Linux is an open source Linux distribution that is binary-compatible with Red Hat Enterprise Linux, and it is the distribution RCS recommends.&lt;br /&gt;
&lt;br /&gt;
For templates that support passwords, the generated password that appears after a VM is created is applied to the default username.&lt;br /&gt;
&lt;br /&gt;
Security note: All VM templates are configured with SSH password authentication enabled. You should be able to SSH to your VM from another system connected to the same guest network. Do not expose port 22 unless required, and we highly recommend using key-based authentication.&lt;br /&gt;
&lt;br /&gt;
==== Virtual machine credentials ====&lt;br /&gt;
VM templates that support passwords will have a randomly generated password set when the VM is first created or when a password reset is requested (available only while the VM is powered off). The randomly generated 6-character password is displayed as a notification in your CloudStack management console whenever a new password is set. &lt;br /&gt;
[[File:CloudStack VM Password.png|alt=CloudStack VM Password|none|thumb|CloudStack VM Password]]&lt;br /&gt;
This password is set on the default username for your template. For example, the Rocky Linux VM template will set this password on the &#039;&#039;&#039;rocky&#039;&#039;&#039; user account. You may become the super user by logging in as the &amp;lt;code&amp;gt;rocky&amp;lt;/code&amp;gt; user and then running &amp;lt;code&amp;gt;sudo su&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note: If you specify a custom Cloud-Init config that creates additional users or sets account passwords, the displayed password will be overridden and have no effect.&lt;br /&gt;
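&lt;br /&gt;
As an illustration, a minimal Cloud-Init config can inject an SSH public key for the template&#039;s default user rather than relying on the generated password. This is a sketch; the key below is a placeholder, not a real key:&lt;br /&gt;

```yaml
#cloud-config
# Adds an SSH public key to the template's default user ('rocky' or 'ubuntu').
# The key below is a placeholder; substitute your own public key.
ssh_authorized_keys:
  - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA... user@example
```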
&lt;br /&gt;
=== Creating a custom template===&lt;br /&gt;
&lt;br /&gt;
Alternatively, you may decide to install a custom OS, such as a different Linux distribution or another UNIX-based operating system, and create a template from that. To create a custom template:&lt;br /&gt;
&lt;br /&gt;
# Create a new virtual machine and select your custom ISO media. If you wish to upload your own ISO, see the &#039;register ISO&#039; section below.&lt;br /&gt;
# Start the virtual machine and proceed through the OS setup process&lt;br /&gt;
# Once the system has been set up, prepare the VM to be templated by removing any host-specific files such as SSH host keys, static network configuration settings, temporary files and caches.&lt;br /&gt;
# Power off the virtual machine&lt;br /&gt;
# Navigate to the virtual machine page and click on the &#039;create template&#039; button&lt;br /&gt;
[[File:CloudStack Instance Controls.png|alt=CloudStack Instance Controls|none|thumb|CloudStack Instance Controls]]&lt;br /&gt;
&lt;br /&gt;
===Registering a custom ISO===&lt;br /&gt;
&lt;br /&gt;
You may add a custom ISO file to your CloudStack account either by directly uploading the ISO through the web console or by providing a URL to the ISO file on the internet.&lt;br /&gt;
&lt;br /&gt;
Please do not install Windows on our CloudStack infrastructure. If you need a Windows VM, please contact us as we have alternative solutions.&lt;br /&gt;
&lt;br /&gt;
==== Download an ISO from the internet====&lt;br /&gt;
[[File:CloudStack Download ISO.png|alt=CloudStack Download ISO|thumb|CloudStack Download ISO|190x190px]]&lt;br /&gt;
&lt;br /&gt;
To add a custom ISO file from the internet, enter the CloudStack management console and navigate to:  &amp;lt;code&amp;gt;Images -&amp;gt; ISOs -&amp;gt; Register ISO&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You may check the state of the ISO file by clicking on it. If the file downloads successfully, its ready state should become ‘true’. The ISO file will only appear in the selection list once it has downloaded successfully.&lt;br /&gt;
[[File:CloudStack ISO Ready.png|alt=CloudStack ISO Ready|none|thumb|172x172px|CloudStack ISO Ready]]&lt;br /&gt;
&lt;br /&gt;
====Upload a custom ISO====&lt;br /&gt;
&lt;br /&gt;
To upload an ISO file, enter the CloudStack management console and navigate to:  &amp;lt;code&amp;gt;Images -&amp;gt; ISOs -&amp;gt; Upload ISO from Local (icon)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:CloudStack Upload ISO.png|alt=CloudStack Upload ISO|none|thumb|CloudStack Upload ISO|217x217px]]&lt;br /&gt;
&lt;br /&gt;
===Connecting to your VM console===&lt;br /&gt;
The CloudStack management console has a KVM (keyboard, video, mouse) feature built-in, allowing you to remotely connect to and interact with your virtual machine. To connect to your virtual machine&#039;s console, navigate to: &amp;lt;code&amp;gt;Compute -&amp;gt; Instances -&amp;gt; Your Instance -&amp;gt; View console&amp;lt;/code&amp;gt;.&lt;br /&gt;
[[File:CloudStack View Console.png|alt=CloudStack View Console|none|thumb|CloudStack View Console]]&lt;br /&gt;
&lt;br /&gt;
=== Expanding a VM disk ===&lt;br /&gt;
[[File:CloudStack Expand Volume.png|alt=CloudStack Expand Volume|thumb|CloudStack Expand Volume]]&lt;br /&gt;
Virtual machine disks can be expanded after they are created within CloudStack. However, you will need to expand the partitions and filesystems manually.&lt;br /&gt;
&lt;br /&gt;
To grow an existing disk:&lt;br /&gt;
&lt;br /&gt;
# Go into your VM details page and click on ‘Volumes’.&lt;br /&gt;
# Click on the volume you wish to expand.&lt;br /&gt;
# Click on the ‘Resize Volume’ icon in the top right.&lt;br /&gt;
Once the volume has been expanded, you should be able to verify that the disk has grown with &amp;lt;code&amp;gt;lsblk&amp;lt;/code&amp;gt;. The kernel will also log some messages when this occurs. However, you will still need to expand any partitions, volumes, and filesystems on your system manually.&lt;br /&gt;
&lt;br /&gt;
To expand your partition, use the &amp;lt;code&amp;gt;growpart&amp;lt;/code&amp;gt; command followed by your disk device and partition number. For example: &amp;lt;code&amp;gt;/usr/bin/growpart /dev/vda 3&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For LVM volume sets, you can expand the volume using the &amp;lt;code&amp;gt;pvresize&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;lvresize&amp;lt;/code&amp;gt; commands:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/sbin/pvresize -y -q /dev/vda3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/sbin/lvresize -y -q -r -l +100%FREE /dev/mapper/&amp;lt;volume-name&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To expand your filesystem:&lt;br /&gt;
&lt;br /&gt;
* XFS: &amp;lt;code&amp;gt;/usr/sbin/xfs_growfs &amp;lt;volume&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
* EXT: &amp;lt;code&amp;gt;resize2fs &amp;lt;volume&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Destroying a VM ===&lt;br /&gt;
If you need to delete a VM, click on the red garbage bin icon in the VM instance page. All deletions are irreversible, so please make sure you have a copy of any data you need before proceeding.&lt;br /&gt;
[[File:CloudStack Delete VM.png|alt=CloudStack Delete VM|none|thumb|CloudStack Delete VM]]&lt;br /&gt;
The VM root volume can be deleted immediately by enabling the &#039;Expunge&#039; option in the dialog box. If left disabled, the VM root volume will linger for a day before it is deleted by the system. You may wish to expunge a volume if you are running low on space or volume quota.&lt;br /&gt;
&lt;br /&gt;
== Managing your virtual machine ==&lt;br /&gt;
You may run whatever operating system you wish (with the exception of Windows).  We cannot provide specific management advice for each and every operating system available, but we can offer some suggestions on important considerations to be aware of.&lt;br /&gt;
&lt;br /&gt;
=== Educate yourself ===&lt;br /&gt;
All operating systems (OS) have user groups, web sites, wikis, or mailing lists somewhere on the internet.  They can be a valuable resource.  Most OS providers have on-line documentation that describes using their product.  For example Rocky Linux, used by RCS, has a [https://docs.rockylinux.org/ documentation site].  These are excellent resources and can help you understand how to manage your virtual machine.&lt;br /&gt;
&lt;br /&gt;
=== Choose an appropriate OS edition ===&lt;br /&gt;
Many OSs will provide various editions that are tailored to a specific use case.  A desktop VM may not be appropriate when you need to run a database server.  The OS provider will have guides on how to choose an edition.&lt;br /&gt;
&lt;br /&gt;
=== Configure your VM&#039;s OS ===&lt;br /&gt;
It is critical to ensure that the only services running on your VM are the ones you must run.  Each OS has a way of managing which services are running (sysvinit, systemd, etc.).  Please ensure that unnecessary services have been disabled.&lt;br /&gt;
&lt;br /&gt;
Many OSs and applications come with pre-configured accounts.  Make sure they are either disabled or not allowed to log in.&lt;br /&gt;
&lt;br /&gt;
All un-used accounts should be disabled or preferably deleted.&lt;br /&gt;
&lt;br /&gt;
All accounts should have strong [https://it.ucalgary.ca/it-security/passwords-do-i-have-change-them/ passwords].&lt;br /&gt;
&lt;br /&gt;
Many OSs have the ability to update themselves automatically.  If possible, please consider enabling this.&lt;br /&gt;
&lt;br /&gt;
Updates can also be configured to skip certain software if it will interfere with your research, but please be advised that doing so could place your system at risk.&lt;br /&gt;
&lt;br /&gt;
=== Exposed to the Internet ===&lt;br /&gt;
Not everyone is a computer security expert.  If your VM must be exposed to the internet, please consider using XXX from IT security to enhance your security posture.&lt;br /&gt;
&lt;br /&gt;
== Virtual machine networking ==&lt;br /&gt;
The CloudStack platform allows you to define a custom virtual private cloud (VPC) network, which can contain any number of guest networks that your virtual machines connect to. Each guest network has its own private network address space and is not directly routable from campus or the internet. For virtual machines that require internet access, the VPC or guest network they are connected to must have an associated NAT IP address. The following diagram shows how a guest network connects to the internet and campus network.&lt;br /&gt;
[[File:CloudStack Guest Networking.png|alt=CloudStack Guest Networking|none|thumb|CloudStack Guest Networking]]&lt;br /&gt;
In order to expose a virtual machine&#039;s services to campus or the internet, the appropriate port forwarding rules must be set up on the VPC containing the guest network. More on this will be discussed in the next section.&lt;br /&gt;
&lt;br /&gt;
Having multiple guest networks allows for more advanced network setups but is not required. We recommend using a single flat network for most workloads. &lt;br /&gt;
&lt;br /&gt;
By default, all CloudStack accounts come with a default VPC and guest network set up with a NAT IP assigned.&lt;br /&gt;
&lt;br /&gt;
=== IP addresses ===&lt;br /&gt;
Due to design decisions made during the setup of the CloudStack platform, only internal 10.44.12X.X IPs can be assigned to your VPC. These IP addresses are accessible from the university campus network. However, there is a special range of IP addresses that can also be accessed from the internet.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!IP address range&lt;br /&gt;
!Accessible from&lt;br /&gt;
!Internet IP mapping&lt;br /&gt;
|-&lt;br /&gt;
|10.44.120.3-128&lt;br /&gt;
|Campus, Internet&lt;br /&gt;
|10.44.120.X maps to 136.159.140.X (ports 80 and 443 only)&lt;br /&gt;
|-&lt;br /&gt;
|10.44.120.129-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|10.44.121.0-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|10.44.122.0-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|10.44.123.0-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|}&lt;br /&gt;
If you need a service exposed to the internet, please request a public IP address using our [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c ServiceNow request form]. Additionally, if your service does not use port 80 or 443, you must also submit a firewall change request to allow that port through. &lt;br /&gt;
&lt;br /&gt;
=== Exposing a network service to campus ===&lt;br /&gt;
In order to make a virtual machine visible to the campus network, you must first set up a port forwarding rule from a campus IP address to your virtual machine.&lt;br /&gt;
&lt;br /&gt;
To create a port forwarding rule, navigate to &amp;lt;code&amp;gt;Network -&amp;gt; VPC -&amp;gt; Select your VPC -&amp;gt; Public IP Addresses&amp;lt;/code&amp;gt;. If you do not have any available IP addresses, you will need to click on &#039;Acquire New IP&#039; and select an available IP address. Click on the IP address you wish to forward from and then navigate to the &#039;Port Forwarding&#039; tab. Enter the private port range, the public port range, the protocol, and select the target VM. &lt;br /&gt;
&lt;br /&gt;
For example, to port forward only HTTP (tcp/80) traffic, you would enter the following:&lt;br /&gt;
[[File:CloudStack Port Forwarding.png|alt=CloudStack Port Forwarding|none|thumb|CloudStack Port Forwarding]]Once the port forwarding is created, you should be able to access the service from on campus. If for some reason access to your service does not work, there may be a firewall restriction on IT&#039;s network. In such circumstances, please contact us for assistance.&lt;br /&gt;
&lt;br /&gt;
=== Exposing a network to the internet ===&lt;br /&gt;
Exposing a service to the internet is the same as exposing it to campus. However, you must create the port forwarding rule on an IP address that maps to an internet IP address, as outlined in the IP address table above. If your account does not have one of these IP addresses available, please request one using the [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c ServiceNow request form].&lt;br /&gt;
&lt;br /&gt;
By default, only ports 80 and 443 are allowed through the Internet IP address. For all other ports, please [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=47cd16d113153a00b5b4ff82e144b0bf create a firewall rule change request in ServiceNow].&lt;br /&gt;
&lt;br /&gt;
== Cloud-Init Automation ==&lt;br /&gt;
&lt;br /&gt;
=== Ubuntu ===&lt;br /&gt;
The following Cloud-Init configs apply to Ubuntu VM templates.&lt;br /&gt;
&lt;br /&gt;
==== Ubuntu desktop ====&lt;br /&gt;
Use the following Cloud-Init config with the Ubuntu Server template to set up an Ubuntu desktop environment. The setup step takes up to 15 minutes to complete and you should see a login screen when the setup finishes. &lt;br /&gt;
&lt;br /&gt;
Adjust the root and user passwords as desired. Below, the test user password is set to blank, allowing you to log in to Gnome without a password.&lt;br /&gt;
{{Highlight|code=#cloud-config&lt;br /&gt;
disable_root: false&lt;br /&gt;
&lt;br /&gt;
users:&lt;br /&gt;
  - name: user&lt;br /&gt;
    lock_passwd: false&lt;br /&gt;
    inactive: false&lt;br /&gt;
    gecos: Test User&lt;br /&gt;
    primary_group: user&lt;br /&gt;
    groups: wheel&lt;br /&gt;
    passwd: $1$ADUODeAy$eCJ1lPSxhSGmSvrmWxjLC1&lt;br /&gt;
      &lt;br /&gt;
chpasswd:&lt;br /&gt;
  list: {{!}}&lt;br /&gt;
    root:password&lt;br /&gt;
  expire: false&lt;br /&gt;
    &lt;br /&gt;
# Install a graphical desktop&lt;br /&gt;
runcmd:&lt;br /&gt;
  - DEBIAN_FRONTEND=noninteractive apt -y upgrade&lt;br /&gt;
  - DEBIAN_FRONTEND=noninteractive apt -y install tasksel&lt;br /&gt;
  - tasksel install gnome-desktop&lt;br /&gt;
  - systemctl set-default graphical.target&lt;br /&gt;
  - systemctl isolate graphical.target}}&lt;br /&gt;
&lt;br /&gt;
=== Rocky Linux ===&lt;br /&gt;
The following Cloud-Init configs apply to Rocky Linux templates.&lt;br /&gt;
&lt;br /&gt;
==== Rocky Linux Desktop ====&lt;br /&gt;
Use the following Cloud-Init config with the Rocky Linux template to set up a Rocky Linux desktop environment. The setup step takes up to 10 minutes to complete and you should see a login screen when the setup finishes. &lt;br /&gt;
&lt;br /&gt;
Adjust the root and user passwords as desired. Below, the test user password is set to blank, allowing you to log in to Gnome without a password.&lt;br /&gt;
{{Highlight|code=#cloud-config&lt;br /&gt;
disable_root: false&lt;br /&gt;
&lt;br /&gt;
users:&lt;br /&gt;
  - name: user&lt;br /&gt;
    lock_passwd: false&lt;br /&gt;
    inactive: false&lt;br /&gt;
    gecos: Test User&lt;br /&gt;
    primary_group: user&lt;br /&gt;
    groups: wheel&lt;br /&gt;
    passwd: $1$ADUODeAy$eCJ1lPSxhSGmSvrmWxjLC1&lt;br /&gt;
      &lt;br /&gt;
chpasswd:&lt;br /&gt;
  list: {{!}}&lt;br /&gt;
    root:password&lt;br /&gt;
  expire: false&lt;br /&gt;
    &lt;br /&gt;
# Install a graphical desktop&lt;br /&gt;
runcmd:&lt;br /&gt;
  - yum -y install &amp;quot;@Workstation&amp;quot;&lt;br /&gt;
  - systemctl set-default graphical.target&lt;br /&gt;
  - systemctl isolate graphical.target}}&lt;br /&gt;
&lt;br /&gt;
==== Rocky Linux Docker host ====&lt;br /&gt;
Use the following Cloud-Init config with the Rocky Linux template to set up a new Docker host. This server can then be used to run Docker containers. Also included are:&lt;br /&gt;
&lt;br /&gt;
# The docker-compose utility to help deploy container stacks more easily&lt;br /&gt;
# A helper script to expand the &amp;lt;code&amp;gt;/var&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; filesystems on first startup based on the available space in the ROOT volume. &lt;br /&gt;
&lt;br /&gt;
Use the CloudStack-generated password with the &#039;&amp;lt;code&amp;gt;rocky&amp;lt;/code&amp;gt;&#039; default user account to log in.{{Highlight|code=#cloud-config&lt;br /&gt;
&lt;br /&gt;
write_files:&lt;br /&gt;
  - path: /usr/bin/expand_lvm_root&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      /usr/bin/growpart /dev/vda 3&lt;br /&gt;
      /usr/sbin/pvresize -y -q /dev/vda3&lt;br /&gt;
      /usr/sbin/lvresize -y -q -r -l +50%FREE /dev/mapper/*root&lt;br /&gt;
      /usr/sbin/lvresize -y -q -r -l +100%FREE /dev/mapper/*var&lt;br /&gt;
      /usr/sbin/xfs_growfs /&lt;br /&gt;
      /usr/sbin/xfs_growfs /var&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
&lt;br /&gt;
  - path: /usr/bin/setup_docker&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      yum install -y yum-utils&lt;br /&gt;
      yum-config-manager \&lt;br /&gt;
         --add-repo \&lt;br /&gt;
         https://download.docker.com/linux/centos/docker-ce.repo&lt;br /&gt;
      yum install -y docker-ce docker-ce-cli containerd.io&lt;br /&gt;
      systemctl start docker&lt;br /&gt;
      systemctl enable docker&lt;br /&gt;
      &lt;br /&gt;
      curl -L &amp;quot;https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)&amp;quot; -o /usr/local/bin/docker-compose&lt;br /&gt;
      chmod +x /usr/local/bin/docker-compose&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
&lt;br /&gt;
  - path: /root/docker-compose.yml&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      version: &#039;3.3&#039;&lt;br /&gt;
      services:&lt;br /&gt;
        web:&lt;br /&gt;
          image: php:7.4-apache&lt;br /&gt;
          restart: always&lt;br /&gt;
          user: &amp;quot;0:0&amp;quot;&lt;br /&gt;
          volumes:&lt;br /&gt;
            - /var/www/html:/var/www/html&lt;br /&gt;
          ports:&lt;br /&gt;
            - &amp;quot;80:80&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  - path: /var/www/html/index.php&lt;br /&gt;
    permissions: &#039;0644&#039;&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      &amp;lt;h1&amp;gt;Hello there!&amp;lt;/h1&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;I see you from &amp;lt;?php echo $_SERVER[&#039;REMOTE_ADDR&#039;]; ?&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;&amp;lt;nowiki&amp;gt;pre&amp;lt;/nowiki&amp;gt;&amp;gt;&amp;lt;?php print_r($_SERVER); ?&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
    &lt;br /&gt;
# Ensure VM has the largest / possible&lt;br /&gt;
runcmd:&lt;br /&gt;
  - /usr/bin/expand_lvm_root&lt;br /&gt;
  - /usr/bin/setup_docker&lt;br /&gt;
  - cd /root; docker-compose up -d}}&lt;br /&gt;
&lt;br /&gt;
==== UC Authentication ====&lt;br /&gt;
Use the following Cloud-Init config to allow UC-based authentication. Local accounts with the same username as the IT account may use the IT credential to log in.&lt;br /&gt;
{{Highlight|code=#cloud-config&lt;br /&gt;
&lt;br /&gt;
write_files:&lt;br /&gt;
  - path: /usr/bin/setup_uc_auth&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      yum install -y sssd sssd-dbus sssd-krb5 krb5-workstation authselect-compat&lt;br /&gt;
      &lt;br /&gt;
      cat &amp;lt;&amp;lt;EOF &amp;gt; /etc/sssd/sssd.conf&lt;br /&gt;
      [sssd]&lt;br /&gt;
      config_file_version = 2&lt;br /&gt;
      services = nss, pam, ifp&lt;br /&gt;
      domains = uc.ucalgary.ca&lt;br /&gt;
      &lt;br /&gt;
      [domain/uc.ucalgary.ca]&lt;br /&gt;
      id_provider = files&lt;br /&gt;
      debug_level = 5&lt;br /&gt;
      auth_provider = krb5&lt;br /&gt;
      chpass_provider = krb5&lt;br /&gt;
      &lt;br /&gt;
      krb5_realm = UC.UCALGARY.CA&lt;br /&gt;
      krb5_server = ITSODCSRV14.UC.UCALGARY.CA:88&lt;br /&gt;
      krb5_validate = false&lt;br /&gt;
      EOF&lt;br /&gt;
      chmod 600 /etc/sssd/sssd.conf&lt;br /&gt;
      &lt;br /&gt;
      cat &amp;lt;&amp;lt;EOF &amp;gt; /etc/krb5.conf&lt;br /&gt;
      [logging]&lt;br /&gt;
       default = FILE:/var/log/krb5libs.log&lt;br /&gt;
       kdc = FILE:/var/log/krb5kdc.log&lt;br /&gt;
       admin_server = FILE:/var/log/kadmind.log&lt;br /&gt;
      &lt;br /&gt;
      [libdefaults]&lt;br /&gt;
       default_realm = UC.UCALGARY.CA&lt;br /&gt;
       dns_lookup_realm = false&lt;br /&gt;
       dns_lookup_kdc = false&lt;br /&gt;
       ticket_lifetime = 24h&lt;br /&gt;
       renew_lifetime = 7d&lt;br /&gt;
       forwardable = true&lt;br /&gt;
      &lt;br /&gt;
      [realms]&lt;br /&gt;
       UC.UCALGARY.CA = {&lt;br /&gt;
        kdc = itsodcsrv14.uc.ucalgary.ca&lt;br /&gt;
       }&lt;br /&gt;
      &lt;br /&gt;
      [domain_realm]&lt;br /&gt;
       uc.ucalgary.ca = UC.UCALGARY.CA&lt;br /&gt;
       .uc.ucalgary.ca = UC.UCALGARY.CA&lt;br /&gt;
      EOF&lt;br /&gt;
      &lt;br /&gt;
      mkdir -p /etc/authselect/custom/rcs&lt;br /&gt;
      cd /etc/authselect/custom/rcs&lt;br /&gt;
      echo H4sIAMYsbGIAA+0ca3PbuDGf+StQ34dLMpEtybLdUevO6BKl56mTuJGv7Uwmo4FISMKJBHQAaEfNub+9uwBJkXrLshxnjjuTyCKJfWF3sQsuFPhS9CtB79keoQpw2mjYT4DZz1r1uPGs1qhX6yf12vFx/Vm1Vqs1jp+R6j6ZSiHWhipCnikpzarn1t3/TuErj8bhhBzccjOs6Igq41MVHBDen71WUey3mCsWHNx5W4wKpT+qSAGjI3lDQxj8SarB0UDIiB2FcsBFRfuKMfHZY4L2QpYbS2MzZMJwnxouxfnXefQHTaNi9nufhprdpQj6XAyYGisuzHIUuYcWIhlTrW/lRkxMNdO0OH5HdHdeXlDNjAGCuhJQ0IM4GjPFx0Om4PGjDM9n8pULP4wDtpkik78q1Lec/WifcMr8kSyE7fB/a9ssYf8Q2PiPc6/3RmNN/K826mez8f/07LSM/48BR4uD8dGaWOytGbc8BK8buSTu5odtFEyPitGxDH2LYXai9kFjjf8fn4HPT/3/GPy/fnZ6Uvr/Y8BXiP/gTHHOL/KZ0Z2HRpGmD2meszi5mIExjbpM3BxquRuOgIkJICE/kE7qsKQYGwjXRbSrfD2Xxe7EVp/yEOMGsjZWzOLSPASuCHJ83iCxwPtdwyN2XqtXqwvYSpHMMKPjfp/7HHFtyIydrWBnXeeFQkT4fak4ZImulwh134n3POr7Mk50sTUWGMy0RoE2hnmRAJHDgzLtwk1ewffmJq/gXZiJBf9iTSaHZGvTA0ZoGGumdsakY1AxC7q8j9qJeUD+SmqwPhAQi5kC7k8B69M4NOc9GhA7TutzOSLIB4g1EvJWnHPIFhT77HBbE9hNXZBmRNxYi0yTlHvhyexaA9cYvhDkGCMZDTfGMmITLiw7wMGNHLECuq2ZCjnIZnVUyeHZmis90YZFNhDtJJ0Mgl9lrxuNhpDwBVxZi4ioHp1Xq2dni0bNu0k2GPwkz8yn1GBqJDWj1FSWCFUwTDCxG+4DHUF8JUXgzBNNrws2u9sspB65k+4SW99s/RdgyNz4w0OsA/eUY6yr/05rtdn8r35S1n+PAjaSBU1rPGA4pA8pjCaJG6NbsS8zbuWDxmTk6jTMogZKxmOHYDsMdiAgEMxkOKYIcp69DEE60CUaMsLQ3twSRzbQBgnr2XprRtKBiCMOJFMWRaoJXcAxn5rCiG9XcRbK7T3RWOP/pyen+fqvgf5/XC3rv0eBb17eYTIbsJDaGs/+cV53dlGWjfsuG+O6zWh8qP43hYW1UQUQ7aqYHC9CYh4P6fx9eanU+3SGn6xiqBGX7CU5X656mKZ/i6uRv52T+rQa2QV3sWbaaQZdwkjwrZSQxqlBxqYiYrCQ0UHTfd4RoybdPlfadDHiz0+V5mYjI1ivmm4i9W6CuSSW9KW6BfdbwnS5tVFubSxmJvUL0oM4wkRXD2kgb70fcnojt0CKUCB3S5IrYG3EZ8pQqO1sZqkJ+JLig6H5C/CgMUaPmQjgsiA6jFW0247Hg0yg5WNXfVkkXdDR2GzKzmacPLH9pPr3sZ20TTQe3/4W05Abm+0UwzyxyuyiGLorRTgpULr3YgPOdFKrE+dT2y49dqMEg6eRo93YSZaIZfjKfbkcmnJf7jvbl3ssGEtt7Gv4PdJYt/93Um/M1P81iC9l/f8YsKFFWydAP7eeZuvVTX13uaf+6X9kEEQv8991/DJZnwvYswJrLhTYGEmtDSNqWKlNNCa5Jj3LayV55KDpvv+uh/IWc00W3G3q1DOUkqIdatVxQGG1nmJ8Yh6+Gj62W2/etfdLY43/n1VPT3P+XwP/rzaOy/3/R4G27boinU7njS1/XKIwu1P2nIauGHcZpU2MsQgKJy+8813B8zosZD62dBEz5JqMlcStc1egsRyDVMMD4GoyVhAuZJ/wAJk0E4+Kud09wHID95U+BAI4OrmgCYWIY3C46x8DpJJEVNABSwtBg8lbJMGtIU0B1
qTiTC8iEjF/SAXXkca6YYgcXr5pXb0i/2Cqx5QEJSnyVjF2cdU6JBcmYwKQkfedDkGcV613kLUYpvoUxDISdzycoMlkIF0yDuPBwOqiR32oaYPktgfM+lIIYNMKAnGSj0PkvN9nCmNUWqY4vaE+3uHuFBcwoZGTg/Ygc3dK9oGzHoPJjoEE7pIaOyO/Iv4x6KjpDY0Z6+bREXyLFTvk8ggHHkF+E3jeT5N0BXhVMBaK+2EQ6CHr6isZOVqKgpgKZRXJw+6dCYRvvO+BMbj5D8hziK/sxj08yU8DIzecZlnai0NyjTY0pDcMlQZKRSGFz7weE6zPQQ2QkbJUVn/InFE5wtgNhSqDJLIPsR4+KxEYgpq4R+1UmGGsPcgeY9AlEXEE82yNiesRrB9MWY2iln92HL9CcSYyJoG0hdItFXamRoyNEzXEQqD1g7bhOQXG6A+5YK/sKJgPzz5a8A1tXQb0ghz9GuPWBNdTT0mWVFQGw50LNAox8PAlL4dJc5Nu/Usb/B+KphEIr9DYwwkBtvG2GTpsCWpHLDep1iOSjRKLrIfEaDLD+Ql1bhROUl98/eH924u///KxdX3x4b1XmQPPa8EMOyFBb6hoiUpDH0MEP9qpmspySK5CRjWozNJ3j3iB9OMIptOJazDzYATsiDiXscPBDwx6P9TQwB75ADGtIGIqF72Bxd0qGFFNnRPmFm2Y4dsEIQn7wrWNZM6+CjyCUP9qXVy2frpskw9XKHnrkrxtt65/+djuLFBCThuFzbFm08OEJAndqXfjHSSM9Q9oAk3SSBvYJsiHZn5s+A2zQ2eiGCIGTYDN4k2vWKXNUMP3pTDKhyKLudFACB/Nh0q7VKSLBOqHK4vEbgMQW28cJkJlr1jm6OQ5tLaYPYoeC1Y3cLaQzj2aCITPITUWUfY0RObxWCqYZXQTnEpczNiXcch9bq09CTFIhAs7eKGNzXI82zecSHAJV4lraia3GNlojhf7zgmeZsE8urQsTfBs8t7qkLyXRNogGjEKSxlMxkzjNKJ67opwNI90i+RF5rGJ+Ck/ufbP9VOSe9h6no3Pbmpe5hoTX6bIkxc0KxFjOIdnwO3FIGRFfHB9Fhe+7Cniq0NUgqUUTPG+qItVQ4L9jQvfEIEx9OfrAdC91pg+BLFCHRcNPA5kgqFlN5vxAkYg8I8sraFpUmNzMHxAxXYhBOsJAp762REz/lHSYpBTg8taEiKvhwzsL9laRgNOuUoDBeoEvPS/eaOe7tsVhQXixF0HeRWNmMEZltNNEc97c9GxEa1z1X598fbiNaY1/764fv0zedO6bv3U6qyObdMY9x4XagjCr0jalAQJmqE9cG1tM7bexAbedAkEJ1K3ihu4C84XJw+BA2PgqRQam9w62Jeof7vQ2jJP21wHNO47G0yWGyO95P2dRbuINK57OH14TQqWECbzhD0uYJWgmXMVmmdA1ReO0EHSTZMJvIDoDArXMjPF4DphtkCQtd1McWQNNVugmXbeTPFMm2q2QJR130zxZH0126BJGnByWNyVdUja/2m9u7rMGavnvST58sitQeibOP1zy4vnkbwVpR+QFJMsDpBixF9OIV2ykJJdaNkmayzxCkvsOo6yJX7KIKRn7TZpXXY+TNXwEnMIO8ya9PM/v/iudjhKWAUf2//85eJj+137/XVnXzTWnv85qc7s/+C3cv/nMeAdHWGfinKJc6F4xDwzq5Jc7ZemiqTDkpBVLLEwJEUzewuuqNgPrOjYOrjzKjPhFgVK6+ZXENRSyfWKRDutDXguAjbX0CWkYpeYA8ySsJ/Avp8k5+RaxewAUX2CO5/hGXc08aHk3ZuaFzVqFA6JJZrOVwNLdJ0UBOttbEPKjynztNMtkTe5sExW61E2yaCRLTsiqBxCKzOmhZDrzr+CXEUV7cot+Ji32k2z6S7GiE3w3brbRLUkYZDfH+AmwNp3lKup7gvW9e8VtYwXH0rT6yh/A01nlB9P2/kX8ZXZlHCJphc1A
aTavg9dQuymJtN2oxp3nC3yQoAoRoSHkdfGaLuD55vsJUNC+zClvQc9F+li7mEehuwausvW/+IB//3kGOvyv2L//wn2/zfqtTL/ewy45+//zJ8azyciT+JQwZPpsk/61GynaTfiGleOrqBQza/+LaHkr2kG+YBHD8oO6D9IB/RTbsZ9ooe7n37751Zsfdv2z7L/c8ruU+z/dJaxz9Of6/K/WrXWaMzkf7Wzk7My/3sMuF/+9yQSvH2cGt1n4vhA8U1IyLzdC57mIIiaX+DfCD9dy4GmN0w17Q+lVfJXRrkvxdB4n3OvmSwBvv4snnpMhQt4IlkuCZ7LaZOx9m/cNIZQazttyL2O4+7ww0kbmQhZyNbyH8wqT+R+hydyN1LQol+DWFYCZ9zU98jNyjcDy1131u9S1la4dM6d8WjX1JXvw0154jmjUZ54XoulrPeXMlOeeC5PPJcnnssTz+WJ52+95VXueGXsPsUdrxJKKKGEEkoooYQSSiihhBJKKKGEEkr448D/AT019aAAeAAA {{!}} base64 -d {{!}} tar -xzpf -&lt;br /&gt;
      &lt;br /&gt;
      if [[ &amp;quot;`authselect current -r`&amp;quot; != &amp;quot;custom/rcs&amp;quot; ]] ; then&lt;br /&gt;
        authselect select custom/rcs --force&lt;br /&gt;
        systemctl restart sssd&lt;br /&gt;
      fi&lt;br /&gt;
      &lt;br /&gt;
      systemctl enable sssd&lt;br /&gt;
&lt;br /&gt;
runcmd:&lt;br /&gt;
  - /usr/bin/expand_lvm_root}}&lt;br /&gt;
&lt;br /&gt;
== Infrastructure tools ==&lt;br /&gt;
&lt;br /&gt;
=== Generating a CloudStack API key ===&lt;br /&gt;
You can generate a CloudStack API key to automate infrastructure deployment using Terraform or CloudMonkey. A new API key can be generated by navigating to your profile page (top right) and then clicking on the &#039;Generate keys&#039; button.&lt;br /&gt;
[[File:CloudStack API Key.png|alt=CloudStack API Key|none|thumb|CloudStack API Key]]&lt;br /&gt;
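&lt;br /&gt;
Once you have a key pair, requests to the CloudStack API must be signed with it. A sketch of the standard CloudStack request-signing procedure is below; the endpoint URL and key values are placeholders, and only the query-string construction is shown (no request is actually sent):&lt;br /&gt;

```python
import base64
import hashlib
import hmac
from urllib.parse import quote


def sign_request(params: dict, api_key: str, secret_key: str) -> str:
    """Build a signed CloudStack API query string.

    Follows the CloudStack API signing procedure: sort the parameters,
    lowercase the query string, compute an HMAC-SHA1 over it with the
    secret key, then base64- and URL-encode the result.
    """
    params = dict(params, apikey=api_key, response="json")
    query = "&".join(
        f"{k}={quote(str(v), safe='*')}" for k, v in sorted(params.items())
    )
    digest = hmac.new(
        secret_key.encode(), query.lower().encode(), hashlib.sha1
    ).digest()
    signature = quote(base64.b64encode(digest).decode(), safe="")
    return f"{query}&signature={signature}"


# Placeholder endpoint and keys; substitute your own generated keys.
url = "https://example.org/client/api?" + sign_request(
    {"command": "listVirtualMachines"}, "APIKEY", "SECRETKEY"
)
```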
&lt;br /&gt;
=== CloudMonkey ===&lt;br /&gt;
CloudMonkey is a utility that makes it easier to interact with the CloudStack API. This tool may be used to help automate VM actions (such as start/stop/reboot), or infrastructure tasks (such as creating/destroying VMs, networks, or firewall rules). &lt;br /&gt;
&lt;br /&gt;
To get started with CloudMonkey, refer to the following resources:&lt;br /&gt;
&lt;br /&gt;
* Download from: &amp;lt;nowiki&amp;gt;https://github.com/apache/cloudstack-cloudmonkey/releases/tag/6.1.0&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
* Documentation at: &amp;lt;nowiki&amp;gt;https://cwiki.apache.org/confluence/display/CLOUDSTACK/CloudStack+cloudmonkey+CLI&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Terraform Integration ===&lt;br /&gt;
Terraform allows you to define infrastructure as code and can be used in conjunction with CloudStack to configure your virtual machines and guest networks. Use the official CloudStack provider.&lt;br /&gt;
&lt;br /&gt;
The following is an example Terraform file for reference. Specify your CloudStack API keys in a separate &amp;lt;code&amp;gt;vars.tf&amp;lt;/code&amp;gt; file.&lt;br /&gt;
{{Highlight|code=# Configure the CloudStack Provider&lt;br /&gt;
terraform {&lt;br /&gt;
  required_providers {&lt;br /&gt;
    cloudstack = {&lt;br /&gt;
      source = &amp;quot;cloudstack/cloudstack&amp;quot;&lt;br /&gt;
      version = &amp;quot;0.4.0&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
provider &amp;quot;cloudstack&amp;quot; {&lt;br /&gt;
  api_url    = &amp;quot;${var.cloudstack_api_url}&amp;quot;&lt;br /&gt;
  api_key    = &amp;quot;${var.cloudstack_api_key}&amp;quot;&lt;br /&gt;
  secret_key = &amp;quot;${var.cloudstack_secret_key}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create a new VPC&lt;br /&gt;
resource &amp;quot;cloudstack_vpc&amp;quot; &amp;quot;default&amp;quot; {&lt;br /&gt;
  name = &amp;quot;wedgenet-vpc&amp;quot;&lt;br /&gt;
  display_text = &amp;quot;wedgenet-vpc&amp;quot;&lt;br /&gt;
  cidr = &amp;quot;100.64.0.0/20&amp;quot;&lt;br /&gt;
  vpc_offering = &amp;quot;Default VPC offering&amp;quot;&lt;br /&gt;
  zone = &amp;quot;zone1&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create a new ACL&lt;br /&gt;
resource &amp;quot;cloudstack_network_acl&amp;quot; &amp;quot;default&amp;quot; {&lt;br /&gt;
    name  = &amp;quot;vpc-acl&amp;quot;&lt;br /&gt;
    vpc_id = &amp;quot;${cloudstack_vpc.default.id}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# One ingress and one egress rule for the ACL&lt;br /&gt;
resource &amp;quot;cloudstack_network_acl_rule&amp;quot; &amp;quot;ingress&amp;quot; {&lt;br /&gt;
    acl_id = &amp;quot;${cloudstack_network_acl.default.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    rule {&lt;br /&gt;
        action       = &amp;quot;allow&amp;quot;&lt;br /&gt;
        cidr_list    = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
        protocol     = &amp;quot;tcp&amp;quot;&lt;br /&gt;
        ports        = [&amp;quot;22&amp;quot;, &amp;quot;80&amp;quot;, &amp;quot;443&amp;quot;]&lt;br /&gt;
        traffic_type = &amp;quot;ingress&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;cloudstack_network_acl_rule&amp;quot; &amp;quot;egress&amp;quot; {&lt;br /&gt;
    acl_id = &amp;quot;${cloudstack_network_acl.default.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    rule {&lt;br /&gt;
        action       = &amp;quot;allow&amp;quot;&lt;br /&gt;
        cidr_list    = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
        protocol     = &amp;quot;all&amp;quot;&lt;br /&gt;
        traffic_type = &amp;quot;egress&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Create a new network in the VPC&lt;br /&gt;
resource &amp;quot;cloudstack_network&amp;quot; &amp;quot;primary&amp;quot; {&lt;br /&gt;
    name = &amp;quot;primary&amp;quot;&lt;br /&gt;
    display_text = &amp;quot;primary&amp;quot;&lt;br /&gt;
    cidr = &amp;quot;100.64.1.0/24&amp;quot;&lt;br /&gt;
    network_offering = &amp;quot;DefaultIsolatedNetworkOfferingForVpcNetworks&amp;quot;&lt;br /&gt;
    acl_id = &amp;quot;${cloudstack_network_acl.default.id}&amp;quot;&lt;br /&gt;
    vpc_id = &amp;quot;${cloudstack_vpc.default.id}&amp;quot;&lt;br /&gt;
    zone = &amp;quot;zone1&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create a new public IP address for this network&lt;br /&gt;
resource &amp;quot;cloudstack_ipaddress&amp;quot; &amp;quot;public_ip&amp;quot; {&lt;br /&gt;
    vpc_id = &amp;quot;${cloudstack_vpc.default.id}&amp;quot;&lt;br /&gt;
    network_id = &amp;quot;${cloudstack_network.primary.id}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create VMs. &lt;br /&gt;
resource &amp;quot;cloudstack_instance&amp;quot; &amp;quot;vm&amp;quot; {&lt;br /&gt;
  count = 1&lt;br /&gt;
  name = &amp;quot;vm${count.index+1}&amp;quot;&lt;br /&gt;
  zone = &amp;quot;zone1&amp;quot;&lt;br /&gt;
  service_offering = &amp;quot;rcs.c4&amp;quot;&lt;br /&gt;
  template = &amp;quot;RockyLinux 8.5&amp;quot;&lt;br /&gt;
  network_id = &amp;quot;${cloudstack_network.primary.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  # Cloud Init data can be used to configure your VM on first startup if your template supports Cloud Init&lt;br /&gt;
  user_data = &amp;lt;&amp;lt;EOF&lt;br /&gt;
#cloud-config&lt;br /&gt;
&lt;br /&gt;
# Require specific packages&lt;br /&gt;
packages:&lt;br /&gt;
 - tmux&lt;br /&gt;
 - git&lt;br /&gt;
 - tcpdump&lt;br /&gt;
&lt;br /&gt;
EOF&lt;br /&gt;
}&lt;br /&gt;
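&lt;br /&gt;
# The allocated public IP can be surfaced after &amp;quot;terraform apply&amp;quot; with an output block.&lt;br /&gt;
# This is a sketch only: it assumes the CloudStack provider exposes a computed&lt;br /&gt;
# ip_address attribute on the cloudstack_ipaddress resource defined above.&lt;br /&gt;
output &amp;quot;public_ip&amp;quot; {&lt;br /&gt;
    value = &amp;quot;${cloudstack_ipaddress.public_ip.ip_address}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;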
&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&lt;br /&gt;
=== Cannot create a volume snapshot ===&lt;br /&gt;
Volume snapshots can only be taken on VMs that are powered off.&lt;br /&gt;
&lt;br /&gt;
=== Cannot create a VM snapshot ===&lt;br /&gt;
Disk-only VM snapshots cannot be taken when the VM is running. If you intend to snapshot a running system, you must also snapshot its memory.&lt;br /&gt;
&lt;br /&gt;
=== VM state is still running after shutdown ===&lt;br /&gt;
After running &#039;shutdown&#039; inside a VM, CloudStack may still report the VM state as Running.&lt;br /&gt;
&lt;br /&gt;
Try a forced shutdown from the CloudStack management console. CloudStack does not detect power-state changes made from inside the guest, so the reported state of a VM is not updated when its power state changes outside of CloudStack (likely a bug).&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack&amp;diff=1842</id>
		<title>CloudStack</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack&amp;diff=1842"/>
		<updated>2022-06-02T06:18:32Z</updated>

		<summary type="html">&lt;p&gt;Darcy: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Apache CloudStack is an Infrastructure as a Service (IaaS) offering that provides Virtual Machines (VMs) for University of Calgary researchers.  It is part of Research Computing Services&#039; Digital Research Infrastructure (DRI).  This service allows you to quickly deploy VMs to support your research projects.&lt;br /&gt;
&lt;br /&gt;
If you need a web site or a database, or you wish to experiment with new software tools or test the latest release of a software package, CloudStack can provide you with an environment to support your work.&lt;br /&gt;
&lt;br /&gt;
There is no charge for the use of CloudStack.&lt;br /&gt;
&lt;br /&gt;
Not sure if this is what you need?  Please contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca].  We will be more than happy to discuss your needs with you.&lt;br /&gt;
&lt;br /&gt;
=Getting Started=&lt;br /&gt;
Not sure if you need a VM or a compute cluster? Is this the &amp;quot;Cloud&amp;quot;? Don&#039;t understand what we&#039;re talking about? Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] and we will assist you in using this service to support your research goals.&lt;br /&gt;
&lt;br /&gt;
Please refer to the [[CloudStack User Guide]] for details on using CloudStack.&lt;br /&gt;
&lt;br /&gt;
==Availability==&lt;br /&gt;
CloudStack is now available to any PI at the University of Calgary.&lt;br /&gt;
==Requesting Access==&lt;br /&gt;
Requests to use CloudStack are done through [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c Service Now].&lt;br /&gt;
&lt;br /&gt;
==End User Agreement==&lt;br /&gt;
Please read the [[CloudStack End User Agreement]].&lt;br /&gt;
&lt;br /&gt;
=Please Note=&lt;br /&gt;
&lt;br /&gt;
CloudStack is a research environment and is supported as such.&lt;br /&gt;
&lt;br /&gt;
Researchers have a great degree of freedom in how they use CloudStack and they are solely responsible for maintaining any VMs they deploy. The expectation is that the owners of the VMs will be patching and doing other maintenance work on a regular basis.  We will be more than happy to provide guidance on this process.  Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] for details.&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack_End_User_Agreement&amp;diff=1841</id>
		<title>CloudStack End User Agreement</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack_End_User_Agreement&amp;diff=1841"/>
		<updated>2022-06-02T06:16:18Z</updated>

		<summary type="html">&lt;p&gt;Darcy: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
CloudStack is an Infrastructure as a Service provided to University of Calgary researchers.  It allows them to quickly deploy Virtual Machines to support their research.&lt;br /&gt;
&lt;br /&gt;
Researchers have a great degree of freedom in how they use CloudStack.  They are solely responsible for maintaining any VMs they deploy.&lt;br /&gt;
&lt;br /&gt;
CloudStack is a research environment.  While the system is available 24/7, support is only available during University business hours.&lt;br /&gt;
&lt;br /&gt;
Researchers are asked to follow the best practices listed below to ensure that the system remains available for the campus community.&lt;br /&gt;
&lt;br /&gt;
==Best Practices==&lt;br /&gt;
&lt;br /&gt;
# Please stay abreast of security updates for your OS and apply them.&lt;br /&gt;
# Back up your data (RCS does not provide backups).&lt;br /&gt;
# Do not run Windows (This infrastructure is not licensed to run Windows).&lt;br /&gt;
# The University&#039;s &amp;quot;Acceptable Use of Electronic Resources and Information Policy&amp;quot; applies to your work using a VM.  Please see [https://www.ucalgary.ca/legal-services/university-policies-procedures/ University Policies and Procedures] and search for &amp;quot;Electronic&amp;quot;.&lt;br /&gt;
# This infrastructure is only rated to handle Level 1 and Level 2 data. Please see [https://www.ucalgary.ca/legal-services/university-legal-services/operating-standards-guidelines-forms/ University Legal Services] and select &amp;quot;Information Security Classification Standard&amp;quot; for details.&lt;br /&gt;
# You are responsible for the appropriate use of the VM by any accounts you have created.&lt;br /&gt;
# You should remove/disable accounts that are no longer required.&lt;br /&gt;
# All user accounts on the VM must have good passwords. See [https://it.ucalgary.ca/it-security/passwords-do-i-have-change-them/ here] for details on creating strong passwords.&lt;br /&gt;
# CloudStack is not meant as a High Performance Computing (HPC) number cruncher. If you have HPC needs, please see &amp;quot;[[How to get an account]]&amp;quot; for details on how to apply for an account on ARC.  Not sure what you need?  Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca].&lt;br /&gt;
# If your VM faces the outside world, please consider using appropriate security tools.  Contact [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] for assistance.&lt;br /&gt;
&lt;br /&gt;
==Important Notes==&lt;br /&gt;
In the event of a security incident, IT Operations/Security will shut down affected VMs.&lt;br /&gt;
&lt;br /&gt;
Non-critical patches/upgrades to CloudStack that are required will happen on Tuesday of each week.  Running VMs should not be affected.&lt;br /&gt;
&lt;br /&gt;
Non-critical patches/upgrades that require a complete restart of CloudStack will occur after one day&#039;s email notice.&lt;br /&gt;
&lt;br /&gt;
Urgent security patches to CloudStack may happen with little to no notice.&lt;br /&gt;
&lt;br /&gt;
Please check back with this document as changes may occur.&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack_End_User_Agreement&amp;diff=1840</id>
		<title>CloudStack End User Agreement</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack_End_User_Agreement&amp;diff=1840"/>
		<updated>2022-06-01T23:22:15Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* Best Practices */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
CloudStack is an Infrastructure as a Service provided to University of Calgary researchers.  It allows them to quickly deploy Virtual Machines to support their research.&lt;br /&gt;
&lt;br /&gt;
Researchers have a great degree of freedom in how they use CloudStack.  They are solely responsible for maintaining any VMs they deploy.&lt;br /&gt;
&lt;br /&gt;
CloudStack is a research environment.  While the system is available 24/7, support is only available during University business hours.&lt;br /&gt;
&lt;br /&gt;
Researchers are asked to follow the best practices listed below to ensure that the system remains available for the campus community.&lt;br /&gt;
&lt;br /&gt;
==Best Practices==&lt;br /&gt;
&lt;br /&gt;
# Please stay abreast of security updates for your OS and apply them.&lt;br /&gt;
# Backup your data (RCS does not provide backups).&lt;br /&gt;
# Do not run Windows (This infrastructure is not licensed to run Windows).&lt;br /&gt;
# The University&#039;s &amp;quot;Acceptable Use of Electronic Resources and Information Policy&amp;quot; applies to your work using a VM.  Please see [https://www.ucalgary.ca/legal-services/university-policies-procedures/ University Policies and Procedures] and search for &amp;quot;Electronic&amp;quot;.&lt;br /&gt;
# This infrastructure is only rated to handle Level 1 and Level 2 data. Please see [https://www.ucalgary.ca/legal-services/university-legal-services/operating-standards-guidelines-forms/ University Legal Services] and select &amp;quot;Information Security Classification Standard&amp;quot; for details.&lt;br /&gt;
# You are responsible for the appropriate use of the VM by any accounts you have created.&lt;br /&gt;
# You should remove/disable accounts that are no longer required.&lt;br /&gt;
# All user accounts on the VM must have good passwords. See [https://it.ucalgary.ca/it-security/passwords-do-i-have-change-them/ here] for details on creating strong passwords.&lt;br /&gt;
# CloudStack is not meant as a High Performance Computing (HPC) number cruncher. If you have HPC needs, please see &amp;quot;[[How to get an account]]&amp;quot; for details on how to apply for an account on ARC.  Not sure what you need?  Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca].&lt;br /&gt;
# If your VM faces the outside world, please consider using appropriate security tools.  Contact [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] for assistance.&lt;br /&gt;
&lt;br /&gt;
==Important notes==&lt;br /&gt;
In the event of a security incident, IT Operations/Security will shut down affected VMs.&lt;br /&gt;
&lt;br /&gt;
Non-critical patches/upgrades to CloudStack that are required will happen on Tuesday of each week.  Running VMs should not be affected.&lt;br /&gt;
&lt;br /&gt;
Non-critical patches/upgrades that require a complete restart of CloudStack will occur after 1 day email notice.&lt;br /&gt;
&lt;br /&gt;
Urgent security patches to CloudStack may happen with little to no notice.&lt;br /&gt;
&lt;br /&gt;
Please check back with this document as changes may occur.&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack_End_User_Agreement&amp;diff=1839</id>
		<title>CloudStack End User Agreement</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack_End_User_Agreement&amp;diff=1839"/>
		<updated>2022-06-01T23:21:57Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* Best Practices */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
CloudStack is an Infrastructure as a Service provided to University of Calgary researchers.  It allows them to quickly deploy Virtual Machines to support their research.&lt;br /&gt;
&lt;br /&gt;
Researchers have a great degree of freedom in how they use CloudStack.  They are solely responsible for maintaining any VMs they deploy.&lt;br /&gt;
&lt;br /&gt;
CloudStack is a research environment.  While the system is available 24/7, support is only available during University business hours.&lt;br /&gt;
&lt;br /&gt;
Researchers are asked to follow the best practices listed below to ensure that the system remains available for the campus community.&lt;br /&gt;
&lt;br /&gt;
==Best Practices==&lt;br /&gt;
&lt;br /&gt;
# Please stay abreast of security updates for your OS and apply them.&lt;br /&gt;
# Backup your data (RCS does not provide backups).&lt;br /&gt;
# Do not run Windows (This infrastructure is not licensed to run Windows).&lt;br /&gt;
# The University&#039;s &amp;quot;Acceptable Use of Electronic Resources and Information Policy&amp;quot; applies to your work using a VM.  Please see [https://www.ucalgary.ca/legal-services/university-policies-procedures/ University Policies and Procedures] and search for &amp;quot;Electronic&amp;quot;.&lt;br /&gt;
# This infrastructure is only rated to handle Level 1 and Level 2 data. Please see [https://www.ucalgary.ca/legal-services/university-legal-services/operating-standards-guidelines-forms/ University Legal Services] and select &amp;quot;Information Security Classification Standard&amp;quot; for details.&lt;br /&gt;
# You are responsible for the appropriate use of the VM by any accounts you have created.&lt;br /&gt;
# You should remove/disable accounts that are no longer required.&lt;br /&gt;
# All user accounts on the VM must have good passwords. See [https://it.ucalgary.ca/it-security/passwords-do-i-have-change-them/ here] for details on creating strong passwords.&lt;br /&gt;
# CloudStack is not meant as a High Performance Computing (HPC) number cruncher. If you have HPC needs, please see &amp;quot;[[How to get an account]]&amp;quot; for details on how to apply for an account on ARC.  Not sure what you need?  Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca].&lt;br /&gt;
# If your VM faces the outside world, please consider using appropriate security tools.  Contact [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] for assistance.&lt;br /&gt;
&lt;br /&gt;
==Important notes==&lt;br /&gt;
In the event of a security incident, IT Operations/Security will shut down affected VMs.&lt;br /&gt;
&lt;br /&gt;
Non-critical patches/upgrades to CloudStack that are required will happen on Tuesday of each week.  Running VMs should not be affected.&lt;br /&gt;
&lt;br /&gt;
Non-critical patches/upgrades that require a complete restart of CloudStack will occur after 1 day email notice.&lt;br /&gt;
&lt;br /&gt;
Urgent security patches to CloudStack may happen with little to no notice.&lt;br /&gt;
&lt;br /&gt;
Please check back with this document as changes may occur.&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack_End_User_Agreement&amp;diff=1838</id>
		<title>CloudStack End User Agreement</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack_End_User_Agreement&amp;diff=1838"/>
		<updated>2022-06-01T23:20:32Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* Best Practices */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
CloudStack is an Infrastructure as a Service provided to University of Calgary researchers.  It allows them to quickly deploy Virtual Machines to support their research.&lt;br /&gt;
&lt;br /&gt;
Researchers have a great degree of freedom in how they use CloudStack.  They are solely responsible for maintaining any VMs they deploy.&lt;br /&gt;
&lt;br /&gt;
CloudStack is a research environment.  While the system is available 24/7, support is only available during University business hours.&lt;br /&gt;
&lt;br /&gt;
Researchers are asked to follow the best practices listed below to ensure that the system remains available for the campus community.&lt;br /&gt;
&lt;br /&gt;
==Best Practices==&lt;br /&gt;
&lt;br /&gt;
# Please stay abreast of security updates for your OS and apply them.&lt;br /&gt;
# Backup your data (RCS does not provide backups).&lt;br /&gt;
# Do not run Windows (This infrastructure is not licensed to run Windows).&lt;br /&gt;
# The University&#039;s &amp;quot;Acceptable Use of Electronic Resources and Information Policy&amp;quot; applies to your work using a VM.  Please look [https://www.ucalgary.ca/legal-services/university-policies-procedures/ here] and search for &amp;quot;Electronic&amp;quot;.&lt;br /&gt;
# This infrastructure is only rated to handle Level 1 and Level 2 data. Please see [https://www.ucalgary.ca/legal-services/university-legal-services/operating-standards-guidelines-forms/ University Legal Services] and select &amp;quot;Information Security Classification Standard&amp;quot; for details.&lt;br /&gt;
# You are responsible for the appropriate use of the VM by any accounts you have created.&lt;br /&gt;
# You should remove/disable accounts that are no longer required.&lt;br /&gt;
# All user accounts on the VM must have good passwords. See [https://it.ucalgary.ca/it-security/passwords-do-i-have-change-them/ here] for details on creating strong passwords.&lt;br /&gt;
# CloudStack is not meant as a High Performance Computing (HPC) number cruncher. If you have HPC needs, please see &amp;quot;[[How to get an account]]&amp;quot; for details on how to apply for an account on ARC.  Not sure what you need?  Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca].&lt;br /&gt;
# If your VM faces the outside world, please consider using appropriate security tools.  Contact [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] for assistance.&lt;br /&gt;
&lt;br /&gt;
==Important notes==&lt;br /&gt;
In the event of a security incident, IT Operations/Security will shut down affected VMs.&lt;br /&gt;
&lt;br /&gt;
Non-critical patches/upgrades to CloudStack that are required will happen on Tuesday of each week.  Running VMs should not be affected.&lt;br /&gt;
&lt;br /&gt;
Non-critical patches/upgrades that require a complete restart of CloudStack will occur after 1 day email notice.&lt;br /&gt;
&lt;br /&gt;
Urgent security patches to CloudStack may happen with little to no notice.&lt;br /&gt;
&lt;br /&gt;
Please check back with this document as changes may occur.&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack_End_User_Agreement&amp;diff=1837</id>
		<title>CloudStack End User Agreement</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack_End_User_Agreement&amp;diff=1837"/>
		<updated>2022-06-01T23:17:07Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* Best Practices */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
CloudStack is an Infrastructure as a Service provided to University of Calgary researchers.  It allows them to quickly deploy Virtual Machines to support their research.&lt;br /&gt;
&lt;br /&gt;
Researchers have a great degree of freedom in how they use CloudStack.  They are solely responsible for maintaining any VMs they deploy.&lt;br /&gt;
&lt;br /&gt;
CloudStack is a research environment.  While the system is available 24/7, support is only available during University business hours.&lt;br /&gt;
&lt;br /&gt;
Researchers are asked to follow the best practices listed below to ensure that the system remains available for the campus community.&lt;br /&gt;
&lt;br /&gt;
==Best Practices==&lt;br /&gt;
&lt;br /&gt;
# Please stay abreast of security updates for your OS and apply them.&lt;br /&gt;
# Backup your data (RCS does not provide backups).&lt;br /&gt;
# Do not run Windows (This infrastructure is not licensed to run Windows).&lt;br /&gt;
# The University&#039;s &amp;quot;Acceptable Use of Electronic Resources and Information Policy&amp;quot; applies to your work using a VM.  Please look [https://www.ucalgary.ca/legal-services/university-policies-procedures/ here] and search for &amp;quot;Electronic&amp;quot;.&lt;br /&gt;
# This infrastructure is only rated to handle Level 1 and Level 2 data. Please see [https://www.ucalgary.ca/legal-services/university-legal-services/operating-standards-guidelines-forms/ here] and select &amp;quot;Information Security Classification Standard&amp;quot; for details.&lt;br /&gt;
# If you create user accounts on your VM, make sure to manage them appropriately (remove/disable accounts that are no longer required).&lt;br /&gt;
# If you have created accounts, you are responsible for the appropriate use of the VM by those accounts.&lt;br /&gt;
# All user accounts on the VM must have good passwords. See [https://it.ucalgary.ca/it-security/passwords-do-i-have-change-them/ here] for details on creating strong passwords.&lt;br /&gt;
# CloudStack is not meant as a High Performance Computing (HPC) number cruncher. If you have HPC needs, please see &amp;quot;[[How to get an account]]&amp;quot; for details on how to apply for an account on ARC.  Not sure what you need?  Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca].&lt;br /&gt;
# If your VM faces the outside world, please consider using appropriate security tools.  Contact [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] for assistance.&lt;br /&gt;
&lt;br /&gt;
==Important notes==&lt;br /&gt;
In the event of a security incident, IT Operations/Security will shut down affected VMs.&lt;br /&gt;
&lt;br /&gt;
Non-critical patches/upgrades to CloudStack that are required will happen on Tuesday of each week.  Running VMs should not be affected.&lt;br /&gt;
&lt;br /&gt;
Non-critical patches/upgrades that require a complete restart of CloudStack will occur after 1 day email notice.&lt;br /&gt;
&lt;br /&gt;
Urgent security patches to CloudStack may happen with little to no notice.&lt;br /&gt;
&lt;br /&gt;
Please check back with this document as changes may occur.&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack_End_User_Agreement&amp;diff=1836</id>
		<title>CloudStack End User Agreement</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack_End_User_Agreement&amp;diff=1836"/>
		<updated>2022-06-01T23:16:03Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
CloudStack is an Infrastructure as a Service provided to University of Calgary researchers.  It allows them to quickly deploy Virtual Machines to support their research.&lt;br /&gt;
&lt;br /&gt;
Researchers have a great degree of freedom in how they use CloudStack.  They are solely responsible for maintaining any VMs they deploy.&lt;br /&gt;
&lt;br /&gt;
CloudStack is a research environment.  While the system is available 24/7, support is only available during University business hours.&lt;br /&gt;
&lt;br /&gt;
Researchers are asked to follow the best practices listed below to ensure that the system remains available for the campus community.&lt;br /&gt;
&lt;br /&gt;
==Best Practices==&lt;br /&gt;
&lt;br /&gt;
# Please stay abreast of security updates for your OS and apply them.&lt;br /&gt;
# Backup your data (RCS does not provide backups).&lt;br /&gt;
# Do not run Windows (This infrastructure is not licensed to run Windows).&lt;br /&gt;
# The University&#039;s &amp;quot;Acceptable Use of Electronic Resources and Information Policy&amp;quot; applies to your work using a VM.  Please look [https://www.ucalgary.ca/legal-services/university-policies-procedures/ here] and search for &amp;quot;Electronic&amp;quot;.&lt;br /&gt;
# This environment is only rated to handle Level 1 and Level 2 data. Please see [https://www.ucalgary.ca/legal-services/university-legal-services/operating-standards-guidelines-forms/ here] and select &amp;quot;Information Security Classification Standard&amp;quot; for details.&lt;br /&gt;
# If you create user accounts on your VM, make sure to manage them appropriately (remove/disable accounts that are no longer required).&lt;br /&gt;
# If you have created accounts, you are responsible for the appropriate use of the VM by those accounts.&lt;br /&gt;
# All user accounts on the VM must have good passwords. See [https://it.ucalgary.ca/it-security/passwords-do-i-have-change-them/ here] for details on creating strong passwords.&lt;br /&gt;
# CloudStack is not meant as a High Performance Computing (HPC) number cruncher. If you have HPC needs, please see &amp;quot;[[How to get an account]]&amp;quot; for details on how to apply for an account on ARC.  Not sure what you need?  Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca].&lt;br /&gt;
# If your VM faces the outside world, please consider using appropriate security tools.  Contact [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] for assistance.&lt;br /&gt;
&lt;br /&gt;
==Important notes==&lt;br /&gt;
In the event of a security incident, IT Operations/Security will shut down affected VMs.&lt;br /&gt;
&lt;br /&gt;
Non-critical patches/upgrades to CloudStack that are required will happen on Tuesday of each week.  Running VMs should not be affected.&lt;br /&gt;
&lt;br /&gt;
Non-critical patches/upgrades that require a complete restart of CloudStack will occur after 1 day email notice.&lt;br /&gt;
&lt;br /&gt;
Urgent security patches to CloudStack may happen with little to no notice.&lt;br /&gt;
&lt;br /&gt;
Please check back with this document as changes may occur.&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack_End_User_Agreement&amp;diff=1835</id>
		<title>CloudStack End User Agreement</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack_End_User_Agreement&amp;diff=1835"/>
		<updated>2022-06-01T23:14:48Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* Important notes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
CloudStack is an Infrastructure as a Service provided to University of Calgary researchers to allow them to quickly deploy Virtual Machines to support their research.&lt;br /&gt;
&lt;br /&gt;
Researchers have a great degree of freedom in how they use CloudStack.  They are solely responsible for maintaining any VMs they deploy.&lt;br /&gt;
&lt;br /&gt;
CloudStack is a research environment.  While the system is available 24/7, support is only available during University business hours.&lt;br /&gt;
&lt;br /&gt;
We ask that researchers follow the best practices listed below to ensure that the system remains available for the campus community.&lt;br /&gt;
&lt;br /&gt;
==Best Practices==&lt;br /&gt;
&lt;br /&gt;
# Please stay abreast of security updates for your OS and apply them.&lt;br /&gt;
# Backup your data (RCS does not provide backups).&lt;br /&gt;
# Do not run Windows (This infrastructure is not licensed to run Windows).&lt;br /&gt;
# The University&#039;s &amp;quot;Acceptable Use of Electronic Resources and Information Policy&amp;quot; applies to your work using a VM.  Please look [https://www.ucalgary.ca/legal-services/university-policies-procedures/ here] and search for &amp;quot;Electronic&amp;quot;.&lt;br /&gt;
# This environment is only rated to handle Level 1 and Level 2 data. Please see [https://www.ucalgary.ca/legal-services/university-legal-services/operating-standards-guidelines-forms/ here] and select &amp;quot;Information Security Classification Standard&amp;quot; for details.&lt;br /&gt;
# If you create user accounts on your VM, make sure to manage them appropriately (remove/disable accounts that are no longer required).&lt;br /&gt;
# If you have created accounts, you are responsible for the appropriate use of the VM by those accounts.&lt;br /&gt;
# All user accounts on the VM must have good passwords. See [https://it.ucalgary.ca/it-security/passwords-do-i-have-change-them/ here] for details on creating strong passwords.&lt;br /&gt;
# CloudStack is not meant as a High Performance Computing (HPC) number cruncher. If you have HPC needs, please see &amp;quot;[[How to get an account]]&amp;quot; for details on how to apply for an account on ARC.  Not sure what you need?  Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca].&lt;br /&gt;
# If your VM faces the outside world, please consider using appropriate security tools.  Contact [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] for assistance.&lt;br /&gt;
&lt;br /&gt;
==Important notes==&lt;br /&gt;
In the event of a security incident, IT Operations/Security will shut down affected VMs.&lt;br /&gt;
&lt;br /&gt;
Non-critical patches/upgrades to CloudStack that are required will happen on Tuesday of each week.  Running VMs should not be affected.&lt;br /&gt;
&lt;br /&gt;
Non-critical patches/upgrades that require a complete restart of CloudStack will occur after 1 day email notice.&lt;br /&gt;
&lt;br /&gt;
Urgent security patches to CloudStack may happen with little to no notice.&lt;br /&gt;
&lt;br /&gt;
Please check back with this document as changes may occur.&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack_End_User_Agreement&amp;diff=1834</id>
		<title>CloudStack End User Agreement</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack_End_User_Agreement&amp;diff=1834"/>
		<updated>2022-06-01T23:13:44Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* Best Practices */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
CloudStack is an Infrastructure as a Service provided to University of Calgary researchers to allow them to quickly deploy Virtual Machines to support their research.&lt;br /&gt;
&lt;br /&gt;
Researchers have a great degree of freedom in how they use CloudStack.  They are solely responsible for maintaining any VMs they deploy.&lt;br /&gt;
&lt;br /&gt;
CloudStack is a research environment.  While the system is available 24/7, support is only available during University business hours.&lt;br /&gt;
&lt;br /&gt;
We ask that researchers follow the best practices listed below to ensure that the system remains available for the campus community.&lt;br /&gt;
&lt;br /&gt;
==Best Practices==&lt;br /&gt;
&lt;br /&gt;
# Please stay abreast of security updates for your OS and apply them.&lt;br /&gt;
# Backup your data (RCS does not provide backups).&lt;br /&gt;
# Do not run Windows (This infrastructure is not licensed to run Windows).&lt;br /&gt;
# The University&#039;s &amp;quot;Acceptable Use of Electronic Resources and Information Policy&amp;quot; applies to your work using a VM.  Please look [https://www.ucalgary.ca/legal-services/university-policies-procedures/ here] and search for &amp;quot;Electronic&amp;quot;.&lt;br /&gt;
# This environment is only rated to handle Level 1 and Level 2 data. Please see [https://www.ucalgary.ca/legal-services/university-legal-services/operating-standards-guidelines-forms/ here] and select &amp;quot;Information Security Classification Standard&amp;quot; for details.&lt;br /&gt;
# If you create user accounts on your VM, make sure to manage them appropriately (remove/disable accounts that are no longer required).&lt;br /&gt;
# If you have created accounts, you are responsible for the appropriate use of the VM by those accounts.&lt;br /&gt;
# All user accounts on the VM must have good passwords. See [https://it.ucalgary.ca/it-security/passwords-do-i-have-change-them/ here] for details on creating strong passwords.&lt;br /&gt;
# CloudStack is not meant as a High Performance Computing (HPC) number cruncher. If you have HPC needs, please see &amp;quot;[[How to get an account]]&amp;quot; for details on how to apply for an account on ARC.  Not sure what you need?  Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca].&lt;br /&gt;
# If your VM faces the outside world, please consider using appropriate security tools.  Contact [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] for assistance.&lt;br /&gt;
&lt;br /&gt;
==Important notes==&lt;br /&gt;
In the event of a security incident, IT Operations/Security will shut down affected VMs.&lt;br /&gt;
&lt;br /&gt;
Required non-critical patches/upgrades to CloudStack will happen on Tuesday of each week.  Running VMs should not be affected.&lt;br /&gt;
&lt;br /&gt;
Non-critical patches/upgrades that require a complete restart of CloudStack will occur after one day's email notice.&lt;br /&gt;
&lt;br /&gt;
Urgent security patches to CloudStack may happen with little to no notice.&lt;br /&gt;
&lt;br /&gt;
Please check this document periodically, as it may change.&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack_End_User_Agreement&amp;diff=1833</id>
		<title>CloudStack End User Agreement</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack_End_User_Agreement&amp;diff=1833"/>
		<updated>2022-06-01T23:09:35Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
CloudStack is an Infrastructure as a Service provided to University of Calgary researchers to allow them to quickly deploy Virtual Machines to support their research.&lt;br /&gt;
&lt;br /&gt;
Researchers have a great degree of freedom in how they use CloudStack.  They are solely responsible for maintaining any VMs they deploy.&lt;br /&gt;
&lt;br /&gt;
CloudStack is a research environment.  While the system is available 24/7, support is only available during University business hours.&lt;br /&gt;
&lt;br /&gt;
We ask that researchers follow the best practices listed below to ensure that the system remains available for the campus community.&lt;br /&gt;
&lt;br /&gt;
==Best Practices==&lt;br /&gt;
&lt;br /&gt;
# Please stay abreast of security updates for your OS and apply them.&lt;br /&gt;
# Back up your data (RCS does not provide backups).&lt;br /&gt;
# Do not run Windows (Our infrastructure is not licensed to run Windows).&lt;br /&gt;
# The University&#039;s &amp;quot;Acceptable Use of Electronic Resources and Information Policy&amp;quot; applies to your work using a VM.  Please look [https://www.ucalgary.ca/legal-services/university-policies-procedures/ here] and search for &amp;quot;Electronic&amp;quot;.&lt;br /&gt;
# This environment is only rated to handle Level 1 and Level 2 data. Please see [https://www.ucalgary.ca/legal-services/university-legal-services/operating-standards-guidelines-forms/ here] and select &amp;quot;Information Security Classification Standard&amp;quot; for details.&lt;br /&gt;
# If you create user accounts on your VM, make sure to manage them appropriately (remove/disable accounts that are no longer required).&lt;br /&gt;
# If you have created accounts, you are responsible for the appropriate use of the VM by those accounts.&lt;br /&gt;
# All user accounts on the VM must have good passwords. See [https://it.ucalgary.ca/it-security/passwords-do-i-have-change-them/ here] for details on creating strong passwords.&lt;br /&gt;
# CloudStack is not meant as a High Performance Computing (HPC) number cruncher. If you have HPC needs, please see &amp;quot;[[How to get an account]]&amp;quot; for details on how to apply for an account on ARC.  Not sure what you need?  Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca].&lt;br /&gt;
# If your VM faces the outside world, please consider using appropriate security tools.  Contact [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] for assistance.&lt;br /&gt;
&lt;br /&gt;
==Important notes==&lt;br /&gt;
In the event of a security incident, IT Operations/Security will shut down affected VMs.&lt;br /&gt;
&lt;br /&gt;
Required non-critical patches/upgrades to CloudStack will happen on Tuesday of each week.  Running VMs should not be affected.&lt;br /&gt;
&lt;br /&gt;
Non-critical patches/upgrades that require a complete restart of CloudStack will occur after one day's email notice.&lt;br /&gt;
&lt;br /&gt;
Urgent security patches to CloudStack may happen with little to no notice.&lt;br /&gt;
&lt;br /&gt;
Please check this document periodically, as it may change.&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack_End_User_Agreement&amp;diff=1832</id>
		<title>CloudStack End User Agreement</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack_End_User_Agreement&amp;diff=1832"/>
		<updated>2022-06-01T23:08:42Z</updated>

		<summary type="html">&lt;p&gt;Darcy: /* Best Practices */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
CloudStack is an Infrastructure as a Service provided to University of Calgary researchers to allow them to quickly deploy Virtual Machines to support their research.&lt;br /&gt;
&lt;br /&gt;
Researchers have a great degree of freedom in how they use CloudStack.  As such, they are solely responsible for maintaining any VMs they deploy.&lt;br /&gt;
&lt;br /&gt;
CloudStack is a research environment.  While the system is available 24/7, support is only available during University business hours.&lt;br /&gt;
&lt;br /&gt;
We ask that researchers follow the best practices listed below to ensure that the system remains available for the campus community.&lt;br /&gt;
&lt;br /&gt;
==Best Practices==&lt;br /&gt;
&lt;br /&gt;
# Please stay abreast of security updates for your OS and apply them.&lt;br /&gt;
# Back up your data (RCS does not provide backups).&lt;br /&gt;
# Do not run Windows (Our infrastructure is not licensed to run Windows).&lt;br /&gt;
# The University&#039;s &amp;quot;Acceptable Use of Electronic Resources and Information Policy&amp;quot; applies to your work using a VM.  Please look [https://www.ucalgary.ca/legal-services/university-policies-procedures/ here] and search for &amp;quot;Electronic&amp;quot;.&lt;br /&gt;
# This environment is only rated to handle Level 1 and Level 2 data. Please see [https://www.ucalgary.ca/legal-services/university-legal-services/operating-standards-guidelines-forms/ here] and select &amp;quot;Information Security Classification Standard&amp;quot; for details.&lt;br /&gt;
# If you create user accounts on your VM, make sure to manage them appropriately (remove/disable accounts that are no longer required).&lt;br /&gt;
# If you have created accounts, you are responsible for the appropriate use of the VM by those accounts.&lt;br /&gt;
# All user accounts on the VM must have good passwords. See [https://it.ucalgary.ca/it-security/passwords-do-i-have-change-them/ here] for details on creating strong passwords.&lt;br /&gt;
# CloudStack is not meant as a High Performance Computing (HPC) number cruncher. If you have HPC needs, please see &amp;quot;[[How to get an account]]&amp;quot; for details on how to apply for an account on ARC.  Not sure what you need?  Contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca].&lt;br /&gt;
# If your VM faces the outside world, please consider using appropriate security tools.  Contact [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] for assistance.&lt;br /&gt;
&lt;br /&gt;
==Important notes==&lt;br /&gt;
In the event of a security incident, IT Operations/Security will shut down affected VMs.&lt;br /&gt;
&lt;br /&gt;
Required non-critical patches/upgrades to CloudStack will happen on Tuesday of each week.  Running VMs should not be affected.&lt;br /&gt;
&lt;br /&gt;
Non-critical patches/upgrades that require a complete restart of CloudStack will occur after one day's email notice.&lt;br /&gt;
&lt;br /&gt;
Urgent security patches to CloudStack may happen with little to no notice.&lt;br /&gt;
&lt;br /&gt;
Please check this document periodically, as it may change.&lt;/div&gt;</summary>
		<author><name>Darcy</name></author>
	</entry>
</feed>