<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://rcs.ucalgary.ca/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Lleung</id>
	<title>RCSWiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://rcs.ucalgary.ca/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Lleung"/>
	<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/Special:Contributions/Lleung"/>
	<updated>2026-04-07T04:34:01Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.3</generator>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Template:ARC_Cluster_Status&amp;diff=3904</id>
		<title>Template:ARC Cluster Status</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Template:ARC_Cluster_Status&amp;diff=3904"/>
		<updated>2025-09-17T17:30:41Z</updated>

		<summary type="html">&lt;p&gt;Lleung: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Cluster Status&lt;br /&gt;
| cluster = ARC&lt;br /&gt;
| status = yellow&lt;br /&gt;
| title = Cluster maintenance ongoing. &lt;br /&gt;
| message = System is being upgraded. Slurm and the ARC login nodes may be unavailable at certain times.&lt;br /&gt;
&lt;br /&gt;
See the [[ARC Cluster Status]] page for system notices. &lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Template:ARC_Cluster_Status&amp;diff=3812</id>
		<title>Template:ARC Cluster Status</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Template:ARC_Cluster_Status&amp;diff=3812"/>
		<updated>2025-07-29T20:31:39Z</updated>

		<summary type="html">&lt;p&gt;Lleung: Updated to reflect dave&amp;#039;s changes.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Cluster Status&lt;br /&gt;
| cluster = ARC&lt;br /&gt;
| status = yellow&lt;br /&gt;
| title = Cluster operational - Problems with /bulk&lt;br /&gt;
| message = System is generally operational. Emergency outage planned for July 31 on /bulk&lt;br /&gt;
&lt;br /&gt;
See the [[ARC Cluster Status]] page for system notices. &lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=ARC_Cluster_Status&amp;diff=3811</id>
		<title>ARC Cluster Status</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=ARC_Cluster_Status&amp;diff=3811"/>
		<updated>2025-07-29T20:31:20Z</updated>

		<summary type="html">&lt;p&gt;Lleung: Use the templated arc cluster status widget&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ARC Cluster Status}}&lt;br /&gt;
&lt;br /&gt;
== System Messages ==&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = ⚠️ January System Updates&lt;br /&gt;
| date = 2023/01/01&lt;br /&gt;
| message =&lt;br /&gt;
Beginning January 16, 2023, the ARC cluster will undergo operating system updates. We shall do our utmost to minimize disruption and allow ongoing jobs to be completed. New jobs may be temporarily held from scheduling.&lt;br /&gt;
&lt;br /&gt;
The ARC login node will reboot on the morning of January 16. Please save your work and log out if possible.&lt;br /&gt;
&lt;br /&gt;
The upgrade is planned to be fully complete by January 20.&lt;br /&gt;
&lt;br /&gt;
If you encounter any system issues, do not hesitate to let us know.&lt;br /&gt;
&lt;br /&gt;
Thank you for your cooperation.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = System Updates Completed&lt;br /&gt;
| date = 2023/01/24&lt;br /&gt;
| message =&lt;br /&gt;
The upgrade has been completed. The following has been changed:&lt;br /&gt;
* OS Updated to Rocky Linux 8.7&lt;br /&gt;
* Slurm updated to 22.05.7&lt;br /&gt;
* Apptainer replaces Singularity&lt;br /&gt;
* Each job will have its own /tmp, /dev/shm, /run/user/$uid mounted&lt;br /&gt;
&lt;br /&gt;
If you encounter any system issues, do not hesitate to let us know.&lt;br /&gt;
&lt;br /&gt;
Thank you for your cooperation.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Filesystem Issues&lt;br /&gt;
| date = 2023/02/28&lt;br /&gt;
| message =&lt;br /&gt;
We are currently investigating a filesystem issue that is causing filesystem slowdowns across ARC.&lt;br /&gt;
&lt;br /&gt;
We will update you with more information as it becomes available.&lt;br /&gt;
&lt;br /&gt;
Thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Filesystem Issues&lt;br /&gt;
| date = 2023/03/01&lt;br /&gt;
| message =&lt;br /&gt;
We are still currently investigating a filesystem issue that is causing filesystem slowdowns across ARC. Some jobs on ARC have been paused to help us find the root cause of the slowdowns.&lt;br /&gt;
&lt;br /&gt;
We will update you with more information as it becomes available.&lt;br /&gt;
&lt;br /&gt;
Thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = ARC Login node reboot&lt;br /&gt;
| date = 2023/03/02&lt;br /&gt;
| message =&lt;br /&gt;
The ARC login node will be rebooted this afternoon for emergency maintenance. This downtime is needed to help mitigate the filesystem slowdowns experienced on the login node. Jobs will continue running and scheduling during this time.&lt;br /&gt;
&lt;br /&gt;
All logins to the ARC login node will be terminated at 3:00 PM, and the login node will remain unavailable until 4:00 PM.&lt;br /&gt;
&lt;br /&gt;
We apologize for the inconvenience and thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = ⚠️ Filesystem Issues&lt;br /&gt;
| date = 2023/03/02&lt;br /&gt;
| message =&lt;br /&gt;
We are still currently investigating a filesystem issue that is causing filesystem slowdowns on specific nodes in our MSRDC location.&lt;br /&gt;
&lt;br /&gt;
We will update you with more information as it becomes available.&lt;br /&gt;
&lt;br /&gt;
We apologize for the inconvenience and thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Filesystem Issues Resolved&lt;br /&gt;
| date = 2023/03/10&lt;br /&gt;
| message =&lt;br /&gt;
We have upgraded the filesystem routers in our MSRDC location to address the performance issues.&lt;br /&gt;
&lt;br /&gt;
Please let us know if you experience any issues with the filesystem performance.&lt;br /&gt;
&lt;br /&gt;
Thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Open OnDemand reboot&lt;br /&gt;
| date = 2023/05/01&lt;br /&gt;
| message =&lt;br /&gt;
On May 1, 2023, the ARC Open OnDemand node will be rebooted between 5PM and 6PM. Expected downtime will be approximately 15 minutes.&lt;br /&gt;
&lt;br /&gt;
If you encounter any system issues, do not hesitate to let us know.&lt;br /&gt;
&lt;br /&gt;
Thank you for your cooperation.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Apptainer (Singularity) on ARC Login Node&lt;br /&gt;
| date = 2023/06/22&lt;br /&gt;
| message =&lt;br /&gt;
Apptainer (Singularity) containers may experience an error when&lt;br /&gt;
running on the ARC login node. If apptainer complains that a system&lt;br /&gt;
administrator needs to enable user namespaces, simply run your&lt;br /&gt;
containers inside a job.&lt;br /&gt;
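&lt;br /&gt;
As a minimal sketch (the image name mycontainer.sif, partition, and resource values below are placeholders), a container can be run inside an interactive job like this:&lt;br /&gt;
{{Highlight|code=# Hypothetical example: run an Apptainer container inside a Slurm job instead of on the login node&lt;br /&gt;
srun --partition=single --time=01:00:00 --mem=4G --pty apptainer run mycontainer.sif|lang=bash}}&lt;br /&gt;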
&lt;br /&gt;
This is a temporary measure due to a security vulnerability that will be&lt;br /&gt;
patched soon.&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Lattice, Single, cpu2013 partition changes&lt;br /&gt;
| date = 2023/07/13&lt;br /&gt;
| message =&lt;br /&gt;
The Lattice, Single, and cpu2013 partitions have all been decommissioned. The Single&lt;br /&gt;
partition will be replaced by the nodes formerly in the cpu2013 partition, but&lt;br /&gt;
the new partition will still be called single.&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Open OnDemand reboot&lt;br /&gt;
| date = 2023/10/17&lt;br /&gt;
| message =&lt;br /&gt;
Open OnDemand will be rebooted on October 17, 2023 for an update. It will be down for up to 30 minutes.&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Storage Upgrade MARC/ARC cluster&lt;br /&gt;
| date = 2023/10/23&lt;br /&gt;
| message =&lt;br /&gt;
We will be performing storage upgrades on the MARC/ARC cluster on &lt;br /&gt;
November 16 and 17, 2023. To facilitate this, we will be throttling &lt;br /&gt;
down the number of jobs on both clusters while the upgrades are &lt;br /&gt;
performed.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Systems Operating Normally&lt;br /&gt;
| date = 2024/05/03&lt;br /&gt;
| message =&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Power Interruption&lt;br /&gt;
| date = 2024/05/07&lt;br /&gt;
| message = ARC experienced a brief power outage around 11 AM on May 7, 2024.&lt;br /&gt;
Most compute nodes have rebooted or are rebooting. Most jobs running at this time &lt;br /&gt;
were lost. ARC administrators are actively working on restarting compute &lt;br /&gt;
nodes. Sorry for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = GPU a100 Node Reservation&lt;br /&gt;
| date = 2024/06/03&lt;br /&gt;
| message = Job submissions targeted to the GPU a100 partition will be &lt;br /&gt;
affected by a temporary reservation on the nodes to accommodate the RCS&lt;br /&gt;
summer school class taking place on 2024/Jun/10. The reservation will end &lt;br /&gt;
shortly after. Please submit your jobs normally and the scheduler will &lt;br /&gt;
start them as soon as the nodes are available. Sorry for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = GPU a100 Node Reservation Removed&lt;br /&gt;
| date = 2024/06/11&lt;br /&gt;
| message = GPU a100 Nodes in ARC have been returned to normal scheduling. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Notice of Upcoming Partial Outage&lt;br /&gt;
| date = 2024/08/23&lt;br /&gt;
| message = Several compute nodes from the ARC cluster will be unavailable &lt;br /&gt;
between Sept 23 and Sept 27 inclusive (subject to change). All compute nodes &lt;br /&gt;
in cpu2019, cpu2021/2, and gpu-v100, and most nodes from bigmem and gpu-a100 will be &lt;br /&gt;
affected. These nodes will return to service as soon as the work is complete.  &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update I&lt;br /&gt;
| date = 2024/09/25&lt;br /&gt;
| message = Due to hardware issues that are blocking our original maintenance window, most compute nodes that were taken offline on Monday have been brought back online today. An additional partial outage will occur starting next Tuesday for the same nodes.&lt;br /&gt;
&lt;br /&gt;
On Tuesday, October 1, 2024, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until Friday October 4, 2024.&lt;br /&gt;
&lt;br /&gt;
We apologize for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update II&lt;br /&gt;
| date = 2024/10/04&lt;br /&gt;
| message = The maintenance window will be extended until at least Monday, October 7, 2024 due to a power distribution issue in our renovated data centre.&lt;br /&gt;
&lt;br /&gt;
Currently, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until at least Monday, October 7, 2024. Affected WDF-Altis GPU nodes include: wdfgpu[1-2,6,8-12].&lt;br /&gt;
&lt;br /&gt;
We apologize for the extended downtime and will update you as soon as we have additional information from our operations team.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update III&lt;br /&gt;
| date = 2024/10/07&lt;br /&gt;
| message = Due to technical issues beyond our control the maintenance window will be extended until at least Tuesday, October 15, 2024.&lt;br /&gt;
&lt;br /&gt;
Currently, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until at least Tuesday, October 15, 2024. Affected WDF-Altis GPU nodes include: wdfgpu[1-2,6,8-12].&lt;br /&gt;
&lt;br /&gt;
We apologize for the extended downtime and will update you as soon as we have additional information from our operations team.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Normal Scheduling has resumed. &lt;br /&gt;
| date = 2024/10/08&lt;br /&gt;
| message = The ARC cluster has been successfully brought online and nodes are running jobs normally. We apologize for the extended downtime. &lt;br /&gt;
&lt;br /&gt;
Please reach out to support@hpc.ucalgary.ca with any issues or concerns. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Scheduled Maintenance &lt;br /&gt;
| date = 2024/12/11&lt;br /&gt;
| message = The ARC login node will be rebooted on Tuesday December 17 for scheduled maintenance. It will be down for a few minutes and return shortly. Job scheduling and jobs running on the cluster will not be affected. Thank you for understanding. &lt;br /&gt;
&lt;br /&gt;
Please reach out to support@hpc.ucalgary.ca with any issues or concerns. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Scheduled Maintenance and OS Update&lt;br /&gt;
| date = 2025/01/07&lt;br /&gt;
| message = The ARC cluster will be rebooted for OS updates on Monday January 20, 2025. Please make sure to save your work and log out before the reboot happens. Scheduling will be paused until the cluster is back, but queued jobs will remain in the queue and nodes will start scheduling when the cluster is ready. Thank you for understanding. &lt;br /&gt;
&lt;br /&gt;
Please reach out to support@hpc.ucalgary.ca with any issues or concerns. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = ⚠️ Scheduled Maintenance and OS Update&lt;br /&gt;
| date = 2025/01/15&lt;br /&gt;
| message = The ARC cluster will be down for maintenance and upgrades starting 9AM Monday, January 20, 2025 through Wednesday, January 22, 2025. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the duration of the upgrade window:&lt;br /&gt;
* Scheduling will be paused and new jobs will be queued. Any queued jobs will start scheduling only after the upgrade is complete.&lt;br /&gt;
* Access to files via the login node and arc-dtn will generally be available but intermittent. File transfers on the DTN node, including Globus file transfers, may be interrupted during this window.&lt;br /&gt;
&lt;br /&gt;
Please make sure to save your work prior to this outage window to avoid any loss of work.&lt;br /&gt;
&lt;br /&gt;
During this time the following changes will happen:&lt;br /&gt;
&lt;br /&gt;
1. Ethernet will replace the 11-year-old, unsupported InfiniBand on the following partitions:&lt;br /&gt;
* cpu2023 (temporary)&lt;br /&gt;
* Parallel&lt;br /&gt;
* Theia/Synergy/cpu2017-bf05&lt;br /&gt;
* Single&lt;br /&gt;
Any multi-node jobs (MPI) running on these partitions will have increased latency going forward. If you run multi-node jobs, make sure to run on a partition such as cpu2019, cpu2021, or cpu2022.&lt;br /&gt;
&lt;br /&gt;
2. A component of the NetApp filer will be replaced. Access to /bulk will be unavailable on Wednesday, January 22, 2025.&lt;br /&gt;
&lt;br /&gt;
3. The compute node operating system will be updated to Rocky Linux 8.10.&lt;br /&gt;
&lt;br /&gt;
4. The Slurm scheduling system will be upgraded.&lt;br /&gt;
&lt;br /&gt;
5. The Open OnDemand web portal will be upgraded.&lt;br /&gt;
&lt;br /&gt;
Please reach out to support@hpc.ucalgary.ca with any issues or concerns.&lt;br /&gt;
&lt;br /&gt;
⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️&lt;br /&gt;
Update Jan 18, 2025&lt;br /&gt;
&lt;br /&gt;
Around 10 AM, ARC experienced an electrical power brownout. Some percentage (how many is unknown at this time) of the nodes lost electrical power during this time, causing the loss of a number of running jobs.&lt;br /&gt;
&lt;br /&gt;
Sorry for the inconvenience.  &lt;br /&gt;
&lt;br /&gt;
Since ARC is shutting down for maintenance on Monday, Jan 20, replacement jobs will likely not start unless they request a time limit shorter than the time remaining until 8 AM Monday.&lt;br /&gt;
⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Maintenance Complete&lt;br /&gt;
| date = 2025/01/22&lt;br /&gt;
| message = The ARC cluster upgrade is complete.&lt;br /&gt;
&lt;br /&gt;
During this time the following changes happened:&lt;br /&gt;
&lt;br /&gt;
1. Ethernet has replaced the 11-year-old, unsupported InfiniBand on the following partitions:&lt;br /&gt;
* cpu2023 (temporary)&lt;br /&gt;
* Parallel&lt;br /&gt;
* Theia/Synergy/cpu2017-bf05&lt;br /&gt;
* Single&lt;br /&gt;
Any multi-node jobs (MPI) running on these partitions will have increased latency going forward. If you run multi-node jobs, make sure to run on a partition such as cpu2019, cpu2021, or cpu2022 (see the example batch script header after this list).&lt;br /&gt;
&lt;br /&gt;
2. A component of the NetApp filer was replaced successfully.&lt;br /&gt;
&lt;br /&gt;
3. The compute node operating system was updated to Rocky Linux 8.10.&lt;br /&gt;
&lt;br /&gt;
4. The Slurm scheduling system was upgraded.&lt;br /&gt;
&lt;br /&gt;
5. The Open OnDemand web portal was upgraded.&lt;br /&gt;
&lt;br /&gt;
6. The Parallel partition was renamed to Legacy to reflect the lack of an interconnect for parallel MPI work and was restricted to a maximum of 4-node jobs.&lt;br /&gt;
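&lt;br /&gt;
As a hedged sketch (the executable name and resource values are placeholders), a multi-node MPI batch script header targeting one of the partitions suited to multi-node work could look like this:&lt;br /&gt;
{{Highlight|code=#!/bin/bash&lt;br /&gt;
# Example only: request a partition suited to multi-node MPI work&lt;br /&gt;
#SBATCH --partition=cpu2019&lt;br /&gt;
#SBATCH --nodes=2&lt;br /&gt;
#SBATCH --ntasks-per-node=4&lt;br /&gt;
#SBATCH --time=02:00:00&lt;br /&gt;
&lt;br /&gt;
# my_mpi_program is a placeholder for your MPI executable&lt;br /&gt;
srun ./my_mpi_program|lang=bash}}&lt;br /&gt;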
&lt;br /&gt;
Please reach out to support@hpc.ucalgary.ca with any issues or concerns.&lt;br /&gt;
&lt;br /&gt;
Jan 23, 9:08 AM&lt;br /&gt;
&lt;br /&gt;
Remount complete. ARC is back in full service.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Module Command Upgrade&lt;br /&gt;
| date = 2025/02/03&lt;br /&gt;
| message = Upgrade of the module command&lt;br /&gt;
&lt;br /&gt;
On Tuesday, February 11, 2025, the module command will be upgraded to a new version on ARC. This should result in new capabilities and a slightly different visual experience when using the module command. Loading modules is not expected to change.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Module Command Upgraded&lt;br /&gt;
| date = 2025/02/12&lt;br /&gt;
| message = The module command was upgraded&lt;br /&gt;
&lt;br /&gt;
On Tuesday, February 12, 2025, the module command was upgraded to a new version on ARC. This should result in new capabilities and a slightly different visual experience when using the module command. Loading modules should not have changed.&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Support email address down&lt;br /&gt;
| date = 2025/03/07&lt;br /&gt;
| message = support@hpc.ucalgary.ca Unavailable&lt;br /&gt;
&lt;br /&gt;
Please be informed that our support email address (support@hpc.ucalgary.ca) for RCS is currently not working. We are working to bring it back as soon as possible. Please keep an eye on this space for updates. The clusters are working normally, but support will not receive your messages at this time. We will begin responding as soon as we can get it back. &lt;br /&gt;
&lt;br /&gt;
Apologies for the inconvenience.&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Support email address functional&lt;br /&gt;
| date = 2025/03/07&lt;br /&gt;
| message = support@hpc.ucalgary.ca is back&lt;br /&gt;
&lt;br /&gt;
support@hpc.ucalgary.ca has been repaired and RCS can be contacted there. If you had reached out for assistance in recent days without response please follow up as we may not have received your initial email. &lt;br /&gt;
&lt;br /&gt;
Apologies for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Interactive Job Timelimit will be Enforced&lt;br /&gt;
| date = 2025/04/11&lt;br /&gt;
| message = In order to improve the scheduling and job throughput efficiency of ARC, interactive jobs will be limited to a maximum of 5 hours of runtime. Interactive jobs that are submitted with a time limit over 5 hours will be rejected at submission time. This change will be made on Monday, April 28, 2025.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Interactive Job Timelimit Is Now Enforced&lt;br /&gt;
| date = 2025/04/28&lt;br /&gt;
| message = In order to improve the scheduling and job throughput efficiency of ARC, interactive jobs are now limited to a maximum of 5 hours of runtime. Interactive jobs that are submitted with a time limit over 5 hours will be rejected at submission time.&lt;br /&gt;
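&lt;br /&gt;
For illustration (a minimal sketch; the partition and resource values are placeholders), an interactive job request that stays within the limit could look like this:&lt;br /&gt;
{{Highlight|code=# Request an interactive allocation with a time limit of at most 5 hours&lt;br /&gt;
salloc --partition=single --time=05:00:00 --mem=4G --cpus-per-task=1|lang=bash}}&lt;br /&gt;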
&lt;br /&gt;
Apr 29, 2025&lt;br /&gt;
To increase the security posture of the ARC cluster, administrators will be installing Trend Micro on cluster login nodes over the week starting Apr 30. Please report any inconsistencies to support@hpc.ucalgary.ca.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = ⚠️ Bulk Filesystem Emergency Maintenance&lt;br /&gt;
| date = 2025/07/29&lt;br /&gt;
| message = The filer that provides the /bulk filesystem will be down for emergency repairs at 12 noon on Thursday, July 31. No access to files on /bulk will be possible for the duration of the multi-hour outage. Jobs that use /bulk will start but will pause when they attempt to access /bulk; they should continue once service is restored. Sorry for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Navbox ARC}}&lt;br /&gt;
[[Category:ARC]]&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=How_to_get_an_account&amp;diff=3790</id>
		<title>How to get an account</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=How_to_get_an_account&amp;diff=3790"/>
		<updated>2025-05-27T17:18:54Z</updated>

		<summary type="html">&lt;p&gt;Lleung: ARC Application Form URL&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;All eligible University of Calgary researchers may request an HPC account. Visiting or external collaborators must first obtain a General Associate account before they are eligible for an HPC account. Undergraduates must be confirmed by their research supervisor. In all cases, the applicant must have an &#039;&#039;&#039;active UCalgary IT account&#039;&#039;&#039; to be able to access our HPC systems. &lt;br /&gt;
&lt;br /&gt;
This process is only for production Level 1/2 HPC systems. &lt;br /&gt;
&lt;br /&gt;
* For teaching/learning applications and for undergraduates who need an account on TALC, please visit [[TALC Cluster|Teaching And Learning Cluster (TALC)]] for more information.&lt;br /&gt;
* For Level 3/4 (high-security) HPC applications, please apply for a [[MARC accounts|MARC]] account instead.&lt;br /&gt;
&lt;br /&gt;
Please refer to the following table for next steps: &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: left;&amp;quot; |I am a University of Calgary researcher or graduate student&lt;br /&gt;
|All University of Calgary researchers, including graduate students, can directly request an account on the ARC cluster.&lt;br /&gt;
&#039;&#039;&#039;Next steps&#039;&#039;&#039;: Please email the completed application form to support@hpc.ucalgary.ca.&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: left;&amp;quot; | I am not a University of Calgary researcher&lt;br /&gt;
|If you are not a University of Calgary researcher and have collaborative work that requires access to the ARC cluster, you must first obtain [[External collaborators |&#039;&#039;&#039;General Associate&#039;&#039;&#039;]] status with the University in order to obtain a UCalgary email account. You may apply for an ARC account only after obtaining your University of Calgary IT and email account.&lt;br /&gt;
&#039;&#039;&#039;Next steps&#039;&#039;&#039;: Obtain [[External collaborators |&#039;&#039;&#039;General Associate&#039;&#039;&#039;]] status, then email the completed application form to support@hpc.ucalgary.ca.&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: left;&amp;quot; |I am an undergraduate student&lt;br /&gt;
|If an undergraduate student is working for a research group and needs access to HPC infrastructure, their account must be confirmed by their research supervisor. Additionally, the supervisor must confirm that the nature of the research work the student is conducting is related to the supervisor&#039;s area of research and that the HPC infrastructure is necessary to facilitate this research.&lt;br /&gt;
Every undergraduate student who needs an account on ARC is still expected to submit their own application and answer all the questions &#039;&#039;&#039;on their own&#039;&#039;&#039;. If the student has difficulties answering the application questions, it may be too early to create an account, or the ARC environment may not be appropriate for their research. ARC is a production research system, and untrained users can potentially disrupt other researchers&#039; work.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Next steps&#039;&#039;&#039;: Please have the undergraduate student email the completed application form to support@hpc.ucalgary.ca and cc the research supervisor. The research supervisor must then reply to support@hpc.ucalgary.ca with their approval.&lt;br /&gt;
|} &lt;br /&gt;
&lt;br /&gt;
To apply, please &#039;&#039;&#039;copy and paste&#039;&#039;&#039; the [[How to get an account#ARC_application_form|Application form]] below into an email message to [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] and then respond to the questions in the text. &lt;br /&gt;
&lt;br /&gt;
Please also include the subsequent &#039;&#039;&#039;[[How to get an account#Clauses of understanding|clauses of understanding]]&#039;&#039;&#039; in your application, as your agreement to these terms is mandatory.&lt;br /&gt;
&lt;br /&gt;
== ARC application form ==&lt;br /&gt;
&#039;&#039;&#039;About myself:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*What is your &#039;&#039;&#039;status&#039;&#039;&#039; with the University of Calgary? (e.g. undergraduate student, PhD student, MS student, postdoc, visiting researcher)&lt;br /&gt;
&lt;br /&gt;
* What &#039;&#039;&#039;department&#039;&#039;&#039; are you in?&lt;br /&gt;
&lt;br /&gt;
*What research &#039;&#039;&#039;group&#039;&#039;&#039; do you work for?&lt;br /&gt;
&lt;br /&gt;
*Who is your &#039;&#039;&#039;supervisor&#039;&#039;&#039;?&lt;br /&gt;
:(If you are a Principal Investigator yourself, please respond accordingly).&lt;br /&gt;
&lt;br /&gt;
*How did you learn about the ARC cluster?&lt;br /&gt;
&lt;br /&gt;
*Do you have any experience with &#039;&#039;&#039;Linux&#039;&#039;&#039;?&lt;br /&gt;
&lt;br /&gt;
*Have you used &#039;&#039;&#039;compute clusters&#039;&#039;&#039; before?&lt;br /&gt;
&lt;br /&gt;
*Does anybody else in your group use ARC for their work?&lt;br /&gt;
&lt;br /&gt;
*What &#039;&#039;&#039;shortcoming of your work computer&#039;&#039;&#039; are you trying to address by using a compute cluster? &#039;&#039;&#039;What is lacking&#039;&#039;&#039; on your computer that is required for your work?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;About the project(s) I am going to work on&#039;&#039;&#039;: &lt;br /&gt;
&lt;br /&gt;
*Please tell us briefly about the &#039;&#039;&#039;research topic&#039;&#039;&#039; you are going to be working on using ARC.&lt;br /&gt;
&lt;br /&gt;
*What are the &#039;&#039;&#039;data&#039;&#039;&#039; you are planning to work on? What &#039;&#039;&#039;form&#039;&#039;&#039; is it in?&lt;br /&gt;
&lt;br /&gt;
*What &#039;&#039;&#039;kind of analysis&#039;&#039;&#039; is it?&lt;br /&gt;
&lt;br /&gt;
*What &#039;&#039;&#039;software&#039;&#039;&#039; are you going to be using?&lt;br /&gt;
&lt;br /&gt;
*Do you have an estimate for the &#039;&#039;&#039;amount&#039;&#039;&#039; of work (please provide, if known; for example: 3000 simulations; 6 months; 560 CPU-years; etc.)?&lt;br /&gt;
&lt;br /&gt;
=== Clauses of understanding ===&lt;br /&gt;
By applying for an ARC account I certify &#039;&#039;&#039;I understand that&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
* The storage provided by the ARC cluster is only suitable for &#039;&#039;&#039;Level 1 and Level 2 data&#039;&#039;&#039;, as classified according to the University of Calgary Information &#039;&#039;&#039;Security Classification Standard&#039;&#039;&#039; (https://www.ucalgary.ca/legal-services/sites/default/files/teams/1/Standards-Legal-Information-Security-Classification-Standard.pdf)&lt;br /&gt;
* ARC&#039;s availability may change with little to no warning. While RCS takes precautions to avoid interrupting running jobs, there may be instances where interruptions occur, including power or network interruptions. RCS may also take nodes offline for regular maintenance, and node availability is subject to change.&lt;br /&gt;
* ARC&#039;s storage should not be used as your main storage facility for research data. Access to your data may be interrupted or temporarily unavailable when ARC is under maintenance. Data on ARC is not backed up. Your research group should ensure that the master copy of research data is stored elsewhere. We highly recommend that only data used for computational analysis on ARC reside on ARC.&lt;br /&gt;
* User accounts on ARC are subject to &#039;&#039;&#039;automatic deletion after 12 months of inactivity&#039;&#039;&#039;. Please log in periodically to prevent your account from being deleted. You will be notified before the account is deleted. Please note that when an account is deleted, &#039;&#039;&#039;all the data&#039;&#039;&#039; stored in the home directory of the account are &#039;&#039;&#039;deleted&#039;&#039;&#039; as well.&lt;br /&gt;
&lt;br /&gt;
== Book online training sessions ==&lt;br /&gt;
After obtaining your ARC account, you may [[book online training sessions]] with one of our analysts to get started with ARC.&lt;br /&gt;
&lt;br /&gt;
[[Category:Guides]]&lt;br /&gt;
[[Category:How-Tos]]&lt;br /&gt;
{{Navbox Guides}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=How_to_get_an_account&amp;diff=3789</id>
		<title>How to get an account</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=How_to_get_an_account&amp;diff=3789"/>
		<updated>2025-05-27T17:18:21Z</updated>

		<summary type="html">&lt;p&gt;Lleung: ARC Application Form URL&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;All eligible University of Calgary researchers may request an HPC account. Visiting or external collaborators must first obtain a General Associate account before they are eligible for an HPC account. Undergraduates must be confirmed by their research supervisor. In all cases, the applicant must have an &#039;&#039;&#039;active UCalgary IT account&#039;&#039;&#039; to be able to access our HPC systems. &lt;br /&gt;
&lt;br /&gt;
This process is only for production Level 1/2 HPC systems. &lt;br /&gt;
&lt;br /&gt;
* For teaching/learning applications and for undergraduates who need an account on TALC, please visit [[TALC Cluster|Teaching And Learning Cluster (TALC)]] for more information.&lt;br /&gt;
* For Level 3/4 (high-security) HPC applications, please apply for a [[MARC accounts|MARC]] account instead.&lt;br /&gt;
&lt;br /&gt;
Please refer to the following table for next steps: &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: left;&amp;quot; |I am a University of Calgary researcher or graduate student&lt;br /&gt;
|All University of Calgary researchers, including graduate students, can directly request an account on the ARC cluster.&lt;br /&gt;
&#039;&#039;&#039;Next steps&#039;&#039;&#039;: Please email the completed application form to support@hpc.ucalgary.ca.&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: left;&amp;quot; | I am not a University of Calgary researcher&lt;br /&gt;
|If you are not a University of Calgary researcher and have collaborative work that requires access to the ARC cluster, you must first obtain [[External collaborators |&#039;&#039;&#039;General Associate&#039;&#039;&#039;]] status with the University in order to obtain a UCalgary email account. You may apply for an ARC account only after obtaining your University of Calgary IT and email account.&lt;br /&gt;
&#039;&#039;&#039;Next steps&#039;&#039;&#039;: Obtain [[External collaborators |&#039;&#039;&#039;General Associate&#039;&#039;&#039;]] status, then email the completed application form to support@hpc.ucalgary.ca.&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: left;&amp;quot; |I am an undergraduate student&lt;br /&gt;
|If an undergraduate student is working for a research group and needs access to HPC infrastructure, their account must be confirmed by their research supervisor. Additionally, the supervisor must confirm that the nature of the research work the student is conducting is related to the supervisor&#039;s area of research and that the HPC infrastructure is necessary to facilitate this research.&lt;br /&gt;
Every undergraduate student who needs an account on ARC is still expected to submit their own application and answer all the questions &#039;&#039;&#039;on their own&#039;&#039;&#039;. If the student has difficulties answering the application questions, it may be too early to create an account, or the ARC environment may not be appropriate for their research. ARC is a production research system, and untrained users can potentially disrupt other researchers&#039; work.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Next steps&#039;&#039;&#039;: Please have the undergraduate student email the completed application form to support@hpc.ucalgary.ca and cc the research supervisor. The research supervisor must then reply to support@hpc.ucalgary.ca with their approval.&lt;br /&gt;
|} &lt;br /&gt;
&lt;br /&gt;
To apply, please &#039;&#039;&#039;copy and paste&#039;&#039;&#039; the [[How to get an account#ARC_Application_form|Application form]] below into an email message to [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] and then respond to the questions in the text. &lt;br /&gt;
&lt;br /&gt;
Please also include the subsequent &#039;&#039;&#039;[[How to get an account#Clauses of understanding|clauses of understanding]]&#039;&#039;&#039; in your application, as your agreement to these terms is mandatory.&lt;br /&gt;
&lt;br /&gt;
== ARC application form ==&lt;br /&gt;
&#039;&#039;&#039;About myself:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*What is your &#039;&#039;&#039;status&#039;&#039;&#039; with the University of Calgary? (e.g. undergraduate student, PhD student, MS student, postdoc, visiting researcher)&lt;br /&gt;
&lt;br /&gt;
* What &#039;&#039;&#039;department&#039;&#039;&#039; are you in?&lt;br /&gt;
&lt;br /&gt;
*What research &#039;&#039;&#039;group&#039;&#039;&#039; do you work for?&lt;br /&gt;
&lt;br /&gt;
*Who is your &#039;&#039;&#039;supervisor&#039;&#039;&#039;?&lt;br /&gt;
:(If you are a Principal Investigator yourself, please respond accordingly).&lt;br /&gt;
&lt;br /&gt;
*How did you learn about the ARC cluster?&lt;br /&gt;
&lt;br /&gt;
*Do you have any experience with &#039;&#039;&#039;Linux&#039;&#039;&#039;?&lt;br /&gt;
&lt;br /&gt;
*Have you used &#039;&#039;&#039;compute clusters&#039;&#039;&#039; before?&lt;br /&gt;
&lt;br /&gt;
*Does anybody else in your group use ARC for their work?&lt;br /&gt;
&lt;br /&gt;
*What &#039;&#039;&#039;shortcoming of your work computer&#039;&#039;&#039; are you trying to address by using a compute cluster? &#039;&#039;&#039;What is lacking&#039;&#039;&#039; on your computer that is required for your work?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;About the project(s) I am going to work on&#039;&#039;&#039;: &lt;br /&gt;
&lt;br /&gt;
*Please tell us briefly about the &#039;&#039;&#039;research topic&#039;&#039;&#039; you are going to be working on using ARC.&lt;br /&gt;
&lt;br /&gt;
*What are the &#039;&#039;&#039;data&#039;&#039;&#039; you are planning to work on? What &#039;&#039;&#039;form&#039;&#039;&#039; is it in?&lt;br /&gt;
&lt;br /&gt;
*What &#039;&#039;&#039;kind of analysis&#039;&#039;&#039; is it?&lt;br /&gt;
&lt;br /&gt;
*What &#039;&#039;&#039;software&#039;&#039;&#039; are you going to be using?&lt;br /&gt;
&lt;br /&gt;
*Do you have an estimate for the &#039;&#039;&#039;amount&#039;&#039;&#039; of work (please provide, if known; for example: 3000 simulations; 6 months; 560 CPU-years; etc.)?&lt;br /&gt;
&lt;br /&gt;
=== Clauses of understanding ===&lt;br /&gt;
By applying for an ARC account I certify &#039;&#039;&#039;I understand that&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
* The storage provided by the ARC cluster is only suitable for &#039;&#039;&#039;Level 1 and Level 2 data&#039;&#039;&#039;, as classified according to the University of Calgary Information &#039;&#039;&#039;Security Classification Standard&#039;&#039;&#039; (https://www.ucalgary.ca/legal-services/sites/default/files/teams/1/Standards-Legal-Information-Security-Classification-Standard.pdf)&lt;br /&gt;
* ARC&#039;s availability may change with little to no warning. While RCS takes precautions to avoid interrupting running jobs, there may be instances where interruptions occur, including power or network interruptions. RCS may also take nodes offline for regular maintenance, and node availability is subject to change.&lt;br /&gt;
* ARC&#039;s storage should not be used as your main storage facility for research data. Access to your data may be interrupted or temporarily unavailable when ARC is under maintenance. Data on ARC is not backed up. Your research group should ensure that the master copy of research data is stored elsewhere. We highly recommend that only data used for computational analysis on ARC reside on ARC.&lt;br /&gt;
* User accounts on ARC are subject to &#039;&#039;&#039;automatic deletion after 12 months of inactivity&#039;&#039;&#039;. Please log in periodically to prevent your account from being deleted. You will be notified before the account is deleted. Please note that when an account is deleted, &#039;&#039;&#039;all the data&#039;&#039;&#039; stored in the home directory of the account are &#039;&#039;&#039;deleted&#039;&#039;&#039; as well.&lt;br /&gt;
&lt;br /&gt;
== Book online training sessions ==&lt;br /&gt;
After obtaining your ARC account, you may [[book online training sessions]] with one of our analysts to get started with ARC.&lt;br /&gt;
&lt;br /&gt;
[[Category:Guides]]&lt;br /&gt;
[[Category:How-Tos]]&lt;br /&gt;
{{Navbox Guides}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=RCS_Home_Page&amp;diff=3757</id>
		<title>RCS Home Page</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=RCS_Home_Page&amp;diff=3757"/>
		<updated>2025-03-10T21:39:34Z</updated>

		<summary type="html">&lt;p&gt;Lleung: Undo revision 3750 by Dmitri (talk)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Research Computing Services (RCS) is a group within the wider University of Calgary Information Technologies team that plans, manages, and supports high performance computing (HPC) systems in use by researchers throughout the University of Calgary.  Our primary focus is to meet the increasing demand for engineering and scientific computation by offering a wide range of specialized services to help researchers solve highly complex real-world problems or run large scale computationally intensive workloads on our high-end HPC resources.&lt;br /&gt;
&lt;br /&gt;
This RCS Wiki contains technical documentation for users of HPC systems operated by RCS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- &lt;br /&gt;
In case cluster status changes:&lt;br /&gt;
    *  set the status to yellow or red &lt;br /&gt;
    *  provide a custom &#039;title&#039; and &#039;message&#039;&lt;br /&gt;
&lt;br /&gt;
{{Cluster Status&lt;br /&gt;
|status=green&lt;br /&gt;
}}&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
=== Contact us for support ===&lt;br /&gt;
&lt;br /&gt;
* For general RCS/HPC inquiries, please email: [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca]&lt;br /&gt;
* For IT related issues (networking, VPN, email), please email: [mailto:it@ucalgary.ca it@ucalgary.ca]&lt;br /&gt;
* For Compute Canada specific questions: [mailto:support@tech.alliancecan.ca support@tech.alliancecan.ca]&lt;br /&gt;
&lt;br /&gt;
{{Clear}}&lt;br /&gt;
&amp;lt;div class=&amp;quot;row&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div class=&amp;quot;col-md-6&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== General information ==&lt;br /&gt;
* [[General Cluster Guidelines and Policies]]&lt;br /&gt;
* [[How to get an account]]&lt;br /&gt;
* [[Data ownership]]&lt;br /&gt;
* [[Connecting to RCS HPC Systems]]&lt;br /&gt;
* [[External collaborators]]&lt;br /&gt;
&lt;br /&gt;
* [[CloudStack|Cloud/Virtual Machine Infrastructure (CloudStack)]]&lt;br /&gt;
&lt;br /&gt;
* [[On-line resources for new Linux and ARC users]]&lt;br /&gt;
* [[Acknowledging Research Computing Services Group]]&lt;br /&gt;
&lt;br /&gt;
== Cluster Guides ==&lt;br /&gt;
* [[ARC Cluster Guide]] - ARC is a general purpose cluster for University of Calgary researchers.&lt;br /&gt;
* [[GLaDOS Cluster Guide]] - GLaDOS is a researcher-owned cluster maintained by Research Computing Services.&lt;br /&gt;
* [[TALC Cluster Guide]] - Teaching and Learning Cluster (TALC) is a cluster created by Research Computing Services to support academic courses and workshops.&lt;br /&gt;
* [[MARC Cluster Guide]] - Medical Advanced Research Computing (MARC) is a cluster at the University of Calgary created by Research Computing Services in 2020.&lt;br /&gt;
&lt;br /&gt;
== Other services ==&lt;br /&gt;
&lt;br /&gt;
* [[Jupyter Notebooks]]&lt;br /&gt;
* [[Open OnDemand | Open OnDemand portal]]&lt;br /&gt;
&lt;br /&gt;
== Software pages ==&lt;br /&gt;
* [[Managing software on ARC]]&lt;br /&gt;
* [[Gaussian on ARC]] -- How to use Gaussian 16 on ARC.&lt;br /&gt;
* [[Apache Spark on ARC]]&lt;br /&gt;
* [[ARC Software pages]]&lt;br /&gt;
* [[Bioinformatics applications]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div class=&amp;quot;col-md-6&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Running courses on HPC resources ==&lt;br /&gt;
* [[TALC Cluster|TALC]] - Teaching and Learning Cluster (TALC) is a cluster created by Research Computing Services to support academic courses and workshops.&lt;br /&gt;
* [[TALC Terms of Use]] - Terms of use to which TALC account holders must agree to use the cluster.&lt;br /&gt;
* [[List of courses on TALC]] - A list of current and historical courses taught using TALC.&lt;br /&gt;
&lt;br /&gt;
== Training ==&lt;br /&gt;
* Our [[HPC Systems]]&lt;br /&gt;
* [[HPC Linux topics]] - A list of topics on which RCS technical support staff can provide one-on-one or group training&lt;br /&gt;
* [[Courses]]&lt;br /&gt;
* [[Linux Introduction]]&lt;br /&gt;
* [[What is a scheduler?]]&lt;br /&gt;
* [[Running jobs]]&lt;br /&gt;
* [[Data storage options for UofC researchers]]&lt;br /&gt;
* [[Security and privacy]]&lt;br /&gt;
* [[How to transfer data]]&lt;br /&gt;
&lt;br /&gt;
* [[UofC Services]]&lt;br /&gt;
&lt;br /&gt;
* [[Book online training sessions]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [[How-Tos | More How-Tos]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Clear}}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Open_OnDemand&amp;diff=3737</id>
		<title>Open OnDemand</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Open_OnDemand&amp;diff=3737"/>
		<updated>2025-02-24T21:46:02Z</updated>

		<summary type="html">&lt;p&gt;Lleung: /* Additional Apps */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Open OnDemand is a web portal for accessing certain clusters operated by Research Computing Services. The web portal provides a convenient way to access the login node, your files, and certain graphical applications such as Jupyter Notebooks and remote desktops. This service complements the existing command-line options for accessing HPC resources.&lt;br /&gt;
&lt;br /&gt;
Open OnDemand is an open source project actively developed by the Ohio Supercomputer Center.&lt;br /&gt;
&lt;br /&gt;
== Access ==&lt;br /&gt;
&lt;br /&gt;
=== Get an account ===&lt;br /&gt;
Before using Open OnDemand, you will need to have an account on the cluster. If you do not already have an account, please review the cluster&#039;s quick start guide for information on getting started.&lt;br /&gt;
&lt;br /&gt;
=== Connect to OnDemand ===&lt;br /&gt;
You may access Open OnDemand for the following clusters using your UCIT credentials. Sign-on is handled through the University&#039;s Single Sign-On mechanism and requires Multi-Factor Authentication to be enabled.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Cluster&lt;br /&gt;
!Open OnDemand Access&lt;br /&gt;
|-&lt;br /&gt;
|[[ARC Cluster Guide|ARC Cluster]]&lt;br /&gt;
|[https://ood-arc.rcs.ucalgary.ca ood-arc.rcs.ucalgary.ca]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Open OnDemand Dashboard ==&lt;br /&gt;
After logging in, you will see the Open OnDemand dashboard. The Message Of The Day  (MOTD) will show any news and announcements relating to the cluster. Any quota warnings will also be displayed on your dashboard.&lt;br /&gt;
[[File:Open OnDemand Dashboard.jpg|none|thumb|Open OnDemand Dashboard]]Other components and applications can be accessed through the top navigation.&lt;br /&gt;
&lt;br /&gt;
=== File Browser ===&lt;br /&gt;
The file browser interface allows you to manage, upload, or download files from your directories, and provides drag &amp;amp; drop file management and basic file viewing and editing. You can access all files across all filesystems available to the cluster with this interface. &lt;br /&gt;
&lt;br /&gt;
There is a 128 MB limit on file uploads. Please do not use this interface for large file transfers and instead look at [[How to transfer data|other methods for file transfers]].&lt;br /&gt;
&lt;br /&gt;
=== Job Explorer ===&lt;br /&gt;
You may view and manage your current jobs on the cluster through the Active Jobs page. This may be helpful for users new to Slurm to visualize their scheduled jobs.&lt;br /&gt;
&lt;br /&gt;
=== Shell Access ===&lt;br /&gt;
You can launch an SSH session to the login node of the cluster via Shell Access.&lt;br /&gt;
&lt;br /&gt;
== Interactive Applications ==&lt;br /&gt;
We have created a small selection of graphical applications launchable through Open OnDemand.&lt;br /&gt;
&lt;br /&gt;
=== Desktop (Container) ===&lt;br /&gt;
The Desktop interactive app may be helpful when exploring data or running certain GUI-based applications. This app simplifies the process of launching a full Linux desktop session or other graphical applications within your web browser, without the need for SSH and X11 forwarding. We currently offer only XFCE as the desktop manager.&lt;br /&gt;
&lt;br /&gt;
Your desktop will run as an interactive job on the Slurm cluster and will be subject to a maximum time limit before it is terminated. As a result, you will need to restart your desktop after the time limit is reached. When using this desktop feature, be sure to save your work frequently and be mindful of the time remaining in your session to avoid loss of work.&lt;br /&gt;
&lt;br /&gt;
To start a desktop, go to the &#039;Interactive Apps&#039; menu, then click on &#039;Desktop (Container)&#039;. &lt;br /&gt;
[[File:Open OnDemand Desktop.jpg|alt=Open OnDemand Desktop|none|thumb|Open OnDemand Desktop]]&lt;br /&gt;
Specify the partition you wish to run the desktop on as well as the desired runtime and resources your session should need. &lt;br /&gt;
&lt;br /&gt;
After you launch the desktop, the job will be sent to Slurm. Depending on the partition and resources you selected, the job may be queued for some time. Once the desktop is started, you will see the option to launch the Remote Desktop in green:&lt;br /&gt;
[[File:Open OnDemand Desktop Launch.png|alt=Open OnDemand Desktop Launch|none|thumb|Open OnDemand Desktop Launch]]Click on the blue &amp;quot;Launch Desktop&amp;quot; button to connect to your desktop.&lt;br /&gt;
[[File:Remote Desktop on Open OnDemand.png|alt=Remote Desktop on Open OnDemand|none|thumb|Remote Desktop on Open OnDemand]]&lt;br /&gt;
&lt;br /&gt;
==== Connecting to CIFS/Samba (e.g. ResearchFS) ====&lt;br /&gt;
You may connect to CIFS/Samba shares through the desktop. Do so by opening the Thunar file manager and entering an SMB address:&lt;br /&gt;
[[File:Thunar SMB connection.png|alt=Thunar SMB connection|none|thumb|Thunar SMB connection]]&lt;br /&gt;
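The address follows the general &amp;lt;code&amp;gt;smb://server/share&amp;lt;/code&amp;gt; form, for example &amp;lt;code&amp;gt;smb://fileserver.example.ucalgary.ca/my-share&amp;lt;/code&amp;gt; (the server and share names here are placeholders, not actual RCS hostnames; use the address provided for your own share).&lt;br /&gt;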
&lt;br /&gt;
==== Additional Apps ====&lt;br /&gt;
Please be aware that the desktop runs within an Apptainer environment. Certain features that require containerisation or Linux namespaces may not be available. This is particularly an issue with applications based on webkit/chromium and may require workarounds.&lt;br /&gt;
&lt;br /&gt;
The following are some applications that have been installed within the Desktop environment:&lt;br /&gt;
&lt;br /&gt;
===== Applications / Development =====&lt;br /&gt;
* Visual Studio Code&lt;br /&gt;
* RStudio&lt;br /&gt;
&lt;br /&gt;
===== Applications / Internet =====&lt;br /&gt;
* FileZilla&lt;br /&gt;
* RClone Browser&lt;br /&gt;
* Chromium&lt;br /&gt;
* Firefox&lt;br /&gt;
* Citrix Workspace (Use Firefox or Chromium to connect to myappmf.ucalgary.ca)&lt;br /&gt;
&lt;br /&gt;
===== Applications / Office =====&lt;br /&gt;
* LibreOffice Suite&lt;br /&gt;
&lt;br /&gt;
===== Applications / Other =====&lt;br /&gt;
* Visual Molecular Dynamics (VMD)&lt;br /&gt;
=== Jupyter Notebooks ===&lt;br /&gt;
Open OnDemand offers Jupyter Notebooks via containers or environment modules. For most users, we recommend the container-based method to launch your Jupyter notebooks, as it is the simplest. We also offer the environment-module-based method for users who already have a working workflow based on environment modules.&lt;br /&gt;
&lt;br /&gt;
==== Container-based Jupyter notebooks ====&lt;br /&gt;
[[File:Open OnDemand Jupyter.jpg|alt=Open OnDemand Jupyter Notebook|thumb|Open OnDemand Jupyter Notebook ]]&lt;br /&gt;
We offer a selection of different Jupyter Notebooks derived from the [https://jupyter-docker-stacks.readthedocs.io/en/latest/using/selecting.html official Jupyter Docker Stacks images]. When launching a Jupyter Notebook using this method, your notebook will run within an Apptainer container environment.&lt;br /&gt;
&lt;br /&gt;
To start a Jupyter Notebook, go to the &#039;Interactive Apps&#039; menu, then click on &#039;Jupyter Notebook (Container)&#039;. On the launch page, select the desired image in the drop-down menu or use a custom image by selecting &#039;Custom Singularity Image...&#039;.&lt;br /&gt;
&lt;br /&gt;
The following flavours of Jupyter notebook images are currently available. When launching a notebook, specify an image that best suits your needs.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
!Image&lt;br /&gt;
!Description&lt;br /&gt;
|-&lt;br /&gt;
|Spark Notebook&lt;br /&gt;
|all-spark-notebook_spark-#.#.#.sif&lt;br /&gt;
|Includes Python, R, and Scala support for Apache Spark. &lt;br /&gt;
Spark version is listed in the filename.&lt;br /&gt;
|-&lt;br /&gt;
|Julia Data Science Notebook&lt;br /&gt;
|datascience-notebook_julia-#.#.#.sif&lt;br /&gt;
|Includes libraries for data analysis from the Julia, Python, and R communities. &lt;br /&gt;
Julia version is listed in the filename.&lt;br /&gt;
|-&lt;br /&gt;
|Python Scipy Notebook&lt;br /&gt;
|scipy-notebook_python-#.#.#.sif&lt;br /&gt;
|Includes popular packages from the scientific Python ecosystem.&lt;br /&gt;
Python version is listed in the filename.&lt;br /&gt;
|-&lt;br /&gt;
|Tensorflow Notebook&lt;br /&gt;
|tensorflow-notebook_tensorflow-#.#.#.sif&lt;br /&gt;
|Based on the Scipy notebook but includes Tensorflow.&lt;br /&gt;
Tensorflow version is listed in the filename.&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|gpu-jupyter.sif&lt;br /&gt;
|Includes NVIDIA CUDA along with the data science notebook.&lt;br /&gt;
Based on https://github.com/iot-salzburg/gpu-jupyter/&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===== Create a custom Jupyter Notebook image =====&lt;br /&gt;
If your notebook requires additional packages that are not available as part of an existing Jupyter Notebook container image, you can create an extended, customized container image instead. &lt;br /&gt;
&lt;br /&gt;
To build a custom container image, create a Singularity build definition file that references the appropriate base Jupyter notebook image. All custom Jupyter images should be based on one of the official base Jupyter notebook images for compatibility and to ensure that they will start properly within our Open OnDemand environment. As all the Jupyter notebook images are based on Ubuntu with mamba, you may install additional packages using &amp;lt;code&amp;gt;apt-get&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mamba&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
In the following example, we will extend the &amp;lt;code&amp;gt;tensorflow-notebook&amp;lt;/code&amp;gt; image with a C++ compiler (g++) and an additional pip package (corels). &lt;br /&gt;
&lt;br /&gt;
Create a Singularity build file &amp;lt;code&amp;gt;custom-notebook.def&amp;lt;/code&amp;gt; with the following contents:&lt;br /&gt;
{{Highlight|code=Bootstrap: docker&lt;br /&gt;
From: jupyter/tensorflow-notebook&lt;br /&gt;
&lt;br /&gt;
%post&lt;br /&gt;
        apt-get autoremove; \&lt;br /&gt;
        apt-get update --yes; \&lt;br /&gt;
        apt-get install --yes --no-install-recommends g++; \&lt;br /&gt;
        pip install corels|lang=text}}&lt;br /&gt;
To build &amp;lt;code&amp;gt;custom-notebook.def&amp;lt;/code&amp;gt;, run: &amp;lt;code&amp;gt;singularity build custom-notebook.sif custom-notebook.def&amp;lt;/code&amp;gt;. This will generate a &amp;lt;code&amp;gt;custom-notebook.sif&amp;lt;/code&amp;gt; singularity image file.&lt;br /&gt;
&lt;br /&gt;
To use this custom image in Open OnDemand, move the newly generated &amp;lt;code&amp;gt;.sif&amp;lt;/code&amp;gt; file to &amp;lt;code&amp;gt;$HOME/ondemand/jupyter&amp;lt;/code&amp;gt;. In Open OnDemand, when launching a new notebook, select  &#039;Custom Singularity Image...&#039; on the launch page and select the appropriate &amp;lt;code&amp;gt;.sif&amp;lt;/code&amp;gt; file.&lt;br /&gt;
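&lt;br /&gt;
A minimal sketch of those steps (assuming the image was built in your current working directory):&lt;br /&gt;
{{Highlight|code=# Create the Open OnDemand Jupyter image directory if needed, then move the custom image into it&lt;br /&gt;
mkdir -p $HOME/ondemand/jupyter&lt;br /&gt;
mv custom-notebook.sif $HOME/ondemand/jupyter/|lang=bash}}&lt;br /&gt;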
&lt;br /&gt;
==== Environment module based Jupyter notebooks ====&lt;br /&gt;
This is the alternative way of launching Jupyter notebooks on Open OnDemand. This option automates our previously recommended way of running Jupyter on ARC (described on our [[Jupyter Notebooks|Jupyter Notebooks page]]) by loading Python from an Anaconda environment and setting up port forwarding. We recommend this option &#039;&#039;&#039;only&#039;&#039;&#039; if you have an existing workflow that uses Jupyter with modules. &lt;br /&gt;
&lt;br /&gt;
When using this method, you will be asked for an initialization bash script that sets up and starts Jupyter. An example script is given below. You may modify and add to this example script to customize your Jupyter Notebook setup.  If no script is given when launching the notebook or if the script is not found, this example script will be used instead. &lt;br /&gt;
&lt;br /&gt;
{{Highlight|code=#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
# TODO: Set up your environment as required.&lt;br /&gt;
&lt;br /&gt;
# Purge the module environment to avoid conflicts&lt;br /&gt;
module purge&lt;br /&gt;
module load python/anaconda3-2018.12&lt;br /&gt;
# TODO: Add any additional modules desired here.&lt;br /&gt;
module list&lt;br /&gt;
&lt;br /&gt;
# As per https://rcs.ucalgary.ca/index.php/Jupyter_Notebooks&lt;br /&gt;
unset XDG_RUNTIME_DIR&lt;br /&gt;
&lt;br /&gt;
# Launch the Jupyter Notebook Server&lt;br /&gt;
set -x&lt;br /&gt;
jupyter notebook --config=&amp;quot;${CONFIG_FILE}&amp;quot;|lang=bash}}&lt;br /&gt;
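&lt;br /&gt;
For example, if your Jupyter installation lives in a personal conda environment, the script could be extended along the following lines. This is a minimal sketch; the environment name &amp;lt;code&amp;gt;myjupyterenv&amp;lt;/code&amp;gt; is only a placeholder.&lt;br /&gt;
{{Highlight|code=#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
# Purge the module environment to avoid conflicts&lt;br /&gt;
module purge&lt;br /&gt;
module load python/anaconda3-2018.12&lt;br /&gt;
module list&lt;br /&gt;
&lt;br /&gt;
# Activate a personal conda environment (myjupyterenv is a placeholder name)&lt;br /&gt;
source activate myjupyterenv&lt;br /&gt;
&lt;br /&gt;
# As per https://rcs.ucalgary.ca/index.php/Jupyter_Notebooks&lt;br /&gt;
unset XDG_RUNTIME_DIR&lt;br /&gt;
&lt;br /&gt;
# Launch the Jupyter Notebook Server from the activated environment&lt;br /&gt;
set -x&lt;br /&gt;
jupyter notebook --config=&amp;quot;${CONFIG_FILE}&amp;quot;|lang=bash}}&lt;br /&gt;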
=== RStudio Server (Container) ===&lt;br /&gt;
RStudio Server is a web-based version of RStudio and functions similarly to Jupyter Notebooks. In our instance of Open OnDemand, RStudio Server runs within an Apptainer container environment.&lt;br /&gt;
&lt;br /&gt;
After requesting an RStudio Server session through Open OnDemand, you will see a session information box similar to the one below. Each session will use a randomised password.&lt;br /&gt;
[[File:RStudio Server launch on OOD.png|alt=RStudio Server launch on Open OnDemand|none|thumb|RStudio Server launch on Open OnDemand]]&lt;br /&gt;
Because RStudio Server runs within an Apptainer environment, some features may not work as expected. We are still testing this app; if you notice any issues, please reach out to RCS.&lt;br /&gt;
&lt;br /&gt;
=== VS Code Server ===&lt;br /&gt;
Open OnDemand offers VS Code Server as an Interactive App. VS Code Server is a web-based version of Visual Studio Code and runs directly on the remote compute node. This is an alternative to running Visual Studio Code as a native app within the Open OnDemand Desktop.&lt;br /&gt;
&lt;br /&gt;
To start a VS Code Server, go to the &#039;Interactive Apps&#039; menu, then click on &#039;VS Code Server&#039;.&lt;br /&gt;
[[File:Open OnDemand VSCodeServer.png|alt=Open OnDemand VSCodeServer|none|thumb|Open OnDemand VS Code Server]]&lt;br /&gt;
&lt;br /&gt;
=== Visual Studio Code ===&lt;br /&gt;
As an alternative to Visual Studio Code Server, you can run Visual Studio Code from the Desktop app. This will run Visual Studio Code as a desktop application within a Linux desktop environment.&lt;br /&gt;
&lt;br /&gt;
To start VS Code, go to the &#039;Interactive Apps&#039; menu, then click on &#039;Desktop (Container)&#039;. After launching the desktop, navigate to &#039;Application&#039; -&amp;gt; &#039;Development&#039; -&amp;gt; &#039;Visual Studio Code&#039;.&lt;br /&gt;
[[File:Visual Studio Code on OOD Desktop.png|alt=Visual Studio Code on OOD Desktop|none|thumb|Visual Studio Code on OOD Desktop]]&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
=== VNC session cannot reconnect ===&lt;br /&gt;
If your VNC session has closed, you can reconnect by going back to the Open OnDemand dashboard, listing your sessions under the &amp;quot;My Interactive Sessions&amp;quot; page, and clicking the blue &amp;quot;Launch Desktop&amp;quot; button.&lt;br /&gt;
&lt;br /&gt;
{{Support}}&lt;br /&gt;
&lt;br /&gt;
[[Category:ARC]]&lt;br /&gt;
{{Navbox ARC}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Open_OnDemand&amp;diff=3736</id>
		<title>Open OnDemand</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Open_OnDemand&amp;diff=3736"/>
		<updated>2025-02-24T21:44:13Z</updated>

		<summary type="html">&lt;p&gt;Lleung: /* Interactive Applications */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Open OnDemand is a web portal for accessing certain clusters operated by Research Computing Services. The web portal provides a convenient way to access the login node, your files, and certain graphical applications such as Jupyter Notebooks and remote desktops. This service complements the existing command-line based ways of accessing HPC resources.&lt;br /&gt;
&lt;br /&gt;
Open OnDemand is an open source project actively developed by the Ohio Supercomputer Center.&lt;br /&gt;
&lt;br /&gt;
== Access ==&lt;br /&gt;
&lt;br /&gt;
=== Get an account ===&lt;br /&gt;
Before using Open OnDemand, you will need to have an account on the cluster. If you do not already have an account, please review the cluster&#039;s quick start guide for information on getting started.&lt;br /&gt;
&lt;br /&gt;
=== Connect to OnDemand ===&lt;br /&gt;
You may access Open OnDemand for the following clusters using your UCIT credentials. Sign-on is handled through the University&#039;s Single Sign-On mechanism and requires Multi-Factor Authentication to be enabled.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Cluster&lt;br /&gt;
!Open OnDemand Access&lt;br /&gt;
|-&lt;br /&gt;
|[[ARC Cluster Guide|ARC Cluster]]&lt;br /&gt;
|[https://ood-arc.rcs.ucalgary.ca ood-arc.rcs.ucalgary.ca]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Open OnDemand Dashboard ==&lt;br /&gt;
After logging in, you will see the Open OnDemand dashboard. The Message Of The Day  (MOTD) will show any news and announcements relating to the cluster. Any quota warnings will also be displayed on your dashboard.&lt;br /&gt;
[[File:Open OnDemand Dashboard.jpg|none|thumb|Open OnDemand Dashboard]]Other components and applications can be accessed through the top navigation.&lt;br /&gt;
&lt;br /&gt;
=== File Browser ===&lt;br /&gt;
The file browser interface allows you to manage, upload, and download files in your directories, with drag &amp;amp; drop file management and basic file viewing and editing. You can access all files across all filesystems available to the cluster with this interface. &lt;br /&gt;
&lt;br /&gt;
There is a 128 MB limit on file uploads. Please do not use this interface for large file transfers and instead look at [[How to transfer data|other methods for file transfers]].&lt;br /&gt;
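&lt;br /&gt;
For larger transfers, a command-line tool such as &amp;lt;code&amp;gt;rsync&amp;lt;/code&amp;gt; over SSH is typically a better fit. A minimal sketch is shown below; the username and hostname are placeholders, so see the linked page for the correct endpoints.&lt;br /&gt;
{{Highlight|code=# Copy a local directory to your cluster home directory over SSH&lt;br /&gt;
# (username and hostname are placeholders; see the data transfer page for the correct endpoints)&lt;br /&gt;
rsync -avP ./mydata/ username@cluster.example.org:mydata/|lang=bash}}&lt;br /&gt;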
&lt;br /&gt;
=== Job Explorer ===&lt;br /&gt;
You may view and manage your current jobs on the cluster through the Active Jobs page. This may be helpful for new users who want to visualize scheduled jobs in Slurm.&lt;br /&gt;
&lt;br /&gt;
=== Shell Access ===&lt;br /&gt;
You can launch an SSH session to the login node of the cluster via Shell Access.&lt;br /&gt;
&lt;br /&gt;
== Interactive Applications ==&lt;br /&gt;
We have created a small selection of graphical applications launchable through Open OnDemand.&lt;br /&gt;
&lt;br /&gt;
=== Desktop (Container) ===&lt;br /&gt;
The Desktop interactive app may be helpful for exploring data or running certain GUI-based applications. This app simplifies the process of launching a full Linux desktop session or other graphical applications, all within your web browser and without the need for SSH and X11 forwarding. We currently offer only XFCE as the desktop manager.&lt;br /&gt;
&lt;br /&gt;
Your desktop will run as an interactive job on the Slurm cluster and will be subject to a maximum time limit before it is terminated. As a result, you will need to restart your desktop after the time limit is reached. When using this desktop feature, be sure to save your work frequently and be mindful of the remaining time left for your session to avoid loss of work.&lt;br /&gt;
&lt;br /&gt;
To start a desktop, go to the &#039;Interactive Apps&#039; menu, then click on &#039;Desktop (Container)&#039;. &lt;br /&gt;
[[File:Open OnDemand Desktop.jpg|alt=Open OnDemand Desktop|none|thumb|Open OnDemand Desktop]]&lt;br /&gt;
Specify the partition you wish to run the desktop on, as well as the desired runtime and resources your session will need. &lt;br /&gt;
&lt;br /&gt;
After you launch the desktop, the job will be sent to Slurm. Depending on the partition and resources you selected, the job may be queued for some time. Once the desktop is started, you will see the option to launch the Remote Desktop in green:&lt;br /&gt;
[[File:Open OnDemand Desktop Launch.png|alt=Open OnDemand Desktop Launch|none|thumb|Open OnDemand Desktop Launch]]Click on the blue &amp;quot;Launch Desktop&amp;quot; button to connect to your desktop.&lt;br /&gt;
[[File:Remote Desktop on Open OnDemand.png|alt=Remote Desktop on Open OnDemand|none|thumb|Remote Desktop on Open OnDemand]]&lt;br /&gt;
&lt;br /&gt;
==== Connecting to CIFS/Samba (eg. ResearchFS) ====&lt;br /&gt;
You may connect to CIFS/Samba shares through the desktop. Do so by opening the Thunar file manager and entering an SMB address:&lt;br /&gt;
[[File:Thunar SMB connection.png|alt=Thunar SMB connection|none|thumb|Thunar SMB connection]]&lt;br /&gt;
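&lt;br /&gt;
The address uses the usual &amp;lt;code&amp;gt;smb://&amp;lt;/code&amp;gt; form. A minimal sketch is shown below; the server and share names are placeholders only, so substitute the details of your own share (for example, your ResearchFS share path).&lt;br /&gt;
{{Highlight|code=# Example location entered into the Thunar address bar (server and share names are placeholders)&lt;br /&gt;
smb://fileserver.example.org/my-share|lang=text}}&lt;br /&gt;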
&lt;br /&gt;
==== Additional Apps ====&lt;br /&gt;
The desktop has additional software available.&lt;br /&gt;
&lt;br /&gt;
===== Applications / Development =====&lt;br /&gt;
* Visual Studio Code&lt;br /&gt;
* RStudio&lt;br /&gt;
&lt;br /&gt;
===== Applications / Internet =====&lt;br /&gt;
* FileZilla&lt;br /&gt;
* RClone Browser&lt;br /&gt;
* Chromium&lt;br /&gt;
* Firefox&lt;br /&gt;
* Citrix Workspace (Use Firefox or Chromium to connect to myappmf.ucalgary.ca)&lt;br /&gt;
&lt;br /&gt;
===== Applications / Office =====&lt;br /&gt;
* LibreOffice Suite&lt;br /&gt;
&lt;br /&gt;
===== Applications / Other =====&lt;br /&gt;
* Visual Molecular Dynamics (VMD)&lt;br /&gt;
Please be aware that the desktop runs within an Apptainer environment. Certain features that require containerisation or Linux namespaces may not be available. This is particularly an issue with applications based on WebKit/Chromium, which may require workarounds.&lt;br /&gt;
&lt;br /&gt;
=== Jupyter Notebooks ===&lt;br /&gt;
Open OnDemand offers Jupyter Notebooks via containers or environment modules. For most users, we recommend the container-based method for launching Jupyter notebooks, as it is the simplest. We also offer the environment module based method for users who already have a working workflow based on environment modules.&lt;br /&gt;
&lt;br /&gt;
==== Container-based Jupyter notebooks ====&lt;br /&gt;
[[File:Open OnDemand Jupyter.jpg|alt=Open OnDemand Jupyter Notebook|thumb|Open OnDemand Jupyter Notebook ]]&lt;br /&gt;
We offer a selection of different Jupyter Notebooks derived from the [https://jupyter-docker-stacks.readthedocs.io/en/latest/using/selecting.html official Jupyter Stacks Docker images]. When launching a Jupyter Notebook using this method, your notebook will run within an Apptainer container environment.&lt;br /&gt;
&lt;br /&gt;
To start a Jupyter Notebook, go to the &#039;Interactive Apps&#039; menu, then click on &#039;Jupyter Notebook (Container)&#039;. On the launch page, select the desired image in the drop-down menu or use a custom image by selecting &#039;Custom Singularity Image...&#039;.&lt;br /&gt;
&lt;br /&gt;
The following flavours of Jupyter notebook images are currently available. When launching a notebook, specify an image that best suits your needs.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Notebook&lt;br /&gt;
!Image&lt;br /&gt;
!Description&lt;br /&gt;
|-&lt;br /&gt;
|Spark Notebook&lt;br /&gt;
|all-spark-notebook_spark-#.#.#.sif&lt;br /&gt;
|Includes Python, R, and Scala support for Apache Spark. &lt;br /&gt;
Spark version is listed in the filename.&lt;br /&gt;
|-&lt;br /&gt;
|Julia Data Science Notebook&lt;br /&gt;
|datascience-notebook_julia-#.#.#.sif&lt;br /&gt;
|Includes libraries for data analysis from the Julia, Python, and R communities. &lt;br /&gt;
Julia version is listed in the filename.&lt;br /&gt;
|-&lt;br /&gt;
|Python Scipy Notebook&lt;br /&gt;
|scipy-notebook_python-#.#.#.sif&lt;br /&gt;
|Includes popular packages from the scientific Python ecosystem.&lt;br /&gt;
Python version is listed in the filename.&lt;br /&gt;
|-&lt;br /&gt;
|Tensorflow Notebook&lt;br /&gt;
|tensorflow-notebook_tensorflow-#.#.#.sif&lt;br /&gt;
|Based on the Scipy notebook but includes Tensorflow.&lt;br /&gt;
Tensorflow version is listed in the filename.&lt;br /&gt;
|-&lt;br /&gt;
|GPU Jupyter Notebook&lt;br /&gt;
|gpu-jupyter.sif&lt;br /&gt;
|Includes NVIDIA CUDA along with the data science notebook.&lt;br /&gt;
Based on https://github.com/iot-salzburg/gpu-jupyter/&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===== Create a custom Jupyter Notebook image =====&lt;br /&gt;
If your notebook requires additional packages that are not available in an existing Jupyter Notebook container image, you can create an extended, customized container image instead. &lt;br /&gt;
&lt;br /&gt;
To build a custom container image, create a Singularity build definition file that references the appropriate base Jupyter notebook image. All custom Jupyter images should be based on one of the official base Jupyter notebook images for compatibility and to ensure that they start properly within our Open OnDemand environment. As all the Jupyter notebook images are based on Ubuntu with mamba, you may install additional packages using &amp;lt;code&amp;gt;apt-get&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mamba&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
In the following example, we will extend the &amp;lt;code&amp;gt;tensorflow-notebook&amp;lt;/code&amp;gt; image with a C++ compiler (g++) and an additional pip package (corels). &lt;br /&gt;
&lt;br /&gt;
Create a Singularity build file &amp;lt;code&amp;gt;custom-notebook.def&amp;lt;/code&amp;gt; with the following contents:&lt;br /&gt;
{{Highlight|code=Bootstrap: docker&lt;br /&gt;
From: jupyter/tensorflow-notebook&lt;br /&gt;
&lt;br /&gt;
%post&lt;br /&gt;
        apt-get autoremove; \&lt;br /&gt;
        apt-get update --yes; \&lt;br /&gt;
        apt-get install --yes --no-install-recommends g++; \&lt;br /&gt;
        pip install corels|lang=text}}&lt;br /&gt;
To build the image from &amp;lt;code&amp;gt;custom-notebook.def&amp;lt;/code&amp;gt;, run &amp;lt;code&amp;gt;singularity build custom-notebook.sif custom-notebook.def&amp;lt;/code&amp;gt;. This will generate a &amp;lt;code&amp;gt;custom-notebook.sif&amp;lt;/code&amp;gt; Singularity image file.&lt;br /&gt;
&lt;br /&gt;
To use this custom image in Open OnDemand, move the newly generated &amp;lt;code&amp;gt;.sif&amp;lt;/code&amp;gt; file to &amp;lt;code&amp;gt;$HOME/ondemand/jupyter&amp;lt;/code&amp;gt;. When launching a new notebook in Open OnDemand, choose &#039;Custom Singularity Image...&#039; on the launch page and select the appropriate &amp;lt;code&amp;gt;.sif&amp;lt;/code&amp;gt; file.&lt;br /&gt;
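&lt;br /&gt;
For example, from the directory where the image was built, the following commands would put it where Open OnDemand looks for custom images (a minimal sketch based on the path above):&lt;br /&gt;
{{Highlight|code=# Create the custom image directory if it does not already exist, then move the image into it&lt;br /&gt;
mkdir -p $HOME/ondemand/jupyter&lt;br /&gt;
mv custom-notebook.sif $HOME/ondemand/jupyter/|lang=bash}}&lt;br /&gt;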
&lt;br /&gt;
==== Environment module based Jupyter notebooks ====&lt;br /&gt;
This is the alternative way of launching Jupyter notebooks on Open OnDemand. This option automates our previously recommended way of running Jupyter on ARC (described on our [[Jupyter Notebooks|Jupyter Notebooks page]]) by loading Python from an Anaconda environment module and setting up port forwarding. We recommend this option &#039;&#039;&#039;only&#039;&#039;&#039; if you have an existing workflow that uses Jupyter with modules. &lt;br /&gt;
&lt;br /&gt;
When using this method, you will be asked for an initialization bash script that sets up and starts Jupyter. An example script is given below. You may modify and add to this example script to customize your Jupyter Notebook setup.  If no script is given when launching the notebook or if the script is not found, this example script will be used instead. &lt;br /&gt;
&lt;br /&gt;
{{Highlight|code=#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
# TODO: Set up your environment as required.&lt;br /&gt;
&lt;br /&gt;
# Purge the module environment to avoid conflicts&lt;br /&gt;
module purge&lt;br /&gt;
module load python/anaconda3-2018.12&lt;br /&gt;
# TODO: Add any additional modules desired here.&lt;br /&gt;
module list&lt;br /&gt;
&lt;br /&gt;
# As per https://rcs.ucalgary.ca/index.php/Jupyter_Notebooks&lt;br /&gt;
unset XDG_RUNTIME_DIR&lt;br /&gt;
&lt;br /&gt;
# Launch the Jupyter Notebook Server&lt;br /&gt;
set -x&lt;br /&gt;
jupyter notebook --config=&amp;quot;${CONFIG_FILE}&amp;quot;|lang=bash}}&lt;br /&gt;
=== RStudio Server (Container) ===&lt;br /&gt;
RStudio Server is a web-based version of RStudio and functions similarly to Jupyter Notebooks. In our instance of Open OnDemand, RStudio Server runs within an Apptainer container environment.&lt;br /&gt;
&lt;br /&gt;
After requesting an RStudio Server session through Open OnDemand, you will see a session information box similar to the one below. Each session will use a randomised password.&lt;br /&gt;
[[File:RStudio Server launch on OOD.png|alt=RStudio Server launch on Open OnDemand|none|thumb|RStudio Server launch on Open OnDemand]]&lt;br /&gt;
Because RStudio Server runs within an Apptainer environment, some features may not work as expected. We are still testing this app; if you notice any issues, please reach out to RCS.&lt;br /&gt;
&lt;br /&gt;
=== VS Code Server ===&lt;br /&gt;
Open OnDemand offers VS Code Server as an Interactive App. VS Code Server is a web-based version of Visual Studio Code and runs directly on the remote compute node. This is an alternative to running Visual Studio Code as a native app within the Open OnDemand Desktop.&lt;br /&gt;
&lt;br /&gt;
To start a VS Code Server, go to the &#039;Interactive Apps&#039; menu, then click on &#039;VS Code Server&#039;.&lt;br /&gt;
[[File:Open OnDemand VSCodeServer.png|alt=Open OnDemand VSCodeServer|none|thumb|Open OnDemand VS Code Server]]&lt;br /&gt;
&lt;br /&gt;
=== Visual Studio Code ===&lt;br /&gt;
As an alternative to Visual Studio Code Server, you can run Visual Studio Code from the Desktop app. This will run Visual Studio Code as a desktop application within a Linux desktop environment.&lt;br /&gt;
&lt;br /&gt;
To start VS Code, go to the &#039;Interactive Apps&#039; menu, then click on &#039;Desktop (Container)&#039;. After launching the desktop, navigate to &#039;Application&#039; -&amp;gt; &#039;Development&#039; -&amp;gt; &#039;Visual Studio Code&#039;.&lt;br /&gt;
[[File:Visual Studio Code on OOD Desktop.png|alt=Visual Studio Code on OOD Desktop|none|thumb|Visual Studio Code on OOD Desktop]]&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
=== VNC session cannot reconnect ===&lt;br /&gt;
If your VNC session has closed, you can reconnect by going back to the Open OnDemand dashboard, listing your sessions under the &amp;quot;My Interactive Sessions&amp;quot; page, and clicking the blue &amp;quot;Launch Desktop&amp;quot; button.&lt;br /&gt;
&lt;br /&gt;
{{Support}}&lt;br /&gt;
&lt;br /&gt;
[[Category:ARC]]&lt;br /&gt;
{{Navbox ARC}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Open_OnDemand&amp;diff=3735</id>
		<title>Open OnDemand</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Open_OnDemand&amp;diff=3735"/>
		<updated>2025-02-24T21:35:35Z</updated>

		<summary type="html">&lt;p&gt;Lleung: /* Interactive Applications */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Open OnDemand is a web portal for accessing certain clusters operated by Research Computing Services. The web portal provides a convenient way to access the login node, your files, and certain graphical applications such as Jupyter Notebooks and remote desktops. This service complements the existing command-line based ways of accessing HPC resources.&lt;br /&gt;
&lt;br /&gt;
Open OnDemand is an open source project actively developed by the Ohio Supercomputer Center.&lt;br /&gt;
&lt;br /&gt;
== Access ==&lt;br /&gt;
&lt;br /&gt;
=== Get an account ===&lt;br /&gt;
Before using Open OnDemand, you will need to have an account on the cluster. If you do not already have an account, please review the cluster&#039;s quick start guide for information on getting started.&lt;br /&gt;
&lt;br /&gt;
=== Connect to OnDemand ===&lt;br /&gt;
You may access Open OnDemand for the following clusters using your UCIT credentials. Sign-on is handled through the University&#039;s Single Sign-On mechanism and requires Multi-Factor Authentication to be enabled.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Cluster&lt;br /&gt;
!Open OnDemand Access&lt;br /&gt;
|-&lt;br /&gt;
|[[ARC Cluster Guide|ARC Cluster]]&lt;br /&gt;
|[https://ood-arc.rcs.ucalgary.ca ood-arc.rcs.ucalgary.ca]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Open OnDemand Dashboard ==&lt;br /&gt;
After logging in, you will see the Open OnDemand dashboard. The Message Of The Day  (MOTD) will show any news and announcements relating to the cluster. Any quota warnings will also be displayed on your dashboard.&lt;br /&gt;
[[File:Open OnDemand Dashboard.jpg|none|thumb|Open OnDemand Dashboard]]Other components and applications can be accessed through the top navigation.&lt;br /&gt;
&lt;br /&gt;
=== File Browser ===&lt;br /&gt;
The file browser interface allows you to manage, upload, and download files in your directories, with drag &amp;amp; drop file management and basic file viewing and editing. You can access all files across all filesystems available to the cluster with this interface. &lt;br /&gt;
&lt;br /&gt;
There is a 128 MB limit on file uploads. Please do not use this interface for large file transfers and instead look at [[How to transfer data|other methods for file transfers]].&lt;br /&gt;
&lt;br /&gt;
=== Job Explorer ===&lt;br /&gt;
You may view and manage your current jobs on the cluster through the Active Jobs page. This may be helpful for new users who want to visualize scheduled jobs in Slurm.&lt;br /&gt;
&lt;br /&gt;
=== Shell Access ===&lt;br /&gt;
You can launch an SSH session to the login node of the cluster via Shell Access.&lt;br /&gt;
&lt;br /&gt;
== Interactive Applications ==&lt;br /&gt;
We have created a small selection of graphical applications launchable through Open OnDemand.&lt;br /&gt;
&lt;br /&gt;
=== Desktop (Container) ===&lt;br /&gt;
The Desktop interactive app may be helpful for exploring data or running certain GUI-based applications. This app simplifies the process of launching a full Linux desktop session or other graphical applications, all within your web browser and without the need for SSH and X11 forwarding. We currently offer only XFCE as the desktop manager.&lt;br /&gt;
&lt;br /&gt;
Your desktop will run as an interactive job on the Slurm cluster and will be subject to a maximum time limit before it is terminated. As a result, you will need to restart your desktop after the time limit is reached. When using this desktop feature, be sure to save your work frequently and be mindful of the remaining time left for your session to avoid loss of work.&lt;br /&gt;
&lt;br /&gt;
To start a desktop, go to the &#039;Interactive Apps&#039; menu, then click on &#039;Desktop (Container)&#039;. &lt;br /&gt;
[[File:Open OnDemand Desktop.jpg|alt=Open OnDemand Desktop|none|thumb|Open OnDemand Desktop]]&lt;br /&gt;
Specify the partition you wish to run the desktop on, as well as the desired runtime and resources your session will need. &lt;br /&gt;
&lt;br /&gt;
After you launch the desktop, the job will be sent to Slurm. Depending on the partition and resources you selected, the job may be queued for some time. Once the desktop is started, you will see the option to launch the Remote Desktop in green:&lt;br /&gt;
[[File:Open OnDemand Desktop Launch.png|alt=Open OnDemand Desktop Launch|none|thumb|Open OnDemand Desktop Launch]]Click on the blue &amp;quot;Launch Desktop&amp;quot; button to connect to your desktop.&lt;br /&gt;
[[File:Remote Desktop on Open OnDemand.png|alt=Remote Desktop on Open OnDemand|none|thumb|Remote Desktop on Open OnDemand]]&lt;br /&gt;
&lt;br /&gt;
==== Connecting to CIFS/Samba (eg. ResearchFS) ====&lt;br /&gt;
You may connect to CIFS/Samba shares through the desktop. Do so by opening the Thunar file manager and entering an SMB address:&lt;br /&gt;
[[File:Thunar SMB connection.png|alt=Thunar SMB connection|none|thumb|Thunar SMB connection]]&lt;br /&gt;
&lt;br /&gt;
==== Additional Apps ====&lt;br /&gt;
The desktop has additional software available, including:&lt;br /&gt;
&lt;br /&gt;
* VMD&lt;br /&gt;
* Visual Studio Code&lt;br /&gt;
* RStudio&lt;br /&gt;
* FileZilla&lt;br /&gt;
* RClone Browser&lt;br /&gt;
* Chromium&lt;br /&gt;
* Firefox&lt;br /&gt;
&lt;br /&gt;
Because the desktop runs within an Apptainer environment, certain features that require containerisation or Linux namespaces may not be available. &lt;br /&gt;
&lt;br /&gt;
=== Jupyter Notebooks ===&lt;br /&gt;
Open OnDemand offers Jupyter Notebooks via containers or environment modules. For most users, we recommend the container-based method for launching Jupyter notebooks, as it is the simplest. We also offer the environment module based method for users who already have a working workflow based on environment modules.&lt;br /&gt;
&lt;br /&gt;
==== Container-based Jupyter notebooks ====&lt;br /&gt;
[[File:Open OnDemand Jupyter.jpg|alt=Open OnDemand Jupyter Notebook|thumb|Open OnDemand Jupyter Notebook ]]&lt;br /&gt;
We offer a selection of different Jupyter Notebooks derived from the [https://jupyter-docker-stacks.readthedocs.io/en/latest/using/selecting.html official Jupyter Stacks Docker images]. When launching a Jupyter Notebook using this method, your notebook will run within an Apptainer container environment.&lt;br /&gt;
&lt;br /&gt;
To start a Jupyter Notebook, go to the &#039;Interactive Apps&#039; menu, then click on &#039;Jupyter Notebook (Container)&#039;. On the launch page, select the desired image in the drop-down menu or use a custom image by selecting &#039;Custom Singularity Image...&#039;.&lt;br /&gt;
&lt;br /&gt;
The following flavours of Jupyter notebook images are currently available. When launching a notebook, specify an image that best suits your needs.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Notebook&lt;br /&gt;
!Image&lt;br /&gt;
!Description&lt;br /&gt;
|-&lt;br /&gt;
|Spark Notebook&lt;br /&gt;
|all-spark-notebook_spark-#.#.#.sif&lt;br /&gt;
|Includes Python, R, and Scala support for Apache Spark. &lt;br /&gt;
Spark version is listed in the filename.&lt;br /&gt;
|-&lt;br /&gt;
|Julia Data Science Notebook&lt;br /&gt;
|datascience-notebook_julia-#.#.#.sif&lt;br /&gt;
|Includes libraries for data analysis from the Julia, Python, and R communities. &lt;br /&gt;
Julia version is listed in the filename.&lt;br /&gt;
|-&lt;br /&gt;
|Python Scipy Notebook&lt;br /&gt;
|scipy-notebook_python-#.#.#.sif&lt;br /&gt;
|Includes popular packages from the scientific Python ecosystem.&lt;br /&gt;
Python version is listed in the filename.&lt;br /&gt;
|-&lt;br /&gt;
|Tensorflow Notebook&lt;br /&gt;
|tensorflow-notebook_tensorflow-#.#.#.sif&lt;br /&gt;
|Based on the Scipy notebook but includes Tensorflow.&lt;br /&gt;
Tensorflow version is listed in the filename.&lt;br /&gt;
|-&lt;br /&gt;
|GPU Jupyter Notebook&lt;br /&gt;
|gpu-jupyter.sif&lt;br /&gt;
|Includes NVIDIA CUDA along with the data science notebook.&lt;br /&gt;
Based on https://github.com/iot-salzburg/gpu-jupyter/&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===== Create a custom Jupyter Notebook image =====&lt;br /&gt;
If your notebook requires additional packages that are not available in an existing Jupyter Notebook container image, you can create an extended, customized container image instead. &lt;br /&gt;
&lt;br /&gt;
To build a custom container image, create a Singularity build definition file that references the appropriate base Jupyter notebook image. All custom Jupyter images should be based on one of the official base Jupyter notebook images for compatibility and to ensure that they start properly within our Open OnDemand environment. As all the Jupyter notebook images are based on Ubuntu with mamba, you may install additional packages using &amp;lt;code&amp;gt;apt-get&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mamba&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
In the following example, we will extend the &amp;lt;code&amp;gt;tensorflow-notebook&amp;lt;/code&amp;gt; image with a C++ compiler (g++) and an additional pip package (corels). &lt;br /&gt;
&lt;br /&gt;
Create a Singularity build file &amp;lt;code&amp;gt;custom-notebook.def&amp;lt;/code&amp;gt; with the following contents:&lt;br /&gt;
{{Highlight|code=Bootstrap: docker&lt;br /&gt;
From: jupyter/tensorflow-notebook&lt;br /&gt;
&lt;br /&gt;
%post&lt;br /&gt;
        apt-get autoremove; \&lt;br /&gt;
        apt-get update --yes; \&lt;br /&gt;
        apt-get install --yes --no-install-recommends g++; \&lt;br /&gt;
        pip install corels|lang=text}}&lt;br /&gt;
To build the image from &amp;lt;code&amp;gt;custom-notebook.def&amp;lt;/code&amp;gt;, run &amp;lt;code&amp;gt;singularity build custom-notebook.sif custom-notebook.def&amp;lt;/code&amp;gt;. This will generate a &amp;lt;code&amp;gt;custom-notebook.sif&amp;lt;/code&amp;gt; Singularity image file.&lt;br /&gt;
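&lt;br /&gt;
Container builds can be resource-intensive, so you may prefer to run the build inside a batch job rather than on the login node. A minimal sketch of such a job script is given below; the resource values are illustrative only and should be adjusted to your needs.&lt;br /&gt;
{{Highlight|code=#!/bin/bash&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
#SBATCH --mem=8G&lt;br /&gt;
#SBATCH --cpus-per-task=2&lt;br /&gt;
&lt;br /&gt;
# Build the custom notebook image from the definition file created above&lt;br /&gt;
singularity build custom-notebook.sif custom-notebook.def|lang=bash}}&lt;br /&gt;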
&lt;br /&gt;
To use this custom image in Open OnDemand, move the newly generated &amp;lt;code&amp;gt;.sif&amp;lt;/code&amp;gt; file to &amp;lt;code&amp;gt;$HOME/ondemand/jupyter&amp;lt;/code&amp;gt;. When launching a new notebook in Open OnDemand, choose &#039;Custom Singularity Image...&#039; on the launch page and select the appropriate &amp;lt;code&amp;gt;.sif&amp;lt;/code&amp;gt; file.&lt;br /&gt;
&lt;br /&gt;
==== Environment module based Jupyter notebooks ====&lt;br /&gt;
This is the alternative way of launching Jupyter notebooks on Open OnDemand. This option automates our previously recommended way of running Jupyter on ARC (described on our [[Jupyter Notebooks|Jupyter Notebooks page]]) by loading Python from an Anaconda environment module and setting up port forwarding. We recommend this option &#039;&#039;&#039;only&#039;&#039;&#039; if you have an existing workflow that uses Jupyter with modules. &lt;br /&gt;
&lt;br /&gt;
When using this method, you will be asked for an initialization bash script that sets up and starts Jupyter. An example script is given below. You may modify and add to this example script to customize your Jupyter Notebook setup.  If no script is given when launching the notebook or if the script is not found, this example script will be used instead. &lt;br /&gt;
&lt;br /&gt;
{{Highlight|code=#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
# TODO: Set up your environment as required.&lt;br /&gt;
&lt;br /&gt;
# Purge the module environment to avoid conflicts&lt;br /&gt;
module purge&lt;br /&gt;
module load python/anaconda3-2018.12&lt;br /&gt;
# TODO: Add any additional modules desired here.&lt;br /&gt;
module list&lt;br /&gt;
&lt;br /&gt;
# As per https://rcs.ucalgary.ca/index.php/Jupyter_Notebooks&lt;br /&gt;
unset XDG_RUNTIME_DIR&lt;br /&gt;
&lt;br /&gt;
# Launch the Jupyter Notebook Server&lt;br /&gt;
set -x&lt;br /&gt;
jupyter notebook --config=&amp;quot;${CONFIG_FILE}&amp;quot;|lang=bash}}&lt;br /&gt;
=== RStudio Server (Container) ===&lt;br /&gt;
RStudio Server is a web-based version of RStudio and functions similarly to Jupyter Notebooks. In our instance of Open OnDemand, RStudio Server runs within an Apptainer container environment.&lt;br /&gt;
&lt;br /&gt;
After requesting an RStudio Server session through Open OnDemand, you will see a session information box similar to the one below. Each session will use a randomised password.&lt;br /&gt;
[[File:RStudio Server launch on OOD.png|alt=RStudio Server launch on Open OnDemand|none|thumb|RStudio Server launch on Open OnDemand]]&lt;br /&gt;
Because RStudio Server runs within an Apptainer environment, some features may not work as expected. We are still testing this app; if you notice any issues, please reach out to RCS.&lt;br /&gt;
&lt;br /&gt;
=== VS Code Server ===&lt;br /&gt;
Open OnDemand offers VS Code Server as an Interactive App. VS Code Server is a web-based version of Visual Studio Code and runs directly on the remote compute node. This is an alternative to running Visual Studio Code as a native app within the Open OnDemand Desktop.&lt;br /&gt;
&lt;br /&gt;
To start a VS Code Server, go to the &#039;Interactive Apps&#039; menu, then click on &#039;VS Code Server&#039;.&lt;br /&gt;
[[File:Open OnDemand VSCodeServer.png|alt=Open OnDemand VSCodeServer|none|thumb|Open OnDemand VS Code Server]]&lt;br /&gt;
&lt;br /&gt;
=== Visual Studio Code ===&lt;br /&gt;
As an alternative to Visual Studio Code Server, you can run Visual Studio Code from the Desktop app. This will run Visual Studio Code as a desktop application within a Linux desktop environment.&lt;br /&gt;
&lt;br /&gt;
To start VS Code, go to the &#039;Interactive Apps&#039; menu, then click on &#039;Desktop (Container)&#039;. After launching the desktop, navigate to &#039;Application&#039; -&amp;gt; &#039;Development&#039; -&amp;gt; &#039;Visual Studio Code&#039;.&lt;br /&gt;
[[File:Visual Studio Code on OOD Desktop.png|alt=Visual Studio Code on OOD Desktop|none|thumb|Visual Studio Code on OOD Desktop]]&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
=== VNC session cannot reconnect ===&lt;br /&gt;
If your VNC session has closed, you can reconnect by going back to the Open OnDemand dashboard, listing your sessions under the &amp;quot;My Interactive Sessions&amp;quot; page, and clicking the blue &amp;quot;Launch Desktop&amp;quot; button.&lt;br /&gt;
&lt;br /&gt;
{{Support}}&lt;br /&gt;
&lt;br /&gt;
[[Category:ARC]]&lt;br /&gt;
{{Navbox ARC}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=File:Visual_Studio_Code_on_OOD_Desktop.png&amp;diff=3734</id>
		<title>File:Visual Studio Code on OOD Desktop.png</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=File:Visual_Studio_Code_on_OOD_Desktop.png&amp;diff=3734"/>
		<updated>2025-02-24T21:35:19Z</updated>

		<summary type="html">&lt;p&gt;Lleung: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Visual Studio Code on OOD Desktop&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=File:RStudio_Server_launch_on_OOD.png&amp;diff=3733</id>
		<title>File:RStudio Server launch on OOD.png</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=File:RStudio_Server_launch_on_OOD.png&amp;diff=3733"/>
		<updated>2025-02-24T21:30:05Z</updated>

		<summary type="html">&lt;p&gt;Lleung: Lleung uploaded a new version of File:RStudio Server launch on OOD.png&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;RStudio Server launch on OOD&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=File:Open_OnDemand_VSCodeServer.png&amp;diff=3732</id>
		<title>File:Open OnDemand VSCodeServer.png</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=File:Open_OnDemand_VSCodeServer.png&amp;diff=3732"/>
		<updated>2025-02-24T21:29:00Z</updated>

		<summary type="html">&lt;p&gt;Lleung: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Open OnDemand VSCodeServer&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Altis_Login_Node_Status&amp;diff=3692</id>
		<title>Altis Login Node Status</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Altis_Login_Node_Status&amp;diff=3692"/>
		<updated>2025-01-15T17:44:00Z</updated>

		<summary type="html">&lt;p&gt;Lleung: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ARC Cluster Status}}&lt;br /&gt;
&lt;br /&gt;
== System Messages ==&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Systems Operating Normally&lt;br /&gt;
| date = 2024/09/03&lt;br /&gt;
| message =&lt;br /&gt;
The ARC Cluster and the Altis login node are operational. No upcoming upgrades are planned.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Notice of Upcoming Partial Outage&lt;br /&gt;
| date = 2024/08/27&lt;br /&gt;
| message = Several compute nodes from the ARC cluster will be unavailable &lt;br /&gt;
between Sept 23 and Sept 27 inclusive (subject to change). Some Altis GPU nodes will be affected during this maintenance window. These nodes will return to service as soon as the work is complete.  &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update I&lt;br /&gt;
| date = 2024/09/25&lt;br /&gt;
| message = Due to hardware issues that are blocking our original maintenance window, most compute nodes that were taken offline on Monday have been brought back online today. An additional partial outage will occur again starting next Tuesday for the same nodes.&lt;br /&gt;
&lt;br /&gt;
On Tuesday, October 1, 2024, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until Friday October 4, 2024. Affected WDF-Altis GPU nodes include: wdfgpu[1-2,6,8-12].&lt;br /&gt;
&lt;br /&gt;
We apologise for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update II&lt;br /&gt;
| date = 2024/10/04&lt;br /&gt;
| message = The maintenance window will be extended until at least Monday, October 7, 2024 due to a power distribution issue in our renovated data centre.&lt;br /&gt;
&lt;br /&gt;
Currently, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until at least Monday, October 7, 2024. Affected WDF-Altis GPU nodes include: wdfgpu[1-2,6,8-12].&lt;br /&gt;
&lt;br /&gt;
We apologize for the extended downtime and will update you as soon as we have additional information from our operations team.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update III&lt;br /&gt;
| date = 2024/10/07&lt;br /&gt;
| message = Due to technical issues beyond our control, the maintenance window will be extended until at least Tuesday, October 15, 2024.&lt;br /&gt;
&lt;br /&gt;
Currently, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until at least Tuesday, October 15, 2024. Affected WDF-Altis GPU nodes include: wdfgpu[1-2,6,8-12].&lt;br /&gt;
&lt;br /&gt;
We apologize for the extended downtime and will update you as soon as we have additional information from our operations team.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Normal Scheduling has resumed. &lt;br /&gt;
| date = 2024/10/08&lt;br /&gt;
| message = The ARC cluster has been successfully brought online and nodes are running jobs normally. We apologize for the extended downtime. &lt;br /&gt;
&lt;br /&gt;
Please reach out to support@hpc.ucalgary.ca with any issues or concerns. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = wdfgpu[1-12] System Update Reboots &lt;br /&gt;
| date = 2024/12/02&lt;br /&gt;
| message = wdfgpu[1-12] will be rebooted briefly today to install important system updates and will return to service shortly. &lt;br /&gt;
&lt;br /&gt;
Please reach out to support@hpc.ucalgary.ca with any issues or concerns. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Scheduled Maintenance and OS Update&lt;br /&gt;
| date = 2025/01/07&lt;br /&gt;
| message = The ARC cluster will be rebooted for OS updates on Monday January 20, 2025. Please make sure to save your work and log out before the reboot happens. Scheduling will be paused until the cluster is back, but queued jobs will remain in the queue and nodes will start scheduling when the cluster is ready. Thank you for understanding. &lt;br /&gt;
&lt;br /&gt;
Please reach out to support@hpc.ucalgary.ca with any issues or concerns. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = ⚠️ Scheduled Maintenance and OS Update&lt;br /&gt;
| date = 2025/01/15&lt;br /&gt;
| message = The ARC cluster will be down for maintenance and upgrades starting at 9 AM on Monday, January 20, 2025, through Wednesday, January 22, 2025. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the duration of the upgrade window:&lt;br /&gt;
* Scheduling will be paused and new jobs will be queued. Any queued jobs will start scheduling only after the upgrade is complete.&lt;br /&gt;
* Access to files via the login node and arc-dtn will generally be available but intermittent. File transfers on the DTN node, including Globus file transfers, may be interrupted during this window.&lt;br /&gt;
&lt;br /&gt;
Please make sure to save your work prior to this outage window to avoid any loss of work.&lt;br /&gt;
&lt;br /&gt;
During this time the following changes will happen:&lt;br /&gt;
&lt;br /&gt;
1. Ethernet will replace the 11-year-old, unsupported InfiniBand on the following partitions:&lt;br /&gt;
* cpu2023 (temporary)&lt;br /&gt;
* Parallel&lt;br /&gt;
* Theia/Synergy/cpu2017-bf05&lt;br /&gt;
* Single&lt;br /&gt;
Any multi-node (MPI) jobs running on these partitions will have increased latency going forward. If you run multi-node jobs, make sure to run on a partition such as cpu2019, cpu2021, or cpu2022.&lt;br /&gt;
&lt;br /&gt;
2. A component of the NetApp filer will be replaced. Access to /bulk will be unavailable on Wednesday, January 22, 2025.&lt;br /&gt;
&lt;br /&gt;
3. The compute node operating system will be updated to Rocky Linux 8.10.&lt;br /&gt;
&lt;br /&gt;
4. The Slurm scheduling system will be upgraded.&lt;br /&gt;
&lt;br /&gt;
5. The Open OnDemand web portal will be upgraded.&lt;br /&gt;
&lt;br /&gt;
Please reach out to support@hpc.ucalgary.ca with any issues or concerns.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
[[Category:ARC]]&lt;br /&gt;
{{Navbox ARC}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Think_Login_Node_Status&amp;diff=3691</id>
		<title>Think Login Node Status</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Think_Login_Node_Status&amp;diff=3691"/>
		<updated>2025-01-15T17:43:54Z</updated>

		<summary type="html">&lt;p&gt;Lleung: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ARC Cluster Status}}&lt;br /&gt;
&lt;br /&gt;
== System Messages ==&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Systems Operating Normally&lt;br /&gt;
| date = 2024/09/03&lt;br /&gt;
| message =&lt;br /&gt;
The ARC Cluster and the Think login node are operational. No upcoming upgrades are planned.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Notice of Upcoming Partial Outage&lt;br /&gt;
| date = 2024/08/23&lt;br /&gt;
| message = Several compute nodes from the ARC cluster will be unavailable &lt;br /&gt;
between Sept 23 and Sept 27 inclusive (subject to change). Some Think GPU nodes will be affected during this maintenance window. These nodes will return to service as soon as the work is complete.  &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update I&lt;br /&gt;
| date = 2024/09/25&lt;br /&gt;
| message = Due to hardware issues that are blocking our original maintenance window, most compute nodes that were taken offline on Monday have been brought back online today. An additional partial outage will occur again starting next Tuesday for the same nodes.&lt;br /&gt;
&lt;br /&gt;
On Tuesday, October 1, 2024, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until Friday October 4, 2024.&lt;br /&gt;
&lt;br /&gt;
We apologise for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update II&lt;br /&gt;
| date = 2024/10/04&lt;br /&gt;
| message = The maintenance window will be extended until at least Monday, October 7, 2024 due to a power distribution issue in our renovated data centre.&lt;br /&gt;
&lt;br /&gt;
Currently, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until at least Monday, October 7, 2024. Affected WDF-Altis GPU nodes include: wdfgpu[1-2,6,8-12].&lt;br /&gt;
&lt;br /&gt;
We apologize for the extended downtime and will update you as soon as we have additional information from our operations team.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update III&lt;br /&gt;
| date = 2024/10/07&lt;br /&gt;
| message = Due to technical issues beyond our control, the maintenance window will be extended until at least Tuesday, October 15, 2024.&lt;br /&gt;
&lt;br /&gt;
Currently, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until at least Tuesday, October 15, 2024. Affected WDF GPU nodes include: wdfgpu[1-2,6,8-12].&lt;br /&gt;
&lt;br /&gt;
We apologize for the extended downtime and will update you as soon as we have additional information from our operations team.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Normal Scheduling has resumed. &lt;br /&gt;
| date = 2024/10/08&lt;br /&gt;
| message = The ARC cluster has been successfully brought online and nodes are running jobs normally. We apologize for the extended downtime. &lt;br /&gt;
&lt;br /&gt;
Please reach out to support@hpc.ucalgary.ca with any issues or concerns. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = wdfgpu[1-12] System Update Reboots &lt;br /&gt;
| date = 2024/12/02&lt;br /&gt;
| message = wdfgpu[1-12] will be rebooted briefly today to install important system updates and will return to service shortly. &lt;br /&gt;
&lt;br /&gt;
Please reach out to support@hpc.ucalgary.ca with any issues or concerns. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Scheduled Maintenance and OS Update&lt;br /&gt;
| date = 2025/01/07&lt;br /&gt;
| message = The ARC cluster will be rebooted for OS updates on Monday January 20, 2025. Please make sure to save your work and log out before the reboot happens. Scheduling will be paused until the cluster is back, but queued jobs will remain in the queue and nodes will start scheduling when the cluster is ready. Thank you for understanding. &lt;br /&gt;
&lt;br /&gt;
Please reach out to support@hpc.ucalgary.ca with any issues or concerns. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = ⚠️ Scheduled Maintenance and OS Update&lt;br /&gt;
| date = 2025/01/15&lt;br /&gt;
| message = The ARC cluster will be down for maintenance and upgrades starting at 9 AM on Monday, January 20, 2025, through Wednesday, January 22, 2025. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the duration of the upgrade window:&lt;br /&gt;
* Scheduling will be paused and new jobs will be queued. Any queued jobs will start scheduling only after the upgrade is complete.&lt;br /&gt;
* Access to files via the login node and arc-dtn will generally be available but intermittent. File transfers on the DTN node, including Globus file transfers, may be interrupted during this window.&lt;br /&gt;
&lt;br /&gt;
Please make sure to save your work prior to this outage window to avoid any loss of work.&lt;br /&gt;
&lt;br /&gt;
During this time the following changes will happen:&lt;br /&gt;
&lt;br /&gt;
1. Ethernet will replace the 11-year-old, unsupported InfiniBand on the following partitions:&lt;br /&gt;
* cpu2023 (temporary)&lt;br /&gt;
* Parallel&lt;br /&gt;
* Theia/Synergy/cpu2017-bf05&lt;br /&gt;
* Single&lt;br /&gt;
Any multi-node (MPI) jobs running on these partitions will have increased latency going forward. If you run multi-node jobs, make sure to run on a partition such as cpu2019, cpu2021, or cpu2022.&lt;br /&gt;
&lt;br /&gt;
2. A component of the NetApp filer will be replaced. Access to /bulk will be unavailable on Wednesday, January 22, 2025.&lt;br /&gt;
&lt;br /&gt;
3. The compute node operating system will be updated to Rocky Linux 8.10.&lt;br /&gt;
&lt;br /&gt;
4. The Slurm scheduling system will be upgraded.&lt;br /&gt;
&lt;br /&gt;
5. The Open OnDemand web portal will be upgraded.&lt;br /&gt;
&lt;br /&gt;
Please reach out to support@hpc.ucalgary.ca with any issues or concerns.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
[[Category:ARC]]&lt;br /&gt;
{{Navbox ARC}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=ARC_Cluster_Status&amp;diff=3690</id>
		<title>ARC Cluster Status</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=ARC_Cluster_Status&amp;diff=3690"/>
		<updated>2025-01-15T17:33:56Z</updated>

		<summary type="html">&lt;p&gt;Lleung: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Cluster Status&lt;br /&gt;
| cluster = ARC&lt;br /&gt;
| status = green&lt;br /&gt;
| title = Cluster operational&lt;br /&gt;
| message = System is operational. Updates are planned for Jan 20. Please see MOTD&lt;br /&gt;
&lt;br /&gt;
See the [[ARC Cluster Status]] page for system notices. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== System Messages ==&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = January System Updates&lt;br /&gt;
| date = 2023/01/01&lt;br /&gt;
| message =&lt;br /&gt;
Beginning January 16, 2023, the ARC cluster will undergo operating system updates. We shall do our utmost to minimize disruption and allow ongoing jobs to be completed. New jobs may be temporarily held from scheduling.&lt;br /&gt;
&lt;br /&gt;
The ARC login node will reboot on the morning of January 16. Please save your work and log out if possible.&lt;br /&gt;
&lt;br /&gt;
The upgrade is planned to be fully complete by January 20.&lt;br /&gt;
&lt;br /&gt;
If you encounter any system issues, do not hesitate to let us know.&lt;br /&gt;
&lt;br /&gt;
Thank you for your cooperation.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = System Updates Completed&lt;br /&gt;
| date = 2023/01/24&lt;br /&gt;
| message =&lt;br /&gt;
The upgrade has been completed. The following has been changed:&lt;br /&gt;
* OS Updated to Rocky Linux 8.7&lt;br /&gt;
* Slurm updated to 22.05.7&lt;br /&gt;
* Apptainer replaces Singularity&lt;br /&gt;
* Each job will have its own /tmp, /dev/shm, /run/user/$uid mounted&lt;br /&gt;
&lt;br /&gt;
If you encounter any system issues, do not hesitate to let us know.&lt;br /&gt;
&lt;br /&gt;
Thank you for your cooperation.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Filesystem Issues&lt;br /&gt;
| date = 2023/02/28&lt;br /&gt;
| message =&lt;br /&gt;
We are currently investigating a filesystem issue that is causing filesystem slowdowns across ARC.&lt;br /&gt;
&lt;br /&gt;
We will update you with more information as it becomes available.&lt;br /&gt;
&lt;br /&gt;
Thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Filesystem Issues&lt;br /&gt;
| date = 2023/03/01&lt;br /&gt;
| message =&lt;br /&gt;
We are still investigating a filesystem issue that is causing filesystem slowdowns across ARC. Some jobs on ARC have been paused to help us find the root cause of the slowdowns.&lt;br /&gt;
&lt;br /&gt;
We will update you with more information as it becomes available.&lt;br /&gt;
&lt;br /&gt;
Thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = ARC Login node reboot&lt;br /&gt;
| date = 2023/03/02&lt;br /&gt;
| message =&lt;br /&gt;
The ARC login node will be rebooted this afternoon for emergency maintenance. This downtime is needed to help mitigate the filesystem slowdowns experienced on the login node. Jobs will continue running and scheduling during this time.&lt;br /&gt;
&lt;br /&gt;
All logins to the ARC login node will be terminated at 3:00PM and will remain unavailable until 4:00PM.&lt;br /&gt;
&lt;br /&gt;
We apologize for the inconvenience and thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = ⚠️ Filesystem Issues&lt;br /&gt;
| date = 2023/03/02&lt;br /&gt;
| message =&lt;br /&gt;
We are still investigating a filesystem issue that is causing filesystem slowdowns on specific nodes in our MSRDC location.&lt;br /&gt;
&lt;br /&gt;
We will update you with more information as it becomes available.&lt;br /&gt;
&lt;br /&gt;
We apologize for the inconvenience and thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Filesystem Issues Resolved&lt;br /&gt;
| date = 2023/03/10&lt;br /&gt;
| message =&lt;br /&gt;
We have upgraded the filesystem routers in our MSRDC location to address the performance issues.&lt;br /&gt;
&lt;br /&gt;
Please let us know if you experience any issues with the filesystem performance.&lt;br /&gt;
&lt;br /&gt;
Thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Open OnDemand reboot&lt;br /&gt;
| date = 2023/05/01&lt;br /&gt;
| message =&lt;br /&gt;
On May 1, 2023, the ARC Open OnDemand node will be rebooted between 5PM and 6PM. Expected downtime will be approximately 15 minutes.&lt;br /&gt;
&lt;br /&gt;
If you encounter any system issues, do not hesitate to let us know.&lt;br /&gt;
&lt;br /&gt;
Thank you for your cooperation.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Apptainer (Singularity) on ARC Login Node&lt;br /&gt;
| date = 2023/06/22&lt;br /&gt;
| message =&lt;br /&gt;
Apptainer (Singularity) containers may experience an error when&lt;br /&gt;
running on the ARC login node. If Apptainer complains that a system&lt;br /&gt;
administrator needs to enable user namespaces, simply run your&lt;br /&gt;
containers inside a job.&lt;br /&gt;
&lt;br /&gt;
This is a temporary measure due to a security vulnerability that will be&lt;br /&gt;
patched soon.&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Lattice, Single, cpu2013 partition changes&lt;br /&gt;
| date = 2023/07/13&lt;br /&gt;
| message =&lt;br /&gt;
The Lattice, Single, and cpu2013 partitions have all been decommissioned. The Single&lt;br /&gt;
partition will be replaced by the nodes formerly in the cpu2013 partition but&lt;br /&gt;
will be called single.&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Open OnDemand reboot&lt;br /&gt;
| date = 2023/10/17&lt;br /&gt;
| message =&lt;br /&gt;
Open OnDemand will be rebooted on October 17, 2023 for an update. It will be down for up to 30 minutes.&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Storage Upgrade MARC/ARC cluster&lt;br /&gt;
| date = 2023/10/23&lt;br /&gt;
| message =&lt;br /&gt;
We will be performing storage upgrades on the MARC/ARC cluster on &lt;br /&gt;
November 16 and 17, 2023. To facilitate this, we will be throttling &lt;br /&gt;
down the number of jobs on both clusters while the upgrades are &lt;br /&gt;
performed.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Systems Operating Normally&lt;br /&gt;
| date = 2024/05/03&lt;br /&gt;
| message =&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Power Interruption&lt;br /&gt;
| date = 2024/05/07&lt;br /&gt;
| message = ARC experienced a brief power outage around 11 AM on May 7, 2024.&lt;br /&gt;
Most compute nodes have rebooted or are rebooting. Most jobs running at this time &lt;br /&gt;
were lost. ARC administrators are actively working on restarting compute &lt;br /&gt;
nodes. Sorry for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = GPU a100 Node Reservation&lt;br /&gt;
| date = 2024/06/03&lt;br /&gt;
| message = Job submissions targeted to the GPU a100 partition will be &lt;br /&gt;
affected by a temporary reservation on the nodes to accommodate the RCS&lt;br /&gt;
summer school class taking place on 2024/Jun/10. The reservation will end &lt;br /&gt;
shortly afterwards. Please submit your jobs normally and the scheduler will &lt;br /&gt;
start them as soon as the nodes are available. Sorry for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = GPU a100 Node Reservation Removed&lt;br /&gt;
| date = 2024/06/11&lt;br /&gt;
| message = GPU a100 Nodes in ARC have been returned to normal scheduling. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Notice of Upcoming Partial Outage&lt;br /&gt;
| date = 2024/08/23&lt;br /&gt;
| message = Several compute nodes from the ARC cluster will be unavailable &lt;br /&gt;
between Sept 23 and Sept 27 inclusive (subject to change). All compute nodes &lt;br /&gt;
in cpu2019, cpu2021/2, and gpu-v100, most nodes from bigmem, and gpu-a100 will be &lt;br /&gt;
affected. These nodes will return to service as soon as the work is complete.  &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update I&lt;br /&gt;
| date = 2024/09/25&lt;br /&gt;
| message = Due to hardware issues that are blocking our original maintenance window, most compute nodes that were taken offline on Monday have been brought back online today. An additional partial outage will occur starting next Tuesday for the same nodes.&lt;br /&gt;
&lt;br /&gt;
On Tuesday, October 1, 2024, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until Friday October 4, 2024.&lt;br /&gt;
&lt;br /&gt;
We apologise for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update II&lt;br /&gt;
| date = 2024/10/04&lt;br /&gt;
| message = The maintenance window will be extended until at least Monday, October 7, 2024 due to a power distribution issue in our renovated data centre.&lt;br /&gt;
&lt;br /&gt;
Currently, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until at least Monday, October 7, 2024. Affected WDF-Altis GPU nodes include: wdfgpu[1-2,6,8-12].&lt;br /&gt;
&lt;br /&gt;
We apologize for the extended downtime and will update you as soon as we have additional information from our operations team.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update III&lt;br /&gt;
| date = 2024/10/07&lt;br /&gt;
| message = Due to technical issues beyond our control the maintenance window will be extended until at least Tuesday, October 15, 2024.&lt;br /&gt;
&lt;br /&gt;
Currently, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until at least Tuesday, October 15, 2024. Affected WDF-Altis GPU nodes include: wdfgpu[1-2,6,8-12].&lt;br /&gt;
&lt;br /&gt;
We apologize for the extended downtime and will update you as soon as we have additional information from our operations team.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Normal Scheduling has resumed. &lt;br /&gt;
| date = 2024/10/08&lt;br /&gt;
| message = The ARC cluster has been successfully brought online and nodes are running jobs normally. We apologize for the extended downtime. &lt;br /&gt;
&lt;br /&gt;
Please reach out to support@hpc.ucalgary.ca with any issues or concerns. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Scheduled Maintenance &lt;br /&gt;
| date = 2024/12/11&lt;br /&gt;
| message = The ARC login node will be rebooted on Tuesday December 17 for scheduled maintenance. It will be down for a few minutes and return shortly. Job scheduling and jobs running on the cluster will not be affected. Thank you for understanding. &lt;br /&gt;
&lt;br /&gt;
Please reach out to support@hpc.ucalgary.ca with any issues or concerns. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Scheduled Maintenance and OS Update&lt;br /&gt;
| date = 2025/01/07&lt;br /&gt;
| message = The ARC cluster will be rebooted for OS updates on Monday January 20, 2025. Please make sure to save your work and log out before the reboot happens. Scheduling will be paused until the cluster is back, but queued jobs will remain in the queue and nodes will start scheduling when the cluster is ready. Thank you for understanding. &lt;br /&gt;
&lt;br /&gt;
Please reach out to support@hpc.ucalgary.ca with any issues or concerns. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = ⚠️ Scheduled Maintenance and OS Update&lt;br /&gt;
| date = 2025/01/15&lt;br /&gt;
| message = The ARC cluster will be down for maintenance and upgrades starting 9AM Monday, January 20, 2025 through Wednesday, January 22, 2025. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the duration of the upgrade window:&lt;br /&gt;
* Scheduling will be paused and new jobs will be queued. Any queued jobs will start scheduling only after the upgrade is complete.&lt;br /&gt;
* Access to files via the login node and arc-dtn will generally be available but intermittent. File transfers on the DTN node, including Globus file transfers, may be interrupted during this window.&lt;br /&gt;
&lt;br /&gt;
Please make sure to save your work prior to this outage window to avoid any loss of work.&lt;br /&gt;
&lt;br /&gt;
During this time the following changes will happen:&lt;br /&gt;
&lt;br /&gt;
1. Ethernet will replace the 11-year-old, unsupported InfiniBand interconnect on the following partitions:&lt;br /&gt;
* cpu2023 (temporary)&lt;br /&gt;
* Parallel&lt;br /&gt;
* Theia/Synergy/cpu2017-bf05&lt;br /&gt;
* Single&lt;br /&gt;
Any multi-node (MPI) jobs running on these partitions will have increased latency going forward. If you run multi-node jobs, make sure to use a partition such as cpu2019, cpu2021, or cpu2022 (a minimal example sketch follows this notice).&lt;br /&gt;
&lt;br /&gt;
2. A component of the NetApp filer will be replaced. Access to /bulk will be unavailable on Wednesday, January 22, 2025.&lt;br /&gt;
&lt;br /&gt;
3. The compute node operating system will be updated to Rocky Linux 8.10.&lt;br /&gt;
&lt;br /&gt;
4. The Slurm scheduling system will be upgraded.&lt;br /&gt;
&lt;br /&gt;
5. The Open OnDemand web portal will be upgraded.&lt;br /&gt;
&lt;br /&gt;
Please reach out to support@hpc.ucalgary.ca with any issues or concerns.&lt;br /&gt;
}}&lt;br /&gt;
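As a minimal, hypothetical illustration of the multi-node advice in the notice above, the sketch below requests two nodes on one of the partitions that keep a low-latency interconnect; the module name and program path are placeholders, not ARC-specific values.&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --partition=cpu2019        # partition with a low-latency interconnect&lt;br /&gt;
 #SBATCH --nodes=2&lt;br /&gt;
 #SBATCH --ntasks-per-node=4&lt;br /&gt;
 #SBATCH --time=01:00:00&lt;br /&gt;
 #SBATCH --mem=8G&lt;br /&gt;
 # the module name and binary below are placeholders for illustration&lt;br /&gt;
 module load openmpi&lt;br /&gt;
 mpirun ./my_mpi_program&lt;br /&gt;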
&lt;br /&gt;
{{Navbox ARC}}&lt;br /&gt;
[[Category:ARC]]&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=ARC_Cluster_Status&amp;diff=3689</id>
		<title>ARC Cluster Status</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=ARC_Cluster_Status&amp;diff=3689"/>
		<updated>2025-01-15T17:29:50Z</updated>

		<summary type="html">&lt;p&gt;Lleung: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Cluster Status&lt;br /&gt;
| cluster = ARC&lt;br /&gt;
| status = green&lt;br /&gt;
| title = Cluster operational&lt;br /&gt;
| message = System is operational. Updates are planned for Jan 20. Please see MOTD&lt;br /&gt;
&lt;br /&gt;
See the [[ARC Cluster Status]] page for system notices. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== System Messages ==&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = January System Updates&lt;br /&gt;
| date = 2023/01/01&lt;br /&gt;
| message =&lt;br /&gt;
Beginning January 16, 2023, the ARC cluster will undergo operating system updates. We shall do our utmost to minimize disruption and allow ongoing jobs to be completed. New jobs may be temporarily held from scheduling.&lt;br /&gt;
&lt;br /&gt;
The ARC login node will reboot on the morning of January 16. Please save your work and log out if possible.&lt;br /&gt;
&lt;br /&gt;
The upgrade is planned to be fully complete by January 20.&lt;br /&gt;
&lt;br /&gt;
If you encounter any system issues, do not hesitate to let us know.&lt;br /&gt;
&lt;br /&gt;
Thank you for your cooperation.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = System Updates Completed&lt;br /&gt;
| date = 2023/01/24&lt;br /&gt;
| message =&lt;br /&gt;
The upgrade has been completed. The following has been changed:&lt;br /&gt;
* OS Updated to Rocky Linux 8.7&lt;br /&gt;
* Slurm updated to 22.05.7&lt;br /&gt;
* Apptainer replaces Singularity&lt;br /&gt;
* Each job will have its own /tmp, /dev/shm, /run/user/$uid mounted&lt;br /&gt;
&lt;br /&gt;
If you encounter any system issues, do not hesitate to let us know.&lt;br /&gt;
&lt;br /&gt;
Thank you for your cooperation.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Filesystem Issues&lt;br /&gt;
| date = 2023/02/28&lt;br /&gt;
| message =&lt;br /&gt;
We are currently investigating a filesystem issue that is causing filesystem slowdowns across ARC.&lt;br /&gt;
&lt;br /&gt;
We will update you with more information as it becomes available.&lt;br /&gt;
&lt;br /&gt;
Thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Filesystem Issues&lt;br /&gt;
| date = 2023/03/1&lt;br /&gt;
| message =&lt;br /&gt;
We are still investigating a filesystem issue that is causing filesystem slowdowns across ARC. Some jobs on ARC have been paused to help us find the root cause of the slowdowns.&lt;br /&gt;
&lt;br /&gt;
We will update you with more information as it becomes available.&lt;br /&gt;
&lt;br /&gt;
Thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = ARC Login node reboot&lt;br /&gt;
| date = 2023/03/2&lt;br /&gt;
| message =&lt;br /&gt;
The ARC login node will be rebooted this afternoon for emergency maintenance. This downtime is needed to help mitigate the filesystem slowdowns experienced on the login node. Jobs will continue running and scheduling during this time.&lt;br /&gt;
&lt;br /&gt;
All logins to the ARC login node will be terminated at 3:00PM and will remain unavailable until 4:00PM.&lt;br /&gt;
&lt;br /&gt;
We apologize for the inconvenience and thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = ⚠️ Filesystem Issues&lt;br /&gt;
| date = 2023/03/2&lt;br /&gt;
| message =&lt;br /&gt;
We are still investigating a filesystem issue that is causing filesystem slowdowns on specific nodes in our MSRDC location.&lt;br /&gt;
&lt;br /&gt;
We will update you with more information as it becomes available.&lt;br /&gt;
&lt;br /&gt;
We apologize for the inconvenience and thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Filesystem Issues Resolved&lt;br /&gt;
| date = 2023/03/10&lt;br /&gt;
| message =&lt;br /&gt;
We have upgraded the filesystem routers in our MSRDC location to address the performance issues.&lt;br /&gt;
&lt;br /&gt;
Please let us know if you experience any issues with the filesystem performance.&lt;br /&gt;
&lt;br /&gt;
Thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Open OnDemand reboot&lt;br /&gt;
| date = 2023/05/01&lt;br /&gt;
| message =&lt;br /&gt;
On May 1, 2023, the ARC Open OnDemand node will be rebooted between 5PM and 6PM. Expected downtime will be approximately 15 minutes.&lt;br /&gt;
&lt;br /&gt;
If you encounter any system issues, do not hesitate to let us know.&lt;br /&gt;
&lt;br /&gt;
Thank you for your cooperation.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Apptainer (Singularity) on ARC Login Node&lt;br /&gt;
| date = 2023/06/22&lt;br /&gt;
| message =&lt;br /&gt;
Apptainer (Singularity) containers may fail with an error when run on the&lt;br /&gt;
ARC login node. If Apptainer complains that a system administrator needs to&lt;br /&gt;
enable user namespaces, simply run your containers inside a job.&lt;br /&gt;
&lt;br /&gt;
This is a temporary measure due to a security vulnerability that will be&lt;br /&gt;
patched soon.&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Lattice, Single, cpu2013 partition changes&lt;br /&gt;
| date = 2023/07/13&lt;br /&gt;
| message =&lt;br /&gt;
The Lattice, Single, and cpu2013 partitions have all been decommissioned. The Single&lt;br /&gt;
partition will be replaced by the nodes formerly in the cpu2013 partition but&lt;br /&gt;
will still be called single.&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Open OnDemand reboot&lt;br /&gt;
| date = 2023/10/17&lt;br /&gt;
| message =&lt;br /&gt;
Open OnDemand will be rebooted on October 17, 2023 for an update. It will be down for up to 30 minutes.&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Storage Upgrade MARC/ARC cluster&lt;br /&gt;
| date = 2023/10/23&lt;br /&gt;
| message =&lt;br /&gt;
We will be performing storage upgrades on the MARC/ARC cluster on &lt;br /&gt;
November 16 and 17, 2023. To facilitate this, we will be throttling &lt;br /&gt;
down the number of jobs on both clusters while the upgrades are &lt;br /&gt;
performed.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Systems Operating Normally&lt;br /&gt;
| date = 2024/05/3&lt;br /&gt;
| message =&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Power Interruption&lt;br /&gt;
| date = 2024/05/07&lt;br /&gt;
| message = ARC experienced a brief power outage around 11AM on May 7, 2024.&lt;br /&gt;
Most compute nodes have rebooted or are rebooting. Most jobs running at the time &lt;br /&gt;
were lost. ARC administrators are actively working on restarting compute &lt;br /&gt;
nodes. Sorry for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = GPU a100 Node Reservation&lt;br /&gt;
| date = 2024/06/03&lt;br /&gt;
| message = Job submissions targeted at the GPU a100 partition will be &lt;br /&gt;
affected by a temporary reservation on the nodes to accommodate the RCS&lt;br /&gt;
summer school class taking place on 2024/Jun/10. The reservation will end &lt;br /&gt;
shortly afterwards. Please submit your jobs normally and the scheduler will &lt;br /&gt;
start them as soon as the nodes are available. Sorry for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = GPU a100 Node Reservation Removed&lt;br /&gt;
| date = 2024/06/11&lt;br /&gt;
| message = GPU a100 Nodes in ARC have been returned to normal scheduling. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Notice of Upcoming Partial Outage&lt;br /&gt;
| date = 2024/08/23&lt;br /&gt;
| message = Several compute nodes from the ARC cluster will be unavailable &lt;br /&gt;
from Sept 23 to Sept 27 inclusive (subject to change). All compute nodes &lt;br /&gt;
in cpu2019, cpu2021/2, and gpu-v100, and most nodes from bigmem and gpu-a100, will be &lt;br /&gt;
affected. These nodes will return to service as soon as the work is complete.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update I&lt;br /&gt;
| date = 2024/09/25&lt;br /&gt;
| message = Due to hardware issues that are blocking our original maintenance window, most compute nodes that were taken offline on Monday have been brought back online today. An additional partial outage will occur starting next Tuesday for the same nodes.&lt;br /&gt;
&lt;br /&gt;
On Tuesday, October 1, 2024, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until Friday October 4, 2024.&lt;br /&gt;
&lt;br /&gt;
We apologise for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update II&lt;br /&gt;
| date = 2024/10/04&lt;br /&gt;
| message = The maintenance window will be extended until at least Monday, October 7, 2024 due to a power distribution issue in our renovated data centre.&lt;br /&gt;
&lt;br /&gt;
Currently, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until at least Monday, October 7, 2024. Affected WDF-Altis GPU nodes include: wdfgpu[1-2,6,8-12].&lt;br /&gt;
&lt;br /&gt;
We apologize for the extended downtime and will update you as soon as we have additional information from our operations team.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update III&lt;br /&gt;
| date = 2024/10/07&lt;br /&gt;
| message = Due to technical issues beyond our control the maintenance window will be extended until at least Tuesday, October 15, 2024.&lt;br /&gt;
&lt;br /&gt;
Currently, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until at least Tuesday, October 15, 2024. Affected WDF-Altis GPU nodes include: wdfgpu[1-2,6,8-12].&lt;br /&gt;
&lt;br /&gt;
We apologize for the extended downtime and will update you as soon as we have additional information from our operations team.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Normal Scheduling has resumed. &lt;br /&gt;
| date = 2024/10/08&lt;br /&gt;
| message = The ARC cluster has been successfully brought online and nodes are running jobs normally. We apologize for the extended downtime. &lt;br /&gt;
&lt;br /&gt;
Please reach out to support@hpc.ucalgary.ca with any issues or concerns. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Scheduled Maintenance &lt;br /&gt;
| date = 2024/12/11&lt;br /&gt;
| message = The ARC login node will be rebooted on Tuesday December 17 for scheduled maintenance. It will be down for a few minutes and return shortly. Job scheduling and jobs running on the cluster will not be affected. Thank you for understanding. &lt;br /&gt;
&lt;br /&gt;
Please reach out to support@hpc.ucalgary.ca with any issues or concerns. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Scheduled Maintenance and OS Update&lt;br /&gt;
| date = 2025/01/07&lt;br /&gt;
| message = The ARC cluster will be rebooted for OS updates on Monday January 20, 2025. Please make sure to save your work and log out before the reboot happens. Scheduling will be paused until the cluster is back, but queued jobs will remain in the queue and nodes will start scheduling when the cluster is ready. Thank you for understanding. &lt;br /&gt;
&lt;br /&gt;
Please reach out to support@hpc.ucalgary.ca with any issues or concerns. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = ⚠️ Scheduled Maintenance and OS Update&lt;br /&gt;
| date = 2025/01/15&lt;br /&gt;
| message = The ARC cluster will be down for maintenance and upgrades starting 9AM Monday, January 20, 2025 through Wednesday, January 22, 2025. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the duration of the upgrade window:&lt;br /&gt;
* Scheduling will be paused and new jobs will be queued. Any queued jobs will start scheduling after the upgrade is complete.&lt;br /&gt;
* Access to files via the login node and arc-dtn will generally be available but intermittent. File transfers on the DTN node, including Globus file transfers, may be interrupted during this window.&lt;br /&gt;
&lt;br /&gt;
Please make sure to save your work prior to this outage window to avoid any loss of work.&lt;br /&gt;
&lt;br /&gt;
During this time the following changes will happen:&lt;br /&gt;
&lt;br /&gt;
1. Ethernet will replace the 11-year-old, unsupported InfiniBand interconnect on the following partitions:&lt;br /&gt;
* cpu2023 (temporary)&lt;br /&gt;
* Parallel&lt;br /&gt;
* Theia/Synergy/cpu2017-bf05&lt;br /&gt;
* Single&lt;br /&gt;
Any multi-node (MPI) jobs run on these partitions will have increased latency. If you run multi-node jobs, make sure to use a partition such as cpu2019, cpu2021, or cpu2022.&lt;br /&gt;
&lt;br /&gt;
2. A component of the NetApp filer will be replaced. Access to /bulk may be unavailable on Wednesday, January 22, 2025.&lt;br /&gt;
&lt;br /&gt;
3. The compute node operating system will be updated to Rocky Linux 8.10.&lt;br /&gt;
&lt;br /&gt;
4. The Slurm scheduling system will be upgraded.&lt;br /&gt;
&lt;br /&gt;
5. The Open OnDemand web portal will be upgraded.&lt;br /&gt;
&lt;br /&gt;
Please reach out to support@hpc.ucalgary.ca with any issues or concerns.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Navbox ARC}}&lt;br /&gt;
[[Category:ARC]]&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=ARC_Cluster_Status&amp;diff=3688</id>
		<title>ARC Cluster Status</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=ARC_Cluster_Status&amp;diff=3688"/>
		<updated>2025-01-15T17:28:47Z</updated>

		<summary type="html">&lt;p&gt;Lleung: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Cluster Status&lt;br /&gt;
| cluster = ARC&lt;br /&gt;
| status = green&lt;br /&gt;
| title = Cluster operational&lt;br /&gt;
| message = System is operational. Updates are planned for Jan 20. Please see MOTD&lt;br /&gt;
&lt;br /&gt;
See the [[ARC Cluster Status]] page for system notices. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== System Messages ==&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = January System Updates&lt;br /&gt;
| date = 2023/01/01&lt;br /&gt;
| message =&lt;br /&gt;
Beginning January 16, 2023, the ARC cluster will undergo operating system updates. We shall do our utmost to minimize disruption and allow ongoing jobs to be completed. New jobs may be temporarily held from scheduling.&lt;br /&gt;
&lt;br /&gt;
The ARC login node will reboot on the morning of January 16. Please save your work and log out if possible.&lt;br /&gt;
&lt;br /&gt;
The upgrade is planned to be fully complete by January 20.&lt;br /&gt;
&lt;br /&gt;
If you encounter any system issues, do not hesitate to let us know.&lt;br /&gt;
&lt;br /&gt;
Thank you for your cooperation.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = System Updates Completed&lt;br /&gt;
| date = 2023/01/24&lt;br /&gt;
| message =&lt;br /&gt;
The upgrade has been completed. The following has been changed:&lt;br /&gt;
* OS Updated to Rocky Linux 8.7&lt;br /&gt;
* Slurm updated to 22.05.7&lt;br /&gt;
* Apptainer replaces Singularity&lt;br /&gt;
* Each job will have its own /tmp, /dev/shm, /run/user/$uid mounted&lt;br /&gt;
&lt;br /&gt;
If you encounter any system issues, do not hesitate to let us know.&lt;br /&gt;
&lt;br /&gt;
Thank you for your cooperation.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Filesystem Issues&lt;br /&gt;
| date = 2023/02/28&lt;br /&gt;
| message =&lt;br /&gt;
We are currently investigating a filesystem issue that is causing filesystem slowdowns across ARC.&lt;br /&gt;
&lt;br /&gt;
We will update you with more information as it becomes available.&lt;br /&gt;
&lt;br /&gt;
Thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Filesystem Issues&lt;br /&gt;
| date = 2023/03/1&lt;br /&gt;
| message =&lt;br /&gt;
We are still investigating a filesystem issue that is causing filesystem slowdowns across ARC. Some jobs on ARC have been paused to help us find the root cause of the slowdowns.&lt;br /&gt;
&lt;br /&gt;
We will update you with more information as it becomes available.&lt;br /&gt;
&lt;br /&gt;
Thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = ARC Login node reboot&lt;br /&gt;
| date = 2023/03/2&lt;br /&gt;
| message =&lt;br /&gt;
The ARC login node will be rebooted this afternoon for emergency maintenance. This downtime is needed to help mitigate the filesystem slowdowns experienced on the login node. Jobs will continue running and scheduling during this time.&lt;br /&gt;
&lt;br /&gt;
All logins to the ARC login node will be terminated at 3:00PM and will remain unavailable until 4:00PM.&lt;br /&gt;
&lt;br /&gt;
We apologize for the inconvenience and thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = ⚠️ Filesystem Issues&lt;br /&gt;
| date = 2023/03/2&lt;br /&gt;
| message =&lt;br /&gt;
We are still investigating a filesystem issue that is causing filesystem slowdowns on specific nodes in our MSRDC location.&lt;br /&gt;
&lt;br /&gt;
We will update you with more information as it becomes available.&lt;br /&gt;
&lt;br /&gt;
We apologize for the inconvenience and thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Filesystem Issues Resolved&lt;br /&gt;
| date = 2023/03/10&lt;br /&gt;
| message =&lt;br /&gt;
We have upgraded the filesystem routers in our MSRDC location to address the performance issues.&lt;br /&gt;
&lt;br /&gt;
Please let us know if you experience any issues with the filesystem performance.&lt;br /&gt;
&lt;br /&gt;
Thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Open OnDemand reboot&lt;br /&gt;
| date = 2023/05/01&lt;br /&gt;
| message =&lt;br /&gt;
On May 1, 2023, the ARC Open OnDemand node will be rebooted between 5PM and 6PM. Expected downtime will be approximately 15 minutes.&lt;br /&gt;
&lt;br /&gt;
If you encounter any system issues, do not hesitate to let us know.&lt;br /&gt;
&lt;br /&gt;
Thank you for your cooperation.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Apptainer (Singularity) on ARC Login Node&lt;br /&gt;
| date = 2023/06/22&lt;br /&gt;
| message =&lt;br /&gt;
Apptainer (Singularity) containers may fail with an error when run on the&lt;br /&gt;
ARC login node. If Apptainer complains that a system administrator needs to&lt;br /&gt;
enable user namespaces, simply run your containers inside a job.&lt;br /&gt;
&lt;br /&gt;
This is a temporary measure due to a security vulnerability that will be&lt;br /&gt;
patched soon.&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Lattice, Single, cpu2013 partition changes&lt;br /&gt;
| date = 2023/07/13&lt;br /&gt;
| message =&lt;br /&gt;
The Lattice, Single, and cpu2013 partitions have all been decommissioned. The Single&lt;br /&gt;
partition will be replaced by the nodes formerly in the cpu2013 partition but&lt;br /&gt;
will still be called single.&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Open OnDemand reboot&lt;br /&gt;
| date = 2023/10/17&lt;br /&gt;
| message =&lt;br /&gt;
Open OnDemand will be rebooted on October 17, 2023 for an update. It will be down for up to 30 minutes.&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Storage Upgrade MARC/ARC cluster&lt;br /&gt;
| date = 2023/10/23&lt;br /&gt;
| message =&lt;br /&gt;
We will be performing storage upgrades on the MARC/ARC cluster on &lt;br /&gt;
November 16 and 17, 2023. To facilitate this, we will be throttling &lt;br /&gt;
down the number of jobs on both clusters while the upgrades are &lt;br /&gt;
performed.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Systems Operating Normally&lt;br /&gt;
| date = 2024/05/3&lt;br /&gt;
| message =&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Power Interruption&lt;br /&gt;
| date = 2024/05/07&lt;br /&gt;
| message = ARC experienced a brief power outage around 11AM on May 7, 2024.&lt;br /&gt;
Most compute nodes have rebooted or are rebooting. Most jobs running at the time &lt;br /&gt;
were lost. ARC administrators are actively working on restarting compute &lt;br /&gt;
nodes. Sorry for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = GPU a100 Node Reservation&lt;br /&gt;
| date = 2024/06/03&lt;br /&gt;
| message = Job submissions targeted at the GPU a100 partition will be &lt;br /&gt;
affected by a temporary reservation on the nodes to accommodate the RCS&lt;br /&gt;
summer school class taking place on 2024/Jun/10. The reservation will end &lt;br /&gt;
shortly afterwards. Please submit your jobs normally and the scheduler will &lt;br /&gt;
start them as soon as the nodes are available. Sorry for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = GPU a100 Node Reservation Removed&lt;br /&gt;
| date = 2024/06/11&lt;br /&gt;
| message = GPU a100 Nodes in ARC have been returned to normal scheduling. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Notice of Upcoming Partial Outage&lt;br /&gt;
| date = 2024/08/23&lt;br /&gt;
| message = Several compute nodes from the ARC cluster will be unavailable &lt;br /&gt;
from Sept 23 to Sept 27 inclusive (subject to change). All compute nodes &lt;br /&gt;
in cpu2019, cpu2021/2, and gpu-v100, and most nodes from bigmem and gpu-a100, will be &lt;br /&gt;
affected. These nodes will return to service as soon as the work is complete.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update I&lt;br /&gt;
| date = 2024/09/25&lt;br /&gt;
| message = Due to hardware issues that are blocking our original maintenance window, most compute nodes that were taken offline on Monday have been brought back online today. An additional partial outage will occur starting next Tuesday for the same nodes.&lt;br /&gt;
&lt;br /&gt;
On Tuesday, October 1, 2024, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until Friday October 4, 2024.&lt;br /&gt;
&lt;br /&gt;
We apologise for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update II&lt;br /&gt;
| date = 2024/10/04&lt;br /&gt;
| message = The maintenance window will be extended until at least Monday, October 7, 2024 due to a power distribution issue in our renovated data centre.&lt;br /&gt;
&lt;br /&gt;
Currently, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until at least Monday, October 7, 2024. Affected WDF-Altis GPU nodes include: wdfgpu[1-2,6,8-12].&lt;br /&gt;
&lt;br /&gt;
We apologize for the extended downtime and will update you as soon as we have additional information from our operations team.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update III&lt;br /&gt;
| date = 2024/10/07&lt;br /&gt;
| message = Due to technical issues beyond our control the maintenance window will be extended until at least Tuesday, October 15, 2024.&lt;br /&gt;
&lt;br /&gt;
Currently, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until at least Tuesday, October 15, 2024. Affected WDF-Altis GPU nodes include: wdfgpu[1-2,6,8-12].&lt;br /&gt;
&lt;br /&gt;
We apologize for the extended downtime and will update you as soon as we have additional information from our operations team.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Normal Scheduling has resumed. &lt;br /&gt;
| date = 2024/10/08&lt;br /&gt;
| message = The ARC cluster has been successfully brought online and nodes are running jobs normally. We apologize for the extended downtime. &lt;br /&gt;
&lt;br /&gt;
Please reach out to support@hpc.ucalgary.ca with any issues or concerns. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Scheduled Maintenance &lt;br /&gt;
| date = 2024/12/11&lt;br /&gt;
| message = The ARC login node will be rebooted on Tuesday December 17 for scheduled maintenance. It will be down for a few minutes and return shortly. Job scheduling and jobs running on the cluster will not be affected. Thank you for understanding. &lt;br /&gt;
&lt;br /&gt;
Please reach out to support@hpc.ucalgary.ca with any issues or concerns. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Scheduled Maintenance and OS Update&lt;br /&gt;
| date = 2025/01/07&lt;br /&gt;
| message = The ARC cluster will be rebooted for OS updates on Monday January 20, 2025. Please make sure to save your work and log out before the reboot happens. Scheduling will be paused until the cluster is back, but queued jobs will remain in the queue and nodes will start scheduling when the cluster is ready. Thank you for understanding. &lt;br /&gt;
&lt;br /&gt;
Please reach out to support@hpc.ucalgary.ca with any issues or concerns. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = ⚠️ Scheduled Maintenance and OS Update&lt;br /&gt;
| date = 2025/01/15&lt;br /&gt;
| message = The ARC cluster will be down for maintenance and upgrades starting 9AM Monday, January 20, 2025 through Wednesday, January 22, 2025. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the duration of the upgrade window:&lt;br /&gt;
* Scheduling will be paused and new jobs will be queued. Any queued jobs will start scheduling after the upgrade is complete.&lt;br /&gt;
* Access to files via the login node and arc-dtn will generally be available but intermittent. File transfers on the DTN node, including Globus file transfers, may be interrupted during this window.&lt;br /&gt;
&lt;br /&gt;
Please make sure to save your work prior to this outage window to avoid any loss of work.&lt;br /&gt;
&lt;br /&gt;
During this time the following changes will happen:&lt;br /&gt;
&lt;br /&gt;
1. Ethernet will replace the 11-year-old, unsupported InfiniBand interconnect on the following partitions:&lt;br /&gt;
* cpu2023 (temporary)&lt;br /&gt;
* Parallel&lt;br /&gt;
* Theia/Synergy/cpu2017-bf05&lt;br /&gt;
* Single&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;Any multi-node (MPI) jobs run on these partitions will have increased latency. If you run multi-node jobs, make sure to use a partition such as cpu2019, cpu2021, or cpu2022.&lt;br /&gt;
&lt;br /&gt;
2. A component of the NetApp filer will be replaced. Access to /bulk may be unavailable on Wednesday, January 22, 2025.&lt;br /&gt;
&lt;br /&gt;
3. The compute node operating system will be updated to Rocky Linux 8.10.&lt;br /&gt;
&lt;br /&gt;
4. The Slurm scheduling system will be upgraded.&lt;br /&gt;
&lt;br /&gt;
5. The Open OnDemand web portal will be upgraded.&lt;br /&gt;
&lt;br /&gt;
Please reach out to support@hpc.ucalgary.ca with any issues or concerns.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Navbox ARC}}&lt;br /&gt;
[[Category:ARC]]&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=ARC_Cluster_Status&amp;diff=3687</id>
		<title>ARC Cluster Status</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=ARC_Cluster_Status&amp;diff=3687"/>
		<updated>2025-01-15T17:24:34Z</updated>

		<summary type="html">&lt;p&gt;Lleung: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Cluster Status&lt;br /&gt;
| cluster = ARC&lt;br /&gt;
| status = green&lt;br /&gt;
| title = Cluster operational&lt;br /&gt;
| message = System is operational. Updates are planned for Jan 20. Please see MOTD&lt;br /&gt;
&lt;br /&gt;
See the [[ARC Cluster Status]] page for system notices. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== System Messages ==&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = January System Updates&lt;br /&gt;
| date = 2023/01/01&lt;br /&gt;
| message =&lt;br /&gt;
Beginning January 16, 2023, the ARC cluster will undergo operating system updates. We shall do our utmost to minimize disruption and allow ongoing jobs to be completed. New jobs may be temporarily held from scheduling.&lt;br /&gt;
&lt;br /&gt;
The ARC login node will reboot on the morning of January 16. Please save your work and log out if possible.&lt;br /&gt;
&lt;br /&gt;
The upgrade is planned to be fully complete by January 20.&lt;br /&gt;
&lt;br /&gt;
If you encounter any system issues, do not hesitate to let us know.&lt;br /&gt;
&lt;br /&gt;
Thank you for your cooperation.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = System Updates Completed&lt;br /&gt;
| date = 2023/01/24&lt;br /&gt;
| message =&lt;br /&gt;
The upgrade has been completed. The following has been changed:&lt;br /&gt;
* OS Updated to Rocky Linux 8.7&lt;br /&gt;
* Slurm updated to 22.05.7&lt;br /&gt;
* Apptainer replaces Singularity&lt;br /&gt;
* Each job will have its own /tmp, /dev/shm, /run/user/$uid mounted&lt;br /&gt;
&lt;br /&gt;
If you encounter any system issues, do not hesitate to let us know.&lt;br /&gt;
&lt;br /&gt;
Thank you for your cooperation.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Filesystem Issues&lt;br /&gt;
| date = 2023/02/28&lt;br /&gt;
| message =&lt;br /&gt;
We are currently investigating a filesystem issue that is causing filesystem slowdowns across ARC.&lt;br /&gt;
&lt;br /&gt;
We will update you with more information as it becomes available.&lt;br /&gt;
&lt;br /&gt;
Thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Filesystem Issues&lt;br /&gt;
| date = 2023/03/1&lt;br /&gt;
| message =&lt;br /&gt;
We are still investigating a filesystem issue that is causing filesystem slowdowns across ARC. Some jobs on ARC have been paused to help us find the root cause of the slowdowns.&lt;br /&gt;
&lt;br /&gt;
We will update you with more information as it becomes available.&lt;br /&gt;
&lt;br /&gt;
Thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = ARC Login node reboot&lt;br /&gt;
| date = 2023/03/2&lt;br /&gt;
| message =&lt;br /&gt;
The ARC login node will be rebooted this afternoon for emergency maintenance. This downtime is needed to help mitigate the filesystem slowdowns experienced on the login node. Jobs will continue running and scheduling during this time.&lt;br /&gt;
&lt;br /&gt;
All logins to the ARC login node will be terminated at 3:00PM and will remain unavailable until 4:00PM.&lt;br /&gt;
&lt;br /&gt;
We apologize for the inconvenience and thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = ⚠️ Filesystem Issues&lt;br /&gt;
| date = 2023/03/2&lt;br /&gt;
| message =&lt;br /&gt;
We are still investigating a filesystem issue that is causing filesystem slowdowns on specific nodes in our MSRDC location.&lt;br /&gt;
&lt;br /&gt;
We will update you with more information as it becomes available.&lt;br /&gt;
&lt;br /&gt;
We apologize for the inconvenience and thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Filesystem Issues Resolved&lt;br /&gt;
| date = 2023/03/10&lt;br /&gt;
| message =&lt;br /&gt;
We have upgraded the filesystem routers in our MSRDC location to address the performance issues.&lt;br /&gt;
&lt;br /&gt;
Please let us know if you experience any issues with the filesystem performance.&lt;br /&gt;
&lt;br /&gt;
Thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Open OnDemand reboot&lt;br /&gt;
| date = 2023/05/01&lt;br /&gt;
| message =&lt;br /&gt;
On May 1, 2023, the ARC Open OnDemand node will be rebooted between 5PM and 6PM. Expected downtime will be approximately 15 minutes.&lt;br /&gt;
&lt;br /&gt;
If you encounter any system issues, do not hesitate to let us know.&lt;br /&gt;
&lt;br /&gt;
Thank you for your cooperation.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Apptainer (Singularity) on ARC Login Node&lt;br /&gt;
| date = 2023/06/22&lt;br /&gt;
| message =&lt;br /&gt;
Apptainer (Singularity) containers may fail with an error when run on the&lt;br /&gt;
ARC login node. If Apptainer complains that a system administrator needs to&lt;br /&gt;
enable user namespaces, simply run your containers inside a job.&lt;br /&gt;
&lt;br /&gt;
This is a temporary measure due to a security vulnerability that will be&lt;br /&gt;
patched soon.&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Lattice, Single, cpu2013 partition changes&lt;br /&gt;
| date = 2023/07/13&lt;br /&gt;
| message =&lt;br /&gt;
The Lattice, Single, and cpu2013 partitions have all been decommissioned. The Single&lt;br /&gt;
partition will be replaced by the nodes formerly in the cpu2013 partition but&lt;br /&gt;
will still be called single.&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Open OnDemand reboot&lt;br /&gt;
| date = 2023/10/17&lt;br /&gt;
| message =&lt;br /&gt;
Open OnDemand will be rebooted on October 17, 2023 for an update. It will be down for up to 30 minutes.&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Storage Upgrade MARC/ARC cluster&lt;br /&gt;
| date = 2023/10/23&lt;br /&gt;
| message =&lt;br /&gt;
We will be performing storage upgrades on the MARC/ARC cluster on &lt;br /&gt;
November 16 and 17, 2023. To facilitate this, we will be throttling &lt;br /&gt;
down the number of jobs on both clusters while the upgrades are &lt;br /&gt;
performed.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Systems Operating Normally&lt;br /&gt;
| date = 2024/05/3&lt;br /&gt;
| message =&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Power Interruption&lt;br /&gt;
| date = 2024/05/07&lt;br /&gt;
| message = ARC experienced a brief power outage around 11AM on May 7, 2024.&lt;br /&gt;
Most compute nodes have rebooted or are rebooting. Most jobs running at the time &lt;br /&gt;
were lost. ARC administrators are actively working on restarting compute &lt;br /&gt;
nodes. Sorry for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = GPU a100 Node Reservation&lt;br /&gt;
| date = 2024/06/03&lt;br /&gt;
| message = Job submissions targeted at the GPU a100 partition will be &lt;br /&gt;
affected by a temporary reservation on the nodes to accommodate the RCS&lt;br /&gt;
summer school class taking place on 2024/Jun/10. The reservation will end &lt;br /&gt;
shortly afterwards. Please submit your jobs normally and the scheduler will &lt;br /&gt;
start them as soon as the nodes are available. Sorry for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = GPU a100 Node Reservation Removed&lt;br /&gt;
| date = 2024/06/11&lt;br /&gt;
| message = GPU a100 Nodes in ARC have been returned to normal scheduling. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Notice of Upcoming Partial Outage&lt;br /&gt;
| date = 2024/08/23&lt;br /&gt;
| message = Several compute nodes from the ARC cluster will be unavailable &lt;br /&gt;
from Sept 23 to Sept 27 inclusive (subject to change). All compute nodes &lt;br /&gt;
in cpu2019, cpu2021/2, and gpu-v100, and most nodes from bigmem and gpu-a100, will be &lt;br /&gt;
affected. These nodes will return to service as soon as the work is complete.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update I&lt;br /&gt;
| date = 2024/09/25&lt;br /&gt;
| message = Due to hardware issues that are blocking our original maintenance window, most compute nodes that were taken offline on Monday have been brought back online today. An additional partial outage will occur starting next Tuesday for the same nodes.&lt;br /&gt;
&lt;br /&gt;
On Tuesday, October 1, 2024, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until Friday October 4, 2024.&lt;br /&gt;
&lt;br /&gt;
We apologise for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update II&lt;br /&gt;
| date = 2024/10/04&lt;br /&gt;
| message = The maintenance window will be extended until at least Monday, October 7, 2024 due to a power distribution issue in our renovated data centre.&lt;br /&gt;
&lt;br /&gt;
Currently, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until at least Monday, October 7, 2024. Affected WDF-Altis GPU nodes include: wdfgpu[1-2,6,8-12].&lt;br /&gt;
&lt;br /&gt;
We apologize for the extended downtime and will update you as soon as we have additional information from our operations team.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update III&lt;br /&gt;
| date = 2024/10/07&lt;br /&gt;
| message = Due to technical issues beyond our control the maintenance window will be extended until at least Tuesday, October 15, 2024.&lt;br /&gt;
&lt;br /&gt;
Currently, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until at least Tuesday, October 15, 2024. Affected WDF-Altis GPU nodes include: wdfgpu[1-2,6,8-12].&lt;br /&gt;
&lt;br /&gt;
We apologize for the extended downtime and will update you as soon as we have additional information from our operations team.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Normal Scheduling has resumed. &lt;br /&gt;
| date = 2024/10/08&lt;br /&gt;
| message = The ARC cluster has been successfully brought online and nodes are running jobs normally. We apologize for the extended downtime. &lt;br /&gt;
&lt;br /&gt;
Please reach out to support@hpc.ucalgary.ca with any issues or concerns. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Scheduled Maintenance &lt;br /&gt;
| date = 2024/12/11&lt;br /&gt;
| message = The ARC login node will be rebooted on Tuesday December 17 for scheduled maintenance. It will be down for a few minutes and return shortly. Job scheduling and jobs running on the cluster will not be affected. Thank you for understanding. &lt;br /&gt;
&lt;br /&gt;
Please reach out to support@hpc.ucalgary.ca with any issues or concerns. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Scheduled Maintenance and OS Update&lt;br /&gt;
| date = 2025/01/07&lt;br /&gt;
| message = The ARC cluster will be rebooted for OS updates on Monday January 20, 2025. Please make sure to save your work and log out before the reboot happens. Scheduling will be paused until the cluster is back, but queued jobs will remain in the queue and nodes will start scheduling when the cluster is ready. Thank you for understanding. &lt;br /&gt;
&lt;br /&gt;
Please reach out to support@hpc.ucalgary.ca with any issues or concerns. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Scheduled Maintenance and OS Update&lt;br /&gt;
| date = 2025/01/15&lt;br /&gt;
| message = The ARC cluster will be down starting 9AM Monday, January 20, 2025 through Wednesday, January 22, 2025. &lt;br /&gt;
&lt;br /&gt;
For the duration of the upgrade window:&lt;br /&gt;
* Scheduling will be paused and new jobs will be queued. Any queued jobs will start scheduling after the upgrade is complete.&lt;br /&gt;
* Access to files via the login node and arc-dtn will generally be available but intermittent. File transfers on the DTN node, including Globus file transfers, may be interrupted during this window.&lt;br /&gt;
&lt;br /&gt;
Please make sure to save your work prior to this outage window to avoid any loss of work.&lt;br /&gt;
&lt;br /&gt;
During this time the following changes will happen:&lt;br /&gt;
&lt;br /&gt;
1. Ethernet will replace the 11-year-old, unsupported InfiniBand interconnect on the following partitions:&lt;br /&gt;
* cpu2023 (temporary)&lt;br /&gt;
* Parallel&lt;br /&gt;
* Theia/Synergy/cpu2017-bf05&lt;br /&gt;
* Single&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;Any multi-node (MPI) jobs run on these partitions will have increased latency. If you run multi-node jobs, make sure to use a partition such as cpu2019, cpu2021, or cpu2022.&lt;br /&gt;
&lt;br /&gt;
2. A component of the NetApp filer will be replaced. Access to /bulk may be unavailable on Wednesday, January 22, 2025.&lt;br /&gt;
&lt;br /&gt;
3. The compute node operating system will be updated to Rocky Linux 8.10.&lt;br /&gt;
&lt;br /&gt;
4. The Slurm scheduling system will be upgraded.&lt;br /&gt;
&lt;br /&gt;
5. The Open OnDemand web portal will be upgraded.&lt;br /&gt;
&lt;br /&gt;
Please reach out to support@hpc.ucalgary.ca with any issues or concerns.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Navbox ARC}}&lt;br /&gt;
[[Category:ARC]]&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=CloudStack_User_Guide&amp;diff=3607</id>
		<title>CloudStack User Guide</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=CloudStack_User_Guide&amp;diff=3607"/>
		<updated>2024-10-16T22:33:55Z</updated>

		<summary type="html">&lt;p&gt;Lleung: /* Troubleshooting */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is a user&#039;s guide on using CloudStack provided by Research Computing Services.&lt;br /&gt;
&lt;br /&gt;
== Introduction==&lt;br /&gt;
Apache CloudStack is an Infrastructure as a Service (IaaS) platform that allows users to quickly spin up Linux and other non-Windows virtual machines. RCS provides this service to help researchers quickly set up and prototype research-related software on premises. It is ideal for short-term projects that involve level 1 and level 2 data. Please refer to our [[CloudStack End User Agreement]] for acceptable uses and requirements. &lt;br /&gt;
&lt;br /&gt;
=== Service limitations ===&lt;br /&gt;
Please be aware and understand each of the following limitations with CloudStack.&lt;br /&gt;
* &amp;lt;u&amp;gt;&#039;&#039;&#039;CloudStack is not appropriate for Windows-based&#039;&#039;&#039;&amp;lt;/u&amp;gt; workloads. &lt;br /&gt;
* &amp;lt;u&amp;gt;&#039;&#039;&#039;CloudStack has no VM backups&#039;&#039;&#039;&amp;lt;/u&amp;gt;. It is up to the user to perform backups of their work. It is not intended for mission critical workloads that require high availability.&lt;br /&gt;
* Internet facing applications on ports other than 80 and 443 may be limited and subject to IT security restrictions. If you have an application that needs to be accessible from outside campus, a security exception has to be submitted to IT security for approval.&lt;br /&gt;
Please contact us if you have any questions about these limitations or to discuss whether your workload will work within the CloudStack environment.&lt;br /&gt;
&lt;br /&gt;
=== Get started ===&lt;br /&gt;
To get started, request a new CloudStack account via [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c ServiceNow]. You may add multiple users to the CloudStack account on the same ServiceNow form.&lt;br /&gt;
&lt;br /&gt;
== Using your virtual machine ==&lt;br /&gt;
You will be able to run any non-Windows based virtual machine in the CloudStack infrastructure. While we cannot provide specific management advice on each and every operating system available, we can provide you with some suggestions on important considerations to be aware of.&lt;br /&gt;
&lt;br /&gt;
=== Educate yourself ===&lt;br /&gt;
All operating systems (OSs) have user groups, websites, wikis, or mailing lists somewhere on the internet. They can be a valuable resource. Most OS providers have online documentation that describes using their product. For example, Rocky Linux, used by RCS, has a [https://docs.rockylinux.org/ documentation site]. These are excellent resources and can help you understand how to manage your virtual machine.&lt;br /&gt;
=== Keep security in mind===&lt;br /&gt;
To help keep our network and infrastructure safe from cyber attacks, it is critical that your VMs are properly configured to reduce the number of ways that attackers could exploit them. Here are some common tasks that you can do to help harden your VM:&lt;br /&gt;
*Ensure that the only services running on your VM are the ones you must run. Each OS has a way of managing what services are running (sysinit, systemd, etc.). Please ensure that unnecessary services have been disabled (a minimal example sketch follows this list).&lt;br /&gt;
&lt;br /&gt;
*Disable or delete any unused accounts. Many OSs and applications ship with pre-configured accounts. Make sure they are either disabled or not allowed to log in.&lt;br /&gt;
&lt;br /&gt;
*All accounts should have strong [https://it.ucalgary.ca/it-security/passwords-do-i-have-change-them/ passwords].&lt;br /&gt;
&lt;br /&gt;
*Many OSs have the ability to update themselves automatically. If possible, please consider enabling this. Updates can also be configured to skip certain software if it would interfere with your research, but please be advised that doing so could place your system at risk.&lt;br /&gt;
&lt;br /&gt;
*If your VM must be exposed to the internet, consider using some kind of end-point security tool to help monitor for and block cyber attacks.&lt;br /&gt;
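The following is a minimal, hypothetical sketch of the service-hardening task above on a systemd-based VM (such as the Rocky Linux or Ubuntu templates); cups is only a placeholder service name.&lt;br /&gt;
&lt;br /&gt;
 # Stop a service that is not needed and prevent it from starting at boot&lt;br /&gt;
 sudo systemctl disable --now cups&lt;br /&gt;
 # Review which services remain enabled at boot&lt;br /&gt;
 systemctl list-unit-files --state=enabled&lt;br /&gt;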
&lt;br /&gt;
== Accessing CloudStack ==&lt;br /&gt;
===Accessing the CloudStack management console===&lt;br /&gt;
&lt;br /&gt;
The CloudStack management console is a web-based portal that allows you to view and manage your cloud infrastructure, including virtual machines, storage, and networks. Any modern web browser, including Chrome, Firefox, Edge, and Safari, is supported. &lt;br /&gt;
&lt;br /&gt;
Accessing the CloudStack management console is possible only from an IT-managed computer or through the IT General VPN when working on unmanaged machines (e.g. AirUC) or off campus (e.g. at home). Please review the IT [https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=52a169d6dbe5bc506ad32637059619cd knowledge base article on connecting to the General VPN] or contact IT support if you need assistance. &lt;br /&gt;
[[File:CloudStack VPN Connection.png|alt=CloudStack VPN Connection|none|thumb|CloudStack VPN Connection]]&lt;br /&gt;
=== Login to CloudStack===&lt;br /&gt;
&lt;br /&gt;
To log in to CloudStack, navigate to https://cloudstack.rcs.ucalgary.ca/. If this site fails to load, please make sure you are either on an IT-managed computer or connected to the General VPN.&lt;br /&gt;
&lt;br /&gt;
Sign in to CloudStack using the Single Sign-On option as shown in the image below. This method will require you to authenticate through our central authentication service using your University of Calgary IT credentials and will require multi-factor authentication. You must have multi-factor authentication set up either via your phone or with the Microsoft Authenticator app.&lt;br /&gt;
[[File:CloudStack Login Page.png|alt=CloudStack Login Page|none|thumb|CloudStack Login Page]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; Due to a bug with the UI, if the Single Sign-On option is disabled, please refresh the login page and try again. This issue should be addressed in our next update for CloudStack.&lt;br /&gt;
&lt;br /&gt;
=== CloudStack Dashboard===&lt;br /&gt;
&lt;br /&gt;
After logging in, you will be presented with your CloudStack management console. The dashboard shows you a general overview of your account&#039;s status.&lt;br /&gt;
[[File:CloudStack Dashboard.png|alt=CloudStack Dashboard|none|thumb|CloudStack Dashboard]]On the right-hand side of the dashboard, you will also see recent activity and events that were performed within your CloudStack account.&lt;br /&gt;
&lt;br /&gt;
If you wish to see your CloudStack account resource quota and allocation, navigate to: &amp;lt;code&amp;gt;Accounts -&amp;gt; Click on your account -&amp;gt; Resources&amp;lt;/code&amp;gt;. &lt;br /&gt;
[[File:CloudStack Resource Quota.png|alt=CloudStack Resource Quota|none|thumb|CloudStack Resource Quota]]&lt;br /&gt;
&lt;br /&gt;
== Working with virtual machines==&lt;br /&gt;
&lt;br /&gt;
CloudStack allows you to control the lifecycle of virtual machines within your cloud account. VMs may be started, stopped, rebooted, or destroyed within your management console.&lt;br /&gt;
&lt;br /&gt;
===Create a VM===&lt;br /&gt;
&lt;br /&gt;
To create a new VM, enter the CloudStack management console and navigate to:  &amp;lt;code&amp;gt;Compute -&amp;gt; Instances -&amp;gt; Add Instance&amp;lt;/code&amp;gt;[[File:CloudStack Instance Summary.png|alt=CloudStack Instance Summary|thumb|CloudStack Instance Summary|493x493px]]&lt;br /&gt;
&lt;br /&gt;
Virtual Machines require the following details:&lt;br /&gt;
&lt;br /&gt;
# &#039;&#039;&#039;Deployment zone&#039;&#039;&#039;. Your account will already be placed in the appropriate zone.&lt;br /&gt;
# &#039;&#039;&#039;Boot template or ISO&#039;&#039;&#039;. You may choose either a pre-created template or boot from a custom CD-ROM ISO file.&lt;br /&gt;
# &#039;&#039;&#039;Compute offering&#039;&#039;&#039;. You may select an appropriate size for your new VM. Resources will be counted against your account&#039;s quota.&lt;br /&gt;
# &#039;&#039;&#039;Data Disk&#039;&#039;&#039;. You may choose to add an additional virtual disk to your VM to store your data. Alternatively, if you wish to use a single virtual disk for your VM, you may choose to override the size of your root disk in step 2 and select &#039;No thanks&#039; in this step.&lt;br /&gt;
# &#039;&#039;&#039;Networks&#039;&#039;&#039;. You may choose one or more networks your VM should connect to. All CloudStack accounts come with a default network already created and ready to be used.&lt;br /&gt;
# &#039;&#039;&#039;SSH keypairs&#039;&#039;&#039;. For templates that support custom SSH key pairs, you may choose to use a custom SSH keypair to be installed as part of the deployment process.&lt;br /&gt;
# &#039;&#039;&#039;Advanced settings&#039;&#039;&#039;. For templates that support custom user-data (Cloud-Init), you may choose to enable the advanced settings and provide your own Cloud-Init user-data payload. More on this in the Cloud-Init Automation section below.&lt;br /&gt;
# &#039;&#039;&#039;Other VM details&#039;&#039;&#039;. You may give your new VM a friendly name and make it part of a group. Groups allow you to group related VMs together for better organization. You may change these details at a later time.&lt;br /&gt;
&lt;br /&gt;
When you are done, review the instance summary on the right hand side and then click on the &#039;Launch Virtual Machine&#039; button.&lt;br /&gt;
&lt;br /&gt;
=== Selecting your VM Operating system ===&lt;br /&gt;
Many operating systems provide several editions tailored to specific use cases; a desktop edition, for example, may not be appropriate when you need to run a database server. The OS provider will have guides on how to choose an edition.&lt;br /&gt;
&lt;br /&gt;
You may choose to install the operating system to your virtual machine using either pre-built templates or from scratch using an ISO image.&lt;br /&gt;
&lt;br /&gt;
====Install from a virtual machine template====&lt;br /&gt;
&lt;br /&gt;
We provide a Rocky Linux 8.5 and an Ubuntu Server 22.04 LTS template for your convenience. These templates are pre-built images with the operating system installed and ready for use. Our templates also support further automated setup configured using Cloud-Init configuration data that can be provided when deploying a new VM. Currently, we offer the following templates: &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Template&lt;br /&gt;
!Cloud-Init Support&lt;br /&gt;
!Password Support&lt;br /&gt;
!Default Username&lt;br /&gt;
|-&lt;br /&gt;
|Rocky Linux 8.5&lt;br /&gt;
|Yes&lt;br /&gt;
|Yes&lt;br /&gt;
|rocky&lt;br /&gt;
|-&lt;br /&gt;
|Ubuntu Server 22.04&lt;br /&gt;
|Yes&lt;br /&gt;
|Yes&lt;br /&gt;
|ubuntu&lt;br /&gt;
|}&lt;br /&gt;
Rocky Linux is an open-source Linux distribution that is binary-compatible with Red Hat Enterprise Linux; it is the distribution RCS recommends.&lt;br /&gt;
&lt;br /&gt;
For templates that support passwords, the generated password that appears after a VM is created is applied to the default username.&lt;br /&gt;
&lt;br /&gt;
Security note: All VM templates are configured with SSH password authentication enabled, so you should be able to SSH to your VM from another system connected to the same guest network. Do not expose port 22 unless required, and we highly recommend using key-based authentication.&lt;br /&gt;
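&lt;br /&gt;
As a rough sketch (not an official RCS procedure), a key-based SSH workflow from a campus-connected machine could look like the following. The key path and VM address are placeholders, and &amp;lt;code&amp;gt;rocky&amp;lt;/code&amp;gt; is simply the default user from the template table above; depending on your network setup, the address may be the VM&#039;s guest-network IP (when connecting from another VM on the same network) or a campus IP with a port forwarding for tcp/22.&lt;br /&gt;
{{Highlight|code=# On your workstation: generate an SSH key pair (run once)&lt;br /&gt;
ssh-keygen -t ed25519 -f ~/.ssh/cloudstack_vm&lt;br /&gt;
&lt;br /&gt;
# Register ~/.ssh/cloudstack_vm.pub as an SSH key pair in CloudStack (or add it&lt;br /&gt;
# to your VM via Cloud-Init), then connect using the private key:&lt;br /&gt;
ssh -i ~/.ssh/cloudstack_vm rocky@&amp;lt;vm-address&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Once key-based login works, you may disable password logins by setting&lt;br /&gt;
# PasswordAuthentication no in /etc/ssh/sshd_config and restarting sshd.}}&lt;br /&gt;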
&lt;br /&gt;
===== Virtual machine credentials =====&lt;br /&gt;
VM templates that have password support will have a randomly generated password set when the VM is first created or when a password reset is requested (available only when the VM is powered off). A randomly generated 6-character password is displayed as a notification in your CloudStack management console whenever a new password is set. &lt;br /&gt;
[[File:CloudStack VM Password.png|alt=CloudStack VM Password|none|thumb|CloudStack VM Password]]&lt;br /&gt;
This password is set on the default username for your template. For example, the Rocky Linux VM template will set this password for the &#039;&#039;&#039;rocky&#039;&#039;&#039; user account. You may become the super user by logging in as the &amp;lt;code&amp;gt;rocky&amp;lt;/code&amp;gt; user and then running &amp;lt;code&amp;gt;sudo su&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note: If you specify a custom Cloud-Init config that creates additional users or sets account passwords, the displayed password will be overridden and have no effect.&lt;br /&gt;
&lt;br /&gt;
==== Install from an ISO image====&lt;br /&gt;
We provide various ISO images for popular Linux distributions. You may select one of these ISO images instead of using a pre-built template when deploying a new virtual machine. We currently provide:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Distribution&lt;br /&gt;
!ISO&lt;br /&gt;
|-&lt;br /&gt;
|Ubuntu 20.04&lt;br /&gt;
|ubuntu-20.04.4-desktop-amd64.iso&lt;br /&gt;
&lt;br /&gt;
ubuntu-20.04.4-live-server-amd64.iso&lt;br /&gt;
|-&lt;br /&gt;
|Ubuntu 21.10&lt;br /&gt;
|ubuntu-21.10-desktop-amd64.iso&lt;br /&gt;
&lt;br /&gt;
ubuntu-21.10-live-server-amd64.iso&lt;br /&gt;
|-&lt;br /&gt;
|Ubuntu 22.04&lt;br /&gt;
|ubuntu-22.04-live-server-amd64.iso&lt;br /&gt;
|-&lt;br /&gt;
|Rocky Linux 8.5&lt;br /&gt;
|Rocky-8.5-x86_64-minimal.iso&lt;br /&gt;
|-&lt;br /&gt;
|Fedora 35&lt;br /&gt;
|Fedora-Workstation-Live-x86_64-35-1.2.iso&lt;br /&gt;
|}&lt;br /&gt;
You may add a custom ISO file to your CloudStack account either by directly uploading the ISO through the web console or by providing a URL to the ISO file on the internet.&lt;br /&gt;
&lt;br /&gt;
Please do not install Windows on our CloudStack infrastructure; running Windows-based systems on this infrastructure is against our user agreement. If you need a Windows VM, please contact us for alternative solutions.&lt;br /&gt;
&lt;br /&gt;
===== Register an ISO with a URL=====&lt;br /&gt;
[[File:CloudStack Download ISO.png|alt=CloudStack Download ISO|thumb|CloudStack Download ISO|190x190px]]&lt;br /&gt;
&lt;br /&gt;
To add a custom ISO file from the internet, enter the CloudStack management console and navigate to:  &amp;lt;code&amp;gt;Images -&amp;gt; ISOs -&amp;gt; Register ISO&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You may check the state of the ISO file by clicking on it. If the file downloads successfully, its ready state becomes ‘true’, and only then will the ISO appear in the selection list.&lt;br /&gt;
[[File:CloudStack ISO Ready.png|alt=CloudStack ISO Ready|none|thumb|172x172px|CloudStack ISO Ready]]&lt;br /&gt;
&lt;br /&gt;
=====Upload a custom ISO=====&lt;br /&gt;
&lt;br /&gt;
To upload an ISO file, enter the CloudStack management console and navigate to:  &amp;lt;code&amp;gt;Images -&amp;gt; ISOs -&amp;gt; Upload ISO from Local (icon)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:CloudStack Upload ISO.png|alt=CloudStack Upload ISO|none|thumb|CloudStack Upload ISO|217x217px]]&lt;br /&gt;
&lt;br /&gt;
=== Selecting a compute offering ===&lt;br /&gt;
Compute offerings are predefined virtual machine sizes. For academic and research virtual machines, we offer the following compute offerings.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Compute offering&lt;br /&gt;
!CPU cores&lt;br /&gt;
!Memory&lt;br /&gt;
![[CloudStack User Guide#Enabling High Availability (HA)|HA Available]]&lt;br /&gt;
!Notes&lt;br /&gt;
|-&lt;br /&gt;
|rcs.tiny&lt;br /&gt;
|1&lt;br /&gt;
|1 GB&lt;br /&gt;
|Yes&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot; |Generic QEMU CPU&lt;br /&gt;
Use this class of compute offerings for typical workloads that do not require specific CPU feature flags.&lt;br /&gt;
&lt;br /&gt;
Virtual machines will run on either newer or older RCS hardware, depending on availability.&lt;br /&gt;
|-&lt;br /&gt;
|rcs.small&lt;br /&gt;
|2&lt;br /&gt;
|2 GB&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|rcs.medium&lt;br /&gt;
|4&lt;br /&gt;
|4 GB&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|rcs.large&lt;br /&gt;
|4&lt;br /&gt;
|8 GB&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|rcs.2xlarge&lt;br /&gt;
|8&lt;br /&gt;
|16 GB&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|icelake.small&lt;br /&gt;
|2&lt;br /&gt;
|2 GB&lt;br /&gt;
|Yes&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Intel Ice Lake CPU feature set&lt;br /&gt;
Use this class of compute offerings if your workload is compute intensive or requires specific CPU feature flags such as AVX512.&lt;br /&gt;
&lt;br /&gt;
Virtual machines will run only on newer RCS hardware.&lt;br /&gt;
|-&lt;br /&gt;
|icelake.medium&lt;br /&gt;
|4&lt;br /&gt;
|4 GB&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|icelake.large&lt;br /&gt;
|8&lt;br /&gt;
|8 GB&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|icelake.2xlarge&lt;br /&gt;
|16&lt;br /&gt;
|16 GB&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|bigmem.medium&lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|No&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Generic QEMU CPU with large memory allocations&lt;br /&gt;
Use this class of compute offerings if your workload requires large amounts of memory.&lt;br /&gt;
&lt;br /&gt;
Virtual machines will run on older large memory RCS hardware.&lt;br /&gt;
&lt;br /&gt;
HA is not available for big memory virtual machines due to limited hardware.&lt;br /&gt;
|-&lt;br /&gt;
|bigmem.large&lt;br /&gt;
|4&lt;br /&gt;
|128 GB&lt;br /&gt;
|No&lt;br /&gt;
|-&lt;br /&gt;
|bigmem.2xlarge&lt;br /&gt;
|8&lt;br /&gt;
|256 GB&lt;br /&gt;
|No&lt;br /&gt;
|-&lt;br /&gt;
|bigmem.4xlarge&lt;br /&gt;
|16&lt;br /&gt;
|512 GB&lt;br /&gt;
|No&lt;br /&gt;
|}&lt;br /&gt;
If you need a customized compute offering for a specific workload, please contact us.&lt;br /&gt;
&lt;br /&gt;
===Connecting to your VM console===&lt;br /&gt;
The CloudStack management console has a KVM (keyboard, video, mouse) feature built-in, allowing you to remotely connect to and interact with your virtual machine. To connect to your virtual machine&#039;s console, navigate to: &amp;lt;code&amp;gt;Compute -&amp;gt; Instances -&amp;gt; Your Instance -&amp;gt; View console&amp;lt;/code&amp;gt;.&lt;br /&gt;
[[File:CloudStack View Console.png|alt=CloudStack View Console|none|thumb|CloudStack View Console]]&lt;br /&gt;
&lt;br /&gt;
=== Expanding a VM disk ===&lt;br /&gt;
[[File:CloudStack Expand Volume.png|alt=CloudStack Expand Volume|thumb|CloudStack Expand Volume]]&lt;br /&gt;
Virtual machine disks can be expanded after they are created within CloudStack. However, you will need to expand the partitions and filesystems manually.&lt;br /&gt;
&lt;br /&gt;
To grow an existing disk:&lt;br /&gt;
&lt;br /&gt;
# Go into your VM details page and click on ‘Volumes’.&lt;br /&gt;
# Click on the volume you wish to expand.&lt;br /&gt;
# Click on the ‘Resize Volume’ icon in the top right.&lt;br /&gt;
Once the volume has been expanded, you should be able to verify that the disk has grown with &amp;lt;code&amp;gt;lsblk&amp;lt;/code&amp;gt;. There should also be some messages from the kernel when this occurs. However, you will still need to expand any partitions, volumes, and filesystems on your system manually.&lt;br /&gt;
&lt;br /&gt;
To expand your partition, use the &amp;lt;code&amp;gt;growpart&amp;lt;/code&amp;gt; command followed by your disk device and partition number, e.g. &amp;lt;code&amp;gt;/usr/bin/growpart /dev/vda 3&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For LVM volume sets, you can expand the volume using the &amp;lt;code&amp;gt;pvresize&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;lvresize&amp;lt;/code&amp;gt; commands:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/sbin/pvresize -y -q /dev/vda3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/sbin/lvresize -y -q -r -l +100%FREE /dev/mapper/&amp;lt;volume-name&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To expand your filesystem:&lt;br /&gt;
&lt;br /&gt;
* XFS: &amp;lt;code&amp;gt;/usr/sbin/xfs_growfs &amp;lt;volume&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
* EXT: &amp;lt;code&amp;gt;resize2fs &amp;lt;volume&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
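&lt;br /&gt;
Putting these together, a minimal sketch of the full sequence might look like the following. It assumes the same layout as the Rocky Linux Cloud-Init examples later on this page: the root disk is &amp;lt;code&amp;gt;/dev/vda&amp;lt;/code&amp;gt;, partition 3 holds an LVM physical volume, and the root logical volume uses XFS. Adjust the device, partition number, and volume names for your own VM.&lt;br /&gt;
{{Highlight|code=# Run as root after resizing the volume in CloudStack&lt;br /&gt;
lsblk                                                     # confirm the virtual disk has grown&lt;br /&gt;
/usr/bin/growpart /dev/vda 3                              # grow the partition&lt;br /&gt;
/usr/sbin/pvresize -y -q /dev/vda3                        # grow the LVM physical volume&lt;br /&gt;
/usr/sbin/lvresize -y -q -l +100%FREE /dev/mapper/*root   # grow the root logical volume&lt;br /&gt;
/usr/sbin/xfs_growfs /                                    # grow the XFS filesystem}}&lt;br /&gt;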
&lt;br /&gt;
=== Destroying a VM ===&lt;br /&gt;
If you need to delete a VM, click on the red garbage bin icon in the VM instance page. All deletions are irreversible, so please make sure you have a copy of any data you need before proceeding.&lt;br /&gt;
[[File:CloudStack Delete VM.png|alt=CloudStack Delete VM|none|thumb|CloudStack Delete VM]]&lt;br /&gt;
The VM root volume can be deleted immediately by enabling the &#039;Expunge&#039; option in the dialog box. If left disabled, the VM root volume will linger for a day before it is deleted by the system. You may wish to expunge a volume if you are running low on space or volume quota.&lt;br /&gt;
&lt;br /&gt;
=== Enabling High Availability (HA) ===&lt;br /&gt;
Virtual machines using one of the rcs or icelake compute offerings may be configured with high availability (HA) enabled. HA-enabled virtual machines will be restarted automatically if the underlying hardware crashes or becomes unavailable. This should be enabled for critical, always-on VMs in your infrastructure, such as production servers.&lt;br /&gt;
&lt;br /&gt;
The bigmem compute offerings do not offer HA because only a limited number of big-memory hosts are available for CloudStack.&lt;br /&gt;
&lt;br /&gt;
You may enable HA on a virtual machine by editing the virtual machine and toggling the &#039;HA enabled&#039; option to the on state.&lt;br /&gt;
== Virtual machine networking ==&lt;br /&gt;
By default, all CloudStack accounts come with a default virtual machine network that can be used for most general use cases. For more complex network setups, users can create additional guest networks that virtual machines can connect to. For example, you may wish to create an internal private network only for database traffic between your database server and web server.&lt;br /&gt;
&lt;br /&gt;
A virtual machine network connects to the internet and the rest of campus through a virtual private cloud (VPC). Each VPC has a group of publicly accessible IP addresses as well as network policies (ACLs) that control what traffic is accepted.&lt;br /&gt;
&lt;br /&gt;
The following diagram shows how a guest network connects to the internet and campus network.&lt;br /&gt;
[[File:CloudStack Guest Networking.png|alt=CloudStack Guest Networking|none|thumb|CloudStack Guest Networking]]&lt;br /&gt;
To expose a virtual machine&#039;s services to campus or the internet, the appropriate port forwardings and ACL changes must be created on the VPC containing the guest network. More on this will be discussed in the next section.&lt;br /&gt;
&lt;br /&gt;
Having multiple guest networks allows for more advanced network setups but is not required. We recommend using a single flat network for most workloads. &lt;br /&gt;
&lt;br /&gt;
By default, all CloudStack accounts come with a default VPC and guest network set up with a NAT IP assigned. This default network is appropriate for most use cases.&lt;br /&gt;
&lt;br /&gt;
=== IP addresses ===&lt;br /&gt;
Due to the design decisions made during the setup of the CloudStack platform, only internal 10.44.12X.X IPs can be assigned to your VPC. These IP addresses are accessible from the university campus network. However, there is a special section of IP addresses that can be accessed from the internet.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!IP address range&lt;br /&gt;
!Accessible from&lt;br /&gt;
!Internet IP mapping&lt;br /&gt;
|-&lt;br /&gt;
|10.44.120.3-128&lt;br /&gt;
|Campus, Internet&lt;br /&gt;
|10.44.120.X maps to 136.159.140.X (ports 80 and 443 only)&lt;br /&gt;
|-&lt;br /&gt;
|10.44.120.129-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|10.44.121.0-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|10.44.122.0-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|10.44.123.0-255&lt;br /&gt;
|Campus only&lt;br /&gt;
|N/A&lt;br /&gt;
|}&lt;br /&gt;
If you need a service exposed to the internet, please request a public IP address using our [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c ServiceNow request form]. Additionally, if your service does not use port 80 or 443, you must also submit a firewall change request to allow the additional port through. &lt;br /&gt;
&lt;br /&gt;
=== Exposing a network service to campus ===&lt;br /&gt;
To make a virtual machine visible to the campus network, you must first set up a port forwarding from a campus IP address to your virtual machine and allow the traffic in your network ACL.&lt;br /&gt;
&lt;br /&gt;
To create a port forwarding, navigate to &amp;lt;code&amp;gt;Network -&amp;gt; VPC -&amp;gt; Select your VPC -&amp;gt; Public IP Addresses&amp;lt;/code&amp;gt;. If you do not have any available IP addresses, click on &#039;Acquire New IP&#039; and select an available IP address. Click on the IP address you wish to use for the port forwarding and then navigate to the &#039;Port Forwarding&#039; tab. Enter the private port range, the public port range, the protocol, and select the target VM. &lt;br /&gt;
&lt;br /&gt;
For example, to port forward only HTTP (tcp/80) traffic, you would enter the following:&lt;br /&gt;
[[File:CloudStack Port Forwarding.png|alt=CloudStack Port Forwarding|none|thumb|CloudStack Port Forwarding]]To update the ACL, navigate to &amp;lt;code&amp;gt;Network -&amp;gt; VPC -&amp;gt; Select your VPC -&amp;gt; Network ACL lists -&amp;gt; Select your ACL&amp;lt;/code&amp;gt;. Alternatively, navigate to your guest network and click on the associated network ACL. Under the &#039;ACL list rules&#039;, you may add or remove rules to allow or deny network access. By default, the ACLs do not allow any incoming traffic except those explicitly listed. If you have non-standard ports being forwarded (anything other than 80 or 443), you must add an ACL rule to allow these ports for them to be accessible.&lt;br /&gt;
&lt;br /&gt;
Once the port forwarding is created and the appropriate ACL added, you should be able to access the service from on campus. If for some reason access to your service does not work, there may be a firewall restriction on IT&#039;s network. In such circumstances, please contact us for assistance.&lt;br /&gt;
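&lt;br /&gt;
If you prefer to script this rather than use the web console, the same port forwarding can be created through the CloudStack API, for example with CloudMonkey (see the Infrastructure tools section below). The following is only a sketch: the UUIDs are placeholders taken from the list commands, and you should confirm the exact parameter names against the CloudStack API documentation.&lt;br /&gt;
{{Highlight|code=# Find the public IP and VM IDs, then forward tcp/80 to the VM&lt;br /&gt;
cmk list publicipaddresses filter=id,ipaddress&lt;br /&gt;
cmk list virtualmachines filter=id,name&lt;br /&gt;
cmk create portforwardingrule ipaddressid=&amp;lt;ip-uuid&amp;gt; virtualmachineid=&amp;lt;vm-uuid&amp;gt; protocol=tcp publicport=80 privateport=80}}&lt;br /&gt;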
&lt;br /&gt;
=== Exposing a network to the internet ===&lt;br /&gt;
Exposing a service to the internet is the same as exposing it to campus. However, you must create the port forwarding on an IP address that maps to an internet IP address, as outlined in the IP address table above. If your account does not have one of these IP addresses available, please request one using the [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=e3c1d6e91be48554cca5ecefbd4bcb6c ServiceNow request form].&lt;br /&gt;
&lt;br /&gt;
By default, only ports 80 and 443 are allowed through the Internet IP address. For all other ports, please [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=47cd16d113153a00b5b4ff82e144b0bf create a firewall rule change request in ServiceNow].&lt;br /&gt;
&lt;br /&gt;
== Cloud-Init Automation ==&lt;br /&gt;
&lt;br /&gt;
=== Ubuntu ===&lt;br /&gt;
The following Cloud-Init configs apply to Ubuntu VM templates.&lt;br /&gt;
&lt;br /&gt;
==== Ubuntu desktop ====&lt;br /&gt;
Use the following Cloud-Init config with the Ubuntu Server template to set up an Ubuntu desktop environment. The setup step takes up to 15 minutes to complete, and you should see a login screen when the setup finishes. &lt;br /&gt;
&lt;br /&gt;
Adjust the root and user passwords as desired. Below, the test user password is set to blank, allowing you to log in to GNOME without a password.&lt;br /&gt;
{{Highlight|code=#cloud-config&lt;br /&gt;
disable_root: false&lt;br /&gt;
&lt;br /&gt;
write_files:&lt;br /&gt;
  - path: /usr/bin/expand_lvm_root&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      /usr/bin/growpart /dev/vda 3&lt;br /&gt;
      /usr/sbin/pvresize -y -q /dev/vda3&lt;br /&gt;
      /usr/sbin/lvresize -y -q -r -l +100%FREE /dev/mapper/ubuntu*&lt;br /&gt;
      /usr/sbin/resize2fs /dev/mapper/ubuntu*&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
&lt;br /&gt;
users:&lt;br /&gt;
  - name: user&lt;br /&gt;
    lock_passwd: false&lt;br /&gt;
    inactive: false&lt;br /&gt;
    gecos: Test User&lt;br /&gt;
    primary_group: user&lt;br /&gt;
    groups: admin&lt;br /&gt;
    passwd: $1$ADUODeAy$eCJ1lPSxhSGmSvrmWxjLC1&lt;br /&gt;
    &lt;br /&gt;
# Install a graphical desktop&lt;br /&gt;
runcmd:&lt;br /&gt;
  - /usr/bin/expand_lvm_root&lt;br /&gt;
  - DEBIAN_FRONTEND=noninteractive apt -y update&lt;br /&gt;
  - DEBIAN_FRONTEND=noninteractive apt -y upgrade&lt;br /&gt;
  - DEBIAN_FRONTEND=noninteractive apt -y install tasksel&lt;br /&gt;
  - tasksel install gnome-desktop&lt;br /&gt;
  - systemctl set-default graphical.target&lt;br /&gt;
  - systemctl isolate graphical.target|style=max-height: 300px;overflow:scroll;}}&lt;br /&gt;
&lt;br /&gt;
==== Ubuntu with UC Authentication ====&lt;br /&gt;
Use the following Cloud-Init config to allow UC-based authentication. Local accounts with the same username as the IT account may use the IT credential to log in. {{Highlight|code=#cloud-config&lt;br /&gt;
disable_root: false&lt;br /&gt;
&lt;br /&gt;
write_files:&lt;br /&gt;
  - path: /usr/bin/expand_lvm_root&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      /usr/bin/growpart /dev/vda 3&lt;br /&gt;
      /usr/sbin/pvresize -y -q /dev/vda3&lt;br /&gt;
      /usr/sbin/lvresize -y -q -r -l +100%FREE /dev/mapper/ubuntu*&lt;br /&gt;
      /usr/sbin/resize2fs /dev/mapper/ubuntu*&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
&lt;br /&gt;
  - path: /etc/sssd/sssd.conf&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      [sssd]&lt;br /&gt;
      config_file_version = 2&lt;br /&gt;
      services = nss, pam, ifp&lt;br /&gt;
      domains = uc.ucalgary.ca&lt;br /&gt;
      [domain/uc.ucalgary.ca]&lt;br /&gt;
      id_provider = files&lt;br /&gt;
      debug_level = 5&lt;br /&gt;
      auth_provider = krb5&lt;br /&gt;
      chpass_provider = krb5&lt;br /&gt;
      krb5_realm = UC.UCALGARY.CA&lt;br /&gt;
      krb5_server = UC.UCALGARY.CA:88&lt;br /&gt;
      krb5_validate = false&lt;br /&gt;
&lt;br /&gt;
  - path: /etc/krb5.conf&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      [logging]&lt;br /&gt;
       default = FILE:/var/log/krb5libs.log&lt;br /&gt;
       kdc = FILE:/var/log/krb5kdc.log&lt;br /&gt;
       admin_server = FILE:/var/log/kadmind.log&lt;br /&gt;
      [libdefaults]&lt;br /&gt;
       default_realm = UC.UCALGARY.CA&lt;br /&gt;
       dns_lookup_realm = false&lt;br /&gt;
       dns_lookup_kdc = false&lt;br /&gt;
       ticket_lifetime = 24h&lt;br /&gt;
       renew_lifetime = 7d&lt;br /&gt;
       forwardable = true&lt;br /&gt;
      [realms]&lt;br /&gt;
       UC.UCALGARY.CA = {&lt;br /&gt;
        kdc = uc.ucalgary.ca&lt;br /&gt;
       }&lt;br /&gt;
      [domain_realm]&lt;br /&gt;
       uc.ucalgary.ca = UC.UCALGARY.CA&lt;br /&gt;
       .uc.ucalgary.ca = UC.UCALGARY.CA&lt;br /&gt;
&lt;br /&gt;
users:&lt;br /&gt;
  - name: myuofc.username&lt;br /&gt;
    lock_passwd: false&lt;br /&gt;
    inactive: false&lt;br /&gt;
    gecos: UofC User&lt;br /&gt;
    primary_group: user&lt;br /&gt;
    groups: admin&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Ensure VM has the largest / possible&lt;br /&gt;
# Install sss&lt;br /&gt;
runcmd:&lt;br /&gt;
  - /usr/bin/expand_lvm_root&lt;br /&gt;
  - DEBIAN_FRONTEND=noninteractive apt -y update&lt;br /&gt;
  - DEBIAN_FRONTEND=noninteractive apt -y upgrade&lt;br /&gt;
  - DEBIAN_FRONTEND=noninteractive apt -y install sssd-common sssd-krb5 sssd-krb5-common krb5-user sssd-dbus&lt;br /&gt;
  - chmod 600 /etc/sssd/sssd.conf&lt;br /&gt;
  - systemctl restart sssd&lt;br /&gt;
  - systemctl enable sssd&lt;br /&gt;
  - pam-auth-update --enable sss|style=max-height: 300px;overflow:scroll;}}&lt;br /&gt;
&lt;br /&gt;
=== Rocky Linux ===&lt;br /&gt;
The following Cloud-Init configs apply to Rocky Linux templates.&lt;br /&gt;
&lt;br /&gt;
==== Rocky Linux Desktop ====&lt;br /&gt;
Use the following Cloud-Init config with the Rocky Linux template to set up a Rocky Linux desktop environment. The setup step takes up to 10 minutes to complete, and you should see a login screen when the setup finishes. &lt;br /&gt;
&lt;br /&gt;
Adjust the root and user passwords as desired. Below, the test user password is set to blank, allowing you to log in to GNOME without a password.&lt;br /&gt;
{{Highlight|code=#cloud-config&lt;br /&gt;
disable_root: false&lt;br /&gt;
&lt;br /&gt;
users:&lt;br /&gt;
  - name: user&lt;br /&gt;
    lock_passwd: false&lt;br /&gt;
    inactive: false&lt;br /&gt;
    gecos: Test User&lt;br /&gt;
    primary_group: user&lt;br /&gt;
    groups: wheel&lt;br /&gt;
    passwd: $1$ADUODeAy$eCJ1lPSxhSGmSvrmWxjLC1&lt;br /&gt;
    &lt;br /&gt;
# Install a graphical desktop&lt;br /&gt;
runcmd:&lt;br /&gt;
  - yum -y install &amp;quot;@Workstation&amp;quot;&lt;br /&gt;
  - systemctl set-default graphical.target&lt;br /&gt;
  - systemctl isolate graphical.target}}&lt;br /&gt;
&lt;br /&gt;
==== Rocky Linux Docker host ====&lt;br /&gt;
Use the following Cloud-Init config with the Rocky Linux template to set up a new Docker host. This server can then be used to run Docker containers. Also included are:&lt;br /&gt;
&lt;br /&gt;
# The docker-compose utility to help deploy container stacks more easily&lt;br /&gt;
# A helper script to expand the &amp;lt;code&amp;gt;/var&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; filesystems on first startup based on the available space in the ROOT volume. &lt;br /&gt;
&lt;br /&gt;
Use the CloudStack generated password with the &#039;&amp;lt;code&amp;gt;rocky&amp;lt;/code&amp;gt;&#039; default user account to log in.{{Highlight|code=#cloud-config&lt;br /&gt;
&lt;br /&gt;
write_files:&lt;br /&gt;
  - path: /usr/bin/expand_lvm_root&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      /usr/bin/growpart /dev/vda 3&lt;br /&gt;
      /usr/sbin/pvresize -y -q /dev/vda3&lt;br /&gt;
      /usr/sbin/lvresize -y -q -r -l +50%FREE /dev/mapper/*root&lt;br /&gt;
      /usr/sbin/lvresize -y -q -r -l +100%FREE /dev/mapper/*var&lt;br /&gt;
      /usr/sbin/xfs_growfs /&lt;br /&gt;
      /usr/sbin/xfs_growfs /var&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
&lt;br /&gt;
  - path: /usr/bin/setup_docker&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      yum install -y yum-utils&lt;br /&gt;
      yum-config-manager \&lt;br /&gt;
         --add-repo \&lt;br /&gt;
         https://download.docker.com/linux/centos/docker-ce.repo&lt;br /&gt;
      yum install -y docker-ce docker-ce-cli containerd.io&lt;br /&gt;
      systemctl start docker&lt;br /&gt;
      systemctl enable docker&lt;br /&gt;
      &lt;br /&gt;
      curl -L &amp;quot;https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)&amp;quot; -o /usr/local/bin/docker-compose&lt;br /&gt;
      chmod +x /usr/local/bin/docker-compose&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
&lt;br /&gt;
  - path: /root/docker-compose.yml&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      version: &#039;3.3&#039;&lt;br /&gt;
      services:&lt;br /&gt;
        web:&lt;br /&gt;
          image: php:7.4-apache&lt;br /&gt;
          restart: always&lt;br /&gt;
          user: &amp;quot;0:0&amp;quot;&lt;br /&gt;
          volumes:&lt;br /&gt;
            - /var/www/html:/var/www/html&lt;br /&gt;
          ports:&lt;br /&gt;
            - &amp;quot;80:80&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  - path: /var/www/html/index.php&lt;br /&gt;
    permissions: &#039;0644&#039;&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      &amp;lt;h1&amp;gt;Hello there!&amp;lt;/h1&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;I see you from &amp;lt;?php echo $_SERVER[&#039;REMOTE_ADDR&#039;]; ?&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;&amp;lt;nowiki&amp;gt;pre&amp;lt;/nowiki&amp;gt;&amp;gt;&amp;lt;?php print_r($_SERVER); ?&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
    &lt;br /&gt;
# Ensure VM has the largest / possible&lt;br /&gt;
runcmd:&lt;br /&gt;
  - /usr/bin/expand_lvm_root&lt;br /&gt;
  - /usr/bin/setup_docker&lt;br /&gt;
  - cd /root; docker-compose up -d|style=max-height: 300px;overflow:scroll;}}&lt;br /&gt;
&lt;br /&gt;
==== Rocky Linux with UC Authentication ====&lt;br /&gt;
Use the following Cloud-Init config to allow UC-based authentication. Local accounts with the same username as the IT account may use the IT credential to log in.&lt;br /&gt;
{{Highlight|code=#cloud-config&lt;br /&gt;
&lt;br /&gt;
write_files:&lt;br /&gt;
  - path: /usr/bin/setup_uc_auth&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      yum install -y sssd sssd-dbus sssd-krb5 krb5-workstation authselect-compat&lt;br /&gt;
      &lt;br /&gt;
      cat &amp;lt;&amp;lt;EOF &amp;gt; /etc/sssd/sssd.conf&lt;br /&gt;
      [sssd]&lt;br /&gt;
      config_file_version = 2&lt;br /&gt;
      services = nss, pam, ifp&lt;br /&gt;
      domains = uc.ucalgary.ca&lt;br /&gt;
      &lt;br /&gt;
      [domain/uc.ucalgary.ca]&lt;br /&gt;
      id_provider = files&lt;br /&gt;
      debug_level = 5&lt;br /&gt;
      auth_provider = krb5&lt;br /&gt;
      chpass_provider = krb5&lt;br /&gt;
      &lt;br /&gt;
      krb5_realm = UC.UCALGARY.CA&lt;br /&gt;
      krb5_server = UC.UCALGARY.CA:88&lt;br /&gt;
      krb5_validate = false&lt;br /&gt;
      EOF&lt;br /&gt;
      chmod 600 /etc/sssd/sssd.conf&lt;br /&gt;
      &lt;br /&gt;
      cat &amp;lt;&amp;lt;EOF &amp;gt; /etc/krb5.conf&lt;br /&gt;
      [logging]&lt;br /&gt;
       default = FILE:/var/log/krb5libs.log&lt;br /&gt;
       kdc = FILE:/var/log/krb5kdc.log&lt;br /&gt;
       admin_server = FILE:/var/log/kadmind.log&lt;br /&gt;
      &lt;br /&gt;
      [libdefaults]&lt;br /&gt;
       default_realm = UC.UCALGARY.CA&lt;br /&gt;
       dns_lookup_realm = false&lt;br /&gt;
       dns_lookup_kdc = false&lt;br /&gt;
       ticket_lifetime = 24h&lt;br /&gt;
       renew_lifetime = 7d&lt;br /&gt;
       forwardable = true&lt;br /&gt;
      &lt;br /&gt;
      [realms]&lt;br /&gt;
       UC.UCALGARY.CA = {&lt;br /&gt;
        kdc = itsodcsrv14.uc.ucalgary.ca&lt;br /&gt;
       }&lt;br /&gt;
      &lt;br /&gt;
      [domain_realm]&lt;br /&gt;
       uc.ucalgary.ca = UC.UCALGARY.CA&lt;br /&gt;
       .uc.ucalgary.ca = UC.UCALGARY.CA&lt;br /&gt;
      EOF&lt;br /&gt;
      &lt;br /&gt;
      mkdir -p /etc/authselect/custom/rcs&lt;br /&gt;
      cd /etc/authselect/custom/rcs&lt;br /&gt;
      echo H4sIAMYsbGIAA+0ca3PbuDGf+StQ34dLMpEtybLdUevO6BKl56mTuJGv7Uwmo4FISMKJBHQAaEfNub+9uwBJkXrLshxnjjuTyCKJfWF3sQsuFPhS9CtB79keoQpw2mjYT4DZz1r1uPGs1qhX6yf12vFx/Vm1Vqs1jp+R6j6ZSiHWhipCnikpzarn1t3/TuErj8bhhBzccjOs6Igq41MVHBDen71WUey3mCsWHNx5W4wKpT+qSAGjI3lDQxj8SarB0UDIiB2FcsBFRfuKMfHZY4L2QpYbS2MzZMJwnxouxfnXefQHTaNi9nufhprdpQj6XAyYGisuzHIUuYcWIhlTrW/lRkxMNdO0OH5HdHdeXlDNjAGCuhJQ0IM4GjPFx0Om4PGjDM9n8pULP4wDtpkik78q1Lec/WifcMr8kSyE7fB/a9ssYf8Q2PiPc6/3RmNN/K826mez8f/07LSM/48BR4uD8dGaWOytGbc8BK8buSTu5odtFEyPitGxDH2LYXai9kFjjf8fn4HPT/3/GPy/fnZ6Uvr/Y8BXiP/gTHHOL/KZ0Z2HRpGmD2meszi5mIExjbpM3BxquRuOgIkJICE/kE7qsKQYGwjXRbSrfD2Xxe7EVp/yEOMGsjZWzOLSPASuCHJ83iCxwPtdwyN2XqtXqwvYSpHMMKPjfp/7HHFtyIydrWBnXeeFQkT4fak4ZImulwh134n3POr7Mk50sTUWGMy0RoE2hnmRAJHDgzLtwk1ewffmJq/gXZiJBf9iTSaHZGvTA0ZoGGumdsakY1AxC7q8j9qJeUD+SmqwPhAQi5kC7k8B69M4NOc9GhA7TutzOSLIB4g1EvJWnHPIFhT77HBbE9hNXZBmRNxYi0yTlHvhyexaA9cYvhDkGCMZDTfGMmITLiw7wMGNHLECuq2ZCjnIZnVUyeHZmis90YZFNhDtJJ0Mgl9lrxuNhpDwBVxZi4ioHp1Xq2dni0bNu0k2GPwkz8yn1GBqJDWj1FSWCFUwTDCxG+4DHUF8JUXgzBNNrws2u9sspB65k+4SW99s/RdgyNz4w0OsA/eUY6yr/05rtdn8r35S1n+PAjaSBU1rPGA4pA8pjCaJG6NbsS8zbuWDxmTk6jTMogZKxmOHYDsMdiAgEMxkOKYIcp69DEE60CUaMsLQ3twSRzbQBgnr2XprRtKBiCMOJFMWRaoJXcAxn5rCiG9XcRbK7T3RWOP/pyen+fqvgf5/XC3rv0eBb17eYTIbsJDaGs/+cV53dlGWjfsuG+O6zWh8qP43hYW1UQUQ7aqYHC9CYh4P6fx9eanU+3SGn6xiqBGX7CU5X656mKZ/i6uRv52T+rQa2QV3sWbaaQZdwkjwrZSQxqlBxqYiYrCQ0UHTfd4RoybdPlfadDHiz0+V5mYjI1ivmm4i9W6CuSSW9KW6BfdbwnS5tVFubSxmJvUL0oM4wkRXD2kgb70fcnojt0CKUCB3S5IrYG3EZ8pQqO1sZqkJ+JLig6H5C/CgMUaPmQjgsiA6jFW0247Hg0yg5WNXfVkkXdDR2GzKzmacPLH9pPr3sZ20TTQe3/4W05Abm+0UwzyxyuyiGLorRTgpULr3YgPOdFKrE+dT2y49dqMEg6eRo93YSZaIZfjKfbkcmnJf7jvbl3ssGEtt7Gv4PdJYt/93Um/M1P81iC9l/f8YsKFFWydAP7eeZuvVTX13uaf+6X9kEEQv8991/DJZnwvYswJrLhTYGEmtDSNqWKlNNCa5Jj3LayV55KDpvv+uh/IWc00W3G3q1DOUkqIdatVxQGG1nmJ8Yh6+Gj62W2/etfdLY43/n1VPT3P+XwP/rzaOy/3/R4G27boinU7njS1/XKIwu1P2nIauGHcZpU2MsQgKJy+8813B8zosZD62dBEz5JqMlcStc1egsRyDVMMD4GoyVhAuZJ/wAJk0E4+Kud09wHID95U+BAI4OrmgCYWIY3C46x8DpJJEVNABSwtBg8lbJMGtIU0B1qTiTC8iEjF/SAXXkca6YYgcXr5pXb0i/2Cqx5QEJSnyVjF2cdU6JBcmYwKQkfedDkGcV613kLUYpvoUxDISdzycoMlkIF0yDuPBwOqiR32oaYPktgfM+lIIYNMKAnGSj0PkvN9nCmNUWqY4vaE+3uHuFBcwoZGTg/Ygc3dK9oGzHoPJjoEE7pIaOyO/Iv4x6KjpDY0Z6+bREXyLFTvk8ggHHkF+E3jeT5N0BXhVMBaK+2EQ6CHr6isZOVqKgpgKZRXJw+6dCYRvvO+BMbj5D8hziK/sxj08yU8DIzecZlnai0NyjTY0pDcMlQZKRSGFz7weE6zPQQ2QkbJUVn/InFE5wtgNhSqDJLIPsR4+KxEYgpq4R+1UmGGsPcgeY9AlEXEE82yNiesRrB9MWY2iln92HL9CcSYyJoG0hdItFXamRoyNEzXEQqD1g7bhOQXG6A+5YK/sKJgPzz5a8A1tXQb0ghz9GuPWBNdTT0mWVFQGw50LNAox8PAlL4dJc5Nu/Usb/B+KphEIr9DYwwkBtvG2GTpsCWpHLDep1iOSjRKLrIfEaDLD+Ql1bhROUl98/eH924u///KxdX3x4b1XmQPPa8EMOyFBb6hoiUpDH0MEP9qpmspySK5CRjWozNJ3j3iB9OMIptOJazDzYATsiDiXscPBDwx6P9TQwB75ADGtIGIqF72Bxd0qGFFNnRPmFm2Y4dsEIQn7wrWNZM6+CjyCUP9qXVy2frpskw9XKHnrkrxtt65/+djuLFBCThuFzbFm08OEJAndqXfjHSSM9Q9oAk3SSBvYJsiHZn5s+A2zQ2eiGCIGTYDN4k2vWKXNUMP3pTDKhyKLudFACB/Nh0q7VKSLBOqHK4vEbgMQW28cJkJlr1jm6OQ5tLaYPYoeC1Y3cLaQzj2aCITPITUWUfY0RObxWCqYZXQTnEpczNiXcch9bq09CTFIhAs7eKGNzXI82zecSHAJV4lraia3GNlojhf7zgmeZsE8urQsTfBs8t7qkLyXRNogGjEKSxlMxkzjNKJ67opwNI90i+RF5rGJ+Ck/ufbP9VOSe9h6no3Pbmpe5hoTX6bIkxc0KxFjOIdnwO3FIGRFfHB9Fhe+7Cniq0NUgqUUTPG+qItVQ4L9jQvfEIEx9OfrAdC91pg+BLFCHRcNPA5kgqFlN5vxAkYg8I8sraFpUmNzMHxAxXYhBOsJAp762REz/lHSYpBTg8taEiKvhwzsL9laRgNOuUoDBeoEvPS/eaOe7tsVhQXixF0HeRWNmMEZltNNEc97c9GxEa1z1X598fbiNaY1/764fv0zedO6bv3U6qyObdMY9x4XagjCr0jalAQJmqE9cG1tM7bexAbedAkEJ1K3ihu4C84XJw+BA2PgqRQam9w62Jeof7vQ2jJP21wHNO47G0yWGyO95P2dRbuINK57OH14TQqWECbzhD0uYJWgmXMVmmdA1ReO0EHSTZMJvIDoDArXMjPF4DphtkCQtd1McWQNNVugmXbeTPFMm2q2QJR130zxZH0126BJGnByWNyVdUja/2m9u7rMGavnvST5
8sitQeibOP1zy4vnkbwVpR+QFJMsDpBixF9OIV2ykJJdaNkmayzxCkvsOo6yJX7KIKRn7TZpXXY+TNXwEnMIO8ya9PM/v/iudjhKWAUf2//85eJj+137/XVnXzTWnv85qc7s/+C3cv/nMeAdHWGfinKJc6F4xDwzq5Jc7ZemiqTDkpBVLLEwJEUzewuuqNgPrOjYOrjzKjPhFgVK6+ZXENRSyfWKRDutDXguAjbX0CWkYpeYA8ySsJ/Avp8k5+RaxewAUX2CO5/hGXc08aHk3ZuaFzVqFA6JJZrOVwNLdJ0UBOttbEPKjynztNMtkTe5sExW61E2yaCRLTsiqBxCKzOmhZDrzr+CXEUV7cot+Ji32k2z6S7GiE3w3brbRLUkYZDfH+AmwNp3lKup7gvW9e8VtYwXH0rT6yh/A01nlB9P2/kX8ZXZlHCJphc1AaTavg9dQuymJtN2oxp3nC3yQoAoRoSHkdfGaLuD55vsJUNC+zClvQc9F+li7mEehuwausvW/+IB//3kGOvyv2L//wn2/zfqtTL/ewy45+//zJ8azyciT+JQwZPpsk/61GynaTfiGleOrqBQza/+LaHkr2kG+YBHD8oO6D9IB/RTbsZ9ooe7n37751Zsfdv2z7L/c8ruU+z/dJaxz9Of6/K/WrXWaMzkf7Wzk7My/3sMuF/+9yQSvH2cGt1n4vhA8U1IyLzdC57mIIiaX+DfCD9dy4GmN0w17Q+lVfJXRrkvxdB4n3OvmSwBvv4snnpMhQt4IlkuCZ7LaZOx9m/cNIZQazttyL2O4+7ww0kbmQhZyNbyH8wqT+R+hydyN1LQol+DWFYCZ9zU98jNyjcDy1131u9S1la4dM6d8WjX1JXvw0154jmjUZ54XoulrPeXMlOeeC5PPJcnnssTz+WJ52+95VXueGXsPsUdrxJKKKGEEkoooYQSSiihhBJKKKGEEkr448D/AT019aAAeAAA {{!}} base64 -d {{!}} tar -xzpf -&lt;br /&gt;
      &lt;br /&gt;
      if [[ &amp;quot;`authselect current -r`&amp;quot; != &amp;quot;custom/rcs&amp;quot; ]] ; then&lt;br /&gt;
        authselect select custom/rcs --force&lt;br /&gt;
        systemctl restart sssd&lt;br /&gt;
      fi&lt;br /&gt;
      &lt;br /&gt;
      systemctl enable sssd&lt;br /&gt;
&lt;br /&gt;
runcmd:&lt;br /&gt;
  - /usr/bin/setup_uc_auth|style=max-height: 300px;overflow:scroll;}}&lt;br /&gt;
&lt;br /&gt;
==== Rocky Linux NFS server ====&lt;br /&gt;
Because there is no ability to share storage volumes among multiple VMs, a local NFS server can be useful if you need to share data between multiple VMs.{{Highlight|code=#cloud-config&lt;br /&gt;
&lt;br /&gt;
write_files:&lt;br /&gt;
  - path: /usr/bin/expand_lvm_root&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      /usr/bin/growpart /dev/vda 3&lt;br /&gt;
      /usr/sbin/pvresize -y -q /dev/vda3&lt;br /&gt;
      /usr/sbin/lvresize -y -q -r -l +50%FREE /dev/mapper/*root&lt;br /&gt;
      /usr/sbin/lvresize -y -q -r -l +100%FREE /dev/mapper/*var&lt;br /&gt;
      /usr/sbin/xfs_growfs /&lt;br /&gt;
      /usr/sbin/xfs_growfs /var&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
&lt;br /&gt;
  - path: /usr/bin/setup_nfs&lt;br /&gt;
    content: {{!}}&lt;br /&gt;
      #!/bin/bash&lt;br /&gt;
      yum install -y nfs-utils&lt;br /&gt;
&lt;br /&gt;
      mkdir /export&lt;br /&gt;
      if [ -b /dev/vdb ] ; then&lt;br /&gt;
        mkfs.xfs /dev/vdb&lt;br /&gt;
        echo &amp;quot;/dev/vdb  /export     xfs    defaults    1 2&amp;quot; &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
        mount -a&lt;br /&gt;
      fi&lt;br /&gt;
&lt;br /&gt;
      ip a {{!}} grep -w inet {{!}} awk &#039;{print $2}&#039; {{!}} while read subnet ; do&lt;br /&gt;
        echo &amp;quot;/export     $subnet(rw,no_subtree_check,no_root_squash,async)&amp;quot; &amp;gt;&amp;gt; /etc/exports&lt;br /&gt;
      done&lt;br /&gt;
&lt;br /&gt;
      systemctl start nfs-server&lt;br /&gt;
      systemctl enable nfs-server&lt;br /&gt;
     &lt;br /&gt;
      exportfs -ra&lt;br /&gt;
&lt;br /&gt;
    permissions: &#039;0700&#039;&lt;br /&gt;
&lt;br /&gt;
runcmd:&lt;br /&gt;
  - /usr/bin/expand_lvm_root&lt;br /&gt;
  - /usr/bin/setup_nfs|style=max-height: 300px;overflow:scroll;}}NFS clients connected to the same network as the NFS server can then mount &amp;lt;code&amp;gt;/export&amp;lt;/code&amp;gt; using a command similar to: &amp;lt;code&amp;gt;mount -t nfs nfs-server:/export /mnt&amp;lt;/code&amp;gt;.&lt;br /&gt;
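To make the client mount persistent across reboots, you could also add a line to the client&#039;s &amp;lt;code&amp;gt;/etc/fstab&amp;lt;/code&amp;gt;, for example &amp;lt;code&amp;gt;nfs-server:/export /mnt nfs defaults,_netdev 0 0&amp;lt;/code&amp;gt; (where &amp;lt;code&amp;gt;nfs-server&amp;lt;/code&amp;gt; is a placeholder for your NFS server&#039;s guest-network IP or hostname).&lt;br /&gt;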
&lt;br /&gt;
== Infrastructure tools ==&lt;br /&gt;
&lt;br /&gt;
=== Generating a CloudStack API key ===&lt;br /&gt;
You can generate a CloudStack API key to automate infrastructure deployment using Terraform or CloudMonkey. A new API key can be generated by navigating to your profile page (top right) and then clicking on the &#039;Generate keys&#039; button.&lt;br /&gt;
[[File:CloudStack API Key.png|alt=CloudStack API Key|none|thumb|CloudStack API Key]]&lt;br /&gt;
&lt;br /&gt;
=== CloudMonkey ===&lt;br /&gt;
CloudMonkey is a utility that makes it easier to interact with the CloudStack API. This tool may be used to help automate VM actions (such as start/stop/reboot), or infrastructure tasks (such as creating/destroying VMs, networks, or firewall rules). &lt;br /&gt;
&lt;br /&gt;
To get started with CloudMonkey, refer to the following resources:&lt;br /&gt;
&lt;br /&gt;
* Download from: &amp;lt;nowiki&amp;gt;https://github.com/apache/cloudstack-cloudmonkey/releases/tag/6.1.0&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
* Documentation at: &amp;lt;nowiki&amp;gt;https://cwiki.apache.org/confluence/display/CLOUDSTACK/CloudStack+cloudmonkey+CLI&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
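&lt;br /&gt;
As an illustrative sketch, a first-time CloudMonkey setup against this deployment might look like the following. The API endpoint path (commonly &amp;lt;code&amp;gt;/client/api&amp;lt;/code&amp;gt;) and the key values are assumptions; use the keys generated from your profile page as described above.&lt;br /&gt;
{{Highlight|code=# Point CloudMonkey at the CloudStack API endpoint and supply your keys&lt;br /&gt;
cmk set url https://cloudstack.rcs.ucalgary.ca/client/api&lt;br /&gt;
cmk set apikey YOUR_API_KEY&lt;br /&gt;
cmk set secretkey YOUR_SECRET_KEY&lt;br /&gt;
cmk sync                                      # download the API definitions&lt;br /&gt;
cmk list virtualmachines filter=name,state    # quick test: list your VMs}}&lt;br /&gt;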
&lt;br /&gt;
=== Terraform Integration ===&lt;br /&gt;
Terraform allows you to define infrastructure as code and can be used in conjunction with CloudStack to configure your virtual machines and guest networks. Use the official CloudStack provider.&lt;br /&gt;
&lt;br /&gt;
The following is an example Terraform file for reference. Specify your CloudStack API endpoint and keys as variables, for example in a separate &amp;lt;code&amp;gt;vars.tf&amp;lt;/code&amp;gt; file that defines &amp;lt;code&amp;gt;cloudstack_api_url&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;cloudstack_api_key&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;cloudstack_secret_key&amp;lt;/code&amp;gt;.&lt;br /&gt;
{{Highlight|code=# Configure the CloudStack Provider&lt;br /&gt;
terraform {&lt;br /&gt;
  required_providers {&lt;br /&gt;
    cloudstack = {&lt;br /&gt;
      source = &amp;quot;cloudstack/cloudstack&amp;quot;&lt;br /&gt;
      version = &amp;quot;0.4.0&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
provider &amp;quot;cloudstack&amp;quot; {&lt;br /&gt;
  api_url    = &amp;quot;${var.cloudstack_api_url}&amp;quot;&lt;br /&gt;
  api_key    = &amp;quot;${var.cloudstack_api_key}&amp;quot;&lt;br /&gt;
  secret_key = &amp;quot;${var.cloudstack_secret_key}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create a new VPC&lt;br /&gt;
resource &amp;quot;cloudstack_vpc&amp;quot; &amp;quot;default&amp;quot; {&lt;br /&gt;
  name = &amp;quot;wedgenet-vpc&amp;quot;&lt;br /&gt;
  display_text = &amp;quot;wedgenet-vpc&amp;quot;&lt;br /&gt;
  cidr = &amp;quot;100.64.0.0/20&amp;quot;&lt;br /&gt;
  vpc_offering = &amp;quot;Default VPC offering&amp;quot;&lt;br /&gt;
  zone = &amp;quot;zone1&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create a new ACL&lt;br /&gt;
resource &amp;quot;cloudstack_network_acl&amp;quot; &amp;quot;default&amp;quot; {&lt;br /&gt;
    name  = &amp;quot;vpc-acl&amp;quot;&lt;br /&gt;
    vpc_id = &amp;quot;${cloudstack_vpc.default.id}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# One ingress and one egress rule for the ACL&lt;br /&gt;
resource &amp;quot;cloudstack_network_acl_rule&amp;quot; &amp;quot;ingress&amp;quot; {&lt;br /&gt;
    acl_id = &amp;quot;${cloudstack_network_acl.default.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    rule {&lt;br /&gt;
        action       = &amp;quot;allow&amp;quot;&lt;br /&gt;
        cidr_list    = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
        protocol     = &amp;quot;tcp&amp;quot;&lt;br /&gt;
        ports        = [&amp;quot;22&amp;quot;, &amp;quot;80&amp;quot;, &amp;quot;443&amp;quot;]&lt;br /&gt;
        traffic_type = &amp;quot;ingress&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;cloudstack_network_acl_rule&amp;quot; &amp;quot;egress&amp;quot; {&lt;br /&gt;
    acl_id = &amp;quot;${cloudstack_network_acl.default.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    rule {&lt;br /&gt;
        action       = &amp;quot;allow&amp;quot;&lt;br /&gt;
        cidr_list    = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
        protocol     = &amp;quot;all&amp;quot;&lt;br /&gt;
        traffic_type = &amp;quot;egress&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Create a new network in the VPC&lt;br /&gt;
resource &amp;quot;cloudstack_network&amp;quot; &amp;quot;primary&amp;quot; {&lt;br /&gt;
    name = &amp;quot;primary&amp;quot;&lt;br /&gt;
    display_text = &amp;quot;primary&amp;quot;&lt;br /&gt;
    cidr = &amp;quot;100.64.1.0/24&amp;quot;&lt;br /&gt;
    network_offering = &amp;quot;DefaultIsolatedNetworkOfferingForVpcNetworks&amp;quot;&lt;br /&gt;
    acl_id = &amp;quot;${cloudstack_network_acl.default.id}&amp;quot;&lt;br /&gt;
    vpc_id = &amp;quot;${cloudstack_vpc.default.id}&amp;quot;&lt;br /&gt;
    zone = &amp;quot;zone1&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create a new public IP address for this network&lt;br /&gt;
resource &amp;quot;cloudstack_ipaddress&amp;quot; &amp;quot;public_ip&amp;quot; {&lt;br /&gt;
    vpc_id = &amp;quot;${cloudstack_vpc.default.id}&amp;quot;&lt;br /&gt;
    network_id = &amp;quot;${cloudstack_network.primary.id}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Create VMs. &lt;br /&gt;
resource &amp;quot;cloudstack_instance&amp;quot; &amp;quot;vm&amp;quot; {&lt;br /&gt;
  count = 1&lt;br /&gt;
  name = &amp;quot;vm${count.index+1}&amp;quot;&lt;br /&gt;
  zone = &amp;quot;zone1&amp;quot;&lt;br /&gt;
  service_offering = &amp;quot;rcs.c4&amp;quot;&lt;br /&gt;
  template = &amp;quot;RockyLinux 8.5&amp;quot;&lt;br /&gt;
  network_id = &amp;quot;${cloudstack_network.primary.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  # Cloud Init data can be used to configure your VM on first startup if your template supports Cloud Init&lt;br /&gt;
  user_data = &amp;lt;&amp;lt;EOF&lt;br /&gt;
#cloud-config&lt;br /&gt;
&lt;br /&gt;
# Require specific packages&lt;br /&gt;
packages:&lt;br /&gt;
 - tmux&lt;br /&gt;
 - git&lt;br /&gt;
 - tcpdump&lt;br /&gt;
&lt;br /&gt;
EOF&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|style=max-height: 300px;overflow:scroll;}}&lt;br /&gt;
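&lt;br /&gt;
With the configuration above saved to a file such as &amp;lt;code&amp;gt;main.tf&amp;lt;/code&amp;gt; (the file name is just an example) and your variables defined, the usual Terraform workflow applies:&lt;br /&gt;
{{Highlight|code=terraform init      # download the CloudStack provider declared above&lt;br /&gt;
terraform plan      # preview the VPC, ACL, network, IP, and VM changes&lt;br /&gt;
terraform apply     # create the resources in your CloudStack account&lt;br /&gt;
terraform destroy   # remove them again when no longer needed}}&lt;br /&gt;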
&lt;br /&gt;
== Applying system updates ==&lt;br /&gt;
Research Computing Services strongly recommends users apply operating system updates as soon as possible. The commands used to apply system updates differ depending on the installed operating system.&lt;br /&gt;
&lt;br /&gt;
=== Ubuntu / Debian based systems ===&lt;br /&gt;
Connect to the Linux system via the CloudStack console or through SSH, then:&lt;br /&gt;
&lt;br /&gt;
# Refresh the package database: &#039;&#039;&#039;&amp;lt;code&amp;gt;sudo apt update&amp;lt;/code&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
# Determine what packages will be updated: &#039;&#039;&#039;&amp;lt;code&amp;gt;sudo apt list --upgradable&amp;lt;/code&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
# Apply the updates: &amp;lt;code&amp;gt;sudo apt upgrade&amp;lt;/code&amp;gt;&lt;br /&gt;
# You may wish to reboot the VM to apply all updates, such as when the kernel is updated: &amp;lt;code&amp;gt;sudo reboot&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Rocky Linux / Red Hat based systems ===&lt;br /&gt;
Connect to the Linux system via the CloudStack console or through SSH, then:&lt;br /&gt;
&lt;br /&gt;
# Clear any cached data in the package database: &#039;&#039;&#039;&amp;lt;code&amp;gt;sudo dnf clean all&amp;lt;/code&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
# Apply all security updates: &#039;&#039;&#039;&amp;lt;code&amp;gt;sudo dnf update --security&amp;lt;/code&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
# You may wish to reboot the VM to apply all updates, such as when the kernel is updated: &amp;lt;code&amp;gt;sudo reboot&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more information, please see [https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_and_monitoring_security_updates/installing-security-updates_managing-and-monitoring-security-updates#installing-security-updates_managing-and-monitoring-security-updates Red Hat&#039;s documentation].&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&lt;br /&gt;
=== Cannot create a volume snapshot ===&lt;br /&gt;
Volume snapshots can only be taken on VMs that are powered off.&lt;br /&gt;
&lt;br /&gt;
=== Cannot create a VM snapshot ===&lt;br /&gt;
Disk-only VM snapshots cannot be taken when the VM is running. If you intend to snapshot a running system, you must also snapshot its memory.&lt;br /&gt;
&lt;br /&gt;
=== VM state is still running after shutdown ===&lt;br /&gt;
After running &#039;shutdown&#039; on a VM, the VM state reported by CloudStack is still running. &lt;br /&gt;
&lt;br /&gt;
Please try performing a forced shutdown from the CloudStack management console. CloudStack does not pick up power state changes made outside of CloudStack, so the VM state shown in the console is not updated when the VM shuts itself down (likely a bug).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:CloudStack]]&lt;br /&gt;
{{Navbox CloudStack}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Template:ARC_Cluster_Status&amp;diff=3597</id>
		<title>Template:ARC Cluster Status</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Template:ARC_Cluster_Status&amp;diff=3597"/>
		<updated>2024-10-08T19:13:33Z</updated>

		<summary type="html">&lt;p&gt;Lleung: Undo revision 3584 by Lleung (talk)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Cluster Status&lt;br /&gt;
| cluster = ARC&lt;br /&gt;
| status = green&lt;br /&gt;
| title = Cluster operational&lt;br /&gt;
| message = System is operational. No updates are planned.&lt;br /&gt;
&lt;br /&gt;
See the [[ARC Cluster Status]] page for system notices. &lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Template:ARC_Cluster_Status&amp;diff=3596</id>
		<title>Template:ARC Cluster Status</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Template:ARC_Cluster_Status&amp;diff=3596"/>
		<updated>2024-10-08T19:13:19Z</updated>

		<summary type="html">&lt;p&gt;Lleung: Undo revision 3585 by Lleung (talk)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Cluster Status&lt;br /&gt;
| cluster = ARC&lt;br /&gt;
| status = green&lt;br /&gt;
| title = Cluster operational&lt;br /&gt;
| message = Data center move is in progress. Currently, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable&lt;br /&gt;
&lt;br /&gt;
See the [[ARC Cluster Status]] page for system notices. &lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Template:ARC_Cluster_Status&amp;diff=3585</id>
		<title>Template:ARC Cluster Status</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Template:ARC_Cluster_Status&amp;diff=3585"/>
		<updated>2024-10-04T22:27:22Z</updated>

		<summary type="html">&lt;p&gt;Lleung: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Cluster Status&lt;br /&gt;
| cluster = ARC&lt;br /&gt;
| status = yellow&lt;br /&gt;
| title = Cluster partly unavailable&lt;br /&gt;
| message = Data center move is in progress. Currently, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable&lt;br /&gt;
&lt;br /&gt;
See the [[ARC Cluster Status]] page for system notices. &lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Template:ARC_Cluster_Status&amp;diff=3584</id>
		<title>Template:ARC Cluster Status</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Template:ARC_Cluster_Status&amp;diff=3584"/>
		<updated>2024-10-04T22:27:11Z</updated>

		<summary type="html">&lt;p&gt;Lleung: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Cluster Status&lt;br /&gt;
| cluster = ARC&lt;br /&gt;
| status = green&lt;br /&gt;
| title = Cluster operational&lt;br /&gt;
| message = Data center move is in progress. Currently, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable&lt;br /&gt;
&lt;br /&gt;
See the [[ARC Cluster Status]] page for system notices. &lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=ARC_Cluster_Status&amp;diff=3583</id>
		<title>ARC Cluster Status</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=ARC_Cluster_Status&amp;diff=3583"/>
		<updated>2024-10-04T22:26:13Z</updated>

		<summary type="html">&lt;p&gt;Lleung: Power issue update&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ARC Cluster Status}}&lt;br /&gt;
&lt;br /&gt;
== System Messages ==&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = January System Updates&lt;br /&gt;
| date = 2023/01/01&lt;br /&gt;
| message =&lt;br /&gt;
Beginning January 16, 2023, the ARC cluster will undergo operating system updates. We shall do our utmost to minimize disruption and allow ongoing jobs to be completed. New jobs may be temporarily held from scheduling.&lt;br /&gt;
&lt;br /&gt;
The ARC login node will reboot on the morning of January 16. Please save your work and log out if possible.&lt;br /&gt;
&lt;br /&gt;
The upgrade is planned to be fully complete by January 20.&lt;br /&gt;
&lt;br /&gt;
If you encounter any system issues, do not hesitate to let us know.&lt;br /&gt;
&lt;br /&gt;
Thank you for your cooperation.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = System Updates Completed&lt;br /&gt;
| date = 2023/01/24&lt;br /&gt;
| message =&lt;br /&gt;
The upgrade has been completed. The following has been changed:&lt;br /&gt;
* OS Updated to Rocky Linux 8.7&lt;br /&gt;
* Slurm updated to 22.05.7&lt;br /&gt;
* Apptainer replaces Singularity&lt;br /&gt;
* Each job will have its own /tmp, /dev/shm, /run/user/$uid mounted&lt;br /&gt;
&lt;br /&gt;
If you encounter any system issues, do not hesitate to let us know.&lt;br /&gt;
&lt;br /&gt;
Thank you for your cooperation.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Filesystem Issues&lt;br /&gt;
| date = 2023/02/28&lt;br /&gt;
| message =&lt;br /&gt;
We are currently investigating a filesystem issue that is causing filesystem slowdowns across ARC.&lt;br /&gt;
&lt;br /&gt;
We will update you with more information as it becomes available.&lt;br /&gt;
&lt;br /&gt;
Thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Filesystem Issues&lt;br /&gt;
| date = 2023/03/1&lt;br /&gt;
| message =&lt;br /&gt;
We are still currently investigating a filesystem issue that is causing filesystem slowdowns across ARC. Some jobs on ARC have been paused to help us find the root cause of the slowdowns.&lt;br /&gt;
&lt;br /&gt;
We will update you with more information as it becomes available.&lt;br /&gt;
&lt;br /&gt;
Thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = ARC Login node reboot&lt;br /&gt;
| date = 2023/03/2&lt;br /&gt;
| message =&lt;br /&gt;
The ARC login node will be rebooted this afternoon for an emergency maintenance. This downtime is needed to help mitigate the filesystem slowdowns experienced on the login node.  Jobs will continue running and scheduling during this time.&lt;br /&gt;
&lt;br /&gt;
All logins to the ARC login node will be terminated at 3:00PM and will remain unavailable until 4:00PM.&lt;br /&gt;
&lt;br /&gt;
We apologize for the inconvenience and thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = ⚠️ Filesystem Issues&lt;br /&gt;
| date = 2023/03/2&lt;br /&gt;
| message =&lt;br /&gt;
We are still currently investigating a filesystem issue that is causing filesystem slowdowns on specific nodes in our MSRDC location.&lt;br /&gt;
&lt;br /&gt;
We will update you with more information as it becomes available.&lt;br /&gt;
&lt;br /&gt;
We apologize for the inconvenience and thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Filesystem Issues Resolved&lt;br /&gt;
| date = 2023/03/10&lt;br /&gt;
| message =&lt;br /&gt;
We have upgraded the filesystem routers in our MSRDC location to address the performance issues.&lt;br /&gt;
&lt;br /&gt;
Please let us know if you experience any issues with the filesystem performance.&lt;br /&gt;
&lt;br /&gt;
Thank-you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Open OnDemand reboot&lt;br /&gt;
| date = 2023/05/01&lt;br /&gt;
| message =&lt;br /&gt;
On May 1, 2023, the ARC Open OnDemand node will be rebooted between 5PM and 6PM. Expected downtime will be approximately 15 minutes.&lt;br /&gt;
&lt;br /&gt;
If you encounter any system issues, do not hesitate to let us know.&lt;br /&gt;
&lt;br /&gt;
Thank you for your cooperation.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Apptainer (Singularity) on ARC Login Node&lt;br /&gt;
| date = 2023/06/22&lt;br /&gt;
| message =&lt;br /&gt;
Apptainer (Singularity) containers may experience an error when&lt;br /&gt;
running on the Arc login node. If apptainer complains that a system&lt;br /&gt;
administrator needs to enable user namespaces, simply run your&lt;br /&gt;
containers inside a job.&lt;br /&gt;
&lt;br /&gt;
This is a temporary measure due to a security vulnerability that will be&lt;br /&gt;
patched soon.&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Lattice, Single, cpu2013 partition changes&lt;br /&gt;
| date = 2023/07/13&lt;br /&gt;
| message =&lt;br /&gt;
The Lattice, Single, and cpu2013 partitions have all been decommissioned.  The&lt;br /&gt;
nodes formerly in the cpu2013 partition will replace the Single partition and&lt;br /&gt;
the new partition will be called single.&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Open OnDemand reboot&lt;br /&gt;
| date = 2023/10/17&lt;br /&gt;
| message =&lt;br /&gt;
Open OnDemand will be rebooted on October 17, 2023 for an update. It will be down for up to 30 minutes.&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Storage Upgrade MARC/ARC cluster&lt;br /&gt;
| date = 2023/10/23&lt;br /&gt;
| message =&lt;br /&gt;
We will be performing storage upgrades on the MARC/ARC cluster on &lt;br /&gt;
November 16 and 17, 2023. To facilitate this, we will be throttling &lt;br /&gt;
down the number of jobs on both clusters while the upgrades are &lt;br /&gt;
performed.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Systems Operating Normally&lt;br /&gt;
| date = 2024/05/3&lt;br /&gt;
| message =&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Power Interruption&lt;br /&gt;
| date = 2024/05/07&lt;br /&gt;
| message = ARC experienced a brief power outage around 11 AM on May 7, 2024.&lt;br /&gt;
Most compute nodes have rebooted or are rebooting. Most jobs running at the time &lt;br /&gt;
were lost. ARC administrators are actively working on restarting compute &lt;br /&gt;
nodes. Sorry for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = GPU a100 Node Reservation&lt;br /&gt;
| date = 2024/06/03&lt;br /&gt;
| message = Job submissions targeted to the GPU a100 partition will be &lt;br /&gt;
affected by a temporary reservation on the nodes to accommodate the RCS&lt;br /&gt;
summer school class taking place on 2024/Jun/10. The reservation will end &lt;br /&gt;
shortly afterwards. Please submit your jobs normally and the scheduler will &lt;br /&gt;
start them as soon as the nodes are available. Sorry for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = GPU a100 Node Reservation Removed&lt;br /&gt;
| date = 2024/06/11&lt;br /&gt;
| message = GPU a100 Nodes in ARC have been returned to normal scheduling. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Notice of Upcoming Partial Outage&lt;br /&gt;
| date = 2024/08/23&lt;br /&gt;
| message = Several compute nodes from the ARC cluster will be unavailable &lt;br /&gt;
between Sept 23 and Sept 27 inclusive (subject to change). All compute nodes &lt;br /&gt;
in cpu2019, cpu2021/2, and gpu-v100, and most nodes from bigmem and gpu-a100, will be &lt;br /&gt;
affected. These nodes will return to service as soon as the work is complete.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update I&lt;br /&gt;
| date = 2024/09/25&lt;br /&gt;
| message = Due to hardware issues that are blocking our original maintenance window, most compute nodes that were taken offline on Monday have been brought back online today. An additional partial outage for the same nodes will begin next Tuesday.&lt;br /&gt;
&lt;br /&gt;
On Tuesday, October 1, 2024, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until Friday October 4, 2024.&lt;br /&gt;
&lt;br /&gt;
We apologise for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update II&lt;br /&gt;
| date = 2024/10/04&lt;br /&gt;
| message = The maintenance window will be extended until at least Monday, October 7, 2024 due to a power distribution issue in our renovated data centre.&lt;br /&gt;
&lt;br /&gt;
Currently, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until at least Monday, October 7, 2024. Affected WDF-Altis GPU nodes include: wdfgpu[1-2,6,8-12].&lt;br /&gt;
&lt;br /&gt;
We apologize for the extended downtime and will update you as soon as we have additional information from our operations team.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Navbox ARC}}&lt;br /&gt;
[[Category:ARC]]&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Think_Login_Node_Status&amp;diff=3582</id>
		<title>Think Login Node Status</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Think_Login_Node_Status&amp;diff=3582"/>
		<updated>2024-10-04T22:26:04Z</updated>

		<summary type="html">&lt;p&gt;Lleung: Power issue update&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ARC Cluster Status}}&lt;br /&gt;
&lt;br /&gt;
== System Messages ==&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Systems Operating Normally&lt;br /&gt;
| date = 2024/09/03&lt;br /&gt;
| message =&lt;br /&gt;
The ARC Cluster and the Think login node are operational. No upcoming upgrades are planned.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Notice of Upcoming Partial Outage&lt;br /&gt;
| date = 2024/08/23&lt;br /&gt;
| message = Several compute nodes from the ARC cluster will be unavailable &lt;br /&gt;
between Sept 23 and Sept 27 inclusive (subject to change). Some Think GPU nodes will be affected during this maintenance window. These nodes will return to service as soon as the work is complete.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update I&lt;br /&gt;
| date = 2024/09/25&lt;br /&gt;
| message = Due to hardware issues that are blocking our original maintenance window, most compute nodes that were taken offline on Monday have been brought back online today. An additional partial outage for the same nodes will begin next Tuesday.&lt;br /&gt;
&lt;br /&gt;
On Tuesday, October 1, 2024, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until Friday October 4, 2024.&lt;br /&gt;
&lt;br /&gt;
We apologise for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update II&lt;br /&gt;
| date = 2024/10/04&lt;br /&gt;
| message = The maintenance window will be extended until at least Monday, October 7, 2024 due to a power distribution issue in our renovated data centre.&lt;br /&gt;
&lt;br /&gt;
Currently, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until at least Monday, October 7, 2024. Affected WDF-Altis GPU nodes include: wdfgpu[1-2,6,8-12].&lt;br /&gt;
&lt;br /&gt;
We apologize for the extended downtime and will update you as soon as we have additional information from our operations team.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:ARC]]&lt;br /&gt;
{{Navbox ARC}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Altis_Login_Node_Status&amp;diff=3581</id>
		<title>Altis Login Node Status</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Altis_Login_Node_Status&amp;diff=3581"/>
		<updated>2024-10-04T22:25:55Z</updated>

		<summary type="html">&lt;p&gt;Lleung: Power issue update&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ARC Cluster Status}}&lt;br /&gt;
&lt;br /&gt;
== System Messages ==&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Systems Operating Normally&lt;br /&gt;
| date = 2024/09/03&lt;br /&gt;
| message =&lt;br /&gt;
The ARC Cluster and the Altis login node are operational. No upcoming upgrades are planned.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Notice of Upcoming Partial Outage&lt;br /&gt;
| date = 2024/08/27&lt;br /&gt;
| message = Several compute nodes from the ARC cluster will be unavailable &lt;br /&gt;
between Sept 23 and Sept 27 inclusive (subject to change). Some Altis GPU nodes will be affected during this maintenance window. These nodes will return to service as soon as the work is complete.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update I&lt;br /&gt;
| date = 2024/09/25&lt;br /&gt;
| message = Due to hardware issues that are blocking our original maintenance window, most compute nodes that were taken offline on Monday have been brought back online today. An additional partial outage for the same nodes will begin next Tuesday.&lt;br /&gt;
&lt;br /&gt;
On Tuesday, October 1, 2024, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until Friday October 4, 2024. Affected WDF-Altis GPU nodes include: wdfgpu[1-2,6,8-12].&lt;br /&gt;
&lt;br /&gt;
We apologise for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update II&lt;br /&gt;
| date = 2024/10/04&lt;br /&gt;
| message = The maintenance window will be extended until at least Monday, October 7, 2024 due to a power distribution issue in our renovated data centre.&lt;br /&gt;
&lt;br /&gt;
Currently, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until at least Monday, October 7, 2024. Affected WDF-Altis GPU nodes include: wdfgpu[1-2,6,8-12].&lt;br /&gt;
&lt;br /&gt;
We apologize for the extended downtime and will update you as soon as we have additional information from our operations team.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:ARC]]&lt;br /&gt;
{{Navbox ARC}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Altis_Login_Node_Status&amp;diff=3567</id>
		<title>Altis Login Node Status</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Altis_Login_Node_Status&amp;diff=3567"/>
		<updated>2024-09-25T21:07:02Z</updated>

		<summary type="html">&lt;p&gt;Lleung: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ARC Cluster Status}}&lt;br /&gt;
&lt;br /&gt;
== System Messages ==&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Systems Operating Normally&lt;br /&gt;
| date = 2024/09/03&lt;br /&gt;
| message =&lt;br /&gt;
The ARC Cluster and the Altis login node are operational. No upcoming upgrades are planned.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Notice of Upcoming Partial Outage&lt;br /&gt;
| date = 2024/08/27&lt;br /&gt;
| message = Several compute nodes from the ARC cluster will be unavailable &lt;br /&gt;
between Sept 23 and Sept 27 inclusive (subject to change). Some Altis GPU nodes will be affected during this maintenance window. These nodes will return to service as soon as the work is complete.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update I&lt;br /&gt;
| date = 2024/09/25&lt;br /&gt;
| message = Due to hardware issues that are blocking our original maintenance window, most compute nodes that were taken offline on Monday have been brought back online today. An additional partial outage for the same nodes will begin next Tuesday.&lt;br /&gt;
&lt;br /&gt;
On Tuesday, October 1, 2024, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until Friday October 4, 2024. Affected WDF-Altis GPU nodes include: wdfgpu[1-2,6,8-12].&lt;br /&gt;
&lt;br /&gt;
We apologise for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:ARC]]&lt;br /&gt;
{{Navbox ARC}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Think_Login_Node_Status&amp;diff=3566</id>
		<title>Think Login Node Status</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Think_Login_Node_Status&amp;diff=3566"/>
		<updated>2024-09-25T21:06:52Z</updated>

		<summary type="html">&lt;p&gt;Lleung: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ARC Cluster Status}}&lt;br /&gt;
&lt;br /&gt;
== System Messages ==&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Systems Operating Normally&lt;br /&gt;
| date = 2024/09/03&lt;br /&gt;
| message =&lt;br /&gt;
The ARC Cluster and the Think login node are operational. No upcoming upgrades are planned.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Notice of Upcoming Partial Outage&lt;br /&gt;
| date = 2024/08/23&lt;br /&gt;
| message = Several compute nodes from the ARC cluster will be unavailable &lt;br /&gt;
between Sept 23 and Sept 27 inclusive (subject to change). Some Think GPU nodes will be affected during this maintenance window. These nodes will return to service as soon as the work is complete.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update I&lt;br /&gt;
| date = 2024/09/25&lt;br /&gt;
| message = Due to hardware issues that are blocking our original maintenance window, most compute nodes that were taken offline on Monday have been brought back online today. An additional partial outage for the same nodes will begin next Tuesday.&lt;br /&gt;
&lt;br /&gt;
On Tuesday, October 1, 2024, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until Friday October 4, 2024.&lt;br /&gt;
&lt;br /&gt;
We apologise for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:ARC]]&lt;br /&gt;
{{Navbox ARC}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=ARC_Cluster_Status&amp;diff=3565</id>
		<title>ARC Cluster Status</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=ARC_Cluster_Status&amp;diff=3565"/>
		<updated>2024-09-25T20:48:09Z</updated>

		<summary type="html">&lt;p&gt;Lleung: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ARC Cluster Status}}&lt;br /&gt;
&lt;br /&gt;
== System Messages ==&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = January System Updates&lt;br /&gt;
| date = 2023/01/01&lt;br /&gt;
| message =&lt;br /&gt;
Beginning January 16, 2023, the ARC cluster will undergo operating system updates. We shall do our utmost to minimize disruption and allow ongoing jobs to be completed. New jobs may be temporarily held from scheduling.&lt;br /&gt;
&lt;br /&gt;
The ARC login node will reboot on the morning of January 16. Please save your work and log out if possible.&lt;br /&gt;
&lt;br /&gt;
The upgrade is planned to be fully complete by January 20.&lt;br /&gt;
&lt;br /&gt;
If you encounter any system issues, do not hesitate to let us know.&lt;br /&gt;
&lt;br /&gt;
Thank you for your cooperation.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = System Updates Completed&lt;br /&gt;
| date = 2023/01/24&lt;br /&gt;
| message =&lt;br /&gt;
The upgrade has been completed. The following has been changed:&lt;br /&gt;
* OS Updated to Rocky Linux 8.7&lt;br /&gt;
* Slurm updated to 22.05.7&lt;br /&gt;
* Apptainer replaces Singularity&lt;br /&gt;
* Each job will have its own /tmp, /dev/shm, /run/user/$uid mounted&lt;br /&gt;
&lt;br /&gt;
If you encounter any system issues, do not hesitate to let us know.&lt;br /&gt;
&lt;br /&gt;
Thank you for your cooperation.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Filesystem Issues&lt;br /&gt;
| date = 2023/02/28&lt;br /&gt;
| message =&lt;br /&gt;
We are currently investigating a filesystem issue that is causing filesystem slowdowns across ARC.&lt;br /&gt;
&lt;br /&gt;
We will update you with more information as it becomes available.&lt;br /&gt;
&lt;br /&gt;
Thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Filesystem Issues&lt;br /&gt;
| date = 2023/03/1&lt;br /&gt;
| message =&lt;br /&gt;
We are still investigating a filesystem issue that is causing slowdowns across ARC. Some jobs on ARC have been paused to help us find the root cause of the slowdowns.&lt;br /&gt;
&lt;br /&gt;
We will update you with more information as it becomes available.&lt;br /&gt;
&lt;br /&gt;
Thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = ARC Login node reboot&lt;br /&gt;
| date = 2023/03/2&lt;br /&gt;
| message =&lt;br /&gt;
The ARC login node will be rebooted this afternoon for emergency maintenance. This downtime is needed to help mitigate the filesystem slowdowns experienced on the login node. Jobs will continue running and scheduling during this time.&lt;br /&gt;
&lt;br /&gt;
All logins to the ARC login node will be terminated at 3:00 PM, and the login node will remain unavailable until 4:00 PM.&lt;br /&gt;
&lt;br /&gt;
We apologize for the inconvenience and thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = ⚠️ Filesystem Issues&lt;br /&gt;
| date = 2023/03/2&lt;br /&gt;
| message =&lt;br /&gt;
We are still investigating a filesystem issue that is causing slowdowns on specific nodes in our MSRDC location.&lt;br /&gt;
&lt;br /&gt;
We will update you with more information as it becomes available.&lt;br /&gt;
&lt;br /&gt;
We apologize for the inconvenience and thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Filesystem Issues Resolved&lt;br /&gt;
| date = 2023/03/10&lt;br /&gt;
| message =&lt;br /&gt;
We have upgraded the filesystem routers in our MSRDC location to address the performance issues.&lt;br /&gt;
&lt;br /&gt;
Please let us know if you experience any issues with the filesystem performance.&lt;br /&gt;
&lt;br /&gt;
Thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Open OnDemand reboot&lt;br /&gt;
| date = 2023/05/01&lt;br /&gt;
| message =&lt;br /&gt;
On May 1, 2023, the ARC Open OnDemand node will be rebooted between 5PM and 6PM. Expected downtime will be approximately 15 minutes.&lt;br /&gt;
&lt;br /&gt;
If you encounter any system issues, do not hesitate to let us know.&lt;br /&gt;
&lt;br /&gt;
Thank you for your cooperation.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Apptainer (Singularity) on ARC Login Node&lt;br /&gt;
| date = 2023/06/22&lt;br /&gt;
| message =&lt;br /&gt;
Apptainer (Singularity) containers may experience an error when&lt;br /&gt;
running on the ARC login node. If Apptainer complains that a system&lt;br /&gt;
administrator needs to enable user namespaces, simply run your&lt;br /&gt;
containers inside a job.&lt;br /&gt;
&lt;br /&gt;
This is a temporary measure due to a security vulnerability that will be&lt;br /&gt;
patched soon.&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Lattice, Single, cpu2013 partition changes&lt;br /&gt;
| date = 2023/07/13&lt;br /&gt;
| message =&lt;br /&gt;
The Lattice, Single, and cpu2013 partitions have all been decommissioned. The&lt;br /&gt;
nodes formerly in the cpu2013 partition will replace the old Single nodes and&lt;br /&gt;
will continue to be available under the partition name single.&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Open OnDemand reboot&lt;br /&gt;
| date = 2023/10/17&lt;br /&gt;
| message =&lt;br /&gt;
Open OnDemand will be rebooted on October 17, 2023 for an update. It will be down for up to 30 minutes.&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Storage Upgrade MARC/ARC cluster&lt;br /&gt;
| date = 2023/10/23&lt;br /&gt;
| message =&lt;br /&gt;
We will be performing storage upgrades on the MARC/ARC cluster on &lt;br /&gt;
November 16 and 17, 2023. To facilitate this, we will be throttling &lt;br /&gt;
down the number of jobs on both clusters while the upgrades are &lt;br /&gt;
performed.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Systems Operating Normally&lt;br /&gt;
| date = 2024/05/3&lt;br /&gt;
| message =&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Power Interruption&lt;br /&gt;
| date = 2024/05/07&lt;br /&gt;
| message = ARC experienced a brief power outage around 11 AM on May 7, 2024.&lt;br /&gt;
Most compute nodes have rebooted or are rebooting. Most jobs running at the time &lt;br /&gt;
were lost. ARC administrators are actively working on restarting compute &lt;br /&gt;
nodes. Sorry for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = GPU a100 Node Reservation&lt;br /&gt;
| date = 2024/06/03&lt;br /&gt;
| message = Job submissions targeted to the GPU a100 partition will be &lt;br /&gt;
affected by a temporary reservation on the nodes to accommodate the RCS&lt;br /&gt;
summer school class taking place on 2024/Jun/10. The reservation will end &lt;br /&gt;
shortly afterwards. Please submit your jobs normally and the scheduler will &lt;br /&gt;
start them as soon as the nodes are available. Sorry for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = GPU a100 Node Reservation Removed&lt;br /&gt;
| date = 2024/06/11&lt;br /&gt;
| message = GPU a100 Nodes in ARC have been returned to normal scheduling. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Notice of Upcoming Partial Outage&lt;br /&gt;
| date = 2024/08/23&lt;br /&gt;
| message = Several compute nodes from the ARC cluster will be unavailable &lt;br /&gt;
between Sept 23 and Sept 27 inclusive (subject to change). All compute nodes &lt;br /&gt;
in cpu2019, cpu2021/2, and gpu-v100, and most nodes from bigmem and gpu-a100, will be &lt;br /&gt;
affected. These nodes will return to service as soon as the work is complete.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update I&lt;br /&gt;
| date = 2024/09/25&lt;br /&gt;
| message = Due to hardware issues that are blocking our original maintenance window, most compute nodes that were taken offline on Monday have been brought back online today. An additional partial outage for the same nodes will begin next Tuesday.&lt;br /&gt;
&lt;br /&gt;
On Tuesday, October 1, 2024, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until Friday October 4, 2024.&lt;br /&gt;
&lt;br /&gt;
We apologise for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Navbox ARC}}&lt;br /&gt;
[[Category:ARC]]&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Altis_Login_Node_Status&amp;diff=3564</id>
		<title>Altis Login Node Status</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Altis_Login_Node_Status&amp;diff=3564"/>
		<updated>2024-09-25T20:41:34Z</updated>

		<summary type="html">&lt;p&gt;Lleung: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ARC Cluster Status}}&lt;br /&gt;
&lt;br /&gt;
== System Messages ==&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Systems Operating Normally&lt;br /&gt;
| date = 2024/09/03&lt;br /&gt;
| message =&lt;br /&gt;
The ARC Cluster and the Altis login node are operational. No upcoming upgrades are planned.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Notice of Upcoming Partial Outage&lt;br /&gt;
| date = 2024/08/27&lt;br /&gt;
| message = Several compute nodes from the ARC cluster will be unavailable &lt;br /&gt;
between Sept 23 and Sept 27 inclusive (subject to change). Some Altis GPU nodes will be affected during this maintenance window. These nodes will return to service as soon as the work is complete.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update I&lt;br /&gt;
| date = 2024/08/25&lt;br /&gt;
| message = Due to hardware issues that are blocking our original maintenance window, most compute nodes that were taken offline on Monday have been brought back online today. An additional partial outage for the same nodes will begin next Tuesday.&lt;br /&gt;
&lt;br /&gt;
On Tuesday, October 1, 2024, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until Friday October 4, 2024. Affected WDF-Altis GPU nodes include: wdfgpu[1-2,6,8-12].&lt;br /&gt;
&lt;br /&gt;
We apologise for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:ARC]]&lt;br /&gt;
{{Navbox ARC}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Think_Login_Node_Status&amp;diff=3563</id>
		<title>Think Login Node Status</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Think_Login_Node_Status&amp;diff=3563"/>
		<updated>2024-09-25T20:40:34Z</updated>

		<summary type="html">&lt;p&gt;Lleung: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ARC Cluster Status}}&lt;br /&gt;
&lt;br /&gt;
== System Messages ==&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Systems Operating Normally&lt;br /&gt;
| date = 2024/09/03&lt;br /&gt;
| message =&lt;br /&gt;
The ARC Cluster and the Think login node are operational. No upcoming upgrades are planned.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Notice of Upcoming Partial Outage&lt;br /&gt;
| date = 2024/08/23&lt;br /&gt;
| message = Several compute nodes from the ARC cluster will be unavailable &lt;br /&gt;
between Sept 23 and Sept 27 inclusive (subject to change). Some Think GPU nodes will be affected during this maintenance window. These nodes will return to service as soon as the work is complete.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update I&lt;br /&gt;
| date = 2024/08/25&lt;br /&gt;
| message = Due to hardware issues that are blocking our original maintenance window, most compute nodes that were taken offline on Monday have been brought back online today. An additional partial outage for the same nodes will begin next Tuesday.&lt;br /&gt;
&lt;br /&gt;
On Tuesday, October 1, 2024, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until Friday October 4, 2024.&lt;br /&gt;
&lt;br /&gt;
We apologise for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:ARC]]&lt;br /&gt;
{{Navbox ARC}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=ARC_Cluster_Status&amp;diff=3562</id>
		<title>ARC Cluster Status</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=ARC_Cluster_Status&amp;diff=3562"/>
		<updated>2024-09-25T20:40:13Z</updated>

		<summary type="html">&lt;p&gt;Lleung: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ARC Cluster Status}}&lt;br /&gt;
&lt;br /&gt;
== System Messages ==&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = January System Updates&lt;br /&gt;
| date = 2023/01/01&lt;br /&gt;
| message =&lt;br /&gt;
Beginning January 16, 2023, the ARC cluster will undergo operating system updates. We shall do our utmost to minimize disruption and allow ongoing jobs to be completed. New jobs may be temporarily held from scheduling.&lt;br /&gt;
&lt;br /&gt;
The ARC login node will reboot on the morning of January 16. Please save your work and log out if possible.&lt;br /&gt;
&lt;br /&gt;
The upgrade is planned to be fully complete by January 20.&lt;br /&gt;
&lt;br /&gt;
If you encounter any system issues, do not hesitate to let us know.&lt;br /&gt;
&lt;br /&gt;
Thank you for your cooperation.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = System Updates Completed&lt;br /&gt;
| date = 2023/01/24&lt;br /&gt;
| message =&lt;br /&gt;
The upgrade has been completed. The following has been changed:&lt;br /&gt;
* OS Updated to Rocky Linux 8.7&lt;br /&gt;
* Slurm updated to 22.05.7&lt;br /&gt;
* Apptainer replaces Singularity&lt;br /&gt;
* Each job will have its own /tmp, /dev/shm, /run/user/$uid mounted&lt;br /&gt;
&lt;br /&gt;
If you encounter any system issues, do not hesitate to let us know.&lt;br /&gt;
&lt;br /&gt;
Thank you for your cooperation.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Filesystem Issues&lt;br /&gt;
| date = 2023/02/28&lt;br /&gt;
| message =&lt;br /&gt;
We are currently investigating a filesystem issue that is causing filesystem slowdowns across ARC.&lt;br /&gt;
&lt;br /&gt;
We will update you with more information as it becomes available.&lt;br /&gt;
&lt;br /&gt;
Thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Filesystem Issues&lt;br /&gt;
| date = 2023/03/1&lt;br /&gt;
| message =&lt;br /&gt;
We are still investigating a filesystem issue that is causing slowdowns across ARC. Some jobs on ARC have been paused to help us find the root cause of the slowdowns.&lt;br /&gt;
&lt;br /&gt;
We will update you with more information as it becomes available.&lt;br /&gt;
&lt;br /&gt;
Thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = ARC Login node reboot&lt;br /&gt;
| date = 2023/03/2&lt;br /&gt;
| message =&lt;br /&gt;
The ARC login node will be rebooted this afternoon for emergency maintenance. This downtime is needed to help mitigate the filesystem slowdowns experienced on the login node. Jobs will continue running and scheduling during this time.&lt;br /&gt;
&lt;br /&gt;
All logins to the ARC login node will be terminated at 3:00 PM, and the login node will remain unavailable until 4:00 PM.&lt;br /&gt;
&lt;br /&gt;
We apologize for the inconvenience and thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = ⚠️ Filesystem Issues&lt;br /&gt;
| date = 2023/03/2&lt;br /&gt;
| message =&lt;br /&gt;
We are still investigating a filesystem issue that is causing slowdowns on specific nodes in our MSRDC location.&lt;br /&gt;
&lt;br /&gt;
We will update you with more information as it becomes available.&lt;br /&gt;
&lt;br /&gt;
We apologize for the inconvenience and thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Filesystem Issues Resolved&lt;br /&gt;
| date = 2023/03/10&lt;br /&gt;
| message =&lt;br /&gt;
We have upgraded the filesystem routers in our MSRDC location to address the performance issues.&lt;br /&gt;
&lt;br /&gt;
Please let us know if you experience any issues with the filesystem performance.&lt;br /&gt;
&lt;br /&gt;
Thank you for your patience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Open OnDemand reboot&lt;br /&gt;
| date = 2023/05/01&lt;br /&gt;
| message =&lt;br /&gt;
On May 1, 2023, the ARC Open OnDemand node will be rebooted between 5PM and 6PM. Expected downtime will be approximately 15 minutes.&lt;br /&gt;
&lt;br /&gt;
If you encounter any system issues, do not hesitate to let us know.&lt;br /&gt;
&lt;br /&gt;
Thank you for your cooperation.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Apptainer (Singularity) on ARC Login Node&lt;br /&gt;
| date = 2023/06/22&lt;br /&gt;
| message =&lt;br /&gt;
Apptainer (Singularity) containers may experience an error when&lt;br /&gt;
running on the ARC login node. If Apptainer complains that a system&lt;br /&gt;
administrator needs to enable user namespaces, simply run your&lt;br /&gt;
containers inside a job.&lt;br /&gt;
&lt;br /&gt;
This is a temporary measure due to a security vulnerability that will be&lt;br /&gt;
patched soon.&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Lattice, Single, cpu2013 partition changes&lt;br /&gt;
| date = 2023/07/13&lt;br /&gt;
| message =&lt;br /&gt;
The Lattice, Single, and cpu2013 partitions have all been decommissioned. The&lt;br /&gt;
nodes formerly in the cpu2013 partition will replace the old Single nodes and&lt;br /&gt;
will continue to be available under the partition name single.&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Open OnDemand reboot&lt;br /&gt;
| date = 2023/10/17&lt;br /&gt;
| message =&lt;br /&gt;
Open OnDemand will be rebooted on October 17, 2023 for an update. It will be down for up to 30 minutes.&lt;br /&gt;
&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Storage Upgrade MARC/ARC cluster&lt;br /&gt;
| date = 2023/10/23&lt;br /&gt;
| message =&lt;br /&gt;
We will be performing storage upgrades on the MARC/ARC cluster on &lt;br /&gt;
November 16 and 17, 2023. To facilitate this, we will be throttling &lt;br /&gt;
down the number of jobs on both clusters while the upgrades are &lt;br /&gt;
performed.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Systems Operating Normally&lt;br /&gt;
| date = 2024/05/3&lt;br /&gt;
| message =&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Power Interruption&lt;br /&gt;
| date = 2024/05/07&lt;br /&gt;
| message = ARC experienced a brief power outage around 11 AM on May 7, 2024.&lt;br /&gt;
Most compute nodes have rebooted or are rebooting. Most jobs running at the time &lt;br /&gt;
were lost. ARC administrators are actively working on restarting compute &lt;br /&gt;
nodes. Sorry for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = GPU a100 Node Reservation&lt;br /&gt;
| date = 2024/06/03&lt;br /&gt;
| message = Job submissions targeted to the GPU a100 partition will be &lt;br /&gt;
affected by a temporary reservation on the nodes to accommodate the RCS&lt;br /&gt;
summer school class taking place on 2024/Jun/10. The reservation will end &lt;br /&gt;
shortly afterwards. Please submit your jobs normally and the scheduler will &lt;br /&gt;
start them as soon as the nodes are available. Sorry for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = GPU a100 Node Reservation Removed&lt;br /&gt;
| date = 2024/06/11&lt;br /&gt;
| message = GPU a100 Nodes in ARC have been returned to normal scheduling. &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Notice of Upcoming Partial Outage&lt;br /&gt;
| date = 2024/08/23&lt;br /&gt;
| message = Several compute nodes from the ARC cluster will be unavailable &lt;br /&gt;
between Sept 23 and Sept 27 inclusive (subject to change). All compute nodes &lt;br /&gt;
in cpu2019, cpu2021/2, and gpu-v100, and most nodes from bigmem and gpu-a100, will be &lt;br /&gt;
affected. These nodes will return to service as soon as the work is complete.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Partial Outage Update I&lt;br /&gt;
| date = 2024/08/25&lt;br /&gt;
| message = Due to hardware issues that are blocking our original maintenance window, most compute nodes that were taken offline on Monday have been brought back online today. An additional partial outage for the same nodes will begin next Tuesday.&lt;br /&gt;
&lt;br /&gt;
On Tuesday, October 1, 2024, the compute nodes in cpu2019, cpu2021, cpu2022, gpu-v100, gpu-a100, and most nodes from bigmem will be unavailable until Friday October 4, 2024.&lt;br /&gt;
&lt;br /&gt;
We apologise for the inconvenience.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Navbox ARC}}&lt;br /&gt;
[[Category:ARC]]&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Think_Login_Node_Status&amp;diff=3561</id>
		<title>Think Login Node Status</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Think_Login_Node_Status&amp;diff=3561"/>
		<updated>2024-09-25T20:26:02Z</updated>

		<summary type="html">&lt;p&gt;Lleung: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ARC Cluster Status}}&lt;br /&gt;
&lt;br /&gt;
== System Messages ==&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Systems Operating Normally&lt;br /&gt;
| date = 2024/09/03&lt;br /&gt;
| message =&lt;br /&gt;
The ARC Cluster and the Think login node are operational. No upcoming upgrades are planned.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Notice of Upcoming Partial Outage&lt;br /&gt;
| date = 2024/08/27&lt;br /&gt;
| message = Several compute nodes from the ARC cluster will be unavailable &lt;br /&gt;
between Sept 23 and Sept 27 inclusive (subject to change). Some Think GPU nodes will be affected during this maintenance window. These nodes will return to service as soon as the work is complete.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
[[Category:ARC]]&lt;br /&gt;
{{Navbox ARC}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Altis_Login_Node_Status&amp;diff=3560</id>
		<title>Altis Login Node Status</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Altis_Login_Node_Status&amp;diff=3560"/>
		<updated>2024-09-25T20:25:48Z</updated>

		<summary type="html">&lt;p&gt;Lleung: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ARC Cluster Status}}&lt;br /&gt;
&lt;br /&gt;
== System Messages ==&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Systems Operating Normally&lt;br /&gt;
| date = 2024/09/03&lt;br /&gt;
| message =&lt;br /&gt;
The ARC Cluster and the Altis login node are operational. No upcoming upgrades are planned.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Notice of Upcoming Partial Outage&lt;br /&gt;
| date = 2024/08/27&lt;br /&gt;
| message = Several compute nodes from the ARC cluster will be unavailable &lt;br /&gt;
between Sept 23 and Sept 27 inclusive (subject to change). Some Altis GPU nodes will be affected during this maintenance window. These nodes will return to service as soon as the work is complete.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:ARC]]&lt;br /&gt;
{{Navbox ARC}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=TALC_Cluster&amp;diff=3558</id>
		<title>TALC Cluster</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=TALC_Cluster&amp;diff=3558"/>
		<updated>2024-09-23T22:28:04Z</updated>

		<summary type="html">&lt;p&gt;Lleung: Lleung moved page TALC Cluster to TALC Cluster Guide over redirect&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[TALC Cluster Guide]]&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=TALC_Cluster_Guide&amp;diff=3557</id>
		<title>TALC Cluster Guide</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=TALC_Cluster_Guide&amp;diff=3557"/>
		<updated>2024-09-23T22:28:04Z</updated>

		<summary type="html">&lt;p&gt;Lleung: Lleung moved page TALC Cluster to TALC Cluster Guide over redirect&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{TALC Cluster Status}}{{Message Box&lt;br /&gt;
|icon=Security Icon.png&lt;br /&gt;
|title=Cybersecurity awareness at the U of C&lt;br /&gt;
|message=Please note that there are typically about 950 phishing attempts targeting University of Calgary accounts each month. This is just a reminder to be careful about computer security issues, both at home and at the University. Please visit https://it.ucalgary.ca/it-security for more information, tips on secure computing, and how to report suspected security problems.}}&lt;br /&gt;
&lt;br /&gt;
This guide gives an overview of the Teaching and Learning Cluster (TALC) at the University of Calgary and is intended to be read by new account holders getting started on TALC. This guide covers topics such as the hardware and performance characteristics, available software, usage policies, and how to log in and run jobs. &lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
TALC is a cluster of computers created by Research Computing Services (RCS) in response to requests for a central computing resource to support academic courses and workshops offered at the University of Calgary. It is a complement to the Advanced Research Computing (ARC) cluster that is used for research, rather than educational purposes. The software environment in the TALC and ARC clusters is very similar and workflows between the two clusters are identical.  What students learn about using TALC will have direct applicability to using ARC should they go on to use ARC for research work. &lt;br /&gt;
&lt;br /&gt;
If you are the instructor for a course that could benefit from using TALC, please review this guide and the [[TALC Terms of Use]] and then contact us at support@hpc.ucalgary.ca to discuss your requirements.  &lt;br /&gt;
&lt;br /&gt;
Please note that in order to ensure that the appropriate software is available, student accounts are in place, and appropriate training has been provided for your teaching assistants, it is best to start this discussion several months prior to the start of the course.&lt;br /&gt;
&lt;br /&gt;
If you are a student in a course using TALC, please review this guide for basic instructions in using the cluster.  Questions should first be directed to the teaching assistants or instructor for your course.&lt;br /&gt;
&lt;br /&gt;
===Obtaining an account===&lt;br /&gt;
TALC account requests are expected to be submitted by the course instructor rather than by individual students. You must have a University of Calgary IT account in order to use TALC. If you do not have a University of Calgary IT account or email address, please register for one at https://itregport.ucalgary.ca/. In order to ensure TALC is provisioned in time for a course start date, the instructor should submit the initial list of @ucalgary.ca accounts needed for the course 2 weeks before the start date.&lt;br /&gt;
&lt;br /&gt;
User accounts for classes will exist for the duration of the semester they are being taught, and are deleted along with the data in the home directories when the semester ends. You must ensure that anything you want to access later is saved elsewhere before these dates. We do not keep backups of data on TALC. &lt;br /&gt;
&lt;br /&gt;
For the upcoming academic calendar, accounts will be deleted on the following dates:&lt;br /&gt;
&lt;br /&gt;
Spring: 23 June 2023&lt;br /&gt;
&lt;br /&gt;
Summer: 25 Aug 2023&lt;br /&gt;
&lt;br /&gt;
Fall: 22 Dec 2023&lt;br /&gt;
&lt;br /&gt;
Winter: 30 Apr 2024&lt;br /&gt;
&lt;br /&gt;
=== Getting Support ===&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=Need Help or have other TALC Related Questions?&lt;br /&gt;
|message=&#039;&#039;&#039;Students&#039;&#039;&#039;, please send TALC-related questions to your course instructor or teaching assistants.&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Course instructors and TAs&#039;&#039;&#039;, please report system issues to support@hpc.ucalgary.ca.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
==Hardware==&lt;br /&gt;
The TALC cluster is composed of repurposed research clusters that are a few generations old. As a result, individual processor performance will not be comparable to that of the latest processors but should be sufficient for educational purposes and course work.  &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Description&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores, Model, and Year&lt;br /&gt;
!Installed Memory&lt;br /&gt;
!GPU&lt;br /&gt;
!Network&lt;br /&gt;
|-&lt;br /&gt;
|gpu&lt;br /&gt;
|GPU Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores, 2x Intel Xeon Bronze 3204 CPU @ 1.90GHz (2019)&lt;br /&gt;
|192 GB&lt;br /&gt;
|5x NVIDIA Corporation TU104GL [Tesla T4]&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu16&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|36&lt;br /&gt;
|16 cores, 2x Eight-Core Intel Xeon CPU E5-2650 @ 2.00GHz (2012)&lt;br /&gt;
|64 GB&lt;br /&gt;
|N/A&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|2&lt;br /&gt;
|32 cores, 4x Intel(R) Xeon(R) CPU E7-4830 @ 2.13GHz (2015)&lt;br /&gt;
|1024 GB&lt;br /&gt;
|N/A&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Storage===&lt;br /&gt;
{{Message Box&lt;br /&gt;
| title=No Backup Policy!&lt;br /&gt;
| message=You are responsible for your own backups.   Since accounts on TALC and related data are removed shortly after the associated course has finished, you should download anything you need to save to your own computer before the end of the course.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
TALC is connected to a network disk storage system. This storage is split across the &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; file systems.  &lt;br /&gt;
====&amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;: Home file system====&lt;br /&gt;
Each user has a directory under /home, which is the default working directory when logging in to TALC. Each home directory has a per-user quota of 500 GB. This limit is fixed and cannot be increased.&lt;br /&gt;
&lt;br /&gt;
Note on file sharing: Due to security concerns, permissions set using &amp;lt;code&amp;gt;chmod&amp;lt;/code&amp;gt; to allow other users to read or write to your home directory will be automatically reverted by an automated system process unless an explicit exception is made.  If you need to share files with others on the TALC cluster, please write to support@hpc.ucalgary.ca to ask for such an exception.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt;: Scratch file system for large job-oriented storage====&lt;br /&gt;
Associated with each job, under the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; directory, a subdirectory is created that can be referenced in job scripts as &amp;lt;code&amp;gt;/scratch/${SLURM_JOB_ID}&amp;lt;/code&amp;gt;. You can use that directory for temporary files needed during the course of a job. Up to 30 TB of storage may be used per user (total for all your jobs) in the &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; file system.  &lt;br /&gt;
&lt;br /&gt;
Data in &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; associated with a given job will be deleted automatically, without exception, five days after the job finishes.&lt;br /&gt;
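&lt;br /&gt;
As an illustration (a minimal sketch; the file and program names below are placeholders), a job script can stage temporary files in the per-job scratch directory and copy any results back before the job ends:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Work in the per-job scratch directory created by the scheduler&lt;br /&gt;
cd /scratch/${SLURM_JOB_ID}&lt;br /&gt;
cp ~/input.dat .                      # placeholder input file copied from home&lt;br /&gt;
~/my_program input.dat &amp;gt; output.dat   # placeholder program writing temporary output&lt;br /&gt;
cp output.dat ~/                      # copy results home before the scratch data is removed&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;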
&lt;br /&gt;
== Software ==&lt;br /&gt;
{{Message Box&lt;br /&gt;
| title=Software Package Requests&lt;br /&gt;
| message=Course instructors or teaching assistants should write to support@hpc.ucalgary.ca if additional software is required for their course.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
All TALC nodes run a version of Rocky Linux. For your convenience, we have packaged commonly used software packages and dependencies as modules available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt;. If your software package is not available as a module, you may also try Anaconda, which allows users to manage and install custom packages in an isolated environment.&lt;br /&gt;
&lt;br /&gt;
For a list of packages that have been made available, please see [[ARC Software pages]]. &lt;br /&gt;
&lt;br /&gt;
=== Modules ===&lt;br /&gt;
The setup of the environment for using some of the installed software is through the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
Software packages bundled as a module will be available under &amp;lt;code&amp;gt;/global/software&amp;lt;/code&amp;gt; and can be listed with the &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module avail&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To enable Python, load the Python module by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module load python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To unload the Python module, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module remove python/anaconda-3.6-5.1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To see currently loaded modules, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ module list&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Using TALC==&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=Usage subject to [[TALC Terms of Use]]&lt;br /&gt;
|message=Please review the [[TALC Terms of Use]] prior to using TALC.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
===Logging in===&lt;br /&gt;
To log in to TALC, connect using SSH to talc.ucalgary.ca. Connections to TALC are accepted only from the University of Calgary network (on campus) or through the University of Calgary General VPN (off campus).&lt;br /&gt;
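&lt;br /&gt;
For example (a minimal illustration; replace &amp;lt;code&amp;gt;username&amp;lt;/code&amp;gt; with your own IT account name):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ ssh username@talc.ucalgary.ca&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;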
&lt;br /&gt;
When logging into a new TALC account for &#039;&#039;&#039;the first time&#039;&#039;&#039;, the new user has to agree to the &#039;&#039;&#039;conditions of use&#039;&#039;&#039; for TALC. &lt;br /&gt;
Until the conditions are accepted, the account is not active.&lt;br /&gt;
&lt;br /&gt;
See [[Connecting to RCS HPC Systems]] for more information.&lt;br /&gt;
&lt;br /&gt;
===Working interactively===&lt;br /&gt;
TALC uses the Linux operating system. The program that responds to your typed commands and allows you to run other programs is called the Linux shell. There are several different shells available, but, by default you will use one called bash. It is useful to have some knowledge of the shell and a variety of other command-line programs that you can use to manipulate files. If you are new to Linux systems, we recommend that you work through one of the many online tutorials that are available, such as the [http://www.ee.surrey.ac.uk/Teaching/Unix/index.html UNIX Tutorial for Beginners (external link)] provided by the University of Surrey. The tutorial covers such fundamental topics, among others, as creating, renaming and deleting files and directories, how to produce a listing of your files and how to tell how much disk space you are using.  For a more comprehensive introduction to Linux, see [http://linuxcommand.sourceforge.net/tlcl.php The Linux Command Line (external link)].&lt;br /&gt;
&lt;br /&gt;
The TALC login node may be used for such tasks as editing files, compiling programs and running short tests while developing programs. CPU intensive workloads on the login node should be restricted to under 15 minutes as per [[General Cluster Guidelines and Policies|our cluster guidelines]]. For interactive workloads exceeding 15 minutes, use the &#039;&#039;&#039;[[Running_jobs#Interactive_jobs|salloc command]]&#039;&#039;&#039; to allocate an interactive session on a compute node.&lt;br /&gt;
&lt;br /&gt;
The default &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; allocation is 1 CPU and 1 GB of memory. Adjust this by specifying &amp;lt;code&amp;gt;-n CPU#&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--mem Megabytes&amp;lt;/code&amp;gt;. You may request up to 5 hours of CPU time for interactive jobs.&lt;br /&gt;
 salloc --time 5:00:00 --partition cpu16&lt;br /&gt;
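For example, an interactive session with more resources might be requested as follows (the core count and memory value here are only illustrative):&lt;br /&gt;
 salloc --time 5:00:00 --partition cpu16 -n 4 --mem 8000&lt;br /&gt;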
&lt;br /&gt;
===Running non-interactive jobs (batch processing)===&lt;br /&gt;
Production runs and longer test runs should be submitted as (non-interactive) batch jobs, in which the commands to be executed are listed in a script (text file). Batch job scripts are submitted using the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command, part of the Slurm job management and scheduling software. #SBATCH directive lines at the beginning of the script are used to specify the resources needed for the job (cores, memory, run time limit and any specialized hardware needed).&lt;br /&gt;
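&lt;br /&gt;
For reference, a minimal sketch of a TALC batch script is shown below (the job name, resource values and program are placeholders, not recommendations):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=example       # placeholder job name&lt;br /&gt;
#SBATCH --partition=cpu16        # partition to run on (see the partition table below)&lt;br /&gt;
#SBATCH --ntasks=1               # number of CPU cores&lt;br /&gt;
#SBATCH --mem=4000               # memory in MB&lt;br /&gt;
#SBATCH --time=01:00:00          # run time limit (hh:mm:ss)&lt;br /&gt;
&lt;br /&gt;
~/my_program                     # placeholder program to run&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Such a script, saved for example as &amp;lt;code&amp;gt;job_script.slurm&amp;lt;/code&amp;gt;, would then be submitted with &amp;lt;code&amp;gt;sbatch job_script.slurm&amp;lt;/code&amp;gt;.&lt;br /&gt;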
&lt;br /&gt;
Most of the information on the Running Jobs page on the Compute Canada web site is also relevant for submitting and managing batch jobs and reserving processors for interactive work on TALC.  One major difference between running jobs on the TALC and Compute Canada clusters is in selecting the type of hardware that should be used for a job. On TALC, you choose the hardware to use primarily by specifying a partition, as described below.&lt;br /&gt;
&lt;br /&gt;
===Using JupyterHub on TALC===&lt;br /&gt;
TALC has a JupyterHub server, which runs a Jupyter server on one of the TALC compute nodes and provides all the necessary encryption and plumbing to deliver the notebook to your computer.  To access this service you must have a TALC account. Point your browser at http://talc.ucalgary.ca and log in with your usual UC account.  As of this writing, the job that runs the Jupyter notebook is allocated 1 CPU and 10 GiB of memory on a cpu16 node.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Please note&#039;&#039;&#039; that before using JupyterHub on TALC, a new user has to log in to their TALC account using SSH at least once to &#039;&#039;&#039;accept the conditions of TALC use&#039;&#039;&#039;. &lt;br /&gt;
Until the conditions are accepted, the account is not activated and the JupyterHub login will not work either.&lt;br /&gt;
&lt;br /&gt;
===Selecting a partition===&lt;br /&gt;
TALC currently has the following partitions available for use. The &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cpu12&amp;lt;/code&amp;gt; partitions refer to the same nodes. The &amp;lt;code&amp;gt;cpu12&amp;lt;/code&amp;gt; partition was created to expose only the CPUs on the GPU hardware for general-purpose use. Each GPU node has 5 Tesla T4 GPUs installed, but you may only request one per job within the TALC environment.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Description&lt;br /&gt;
!Nodes&lt;br /&gt;
!Cores&lt;br /&gt;
!Memory &lt;br /&gt;
!Memory Request Limit&lt;br /&gt;
!Time Limit&lt;br /&gt;
!GPU Request per Job&lt;br /&gt;
!Network&lt;br /&gt;
|-&lt;br /&gt;
|gpu&lt;br /&gt;
|GPU Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores&lt;br /&gt;
|192 GB&lt;br /&gt;
|190 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|1x NVIDIA Corporation TU104GL [Tesla T4]&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu12&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|3&lt;br /&gt;
|12 cores&lt;br /&gt;
|192 GB&lt;br /&gt;
|190 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|cpu16&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|36&lt;br /&gt;
|16 cores&lt;br /&gt;
|64 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|General Purpose Compute&lt;br /&gt;
|2&lt;br /&gt;
|32 cores&lt;br /&gt;
|1024 GB&lt;br /&gt;
|1022 GB&lt;br /&gt;
|24 hours&lt;br /&gt;
|None&lt;br /&gt;
|40 Gbit/s InfiniBand&lt;br /&gt;
|}&lt;br /&gt;
There are some aspects to consider when selecting a partition including:&lt;br /&gt;
* Resource requirements in terms of memory and CPU cores&lt;br /&gt;
* Hardware specific requirements, such as GPU or CPU Instruction Set Extensions&lt;br /&gt;
* Partition resource limits and potential wait time&lt;br /&gt;
* Software support for parallel processing using Message Passing Interface (MPI), OpenMP, etc. For example, MPI can distribute memory across multiple nodes, so that per-node memory requirements could be lower, whereas OpenMP or single-process serial code that is restricted to one node would require a higher-memory node.&lt;br /&gt;
&lt;br /&gt;
Since resources that are requested are reserved for your job, please request only as much CPU and memory as your job requires to avoid reducing the cluster efficiency.  If you are unsure which partition to use or the specific resource requests that are appropriate for your jobs, &#039;&#039;&#039;Course instructors and TAs&#039;&#039;&#039; may contact us at [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] and we would be happy to work with you.&lt;br /&gt;
&lt;br /&gt;
=== Using a partition ===&lt;br /&gt;
&lt;br /&gt;
==== CPU only jobs ====&lt;br /&gt;
To select the &amp;lt;code&amp;gt;cpu16&amp;lt;/code&amp;gt; partition, include the following line in your batch job script:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --partition=cpu16&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;You may also start an interactive session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt;:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ salloc --time 1:00:00 -p cpu16&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
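&lt;br /&gt;
For reference, a minimal complete batch script for the &amp;lt;code&amp;gt;cpu16&amp;lt;/code&amp;gt; partition might look like the sketch below. The time, core, and memory values and the program it runs are placeholders only; adjust them to what your own job actually needs:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=cpu16      # run on the cpu16 partition&lt;br /&gt;
#SBATCH --time=02:00:00        # placeholder: maximum run time of 2 hours&lt;br /&gt;
#SBATCH --nodes=1              # single node&lt;br /&gt;
#SBATCH --ntasks=1             # one task&lt;br /&gt;
#SBATCH --cpus-per-task=4      # placeholder: 4 CPU cores&lt;br /&gt;
#SBATCH --mem=8G               # placeholder: 8 GB of memory&lt;br /&gt;
&lt;br /&gt;
# Replace with your own program; this line is only an illustration.&lt;br /&gt;
python my_analysis.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Submit the script with &amp;lt;code&amp;gt;sbatch my_job.slurm&amp;lt;/code&amp;gt; (the file name is arbitrary).&lt;br /&gt;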
&lt;br /&gt;
==== GPU jobs ====&lt;br /&gt;
In TALC, you are limited to exactly 1 GPU per job. Jobs that request 0 GPUs, or 2 or more GPUs, will not be scheduled.&lt;br /&gt;
&lt;br /&gt;
To submit a job using the &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition with one GPU request, include the following in your batch job script:&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --partition=gpu&lt;br /&gt;
#SBATCH --gpus-per-node=1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Like the previous example, you may also request interactive sessions with GPU nodes using &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt;. Just specify the &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition and the number of GPUs required. &amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ salloc --time 1:00:00 -p gpu -n 1 --gpus-per-node 1 &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;You may verify that a GPU was assigned to your job or interactive session by running &amp;lt;code&amp;gt;nvidia-smi&amp;lt;/code&amp;gt;. This command will show you the status of the GPU that was assigned to you.&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ nvidia-smi&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| NVIDIA-SMI 450.80.02    Driver Version: 450.80.02    CUDA Version: 11.0     |&lt;br /&gt;
|-------------------------------+----------------------+----------------------+&lt;br /&gt;
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |&lt;br /&gt;
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |&lt;br /&gt;
|                               |                      |               MIG M. |&lt;br /&gt;
|===============================+======================+======================|&lt;br /&gt;
|   0  Tesla T4            Off  | 00000000:3B:00.0 Off |                    0 |&lt;br /&gt;
| N/A   36C    P0    14W /  70W |      0MiB / 15109MiB |      5%      Default |&lt;br /&gt;
|                               |                      |                  N/A |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
                                                                               &lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| Processes:                                                                  |&lt;br /&gt;
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |&lt;br /&gt;
|        ID   ID                                                   Usage      |&lt;br /&gt;
|=============================================================================|&lt;br /&gt;
|  No running processes found                                                 |&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
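&lt;br /&gt;
Putting the pieces together, a minimal GPU batch script might look like the following sketch. Only the partition and GPU request lines come from the requirements above; the time, memory, and application lines are placeholders to be adjusted for your own job:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=gpu        # GPU partition&lt;br /&gt;
#SBATCH --gpus-per-node=1      # exactly one GPU, as required on TALC&lt;br /&gt;
#SBATCH --time=04:00:00        # placeholder run time&lt;br /&gt;
#SBATCH --mem=16G              # placeholder memory request&lt;br /&gt;
&lt;br /&gt;
# Confirm which GPU was assigned, then run your application.&lt;br /&gt;
nvidia-smi&lt;br /&gt;
python train_model.py          # placeholder; replace with your own program&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;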
&lt;br /&gt;
==== Partition limitations ====&lt;br /&gt;
In addition to the hardware limitations of the nodes within the partition, please be aware that there may also be policy limits imposed on your account for each partition. These limits restrict the number of cores, nodes, or GPUs that can be used at any given time. Since the limits are applied on a partition-by-partition basis, using resources in one partition should not affect the available resources you can use in another partition.&lt;br /&gt;
&lt;br /&gt;
These limits can be listed by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sacctmgr show qos format=Name,MaxWall,MaxTRESPU%20,MaxSubmitJobs&lt;br /&gt;
      Name     MaxWall            MaxTRESPU MaxSubmit &lt;br /&gt;
---------- ----------- -------------------- --------- &lt;br /&gt;
    normal  1-00:00:00                                &lt;br /&gt;
  cpulimit                           cpu=48           &lt;br /&gt;
gpucpulim+                           cpu=18           &lt;br /&gt;
  gpulimit                 cpu=2,gres/gpu=1                &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Time limits ===&lt;br /&gt;
Use the &amp;lt;code&amp;gt;--time&amp;lt;/code&amp;gt; directive to tell the job scheduler the maximum time that your job might run.  For example:&lt;br /&gt;
 #SBATCH --time=hh:mm:ss&lt;br /&gt;
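For example, a job that is expected to need at most 4 hours and 30 minutes could request:&lt;br /&gt;
 #SBATCH --time=04:30:00&lt;br /&gt;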
&lt;br /&gt;
You can use &amp;lt;code&amp;gt;scontrol show partitions&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; to see the current maximum time that a job can run.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; highlight=&amp;quot;6&amp;quot;&amp;gt;&lt;br /&gt;
$ scontrol show partitions&lt;br /&gt;
PartitionName=cpu16&lt;br /&gt;
   AllowGroups=ALL AllowAccounts=ALL AllowQos=ALL&lt;br /&gt;
   AllocNodes=ALL Default=YES QoS=cpulimit&lt;br /&gt;
   DefaultTime=NONE DisableRootJobs=NO ExclusiveUser=NO GraceTime=0 Hidden=NO&lt;br /&gt;
   MaxNodes=UNLIMITED MaxTime=1-00:00:00 MinNodes=0 LLN=NO MaxCPUsPerNode=UNLIMITED&lt;br /&gt;
   Nodes=n[1-36]&lt;br /&gt;
   PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO&lt;br /&gt;
   OverTimeLimit=NONE PreemptMode=OFF&lt;br /&gt;
   State=UP TotalCPUs=576 TotalNodes=36 SelectTypeParameters=NONE&lt;br /&gt;
   JobDefaults=(null)&lt;br /&gt;
   DefMemPerNode=UNLIMITED MaxMemPerNode=UNLIMITED&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, with &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; under the &amp;lt;code&amp;gt;TIMELIMIT&amp;lt;/code&amp;gt; column:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sinfo&lt;br /&gt;
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST&lt;br /&gt;
cpu12        up 1-00:00:00      3   idle t[1-3]&lt;br /&gt;
cpu16        up 1-00:00:00     36   idle n[1-36]&lt;br /&gt;
bigmem       up 1-00:00:00      2   idle bigmem[1-2]&lt;br /&gt;
gpu          up 1-00:00:00      3   idle t[1-3]&lt;br /&gt;
 &lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
[[Category:TALC]]&lt;br /&gt;
[[Category:Guides]]&lt;br /&gt;
{{Navbox TALC}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Altis_Login_Node_Status&amp;diff=3538</id>
		<title>Altis Login Node Status</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Altis_Login_Node_Status&amp;diff=3538"/>
		<updated>2024-09-03T21:26:15Z</updated>

		<summary type="html">&lt;p&gt;Lleung: Created page with &amp;quot;{{ARC Cluster Status}}  == System Messages == {{Message of the day item | title = Systems Operating Normally | date = 2024/09/03 | message = The ARC Cluster and the Altis login node is operational. No upcoming upgrades are planned. }}  Category:ARC {{Navbox ARC}}&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ARC Cluster Status}}&lt;br /&gt;
&lt;br /&gt;
== System Messages ==&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Systems Operating Normally&lt;br /&gt;
| date = 2024/09/03&lt;br /&gt;
| message =&lt;br /&gt;
The ARC Cluster and the Altis login node are operational. No upcoming upgrades are planned.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
[[Category:ARC]]&lt;br /&gt;
{{Navbox ARC}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Template:ARC_Cluster_Status&amp;diff=3537</id>
		<title>Template:ARC Cluster Status</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Template:ARC_Cluster_Status&amp;diff=3537"/>
		<updated>2024-09-03T21:19:06Z</updated>

		<summary type="html">&lt;p&gt;Lleung: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Cluster Status&lt;br /&gt;
| cluster = ARC&lt;br /&gt;
| status = green&lt;br /&gt;
| title = Cluster operational&lt;br /&gt;
| message = System is operational. No updates are planned.&lt;br /&gt;
&lt;br /&gt;
See the [[ARC Cluster Status]] page for system notices. &lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Think_Login_Node_Status&amp;diff=3536</id>
		<title>Think Login Node Status</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Think_Login_Node_Status&amp;diff=3536"/>
		<updated>2024-09-03T21:18:40Z</updated>

		<summary type="html">&lt;p&gt;Lleung: Created page with &amp;quot;{{ARC Cluster Status}}  == System Messages == {{Message of the day item | title = Systems Operating Normally | date = 2024/09/03 | message = The ARC Cluster and the Think login node is operational. No upcoming upgrades are planned. }}  Category:ARC {{Navbox ARC}}&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ARC Cluster Status}}&lt;br /&gt;
&lt;br /&gt;
== System Messages ==&lt;br /&gt;
{{Message of the day item&lt;br /&gt;
| title = Systems Operating Normally&lt;br /&gt;
| date = 2024/09/03&lt;br /&gt;
| message =&lt;br /&gt;
The ARC Cluster and the Think login node are operational. No upcoming upgrades are planned.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
[[Category:ARC]]&lt;br /&gt;
{{Navbox ARC}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Storage_Options&amp;diff=3503</id>
		<title>Storage Options</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Storage_Options&amp;diff=3503"/>
		<updated>2024-08-14T16:08:32Z</updated>

		<summary type="html">&lt;p&gt;Lleung: /* Support for OneDrive */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There are a few options researchers can take advantage of when storing their research data. Please take into account the purpose of the storage, appropriate research data management principles, and the data classification when choosing an appropriate storage solution. &lt;br /&gt;
&lt;br /&gt;
== Data Classification ==&lt;br /&gt;
Please review the different data classifications that are outlined by the [https://ucalgary.ca/legal-services/sites/default/files/teams/1/Standards-Legal-Information-Security-Classification-Standard.pdf Information Security Classification Standard]. There are 4 levels of data classification which are summarised in the table below.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Level&lt;br /&gt;
! Description&lt;br /&gt;
! Example&lt;br /&gt;
|-&lt;br /&gt;
| Level 1&lt;br /&gt;
| Public&lt;br /&gt;
|&lt;br /&gt;
* Reference data sets&lt;br /&gt;
* Published research data&lt;br /&gt;
|-&lt;br /&gt;
| Level 2&lt;br /&gt;
| Internal&lt;br /&gt;
|&lt;br /&gt;
* Internal memos&lt;br /&gt;
* Unpublished research data&lt;br /&gt;
* Anonymized or de-identified human subject data&lt;br /&gt;
* Library transactions and journals&lt;br /&gt;
|-&lt;br /&gt;
| Level 3&lt;br /&gt;
| Confidential&lt;br /&gt;
|&lt;br /&gt;
* Faculty/staff employment applications, personnel files, contact information&lt;br /&gt;
* Donor or prospective donor information&lt;br /&gt;
* Contracts&lt;br /&gt;
* Intellectual property&lt;br /&gt;
|-&lt;br /&gt;
| Level 4&lt;br /&gt;
| Restricted&lt;br /&gt;
|&lt;br /&gt;
* Patient identifiable health information&lt;br /&gt;
* Identifiable human subject research data&lt;br /&gt;
* Information subject to special government requirements&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
When selecting a storage option, you must use one that meets or exceeds the rated security classification.&lt;br /&gt;
&lt;br /&gt;
* See also the Collaboration, storage and file shares article in Service Now:&lt;br /&gt;
: https://ucalgary.service-now.com/it?id=it_catalog_by_category&amp;amp;sys_id=4dbb82ee13661200c524fc04e144b044&lt;br /&gt;
&lt;br /&gt;
== Research Data Management ==&lt;br /&gt;
We recommend you follow good Research Data Management (RDM) practices and ensure you have a Data Management Plan (DMP) created to guide your data&#039;s life-cycle. Your DMP can help us support the FAIR (Findable, Accessible, Interoperable, and Reusable) principles for data management.&lt;br /&gt;
&lt;br /&gt;
Please consider contacting Libraries and Cultural Resources for assistance.&lt;br /&gt;
&lt;br /&gt;
=== Resources ===&lt;br /&gt;
&lt;br /&gt;
* The DMP Assistant is a tool created specifically for Canadian scholars that aims to meet Tri-Agency requirements; it is available at: https://assistant.portagenetwork.ca/&lt;br /&gt;
* For guidance on general data management and developing a DMP, consult https://library.ucalgary.ca/guides/researchdatamanagement or contact research.data@ucalgary.ca.&lt;br /&gt;
* For support using PRISM Dataverse, the University of Calgary&#039;s institutional data repository, contact digitize@ucalgary.ca.&lt;br /&gt;
* If you need to share and preserve your large post-publication data set for a mandated period of time, consider using the national Federated Research Data Repository (FRDR). Learn more at https://www.frdr-dfdr.ca/repo/. FRDR aligns with Tri-Agency Principles as a platform for Preservation, Retention and Sharing of research data. See: [http://www.science.gc.ca/eic/site/063.nsf/eng/h_83F7624E.html Tri-Agency Statement of Principles on Digital Data Management]&lt;br /&gt;
&lt;br /&gt;
== University of Calgary RCS storage services ==&lt;br /&gt;
&lt;br /&gt;
=== Secure Compute Data Storage (SCDS) ===&lt;br /&gt;
Secure Computing Data Storage (SCDS) is a service provided by Research Computing Services that allows researchers to store restricted and confidential data. Collaboration with Level 4 data stored in SCDS is possible using ShareFile, a secure file sharing and collaboration tool by Citrix.&lt;br /&gt;
 &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Capacity&lt;br /&gt;
| 10 GB or more&lt;br /&gt;
|-&lt;br /&gt;
! Classification&lt;br /&gt;
| Level 4&lt;br /&gt;
|-&lt;br /&gt;
! Learn More&lt;br /&gt;
| Visit [https://it.ucalgary.ca/secure-computing-platform The SCDS Website]&lt;br /&gt;
|-&lt;br /&gt;
! Request Access&lt;br /&gt;
| Visit [https://ucalgary.service-now.com/kb_view.do?sysparm_article=KB0030163 ServiceNow to request access]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== ResearchFS ===&lt;br /&gt;
ResearchFS is a University of Calgary-hosted SMB/CIFS storage solution funded and operated by RCS. It is available by request to faculty and staff with active research data.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Capacity&lt;br /&gt;
| 1TB with quota increases available on request. &lt;br /&gt;
|-&lt;br /&gt;
! Classification&lt;br /&gt;
| Level 1 - 2&lt;br /&gt;
|-&lt;br /&gt;
! Request Access&lt;br /&gt;
| Visit [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=fe66b3a7db297300897e4b8b0b96199d ServiceNow to request access]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==== Service Description ====&lt;br /&gt;
You may use ResearchFS to store your active research data files. ResearchFS is intended to be used as a research group or project share. ResearchFS is available on campus or off campus using the IT supported VPN client. Information on how to download and install the VPN client can be found here: https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=880e71071381ae006f3afbb2e144b05c (IT account login may be required).&lt;br /&gt;
All ResearchFS users must have a University of Calgary IT account.&lt;br /&gt;
&lt;br /&gt;
==== Data recovery ====&lt;br /&gt;
ResearchFS takes daily snapshots shortly after midnight and keeps them for 30 days. You should be able to recover a deleted file for up to 30 days, provided it was in your share overnight. If you create and delete a file within the same day, no snapshot will be available to recover it from. ResearchFS presents backups using the Windows &#039;Previous Versions&#039; functionality. If you are not familiar with using this, or if you are on a Linux or macOS device, you can request a restore through ServiceNow.&lt;br /&gt;
&lt;br /&gt;
For backup, we replicate changes to a distant data center every hour. The storage hardware which hosts your data is located in the basement of the Math Sciences building and our backup is in the HRIC building, so in case of an on-campus disaster, your data should be safe.&lt;br /&gt;
&lt;br /&gt;
==== Support for ResearchFS ====&lt;br /&gt;
If you have questions, please contact the IT Support Centre.&lt;br /&gt;
: Mon – Fri: 8:30 am – 5:00 pm; Sat, Sun &amp;amp; holidays: 10:00 am – 2:00 pm.&lt;br /&gt;
: Live Chat: ucalgary.ca/it&lt;br /&gt;
: Email: itsupport@ucalgary.ca&lt;br /&gt;
: Phone: 403.210.9300&lt;br /&gt;
: In person: 773 Math Science&lt;br /&gt;
&lt;br /&gt;
=== ARC Cluster Storage ===&lt;br /&gt;
ARC storage is used to &#039;&#039;&#039;support workflows on the ARC computing cluster&#039;&#039;&#039;. The expectation is that storage on ARC will only be used for active and upcoming computational projects. &lt;br /&gt;
It is not suitable for long-term or archival storage, as it is not backed up and is not guaranteed to be available for the time periods that are typical of archiving.&lt;br /&gt;
ARC is a research cluster, which means it has high performance but can be stopped for required maintenance when needed. &lt;br /&gt;
Thus, ARC cannot be relied on for any kind of service that requires constant availability. &lt;br /&gt;
This means, in turn, that ARC&#039;s storage cannot and &#039;&#039;&#039;should not be used as a main storage facility for research data&#039;&#039;&#039;. &lt;br /&gt;
The &#039;&#039;&#039;master copy&#039;&#039;&#039; of research data should be &#039;&#039;&#039;stored elsewhere&#039;&#039;&#039;, and only the part of that data needed for computational analysis is expected to be copied to ARC.&lt;br /&gt;
&lt;br /&gt;
==== ARC Home Directories ====&lt;br /&gt;
Every user account on ARC has a static 500 GB allocation of storage and a maximum of 1.5 million files (including directories). This cannot be increased or decreased. Home directory storage is connected via a network file system to the rest of the cluster and supports fast data transfer to memory on compute nodes. This also means that basic file system commands (like &amp;lt;code&amp;gt;ls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;find&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;du&amp;lt;/code&amp;gt;) take longer to run as the number of files in your home directory increases. In particular, we strongly encourage users to stay under 100,000 files if at all possible. This can be achieved by combining smaller data files into single larger files, using structured data formats rather than a large number of text files, or combining collections of files that will be used together into archives (tar, dar, etc.), as in the sketch below. Since top-level permissions on home directories are set to prevent other users from reading or executing, home directories are not suitable for sharing data directly with colleagues working on ARC. A Research Group Allocation is a more appropriate place for storing shared data or very large data sets that will be used as part of active computational projects.&lt;br /&gt;
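&lt;br /&gt;
As an illustration only (the directory and file names here are hypothetical), a collection of small result files can be bundled into a single compressed archive, reducing the file count in your home directory:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Bundle a directory of many small files into one compressed archive&lt;br /&gt;
tar -czf run_outputs.tar.gz run_outputs/&lt;br /&gt;
&lt;br /&gt;
# List the archive contents without extracting&lt;br /&gt;
tar -tzf run_outputs.tar.gz&lt;br /&gt;
&lt;br /&gt;
# Later, extract the files when they are needed again&lt;br /&gt;
tar -xzf run_outputs.tar.gz&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;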
&lt;br /&gt;
==== ARC /work and /bulk Group Allocation ====&lt;br /&gt;
Any group member who wants to use the shared storage should send an email to support@hpc.ucalgary.ca to be added to the access group and CC the PI/data owner. &#039;&#039;&#039;This confirms that the PI approves the group member&#039;s request for access to the shared storage.&#039;&#039;&#039; Please note that the access permissions inside the directory are expected to be managed by the data owners.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
All requests should answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* How much storage is requested and why is that the amount that you need?&lt;br /&gt;
A rationale for a request can be a formal data management plan or something more informal, like a rough estimate of the primary dataset used for a project and a rough estimate of the size of outputs expected from the computations you plan to run on ARC over the &#039;&#039;&#039;next year.&#039;&#039;&#039;&lt;br /&gt;
* What is the requested &#039;&#039;&#039;allocation name&#039;&#039;&#039;? (typically something like &amp;lt;PI name&amp;gt;_lab, &amp;lt;code&amp;gt;smith_lab&amp;lt;/code&amp;gt;, for example)&lt;br /&gt;
* What is the &#039;&#039;&#039;data classification&#039;&#039;&#039; using the University of Calgary data security classification system?&lt;br /&gt;
* Which user or users would be the &#039;&#039;&#039;owner&#039;&#039;&#039; of the allocation? (Full Name and UCalgary Email address, typically the requesting PI but there may be co-PIs)&lt;br /&gt;
* Which members of the allocation should be able to &#039;&#039;&#039;request access&#039;&#039;&#039; for new users? (Full Name and UCalgary Email address for active ARC users)&lt;br /&gt;
* What is the &#039;&#039;&#039;faculty&#039;&#039;&#039; of the owner or owners?&lt;br /&gt;
* Please provide a short description of the lab.&lt;br /&gt;
* Please provide a brief &#039;&#039;&#039;numerical estimate&#039;&#039;&#039; of the required storage space based on the &#039;&#039;&#039;projects&#039;&#039;&#039; that will use the allocation and their storage &#039;&#039;&#039;requirements&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example 1&#039;&#039;&#039;: &amp;quot;We will be processing a 1 TB dataset by performing 100 experimental runs. &lt;br /&gt;
Each experiment will be processed to produce a 6GB output, giving 600GB of the total output data. &lt;br /&gt;
We will also need 400GB additional space for post-processing and data management. &lt;br /&gt;
Thus, we would like to request &#039;&#039;&#039;2TB&#039;&#039;&#039; of shared space in total.&amp;quot;   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example 2&#039;&#039;&#039;: &amp;quot;3 members of our research group need additional shared space on ARC for their independent projects.&lt;br /&gt;
Project 1 starts with 100GB of initial data and is expected to generate 800GB of the output results. &lt;br /&gt;
Project 2 is going to use simulations and does not use any input data but is expected to generate 2TB of the simulated data for further processing.&lt;br /&gt;
The processing will require 200GB of additional space.&lt;br /&gt;
Project 3 will be working on a 1TB dataset and is expected to generate about 1TB of the output data. &lt;br /&gt;
These projects, therefore, will require 5.1TB of storage. &lt;br /&gt;
For convenience of data manipulation and management we would also like to have additional 400GB of extra storage space.&lt;br /&gt;
Therefore, we would like to request &#039;&#039;&#039;5.5TB&#039;&#039;&#039; of shared storage space in total.&amp;quot;   &lt;br /&gt;
&lt;br /&gt;
Work and Bulk storage can be considerably larger than the home directory allocations. However, there are limits on what RCS can provide as ARC storage provides high-speed access and is expensive to purchase. Typically, &#039;&#039;&#039;any request over 10TB&#039;&#039;&#039; will require some discussion. Work and Bulk allocations differ in a few ways that influence how they are used. Work storage is faster to access as part of computational jobs on ARC although the impact is small for jobs that don&#039;t involve enormous numbers of reads. Bulk storage is designed to be a target for instrument data (which is typically processed in a way that reads data a small number of times per job) and is capable of mounting instruments elsewhere on campus using SMB. A number of questions come up frequently about Work and Bulk storage and these are addressed in an [[Group Storage Allocation FAQ | FAQ]].&lt;br /&gt;
&lt;br /&gt;
== University of Calgary IT storage services ==&lt;br /&gt;
You can learn more about Information Technologies Storage solutions at https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=d785de4e1b3ed41422ba4158dc4bcbf1&lt;br /&gt;
&lt;br /&gt;
=== OneDrive for Business ===&lt;br /&gt;
OneDrive for Business is a storage solution provided by Microsoft and is available to all faculty and staff.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Capacity&lt;br /&gt;
| 5 TB with quota increases [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=438e6d8313896a0053f2d7b2e144b0b9 available on request].&lt;br /&gt;
|-&lt;br /&gt;
! Classification&lt;br /&gt;
| Level 1 - 4&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
You may use OneDrive for Business to store your personal and work-related files. Files stored within OneDrive are private to you by default, but you have the option to allow sharing and collaboration with others. &lt;br /&gt;
OneDrive for Business cannot be used as a department or project share space. There is no group/lab offering with OneDrive. &lt;br /&gt;
&lt;br /&gt;
While OneDrive provides a secure and compliant location from an IT Security standpoint, it is not the most suitable location for data that the PI remains accountable for during the 5 years following completion of a study. This is not a security issue, but a data management issue.&lt;br /&gt;
&lt;br /&gt;
For example, if a study were using the personal OneDrive of one of the researchers to store all of its records, and that researcher were to leave the university, the OneDrive would be gone in 30 days.&lt;br /&gt;
&lt;br /&gt;
Microsoft provides automation capabilities for its O365 products. If you have a Windows machine, you can use the automation product &#039;Flow&#039; (now Power Automate) to copy a file to a local file system whenever a new file is created in OneDrive.&lt;br /&gt;
&lt;br /&gt;
To back up data residing on ARC to your personal OneDrive allocation, please see [[How to transfer data#rclone: rsync for cloud storage]]; a brief sketch is also shown below.&lt;br /&gt;
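&lt;br /&gt;
As a rough illustration only (the remote name &amp;lt;code&amp;gt;onedrive&amp;lt;/code&amp;gt; and the paths are hypothetical, and the remote must first be set up with &amp;lt;code&amp;gt;rclone config&amp;lt;/code&amp;gt;), a copy from ARC to OneDrive might look like this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# One-time interactive setup of a OneDrive remote (follow the prompts)&lt;br /&gt;
rclone config&lt;br /&gt;
&lt;br /&gt;
# Copy a results directory from ARC to a folder in OneDrive (names are placeholders)&lt;br /&gt;
rclone copy /home/username/results onedrive:ARC_backup/results --progress&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;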
&lt;br /&gt;
OneDrive requires Multi-Factor Authentication (MFA) enabled on your University of Calgary IT account. &lt;br /&gt;
&lt;br /&gt;
More information can be located in the following article https://ucalgary.service-now.com/kb_view.do?sysparm_article=KB0032351&lt;br /&gt;
&lt;br /&gt;
University of Calgary OneDrive data is reportedly hosted in Canada (Markham, Ontario).&lt;br /&gt;
&lt;br /&gt;
==== Support for OneDrive ====&lt;br /&gt;
If you have questions, please contact the [https://ucalgary.service-now.com/it UService Support Centre].&lt;br /&gt;
&lt;br /&gt;
==== Other Resources ====&lt;br /&gt;
For more information on OneDrive for Business:&lt;br /&gt;
* Operating Level of Agreement KB0032404 (https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=7f57bddcdb56a3047cab5068dc9619b6)&lt;br /&gt;
*OneDrive for Business Getting Started KB0032351 (https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=60994170db2da7487cab5068dc961900)&lt;br /&gt;
*If you are above 90% of your OneDrive quota, you can request an increase here: https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=438e6d8313896a0053f2d7b2e144b0b9. PLEASE NOTE: Microsoft will only increase an allocation while the cloud storage is more than 90% full. Please log in to your O365 cloud account to review your usage before making your request.&lt;br /&gt;
&lt;br /&gt;
Any questions regarding whether data hosted on OneDrive is subject to US jurisdiction discovery or access should be directed to:&lt;br /&gt;
*https://cumming.ucalgary.ca/research-institutes/csm-research-services/legal-research-services (CSM researchers.)&lt;br /&gt;
*https://research.ucalgary.ca/contact/research-services (Not CSM Researchers)&lt;br /&gt;
*https://www.ucalgary.ca/legalservices/  (for teaching/learning – non research enquiries that make their way to you)&lt;br /&gt;
&lt;br /&gt;
=== Office365 SharePoint for research groups ===&lt;br /&gt;
To be determined....&lt;br /&gt;
&lt;br /&gt;
Researchers will be able to request an Office 365 SharePoint site for a group at some point in the future &lt;br /&gt;
which could be considered a group cloud sharing platform.&lt;br /&gt;
&lt;br /&gt;
* The official service page:&lt;br /&gt;
: https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=b55f2f72132f5240b5b4df82e144b085&lt;br /&gt;
&lt;br /&gt;
== Digital Research Alliance of Canada storage services ==&lt;br /&gt;
&lt;br /&gt;
=== Storage on the Alliance HPC clusters ===&lt;br /&gt;
* Alliance Wiki article &amp;quot;Storage and file management&amp;quot;:&lt;br /&gt;
: https://docs.alliancecan.ca/wiki/Storage_and_file_management&lt;br /&gt;
&lt;br /&gt;
=== The Alliance NextCloud ===&lt;br /&gt;
For personal or Level 1 data, you may use an external solution from the Alliance.&lt;br /&gt;
You must have an Alliance account to use the service.&lt;br /&gt;
It offers functionality similar to Dropbox or Google Drive. &lt;br /&gt;
:https://nextcloud.computecanada.ca&lt;br /&gt;
: 100 GB of storage that can be shared between your computers.&lt;br /&gt;
: Alliance documentation: https://docs.alliancecan.ca/wiki/Nextcloud&lt;br /&gt;
&lt;br /&gt;
== Commercial Cloud Based Storage Options ==&lt;br /&gt;
&lt;br /&gt;
=== Amazon Web Services ===&lt;br /&gt;
Provided by &#039;&#039;&#039;Amazon Web Services, Inc.&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
* Pricing calculator: https://calculator.aws&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
AWS provides many different kinds of services, including &#039;&#039;&#039;storage services&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
These options can be a solution for your research needs, but: &lt;br /&gt;
* they &#039;&#039;&#039;can be expensive&#039;&#039;&#039;, depending on your needs and the amount of data;&lt;br /&gt;
* the &#039;&#039;&#039;pricing scheme is complex&#039;&#039;&#039; and can be confusing for new users;&lt;br /&gt;
* the &#039;&#039;&#039;number of options can be overwhelming&#039;&#039;&#039;. Much of this variety is designed to provide pricing flexibility, not to increase functionality.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The key points:&lt;br /&gt;
* &#039;&#039;&#039;Uploading data&#039;&#039;&#039; to AWS is &#039;&#039;&#039;free&#039;&#039;&#039;.&lt;br /&gt;
* &#039;&#039;&#039;Storing data&#039;&#039;&#039; on AWS storage is a &#039;&#039;&#039;paid service&#039;&#039;&#039;.&lt;br /&gt;
* &#039;&#039;&#039;Downloading data&#039;&#039;&#039; from AWS storage to your computer is a &#039;&#039;&#039;paid service&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Federated Research Data Repository (FRDR) ==&lt;br /&gt;
The Federated Research Data Repository (FRDR) is a suitable storage solution for long-term archival storage of research datasets used in published research work. FRDR is a bilingual publishing platform for sharing and preserving Canadian research data.&lt;br /&gt;
It is a curated, general-purpose repository, custom built for large datasets.&lt;br /&gt;
FRDR is run by the Digital Research Alliance of Canada.&lt;br /&gt;
&lt;br /&gt;
For more information on FRDR visit their web site:  https://www.frdr-dfdr.ca/repo/&lt;br /&gt;
&lt;br /&gt;
[[Category:Administration]]&lt;br /&gt;
{{Navbox Administration}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Storage_Options&amp;diff=3499</id>
		<title>Storage Options</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Storage_Options&amp;diff=3499"/>
		<updated>2024-08-13T21:08:05Z</updated>

		<summary type="html">&lt;p&gt;Lleung: /* Support for ResearchFS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There are a few options researchers can take advantage of when storing their research data. Please take into account the purpose of the storage, appropriate research data management principles, and the data classification when choosing an appropriate storage solution. &lt;br /&gt;
&lt;br /&gt;
== Data Classification ==&lt;br /&gt;
Please review the different data classifications that are outlined by the [https://ucalgary.ca/legal-services/sites/default/files/teams/1/Standards-Legal-Information-Security-Classification-Standard.pdf Information Security Classification Standard]. There are 4 levels of data classification which are summarised in the table below.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Level&lt;br /&gt;
! Description&lt;br /&gt;
! Example&lt;br /&gt;
|-&lt;br /&gt;
| Level 1&lt;br /&gt;
| Public&lt;br /&gt;
|&lt;br /&gt;
* Reference data sets&lt;br /&gt;
* Published research data&lt;br /&gt;
|-&lt;br /&gt;
| Level 2&lt;br /&gt;
| Internal&lt;br /&gt;
|&lt;br /&gt;
* Internal memos&lt;br /&gt;
* Unpublished research data&lt;br /&gt;
* Anonymized or de-identified human subject data&lt;br /&gt;
* Library transactions and journals&lt;br /&gt;
|-&lt;br /&gt;
| Level 3&lt;br /&gt;
| Confidential&lt;br /&gt;
|&lt;br /&gt;
* Faculty/staff employment applications, personnel files, contact information&lt;br /&gt;
* Donor or prospective donor information&lt;br /&gt;
* Contracts&lt;br /&gt;
* Intellectual property&lt;br /&gt;
|-&lt;br /&gt;
| Level 4&lt;br /&gt;
| Restricted&lt;br /&gt;
|&lt;br /&gt;
* Patient identifiable health information&lt;br /&gt;
* identifiable human subject research data&lt;br /&gt;
* information subject to special government requirements&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
When selecting a storage option, you must use one that meets or exceeds the rated security classification.&lt;br /&gt;
&lt;br /&gt;
* See also the Collaboration, storage and file shares article in Service Now:&lt;br /&gt;
: https://ucalgary.service-now.com/it?id=it_catalog_by_category&amp;amp;sys_id=4dbb82ee13661200c524fc04e144b044&lt;br /&gt;
&lt;br /&gt;
== Research Data Management ==&lt;br /&gt;
We recommend you follow good Research Data Management (RDM) practices and ensure you have a Data Management Plan (DMP) created to guide your data&#039;s life-cycle. Your DMP can help us support the FAIR (Findable, Accessible, Interoperable, and Reusable) principles for data management.&lt;br /&gt;
&lt;br /&gt;
Please consider contacting Libraries and Cultural Resources for assistance.&lt;br /&gt;
&lt;br /&gt;
=== Resources ===&lt;br /&gt;
&lt;br /&gt;
* The DMP Assistant is a tool created specifically for Canadian scholars that aims to meet Tri-Agency requirements; it is available at: https://assistant.portagenetwork.ca/&lt;br /&gt;
* For guidance on general data management and developing a DMP, consult https://library.ucalgary.ca/guides/researchdatamanagement or contact research.data@ucalgary.ca.&lt;br /&gt;
* For support using PRISM Dataverse, the University of Calgary&#039;s institutional data repository, contact digitize@ucalgary.ca.&lt;br /&gt;
* If you need to share and preserve your large post-publication data set for a mandated period of time, consider using the national Federated Research Data Repository (FRDR). Learn more at https://www.frdr-dfdr.ca/repo/. FRDR aligns with Tri-Agency Principles as a platform for Preservation, Retention and Sharing of research data. see: [http://www.science.gc.ca/eic/site/063.nsf/eng/h_83F7624E.html Tri-Agency Statement of Principles on Digital Data Management]&lt;br /&gt;
&lt;br /&gt;
== University of Calgary RCS storage services ==&lt;br /&gt;
&lt;br /&gt;
=== Secure Compute Data Storage (SCDS) ===&lt;br /&gt;
Secure Computing Data Storage (SCDS) is a service provided by Research Computing Services that allows researchers to store restricted and confidential data. Collaboration with Level 4 data stored in SCDS is possible using ShareFile, a secure file sharing and collaboration tool by Citrix.&lt;br /&gt;
 &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Capacity&lt;br /&gt;
| 10 GB or more&lt;br /&gt;
|-&lt;br /&gt;
! Classification&lt;br /&gt;
| Level 4&lt;br /&gt;
|-&lt;br /&gt;
! Learn More&lt;br /&gt;
| Visit [https://it.ucalgary.ca/secure-computing-platform The SCDS Website]&lt;br /&gt;
|-&lt;br /&gt;
! Request Access&lt;br /&gt;
| Visit [https://ucalgary.service-now.com/kb_view.do?sysparm_article=KB0030163 ServiceNow to request access]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== ResearchFS ===&lt;br /&gt;
ResearchFS is a University of Calgary-hosted SMB/CIFS storage solution funded and operated by RCS. It is available by request to faculty and staff with active research data.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Capacity&lt;br /&gt;
| 1TB with quota increases available on request. &lt;br /&gt;
|-&lt;br /&gt;
! Classification&lt;br /&gt;
| Level 1 - 2&lt;br /&gt;
|-&lt;br /&gt;
! Request Access&lt;br /&gt;
| Visit [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=fe66b3a7db297300897e4b8b0b96199d ServiceNow to request access]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==== Service Description ====&lt;br /&gt;
You may use ResearchFS to store your active research data files. ResearchFS is intended to be used as a research group or project share. ResearchFS is available on campus or off campus using the IT supported VPN client. Information on how to download and install the VPN client can be found here: https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=880e71071381ae006f3afbb2e144b05c (IT account login may be required).&lt;br /&gt;
All ResearchFS users must have a University of Calgary IT account.&lt;br /&gt;
&lt;br /&gt;
==== Data recovery ====&lt;br /&gt;
ResearchFS takes daily snapshots shortly after midnight and keeps them for 30 days. You should be able to recover a deleted file for up to 30 days, provided it was in your share overnight. If you create and delete a file within the same day, no snapshot will be available to recover it from. ResearchFS presents backups using the Windows &#039;Previous Versions&#039; functionality. If you are not familiar with using this, or if you are on a Linux or macOS device, you can request a restore through ServiceNow.&lt;br /&gt;
&lt;br /&gt;
For backup, we replicate changes to a distant data center every hour. The storage hardware which hosts your data is located in the basement of the Math Sciences building and our backup is in the HRIC building, so in case of an on campus disaster, your data should be safe.&lt;br /&gt;
&lt;br /&gt;
==== Support for ResearchFS ====&lt;br /&gt;
If you have questions, please contact the IT Support Centre.&lt;br /&gt;
: Mon – Fri: 8:30 am – 5:00 pm; Sat, Sun &amp;amp; holidays: 10:00 am – 2:00 pm.&lt;br /&gt;
: Live Chat: ucalgary.ca/it&lt;br /&gt;
: Email: itsupport@ucalgary.ca&lt;br /&gt;
: Phone: 403.210.9300&lt;br /&gt;
: In person: 773 Math Science&lt;br /&gt;
&lt;br /&gt;
=== ARC Cluster Storage ===&lt;br /&gt;
ARC storage is used to &#039;&#039;&#039;support workflows on the ARC computing cluster&#039;&#039;&#039;. The expectation is that storage on ARC will only be used for active and upcoming computational projects. &lt;br /&gt;
It is not suitable for long-term or archival storage as it is not backed-up and is not guaranteed to be available for the time periods that are typical of archiving.&lt;br /&gt;
ARC is a research cluster, which means it has high performance but can be stopped for required maintenance when needed. &lt;br /&gt;
Thus, ARC cannot be relied on for any kind of service that requires constant availability. &lt;br /&gt;
This means, in turn, that ARC&#039;s storage cannot and &#039;&#039;&#039;should not be used as a main storage facility for research data&#039;&#039;&#039;. &lt;br /&gt;
The &#039;&#039;&#039;master copy&#039;&#039;&#039; of research data should be &#039;&#039;&#039;stored elsewhere&#039;&#039;&#039;, and only the part of that data needed for computational analysis is expected to be copied to ARC.&lt;br /&gt;
&lt;br /&gt;
==== ARC Home Directories ====&lt;br /&gt;
Every user account on ARC has a static 500GB allocation of storage and a maximum of 1.5 million files (including directories). This cannot be increased or decreased. Home directory storage is connected via a network file system to the rest of the cluster and supports fast data transfer to memory on compute nodes. This also means that basic file system commands (like &amp;lt;code&amp;gt;ls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;find&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;du&amp;lt;/code&amp;gt;) take longer to run as the number of files in your home directory increases. In particular, we strongly encourage users to stay under 100000 files if it is at all possible. This can be achieved by combining smaller data files into single larger files, using structured data formats rather than large number of text files, or combining collections of files that will be used together into archives (tar, dar, etc). Since top level permissions on home directories are set to prevent other users from reading or executing, home directories are not suitable for sharing data directly with colleagues working on ARC. A Research Group Allocation is a more appropriate place for storing shared data or very large data sets that will be used as part of active computational projects.   &lt;br /&gt;
&lt;br /&gt;
==== ARC /work and /bulk Group Allocation ====&lt;br /&gt;
Any group member who wants to use the shared storage should send an email to support@hpc.ucalgary.ca to be added to the access group and CC the PI/data owner. &#039;&#039;&#039;This confirms that the PI approves the group member&#039;s request for access to the shared storage.&#039;&#039;&#039; Please note that the access permissions inside the directory are expected to be managed by the data owners.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
All requests should answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* How much storage is requested and why is that the amount that you need?&lt;br /&gt;
A rationale for a request can be a formal data management plan or something more informal, like a rough estimate of the primary dataset used for a project and a rough estimate of the size of outputs expected from the computations you plan to run on ARC over the &#039;&#039;&#039;next year.&#039;&#039;&#039;&lt;br /&gt;
* What is the requested &#039;&#039;&#039;allocation name&#039;&#039;&#039;? (typically something like &amp;lt;PI name&amp;gt;_lab, &amp;lt;code&amp;gt;smith_lab&amp;lt;/code&amp;gt;, for example)&lt;br /&gt;
* What is the &#039;&#039;&#039;data classification&#039;&#039;&#039; using the University of Calgary data security classification system?&lt;br /&gt;
* Which user or users would be the &#039;&#039;&#039;owner&#039;&#039;&#039; of the allocation? (Full Name and UCalgary Email address, typically the requesting PI but there may be co-PIs)&lt;br /&gt;
* Which members of the allocation should be able to &#039;&#039;&#039;request access&#039;&#039;&#039; for new users? (Full Name and UCalgary Email address for active ARC users)&lt;br /&gt;
* What is the &#039;&#039;&#039;faculty&#039;&#039;&#039; of the owner or owners?&lt;br /&gt;
* Please provide a short description of the lab.&lt;br /&gt;
* Please provide a brief &#039;&#039;&#039;numerical estimate&#039;&#039;&#039; of the required storage space based on &#039;&#039;&#039;projects&#039;&#039;&#039; that will use the allocation and their storage &#039;&#039;&#039;requirements&#039;&#039;&#039; .&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example 1&#039;&#039;&#039;: &amp;quot;We will be processing a 1T dataset by performing 100 experimental runs. &lt;br /&gt;
Each experiment will be processed to produce a 6GB output, giving 600GB of the total output data. &lt;br /&gt;
We will also need 400GB additional space for post-processing and data management. &lt;br /&gt;
Thus, we would like to request &#039;&#039;&#039;2TB&#039;&#039;&#039; of shared space in total.&amp;quot;   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example 2&#039;&#039;&#039;: &amp;quot;3 members of our research group need additional shared space on ARC for their independent projects.&lt;br /&gt;
Project 1 starts with 100GB of initial data and is expected to generate 800GB of the output results. &lt;br /&gt;
Project 2 is going to use simulations and does not use any input data but is expected to generate 2TB of the simulated data for further processing.&lt;br /&gt;
The processing will require 200GB of additional space.&lt;br /&gt;
Project 3 will be working on a 1TB dataset and is expected to generate about 1TB of the output data. &lt;br /&gt;
These projects, therefore, will require 5.1TB of storage. &lt;br /&gt;
For convenience of data manipulation and management we would also like to have additional 400GB of extra storage space.&lt;br /&gt;
Therefore, we would like to request &#039;&#039;&#039;5.5TB&#039;&#039;&#039; of shared storage space in total.&amp;quot;   &lt;br /&gt;
&lt;br /&gt;
Work and Bulk storage can be considerably larger than the home directory allocations. However, there are limits on what RCS can provide as ARC storage provides high-speed access and is expensive to purchase. Typically, &#039;&#039;&#039;any request over 10TB&#039;&#039;&#039; will require some discussion. Work and Bulk allocations differ in a few ways that influence how they are used. Work storage is faster to access as part of computational jobs on ARC although the impact is small for jobs that don&#039;t involve enormous numbers of reads. Bulk storage is designed to be a target for instrument data (which is typically processed in a way that reads data a small number of times per job) and is capable of mounting instruments elsewhere on campus using SMB. A number of questions come up frequently about Work and Bulk storage and these are addressed in an [[Group Storage Allocation FAQ | FAQ]].&lt;br /&gt;
&lt;br /&gt;
== University of Calgary IT storage services ==&lt;br /&gt;
You can learn more about Information Technologies Storage solutions at https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=d785de4e1b3ed41422ba4158dc4bcbf1&lt;br /&gt;
&lt;br /&gt;
=== OneDrive for Business ===&lt;br /&gt;
OneDrive for Business is a storage solution provided by Microsoft and is available to all faculty and staff.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Capacity&lt;br /&gt;
| 5 TB with quota increases [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=438e6d8313896a0053f2d7b2e144b0b9 available on request].&lt;br /&gt;
|-&lt;br /&gt;
! Classification&lt;br /&gt;
| Level 1 - 4&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
You may use OneDrive for Business to store your personal and work-related files. Files stored within OneDrive are private to you by default, but you have the option to allow sharing and collaboration with others. &lt;br /&gt;
OneDrive for Business cannot be used as a department or project share space. There is no group/lab offering with OneDrive. &lt;br /&gt;
&lt;br /&gt;
While OneDrive provides a secure and compliant location from an IT Security standpoint, it is not the most suitable location for data that the PI remains accountable for during the 5 years following completion of a study. This is not a security issue, but a data management issue.&lt;br /&gt;
&lt;br /&gt;
For example, if a study was using a personal OneDrive of one of the researchers to store all the records, and the researcher was to leave the university, this OneDrive would be gone in 30 days.&lt;br /&gt;
&lt;br /&gt;
MS has an automation capability for their O365 products. If you have a windows OS machine, you can use the automation product ‘Flow’ to copy a file to a local file system when a new file is created on OneDrive.&lt;br /&gt;
&lt;br /&gt;
To back up data residing on ARC to your personal OneDrive allocation please see:  [[How to transfer data#rclone: rsync for cloud storage]]&lt;br /&gt;
&lt;br /&gt;
OneDrive requires Multi-Factor Authentication (MFA) enabled on your University of Calgary IT account. &lt;br /&gt;
&lt;br /&gt;
More information can be located in the following article https://ucalgary.service-now.com/kb_view.do?sysparm_article=KB0032351&lt;br /&gt;
&lt;br /&gt;
University of Calgary OneDrive data is reportedly hosted in Canada (Markham, Ontario).&lt;br /&gt;
&lt;br /&gt;
==== Support for OneDrive ====&lt;br /&gt;
If you have questions, please contact the [https://ucalgary.service-now.com/it UService Support Centre].&lt;br /&gt;
: Email: it@ucalgary.ca&lt;br /&gt;
: Phone: 403.210.9300 or 1.888.342.3802&lt;br /&gt;
: Mon – Fri: 8:00 a.m. to noon and 1:00 p.m. to 4:30 p.m. (closed over the lunch hour)&lt;br /&gt;
: Walk-in service: Math Sciences 7th floor, Room 773, Tues – Thurs: 1:00 p.m. to 4:30 p.m.&lt;br /&gt;
&lt;br /&gt;
==== Other Resources ====&lt;br /&gt;
For more information on OneDrive for Business:&lt;br /&gt;
* Operating Level of Agreement KB0032404 (https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=7f57bddcdb56a3047cab5068dc9619b6)&lt;br /&gt;
*OneDrive for Business Getting Started KB0032351 (https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=60994170db2da7487cab5068dc961900)&lt;br /&gt;
*If you are above 90% of your OneDrive quota, you can request an increase here: ( https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=438e6d8313896a0053f2d7b2e144b0b9) PLEASE NOTE: Microsoft will only increase an allocation while the Cloud Storage is more than 90% full. Please log into your O365 cloud account to review before making your request.&lt;br /&gt;
&lt;br /&gt;
Any questions regarding if data hosted on OneDrive is subject to US jurisdiction discovery or access should be directed to:&lt;br /&gt;
*https://cumming.ucalgary.ca/research-institutes/csm-research-services/legal-research-services (CSM researchers.)&lt;br /&gt;
*https://research.ucalgary.ca/contact/research-services (Not CSM Researchers)&lt;br /&gt;
*https://www.ucalgary.ca/legalservices/  (for teaching/learning – non research enquiries that make their way to you)&lt;br /&gt;
&lt;br /&gt;
=== Office365 SharePoint for research groups ===&lt;br /&gt;
To be determined....&lt;br /&gt;
&lt;br /&gt;
Researchers will be able to request an Office 365 SharePoint site for a group at some point in the future &lt;br /&gt;
which could be considered a group cloud sharing platform.&lt;br /&gt;
&lt;br /&gt;
* The official service page:&lt;br /&gt;
: https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=b55f2f72132f5240b5b4df82e144b085&lt;br /&gt;
&lt;br /&gt;
== Digital Research Alliance of Canada storage services ==&lt;br /&gt;
&lt;br /&gt;
=== Storage on the Alliance HPC clusters ===&lt;br /&gt;
* Alliance Wiki article &amp;quot;Storage and file management&amp;quot;:&lt;br /&gt;
: https://docs.alliancecan.ca/wiki/Storage_and_file_management&lt;br /&gt;
&lt;br /&gt;
=== The Alliance NextCloud ===&lt;br /&gt;
For personal or Level 1 data, you may use an external solution from the Alliance.&lt;br /&gt;
You must have an Alliance account to use the service.&lt;br /&gt;
It offers functionality similar to Dropbox or Google Drive. &lt;br /&gt;
:https://nextcloud.computecanada.ca&lt;br /&gt;
: 100 GB of storage that can be shared between your computers.&lt;br /&gt;
: Alliance documentation: https://docs.alliancecan.ca/wiki/Nextcloud&lt;br /&gt;
&lt;br /&gt;
== Commercial Cloud Based Storage Options ==&lt;br /&gt;
&lt;br /&gt;
=== Amazon Web Services ===&lt;br /&gt;
Provided by &#039;&#039;&#039;Amazon Web Services, Inc.&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
* Pricing calculator: https://calculator.aws&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
AWS provides very many different kinds of services, including &#039;&#039;&#039;storage services&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
These options can be a solution for your research needs, but: &lt;br /&gt;
* they &#039;&#039;&#039;can be expensive&#039;&#039;&#039;, depending on your needs and the amount of data;&lt;br /&gt;
* the &#039;&#039;&#039;pricing scheme is complex&#039;&#039;&#039; and can be confusing for new users;&lt;br /&gt;
* the &#039;&#039;&#039;number of options can be overwhelming&#039;&#039;&#039;. Much of this variety is designed to provide pricing flexibility, not to increase functionality.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The key points:&lt;br /&gt;
* &#039;&#039;&#039;Uploading data&#039;&#039;&#039; to AWS is &#039;&#039;&#039;free&#039;&#039;&#039;.&lt;br /&gt;
* &#039;&#039;&#039;Storing data&#039;&#039;&#039; on AWS storage is a &#039;&#039;&#039;paid service&#039;&#039;&#039;.&lt;br /&gt;
* &#039;&#039;&#039;Downloading data&#039;&#039;&#039; from AWS storage to your computer is a &#039;&#039;&#039;paid service&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Federated Research Data Repository (FRDR) ==&lt;br /&gt;
The Federated Research Data Repository (FRDR) is a suitable storage solution for long-term archival storage of research datasets used in published research work. FRDR is a bilingual publishing platform for sharing and preserving Canadian research data.&lt;br /&gt;
It is a curated, general-purpose repository, custom built for large datasets.&lt;br /&gt;
FRDR is run by the Digital Research Alliance of Canada.&lt;br /&gt;
&lt;br /&gt;
For more information on FRDR visit their web site:  https://www.frdr-dfdr.ca/repo/&lt;br /&gt;
&lt;br /&gt;
[[Category:Administration]]&lt;br /&gt;
{{Navbox Administration}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Storage_Options&amp;diff=3498</id>
		<title>Storage Options</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Storage_Options&amp;diff=3498"/>
		<updated>2024-08-13T17:44:06Z</updated>

		<summary type="html">&lt;p&gt;Lleung: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There are a few options researchers can take advantage of when storing their research data. Please take into account the purpose of the storage, appropriate research data management principles, and the data classification when choosing an appropriate storage solution. &lt;br /&gt;
&lt;br /&gt;
== Data Classification ==&lt;br /&gt;
Please review the different data classifications that are outlined by the [https://ucalgary.ca/legal-services/sites/default/files/teams/1/Standards-Legal-Information-Security-Classification-Standard.pdf Information Security Classification Standard]. There are 4 levels of data classification which are summarised in the table below.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Level&lt;br /&gt;
! Description&lt;br /&gt;
! Example&lt;br /&gt;
|-&lt;br /&gt;
| Level 1&lt;br /&gt;
| Public&lt;br /&gt;
|&lt;br /&gt;
* Reference data sets&lt;br /&gt;
* Published research data&lt;br /&gt;
|-&lt;br /&gt;
| Level 2&lt;br /&gt;
| Internal&lt;br /&gt;
|&lt;br /&gt;
* Internal memos&lt;br /&gt;
* Unpublished research data&lt;br /&gt;
* Anonymized or de-identified human subject data&lt;br /&gt;
* Library transactions and journals&lt;br /&gt;
|-&lt;br /&gt;
| Level 3&lt;br /&gt;
| Confidential&lt;br /&gt;
|&lt;br /&gt;
* Faculty/staff employment applications, personnel files, contact information&lt;br /&gt;
* Donor or prospective donor information&lt;br /&gt;
* Contracts&lt;br /&gt;
* Intellectual property&lt;br /&gt;
|-&lt;br /&gt;
| Level 4&lt;br /&gt;
| Restricted&lt;br /&gt;
|&lt;br /&gt;
* Patient identifiable health information&lt;br /&gt;
* identifiable human subject research data&lt;br /&gt;
* information subject to special government requirements&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
When selecting a storage option, you must use one that meets or exceeds the rated security classification.&lt;br /&gt;
&lt;br /&gt;
* See also the Collaboration, storage and file shares article in Service Now:&lt;br /&gt;
: https://ucalgary.service-now.com/it?id=it_catalog_by_category&amp;amp;sys_id=4dbb82ee13661200c524fc04e144b044&lt;br /&gt;
&lt;br /&gt;
== Research Data Management ==&lt;br /&gt;
We recommend you follow good Research Data Management (RDM) practices and ensure you have a Data Management Plan (DMP) created to guide your data&#039;s life-cycle. Your DMP can help us support the FAIR (Findable, Accessible, Interoperable, and Reusable) principles for data management.&lt;br /&gt;
&lt;br /&gt;
Please consider contacting Libraries and Cultural Resources for assistance.&lt;br /&gt;
&lt;br /&gt;
=== Resources ===&lt;br /&gt;
&lt;br /&gt;
* The DMP Assistant is a tool created specifically for Canadian scholars that aims to meet Tri-Agency requirements; it is available at: https://assistant.portagenetwork.ca/&lt;br /&gt;
* For guidance on general data management and developing a DMP, consult https://library.ucalgary.ca/guides/researchdatamanagement or contact research.data@ucalgary.ca.&lt;br /&gt;
* For support using PRISM Dataverse, the University of Calgary&#039;s institutional data repository, contact digitize@ucalgary.ca.&lt;br /&gt;
* If you need to share and preserve your large post-publication data set for a mandated period of time, consider using the national Federated Research Data Repository (FRDR). Learn more at https://www.frdr-dfdr.ca/repo/. FRDR aligns with Tri-Agency Principles as a platform for Preservation, Retention and Sharing of research data. see: [http://www.science.gc.ca/eic/site/063.nsf/eng/h_83F7624E.html Tri-Agency Statement of Principles on Digital Data Management]&lt;br /&gt;
&lt;br /&gt;
== University of Calgary RCS storage services ==&lt;br /&gt;
&lt;br /&gt;
=== Secure Compute Data Storage (SCDS) ===&lt;br /&gt;
Secure Computing Data Storage (SCDS) is a service provided by Research Computing Services that allows researchers to store restricted and confidential data. Collaboration with Level 4 data stored in SCDS is possible using ShareFile, a secure file sharing and collaboration tool by Citrix.&lt;br /&gt;
 &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Capacity&lt;br /&gt;
| 10 GB or more&lt;br /&gt;
|-&lt;br /&gt;
! Classification&lt;br /&gt;
| Level 4&lt;br /&gt;
|-&lt;br /&gt;
! Learn More&lt;br /&gt;
| Visit [https://it.ucalgary.ca/secure-computing-platform The SCDS Website]&lt;br /&gt;
|-&lt;br /&gt;
! Request Access&lt;br /&gt;
| Visit [https://ucalgary.service-now.com/kb_view.do?sysparm_article=KB0030163 ServiceNow to request access]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== ResearchFS ===&lt;br /&gt;
ResearchFS is a University of Calgary-hosted SMB/CIFS storage solution funded and operated by RCS. It is available by request to faculty and staff with active research data.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Capacity&lt;br /&gt;
| 1TB with quota increases available on request. &lt;br /&gt;
|-&lt;br /&gt;
! Classification&lt;br /&gt;
| Level 1 - 2&lt;br /&gt;
|-&lt;br /&gt;
! Request Access&lt;br /&gt;
| Visit [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=fe66b3a7db297300897e4b8b0b96199d ServiceNow to request access]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==== Service Description ====&lt;br /&gt;
You may use ResearchFS to store your active research data files. ResearchFS is intended to be used as a research group or project share. ResearchFS is available on campus or off campus using the IT supported VPN client. Information on how to download and install the VPN client can be found here: https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=880e71071381ae006f3afbb2e144b05c (IT account login may be required).&lt;br /&gt;
All ResearchFS users must have a University of Calgary IT account.&lt;br /&gt;
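&lt;br /&gt;
As a minimal sketch only: on a Linux machine connected to the VPN, a ResearchFS share can typically be mounted over SMB/CIFS with a command along the following lines. The server and share names below are placeholders; use the details provided when your share was created.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Placeholder server and share names - substitute the ones given for your allocation&lt;br /&gt;
sudo mount -t cifs //researchfs.example.ucalgary.ca/smith_lab /mnt/researchfs \&lt;br /&gt;
     -o username=your_it_username,vers=3.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;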
&lt;br /&gt;
==== Data recovery ====&lt;br /&gt;
ResearchFS takes daily snapshots shortly after midnight and keeps them for 30 days. You should be able to recover a deleted file for up to 30 days, provided it was in your share overnight. If you create and delete a file within the same day, no snapshot of it will be available to recover. ResearchFS presents these snapshots through the Windows &#039;Previous Versions&#039; functionality. If you are not familiar with using this, or if you are on a Linux or macOS device, you can request a restore through ServiceNow.&lt;br /&gt;
&lt;br /&gt;
For backup, we replicate changes to a distant data center every hour. The storage hardware that hosts your data is located in the basement of the Math Sciences building and the backup copy is in the HRIC building, so in the case of an on-campus disaster your data should be safe.&lt;br /&gt;
&lt;br /&gt;
==== Support for ResearchFS ====&lt;br /&gt;
If you have questions, please contact the IT Support Centre.&lt;br /&gt;
: Mon – Fri: 8:30 am – 5:00 pm; Sat, Sun &amp;amp; holidays: 10:00 am – 2:00 pm.&lt;br /&gt;
: Live Chat: ucalgary.ca/it&lt;br /&gt;
: Email: itsupport@ucalgary.ca&lt;br /&gt;
: Phone: 403.220.5555&lt;br /&gt;
: In person: 773 Math Science&lt;br /&gt;
&lt;br /&gt;
=== ARC Cluster Storage ===&lt;br /&gt;
ARC storage is used to &#039;&#039;&#039;support workflows on the ARC computing cluster&#039;&#039;&#039;. The expectation is that storage on ARC will only be used for active and upcoming computational projects. &lt;br /&gt;
It is not suitable for long-term or archival storage as it is not backed-up and is not guaranteed to be available for the time periods that are typical of archiving.&lt;br /&gt;
ARC is a research cluster, which means it has high performance but can be stopped for required maintenance when needed. &lt;br /&gt;
Thus, ARC cannot be relied on for any kind of service that requires constant availability. &lt;br /&gt;
This means, in turn, that ARC&#039;s storage cannot and &#039;&#039;&#039;should not be used as a main storage facility for research data&#039;&#039;&#039;. &lt;br /&gt;
The &#039;&#039;&#039;master copy&#039;&#039;&#039; of research data should be &#039;&#039;&#039;stored elsewhere&#039;&#039;&#039;, and only the portion of that data needed for computational analysis is expected to be copied to ARC.&lt;br /&gt;
&lt;br /&gt;
==== ARC Home Directories ====&lt;br /&gt;
Every user account on ARC has a static 500GB allocation of storage and a maximum of 1.5 million files (including directories). This cannot be increased or decreased. Home directory storage is connected via a network file system to the rest of the cluster and supports fast data transfer to memory on compute nodes. This also means that basic file system commands (like &amp;lt;code&amp;gt;ls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;find&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;du&amp;lt;/code&amp;gt;) take longer to run as the number of files in your home directory increases. In particular, we strongly encourage users to stay under 100,000 files if at all possible. This can be achieved by combining smaller data files into single larger files, using structured data formats rather than a large number of text files, or combining collections of files that will be used together into archives (tar, dar, etc.), as in the example below. Since top-level permissions on home directories are set to prevent other users from reading or executing, home directories are not suitable for sharing data directly with colleagues working on ARC. A Research Group Allocation is a more appropriate place for storing shared data or very large data sets that will be used as part of active computational projects.&lt;br /&gt;
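&lt;br /&gt;
For example, a directory containing thousands of small output files could be packed into a single compressed archive before being kept in the home directory (paths below are placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Pack a directory of many small files into one compressed archive&lt;br /&gt;
tar -czf results_2024.tar.gz results_2024/&lt;br /&gt;
# List the archive contents to confirm it is complete&lt;br /&gt;
tar -tzf results_2024.tar.gz&lt;br /&gt;
# Once verified, remove the original directory to reduce the file count&lt;br /&gt;
rm -r results_2024/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;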
&lt;br /&gt;
==== ARC /work and /bulk Group Allocation ====&lt;br /&gt;
Any group member who wants to use the shared storage should send an email to support@hpc.ucalgary.ca to be added to the access group and CC the PI/data owner. &#039;&#039;&#039;This confirms that the PI approves the group member&#039;s request for access to the shared storage.&#039;&#039;&#039; Please note that the access permissions inside the directory are expected to be managed by the data owners.&lt;br /&gt;
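&lt;br /&gt;
As an illustrative sketch of how a data owner might manage permissions inside an allocation (the directory, group, and user names below are placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Give the allocation group read and traverse access to a project directory&lt;br /&gt;
chgrp -R smith_lab /work/smith_lab/project1&lt;br /&gt;
chmod -R g+rX /work/smith_lab/project1&lt;br /&gt;
# Optionally grant a single collaborator read-only access with an ACL&lt;br /&gt;
setfacl -R -m u:collaborator_username:rX /work/smith_lab/project1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;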
&lt;br /&gt;
&lt;br /&gt;
All requests should answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* How much storage is requested and why is that the amount that you need?&lt;br /&gt;
A rationale for a request can be a formal data management plan or something more informal, such as a rough estimate of the size of the primary dataset used for a project and of the size of the outputs expected from the computations you plan to run on ARC over the &#039;&#039;&#039;next year&#039;&#039;&#039;.&lt;br /&gt;
* What is the requested &#039;&#039;&#039;allocation name&#039;&#039;&#039;? (typically something like &amp;lt;PI name&amp;gt;_lab, &amp;lt;code&amp;gt;smith_lab&amp;lt;/code&amp;gt;, for example)&lt;br /&gt;
* What is the &#039;&#039;&#039;data classification&#039;&#039;&#039; using the University of Calgary data security classification system?&lt;br /&gt;
* Which user or users would be the &#039;&#039;&#039;owner&#039;&#039;&#039; of the allocation? (Full Name and UCalgary Email address, typically the requesting PI but there may be co-PIs)&lt;br /&gt;
* Which members of the allocation should be able to &#039;&#039;&#039;request access&#039;&#039;&#039; for new users? (Full Name and UCalgary Email address for active ARC users)&lt;br /&gt;
* What is the &#039;&#039;&#039;faculty&#039;&#039;&#039; of the owner or owners?&lt;br /&gt;
* Please provide a short description of the lab.&lt;br /&gt;
* Please provide a brief &#039;&#039;&#039;numerical estimate&#039;&#039;&#039; of the required storage space based on the &#039;&#039;&#039;projects&#039;&#039;&#039; that will use the allocation and their storage &#039;&#039;&#039;requirements&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example 1&#039;&#039;&#039;: &amp;quot;We will be processing a 1TB dataset by performing 100 experimental runs. &lt;br /&gt;
Each experiment will be processed to produce a 6GB output, giving 600GB of the total output data. &lt;br /&gt;
We will also need 400GB additional space for post-processing and data management. &lt;br /&gt;
Thus, we would like to request &#039;&#039;&#039;2TB&#039;&#039;&#039; of shared space in total.&amp;quot;   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example 2&#039;&#039;&#039;: &amp;quot;3 members of our research group need additional shared space on ARC for their independent projects.&lt;br /&gt;
Project 1 starts with 100GB of initial data and is expected to generate 800GB of the output results. &lt;br /&gt;
Project 2 is going to use simulations and does not use any input data but is expected to generate 2TB of the simulated data for further processing.&lt;br /&gt;
The processing will require 200GB of additional space.&lt;br /&gt;
Project 3 will be working on a 1TB dataset and is expected to generate about 1TB of the output data. &lt;br /&gt;
These projects, therefore, will require 5.1TB of storage. &lt;br /&gt;
For convenience of data manipulation and management, we would also like to have an additional 400GB of storage space.&lt;br /&gt;
Therefore, we would like to request &#039;&#039;&#039;5.5TB&#039;&#039;&#039; of shared storage space in total.&amp;quot;   &lt;br /&gt;
&lt;br /&gt;
Work and Bulk storage can be considerably larger than the home directory allocations. However, there are limits on what RCS can provide as ARC storage provides high-speed access and is expensive to purchase. Typically, &#039;&#039;&#039;any request over 10TB&#039;&#039;&#039; will require some discussion. Work and Bulk allocations differ in a few ways that influence how they are used. Work storage is faster to access as part of computational jobs on ARC although the impact is small for jobs that don&#039;t involve enormous numbers of reads. Bulk storage is designed to be a target for instrument data (which is typically processed in a way that reads data a small number of times per job) and is capable of mounting instruments elsewhere on campus using SMB. A number of questions come up frequently about Work and Bulk storage and these are addressed in an [[Group Storage Allocation FAQ | FAQ]].&lt;br /&gt;
&lt;br /&gt;
== University of Calgary IT storage services ==&lt;br /&gt;
You can learn more about Information Technologies Storage solutions at https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=d785de4e1b3ed41422ba4158dc4bcbf1&lt;br /&gt;
&lt;br /&gt;
=== OneDrive for Business ===&lt;br /&gt;
OneDrive for Business is a storage solution provided by Microsoft and is available to all faculty and staff.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Capacity&lt;br /&gt;
| 5 TB with quota increases [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=438e6d8313896a0053f2d7b2e144b0b9 available on request].&lt;br /&gt;
|-&lt;br /&gt;
! Classification&lt;br /&gt;
| Level 1 - 4&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
You may use OneDrive for Business to store your personal and work-related files. Files stored within OneDrive are private to you by default, but you have the option to share them and collaborate with others. &lt;br /&gt;
OneDrive for Business cannot be used as a department or project share space. There is no group/lab offering with OneDrive. &lt;br /&gt;
&lt;br /&gt;
While OneDrive provides a secure and compliant location from an IT Security standpoint, it is not the most suitable location for data that the PI remains accountable for during the 5 years following completion of the study. This is not a security issue, but a data management issue.&lt;br /&gt;
&lt;br /&gt;
For example, if a study were using the personal OneDrive of one of the researchers to store all of its records and that researcher were to leave the university, the OneDrive account and its contents would be gone 30 days later.&lt;br /&gt;
&lt;br /&gt;
Microsoft provides automation capabilities for its O365 products. If you have a Windows machine, you can use the automation product ‘Flow’ to copy a file to a local file system whenever a new file is created on OneDrive.&lt;br /&gt;
&lt;br /&gt;
To back up data residing on ARC to your personal OneDrive allocation please see:  [[How to transfer data#rclone: rsync for cloud storage]]&lt;br /&gt;
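&lt;br /&gt;
As a rough illustration (not an exact recipe), once an rclone remote for your OneDrive has been configured as described in that article, a copy might look like the following; the remote name and paths are placeholders:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copy results from ARC to a folder in your OneDrive (remote named onedrive here)&lt;br /&gt;
rclone copy /work/smith_lab/results onedrive:ARC-backup/results --progress&lt;br /&gt;
# Re-check that source and destination match without transferring data again&lt;br /&gt;
rclone check /work/smith_lab/results onedrive:ARC-backup/results&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;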
&lt;br /&gt;
OneDrive requires Multi-Factor Authentication (MFA) enabled on your University of Calgary IT account. &lt;br /&gt;
&lt;br /&gt;
More information can be located in the following article https://ucalgary.service-now.com/kb_view.do?sysparm_article=KB0032351&lt;br /&gt;
&lt;br /&gt;
University of Calgary OneDrive data is reportedly hosted in Canada (Markham, Ontario).&lt;br /&gt;
&lt;br /&gt;
==== Support for OneDrive ====&lt;br /&gt;
If you have questions, please contact the [https://ucalgary.service-now.com/it UService Support Centre].&lt;br /&gt;
: Email: it@ucalgary.ca&lt;br /&gt;
: Phone: 403.210.9300 or 1.888.342.3802, Mon - Fri: 8:00 a.m. to noon and 1:00 p.m. to 4:30 p.m. (closed over the lunch hour)&lt;br /&gt;
: Walk-in service: Math Sciences 7th floor, Room 773, Tues - Thurs: 1:00 p.m. to 4:30 p.m.&lt;br /&gt;
&lt;br /&gt;
==== Other Resources ====&lt;br /&gt;
For more information on OneDrive for Business:&lt;br /&gt;
* Operating Level of Agreement KB0032404 (https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=7f57bddcdb56a3047cab5068dc9619b6)&lt;br /&gt;
*OneDrive for Business Getting Started KB0032351 (https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=60994170db2da7487cab5068dc961900)&lt;br /&gt;
*If you are above 90% of your OneDrive quota, you can request an increase here: ( https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=438e6d8313896a0053f2d7b2e144b0b9) PLEASE NOTE: Microsoft will only increase an allocation while the Cloud Storage is more than 90% full. Please log into your O365 cloud account to review before making your request.&lt;br /&gt;
&lt;br /&gt;
Any questions regarding if data hosted on OneDrive is subject to US jurisdiction discovery or access should be directed to:&lt;br /&gt;
*https://cumming.ucalgary.ca/research-institutes/csm-research-services/legal-research-services (CSM researchers.)&lt;br /&gt;
*https://research.ucalgary.ca/contact/research-services (Not CSM Researchers)&lt;br /&gt;
*https://www.ucalgary.ca/legalservices/  (for teaching/learning – non research enquiries that make their way to you)&lt;br /&gt;
&lt;br /&gt;
=== Office365 SharePoint for research groups ===&lt;br /&gt;
Details of this service are still to be determined.&lt;br /&gt;
&lt;br /&gt;
At some point in the future, researchers will be able to request an Office 365 SharePoint site for a group, &lt;br /&gt;
which could be considered a group cloud-sharing platform.&lt;br /&gt;
&lt;br /&gt;
* The official service page:&lt;br /&gt;
: https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=b55f2f72132f5240b5b4df82e144b085&lt;br /&gt;
&lt;br /&gt;
== Digital Research Alliance of Canada storage services ==&lt;br /&gt;
&lt;br /&gt;
=== Storage on the Alliance HPC clusters ===&lt;br /&gt;
* Alliance Wiki article &amp;quot;Storage and file management&amp;quot;:&lt;br /&gt;
: https://docs.alliancecan.ca/wiki/Storage_and_file_management&lt;br /&gt;
&lt;br /&gt;
=== The Alliance NextCloud ===&lt;br /&gt;
For personal or level 1 data, you may use an external solution from the Alliance.&lt;br /&gt;
You must have an Alliance account to use the service.&lt;br /&gt;
It offers functionality similar to Dropbox or Google Drive. &lt;br /&gt;
:https://nextcloud.computecanada.ca&lt;br /&gt;
: 100 GB of storage that can be shared between your computers.&lt;br /&gt;
: Alliance documentation: https://docs.alliancecan.ca/wiki/Nextcloud&lt;br /&gt;
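&lt;br /&gt;
Besides the web interface and desktop sync clients, Nextcloud instances generally expose a WebDAV endpoint, so command-line tools such as rclone can also be used. A hedged sketch, assuming an rclone remote named alliance-nc has already been configured with the webdav backend against your Nextcloud account:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copy a local directory to the Alliance Nextcloud via a preconfigured WebDAV remote&lt;br /&gt;
rclone copy local_results/ alliance-nc:backups/local_results --progress&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;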
&lt;br /&gt;
== Commercial Cloud Based Storage Options ==&lt;br /&gt;
&lt;br /&gt;
=== Amazon Web Services ===&lt;br /&gt;
Provided by &#039;&#039;&#039;Amazon Web Services, Inc.&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
* Pricing calculator: https://calculator.aws&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
AWS provides many different kinds of services, including &#039;&#039;&#039;storage services&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
These options can be a solution for your research needs, but: &lt;br /&gt;
* they &#039;&#039;&#039;can be expensive&#039;&#039;&#039;, depending on your needs and the amount of data;&lt;br /&gt;
* the &#039;&#039;&#039;pricing scheme is complex&#039;&#039;&#039; and can be confusing for new users;&lt;br /&gt;
* the &#039;&#039;&#039;number of options can be overwhelming&#039;&#039;&#039;. Many of the options are designed to provide pricing flexibility, not to increase functionality.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The key points:&lt;br /&gt;
* &#039;&#039;&#039;Uploading data&#039;&#039;&#039; to AWS is &#039;&#039;&#039;free&#039;&#039;&#039;.&lt;br /&gt;
* &#039;&#039;&#039;Storing data&#039;&#039;&#039; on AWS storage is a &#039;&#039;&#039;paid service&#039;&#039;&#039;.&lt;br /&gt;
* &#039;&#039;&#039;Downloading data&#039;&#039;&#039; from AWS storage to your computer is a &#039;&#039;&#039;paid service&#039;&#039;&#039;.&lt;br /&gt;
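&lt;br /&gt;
As a concrete illustration with the AWS command-line interface (the bucket name below is a placeholder): uploading an object incurs no transfer charge, storing it is billed monthly, and downloading it back is billed as data transfer out of AWS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Upload to S3: no data transfer charge, but storage is billed while the object exists&lt;br /&gt;
aws s3 cp dataset.tar.gz s3://my-research-bucket/dataset.tar.gz&lt;br /&gt;
# Download from S3: billed as data transfer out of AWS&lt;br /&gt;
aws s3 cp s3://my-research-bucket/dataset.tar.gz ./dataset.tar.gz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;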
&lt;br /&gt;
== Federated Research Data Repository (FRDR) ==&lt;br /&gt;
The Federated Research Data Repository (FRDR) is a suitable solution for long-term archival storage of research datasets used in published research work. FRDR is a bilingual publishing platform for sharing and preserving Canadian research data.&lt;br /&gt;
It is a curated, general-purpose repository, custom built for large datasets.&lt;br /&gt;
FRDR is run by the Digital Research Alliance of Canada.&lt;br /&gt;
&lt;br /&gt;
For more information on FRDR visit their web site:  https://www.frdr-dfdr.ca/repo/&lt;br /&gt;
&lt;br /&gt;
[[Category:Administration]]&lt;br /&gt;
{{Navbox Administration}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Storage_Options&amp;diff=3497</id>
		<title>Storage Options</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Storage_Options&amp;diff=3497"/>
		<updated>2024-08-13T17:40:59Z</updated>

		<summary type="html">&lt;p&gt;Lleung: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There are a few options researchers can take advantage of when storing their research data. Please take into account the purpose of the storage, appropriate research data management principles, and the data classification when choosing an appropriate storage solution. &lt;br /&gt;
&lt;br /&gt;
== Data Classification ==&lt;br /&gt;
Please review the different data classifications that are outlined by the [https://ucalgary.ca/legal-services/sites/default/files/teams/1/Standards-Legal-Information-Security-Classification-Standard.pdf Information Security Classification Standard]. There are 4 levels of data classification which are summarised in the table below.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Level&lt;br /&gt;
! Description&lt;br /&gt;
! Example&lt;br /&gt;
|-&lt;br /&gt;
| Level 1&lt;br /&gt;
| Public&lt;br /&gt;
|&lt;br /&gt;
* Reference data sets&lt;br /&gt;
* Published research data&lt;br /&gt;
|-&lt;br /&gt;
| Level 2&lt;br /&gt;
| Internal&lt;br /&gt;
|&lt;br /&gt;
* Internal memos&lt;br /&gt;
* Unpublished research data&lt;br /&gt;
* Anonymized or de-identified human subject data&lt;br /&gt;
* Library transactions and journals&lt;br /&gt;
|-&lt;br /&gt;
| Level 3&lt;br /&gt;
| Confidential&lt;br /&gt;
|&lt;br /&gt;
* Faculty/staff employment applications, personnel files, contact information&lt;br /&gt;
* Donor or prospective donor information&lt;br /&gt;
* Contracts&lt;br /&gt;
* Intellectual property&lt;br /&gt;
|-&lt;br /&gt;
| Level 4&lt;br /&gt;
| Restricted&lt;br /&gt;
|&lt;br /&gt;
* Patient identifiable health information&lt;br /&gt;
* identifiable human subject research data&lt;br /&gt;
* information subject to special government requirements&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
When selecting a storage option, you must use one that meets or exceeds the rated security classification.&lt;br /&gt;
&lt;br /&gt;
* See also the Collaboration, storage and file shares article in Service Now:&lt;br /&gt;
: https://ucalgary.service-now.com/it?id=it_catalog_by_category&amp;amp;sys_id=4dbb82ee13661200c524fc04e144b044&lt;br /&gt;
&lt;br /&gt;
== Research Data Management ==&lt;br /&gt;
We recommend you follow good Research Data Management (RDM) practices and ensure you have a Data Management Plan (DMP) created to guide your data&#039;s life-cycle. Your DMP can help us support the FAIR (Findable, Accessible, Interoperable, and Reusable) principles for data management.&lt;br /&gt;
&lt;br /&gt;
Please consider contacting Libraries and Cultural Resources for assistance.&lt;br /&gt;
&lt;br /&gt;
=== Resources ===&lt;br /&gt;
&lt;br /&gt;
* The DMP Assistant is a tool created specifically for Canadian scholars that aims to meet Tri-Agency requirements. It is available at: https://assistant.portagenetwork.ca/&lt;br /&gt;
* For guidance on general data management and developing a DMP, consult https://library.ucalgary.ca/guides/researchdatamanagement or contact research.data@ucalgary.ca.&lt;br /&gt;
* For support using PRISM Dataverse, the University of Calgary&#039;s institutional data repository, contact digitize@ucalgary.ca.&lt;br /&gt;
* If you need to share and preserve your large post-publication data set for a mandated period of time, consider using the national Federated Research Data Repository (FRDR). Learn more at https://www.frdr-dfdr.ca/repo/. FRDR aligns with Tri-Agency Principles as a platform for Preservation, Retention and Sharing of research data. see: [http://www.science.gc.ca/eic/site/063.nsf/eng/h_83F7624E.html Tri-Agency Statement of Principles on Digital Data Management]&lt;br /&gt;
&lt;br /&gt;
== University of Calgary RCS storage services ==&lt;br /&gt;
&lt;br /&gt;
=== Secure Compute Data Storage (SCDS) ===&lt;br /&gt;
Secure Computing Data Storage (SCDS) is a service provided by Research Computing Services that allows researchers to store restricted and confidential data. Collaboration with Level 4 data stored in SCDS is possible using ShareFile, a secure file sharing and collaboration tool by Citrix.&lt;br /&gt;
 &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Capacity&lt;br /&gt;
| 10 GB or more&lt;br /&gt;
|-&lt;br /&gt;
! Classification&lt;br /&gt;
| Level 4&lt;br /&gt;
|-&lt;br /&gt;
! Learn More&lt;br /&gt;
| Visit [https://it.ucalgary.ca/secure-computing-platform The SCDS Website]&lt;br /&gt;
|-&lt;br /&gt;
! Request Access&lt;br /&gt;
| Visit [https://ucalgary.service-now.com/kb_view.do?sysparm_article=KB0030163 ServiceNow to request access]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== ResearchFS ===&lt;br /&gt;
ResearchFS is a University of Calgary-hosted SMB/CIFS storage solution funded and operated by RCS. It is available by request to faculty and staff with active research data.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Capacity&lt;br /&gt;
| 1TB with quota increases available on request. &lt;br /&gt;
|-&lt;br /&gt;
! Classification&lt;br /&gt;
| Level 1 - 2&lt;br /&gt;
|-&lt;br /&gt;
! Request Access&lt;br /&gt;
| Visit [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=fe66b3a7db297300897e4b8b0b96199d ServiceNow to request access]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==== Service Description ====&lt;br /&gt;
You may use ResearchFS to store your active research data files. ResearchFS is intended to be used as a research group or project share. ResearchFS is available on campus or off campus using the IT supported VPN client. Information on how to download and install the VPN client can be found here: https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=880e71071381ae006f3afbb2e144b05c (IT account login may be required).&lt;br /&gt;
All ResearchFS users must have a University of Calgary IT account.&lt;br /&gt;
&lt;br /&gt;
==== Data recovery ====&lt;br /&gt;
ResearchFS takes daily snapshots shortly after midnight and keeps them for 30 days. You should be able to recover a deleted file for up to 30 days, provided it was in your share overnight. If you create and delete a file within the same day, no snapshot of it will be available to recover. ResearchFS presents these snapshots through the Windows &#039;Previous Versions&#039; functionality. If you are not familiar with using this, or if you are on a Linux or macOS device, you can request a restore through ServiceNow.&lt;br /&gt;
&lt;br /&gt;
For backup, we replicate changes to a distant data center every hour. The storage hardware which hosts your data is located in the basement of the Math Sciences building and our backup is in the HRIC building, so in case of an on campus disaster, your data should be safe.&lt;br /&gt;
&lt;br /&gt;
==== Support for ResearchFS ====&lt;br /&gt;
If you have questions, please contact the IT Support Centre.&lt;br /&gt;
: Mon – Fri: 8:30 am – 5:00 pm; Sat, Sun &amp;amp; holidays: 10:00 am – 2:00 pm.&lt;br /&gt;
: Live Chat: ucalgary.ca/it&lt;br /&gt;
: Email: itsupport@ucalgary.ca&lt;br /&gt;
: Phone: 403.220.5555&lt;br /&gt;
: In person: 773 Math Science&lt;br /&gt;
&lt;br /&gt;
== University of Calgary IT storage services ==&lt;br /&gt;
You can learn more about Information Technologies Storage solutions at https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=d785de4e1b3ed41422ba4158dc4bcbf1&lt;br /&gt;
&lt;br /&gt;
=== OneDrive for Business ===&lt;br /&gt;
OneDrive for Business is a storage solution provided by Microsoft and is available to all faculty and staff.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Capacity&lt;br /&gt;
| 5 TB with quota increases [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=438e6d8313896a0053f2d7b2e144b0b9 available on request].&lt;br /&gt;
|-&lt;br /&gt;
! Classification&lt;br /&gt;
| Level 1 - 4&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
You may use OneDrive for Business to store your personal and work-related files. Files stored within OneDrive are private to you by default, but you have the option to share them and collaborate with others. &lt;br /&gt;
OneDrive for Business cannot be used as a department or project share space. There is no group/lab offering with OneDrive. &lt;br /&gt;
&lt;br /&gt;
While OneDrive provides a secure and compliant location from an IT Security standpoint, it is not the most suitable location for data that the PI remains accountable for during the 5 years following completion of the study. This is not a security issue, but a data management issue.&lt;br /&gt;
&lt;br /&gt;
For example, if a study were using the personal OneDrive of one of the researchers to store all of its records and that researcher were to leave the university, the OneDrive account and its contents would be gone 30 days later.&lt;br /&gt;
&lt;br /&gt;
Microsoft provides automation capabilities for its O365 products. If you have a Windows machine, you can use the automation product ‘Flow’ to copy a file to a local file system whenever a new file is created on OneDrive.&lt;br /&gt;
&lt;br /&gt;
To back up data residing on ARC to your personal OneDrive allocation please see:  [[How to transfer data#rclone: rsync for cloud storage]]&lt;br /&gt;
&lt;br /&gt;
OneDrive requires Multi-Factor Authentication (MFA) enabled on your University of Calgary IT account. &lt;br /&gt;
&lt;br /&gt;
More information can be located in the following article https://ucalgary.service-now.com/kb_view.do?sysparm_article=KB0032351&lt;br /&gt;
&lt;br /&gt;
University of Calgary OneDrive data is reportedly hosted in Canada (Markham, Ontario).&lt;br /&gt;
&lt;br /&gt;
==== Support for OneDrive ====&lt;br /&gt;
If you have questions, please contact the [https://ucalgary.service-now.com/it UService Support Centre].&lt;br /&gt;
: Email: it@ucalgary.ca&lt;br /&gt;
: Phone: 403.210.9300 or 1.888.342.3802, Mon - Fri: 8:00 a.m. to noon and 1:00 p.m. to 4:30 p.m. (closed over the lunch hour)&lt;br /&gt;
: Walk-in service: Math Sciences 7th floor, Room 773, Tues - Thurs: 1:00 p.m. to 4:30 p.m.&lt;br /&gt;
&lt;br /&gt;
==== Other Resources ====&lt;br /&gt;
For more information on OneDrive for Business:&lt;br /&gt;
* Operating Level of Agreement KB0032404 (https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=7f57bddcdb56a3047cab5068dc9619b6)&lt;br /&gt;
*OneDrive for Business Getting Started KB0032351 (https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=60994170db2da7487cab5068dc961900)&lt;br /&gt;
*If you are above 90% of your OneDrive quota, you can request an increase here: ( https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=438e6d8313896a0053f2d7b2e144b0b9) PLEASE NOTE: Microsoft will only increase an allocation while the Cloud Storage is more than 90% full. Please log into your O365 cloud account to review before making your request.&lt;br /&gt;
&lt;br /&gt;
Any questions regarding if data hosted on OneDrive is subject to US jurisdiction discovery or access should be directed to:&lt;br /&gt;
*https://cumming.ucalgary.ca/research-institutes/csm-research-services/legal-research-services (CSM researchers.)&lt;br /&gt;
*https://research.ucalgary.ca/contact/research-services (Not CSM Researchers)&lt;br /&gt;
*https://www.ucalgary.ca/legalservices/  (for teaching/learning – non research enquiries that make their way to you)&lt;br /&gt;
&lt;br /&gt;
=== Office365 SharePoint for research groups ===&lt;br /&gt;
Details of this service are still to be determined.&lt;br /&gt;
&lt;br /&gt;
At some point in the future, researchers will be able to request an Office 365 SharePoint site for a group, &lt;br /&gt;
which could be considered a group cloud-sharing platform.&lt;br /&gt;
&lt;br /&gt;
* The official service page:&lt;br /&gt;
: https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=b55f2f72132f5240b5b4df82e144b085&lt;br /&gt;
&lt;br /&gt;
== Digital Research Alliance of Canada storage services ==&lt;br /&gt;
&lt;br /&gt;
=== Storage on the Alliance HPC clusters ===&lt;br /&gt;
* Alliance Wiki article &amp;quot;Storage and file management&amp;quot;:&lt;br /&gt;
: https://docs.alliancecan.ca/wiki/Storage_and_file_management&lt;br /&gt;
&lt;br /&gt;
=== The Alliance NextCloud ===&lt;br /&gt;
For personal or level 1 data, you may use an external solution from the Alliance.&lt;br /&gt;
You must have an Alliance account to use the service.&lt;br /&gt;
It offers functionality similar to Dropbox or Google Drive. &lt;br /&gt;
:https://nextcloud.computecanada.ca&lt;br /&gt;
: 100 GB of storage that can be shared between your computers.&lt;br /&gt;
: Alliance documentation: https://docs.alliancecan.ca/wiki/Nextcloud&lt;br /&gt;
&lt;br /&gt;
== Commercial Cloud Based Storage Options ==&lt;br /&gt;
&lt;br /&gt;
=== Amazon Web Services ===&lt;br /&gt;
Provided by &#039;&#039;&#039;Amazon Web Services, Inc.&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
* Pricing calculator: https://calculator.aws&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
AWS provides many different kinds of services, including &#039;&#039;&#039;storage services&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
These options can be a solution for your research needs, but: &lt;br /&gt;
* they &#039;&#039;&#039;can be expensive&#039;&#039;&#039;, depending on your needs and the amount of data;&lt;br /&gt;
* the &#039;&#039;&#039;pricing scheme is complex&#039;&#039;&#039; and can be confusing for new users;&lt;br /&gt;
* the &#039;&#039;&#039;number of options can be overwhelming&#039;&#039;&#039;. Many of the options are designed to provide pricing flexibility, not to increase functionality.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The key points:&lt;br /&gt;
* &#039;&#039;&#039;Uploading data&#039;&#039;&#039; to AWS is &#039;&#039;&#039;free&#039;&#039;&#039;.&lt;br /&gt;
* &#039;&#039;&#039;Storing data&#039;&#039;&#039; on AWS storage is a &#039;&#039;&#039;paid service&#039;&#039;&#039;.&lt;br /&gt;
* &#039;&#039;&#039;Downloading data&#039;&#039;&#039; from AWS storage to your computer is a &#039;&#039;&#039;paid service&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== University of Calgary RCS - ARC Cluster Storage ==&lt;br /&gt;
ARC storage is used to &#039;&#039;&#039;support workflows on the ARC computing cluster&#039;&#039;&#039;. The expectation is that storage on ARC will only be used for active and upcoming computational projects. &lt;br /&gt;
It is not suitable for long-term or archival storage as it is not backed-up and is not guaranteed to be available for the time periods that are typical of archiving.&lt;br /&gt;
ARC is a research cluster, which means it has high performance but can be stopped for required maintenance when needed. &lt;br /&gt;
Thus, ARC cannot be relied on for any kind of service that requires constant availability. &lt;br /&gt;
This means, in turn, that ARC&#039;s storage cannot and &#039;&#039;&#039;should not be used as a main storage facility for research data&#039;&#039;&#039;. &lt;br /&gt;
The &#039;&#039;&#039;master copy&#039;&#039;&#039; of research data should be &#039;&#039;&#039;stored elsewhere&#039;&#039;&#039;, and only the portion of that data needed for computational analysis is expected to be copied to ARC.&lt;br /&gt;
&lt;br /&gt;
=== Home Directories ===&lt;br /&gt;
Every user account on ARC has a static 500GB allocation of storage and a maximum of 1.5 million files (including directories). This cannot be increased or decreased. Home directory storage is connected via a network file system to the rest of the cluster and supports fast data transfer to memory on compute nodes. This also means that basic file system commands (like &amp;lt;code&amp;gt;ls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;find&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;du&amp;lt;/code&amp;gt;) take longer to run as the number of files in your home directory increases. In particular, we strongly encourage users to stay under 100000 files if it is at all possible. This can be achieved by combining smaller data files into single larger files, using structured data formats rather than large number of text files, or combining collections of files that will be used together into archives (tar, dar, etc). Since top level permissions on home directories are set to prevent other users from reading or executing, home directories are not suitable for sharing data directly with colleagues working on ARC. A Research Group Allocation is a more appropriate place for storing shared data or very large data sets that will be used as part of active computational projects.   &lt;br /&gt;
&lt;br /&gt;
=== Research Group Allocations (&amp;lt;code&amp;gt;/work&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/bulk&amp;lt;/code&amp;gt;) ===&lt;br /&gt;
The principal investigator (PI) for a research group may request an extended shared allocation for the research group by contacting support@hpc.ucalgary.ca with answers to the following questions (please copy the full text of the questions into your email and write answers under it):&lt;br /&gt;
&lt;br /&gt;
* How much storage is requested and why is that the amount that you need? &lt;br /&gt;
A rationale for a request can be a formal data management plan or something more informal, such as a rough estimate of the size of the primary dataset used for a project and of the size of the outputs expected from the computations you plan to run on ARC over the &#039;&#039;&#039;next year&#039;&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* What is the requested &#039;&#039;&#039;allocation name&#039;&#039;&#039;? (typically something like &amp;lt;PI name&amp;gt;_lab, &amp;lt;code&amp;gt;smith_lab&amp;lt;/code&amp;gt;, for example)&lt;br /&gt;
* What is the &#039;&#039;&#039;data classification&#039;&#039;&#039; using the University of Calgary data security classification system?&lt;br /&gt;
* Which user or users would be the &#039;&#039;&#039;owner&#039;&#039;&#039; of the allocation? (Full Name and UCalgary Email address, typically the requesting PI but there may be co-PIs)&lt;br /&gt;
* Which members of the allocation should be able to &#039;&#039;&#039;request access&#039;&#039;&#039; for new users? (Full Name and UCalgary Email address for active ARC users) &lt;br /&gt;
* What is the &#039;&#039;&#039;faculty&#039;&#039;&#039; of the owner or owners? &lt;br /&gt;
* Please provide a short description of the lab.&lt;br /&gt;
* Please provide a brief &#039;&#039;&#039;numerical estimate&#039;&#039;&#039; of the required storage space based on &#039;&#039;&#039;projects&#039;&#039;&#039; that will use the allocation and their storage &#039;&#039;&#039;requirements&#039;&#039;&#039; .&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example 1&#039;&#039;&#039;: &amp;quot;We will be processing a 1TB dataset by performing 100 experimental runs. &lt;br /&gt;
Each experiment will be processed to produce a 6GB output, giving 600GB of the total output data. &lt;br /&gt;
We will also need 400GB additional space for post-processing and data management. &lt;br /&gt;
Thus, we would like to request &#039;&#039;&#039;2TB&#039;&#039;&#039; of shared space in total.&amp;quot;   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example 2&#039;&#039;&#039;: &amp;quot;3 members of our research group need additional shared space on ARC for their independent projects.&lt;br /&gt;
Project 1 starts with 100GB of initial data and is expected to generate 800GB of the output results. &lt;br /&gt;
Project 2 is going to use simulations and does not use any input data but is expected to generate 2TB of the simulated data for further processing.&lt;br /&gt;
The processing will require 200GB of additional space.&lt;br /&gt;
Project 3 will be working on a 1TB dataset and is expected to generate about 1TB of the output data. &lt;br /&gt;
These projects, therefore, will require 5.1TB of storage. &lt;br /&gt;
For convenience of data manipulation and management we would also like to have additional 400GB of extra storage space.&lt;br /&gt;
Therefore, we would like to request &#039;&#039;&#039;5.5TB&#039;&#039;&#039; of shared storage space in total.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Work and Bulk storage can be considerably larger than the home directory allocations. However, there are limits on what RCS can provide as ARC storage provides high-speed access and is expensive to purchase. Typically, &#039;&#039;&#039;any request over 10TB&#039;&#039;&#039; will require some discussion. Work and Bulk allocations differ in a few ways that influence how they are used. Work storage is faster to access as part of computational jobs on ARC although the impact is small for jobs that don&#039;t involve enormous numbers of reads. Bulk storage is designed to be a target for instrument data (which is typically processed in a way that reads data a small number of times per job) and is capable of mounting instruments elsewhere on campus using SMB. A number of questions come up frequently about Work and Bulk storage and these are addressed in an [[Group Storage Allocation FAQ | FAQ]].&lt;br /&gt;
&lt;br /&gt;
===== How to add a group member to the access list (&amp;lt;code&amp;gt;/work&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/bulk&amp;lt;/code&amp;gt;)? =====&lt;br /&gt;
&lt;br /&gt;
Any group member who wants to use the shared storage should send an email to support@hpc.ucalgary.ca to be added to the access group and CC the PI/data owner. &#039;&#039;&#039;This confirms that the PI approves the group member&#039;s request for access to the shared storage.&#039;&#039;&#039; Please note that the access permissions inside the directory are expected to be managed by the data owners.&lt;br /&gt;
&lt;br /&gt;
== Federated Research Data Repository (FRDR) ==&lt;br /&gt;
The Federated Research Data Repository (FRDR) is a suitable solution for long-term archival storage of research datasets used in published research work. FRDR is a bilingual publishing platform for sharing and preserving Canadian research data.&lt;br /&gt;
It is a curated, general-purpose repository, custom built for large datasets.&lt;br /&gt;
FRDR is run by the Digital Research Alliance of Canada.&lt;br /&gt;
&lt;br /&gt;
For more information on FRDR visit their web site:  https://www.frdr-dfdr.ca/repo/&lt;br /&gt;
&lt;br /&gt;
[[Category:Administration]]&lt;br /&gt;
{{Navbox Administration}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Storage_Options&amp;diff=3496</id>
		<title>Storage Options</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Storage_Options&amp;diff=3496"/>
		<updated>2024-08-13T17:31:34Z</updated>

		<summary type="html">&lt;p&gt;Lleung: /* Research Data Management */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= General =&lt;br /&gt;
&lt;br /&gt;
There are a few options researchers can take advantage of when storing their research data. &lt;br /&gt;
 &lt;br /&gt;
== Data Classification ==&lt;br /&gt;
Please review the different data classifications that are outlined by the [https://ucalgary.ca/legal-services/sites/default/files/teams/1/Standards-Legal-Information-Security-Classification-Standard.pdf Information Security Classification Standard]. There are 4 levels of data classification which are summarized in the table below.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Level&lt;br /&gt;
! Description&lt;br /&gt;
! Example&lt;br /&gt;
|-&lt;br /&gt;
| Level 1&lt;br /&gt;
| Public&lt;br /&gt;
|&lt;br /&gt;
* Reference data sets&lt;br /&gt;
* Published research data&lt;br /&gt;
|-&lt;br /&gt;
| Level 2&lt;br /&gt;
| Internal&lt;br /&gt;
|&lt;br /&gt;
* Internal memos&lt;br /&gt;
* Unpublished research data&lt;br /&gt;
* Anonymized or de-identified human subject data&lt;br /&gt;
* Library transactions and journals&lt;br /&gt;
|-&lt;br /&gt;
| Level 3&lt;br /&gt;
| Confidential&lt;br /&gt;
|&lt;br /&gt;
* Faculty/staff employment applications, personnel files, contact information&lt;br /&gt;
* Donor or prospective donor information&lt;br /&gt;
* Contracts&lt;br /&gt;
* Intellectual property&lt;br /&gt;
|-&lt;br /&gt;
| Level 4&lt;br /&gt;
| Restricted&lt;br /&gt;
|&lt;br /&gt;
* Patient identifiable health information&lt;br /&gt;
* identifiable human subject research data&lt;br /&gt;
* information subject to special government requirements&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
When selecting a storage option, you must use one that meets or exceeds the rated security classification.&lt;br /&gt;
&lt;br /&gt;
* See also the Collaboration, storage and file shares article in Service Now:&lt;br /&gt;
: https://ucalgary.service-now.com/it?id=it_catalog_by_category&amp;amp;sys_id=4dbb82ee13661200c524fc04e144b044&lt;br /&gt;
&lt;br /&gt;
== Research Data Management ==&lt;br /&gt;
We recommend you follow good Research Data Management (RDM) practices and ensure you have a Data Management Plan (DMP) created to guide your data&#039;s life-cycle. Your DMP can help us support the FAIR (Findable, Accessible, Interoperable, and Reusable) principles for data management.&lt;br /&gt;
&lt;br /&gt;
Please consider contacting Libraries and Cultural Resources for assistance.&lt;br /&gt;
&lt;br /&gt;
=== Resources ===&lt;br /&gt;
&lt;br /&gt;
* The DMP Assistant is a tool created specifically for Canadian scholars that aims to meet Tri-Agency requirements. It is available at: https://assistant.portagenetwork.ca/&lt;br /&gt;
* For guidance on general data management and developing a DMP, consult https://library.ucalgary.ca/guides/researchdatamanagement or contact research.data@ucalgary.ca.&lt;br /&gt;
* For support using PRISM Dataverse, the University of Calgary&#039;s institutional data repository, contact digitize@ucalgary.ca.&lt;br /&gt;
* If you need to share and preserve your large post-publication data set for a mandated period of time, consider using the national Federated Research Data Repository (FRDR). Learn more at https://www.frdr-dfdr.ca/repo/. FRDR aligns with Tri-Agency Principles as a platform for Preservation, Retention and Sharing of research data. see: [http://www.science.gc.ca/eic/site/063.nsf/eng/h_83F7624E.html Tri-Agency Statement of Principles on Digital Data Management]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= University of Calgary RCS storage services =&lt;br /&gt;
&lt;br /&gt;
== Secure Compute Data Storage (SCDS) ==&lt;br /&gt;
Secure Computing Data Storage (SCDS) is a service provided by Research Computing Services that allows researchers to store restricted and confidential data. Collaboration with Level 4 data stored in SCDS is possible using ShareFile, a secure file sharing and collaboration tool by Citrix.&lt;br /&gt;
 &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Capacity&lt;br /&gt;
| 10 GB or more&lt;br /&gt;
|-&lt;br /&gt;
! Classification&lt;br /&gt;
| Level 4&lt;br /&gt;
|-&lt;br /&gt;
! Learn More&lt;br /&gt;
| Visit [https://it.ucalgary.ca/secure-computing-platform The SCDS Website]&lt;br /&gt;
|-&lt;br /&gt;
! Request Access&lt;br /&gt;
| Visit [https://ucalgary.service-now.com/kb_view.do?sysparm_article=KB0030163 ServiceNow to request access]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== ResearchFS ==&lt;br /&gt;
ResearchFS is a University of Calgary-hosted SMB/CIFS storage solution funded and operated by RCS. It is available by request to faculty and staff with active research data.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Capacity&lt;br /&gt;
| 1TB with quota increases available on request. &lt;br /&gt;
|-&lt;br /&gt;
! Classification&lt;br /&gt;
| Level 1 - 2&lt;br /&gt;
|-&lt;br /&gt;
! Request Access&lt;br /&gt;
| Visit [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=fe66b3a7db297300897e4b8b0b96199d ServiceNow to request access]&lt;br /&gt;
|}&lt;br /&gt;
=== Service Description ===&lt;br /&gt;
You may use ResearchFS to store your active research data files. ResearchFS is intended to be used as a research group or project share. ResearchFS is available on campus or off campus using the IT supported VPN client. Information on how to download and install the VPN client can be found here: https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=880e71071381ae006f3afbb2e144b05c (IT account login may be required).&lt;br /&gt;
All ResearchFS users must have a University of Calgary IT account.&lt;br /&gt;
&lt;br /&gt;
=== Data recovery ===&lt;br /&gt;
ResearchFS takes daily snapshots shortly after midnight and keeps them for 30 days. You should be able to recover a deleted file for up to 30 days, provided it was in your share overnight. If you create and delete a file within the same day, no snapshot of it will be available to recover. ResearchFS presents these snapshots through the Windows &#039;Previous Versions&#039; functionality. If you are not familiar with using this, or if you are on a Linux or macOS device, you can request a restore through ServiceNow.&lt;br /&gt;
&lt;br /&gt;
For backup, we replicate changes to a distant data center every hour. The storage hardware which hosts your data is located in the basement of the Math Sciences building and our backup is in the HRIC building, so in case of an on campus disaster, your data should be safe.&lt;br /&gt;
 &lt;br /&gt;
=== Support for ResearchFS ===&lt;br /&gt;
If you have questions, please contact the IT Support Centre.&lt;br /&gt;
: Mon – Fri: 8:30 am – 5:00 pm; Sat, Sun &amp;amp; holidays: 10:00 am – 2:00 pm.&lt;br /&gt;
: Live Chat: ucalgary.ca/it&lt;br /&gt;
: Email: itsupport@ucalgary.ca&lt;br /&gt;
: Phone: 403.220.5555&lt;br /&gt;
: In person: 773 Math Science&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= University of Calgary IT storage services =&lt;br /&gt;
&lt;br /&gt;
* Information Technologies Storage usage guide:&lt;br /&gt;
: https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=d785de4e1b3ed41422ba4158dc4bcbf1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== OneDrive for Business ==&lt;br /&gt;
OneDrive for Business is a storage solution provided by Microsoft and is available to all faculty and staff.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Capacity&lt;br /&gt;
| 5 TB with quota increases [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=438e6d8313896a0053f2d7b2e144b0b9 available on request].&lt;br /&gt;
|-&lt;br /&gt;
! Classification&lt;br /&gt;
| Level 1 - 4&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
You may use OneDrive for Business to store your personal and work-related files. Files stored within OneDrive are private to you by default, but you have the option to share them and collaborate with others. &lt;br /&gt;
OneDrive for Business cannot be used as a department or project share space. There is no group/lab offering with OneDrive. &lt;br /&gt;
&lt;br /&gt;
While OneDrive provides a secure and compliant location from an IT Security standpoint, it is not the most suitable location for data that the PI remains accountable for during the 5 years following completion of the study. This is not a security issue, but a data management issue.&lt;br /&gt;
&lt;br /&gt;
For example, if a study were using the personal OneDrive of one of the researchers to store all of its records and that researcher were to leave the university, the OneDrive account and its contents would be gone 30 days later.&lt;br /&gt;
&lt;br /&gt;
Microsoft provides automation capabilities for its O365 products. If you have a Windows machine, you can use the automation product ‘Flow’ to copy a file to a local file system whenever a new file is created on OneDrive.&lt;br /&gt;
&lt;br /&gt;
To back up data residing on ARC to your personal OneDrive allocation please see:  [[How to transfer data#rclone: rsync for cloud storage]]&lt;br /&gt;
&lt;br /&gt;
OneDrive requires Multi-Factor Authentication (MFA) enabled on your University of Calgary IT account. &lt;br /&gt;
&lt;br /&gt;
More information can be located in the following article https://ucalgary.service-now.com/kb_view.do?sysparm_article=KB0032351&lt;br /&gt;
&lt;br /&gt;
University of Calgary OneDrive data is reportedly hosted in Canada (Markham, Ontario).&lt;br /&gt;
&lt;br /&gt;
===Support for OneDrive===&lt;br /&gt;
If you have questions, please contact the [https://ucalgary.service-now.com/it UService Support Centre].&lt;br /&gt;
: Email: it@ucalgary.ca&lt;br /&gt;
: Phone: 403.210.9300 or 1.888.342.3802, Mon - Fri: 8:00 a.m. to noon and 1:00 p.m. to 4:30 p.m. (closed over the lunch hour)&lt;br /&gt;
: Walk-in service: Math Sciences 7th floor, Room 773, Tues - Thurs: 1:00 p.m. to 4:30 p.m.&lt;br /&gt;
&lt;br /&gt;
===Data recovery===&lt;br /&gt;
&lt;br /&gt;
===Other Resources===&lt;br /&gt;
For more information on OneDrive for Business:&lt;br /&gt;
* Operating Level of Agreement KB0032404 (https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=7f57bddcdb56a3047cab5068dc9619b6)&lt;br /&gt;
*OneDrive for Business Getting Started KB0032351 (https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=60994170db2da7487cab5068dc961900)&lt;br /&gt;
*If you are above 90% of your OneDrive quota, you can request an increase here: ( https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=438e6d8313896a0053f2d7b2e144b0b9) PLEASE NOTE: Microsoft will only increase an allocation while the Cloud Storage is more than 90% full. Please log into your O365 cloud account to review before making your request.&lt;br /&gt;
&lt;br /&gt;
Any questions regarding if data hosted on OneDrive is subject to US jurisdiction discovery or access should be directed to:&lt;br /&gt;
*https://cumming.ucalgary.ca/research-institutes/csm-research-services/legal-research-services (CSM researchers.)&lt;br /&gt;
*https://research.ucalgary.ca/contact/research-services (Not CSM Researchers)&lt;br /&gt;
*https://www.ucalgary.ca/legalservices/  (for teaching/learning – non research enquiries that make their way to you)&lt;br /&gt;
&lt;br /&gt;
==Office365 SharePoint for research groups==&lt;br /&gt;
&lt;br /&gt;
Details of this service are still to be determined.&lt;br /&gt;
&lt;br /&gt;
At some point in the future, researchers will be able to request an Office 365 SharePoint site for a group, &lt;br /&gt;
which could be considered a group cloud-sharing platform.&lt;br /&gt;
&lt;br /&gt;
* The official service page:&lt;br /&gt;
: https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=b55f2f72132f5240b5b4df82e144b085&lt;br /&gt;
&lt;br /&gt;
= Digital Research Alliance of Canada storage services =&lt;br /&gt;
&lt;br /&gt;
== Storage on the Alliance HPC clusters ==&lt;br /&gt;
&lt;br /&gt;
* Alliance Wiki article &amp;quot;Storage and file management&amp;quot;:&lt;br /&gt;
: https://docs.alliancecan.ca/wiki/Storage_and_file_management&lt;br /&gt;
&lt;br /&gt;
==Personal storage options==&lt;br /&gt;
For personal or level 1 data, you may use an external solution from the Alliance.&lt;br /&gt;
You must have an Alliance account to use the service.&lt;br /&gt;
It offers functionality similar to Dropbox or Google Drive. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;The Alliance NextCloud&#039;&#039;&#039;:&lt;br /&gt;
:https://nextcloud.computecanada.ca&lt;br /&gt;
: 100 GB of storage that can be shared between your computers.&lt;br /&gt;
: Alliance documentation: https://docs.alliancecan.ca/wiki/Nextcloud&lt;br /&gt;
&lt;br /&gt;
= Commercial Cloud Based Storage Options =&lt;br /&gt;
&lt;br /&gt;
== AWS ==&lt;br /&gt;
&lt;br /&gt;
Provided by &#039;&#039;&#039;Amazon Web Services, Inc.&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
* Pricing calculator: https://calculator.aws&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
AWS provides many different kinds of services, including &#039;&#039;&#039;storage services&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
These options can be a solution for your research needs, but: &lt;br /&gt;
* they &#039;&#039;&#039;can be expensive&#039;&#039;&#039;, depending on your needs and the amount of data;&lt;br /&gt;
* the &#039;&#039;&#039;pricing scheme is complex&#039;&#039;&#039; and can be confusing for new users;&lt;br /&gt;
* the &#039;&#039;&#039;number of options can be overwhelming&#039;&#039;&#039;. Many of the options are designed to provide pricing flexibility, not to increase functionality.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The key points:&lt;br /&gt;
* &#039;&#039;&#039;Uploading data&#039;&#039;&#039; to AWS is &#039;&#039;&#039;free&#039;&#039;&#039;.&lt;br /&gt;
* &#039;&#039;&#039;Storing data&#039;&#039;&#039; on AWS storage is a &#039;&#039;&#039;paid service&#039;&#039;&#039;.&lt;br /&gt;
* &#039;&#039;&#039;Downloading data&#039;&#039;&#039; from AWS storage to your computer is a &#039;&#039;&#039;paid service&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
= ARC Cluster Storage =&lt;br /&gt;
&lt;br /&gt;
ARC storage is used to &#039;&#039;&#039;support workflows on the ARC computing cluster&#039;&#039;&#039;. The expectation is that storage on ARC will only be used for active and upcoming computational projects. &lt;br /&gt;
It is not suitable for long-term or archival storage as it is not backed-up and is not guaranteed to be available for the time periods that are typical of archiving.&lt;br /&gt;
ARC is a research cluster, which means it has high performance but can be stopped for required maintenance when needed. &lt;br /&gt;
Thus, ARC cannot be relied on for any kind of service that requires constant availability. &lt;br /&gt;
This means, in turn, that ARC&#039;s storage cannot and &#039;&#039;&#039;should not be used as a main storage facility for research data&#039;&#039;&#039;. &lt;br /&gt;
The &#039;&#039;&#039;master copy&#039;&#039;&#039; of research data should be &#039;&#039;&#039;stored elsewhere&#039;&#039;&#039;, and only the portion of that data needed for computational analysis is expected to be copied to ARC.&lt;br /&gt;
&lt;br /&gt;
== Home Directories ==&lt;br /&gt;
Every user account on ARC has a static 500GB allocation of storage and a maximum of 1.5 million files (including directories). This cannot be increased or decreased. Home directory storage is connected via a network file system to the rest of the cluster and supports fast data transfer to memory on compute nodes. This also means that basic file system commands (like &amp;lt;code&amp;gt;ls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;find&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;du&amp;lt;/code&amp;gt;) take longer to run as the number of files in your home directory increases. In particular, we strongly encourage users to stay under 100000 files if it is at all possible. This can be achieved by combining smaller data files into single larger files, using structured data formats rather than large number of text files, or combining collections of files that will be used together into archives (tar, dar, etc). Since top level permissions on home directories are set to prevent other users from reading or executing, home directories are not suitable for sharing data directly with colleagues working on ARC. A Research Group Allocation is a more appropriate place for storing shared data or very large data sets that will be used as part of active computational projects.   &lt;br /&gt;
&lt;br /&gt;
== Research Group Allocations (&amp;lt;code&amp;gt;/work&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/bulk&amp;lt;/code&amp;gt;) ==&lt;br /&gt;
The principal investigator (PI) for a research group may request an extended shared allocation for the research group by contacting support@hpc.ucalgary.ca with answers to the following questions (please copy the full text of the questions into your email and write answers under it):&lt;br /&gt;
&lt;br /&gt;
* How much storage is requested and why is that the amount that you need? &lt;br /&gt;
A rationale for a request can be a formal data management plan, or something more informal such as a rough estimate of the size of the primary dataset used for a project and of the outputs expected from the computations you plan to run on ARC over the &#039;&#039;&#039;next year&#039;&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* What is the requested &#039;&#039;&#039;allocation name&#039;&#039;&#039;? (typically something like &amp;lt;PI name&amp;gt;_lab, &amp;lt;code&amp;gt;smith_lab&amp;lt;/code&amp;gt;, for example)&lt;br /&gt;
* What is the &#039;&#039;&#039;data classification&#039;&#039;&#039; using the University of Calgary data security classification system?&lt;br /&gt;
* Which user or users would be the &#039;&#039;&#039;owner&#039;&#039;&#039; of the allocation? (Full Name and UCalgary Email address, typically the requesting PI but there may be co-PIs)&lt;br /&gt;
* Which members of the allocation should be able to &#039;&#039;&#039;request access&#039;&#039;&#039; for new users? (Full Name and UCalgary Email address for active ARC users) &lt;br /&gt;
* What is the &#039;&#039;&#039;faculty&#039;&#039;&#039; of the owner or owners? &lt;br /&gt;
* Please provide a short description of the lab.&lt;br /&gt;
* Please provide a brief &#039;&#039;&#039;numerical estimate&#039;&#039;&#039; of the required storage space based on the &#039;&#039;&#039;projects&#039;&#039;&#039; that will use the allocation and their storage &#039;&#039;&#039;requirements&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example 1&#039;&#039;&#039;: &amp;quot;We will be processing a 1TB dataset by performing 100 experimental runs. &lt;br /&gt;
Each experiment will be processed to produce a 6GB output, giving 600GB of total output data. &lt;br /&gt;
We will also need 400GB additional space for post-processing and data management. &lt;br /&gt;
Thus, we would like to request &#039;&#039;&#039;2TB&#039;&#039;&#039; of shared space in total.&amp;quot;   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example 2&#039;&#039;&#039;: &amp;quot;Three members of our research group need additional shared space on ARC for their independent projects.&lt;br /&gt;
Project 1 starts with 100GB of initial data and is expected to generate 800GB of output results. &lt;br /&gt;
Project 2 is going to use simulations and does not use any input data, but is expected to generate 2TB of simulated data for further processing.&lt;br /&gt;
The processing will require 200GB of additional space.&lt;br /&gt;
Project 3 will be working on a 1TB dataset and is expected to generate about 1TB of output data. &lt;br /&gt;
These projects, therefore, will require 5.1TB of storage. &lt;br /&gt;
For convenience of data manipulation and management, we would also like an additional 400GB of storage space.&lt;br /&gt;
Therefore, we would like to request &#039;&#039;&#039;5.5TB&#039;&#039;&#039; of shared storage space in total.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Work and Bulk storage can be considerably larger than the home directory allocations. However, there are limits on what RCS can provide as ARC storage provides high-speed access and is expensive to purchase. Typically, &#039;&#039;&#039;any request over 10TB&#039;&#039;&#039; will require some discussion. Work and Bulk allocations differ in a few ways that influence how they are used. Work storage is faster to access as part of computational jobs on ARC although the impact is small for jobs that don&#039;t involve enormous numbers of reads. Bulk storage is designed to be a target for instrument data (which is typically processed in a way that reads data a small number of times per job) and is capable of mounting instruments elsewhere on campus using SMB. A number of questions come up frequently about Work and Bulk storage and these are addressed in an [[Group Storage Allocation FAQ | FAQ]].&lt;br /&gt;
&lt;br /&gt;
===== How to add a group member to the access list (&amp;lt;code&amp;gt;/work&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/bulk&amp;lt;/code&amp;gt;)? =====&lt;br /&gt;
&lt;br /&gt;
Any group member who wants to use the shared storage should send an email to support@hpc.ucalgary.ca to be added to the access group and CC the PI/data owner. &#039;&#039;&#039;This confirms that the PI approves the group member&#039;s request for access to the shared storage.&#039;&#039;&#039; Please note that the access permissions inside the directory are expected to be managed by the data owners.&lt;br /&gt;
&lt;br /&gt;
= Archive Storage =&lt;br /&gt;
&lt;br /&gt;
Archive storage for data sets supporting published research is available through the Federated Research Data Repository (FRDR).&lt;br /&gt;
FRDR is a bilingual publishing platform for sharing and preserving Canadian research data.&lt;br /&gt;
It is a curated, general-purpose repository, custom built for large datasets.&lt;br /&gt;
FRDR is run by the Digital Research Alliance of Canada.&lt;br /&gt;
&lt;br /&gt;
For more information on FRDR visit their web site:  https://www.frdr-dfdr.ca/repo/&lt;br /&gt;
&lt;br /&gt;
[[Category:Administration]]&lt;br /&gt;
{{Navbox Administration}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=Storage_Options&amp;diff=3495</id>
		<title>Storage Options</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=Storage_Options&amp;diff=3495"/>
		<updated>2024-08-13T17:26:53Z</updated>

		<summary type="html">&lt;p&gt;Lleung: /* Research Data Management */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= General =&lt;br /&gt;
&lt;br /&gt;
There are a few options researchers can take advantage of when storing their research data. &lt;br /&gt;
 &lt;br /&gt;
== Data Classification ==&lt;br /&gt;
Please review the different data classifications that are outlined by the [https://ucalgary.ca/legal-services/sites/default/files/teams/1/Standards-Legal-Information-Security-Classification-Standard.pdf Information Security Classification Standard]. There are 4 levels of data classification which are summarized in the table below.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Level&lt;br /&gt;
! Description&lt;br /&gt;
! Example&lt;br /&gt;
|-&lt;br /&gt;
| Level 1&lt;br /&gt;
| Public&lt;br /&gt;
|&lt;br /&gt;
* Reference data sets&lt;br /&gt;
* Published research data&lt;br /&gt;
|-&lt;br /&gt;
| Level 2&lt;br /&gt;
| Internal&lt;br /&gt;
|&lt;br /&gt;
* Internal memos&lt;br /&gt;
* Unpublished research data&lt;br /&gt;
* Anonymized or de-identified human subject data&lt;br /&gt;
* Library transactions and journals&lt;br /&gt;
|-&lt;br /&gt;
| Level 3&lt;br /&gt;
| Confidential&lt;br /&gt;
|&lt;br /&gt;
* Faculty/staff employment applications, personnel files, contact information&lt;br /&gt;
* Donor or prospective donor information&lt;br /&gt;
* Contracts&lt;br /&gt;
* Intellectual property&lt;br /&gt;
|-&lt;br /&gt;
| Level 4&lt;br /&gt;
| Restricted&lt;br /&gt;
|&lt;br /&gt;
* Patient identifiable health information&lt;br /&gt;
* Identifiable human subject research data&lt;br /&gt;
* Information subject to special government requirements&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
When selecting a storage option, you must use one that meets or exceeds the rated security classification.&lt;br /&gt;
&lt;br /&gt;
* See also the Collaboration, storage and file shares article in Service Now:&lt;br /&gt;
: https://ucalgary.service-now.com/it?id=it_catalog_by_category&amp;amp;sys_id=4dbb82ee13661200c524fc04e144b044&lt;br /&gt;
&lt;br /&gt;
== Research Data Management ==&lt;br /&gt;
We recommend you follow good Research Data Management (RDM) practices and ensure you have a Data Management Plan (DMP) created to guide your data&#039;s life-cycle.&lt;br /&gt;
&lt;br /&gt;
DMP Assistant has been created specifically for Canadian scholars and aims to meet Tri-Agency requirements. It is available at: https://assistant.portagenetwork.ca/&lt;br /&gt;
&lt;br /&gt;
Your DMP can help us support the FAIR (Findable, Accessible, Interoperable, and Reusable) principles for data management.&lt;br /&gt;
&lt;br /&gt;
Please consider contacting Libraries and Cultural Resources for assistance. For guidance on general data management and developing a DMP, consult https://library.ucalgary.ca/guides/researchdatamanagement or contact research.data@ucalgary.ca.&lt;br /&gt;
&lt;br /&gt;
For support using PRISM Dataverse, UofC&#039;s institutional data repository, contact digitize@ucalgary.ca.&lt;br /&gt;
&lt;br /&gt;
If you need to share and preserve your large post-publication data set for a mandated period of time, please visit https://www.frdr-dfdr.ca/repo/ in order to learn more about the national Federated Research Data Repository. &lt;br /&gt;
&lt;br /&gt;
FRDR aligns with the Tri-Agency Principles as a platform for Preservation, Retention and Sharing of research data. See: [http://www.science.gc.ca/eic/site/063.nsf/eng/h_83F7624E.html Tri-Agency Statement of Principles on Digital Data Management]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= University of Calgary RCS storage services =&lt;br /&gt;
&lt;br /&gt;
== Secure Compute Data Storage (SCDS) ==&lt;br /&gt;
Secure Compute Data Storage (SCDS) is a service provided by Research Computing Services that allows researchers to store restricted and confidential data. Collaboration with Level 4 data stored in SCDS is possible using ShareFile, a secure file sharing and collaboration tool by Citrix.&lt;br /&gt;
 &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Capacity&lt;br /&gt;
| 10 GB or more&lt;br /&gt;
|-&lt;br /&gt;
! Classification&lt;br /&gt;
| Level 4&lt;br /&gt;
|-&lt;br /&gt;
! Learn More&lt;br /&gt;
| Visit [https://it.ucalgary.ca/secure-computing-platform The SCDS Website]&lt;br /&gt;
|-&lt;br /&gt;
! Request Access&lt;br /&gt;
| Visit [https://ucalgary.service-now.com/kb_view.do?sysparm_article=KB0030163 ServiceNow to request access]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== ResearchFS ==&lt;br /&gt;
ResearchFS is a UofC-hosted SMB/CIFS storage solution funded and operated by RCS. It is available by request to faculty and staff with active research data.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Capacity&lt;br /&gt;
| 1TB with quota increases available on request. &lt;br /&gt;
|-&lt;br /&gt;
! Classification&lt;br /&gt;
| Level 1 - 2&lt;br /&gt;
|-&lt;br /&gt;
! Request Access&lt;br /&gt;
| Visit [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=fe66b3a7db297300897e4b8b0b96199d ServiceNow to request access]&lt;br /&gt;
|}&lt;br /&gt;
=== Service Description ===&lt;br /&gt;
You may use ResearchFS to store your active research data files. ResearchFS is intended to be used as a research group or project share. ResearchFS is available on campus or off campus using the IT supported VPN client. Information on how to download and install the VPN client can be found here: https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=880e71071381ae006f3afbb2e144b05c (IT account login may be required).&lt;br /&gt;
All ResearchFS users must have a UofC IT account.&lt;br /&gt;
&lt;br /&gt;
=== Data recovery ===&lt;br /&gt;
ResearchFS takes daily snapshots shortly after midnight and keeps them for 30 days. You should be able to recover a deleted file for up to 30 days, provided it was in your share overnight. If you create a file and delete it within the same day, no snapshot will be available for you to recover. ResearchFS presents backups using the Windows &#039;previous versions&#039; functionality. If you are not familiar with using this, or if you are on a Linux or macOS device, you can request a restore through ServiceNow.&lt;br /&gt;
&lt;br /&gt;
For backup, we replicate changes to a distant data center every hour. The storage hardware that hosts your data is located in the basement of the Math Sciences building and the backup is in the HRIC building, so in case of an on-campus disaster, your data should be safe.&lt;br /&gt;
 &lt;br /&gt;
=== Support for ResearchFS ===&lt;br /&gt;
If you have questions, please contact the IT Support Centre.&lt;br /&gt;
: Mon – Fri: 8:30 am – 5:00 pm; Sat, Sun &amp;amp; holidays: 10:00 am – 2:00 pm.&lt;br /&gt;
: Live Chat: ucalgary.ca/it&lt;br /&gt;
: Email: itsupport@ucalgary.ca&lt;br /&gt;
: Phone: 403.220.5555&lt;br /&gt;
: In person: 773 Math Science&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= University of Calgary IT storage services =&lt;br /&gt;
&lt;br /&gt;
* Information Technologies Storage usage guide:&lt;br /&gt;
: https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=d785de4e1b3ed41422ba4158dc4bcbf1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== OneDrive for Business ==&lt;br /&gt;
OneDrive for Business is a storage solution provided by Microsoft and is available to all faculty and staff.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Capacity&lt;br /&gt;
| 5 TB with quota increases [https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=438e6d8313896a0053f2d7b2e144b0b9 available on request].&lt;br /&gt;
|-&lt;br /&gt;
! Classification&lt;br /&gt;
| Level 1 - 4&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
You may use OneDrive for Business to store your personal and work-related files. &lt;br /&gt;
Files stored within OneDrive are private to you by default, but can optionally be shared with others for collaboration. &lt;br /&gt;
OneDrive for Business cannot be used as a department or project share space.&lt;br /&gt;
There is no group/lab offering with OneDrive. &lt;br /&gt;
&lt;br /&gt;
While OneDrive provides a secure and compliant location from an IT Security standpoint, &lt;br /&gt;
it is not an appropriate location for data that the PI remains accountable for during the 5 years following completion of a study. &lt;br /&gt;
This is not a security issue, but a data management issue.&lt;br /&gt;
&lt;br /&gt;
For example, if a study were using the personal OneDrive of one of the researchers to store all of its records, &lt;br /&gt;
and that researcher were to leave the university, the OneDrive contents would be gone 30 days later.&lt;br /&gt;
&lt;br /&gt;
Microsoft provides automation capabilities for its O365 products.&lt;br /&gt;
If you have a Windows machine, you can use the automation product &#039;Flow&#039; to copy a file to a local file system when a new file is created on OneDrive.&lt;br /&gt;
&lt;br /&gt;
To back up data residing on ARC to your personal OneDrive allocation please see:  [[How to transfer data#rclone: rsync for cloud storage]]&lt;br /&gt;
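&lt;br /&gt;
As a rough sketch only (the remote name &amp;lt;code&amp;gt;onedrive&amp;lt;/code&amp;gt; and the paths are assumptions; the linked article describes the supported setup), copying a results directory from ARC to OneDrive with rclone might look like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# One-time interactive setup of a OneDrive remote (named onedrive here).&lt;br /&gt;
rclone config&lt;br /&gt;
&lt;br /&gt;
# Copy a results directory from ARC to a folder in the OneDrive remote.&lt;br /&gt;
rclone copy ~/results onedrive:arc-backups/results --progress&lt;br /&gt;
&lt;br /&gt;
# Confirm what was transferred.&lt;br /&gt;
rclone ls onedrive:arc-backups/results&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;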
&lt;br /&gt;
OneDrive requires Multi-Factor Authentication (MFA) to be enabled on your University of Calgary IT account. &lt;br /&gt;
&lt;br /&gt;
More information can be located in the following article https://ucalgary.service-now.com/kb_view.do?sysparm_article=KB0032351&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
UofC OneDrive data is reportedly hosted in Canada (Markham, Ontario).&lt;br /&gt;
&lt;br /&gt;
===Support for OneDrive===&lt;br /&gt;
If you have questions, please contact the [https://ucalgary.service-now.com/it UService Support Centre].&lt;br /&gt;
: Email: it@ucalgary.ca&lt;br /&gt;
: Phone: 403.210.9300 or 1.888.342.3802, Mon - Fri: 8:00 a.m. to noon and 1:00 p.m. to 4:30 p.m. (closed over the lunch hour)&lt;br /&gt;
: Walk-in service: Math Sciences 7th floor, Room 773, Tues - Thurs: 1:00 p.m. to 4:30 p.m.&lt;br /&gt;
&lt;br /&gt;
===Data recovery===&lt;br /&gt;
&lt;br /&gt;
===Other Resources===&lt;br /&gt;
For more information on OneDrive for Business:&lt;br /&gt;
* Operating Level of Agreement KB0032404 (https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=7f57bddcdb56a3047cab5068dc9619b6)&lt;br /&gt;
*OneDrive for Business Getting Started KB0032351 (https://ucalgary.service-now.com/it?id=kb_article&amp;amp;sys_id=60994170db2da7487cab5068dc961900)&lt;br /&gt;
*If you are above 90% of your OneDrive quota, you can request an increase here: https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=438e6d8313896a0053f2d7b2e144b0b9 PLEASE NOTE: Microsoft will only increase an allocation while the cloud storage is more than 90% full. Please log in to your O365 cloud account to review your usage before making your request.&lt;br /&gt;
&lt;br /&gt;
Any questions regarding whether data hosted on OneDrive is subject to US jurisdiction discovery or access should be directed to:&lt;br /&gt;
*https://cumming.ucalgary.ca/research-institutes/csm-research-services/legal-research-services (CSM researchers.)&lt;br /&gt;
*https://research.ucalgary.ca/contact/research-services (Not CSM Researchers)&lt;br /&gt;
*https://www.ucalgary.ca/legalservices/  (for teaching/learning – non research enquiries that make their way to you)&lt;br /&gt;
&lt;br /&gt;
==Office365 SharePoint for research groups==&lt;br /&gt;
&lt;br /&gt;
To be determined....&lt;br /&gt;
&lt;br /&gt;
Researchers will be able to request an Office 365 SharePoint site for a group at some point in the future, &lt;br /&gt;
which could be considered a group cloud-sharing platform.&lt;br /&gt;
&lt;br /&gt;
* The official service page:&lt;br /&gt;
: https://ucalgary.service-now.com/it?id=sc_cat_item&amp;amp;sys_id=b55f2f72132f5240b5b4df82e144b085&lt;br /&gt;
&lt;br /&gt;
= Digital Research Alliance of Canada storage services =&lt;br /&gt;
&lt;br /&gt;
== Storage on the Alliance HPC clusters ==&lt;br /&gt;
&lt;br /&gt;
* Alliance Wiki article &amp;quot;Storage and file management&amp;quot;:&lt;br /&gt;
: https://docs.alliancecan.ca/wiki/Storage_and_file_management&lt;br /&gt;
&lt;br /&gt;
==Personal storage options==&lt;br /&gt;
For personal or Level 1 data, you may use an external solution from the Alliance.&lt;br /&gt;
You must have an Alliance account to use the service.&lt;br /&gt;
This is similar to Dropbox or Google Drive functionality. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;The Alliance NextCloud&#039;&#039;&#039;:&lt;br /&gt;
:https://nextcloud.computecanada.ca&lt;br /&gt;
: 100 GB of storage that can be shared between your computers.&lt;br /&gt;
: Alliance documentation: https://docs.alliancecan.ca/wiki/Nextcloud&lt;br /&gt;
&lt;br /&gt;
= Commercial Cloud Based Storage Options =&lt;br /&gt;
&lt;br /&gt;
== AWS ==&lt;br /&gt;
&lt;br /&gt;
Provided by &#039;&#039;&#039;Amazon Web Services, Inc.&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
* Pricing calculator: https://calculator.aws&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
AWS provides many different kinds of services, including &#039;&#039;&#039;storage services&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
These options can be a solution for your research needs, but &lt;br /&gt;
* they &#039;&#039;&#039;can be expensive&#039;&#039;&#039;, depending on your needs and the amount of data;&lt;br /&gt;
* the &#039;&#039;&#039;pricing scheme is complex&#039;&#039;&#039; and can be confusing for new users;&lt;br /&gt;
* the &#039;&#039;&#039;number of options can be overwhelming&#039;&#039;&#039;. Many of the options exist to provide pricing flexibility rather than additional functionality.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The key points:&lt;br /&gt;
* &#039;&#039;&#039;Uploading data&#039;&#039;&#039; to AWS is &#039;&#039;&#039;free&#039;&#039;&#039;.&lt;br /&gt;
* &#039;&#039;&#039;Storing data&#039;&#039;&#039; on AWS storage is a &#039;&#039;&#039;paid service&#039;&#039;&#039;.&lt;br /&gt;
* &#039;&#039;&#039;Downloading data&#039;&#039;&#039; from AWS storage to your computer is a &#039;&#039;&#039;paid service&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
= ARC Cluster Storage =&lt;br /&gt;
&lt;br /&gt;
ARC storage is used to &#039;&#039;&#039;support workflows on the ARC computing cluster&#039;&#039;&#039;. The expectation is that storage on ARC will only be used for active and upcoming computational projects. &lt;br /&gt;
It is not suitable for long-term or archival storage, as it is not backed up and is not guaranteed to be available for the time periods that are typical of archiving.&lt;br /&gt;
ARC is a research cluster, which means it has high performance but can be stopped for required maintenance when needed. &lt;br /&gt;
Thus, ARC cannot be relied on for any kind of service that requires constant availability. &lt;br /&gt;
This means, in turn, that ARC&#039;s storage cannot and &#039;&#039;&#039;should not be used as the main storage facility for research data&#039;&#039;&#039;. &lt;br /&gt;
The &#039;&#039;&#039;master copy&#039;&#039;&#039; of research data should be &#039;&#039;&#039;stored elsewhere&#039;&#039;&#039;, and only the part of that data needed for computational analysis should be copied to ARC.&lt;br /&gt;
&lt;br /&gt;
== Home Directories ==&lt;br /&gt;
Every user account on ARC has a static 500GB allocation of storage and a maximum of 1.5 million files (including directories). This cannot be increased or decreased. Home directory storage is connected via a network file system to the rest of the cluster and supports fast data transfer to memory on compute nodes. This also means that basic file system commands (like &amp;lt;code&amp;gt;ls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;find&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;du&amp;lt;/code&amp;gt;) take longer to run as the number of files in your home directory increases. In particular, we strongly encourage users to stay under 100,000 files if at all possible. This can be achieved by combining smaller data files into single larger files, using structured data formats rather than a large number of text files, or combining collections of files that will be used together into archives (tar, dar, etc.). Since top-level permissions on home directories are set to prevent other users from reading or executing, home directories are not suitable for sharing data directly with colleagues working on ARC. A Research Group Allocation is a more appropriate place for storing shared data or very large data sets that will be used as part of active computational projects.&lt;br /&gt;
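&lt;br /&gt;
To keep an eye on these limits, a quick check of your home directory from the command line might look like this (a rough sketch; note that &amp;lt;code&amp;gt;du&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;find&amp;lt;/code&amp;gt; themselves run slowly when there are many files):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Total size of your home directory (compare against the 500GB allocation).&lt;br /&gt;
du -sh ~&lt;br /&gt;
&lt;br /&gt;
# Number of files and directories (compare against the 1.5 million limit and the suggested 100,000 target).&lt;br /&gt;
find ~ | wc -l&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;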
&lt;br /&gt;
== Research Group Allocations (&amp;lt;code&amp;gt;/work&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/bulk&amp;lt;/code&amp;gt;) ==&lt;br /&gt;
The principal investigator (PI) for a research group may request an extended shared allocation for the research group by contacting support@hpc.ucalgary.ca with answers to the following questions (please copy the full text of the questions into your email and write answers under it):&lt;br /&gt;
&lt;br /&gt;
* How much storage is requested and why is that the amount that you need? &lt;br /&gt;
A rationale for a request can be a formal data management plan, or something more informal such as a rough estimate of the size of the primary dataset used for a project and of the outputs expected from the computations you plan to run on ARC over the &#039;&#039;&#039;next year&#039;&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* What is the requested &#039;&#039;&#039;allocation name&#039;&#039;&#039;? (typically something like &amp;lt;PI name&amp;gt;_lab, &amp;lt;code&amp;gt;smith_lab&amp;lt;/code&amp;gt;, for example)&lt;br /&gt;
* What is the &#039;&#039;&#039;data classification&#039;&#039;&#039; using the University of Calgary data security classification system?&lt;br /&gt;
* Which user or users would be the &#039;&#039;&#039;owner&#039;&#039;&#039; of the allocation? (Full Name and UCalgary Email address, typically the requesting PI but there may be co-PIs)&lt;br /&gt;
* Which members of the allocation should be able to &#039;&#039;&#039;request access&#039;&#039;&#039; for new users? (Full Name and UCalgary Email address for active ARC users) &lt;br /&gt;
* What is the &#039;&#039;&#039;faculty&#039;&#039;&#039; of the owner or owners? &lt;br /&gt;
* Please provide a short description of the lab.&lt;br /&gt;
* Please provide a brief &#039;&#039;&#039;numerical estimate&#039;&#039;&#039; of the required storage space based on the &#039;&#039;&#039;projects&#039;&#039;&#039; that will use the allocation and their storage &#039;&#039;&#039;requirements&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example 1&#039;&#039;&#039;: &amp;quot;We will be processing a 1TB dataset by performing 100 experimental runs. &lt;br /&gt;
Each experiment will be processed to produce a 6GB output, giving 600GB of total output data. &lt;br /&gt;
We will also need 400GB additional space for post-processing and data management. &lt;br /&gt;
Thus, we would like to request &#039;&#039;&#039;2TB&#039;&#039;&#039; of shared space in total.&amp;quot;   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example 2&#039;&#039;&#039;: &amp;quot;Three members of our research group need additional shared space on ARC for their independent projects.&lt;br /&gt;
Project 1 starts with 100GB of initial data and is expected to generate 800GB of output results. &lt;br /&gt;
Project 2 is going to use simulations and does not use any input data, but is expected to generate 2TB of simulated data for further processing.&lt;br /&gt;
The processing will require 200GB of additional space.&lt;br /&gt;
Project 3 will be working on a 1TB dataset and is expected to generate about 1TB of output data. &lt;br /&gt;
These projects, therefore, will require 5.1TB of storage. &lt;br /&gt;
For convenience of data manipulation and management, we would also like an additional 400GB of storage space.&lt;br /&gt;
Therefore, we would like to request &#039;&#039;&#039;5.5TB&#039;&#039;&#039; of shared storage space in total.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Work and Bulk storage can be considerably larger than the home directory allocations. However, there are limits on what RCS can provide as ARC storage provides high-speed access and is expensive to purchase. Typically, &#039;&#039;&#039;any request over 10TB&#039;&#039;&#039; will require some discussion. Work and Bulk allocations differ in a few ways that influence how they are used. Work storage is faster to access as part of computational jobs on ARC although the impact is small for jobs that don&#039;t involve enormous numbers of reads. Bulk storage is designed to be a target for instrument data (which is typically processed in a way that reads data a small number of times per job) and is capable of mounting instruments elsewhere on campus using SMB. A number of questions come up frequently about Work and Bulk storage and these are addressed in an [[Group Storage Allocation FAQ | FAQ]].&lt;br /&gt;
&lt;br /&gt;
===== How to add a group member to the access list (&amp;lt;code&amp;gt;/work&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/bulk&amp;lt;/code&amp;gt;)? =====&lt;br /&gt;
&lt;br /&gt;
Any group member who wants to use the shared storage should send an email to support@hpc.ucalgary.ca to be added to the access group and CC the PI/data owner. &#039;&#039;&#039;This confirms that the PI approves the group member&#039;s request for access to the shared storage.&#039;&#039;&#039; Please note that the access permissions inside the directory are expected to be managed by the data owners.&lt;br /&gt;
&lt;br /&gt;
= Archive Storage =&lt;br /&gt;
&lt;br /&gt;
Archive storage for data sets supporting published research is available through the Federated Research Data Repository (FRDR).&lt;br /&gt;
FRDR is a bilingual publishing platform for sharing and preserving Canadian research data.&lt;br /&gt;
It is a curated, general-purpose repository, custom built for large datasets.&lt;br /&gt;
FRDR is run by the Digital Research Alliance of Canada.&lt;br /&gt;
&lt;br /&gt;
For more information on FRDR visit their web site:  https://www.frdr-dfdr.ca/repo/&lt;br /&gt;
&lt;br /&gt;
[[Category:Administration]]&lt;br /&gt;
{{Navbox Administration}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=MARC_Cluster_Guide&amp;diff=3494</id>
		<title>MARC Cluster Guide</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=MARC_Cluster_Guide&amp;diff=3494"/>
		<updated>2024-08-13T17:10:49Z</updated>

		<summary type="html">&lt;p&gt;Lleung: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{MARC Cluster Status}}&lt;br /&gt;
{{Message Box&lt;br /&gt;
|title=Need Help or have other MARC Related Questions?&lt;br /&gt;
|message=For all general RCS related issues, questions, or comments, please contact us at support@hpc.ucalgary.ca.&lt;br /&gt;
|icon=Support Icon.png}}&lt;br /&gt;
&lt;br /&gt;
This guide gives an overview of the MARC (Medical Advanced Research Computing) cluster at the University of Calgary and is intended to be read by new account holders getting started on MARC. It covers MARC&#039;s restrictions, hardware, performance characteristics, and storage. &lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
MARC is a cluster of Linux-based computers that were purchased in 2019. It has been specifically designed to meet the security requirements for handling Level 3 and Level 4 classified data, as defined by the University of Calgary Information Security Classification Standard, which you can find here: https://www.ucalgary.ca/policies/files/policies/im010-03-security-standard_0.pdf&lt;br /&gt;
&lt;br /&gt;
To ensure the security of Level 3/4 data, MARC has implemented several restrictions:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Account Requirement&#039;&#039;&#039;: All users must have IT accounts.&lt;br /&gt;
* &#039;&#039;&#039;Project ID Requirement&#039;&#039;&#039;: To use MARC, a project ID is required. This project ID is the same number used on SCDS.&lt;br /&gt;
* &#039;&#039;&#039;SSH Access via Citrix&#039;&#039;&#039;: Access to MARC via SSH must be done through the IT Citrix system. Admin VPN access is neither sufficient nor necessary.&lt;br /&gt;
* &#039;&#039;&#039;Internet Access&#039;&#039;&#039;: Neither compute nodes nor login nodes have internet access.&lt;br /&gt;
* &#039;&#039;&#039;Data Ingestion via SCDS&#039;&#039;&#039;: All data must be transferred to MARC by first copying it to SCDS (Secure Compute Data Store) and then fetching it from SCDS to MARC.&lt;br /&gt;
* &#039;&#039;&#039;Data Retrieval via SCDS&#039;&#039;&#039;: Resulting data, such as analysis outputs, must be copied to SCDS and then fetched from SCDS to its intended destination using established methods.&lt;br /&gt;
* &#039;&#039;&#039;Data Auditing&#039;&#039;&#039;: All file accesses are logged for auditing purposes.&lt;br /&gt;
&lt;br /&gt;
These measures have been put in place to ensure the security and controlled handling of sensitive data on the MARC cluster and to prevent intentional or accidental data exfiltration.&lt;br /&gt;
&lt;br /&gt;
== Obtaining an account ==&lt;br /&gt;
Please refer to the [[how to get a MARC account]] page for more information on how to obtain and log in to MARC.&lt;br /&gt;
&lt;br /&gt;
== Hardware ==&lt;br /&gt;
MARC has compute nodes of two different varieties: &lt;br /&gt;
* 4 GPU (Graphics Processing Unit)-enabled nodes, each containing:&lt;br /&gt;
** 40 cores: each node has 2 sockets, and each socket has an Intel Xeon Gold 6148 20-core processor running at 2.4 GHz. &lt;br /&gt;
** The 40 cores on each compute node share about 750 GB of RAM (memory), but jobs should request no more than 753000 MB.&lt;br /&gt;
** Two Tesla V100-PCIE-16GB GPUs.&lt;br /&gt;
* 1 bigmem node:&lt;br /&gt;
** 80 cores: the node has 4 sockets, and each socket has an Intel Xeon Gold 6148 20-core processor running at 2.4 GHz. &lt;br /&gt;
** The 80 cores on the node share about 3 TB of RAM (memory), but jobs should request no more than 3000000 MB.&lt;br /&gt;
The hardware is broken out into three distinct partitions with the following restrictions.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Partition&lt;br /&gt;
!Purpose and description&lt;br /&gt;
!CPUs per Node&lt;br /&gt;
!GPUs per Node&lt;br /&gt;
!Memory per Node&lt;br /&gt;
|-&lt;br /&gt;
|cpu2019&lt;br /&gt;
|For non-GPU jobs; homogeneous nodes&lt;br /&gt;
|Up to 38&lt;br /&gt;
|No GPUs&lt;br /&gt;
|Up to 500GB&lt;br /&gt;
|-&lt;br /&gt;
|gpu2019&lt;br /&gt;
|For jobs requiring NVIDIA V100 GPUs&lt;br /&gt;
|Up to 40&lt;br /&gt;
|1 or 2 GPUs&lt;br /&gt;
|Up to 750GB&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|For very large memory jobs&lt;br /&gt;
|Up to 80&lt;br /&gt;
|No GPUs&lt;br /&gt;
|Up to 3TB&lt;br /&gt;
|}&lt;br /&gt;
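&lt;br /&gt;
As an illustrative sketch only (the resource numbers are placeholders chosen to fit the limits above; see [[Running jobs]] for how to submit jobs), a Slurm batch script targeting the &amp;lt;code&amp;gt;gpu2019&amp;lt;/code&amp;gt; partition might look like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=example-gpu-job&lt;br /&gt;
#SBATCH --partition=gpu2019        # one of cpu2019, gpu2019, bigmem&lt;br /&gt;
#SBATCH --gres=gpu:1               # request 1 of the 2 V100 GPUs on a gpu2019 node&lt;br /&gt;
#SBATCH --cpus-per-task=8&lt;br /&gt;
#SBATCH --mem=64000M               # stay well under the per-node memory limits above&lt;br /&gt;
#SBATCH --time=02:00:00&lt;br /&gt;
&lt;br /&gt;
# Replace with your actual analysis command.&lt;br /&gt;
python my_analysis.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;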
&lt;br /&gt;
== Storage ==&lt;br /&gt;
About a petabyte of raw disk storage is available to the MARC cluster, but for error checking and performance reasons, the amount of usable storage for researchers&#039; projects is considerably less than that.  From a user&#039;s perspective, the total amount of storage is less important than the individual storage limits.&lt;br /&gt;
&lt;br /&gt;
MARC storage is not accessible outside of MARC.&lt;br /&gt;
&lt;br /&gt;
=== Home file system: /home ===&lt;br /&gt;
Each account on MARC has a home directory under &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt; with a 25 GB per-user quota. This limit is fixed and cannot be increased. The home directory is intended for software, scripts, and configuration files and must contain Level 1/2 data only. Do not store patient-identifiable files (Level 3/4) in your home directory. Level 3/4 data is only appropriate under &amp;lt;code&amp;gt;/project&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Project file system: /project ===&lt;br /&gt;
All projects will have a directory under &amp;lt;code&amp;gt;/project&amp;lt;/code&amp;gt; named after your project ID, which is the same as your SCDS share name. All files related to a project should only be stored within its assigned &amp;lt;code&amp;gt;/project&amp;lt;/code&amp;gt; directory. Quotas in &amp;lt;code&amp;gt;/project&amp;lt;/code&amp;gt; are somewhat flexible and are assigned based on the project requirements. Please contact support@hpc.ucalgary.ca if you require additional space.&lt;br /&gt;
&lt;br /&gt;
The SCDS share and the project directory on MARC are two separate storage systems. Data transferred to the &amp;lt;code&amp;gt;/project&amp;lt;/code&amp;gt; directory is a second copy of the data on the SCDS share. On an SCDS share, deleting files will not free up space: the NetApp device that hosts the SCDS drive maintains a copy of the deleted files in ‘snapshot space’, so the quota is consumed by the files in the SCDS share plus the files in ‘snapshot space’. &lt;br /&gt;
&lt;br /&gt;
=== Temporary file system: /tmp ===&lt;br /&gt;
Each compute node has a temporary storage location available under &amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt; that is only accessible from within that node and for the running job. Data stored in &amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt; will be deleted immediately after the job terminates. It is suitable for all levels of data. &lt;br /&gt;
&lt;br /&gt;
Because the login node is a shared system, you must take care that data stored within &amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt; on the login node is restricted to only your account using the appropriate file modes (&amp;lt;code&amp;gt;chmod&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;chown&amp;lt;/code&amp;gt;/&amp;lt;code&amp;gt;chgrp&amp;lt;/code&amp;gt; commands) and deleted after use.&lt;br /&gt;
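&lt;br /&gt;
For example, a minimal sketch of keeping a temporary working directory on the login node private to your account (the directory name is a placeholder):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Create a private scratch directory under /tmp and remove group/other access.&lt;br /&gt;
mkdir /tmp/myusername_work&lt;br /&gt;
chmod 700 /tmp/myusername_work&lt;br /&gt;
&lt;br /&gt;
# Clean up as soon as you are done with it.&lt;br /&gt;
rm -rf /tmp/myusername_work&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;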
&lt;br /&gt;
== Software installations ==&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
&lt;br /&gt;
There are some complications in using Python on MARC relative to using ARC.  &lt;br /&gt;
&lt;br /&gt;
Normally, we would recommend installing Conda in the user&#039;s home directory. &lt;br /&gt;
On MARC, security requirements for working with Level 4 data require that we block outgoing and incoming internet connections. &lt;br /&gt;
As a result, new packages cannot be downloaded with conda. &lt;br /&gt;
&lt;br /&gt;
Depending on what you need, the two recommendations we can make are&lt;br /&gt;
 &lt;br /&gt;
* Download the standard anaconda distribution from the anaconda website to a personal computer: https://repo.anaconda.com/archive/Anaconda3-2020.07-Linux-x86_64.sh&lt;br /&gt;
** Transfer the script to MARC via SCDS&lt;br /&gt;
** Copy it to your /home directory&lt;br /&gt;
** Install it in your home directory with &amp;lt;code&amp;gt;bash Anaconda3-2020.07-Linux-x86_64.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
** You will be asked to agree to a license agreement and to confirm that you wish to create the folder &amp;lt;code&amp;gt;anaconda3&amp;lt;/code&amp;gt;. Once the installation completes, you will have a new directory &amp;lt;code&amp;gt;~/anaconda3&amp;lt;/code&amp;gt; under your home directory. To make the local conda instance usable, change the system path to include your local Python directories: &amp;lt;code&amp;gt;export PATH=~/anaconda3/bin:$PATH&amp;lt;/code&amp;gt; (see the sketch after this list).&lt;br /&gt;
* Download a Docker container with the software that you need, including Python (e.g. tensorflow-gpu)&lt;br /&gt;
** Transfer the container image to MARC via SCDS&lt;br /&gt;
** Copy it to your /home directory&lt;br /&gt;
** Run it with Singularity&lt;br /&gt;
&lt;br /&gt;
* Non-open-source software that requires a connection to a license server may require admin assistance to set up. Contact [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] for support.&lt;br /&gt;
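&lt;br /&gt;
Putting the first recommendation together, a minimal sketch of the offline installation and of the container route (the installer version, image name, and script name are placeholders; adjust to what you actually transferred via SCDS):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# After transferring the installer to MARC via SCDS and copying it to your home directory:&lt;br /&gt;
bash ~/Anaconda3-2020.07-Linux-x86_64.sh&lt;br /&gt;
&lt;br /&gt;
# Make the local conda instance available in the current session.&lt;br /&gt;
export PATH=~/anaconda3/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
# For the container-based route, run a container image transferred via SCDS with Singularity.&lt;br /&gt;
singularity exec ~/tensorflow-gpu.sif python my_analysis.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;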
&lt;br /&gt;
== Further Reading ==&lt;br /&gt;
See [[Running jobs]] for information on how to submit a job in the HPC cluster.&lt;br /&gt;
&lt;br /&gt;
[[Category:MARC]]&lt;br /&gt;
[[Category:Guides]]&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=How_to_get_a_MARC_account&amp;diff=3493</id>
		<title>How to get a MARC account</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=How_to_get_a_MARC_account&amp;diff=3493"/>
		<updated>2024-08-13T17:05:55Z</updated>

		<summary type="html">&lt;p&gt;Lleung: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the process of obtaining an account and how to log in to the MARC cluster.  If you are looking for information about the MARC cluster hardware, please see the [[Marc Cluster Guide]].&lt;br /&gt;
&lt;br /&gt;
== Obtaining a MARC account ==&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
* Access to MARC will only be granted to those with one or more [https://ucalgary.service-now.com/kb_view.do?sysparm_article=KB0030163 SCDS] project shares, as this is the only way to get data into MARC.&lt;br /&gt;
* Account applicants must have an &#039;&#039;&#039;active UCalgary IT account&#039;&#039;&#039;. For external researchers/collaborators, a [[External collaborators|General Associate]] account is required. &lt;br /&gt;
&lt;br /&gt;
=== Requesting a MARC account ===&lt;br /&gt;
# Navigate to the IT Homepage: [https://ucalgary.ca/it www.ucalgary.ca/it]&lt;br /&gt;
# Click Login in the upper right corner&lt;br /&gt;
# Click &amp;quot;Order Something&amp;quot;&lt;br /&gt;
# Click &amp;quot;Research Computing&amp;quot;&lt;br /&gt;
# Click &amp;quot;Medical Advanced Research Computing (MARC)&amp;quot;&lt;br /&gt;
# Select &amp;quot;Add Access&amp;quot; from the &amp;quot;What would you like to do?&amp;quot; box&lt;br /&gt;
# Choose your SCDS share from the next box&lt;br /&gt;
# Type a synopsis of the work you plan to do into the &amp;quot;Business Reason&amp;quot; box.&lt;br /&gt;
# You may leave the Additional Information box empty&lt;br /&gt;
# Someone should reply to you when your account is ready.&lt;br /&gt;
&lt;br /&gt;
== Logging in to MARC ==&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
In order to access MARC, please ensure that you have:&lt;br /&gt;
# Enabled MFA on your University of Calgary IT account.&lt;br /&gt;
# Requested and received your MARC account&lt;br /&gt;
&lt;br /&gt;
=== Accessing MARC ===&lt;br /&gt;
MARC can only be accessed via the Citrix NetScaler. You can access Citrix directly both from on-campus and off-campus without using the IT General VPN.&lt;br /&gt;
# Navigate to https://myappmf.ucalgary.ca&lt;br /&gt;
# You may need to log in with MFA.&lt;br /&gt;
&lt;br /&gt;
==== Add Applications ====&lt;br /&gt;
After logging in to Citrix, you will need to launch PuTTY to connect to MARC.&lt;br /&gt;
# On the Citrix NetScaler page, click on Apps in the top navigation menu.&lt;br /&gt;
# Click the PuTTY icon to start PuTTY [[File:RdhAddPutty.png|thumb|none]]&lt;br /&gt;
# After launching PuTTY, you should see the PuTTY configuration menu: [[File:RdhPuttyConfiguration.png|thumb|none]]&lt;br /&gt;
# Enter &amp;quot;&amp;lt;code&amp;gt;marc.ucalgary.ca&amp;lt;/code&amp;gt;&amp;quot; as the host name. Optionally, you may save this as the default setting (although the settings may not be persistent in this environment).&lt;br /&gt;
# Click Open.  You will be presented with a black terminal screen prompting for a username.[[File:RdhPuttyLogin.PNG|thumb|none]]&lt;br /&gt;
# Enter your University of Calgary IT username and hit enter.&lt;br /&gt;
# Enter your University of Calgary IT password when prompted.&lt;br /&gt;
# If the login is successful, you should see the MARC banner:&lt;br /&gt;
&amp;lt;PRE&amp;gt;&lt;br /&gt;
login as: myusername&lt;br /&gt;
myusername@marc.ucalgary.ca&#039;s password:&lt;br /&gt;
Last login: Thu Apr  9 13:12:21 2020&lt;br /&gt;
===========================================================================&lt;br /&gt;
                  ___________________________________&lt;br /&gt;
                      _   _    __     ____       __&lt;br /&gt;
                      /  /|    / |    /    )   /    )&lt;br /&gt;
                  ---/| /-|---/__|---/___ /---/------&lt;br /&gt;
                    / |/  |  /   |  /    |   /&lt;br /&gt;
                  _/__/___|_/____|_/_____|__(____/___&lt;br /&gt;
                         marc.ucalgary.ca&lt;br /&gt;
               Problems or deficiencies? Send email to:&lt;br /&gt;
                       support@hpc.ucalgary.ca&lt;br /&gt;
&lt;br /&gt;
===========================================================================&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
SYSTEM NOTICES:&lt;br /&gt;
[myusername@marc ~]$&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/PRE&amp;gt;&lt;br /&gt;
Now you may return to the [[Marc Cluster Guide]] page&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:MARC]]&lt;br /&gt;
[[Category:Guides]]&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=MARC_accounts&amp;diff=3492</id>
		<title>MARC accounts</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=MARC_accounts&amp;diff=3492"/>
		<updated>2024-08-13T16:43:38Z</updated>

		<summary type="html">&lt;p&gt;Lleung: Lleung moved page MARC accounts to How to get a MARC account&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[How to get a MARC account]]&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=How_to_get_a_MARC_account&amp;diff=3491</id>
		<title>How to get a MARC account</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=How_to_get_a_MARC_account&amp;diff=3491"/>
		<updated>2024-08-13T16:43:38Z</updated>

		<summary type="html">&lt;p&gt;Lleung: Lleung moved page MARC accounts to How to get a MARC account&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the process of obtaining an account and how to log in to the MARC cluster.  If you are looking for information about the MARC cluster hardware, please see the [[Marc_Cluster_Guide]].&lt;br /&gt;
&lt;br /&gt;
= Obtaining an Account =&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
* Access to MARC will only be granted to those with project shares on SCDS, as this is the only way to get data into MARC.&lt;br /&gt;
* Users must have a UC domain IT account, and an HR relationship type of Staff, Faculty, Student, General Associate or General Associate - Research collaborator. &lt;br /&gt;
* If you wish to add a new person to MARC or SCDS, and they do not appear in the ServiceNow person picker, this may be because the new account has not completed provisioning in ServiceNow, or their HR relationship type might not be in the categories above. Please contact HR for assistance with adjusting your collaborator&#039;s HR relationship type. Changes in HR-&amp;gt; Peoplesoft can take some time to complete and propagate to ServiceNow. Please contact it@ucalgary.ca for assistance with ServiceNow.&lt;br /&gt;
&lt;br /&gt;
== Procedure ==&lt;br /&gt;
# Navigate to the IT Homepage: www.ucalgary.ca/it&lt;br /&gt;
# Click Login in the upper right corner&lt;br /&gt;
# Click &amp;quot;Order Something&amp;quot;&lt;br /&gt;
# Click &amp;quot;Research Computing&amp;quot;&lt;br /&gt;
# Click &amp;quot;Medical Advanced Research Computing (MARC)&amp;quot;&lt;br /&gt;
# Select &amp;quot;Add Access&amp;quot; from the &amp;quot;What would you like to do?&amp;quot; box&lt;br /&gt;
# Choose your SCDS share from the next box&lt;br /&gt;
# Type a synopsis of the work you plan to do into the &amp;quot;Business Reason&amp;quot; box.&lt;br /&gt;
# You may leave the Additional Information box empty&lt;br /&gt;
# Someone should reply to you when your account is ready.&lt;br /&gt;
&lt;br /&gt;
= Logging in to MARC =&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
# You will need a Microsoft Authenticator second factor to access Marc.  This is the same Microsoft Authenticator that is used with an Office 365 account.&lt;br /&gt;
# VPN is NOT required but is acceptable when accessing Marc.&lt;br /&gt;
&lt;br /&gt;
== Procedure ==&lt;br /&gt;
=== Login to myappmf ===&lt;br /&gt;
# Point your browser to https://myappmf.ucalgary.ca [[File:RdhLoginScreen.png|thumb|center|Image of the myappmf.ucalgary.ca login screen.]]&lt;br /&gt;
# This will bring up the familiar login portal that is used in many other UofC apps. Enter your User Name. Use your UC domain credentials without the UC\ in front. (eg. jsmith)&lt;br /&gt;
# Enter your usual UC password.&lt;br /&gt;
# Click the Sign In button.&lt;br /&gt;
# A Microsoft authenticator request will be sent to your chosen second factor device.  This is the same as an Office 365 login.  Accept the authentication.&lt;br /&gt;
&lt;br /&gt;
=== Add Applications ===&lt;br /&gt;
# Once you have logged into myappmf click Apps at the top&lt;br /&gt;
# Click the Putty icon to start Putty [[File:RdhAddPutty.png|thumb|center]]&lt;br /&gt;
&lt;br /&gt;
=== Login to Marc with Putty ===&lt;br /&gt;
# Once Putty starts you will see the PuTTY Configuration screen: [[File:RdhPuttyConfiguration.png|thumb|center]]&lt;br /&gt;
# Enter &amp;quot;marc.ucalgary.ca&amp;quot; in the &amp;quot;Host Name&amp;quot; box&lt;br /&gt;
# If you want this to be default, click &amp;quot;Default Settings&amp;quot; then click the Save button beside it.&lt;br /&gt;
# Click Open.  You will be presented with a black terminal screen prompting for your UC username and password.[[File:RdhPuttyLogin.PNG|thumb|center]]&lt;br /&gt;
# The contents of the screen will look something like this: &lt;br /&gt;
&amp;lt;PRE&amp;gt;&lt;br /&gt;
login as: myusername&lt;br /&gt;
myusername@marc.ucalgary.ca&#039;s password:&lt;br /&gt;
Last login: Thu Apr  9 13:12:21 2020&lt;br /&gt;
===========================================================================&lt;br /&gt;
                  ___________________________________&lt;br /&gt;
                      _   _    __     ____       __&lt;br /&gt;
                      /  /|    / |    /    )   /    )&lt;br /&gt;
                  ---/| /-|---/__|---/___ /---/------&lt;br /&gt;
                    / |/  |  /   |  /    |   /&lt;br /&gt;
                  _/__/___|_/____|_/_____|__(____/___&lt;br /&gt;
                         marc.ucalgary.ca&lt;br /&gt;
               Problems or deficiencies? Send email to:&lt;br /&gt;
                       support@hpc.ucalgary.ca&lt;br /&gt;
&lt;br /&gt;
===========================================================================&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
SYSTEM NOTICES:&lt;br /&gt;
[myusername@marc ~]$&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/PRE&amp;gt;&lt;br /&gt;
Now you may return to the [[Marc Cluster Guide]] page&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:MARC]]&lt;br /&gt;
[[Category:Guides]]&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
	<entry>
		<id>https://rcs.ucalgary.ca/index.php?title=How_to_get_an_account&amp;diff=3490</id>
		<title>How to get an account</title>
		<link rel="alternate" type="text/html" href="https://rcs.ucalgary.ca/index.php?title=How_to_get_an_account&amp;diff=3490"/>
		<updated>2024-08-13T16:42:24Z</updated>

		<summary type="html">&lt;p&gt;Lleung: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;All eligible University of Calgary researchers may request an HPC account. Visiting or external collaborators must first obtain a General Associate account before they are eligible for an HPC account. Undergraduates must be confirmed by their research supervisor. In all cases, the applicant must have an &#039;&#039;&#039;active UCalgary IT account&#039;&#039;&#039; to be able to get access to our HPC systems. &lt;br /&gt;
&lt;br /&gt;
This process is only for production Level 1/2 HPC systems. &lt;br /&gt;
&lt;br /&gt;
* For teaching/learning applications and for undergraduates who need an account on TALC, please visit [[TALC Cluster|Teaching And Learning Cluster (TALC)]] for more information.&lt;br /&gt;
* For Level 3/4 (high-security) HPC applications, please apply for a [[MARC accounts|MARC]] account instead.&lt;br /&gt;
&lt;br /&gt;
Please refer to the following table for next steps: &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: left;&amp;quot; |I am a University of Calgary researcher or graduate student&lt;br /&gt;
|All University of Calgary researchers, including graduate students, can directly request an account on the ARC cluster.&lt;br /&gt;
&#039;&#039;&#039;Next steps&#039;&#039;&#039;: Please email the completed application form to support@hpc.ucalgary.ca.&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: left;&amp;quot; | I am not a University of Calgary researcher&lt;br /&gt;
|If you are not a University of Calgary researcher and have collaborative work that requires access to the ARC cluster, you must first obtain [[External collaborators |&#039;&#039;&#039;General Associate&#039;&#039;&#039;]] status with the University in order to obtain a UCalgary email account. You may apply for an ARC account only after obtaining your University of Calgary IT and email account.&lt;br /&gt;
&#039;&#039;&#039;Next steps&#039;&#039;&#039;: Obtain [[External collaborators |&#039;&#039;&#039;General Associate&#039;&#039;&#039;]] status, then email the completed application form to support@hpc.ucalgary.ca.&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: left;&amp;quot; |I am an undergraduate student&lt;br /&gt;
|If an undergraduate student is working for a research group and needs access to HPC infrastructure, their account must be confirmed by their research supervisor. Additionally, the supervisor must confirm that the nature of the research work the student is conducting is related to the supervisor&#039;s area of research and that the HPC infrastructure is necessary to facilitate this research.&lt;br /&gt;
Every undergraduate student who needs an account on ARC is still expected to submit their own application and answer all the questions &#039;&#039;&#039;on their own&#039;&#039;&#039;. If the student has difficulties answering the application questions, it may be too early to create an account, or the ARC environment may not be appropriate for their research. ARC is a production research system, and untrained users can potentially disrupt other researchers&#039; work.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Next steps&#039;&#039;&#039;: Please have undergraduate student email the completed application form to support@hpc.ucalgary.ca and cc&#039;d to the research supervisor. The research supervisor must then reply to support@hpc.ucalgary.ca with their approval.&lt;br /&gt;
|} &lt;br /&gt;
&lt;br /&gt;
To apply, please &#039;&#039;&#039;copy and paste&#039;&#039;&#039; the [[How to get an account#Application form|Application form]] below into an email message to [mailto:support@hpc.ucalgary.ca support@hpc.ucalgary.ca] and then respond to the questions in the text. &lt;br /&gt;
&lt;br /&gt;
Please also include the subsequent &#039;&#039;&#039;[[How to get an account#Clauses of understanding|clauses of understanding]]&#039;&#039;&#039; into your application as your agreement to these terms are mandatory.&lt;br /&gt;
&lt;br /&gt;
== ARC application form ==&lt;br /&gt;
&#039;&#039;&#039;About myself:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*What is your status with the University of Calgary? (Eg. undergraduate student, PhD student, MS student, postdoc, visiting researcher)&lt;br /&gt;
&lt;br /&gt;
*What research group do you work for?&lt;br /&gt;
*Who is your supervisor?&lt;br /&gt;
:(If you are a Principal Investigator yourself, please respond accordingly).&lt;br /&gt;
&lt;br /&gt;
*How did you learn about the ARC cluster?&lt;br /&gt;
&lt;br /&gt;
*Do you have any experience with &#039;&#039;&#039;Linux&#039;&#039;&#039;?&lt;br /&gt;
&lt;br /&gt;
*Have you used &#039;&#039;&#039;compute clusters&#039;&#039;&#039; before?&lt;br /&gt;
&lt;br /&gt;
*Does anybody else in your group use ARC for their work?&lt;br /&gt;
&lt;br /&gt;
*What &#039;&#039;&#039;shortcoming of your work computer&#039;&#039;&#039; are you trying to address by using a compute cluster? &#039;&#039;&#039;What is lacking&#039;&#039;&#039; on your computer that is required for your work?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;About the project(s) I am going to work on&#039;&#039;&#039;: &lt;br /&gt;
&lt;br /&gt;
*Please tell us briefly about the &#039;&#039;&#039;research topic&#039;&#039;&#039; you are going to be working on using ARC.&lt;br /&gt;
&lt;br /&gt;
*What are the &#039;&#039;&#039;data&#039;&#039;&#039; you are planning to work on? What &#039;&#039;&#039;form&#039;&#039;&#039; is it in?&lt;br /&gt;
&lt;br /&gt;
*What &#039;&#039;&#039;kind of analysis&#039;&#039;&#039; is it?&lt;br /&gt;
&lt;br /&gt;
*What &#039;&#039;&#039;software&#039;&#039;&#039; are you going to be using?&lt;br /&gt;
&lt;br /&gt;
*Do you have an estimate for the &#039;&#039;&#039;amount&#039;&#039;&#039; of work (please provide it, if known)?&lt;br /&gt;
&lt;br /&gt;
=== Clauses of understanding ===&lt;br /&gt;
By applying for an ARC account I certify &#039;&#039;&#039;I understand that&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
* The storage provided by the ARC cluster is only suitable for &#039;&#039;&#039;Level 1 and Level 2 data&#039;&#039;&#039;, as classified according to the University of Calgary Information &#039;&#039;&#039;Security Classification Standard&#039;&#039;&#039; (https://www.ucalgary.ca/legal-services/sites/default/files/teams/1/Standards-Legal-Information-Security-Classification-Standard.pdf)&lt;br /&gt;
* ARC&#039;s availability may be changed with little to no warning. While RCS takes precautions to avoid interrupting running jobs, there may be instances where interruptions occur, including power or network interruptions. RCS may also take nodes offline for regular maintenance, and node availability is subject to change.&lt;br /&gt;
* ARC&#039;s storage should not be used as your main storage facility for research data. Access to your data may be interrupted or temporarily unavailable when ARC is under maintenance. Data on ARC is not backed up. Your research group should ensure that the master copy of research data is stored elsewhere. We highly recommend that only data used for computational analysis on ARC reside on ARC.&lt;br /&gt;
* User accounts on ARC are subject to &#039;&#039;&#039;automatic deletion after 12 months of inactivity&#039;&#039;&#039;. Please log in periodically to prevent your account from being deleted. You will be notified before the account is deleted. Please note that when an account is deleted, &#039;&#039;&#039;all the data&#039;&#039;&#039; stored in the home directory of the account is &#039;&#039;&#039;deleted&#039;&#039;&#039; as well.&lt;br /&gt;
&lt;br /&gt;
== Book online training sessions ==&lt;br /&gt;
After obtaining your ARC account, you may [[book online training sessions]] with one of our analysts to get started with ARC.&lt;br /&gt;
&lt;br /&gt;
[[Category:Guides]]&lt;br /&gt;
[[Category:How-Tos]]&lt;br /&gt;
{{Navbox Guides}}&lt;/div&gt;</summary>
		<author><name>Lleung</name></author>
	</entry>
</feed>