Parallel Models

From RCSWiki



Revision as of 20:32, 17 March 2021

Parallel work has a long history in practical human endeavours. Rigorous reasoning about it dates back to roughly the 1950s with the development of job-shop and logistical scheduling theories and their implementation on and for parallel computers. A basic working knowledge of parallelism is valuable for learning how to work effectively on a computing cluster.

Introduction

It is easy to identify examples of parallelism in everyday life and scientific work:

  • Coordinated construction of large structures (e.g. Hadrian's Wall)
  • 12 bakers with 12 ovens baking 24 loaves of bread in the time it would take one baker to bake two
  • Multi-channel pipetting
  • Parallel computing

Working on modern computing clusters (such as ARC or Compute Canada sites) involves multiple kinds of resources and several scales of parallelism. Disentangling them requires some discussion of parallel models and how they map onto those resources:

  • Serial Computation
  • Shared Memory Parallelism
  • Distributed Memory Parallelism
  • Job Level Parallelism
  • GPU Parallelism
  • Hybrid Parallelism
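The contrast between the first two models above can be sketched in a few lines of Python. This is an illustrative example, not part of the original article: the function names are made up, and note that for pure-Python CPU work the interpreter's global lock limits the speedup threads can deliver, so the point here is the programming model (one execution stream versus several workers sharing one address space), not performance.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x: int) -> int:
    return x * x

def serial_sum_squares(n: int) -> int:
    # Serial computation: a single execution stream does all the work.
    return sum(square(i) for i in range(n))

def parallel_sum_squares(n: int, workers: int = 4) -> int:
    # Shared-memory parallelism: several threads share one address space,
    # and the pool hands each thread a portion of the input range.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(square, range(n)))
```

Both functions compute the same result; only the model of execution differs. Distributed-memory and job-level parallelism would instead split the range across separate processes or separate cluster jobs that communicate explicitly.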

References

  • Introductory Talk on Parallel Models and Scheduling: [[:File:Parallelism_and_Scheduling_v2.pdf|Parallelism and Scheduling Slides]]