Parallel work has a long history in practical human endeavours. Rigorous reasoning about it dates to roughly the 1950s, with the development of job-shop and logistical scheduling theory and its implementation on, and for, parallel computers. A basic working knowledge of parallelism is valuable for learning how to work effectively on a computing cluster.
It is easy to identify examples of parallelism in everyday life and scientific work:
- Coordinated construction of large structures (e.g. Hadrian's Wall)
- 12 bakers with 12 ovens baking 24 loaves of bread in the time it would take one baker to bake 2
- Multi-channel pipetting
- Parallel computing
Working on modern computing clusters (like ARC or Compute Canada sites) involves multiple kinds of resources and several scales of parallelism. Disentangling them requires some discussion of parallel programming models and how they map onto cluster resources:
- Serial Computation
- Shared Memory Parallelism
- Distributed Memory Parallelism
- Job Level Parallelism
- GPU Parallelism
- Hybrid Parallelism
Introductory Talk on Parallel Models and Scheduling