Parallelism

It speeds things up.

Motivation

Parallelism is multiple computations executing simultaneously: literally at the same time, not merely interleaved. It gives us performance and efficiency gains, unless the overhead of distributing the computation and gathering up the results outweighs them. It is important in modern computing, where multi-core processors and distributed systems are prevalent.

Notes are incomplete

Currently these notes can be found as scans of handwritten notes on BrightSpace.

Concurrency vs. Parallelism

First things first: parallelism is only one kind of concurrency. Concurrency is about structuring a program as multiple tasks whose lifetimes overlap, which even a single core can achieve by interleaving them; parallelism means the computations literally execute at the same time, which requires multiple processing units.
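To make the distinction concrete, here's a minimal C++ sketch (my own illustration, not from the lecture notes): two one-second tasks launched on separate threads. On a multi-core machine they run in parallel and the program takes about one second; a single core could only interleave them, taking about two.

```cpp
#include <chrono>
#include <future>
#include <iostream>

// A deliberately slow task: busy-waits for roughly one second.
int slow_task(int id) {
    auto end = std::chrono::steady_clock::now() + std::chrono::seconds(1);
    while (std::chrono::steady_clock::now() < end) { /* spin */ }
    return id;
}

int main() {
    auto start = std::chrono::steady_clock::now();

    // std::launch::async asks for a new thread, so on a multi-core
    // machine the two tasks can truly run at the same time (~1 s total);
    // run back to back they would take ~2 s.
    auto a = std::async(std::launch::async, slow_task, 1);
    auto b = std::async(std::launch::async, slow_task, 2);
    a.get();
    b.get();

    auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::steady_clock::now() - start);
    std::cout << "took " << elapsed.count() << " ms\n";
}
```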

TODO video

Parallel Algorithms

It’s fun to write parallel algorithms for the challenge and the pleasure of achieving speedup.

But it’s not easy! There are many challenges: data dependencies, synchronization, communication overhead, load balancing, scaling limitations, and fault tolerance.
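To make the synchronization challenge concrete, here is a small illustrative sketch (not from the notes): four threads increment a shared counter. The plain counter has a data race and typically loses updates; the atomic one is always correct.

```cpp
#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

int main() {
    const int kThreads = 4, kIncrements = 100000;

    long long racy = 0;              // unsynchronized shared counter
    std::atomic<long long> safe{0};  // synchronized counter

    std::vector<std::thread> workers;
    for (int t = 0; t < kThreads; ++t) {
        workers.emplace_back([&] {
            for (int i = 0; i < kIncrements; ++i) {
                racy += 1;  // data race: increments can be lost
                safe.fetch_add(1, std::memory_order_relaxed);  // always counted
            }
        });
    }
    for (auto& w : workers) w.join();

    // racy is typically less than 400000; safe is exactly 400000.
    std::cout << "racy: " << racy << "  safe: " << safe.load() << '\n';
}
```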

You can find some good sources at Wikipedia, in a book chapter, and in NVIDIA's C++ Parallel Algorithms.
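As a taste of the C++17 standard parallel algorithms mentioned above, a minimal sketch: passing an execution policy lets the library split the work across cores. (With GCC you typically also link TBB, e.g. -ltbb; the exact build flags depend on your toolchain.)

```cpp
#include <execution>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<double> v(10'000'000, 1.0);

    // Same algorithm as sequential std::reduce, but the par policy lets
    // the library split the range across cores. The combining operation
    // (here +) must be associative for the result to be well-defined.
    double total = std::reduce(std::execution::par, v.begin(), v.end(), 0.0);

    std::cout << total << '\n';  // 1e+07
}
```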

TODO classic algorithms that can be naturally parallelized

TODO Map Reduce - distributed but also parallel
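As a preview of the shape of MapReduce, here's a toy single-process word count (the canonical example, my own sketch): a map phase emits (word, 1) pairs and a reduce phase sums them by key. A real MapReduce system distributes both phases across many machines, but the two-phase structure is the same.

```cpp
#include <iostream>
#include <map>
#include <sstream>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> documents = {
        "the quick brown fox", "the lazy dog", "the fox"};

    // Map phase: emit (word, 1) for every word in every document.
    std::vector<std::pair<std::string, int>> emitted;
    for (const auto& doc : documents) {
        std::istringstream words(doc);
        for (std::string w; words >> w; ) emitted.push_back({w, 1});
    }

    // Shuffle + reduce phase: group pairs by key and sum the counts.
    std::map<std::string, int> counts;
    for (const auto& [word, n] : emitted) counts[word] += n;

    for (const auto& [word, n] : counts)
        std::cout << word << ": " << n << '\n';
}
```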

Parallel Complexity Theory

TODO copy over handwritten notes

TODO models: PRAM, DMM, Coarse grained multicomputer model
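One standard result worth recording meanwhile is the work-span bound (Brent's theorem), stated here from memory: with total work T_1 and critical-path length (span) T_infinity, a greedy scheduler on p processors is within a factor of two of optimal.

```latex
% Work-span bounds for p processors, with work T_1 and span T_\infty:
% no schedule can beat the lower bound, and a greedy schedule
% achieves the upper bound (Brent's theorem).
\max\!\left(\frac{T_1}{p},\; T_\infty\right)
  \;\le\; T_p \;\le\; \frac{T_1}{p} + T_\infty
```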

Parallel Hardware

TODO SIMD, Multicore, Vector machines, Connection machines, GPUs, TPUs

TODO x86 example
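Until that example is written up properly, here is a minimal x86 SIMD sketch with AVX intrinsics (a placeholder of my own, assuming an AVX-capable CPU and a flag like -mavx): one instruction adds eight floats at once.

```cpp
#include <immintrin.h>
#include <cstdio>

int main() {
    alignas(32) float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    alignas(32) float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    alignas(32) float c[8];

    __m256 va = _mm256_load_ps(a);       // load 8 floats into a 256-bit register
    __m256 vb = _mm256_load_ps(b);
    __m256 vc = _mm256_add_ps(va, vb);   // 8 additions in one SIMD instruction
    _mm256_store_ps(c, vc);

    for (float x : c) std::printf("%g ", x);  // all 9s
    std::printf("\n");
}
```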

TODO ARM example
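And the ARM counterpart using NEON intrinsics (again a placeholder sketch; assumes AArch64, where NEON is standard): the 128-bit registers hold four floats.

```cpp
#include <arm_neon.h>
#include <cstdio>

int main() {
    float a[4] = {1, 2, 3, 4};
    float b[4] = {4, 3, 2, 1};
    float c[4];

    float32x4_t va = vld1q_f32(a);       // load 4 floats into a 128-bit register
    float32x4_t vb = vld1q_f32(b);
    float32x4_t vc = vaddq_f32(va, vb);  // 4 additions at once
    vst1q_f32(c, vc);

    for (float x : c) std::printf("%g ", x);  // all 5s
    std::printf("\n");
}
```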

Parallel Programming Languages

A class of languages known as array languages is naturally parallelizable: programs are built from whole-array operations whose elements can be computed independently, so an implementation is free to evaluate them in parallel. TODO
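You can get a feel for the style without leaving C++: std::valarray (a sketch of my own choosing, not necessarily what this section will cover) expresses whole-array operations with no explicit loop, leaving the evaluation order unspecified.

```cpp
#include <iostream>
#include <valarray>

int main() {
    std::valarray<double> a = {1, 2, 3, 4};
    std::valarray<double> b = {10, 20, 30, 40};

    // Whole-array arithmetic: no loop, no stated evaluation order,
    // so the implementation may vectorize or parallelize it.
    std::valarray<double> c = 2.0 * a + b;

    for (double x : c) std::cout << x << ' ';  // 12 24 36 48
    std::cout << '\n';
}
```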

TODO Fortran Ada ParaSail Chapel

TODO Libraries and frameworks: OpenMP, MPI, CUDA, OpenCL, Cilk
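As a preview of the pragma-based style, a minimal OpenMP sketch (illustrative; compile with something like -fopenmp): one directive parallelizes the loop, and the reduction clause handles the shared sum safely.

```cpp
#include <omp.h>
#include <cstdio>
#include <vector>

int main() {
    const int n = 1'000'000;
    std::vector<double> x(n, 1.0);

    double sum = 0.0;
    // Iterations are split across threads; reduction(+:sum) gives each
    // thread a private partial sum and combines them safely at the end.
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; ++i) {
        sum += x[i];
    }

    std::printf("sum = %g (threads available: %d)\n", sum, omp_get_max_threads());
}
```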

TODO short research summary

Recall Practice

Here are some questions useful for your spaced repetition learning. Many of the answers are not found on this page. Some will have popped up in lecture. Others will require you to do your own research.

Summary

We’ve covered:

  • ...