Parallelism is multiple computations executing simultaneously, literally at the same time. It gives us performance and efficiency gains (unless the overhead of distributing the computation and gathering up the results is too high). It is important in modern computing systems, where multi-core processors and distributed systems are prevalent.
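The distribute-compute-gather pattern described above can be sketched in Python. This is a toy illustration, not from the course notes: the input is split into chunks, partial sums are computed concurrently, and the partials are combined at the end.

```python
# Toy fan-out/fan-in parallel sum: distribute chunks, compute partial sums
# concurrently, gather the results. Illustrative sketch only; in CPython,
# threads overlap I/O-bound work -- for CPU-bound speedup you would reach
# for ProcessPoolExecutor instead, at the cost of more distribution overhead.
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, workers=4):
    # Distribute: slice the input into roughly equal chunks.
    chunk = max(1, len(data) // workers)
    chunks = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    # Compute partial sums concurrently, then gather (the final reduce).
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(sum, chunks)
    return sum(partials)

print(parallel_sum(range(1000)))  # 499500, same as sum(range(1000))
```

Note the caveat from the paragraph above: for small inputs, spawning workers and gathering results costs more than the computation itself, so the sequential version wins.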
Notes are incomplete. Currently these notes can be found as scans of handwritten notes on BrightSpace.
First things first, parallelism is only one kind of concurrency.
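To make the distinction concrete, here is a sketch (my own illustration, using Python's `asyncio`) of concurrency *without* parallelism: two tasks interleave on a single thread, so both make progress "at once" even though nothing executes simultaneously.

```python
# Concurrency without parallelism: two coroutines interleave on one thread.
# The event loop switches between them at each await; no two steps ever run
# at the same instant, yet both tasks advance together.
import asyncio

async def worker(name, log):
    for i in range(3):
        log.append(f"{name}:{i}")
        await asyncio.sleep(0)   # yield control back to the event loop

async def main():
    log = []
    await asyncio.gather(worker("A", log), worker("B", log))
    return log

print(asyncio.run(main()))  # entries from A and B interleave
```

Parallelism, by contrast, would require the two workers to execute on separate cores at the same time.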
TODO video
It’s fun to write parallel algorithms for the challenge and the pleasure of achieving speedup.
But it’s not easy! There are many challenges: data dependencies, synchronization, communication overhead, load balancing, scaling limitations, and fault tolerance.
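One of the challenges listed above, synchronization, shows up in even the simplest shared-state program. The sketch below (mine, not from the notes) increments a shared counter from several threads: the increment is a read-modify-write, so without a lock updates can interleave and be lost.

```python
# A classic synchronization hazard: a data race on a shared counter.
# counter += 1 is a read-modify-write; unsynchronized threads can
# interleave those steps and lose increments. The lock makes each
# increment atomic, so the final total is always correct.
import threading

counter = 0
lock = threading.Lock()

def add(n):
    global counter
    for _ in range(n):
        with lock:            # remove this and increments may be lost
            counter += 1

threads = [threading.Thread(target=add, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 with the lock; possibly less without it
```

The lock buys correctness at the price of communication overhead: the threads now serialize on that one line, which is exactly the tension between synchronization and speedup described above.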
You can find some good sources: Wikipedia, a book chapter, and NVIDIA's C++ Parallel Algorithms.
TODO classic algorithms that can be naturally parallelized
TODO Map Reduce - distributed but also parallel
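As a placeholder until the MapReduce notes are filled in, here is a minimal single-process sketch of the map-shuffle-reduce structure (word count, the standard example). A real framework runs each phase distributed across machines; this only shows the shape of the computation.

```python
# Minimal MapReduce-style word count (single process, illustrative only):
# map emits (word, 1) pairs, shuffle groups pairs by key, reduce sums
# each group. In a real system each phase is distributed and parallel.
from collections import defaultdict

def map_phase(doc):
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

docs = ["to be or not to be", "to do is to be"]
pairs = [p for doc in docs for p in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts["to"])  # 4
```

Because the map calls are independent and each reduce group is independent, both phases parallelize naturally; only the shuffle requires communication.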
TODO copy over handwritten notes
TODO models: PRAM, DMM, Coarse grained multicomputer model
TODO SIMD, Multicore, Vector machines, Connection machines, GPUs, TPUs
TODO x86 example
TODO ARM example
A class of languages known as array languages is naturally parallelizable. TODO
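A quick taste of the array-language style, sketched in NumPy (assuming NumPy as a stand-in for languages like APL or J): whole-array operations carry no explicit loops, so the runtime is free to execute them element-wise in parallel on SIMD units, multiple cores, or a GPU.

```python
# Array-language style: whole-array operations with no explicit loops.
# Each line below applies to every element at once, which is what makes
# this style naturally parallelizable.
import numpy as np

a = np.arange(8)            # [0 1 2 3 4 5 6 7]
b = a * 2                   # one elementwise multiply over the whole array
mask = b > 6                # elementwise comparison, also loop-free
total = int(b[mask].sum())  # select-and-reduce in array notation

print(total)  # 8 + 10 + 12 + 14 = 44
```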
TODO Fortran Ada ParaSail Chapel
TODO Libraries and frameworks: OpenMP, MPI, CUDA, OpenCL, Cilk
Here are some questions useful for your spaced repetition learning. Many of the answers are not found on this page. Some will have popped up in lecture. Others will require you to do your own research.
We’ve covered: