Introduction to Concurrency

Alan Kay said OOP, as he envisioned it, was more about messaging than anything else. He's right. Computer scientists should have good mental models, and good programming models, for how objects behave and how they communicate with each other.

Examples of Concurrency

The real world contains actors that execute independently of, but communicate with, each other. In modeling the world, many parallel executions have to be composed and coordinated, and that's where the study of concurrency comes in. Consider:

Hardware examples:

Software examples:

Even more examples:

Modeling Concurrency

Architecture

I’m going to go out on a limb and suggest that there are three major styles of architecting a concurrent system.

Distributed

Multiple independent processes with no shared memory, communicating only via message passing.

Multi-Threaded

Multiple threads share memory, so they require locks or other synchronization. Generally, a single process can have multiple threads.

Event Queue

Only a single thread exists! A single loop reads from the event queue and invokes the handlers (see the sketch below).

In practice, some mixing of these styles can occur.
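To make the event-queue style concrete, here is a minimal single-threaded event loop sketched in Python (the event names and handlers are made up for illustration):

import queue

events = queue.Queue()
handlers = {"click": lambda data: print("clicked at", data)}

events.put(("click", (3, 4)))
events.put(("quit", None))

# the one and only thread: read an event, invoke its handler, repeat
while True:
    name, data = events.get()
    if name == "quit":
        break
    handlers[name](data)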

The “threads vs. events” issue is a big one. Which is better? Or when is one better than the other, if ever? Read this neutral analysis of both styles.

Generally speaking, threading requires that you use locks, mutexes, countdown latches, condition variables, semaphores, and similar things. But there do exist higher-level synchronization mechanisms like monitors and Ada-style protected objects.

Agents

In a distributed system, the processes can be considered to have a life of their own, in which case we sometimes call them actors or agents. High-level models for the agents include:

Actors

Communicate with each other via mailboxes

Goroutines

Communicate with each other via channels

Coroutines

Communicate by yielding to each other
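Here is a small sketch of the mailbox/channel idea from the list above, using Python's standard queue.Queue between two threads (the names and messages are purely illustrative; real actor systems and goroutines add much more structure):

import queue
import threading

mailbox = queue.Queue()               # plays the role of a mailbox (or channel)

def producer():
    for i in range(3):
        mailbox.put(f"message {i}")   # "send"
    mailbox.put(None)                 # conventional "no more messages" signal

def consumer():
    while True:
        msg = mailbox.get()           # "receive"; blocks until something arrives
        if msg is None:
            break
        print("got", msg)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()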

Theory

As always, a theory (an organized body of knowledge with predictive powers) helps us understand the state of the art and enables growth. People have created a number of process algebras and process calculi to describe concurrent systems mathematically.

Exercise: Read the linked article.

Definitions

As always, a working vocabulary helps.

The Concrete

Program
The source code for a process or processes.
Process
A unit of program execution as seen by an operating system. Processes tend to act like they are the only thing running. A process has its own address space, file handles, security attributes, threads, etc. The operating system prevents processes from messing with each other.
Thread
A sequential flow of control within a process. A process can contain one or more threads. Threads have their own program counter and register values, but they share the memory space and other resources of the process.
Multiprogramming / Multiprocessing
Concurrent execution of several programs on one computer.
Multithreading
Execution of a program with multiple threads.

The Abstract

Parallelism
Truly simultaneous execution or evaluation of things.
Concurrency
The coordination and management of independent lines of execution. These executions can be truly parallel or simply be managed by interleaving. They can communicate via shared memory or message passing. (Rob Pike's definition: “the composition of independently executing computations”)
Distribution
Concurrency in which all communication is via message passing (useful because shared memory communication doesn’t scale to thousands of processors).

The Pragmatic

Concurrent Programming
Solving a single problem by breaking it down into concurrently executing processes or threads.
Shared Resource
An entity used by more than one thread, but one that “should” be operated on by only one thread at a time.
Synchronization
The act of threads agreeing to coordinate their behavior in order to solve a problem correctly. (This usually means that thread $A$ will block and wait until thread $B$ performs some operation.)

Why Study Concurrency?

Because reasons.

Challenges

Naturally, concurrent programming is harder than sequential programming, because concurrent programming is about writing a bunch of communicating sequential programs. You end up with problems that don’t exist in sequential programs: race conditions, deadlock, starvation, and more.

Single-threaded event-based systems have issues, too. You need to make sure each chunk of code (a response to an event) always runs quickly and leaves everything in a consistent state.

Concerns

The field of concurrent programming is concerned with:

  • Modeling
  • Granularity
  • Scheduling
  • Communication
  • Synchronization
  • Temporal constraints
  • Language integration
  • Programming paradigms and patterns
  • Fault-tolerance, scale, and reliability
  • Correctness

A brief introduction of each follows.

Modeling

We need a formal way to talk about concurrent programming so that we can analyze requirements and design and implement correct and efficient algorithms. One of the most useful models used in reasoning about concurrent programs is the non-realtime interleaved execution model. This is:

The study of interleaved execution sequences of atomic instructions, where each instruction executes in a completely arbitrary but finite amount of time.

In this model we can make no assumptions at all about the relative speeds of the individual instructions, or how malicious a scheduler might be. Since instructions take arbitrary time, there can be many possible interleavings.

Example: Suppose thread-1 has instruction sequence $[A,B,C]$ and thread-2 has sequence $[x,y]$. Then we have to consider:

   [A B C x y]
   [A B x C y]
   [A B x y C]
   [A x B C y]
   [A x B y C]
   [A x y B C]
   [x A B C y]
   [x A B y C]
   [x A y B C]
   [x y A B C]
Exercise: Write out all interleavings of two threads, the first with sequence $[A,B,C]$ and the second with $[x,y,z]$.

The number of interleavings for two threads, one with $m$ instructions and one with $n$ instructions, is $\binom{m+n}{n}$, which you probably know is $\frac{(m+n)!}{m!\,n!}$.
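Here is a sketch that enumerates the interleavings of two instruction sequences and checks the count against the formula (assuming Python 3.8+ for math.comb):

from itertools import combinations
from math import comb

def interleavings(a, b):
    """All merges of a and b that preserve each sequence's internal order."""
    n = len(a) + len(b)
    result = []
    for positions in combinations(range(n), len(a)):   # slots that receive a's items
        slots, ai, bi = set(positions), iter(a), iter(b)
        result.append([next(ai) if i in slots else next(bi) for i in range(n)])
    return result

seqs = interleavings(["A", "B", "C"], ["x", "y"])
print(len(seqs))                     # 10
assert len(seqs) == comb(3 + 2, 2)   # matches the binomial formula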

Exercise: How many interleavings are possible with $n$ threads where thread $i$ has $k_i$ instructions?

Granularity

Systems can be classified by “where” the concurrency is expressed or implemented.

Instruction Level

Most processors have several execution units and can execute several instructions at the same time. Good compilers can reorder instructions to maximize instruction throughput. Often the processor itself can do this.

Example: On the old (ancient) Pentium microprocessor, the instruction sequence:
    inc ebx
    inc ecx
    inc edx
    mov esi,[x]
    mov eax,[ebx]

would be executed as follows:

    Step       U-pipe            V-pipe
    -----------------------------------------
      0        inc ebx           inc ecx
      1        inc edx           mov esi, [x]
      2        mov eax, [ebx]

Modern processors even parallelize execution of micro-steps of instructions within the same pipe.

Statement Level

Many programming languages have syntactic forms to express that statements should execute in sequence or in parallel. Common notations:

Pascal-style:
begin
   A;
   cobegin
      B;
      C;
   coend
   D;
   cobegin
      E;
      F;
      G;
   coend
end
Occam-style:

SEQ
   a
   PAR
      b
      c
   d
   PAR
      e
      f
      g

Algebraic:

a ; b || c ; d ; e || f || g
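Here is a sketch of how the cobegin/coend structure above might be expressed with an ordinary thread library, using Python threads (the functions A through G are stand-ins for the statements):

import threading

def run_in_parallel(*tasks):
    # cobegin ... coend: start every task, then wait for all of them
    threads = [threading.Thread(target=t) for t in tasks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

def make(name):
    return lambda: print(name)

A, B, C, D, E, F, G = (make(x) for x in "ABCDEFG")

A()                        # sequential
run_in_parallel(B, C)      # cobegin B; C coend
D()
run_in_parallel(E, F, G)   # cobegin E; F; G coend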

Procedure Level

Many languages have a thread type (or class) and a means to run a function on a thread. These things might also be called tasks. Alternatively there might be a library call to spawn a thread and execute a function on the thread.
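For example, in Python the thread class is threading.Thread, and any function can be run on one (the function and argument here are made up for illustration):

import threading

def work(n):
    print("working on", n)

t = threading.Thread(target=work, args=(42,))   # the thread object
t.start()    # work(42) now runs concurrently with the caller
t.join()     # wait for it to finish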

Program Level

The operating system runs processes concurrently. And this is cool: Many programming languages give you the ability to spawn processes from your program.

In Java:

Runtime.getRuntime().exec(commandline);

In Python:

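One common way, sketched with the standard subprocess module (the command shown is just an illustration):

import subprocess
subprocess.run(["echo", "hello from a child process"])   # spawn a child process and wait for it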

In C, using the Win32 API:

CreateProcess(NULL, commandline, NULL, NULL, FALSE, 0, NULL, NULL, &startup_info, &process_info);

Exercise: Show how to spawn a new process using the Unix system calls fork and execve.

Scheduling

Generally you'll have $M$ processors and $N$ threads. If $M < N$ you need a scheduler to interleave execution. A thread can be in one of several states; one example is:

[Figure: a three-state thread scheduling diagram]

Exercise: Win32 threads have six states. What are they?

Communication

The threads of control must communicate with each other. The two main ways to communicate are shared memory and message passing.

Synchronization

Threads sometimes have to coordinate their activities. Here is the overused classic example: two threads $A$ and $B$ are trying to make a $100 deposit. They both execute the sequence:

  1. Move the current value of the account into a register
  2. Add 100 to the register
  3. Write the value of the register back into memory

If $A$ executes its first statement and then $B$ executes its first statement before $A$ finishes its third statement, one of the deposits gets lost. There are dozens of programming paradigms to make sure threads can synchronize properly to avoid these problems. Roughly speaking, there are two approaches. The code that defines this “critical region” might use explicit acquisition and release of locks:

lock := acquire_a_lock()
register := read_balance()
register += 100
write_balance(register)
release_lock(lock)

or the synchronization may be more implicit:

critical_section do
  register := read_balance()
  register += 100
  write_balance(register)
end
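Here is a sketch of the deposit example with a lock, using Python's standard threading module (the account balance is just a module-level variable for illustration):

import threading

balance = 0
balance_lock = threading.Lock()

def deposit():
    global balance
    with balance_lock:        # acquired on entry, released on exit
        register = balance    # 1. read the current value
        register += 100       # 2. add 100
        balance = register    # 3. write it back

threads = [threading.Thread(target=deposit) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)                # 200; without the lock, an interleaving could lose a deposit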

Temporal Constraints

If a system has temporal constraints, such as “this operation must complete in 5 ms,” it is called a real-time system. (Real-time constraints are common in embedded systems.)

Language Integration

Many have argued whether a language should have direct support for concurrency and distribution or whether such support should come from a library.

Library or O.S. based:
.
.
.
int t1, t2;
.
.
.
t1 = createThread(myFunction);
.
.
.
start(t1);
    // now t1 runs in parallel
    // with the function that
    // called start().
    // Termination is
    // asynchronous, though
    // joining can be done with
    // a library call.

Language-intrinsic (Ada-style):

procedure P is
   .
   .
   .
   task T;
   .
   .
   .
begin   -- T activated implicitly here
   .
   .
   .   -- T and P execute in parallel
   .
   .
end P;  -- T and P terminate together

Advantages of the library or O.S. based approach:

  • Because different languages have different models of concurrency, language interfacing (multi-lingual development) may be easier.
  • A specific language model may not fit well with a particular O.S.
  • O.S. standards (e.g. POSIX) exist anyway, so portability may be easier to achieve.

Advantages of language-intrinsic support:

  • Code likely to be more reliable and maintainable since constructs are high level.
  • Lots of different operating systems exist, so code may be more portable.
  • An embedded computer might not even have an operating system.

Programming Paradigms and Patterns

What kind of patterns can you adopt that will help you to produce reliable concurrent code? There are too many to mention here, but here are some things to think about:

Fault-Tolerance, Scale, Reliability

This is a big topic. You might like to read about Erlang.

Correctness

Concurrent programs have to be correct for all possible interleavings. Naturally this makes testing more complicated, but it can be done.

Further Reading

We barely scratched the surface. There is so much more to learn.

Topics

How about some academic overviews of these fascinating topics?

Language Support

This is an embarrassingly short and incomplete list:

Exercise: Rust claims to eliminate data races. How does it do that?

Recall Practice

Here are some questions useful for your spaced repetition learning. Many of the answers are not found on this page. Some will have popped up in lecture. Others will require you to do your own research.

  1. How does concurrency differ from parallelism?
  2. Are coroutines an example of parallel computation? Why or why not?
  3. Explain the difference between the Erlang-style and the Go-style of process communication.
  4. Explain the difference between modeling concurrency with threads and modeling concurrency with an event queue.
  5. What is a race condition?
  6. What are deadlock and starvation?
  7. What do the terms safety and liveness refer to in a concurrent system?

Summary

We’ve covered:

  • What we mean by concurrency
  • Real-world examples (outside of software)
  • Architectural choices
  • Ways to model processes
  • Definitions of some terms
  • Why we should study this field
  • Granularity, Scheduling, Communication, Synchronization, and more
  • Where to find more information