Introduction to Concurrency

Alan Kay said that OOP, as he envisioned it, was more about messaging than anything else. He's right. Computer scientists should have good mental models, and good programming models, for how objects behave and how they communicate with each other.

Examples of Concurrency

Computing systems model the world, and the world contains actors that execute independently of, but communicate with, each other. In modeling the world, many (possibly) parallel executions have to be composed and coordinated, and that's where the study of concurrency comes in. Consider:

Hardware examples:

Software examples:

Even more examples:

Modeling Concurrency

High-level models for the agents include:

There are probably two main philosophies at play here:

Read this neutral analysis of both styles.

Generally speaking, threading requires that you use locks, mutexes, countdown latches, condition variables, semaphores, and similar things. But there do exist higher-level synchronization mechanisms like monitors and Ada-style protected objects.
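For instance, Java's java.util.concurrent.CountDownLatch is one such countdown primitive; here is a minimal sketch in which the main thread blocks until three workers have finished:

    import java.util.concurrent.CountDownLatch;

    public class LatchDemo {
        public static void main(String[] args) throws InterruptedException {
            CountDownLatch done = new CountDownLatch(3);
            for (int i = 0; i < 3; i++) {
                new Thread(() -> {
                    // ... do some work, then signal completion ...
                    done.countDown();
                }).start();
            }
            done.await();   // blocks until the count reaches zero
            System.out.println("all workers finished");
        }
    }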


The Concrete

Program
The source code for a process or processes.
Process
A unit of program execution as seen by an operating system. Processes tend to act like they are the only thing running. A process has its own address space, file handles, security attributes, threads, etc. The operating system prevents processes from messing with each other.
Thread
A sequential flow of control within a process. A process can contain one or more threads. Threads have their own program counter and register values, but they share the memory space and other resources of the process.
Multiprogramming
Concurrent execution of several programs on one computer.
Multithreading
Execution of a program with multiple threads.

The Abstract

Parallelism
Truly simultaneous execution or evaluation of things.
Concurrency
The coordination and management of independent lines of execution. These executions can be truly parallel or simply be managed by interleaving. They can communicate via shared memory or message passing. (Rob Pike's definition: “the composition of independently executing computations”)
Distributed Computing
Concurrency in which all communication is via message passing (useful because shared memory communication doesn’t scale to thousands of processors).

The Pragmatic

Concurrent Programming
Solving a single problem by breaking it down into concurrently executing processes or threads.
Shared Resource
An entity used by more than one thread but one that (usually) can only be operated on by one thread at a time.
Synchronization
The act of threads agreeing to coordinate their behavior in order to solve a problem correctly. (This usually means that thread $A$ will block and wait until thread $B$ performs some operation.)

Why Study Concurrency?

Is this stuff hard?

Well, yeah. Concurrent programming is harder than sequential programming because concurrent programming is about writing a bunch of communicating sequential programs. You end up with a bunch of problems that don’t exist in sequential programs:

Single-threaded event-based systems have issues, too. You need to make sure each chunk of code (a response to an event) always runs quickly and leaves everything in a consistent state.


The field of concurrent programming is concerned with:


We need a formal way to talk about concurrent programming so that we can analyze requirements and design and implement correct and efficient algorithms. One of the most useful models used in reasoning about concurrent programs is the non-realtime interleaved execution model. This is:

The study of interleaved execution sequences of atomic instructions, where each instruction executes in a completely arbitrary but finite amount of time.

In this model we can make no assumptions at all about the relative speeds of the individual instructions, or how malicious a scheduler might be. Since instructions take arbitrary time, there can be many possible interleavings.

Example: Suppose thread-1 has instruction sequence $[A,B,C]$ and thread-2 has sequence $[x,y]$. Then we have to consider:

   [A B C x y]
   [A B x C y]
   [A B x y C]
   [A x B C y]
   [A x B y C]
   [A x y B C]
   [x A B C y]
   [x A B y C]
   [x A y B C]
   [x y A B C]
Exercise: Write out all interleavings of two threads, the first with sequence $[A,B,C]$ and the second with $[x,y,z]$.

The number of interleavings for two threads, one with $m$ instructions and one with $n$ instructions, is ${m+n \choose n}$, which you probably know is $\frac{(m+n)!}{m!\,n!}$.
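Here is a small recursive sketch, in Java and purely illustrative, that enumerates the interleavings of two instruction sequences and confirms the count:

    import java.util.ArrayList;
    import java.util.List;

    public class Interleavings {
        // All interleavings of the characters of a and b, preserving each string's own order.
        static List<String> interleave(String a, String b) {
            List<String> result = new ArrayList<>();
            if (a.isEmpty() || b.isEmpty()) {
                result.add(a + b);
                return result;
            }
            for (String rest : interleave(a.substring(1), b)) {
                result.add(a.charAt(0) + rest);   // thread 1's next instruction goes first
            }
            for (String rest : interleave(a, b.substring(1))) {
                result.add(b.charAt(0) + rest);   // or thread 2's does
            }
            return result;
        }

        public static void main(String[] args) {
            List<String> all = interleave("ABC", "xy");
            all.forEach(System.out::println);
            System.out.println(all.size());       // 10, which is C(5,2)
        }
    }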

Exercise: How many interleavings are possible with $n$ threads where thread $i$ has $k_i$ instructions?


Systems can be classified as to the level of concurrency that can be expressed or implemented.

Instruction Level

Most processors have several execution units and can execute several instructions at the same time. Good compilers can reorder instructions to maximize instruction throughput. Often the processor itself can do this.

Example: On the old Pentium microprocessor, the instruction sequence
    inc ebx
    inc ecx
    inc edx
    mov esi,[x]
    mov eax,[ebx]

would be executed as follows:

    Step       U-pipe            V-pipe
      0        inc ebx           inc ecx
      1        inc edx           mov esi, [x]
      2        mov eax, [ebx]

Modern processors even parallelize execution of micro-steps of instructions within the same pipe.

Statement Level

Many programming languages have syntactic forms to express that statements should execute in sequence or in parallel. Common notations:

  • Pascal-style: begin/end blocks for sequential composition, cobegin/coend for parallel composition
  • Occam-style: SEQ and PAR blocks
  • Algebraic: expressions such as a ; (b || c) ; d ; (e || f || g)
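Java has no parallel-statement syntax of this kind, but the algebraic composition above can be sketched with CompletableFuture; the methods a through g here are placeholders for arbitrary statements:

    import java.util.concurrent.CompletableFuture;

    public class StatementLevel {
        public static void main(String[] args) {
            a();                                            // a ;
            CompletableFuture.allOf(                        // (b || c) ;
                CompletableFuture.runAsync(StatementLevel::b),
                CompletableFuture.runAsync(StatementLevel::c)
            ).join();
            d();                                            // d ;
            CompletableFuture.allOf(                        // (e || f || g)
                CompletableFuture.runAsync(StatementLevel::e),
                CompletableFuture.runAsync(StatementLevel::f),
                CompletableFuture.runAsync(StatementLevel::g)
            ).join();
        }

        static void a() { System.out.println("a"); }
        static void b() { System.out.println("b"); }
        static void c() { System.out.println("c"); }
        static void d() { System.out.println("d"); }
        static void e() { System.out.println("e"); }
        static void f() { System.out.println("f"); }
        static void g() { System.out.println("g"); }
    }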

Procedure Level

Many languages have a thread type (or class) and a means to run a function on a thread. These things might also be called tasks. Alternatively there might be a library call to spawn a thread and execute a function on the thread.
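In Java, for example, a Thread object wraps a function to run concurrently; a minimal sketch:

    public class ProcedureLevel {
        public static void main(String[] args) throws InterruptedException {
            Thread worker = new Thread(() ->
                System.out.println("running on " + Thread.currentThread().getName()));
            worker.start();   // runs concurrently with main
            worker.join();    // wait for it to finish
        }
    }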

Program Level

Usually the responsibility of the operating system, which runs processes concurrently.

Many programming languages give you the ability to spawn processes from your program.

In Java
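One way is with java.lang.ProcessBuilder; a minimal sketch (the command shown is just an example):

    import java.io.IOException;

    public class SpawnProcess {
        public static void main(String[] args) throws IOException, InterruptedException {
            Process child = new ProcessBuilder("ls", "-l")
                .inheritIO()     // share stdin/stdout/stderr with the parent
                .start();        // the child process runs concurrently with this program
            System.out.println("child exited with " + child.waitFor());
        }
    }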


In C, using the Win32 API

CreateProcess(app_name, command_line, process_attrs, thread_attrs, inherit_handles,
              creation_flags, environment, current_dir, &startup_info, &process_info);
Exercise: Show how to spawn a new process using the Unix system calls fork and execve.


Generally you'll have $M$ processors and $N$ threads. If $M < N$ you need a scheduler to interleave execution. A thread can be in one of several states; one example is:
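Java, for instance, exposes a six-state lifecycle through the java.lang.Thread.State enum; a tiny program that prints the states:

    public class ThreadStates {
        public static void main(String[] args) {
            // NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING, TERMINATED
            for (Thread.State s : Thread.State.values()) {
                System.out.println(s);
            }
        }
    }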


Exercise: Win32 threads have six states. What are they?


The threads of control must communicate with each other. The two main ways to communicate are via shared memory and via message passing.
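Message passing between threads can be sketched in Java with a blocking queue acting as a mailbox (the names here are illustrative):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class MessagePassing {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<String> mailbox = new ArrayBlockingQueue<>(10);
            Thread producer = new Thread(() -> {
                try {
                    mailbox.put("hello");            // send a message
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            producer.start();
            System.out.println(mailbox.take());     // receive it; blocks until one is available
            producer.join();
        }
    }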


Threads sometimes have to coordinate their activities. Here is the overused classic example: two threads $A$ and $B$ each try to deposit 100 into the same account. They both execute the sequence:

  1. Move the current value of the account into a register
  2. Add 100 to the register
  3. Write the value of the register back into memory

If $A$ executes its first statement and then $B$ executes its first statement before $A$ finishes its third statement, one of the deposits gets lost. There are dozens of programming paradigms for making sure threads synchronize properly to avoid these problems.

The code to define this “critical region” might use explicit acquisition and release of locks:

lock := acquire_a_lock()
register := read_balance()
register += 100
write_balance(register)
release(lock)

or the synchronization may be more implicit:

critical_section do
  register := read_balance()
  register += 100
  write_balance(register)
end
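
In Java, for example, the critical region can be marked with the synchronized keyword, which gives monitor-style mutual exclusion; a minimal sketch:

    public class Account {
        private long balance = 0;

        public synchronized void deposit(long amount) {
            long register = balance;   // 1. read the current value
            register += amount;        // 2. add to it
            balance = register;        // 3. write it back
        }

        public static void main(String[] args) throws InterruptedException {
            Account account = new Account();
            Thread a = new Thread(() -> account.deposit(100));
            Thread b = new Thread(() -> account.deposit(100));
            a.start(); b.start();
            a.join(); b.join();
            System.out.println(account.balance);   // always 200, never 100
        }
    }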

Temporal Constraints

If a system has temporal constraints, such as “this operation must complete in 5 ms,” it is called a real-time system. (Real-time constraints are common in embedded systems.)

Support for Concurrency

Many have argued about whether a language should have direct support for concurrency and distribution or whether such support should come from a library.

Library or O.S. based:

    int t1, t2;
    t1 = createThread(myFunction);
    // now t1 runs in parallel with the
    // function that called createThread.
    // Termination is asynchronous, though
    // joining can be done with a library call.

Language-Intrinsic (Ada):

    procedure P is
       task T;
    begin   -- T activated implicitly here
       .    -- T and P execute in parallel
    end P;  -- T and P terminate together


Arguments for library-based (or O.S.-based) support:

  • Because different languages have different models of concurrency, language interfacing (multi-lingual development) may be easier.
  • A specific language model may not fit well with a particular O.S.
  • O.S. standards (e.g. POSIX) exist anyway, so portability may be achievable regardless.


Arguments for language-intrinsic support:

  • Code is likely to be more reliable and maintainable since the constructs are high level.
  • Lots of different operating systems exist, so code that does not depend on any one of them may be more portable.
  • An embedded computer might not even have an operating system.

Programming Paradigms and Patterns

What kinds of patterns can you adopt to help you produce reliable concurrent code? There are too many to mention here, but here are some things to think about:

Fault-Tolerance, Scale, Reliability

This is a big topic. You might like to read about Erlang.


Concurrent programs have to be correct for all possible interleavings. Naturally this makes testing more complicated, but it can be done.

Further Reading


Language Support

Exercise: Rust claims to eliminate data races. How does it do that?