Programming Languages/Concurrent Languages

Concurrent programming is a computer programming technique that provides for the execution of operations concurrently - either within a single computer, or across a number of systems. In the latter case, the term distributed computing is used. Multiprocessor machines achieve better performance by taking advantage of this kind of programming.

Concurrent programming is now practically the only way to make a CPU-intensive program run twice as fast. (Historically, another popular approach was to wait a couple of years for a CPU with twice the clock speed to appear, but that trend ended around 2003.)

In parallel programming, a single task is split into a number of subtasks that can be computed relatively independently and then aggregated to form a single coherent solution. Parallel programming is most effective for problems that can easily be broken down into independent subtasks, such as purely mathematical problems, e.g., factorisation.

One way to achieve parallel programming is through distributed computing, which is a method of information processing in which work is performed by separate computers linked through a communications network.
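As a concrete illustration of splitting work into independent subtasks, the sketch below farms out trial-division factorisation of several integers to a pool of worker processes. This is a minimal Python sketch; the helper name `prime_factors` and the sample numbers are chosen for illustration only.

```python
from concurrent.futures import ProcessPoolExecutor

def prime_factors(n):
    """Trial-division factorisation: an independent, purely mathematical subtask."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

if __name__ == "__main__":
    numbers = [9699690, 2**31 - 1, 600851475143]
    # Each number is factored independently in a separate worker process,
    # and the results are aggregated back into a single ordered list.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(prime_factors, numbers))
    for n, f in zip(numbers, results):
        print(n, "=", f)
```

Distributing the work across cores (or machines) is a speedup precisely because the subtasks share no state and only the final results need to be aggregated.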

Mutual Exclusion

 * Definition
 * Formally defined and solved by E.W. Dijkstra in 1965 [D65].


 * We have a collection of processes that alternate repeatedly between two sections of code: the non-critical section and the critical section (CS). The problem is to provide a synchronisation algorithm in the form of a trying and an exit protocol to be executed, respectively, immediately before and after the CS:


 * non-critical section
 * trying protocol
 * critical section
 * exit protocol


 * so that the following properties hold:


 * Mutual Exclusion
 * No two processes are in their CS at the same time.
 * Lockout-freedom
 * If a process enters the trying protocol, then it eventually enters the CS.
 * Absence of unnecessary delay
 * A process attempting to enter its critical section is prevented from doing so only by another process that is executing its critical section or by other processes attempting to enter their critical sections.
 * Bounded exit
 * If a process enters the exit protocol, then it enters the non-critical section within a bounded number of its own steps.
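The four-section structure above can be sketched with an off-the-shelf lock, whose acquire and release calls play the roles of the trying and exit protocols. This is a minimal Python sketch; the worker function and the shared list are illustrative, and the actual protocol is hidden inside the lock rather than spelled out.

```python
import threading

lock = threading.Lock()   # the trying/exit protocols are hidden inside the lock
shared = []               # touched only inside the critical section

def worker(pid):
    for step in range(3):
        item = (pid, step)    # non-critical section: purely local work
        lock.acquire()        # trying protocol
        shared.append(item)   # critical section
        lock.release()        # exit protocol

threads = [threading.Thread(target=worker, args=(p,)) for p in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(shared))
```

Because appends happen only between acquire and release, no two processes are in the critical section at once and no updates are lost.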

Solutions of Mutual Exclusion

 * According to [Y03], there are thousands of solutions to the mutual exclusion problem, some invented by clever experts, some found by brute-force search with a computer.


 * Here are some famous ones:


 * Dijkstra's algorithm
 * The first solution.
 * According to [K66][D67], this algorithm allows for the possibility that an individual computer, when contending for access, might have to wait indefinitely; that is, it is not lockout-free.

For ease of understanding, we begin with two processes, using a modified version of the original bakery algorithm:
 * Lamport's bakery algorithm
 * For 2 processes
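The two-process protocol analysed below can be sketched as follows, assuming the usual formulation in which p0 breaks ties with > and p1 with >=. Here turn[0] and turn[1] play the roles of turn0 and turn1 in the discussion, and the sketch relies on CPython's interpreter lock for sequentially consistent memory; on hardware with weaker memory ordering, busy-wait algorithms like this need memory barriers.

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # switch threads often to exercise interleavings

turn = [0, 0]   # turn[i] == 0 means process i is outside the protocol
counter = 0     # shared resource, touched only inside the critical section
N = 2000

def p0():
    global counter
    for _ in range(N):
        turn[0] = 1              # declare interest first (the subtle step)
        turn[0] = turn[1] + 1    # then take a number above p1's
        while turn[1] != 0 and turn[0] > turn[1]:
            pass                 # busy-wait: p1 holds an earlier number
        counter += 1             # critical section
        turn[0] = 0              # exit protocol

def p1():
    global counter
    for _ in range(N):
        turn[1] = 1
        turn[1] = turn[0] + 1
        while turn[0] != 0 and turn[1] >= turn[0]:
            pass                 # on a tie, p1 defers to p0 (>= versus >)
        counter += 1             # critical section
        turn[1] = 0              # exit protocol

t0 = threading.Thread(target=p0)
t1 = threading.Thread(target=p1)
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)  # 2 * N = 4000: no increments are lost
```

Note that the two wait conditions can never hold simultaneously, so the protocol cannot deadlock; the informal proof below walks through the possible interleavings.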


 * Informal proof
 * Suppose p0 is attempting to enter its critical section while p1 is in its non-critical section. Then, p0 gains access because turn1 is 0.
 * Suppose p0 is attempting to enter its critical section while p1 is in its critical section. Then, p0 will busy-wait because turn1 is not 0 and turn0 was set to be greater than turn1. When p1 finishes its critical section, it sets turn1 to 0. There are two subcases. In each case, p0 will enter its critical section.
 * If p1 is executing in its non-critical section when p0 tests turn1 (i.e., turn1 is 0), then p0 will enter its critical section.
 * On the other hand, p1 might finish its non-critical section and set turn1 to turn0+1 as part of its entry protocol. When p0 retests its condition, it will find it false because turn1 is now greater than turn0. So, p0 will enter its critical section.
 * Suppose that p0 and p1 begin execution of their entry protocols at about the same time and each completes their first assignment statement; so, each turn variable is 1. Suppose further that, say, p0 completes its second assignment statement and evaluates its condition before p1 executes its second assignment statement; so, turn0 is 2 and turn1 is 1. Then p0 will wait while p1 continues to execute and sets turn1 to 3. Note how one process setting its turn variable to 1 forces the other process to wait until the first process has finished picking its value for the turn variable.
 * Suppose p0 and p1 are as in the previous case. They complete their first assignment statements, which set each turn variable to 1. Suppose now that execution of the second assignment statement is interleaved at the machine instruction level. Then the following execution ordering can occur:
 * p0 reads turn1's value (1);
 * p1 reads turn0's value (1);
 * p0 adds 1 and stores the result (2) into turn0;
 * p1 adds 1 and stores the result (2) into turn1;
 * Although the turn variables have equal values, p1 will defer to p0 because its condition uses >= whereas p0's uses >.
 * Why the underlined code?
 * The necessity of the underlined statements in the original pseudocode (each process first setting its turn variable to 1 before computing its final value) might not be obvious.
 * See UCDAVIS: Bakery Algorithm for an in-depth discussion.
 * For N processes


 * In the pseudocode there is a comparison of the form: (number[j], j) < (number[i], i)
 * which is equivalent to: number[j] < number[i] or (number[j] = number[i] and j < i)
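The full N-process algorithm can be sketched as follows, assuming the standard formulation with a per-process choosing flag and ticket numbers, and a lexicographic (ticket, id) comparison for the waiting test mentioned above. As with the two-process sketch, this relies on CPython's interpreter lock for sequentially consistent memory; the process count and round count are illustrative.

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # switch threads often to exercise interleavings

N_PROC, ROUNDS = 3, 400
choosing = [False] * N_PROC   # True while a process is picking its number
number = [0] * N_PROC         # ticket numbers; 0 means not competing
counter = 0                   # shared resource protected by the algorithm

def process(i):
    global counter
    for _ in range(ROUNDS):
        # trying protocol: take a ticket one above the current maximum
        choosing[i] = True
        number[i] = 1 + max(number)
        choosing[i] = False
        for j in range(N_PROC):
            while choosing[j]:
                pass   # wait until j has finished picking its number
            # defer to j while it holds a smaller (ticket, id) pair
            while number[j] != 0 and (number[j], j) < (number[i], i):
                pass
        counter += 1      # critical section
        number[i] = 0     # exit protocol

threads = [threading.Thread(target=process, args=(i,)) for i in range(N_PROC)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # N_PROC * ROUNDS = 1200
```

Two processes may read the same maximum and take equal tickets; the tie is then broken deterministically by process id, which is exactly what the lexicographic comparison provides.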


 * Why call it the bakery algorithm?
 * Lamport envisioned a bakery with a numbering machine at its entrance, so that each customer is given a unique number. Numbers increase by one as customers enter the store. A global counter displays the number of the customer currently being served. All other customers must wait in a queue until the baker finishes serving the current customer and the next number is displayed. When done shopping, a customer gives up their number and may then do whatever they want, except shop again without taking a new number.

Concurrent programming in specific languages

 * Tasks in Ada
 * Threads in Java
 * Threads in C++