INF214 notes - Concurrent Programming
November 20, 2020
Marie Heggebakk
The Concurrent Computing Landscape
- Multiple tasks that need to be done
- Execute one at a time on a single processor (a road)
- Execute in parallel on multiple processors (lanes in a road)
- Execute on distributed processors (separate roads)
The Essence of Concurrent Programming
- A concurrent program contains two or more processes that work together to perform a task.
- The processes work together by communicating with each other.
- The communication is programmed using shared variables or message passing.
- Processes need to synchronize with each other
- Mutual exclusion
- Ensuring critical sections of statements do not execute at the same time
- Condition synchronization
- Delaying a process until a given condition is true
Hardware Architectures
Summarizes key attributes of modern computer architectures
Processors and Caches
- A modern single-processor machine contains several components
- A central processing unit (CPU)
- Primary memory
- One or more levels of cache memory
- Secondary (disk) storage
- Variety of peripheral devices
Shared-Memory Multiprocessors
- The processors and memory modules are connected by means of an
interconnection network
- The processors share the primary memory, but each processor has its own cache memory
- Memory consistency models:
- Sequential consistency: Guarantees that memory updates will
appear to occur in some sequential order, and that every
processor will see the same order
- The strongest model
- Processor consistency: Ensures that the writes by each
processor will occur in memory in the order they are issued by
the processor, but writes issued by different processors might
be seen in different orders by other processors.
- Weaker model
- Release consistency: Ensures only that the primary memory is
updated at programmer-specified synchronization points.
- Even weaker model
- Stronger memory consistency models ease programming, but they are costly to implement and make a machine slower (see the litmus test below).
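A classic litmus test makes the difference concrete. In the Java sketch below (the class and variable names are ours), each thread writes one variable and then reads the other. Under sequential consistency at most one read can return 0, but on weaker models (and under the Java memory model, since the fields are not volatile) both reads may return 0.

```java
// Store-buffer litmus test: r1 == 0 && r2 == 0 is impossible under
// sequential consistency, but allowed by weaker consistency models.
class StoreBufferLitmus {
    static int x = 0, y = 0;
    static int r1, r2;

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { x = 1; r1 = y; });
        Thread t2 = new Thread(() -> { y = 1; r2 = x; });
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println("r1=" + r1 + " r2=" + r2); // r1=0 r2=0 possible
    }
}
```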
Distributed-Memory Multicomputers and Networks
- The interconnection network supports message passing rather than reading and writing.
- The processors communicate by sending and receiving messages.
- Multicomputer: A distributed-memory multiprocessor in which the processors and network are physically close to each other.
- Network system: A distributed-memory multiprocessor in which nodes are connected by a local area communication network.
Applications and Programming Styles
- A multithreaded software system manages multiple independent
activities such as:
- Window systems on personal computers or workstations
- Time-shared and multiprocessor operating systems
- Real-time systems that control power plants, spacecraft, and so on.
- Concurrent programming applications:
- Iterative parallelism: When a program has several, often identical processes, where each process is an iterative program.
- Recursive parallelism: When a program has one or more recursive procedures and the procedure calls are independent
- Producers and consumers: Communicating processes, often organized into a pipeline through which information flows. Each process in a pipeline is a filter that consumes the output of its predecessor and produces output for its successor.
- Clients and servers: The dominant interaction pattern in distributed systems.
- Interactive peers: Occurs in distributed programs when there are several processes that execute basically the same code and that exchange messages to accomplish a task.
Iterative Parallelism: Matrix Multiplication
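A minimal Java sketch of this example (our sketch, assuming square double matrices): the inner products for different rows of the result are independent, so each row is computed by its own thread, mirroring a `co` statement with one process per row.

```java
class ParallelMatMul {
    // Multiply two n-by-n matrices; one thread per row of the result.
    static double[][] multiply(double[][] a, double[][] b)
            throws InterruptedException {
        int n = a.length;
        double[][] c = new double[n][n];
        Thread[] workers = new Thread[n];
        for (int i = 0; i < n; i++) {
            final int row = i;                    // each thread owns one row
            workers[i] = new Thread(() -> {
                for (int j = 0; j < n; j++) {
                    double sum = 0.0;
                    for (int k = 0; k < n; k++)
                        sum += a[row][k] * b[k][j];
                    c[row][j] = sum;              // rows of c are disjoint: no races
                }
            });
            workers[i].start();
        }
        for (Thread w : workers) w.join();        // wait for all row processes
        return c;
    }
}
```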
Summary of Programming Notation
Declarations
Sequential Statements
Concurrent Statements, Processes, and Procedures
Comments
Processes and Synchronization
States, Actions, Histories, and Properties
- A concurrent program begins execution in some initial state. Each process in the program executes independently, and as it executes it examines and alters the program state
- Atomic action: An action that indivisibly examines or changes the program state
- Critical sections: Mutual exclusion is concerned with combining atomic actions that are implemented directly by hardware into sequences of actions called critical sections.
- Safety property: The program never enters a bad state
- Liveness property: The program eventually enters a good state
- Partial correctness: If the final state is correct, assuming that the program terminates (safety property)
- Termination: A program terminates if every loop and procedure call terminates (liveness property)
- Total correctness: Combined partial correctness and termination. If a program always terminates with a correct answer.
Parallelization: Finding Patterns in a File
- By using “while inside a co” the processes are created only once,
rather than on each loop iteration.
- Two buffers have to be used, so that reading the next line can overlap with searching the current one (see the sketch below)
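A Java approximation of this pipeline (the names GrepPipeline, reader, and pattern are our assumptions): a bounded queue of capacity 2 plays the role of the two buffers, both threads are created once with `while` loops inside, and the reader can stay one line ahead of the searcher.

```java
import java.io.*;
import java.util.concurrent.*;

class GrepPipeline {
    static final String EOF = new String();      // unique sentinel object

    // Hypothetical entry point: 'reader' and 'pattern' are assumed inputs.
    static void search(BufferedReader reader, String pattern)
            throws InterruptedException {
        // Capacity 2 stands in for the two buffers: the reader can fetch
        // the next line while the searcher scans the current one.
        BlockingQueue<String> buffers = new ArrayBlockingQueue<>(2);

        Thread read = new Thread(() -> {         // created once; loops inside
            try {
                String line;
                while ((line = reader.readLine()) != null)
                    buffers.put(line);
                buffers.put(EOF);                // signal end of input
            } catch (IOException | InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        Thread find = new Thread(() -> {
            try {
                String line;
                while ((line = buffers.take()) != EOF)  // reference compare is safe for the sentinel
                    if (line.contains(pattern))
                        System.out.println(line);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        read.start(); find.start();
        read.join(); find.join();
    }
}
```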
Synchronization: The Maximum of an Array
- Synchronization is required to get correct answers whenever processes both read and write shared variables.
- Angle brackets are used to specify actions that are to be atomic
- The technique of double-checking before updating a shared variable is quite useful (see the sketch below).
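A minimal Java sketch of the double-check idiom (class and method names are ours): each process first tests the shared maximum without the lock, and acquires the lock and re-tests only when an update looks necessary, so the lock is contended only for genuine updates.

```java
class MaxFinder {
    private volatile int max = Integer.MIN_VALUE;  // shared maximum

    void updateFrom(int[] a, int lo, int hi) {     // one process's slice of a
        for (int i = lo; i < hi; i++) {
            if (a[i] > max) {                      // first check, without the lock
                synchronized (this) {
                    if (a[i] > max)                // double check: max may have grown
                        max = a[i];
                }
            }
        }
    }

    int result() { return max; }
}
```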
Atomic Actions and Await Statements
Fine-Grained Atomicity
Fine-Grained Atomic Action: Is implemented directly by the hardware on which a concurrent program executes.
Critical reference: A reference in one process to a variable that is written by another process. An assignment x = e satisfies the At-Most-Once Property if either e contains at most one critical reference and x is not read by another process, or e contains no critical references.
// no critical references
int x = 0, y = 0;
co x = x+1; || y = y+1; oc

// one critical reference; still satisfies the At-Most-Once Property
int x = 0, y = 0;
co x = y+1; || y = y+1; oc

// does not satisfy the At-Most-Once Property
int x = 0, y = 0;
co x = y+1; || y = x+1; oc
Specifying Synchronization: The Await Statement
- Atomic actions are specified by enclosing them in angle brackets: <S;>
- The await statement specifies synchronization:
<await (B) S;>
where the boolean condition B specifies a delay condition and S is a sequence of statements
- To specify only mutual exclusion:
<S;>
where S is a sequence of statements, e.g. `<x = x+1; y = y+1;>`
- To specify only condition synchronization:
<await (B);>
where B specifies a delay condition, e.g. `<await (count > 0);>`
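Java has no await statement, but the same effect can be simulated with an intrinsic lock. A minimal sketch, assuming a hypothetical shared counter, where `<await (count > 0); count = count - 1;>` becomes a wait loop that re-tests the condition after every wakeup:

```java
class AwaitCounter {
    private int count = 0;

    // Simulates <await (count > 0); count = count - 1;>
    public synchronized void decrement() throws InterruptedException {
        while (count <= 0)       // re-check B after every notification
            wait();
        count = count - 1;
    }

    // Simulates <count = count + 1;> (mutual exclusion only)
    public synchronized void increment() {
        count = count + 1;
        notifyAll();             // wake waiters so they re-test the condition
    }
}
```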
Producer/Consumer Synchronization
// Copying an array from a producer to a consumer via a single-slot buffer.
// p counts items deposited, c counts items fetched; buf is full iff p == c+1.
int buf, p = 0, c = 0;

process Producer {
  int a[n];
  while (p < n) {
    <await (p == c);>   // wait until buf is empty
    buf = a[p];
    p = p+1;
  }
}

process Consumer {
  int b[n];
  while (c < n) {
    <await (p > c);>    // wait until buf is full
    b[c] = buf;
    c = c+1;
  }
}
- When synchronization is implemented this way, a process is said to be busy waiting or spinning.
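A Java rendering of the busy-waiting version above (our sketch; N and the array contents are arbitrary test data): p and c are volatile so that each increment publishes the preceding write to buf, and each `<await ...>` becomes a spin loop.

```java
class CopyArray {
    static final int N = 10;
    static int buf;                    // the single-slot buffer
    static volatile int p = 0, c = 0;  // deposited / fetched counts

    public static void main(String[] args) throws InterruptedException {
        int[] a = new int[N], b = new int[N];
        for (int i = 0; i < N; i++) a[i] = i * i;   // arbitrary test data

        Thread producer = new Thread(() -> {
            while (p < N) {
                while (p != c) { }   // spin: <await (p == c);>
                buf = a[p];
                p = p + 1;           // volatile write publishes buf to the consumer
            }
        });
        Thread consumer = new Thread(() -> {
            while (c < N) {
                while (p <= c) { }   // spin: <await (p > c);>
                b[c] = buf;
                c = c + 1;           // volatile write frees buf for the producer
            }
        });
        producer.start(); consumer.start();
        producer.join();  consumer.join();
        System.out.println(java.util.Arrays.toString(b));
    }
}
```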
Techniques for Avoiding Interference
- Four basic techniques for avoiding interference: disjoint variables, weakened assertions, global invariants, and synchronization.
Disjoint Variables
The write set of a process is the set of variables it writes; its reference set is the set of variables that appear in its assertions.
If the write set of one process is disjoint from the reference set of a second, and vice versa, then the two processes cannot interfere.
co x = x+1; || y = y+1; oc
- If both x and y are initially 0
{x == 0} x = x+1; {x == 1}
{y == 0} y = y+1; {y == 1}
Weakened Assertions
- We can sometimes avoid interference by weakening assertions to take into account the effects of concurrent execution
{x == 0}
co
{x == 0 || x == 2}
<x = x+1;>
{x == 1 || x == 3}
||
{x == 0 || x == 1}
<x = x+2;>
{x == 2 || x == 3}
oc
{x == 3}
Global Invariants
- We can avoid interference by employing a global invariant to capture the relations between shared variables; for example, in the producer/consumer program above, the invariant c <= p <= c+1 holds in every state.
Synchronization
- When a sequence of statements is grouped into an atomic action, its internal states are invisible to other processes, so it’s sufficient to consider only whether the entire sequence of statements might cause interference.
Safety and Liveness Properties
Proving Safety Properties
- BAD: Predicate that characterizes a bad program state. The program satisfies the associated safety property if BAD is false in every state in every possible history of the program.
- GOOD: The negation of BAD (GOOD == !BAD); equivalently, the program satisfies the safety property if GOOD holds in every state of every possible history.
Scheduling Policies and Fairness
- Most liveness properties depend on fairness
- Concerned with guaranteeing that processes get a chance to proceed, regardless of what other processes do.
- Scheduling policy: Determines which eligible atomic action will be executed next.
- Three degrees of fairness that a scheduling policy might provide
- Unconditional Fairness: A scheduling policy is unconditionally fair if every unconditional atomic action that is eligible is executed eventually.
- Weak Fairness: A scheduling policy is weakly fair if
- It’s unconditionally fair, and
- Every conditional atomic action that is eligible is executed eventually, assuming that its condition becomes true and then remains true until it is seen by the process executing the conditional atomic action.
- Strong Fairness: A scheduling policy is strongly fair if
- It’s unconditionally fair, and
- Every conditional atomic action that is eligible is executed eventually, assuming that its condition is true infinitely often.
- Livelock: The program is alive, but the processes are not going anywhere.
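Fairness determines, for example, whether the following Java sketch (our construction) terminates: the spinning thread stops only if the scheduler eventually runs the second thread's unconditional atomic action, which is exactly what unconditional fairness guarantees.

```java
class FairnessDemo {
    static volatile boolean keepGoing = true;   // volatile: the write must become visible

    public static void main(String[] args) {
        new Thread(() -> { while (keepGoing) { } }).start();  // spins
        new Thread(() -> keepGoing = false).start();
        // Terminates only if the second thread is eventually scheduled.
    }
}
```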