Lecture 4

Correctness Concerns

 


Correctness Concerns 2

 


Example 1: Logical Vector Clocks

Consider the following assertion about logical vector clocks: for-all i,j: Ci[i] >= Cj[i]

This states that a process is always at least as up to date on its own clock as any other process is.

WHY?
Because time is monotonically increasing and
only a process can increment its own clock.
The clocks of other processes are never changed by a process (only remembered and passed on).
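
As a concrete illustration, here is a minimal sketch of the usual update rules (a hypothetical VectorClock class, not from the lecture): entry i is incremented only by process i itself; other processes can only copy values of that entry which process i already reached, which is why Ci[i] >= Cj[i] always holds.

    class VectorClock:
        """Vector clock of process `pid` in a system of n processes (a sketch)."""
        def __init__(self, pid, n):
            self.pid = pid
            self.c = [0] * n

        def local_event(self):
            # Only process pid ever increments entry pid of its own clock.
            self.c[self.pid] += 1

        def send(self):
            # Timestamp an outgoing message with a copy of the current clock.
            self.local_event()
            return list(self.c)

        def receive(self, ts):
            # Entries of other processes are only remembered and passed on:
            # take the element-wise max, never increment them.
            self.c = [max(mine, theirs) for mine, theirs in zip(self.c, ts)]
            self.local_event()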

 


Example 2: Logical Clocks

Expanding on previous notes about the limitations of logical clocks. Recall that
if a -> b then C(a) < C(b).
However, if C(a) < C(b) we cannot infer anything about the causal relationship between events a and b: either a -> b or a || b. We only know that not (b -> a).
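
A tiny illustration of this limitation (hypothetical events, Lamport clocks kept as plain counters): two events on processes that never exchange messages are concurrent, yet their scalar timestamps still compare as ordered.

    # Lamport clocks: each process increments its own counter at every event.
    Ca = 0   # clock of process P1
    Cb = 0   # clock of process P2

    Ca += 1            # event a on P1: C(a) = 1
    Cb += 1            # some event on P2
    Cb += 1            # event b on P2: C(b) = 2

    # C(a) < C(b) holds, yet a || b: no message connects the two processes,
    # so the timestamps say nothing about causality (we only know not b -> a).
    assert Ca < Cb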

However, vector clocks (see also ISIS) can provide a partial order on event times.
Consider two vector timestamps Ta and Tb; then the following relationships can be defined:

Ta = Tb iff for-all i: Ta[i] = Tb[i]
Ta <= Tb iff for-all i: Ta[i] <= Tb[i]
Ta < Tb iff Ta <= Tb and Ta != Tb
Ta || Tb iff not (Ta < Tb) and not (Tb < Ta)   (a and b are concurrent)

Now can state:

a -> b iff Ta < Tb

Question: for two different events a and b, can Ta = Tb?
Homework (see also part b): Prove if Ta < Tb then a -> b
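
A minimal comparison sketch over two vector timestamps, kept as plain Python lists (the helper names are mine, not the lecture's), implementing the < and || relations above:

    def leq(Ta, Tb):
        # Ta <= Tb iff every entry of Ta is <= the corresponding entry of Tb.
        return all(a <= b for a, b in zip(Ta, Tb))

    def happened_before(Ta, Tb):
        # Ta < Tb iff Ta <= Tb and Ta != Tb; by the claim above, a -> b iff Ta < Tb.
        return leq(Ta, Tb) and Ta != Tb

    def concurrent(Ta, Tb):
        # a || b iff neither timestamp dominates the other.
        return not happened_before(Ta, Tb) and not happened_before(Tb, Ta)

    assert happened_before([2, 1, 0], [2, 2, 0])   # causally ordered
    assert concurrent([2, 1, 0], [0, 0, 1])        # concurrent events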


Huang's Termination Detection Algorithm

Problem: how to know when all processes have finished a computation (we need a consistent global view of the computation, be it an election, deadlock detection or resolution, token generation, etc.).

A process is either IDLE or ACTIVE in the computation. A computation message is sent to initiate a computation.
DEFINITION: a computation is terminated iff all processes are idle and there are no computation messages in transit.

There is a controlling agent which initially has weight = 1.
Weight is used to coordinate work sent and results received.
Let B(DW) be a computation request message sent with weight DW
and C(DW) be a control (acknowledgement) message with weight DW.
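
A minimal single-machine sketch of the weight-throwing rules (the class and method names are hypothetical; weights are exact fractions so conservation is easy to see): senders split their weight onto B messages, idle processes return their weight in C messages, and the controlling agent declares termination when its weight returns to 1.

    from fractions import Fraction

    class ControllingAgent:
        def __init__(self):
            self.weight = Fraction(1)        # initially holds all the weight

        def send_computation(self, process):
            dw = self.weight / 2             # B(DW): give half the weight away
            self.weight -= dw
            process.receive_computation(dw)

        def receive_control(self, dw):
            self.weight += dw                # C(DW): weight comes back
            if self.weight == 1:
                print("termination detected")

    class Process:
        def __init__(self, agent):
            self.agent = agent
            self.weight = Fraction(0)        # zero weight = idle

        def receive_computation(self, dw):
            self.weight += dw                # becomes (or stays) active

        def send_computation(self, other):
            dw = self.weight / 2             # split weight onto the outgoing B message
            self.weight -= dw
            other.receive_computation(dw)

        def finish(self):
            # On becoming idle, return all remaining weight in a control message.
            self.agent.receive_control(self.weight)
            self.weight = Fraction(0)

    # Hypothetical run: agent activates P1, P1 activates P2, both finish.
    agent = ControllingAgent()
    p1, p2 = Process(agent), Process(agent)
    agent.send_computation(p1)               # B(1/2)
    p1.send_computation(p2)                  # B(1/4)
    p1.finish()                              # C(1/4)
    p2.finish()                              # C(1/4): weight back to 1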

 


Huang's Termination Detection Algorithm 2

 


Correctness of Huang's Termination Detection Algorithm

Let
A : set of weights of all active processes
B : set of weights of all computation messages in transit
C : set of weights of all control messages in transit
Wc: weight of controlling agent

Then the following invariants hold:

I1: Wc + SUM{over union of A,B and C} = 1 (conservation of weight)

I2: for-all W in the union of A, B and C: W > 0 (weights are always positive)

------

By I1, Wc = 1 implies SUM{over union of A,B and C} = 0

By I2, SUM{over union of A,B and C} = 0 implies the union of A, B and C is empty

A UNION B = empty implies termination (all processes are idle and no computation messages are in transit).

If we assume that message sending is finite and reliable, then eventually C also becomes empty, Wc returns to 1, and the controlling agent notes the termination.
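
As a quick numeric check of I1 on a hypothetical trace (the agent activates P1, which activates P2, then both go idle), with the weights kept as exact fractions:

    from fractions import Fraction as F

    Wc = F(1)    # weight of the controlling agent
    A = {}       # weights of active processes
    B = []       # weights of computation messages in transit
    C = []       # weights of control messages in transit

    def check_I1():
        assert Wc + sum(A.values()) + sum(B) + sum(C) == 1

    Wc -= F(1, 2); B.append(F(1, 2)); check_I1()        # agent sends B(1/2) to P1
    A["P1"] = B.pop(); check_I1()                       # P1 receives it, becomes active
    A["P1"] -= F(1, 4); B.append(F(1, 4)); check_I1()   # P1 sends B(1/4) to P2
    A["P2"] = B.pop(); check_I1()                       # P2 becomes active
    C.append(A.pop("P1")); check_I1()                   # P1 goes idle, sends C(1/4)
    Wc += C.pop(); check_I1()                           # agent receives C(1/4); Wc = 3/4
    C.append(A.pop("P2")); Wc += C.pop(); check_I1()    # P2 goes idle, returns C(1/4)
    assert Wc == 1                                      # termination detected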

QUESTION: In what way is this related to two-phase commit
and to the
distributed mutual exclusion algorithm of Ricart and Agrawala?


Correctness of Ricart-Agrawala Algorithm

Proof by contradiction: assume that two sites Si and Sj are executing the critical section (CS) concurrently and that Si's request has the smaller timestamp (timestamps are totally ordered). Si must have received Sj's request after it made its own request; otherwise Si's request would have carried the larger timestamp. Sj can only be in the CS if Si sent it a reply before Si finished the CS. But Si defers its reply to the lower-priority request from Sj until Si leaves the CS, so Sj cannot be in the CS at the same time as Si. Contradiction.
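
The step the proof leans on is the deferred reply: a site answers an incoming REQUEST immediately only if it is neither in the CS nor holding an outstanding request of higher priority (smaller timestamp, ties broken by site id). A minimal sketch of that rule, with hypothetical names:

    from dataclasses import dataclass, field

    @dataclass
    class Site:
        id: int
        requesting: bool = False      # has an outstanding REQUEST of its own
        in_cs: bool = False
        request_ts: int = 0           # Lamport timestamp of own REQUEST
        deferred: list = field(default_factory=list)

        def on_request(self, req_ts, req_site_id):
            # Priority = (timestamp, site id); smaller is higher priority (total order).
            mine, theirs = (self.request_ts, self.id), (req_ts, req_site_id)
            if self.in_cs or (self.requesting and mine < theirs):
                # Defer the reply until this site leaves the CS; this is why a
                # lower-priority request can never be granted while we hold the CS.
                self.deferred.append(req_site_id)
            else:
                print(f"site {self.id}: REPLY to site {req_site_id}")

        def exit_cs(self):
            self.in_cs = False
            for s in self.deferred:
                print(f"site {self.id}: REPLY to site {s}")
            self.deferred.clear()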

 


Homework (part b - see also part a):
State invariants for the Ricart-Agrawala algorithm.

 


Processes

On uniprocessors,
processes mainly create the illusion of a virtual processor.
They are therefore meant to keep computations logically apart.

For distributed systems,
they are additionally used to build cooperating computations, fault-tolerant computations, and real-time and parallel systems.

 


Threads

Single address space + multiple threads of control

AKA mini-processes or lightweight processes


Threads Share Memory

Threads can execute in parallel on appropriate shared memory multiprocessors (such as high end workstations).
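
A small illustration of the single shared address space, using Python's threading module as a stand-in for lightweight processes: every thread updates the same counter, so access to it must be synchronized.

    import threading

    counter = 0                     # shared: all threads see the same variable
    lock = threading.Lock()

    def worker(n):
        global counter
        for _ in range(n):
            with lock:              # unsynchronized updates would race
                counter += 1

    threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(counter)                  # 40000: one address space, four threads of control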


Server Applications

In Client Server model,


Server Implementations

Consider analogy to dentist's office:


Using Threads: Organizational Models

QUESTION: Do threads make software easier to write?


Design Issues/Threads


Threads in User Space

Advantages:

Disadvantages:


Threads in Kernel Space


Scheduler Activations

Hybrid solution:


RPC and Threads

Many RPCs are to processes on the same machine:
Can share memory (map page registers to the calling stack)
Not just for threads

For server RPC: don't need to save/restore state while waiting.

Implicit receive: create new thread to handle incoming message

Pop-up thread: created to handle RPC
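
A minimal sketch of the implicit receive / pop-up thread idea (the socket setup and handler name are mine, not from the lecture): the receive loop creates a fresh thread for every incoming message instead of saving and restoring server state.

    import socket
    import threading

    def handle_rpc(conn, addr):
        # Pop-up thread: handles one incoming RPC and then disappears.
        with conn:
            request = conn.recv(4096)
            conn.sendall(b"reply to " + request)

    def serve(port=5000):
        with socket.socket() as srv:
            srv.bind(("", port))
            srv.listen()
            while True:
                conn, addr = srv.accept()
                # Implicit receive: a new thread is created per incoming message.
                threading.Thread(target=handle_rpc, args=(conn, addr), daemon=True).start()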

