
The probabilistic asynchronous π-calculus

Catuscia Palamidessi, INRIA Futurs, France

PPS - Groupe de Travail en Concurrence, 08 April 2004


Motivations

Main motivation behind the development of the probabilistic asynchronous π-calculus: to use it as an intermediate language for the fully distributed implementation of the π-calculus with mixed choice.

Plan of the talk

The π-calculus: operational semantics, expressive power

The probabilistic asynchronous π-calculus: probabilistic automata, operational semantics, distributed implementation

Encoding the π-calculus into the probabilistic asynchronous π-calculus


The π-calculus

Proposed by [Milner, Parrow, Walker '92] as a formal language to reason about concurrent systems:

Concurrent: several processes running in parallel
Asynchronous: every process proceeds at its own speed
Synchronous communication: aka handshaking
Mixed guarded choice: input and output guards, as in CSP and CCS. The implementation of guarded choice is aka the binary interaction problem
Dynamic generation of communication channels
Scope extrusion: a channel name can be communicated and its scope extended to include the recipient

(Diagram: scope extrusion among processes P, Q, R over channels x, y, z.)


π: the π-calculus (with mixed choice)

Syntax

g ::= x(y) | x^y | τ          prefixes (input, output, silent)

P ::= Σi gi . Pi              mixed guarded choice
    | P | P                   parallel
    | (x) P                   new name
    | rec A P                 recursion
    | A                       procedure name
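For instance, x(y).P + z^w.Q is a mixed guarded choice: it offers an input on x and an output on z, and commits to whichever synchronization happens first, discarding the other branch.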


Operational semantics

Transition system: P --a--> Q

Rules

Choice    Σi gi . Pi --gi--> Pi

          P --x^y--> P'
Open      ___________________
          (y) P --x^(y)--> P'


Operational semantics

Rules (continued)

          P --x(y)--> P'    Q --x^z--> Q'
Com       ________________________________
          P | Q --τ--> P'[z/y] | Q'

          P --x(y)--> P'    Q --x^(z)--> Q'
Close     __________________________________
          P | Q --τ--> (z) (P'[z/y] | Q')

          P --g--> P'
Par       ___________________   fn(Q) and bn(g) disjoint
          Q | P --g--> Q | P'
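For instance, by Com, x(y).P' | x^z.Q' --τ--> P'[z/y] | Q': the input on x receives the name z, which is substituted for the bound variable y in the continuation.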


Features which make π very expressive, and cause difficulty in its distributed implementation:

(Mixed) guarded choice: symmetric solutions to certain distributed problems involving distributed agreement

Link mobility: network reconfiguration; it allows expressing higher-order features (e.g. the λ-calculus) in a natural way; in combination with guarded choice, it allows solving more distributed problems than those solvable by guarded choice alone


The expressive power of π

Example of distributed agreement: the leader election problem in a symmetric network. Two symmetric processes must elect one of them as the leader, in a finite amount of time, and the two processes must agree.

A symmetric and fully distributed solution in π using guarded choice:

P = x.Pwins + y^.Ploses        Q = y.Qwins + x^.Qloses

(Diagram: P and Q connected by channels x and y.)

P | Q evolves either to Pwins | Qloses (synchronization on x) or to Ploses | Qwins (synchronization on y).


Example of a network where the leader election problem cannot be solved by guarded choice alone: for the following network there is no (fully distributed and symmetric) solution in CCS or in CSP.


A solution to the leader election problem in π

(Diagram: the same network; after the protocol each node ends up in a winner or loser state.)


Approaches to the implementation of guarded choice in the literature

[Parrow and Sjödin 92], [Knabe 93], [Tsai and Bagrodia 94]: asymmetric solutions based on introducing an order on processes
Other asymmetric solutions based on differentiating the initial state
Plenty of centralized solutions
[Joung and Smolka 98] proposed a randomized solution to the multiway interaction problem, but it works only under an assumption of partial synchrony among the processes

Our solution is the first that is fully distributed, symmetric, and makes no assumption of synchrony.


State of the art in π

Formalisms able to express distributed agreement are difficult to implement in a distributed fashion

For this reason, the field has evolved towards variants of π which retain mobility, but have no guarded choice

One example of such a variant is the asynchronous π-calculus proposed by [Honda-Tokoro '91, Boudol '92] (asynchronous = asynchronous communication)


πa: the asynchronous π-calculus (version of [Amadio, Castellani, Sangiorgi '97])

Syntax

g ::= x(y) | τ                prefixes

P ::= Σi gi . Pi              input guarded choice
    | x^y                     output action
    | P | P                   parallel
    | (x) P                   new name
    | rec A P                 recursion
    | A                       procedure name


Characteristics of πa

Asynchronous communication: we can't write a continuation after an output, i.e. no x^y.P, but only x^y | P, so P will proceed without waiting for the actual delivery of the message

Input-guarded choice: only input prefixes are allowed in a choice.
Note: the original asynchronous π-calculus did not contain a choice construct. However, the version presented here was shown by [Nestmann and Pierce '96] to be equivalent to the original asynchronous π-calculus

It can be implemented in a fully distributed fashion (see for instance the PiLib project of Odersky's group)


Towards a fully distributed implementation of π

The results of the previous pages show that a fully distributed implementation of π must necessarily be randomized.

A two-step approach:

π   --[[ ]]-->   probabilistic asynchronous π   --<< >>-->   distributed machine

Advantage: the correctness proof is easier, since [[ ]] (which is the difficult part of the implementation) is a translation between two similar languages.


πpa: the probabilistic asynchronous π

Syntax

g ::= x(y) | τ                    prefixes

P ::= Σi pi gi . Pi               probabilistic input guarded choice, with Σi pi = 1
    | x^y                         output action
    | P | P                       parallel
    | (x) P                       new name
    | rec A P                     recursion
    | A                           procedure name
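For instance, 1/3 x(y).P1 + 2/3 z(w).P2 is a probabilistic input guarded choice: once this group of transitions is scheduled, the branch on x is selected with probability 1/3 and the branch on z with probability 2/3.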


(Diagram: example probabilistic automata with transition probabilities 1/2, 1/3 and 2/3.)

The operational semantics of πpa

Based on the probabilistic automata of Segala and Lynch. Distinction between
nondeterministic behavior (choice of the scheduler) and
probabilistic behavior (choice of the process)

Scheduling policy: the scheduler chooses the group of transitions
Execution: the process chooses probabilistically the transition within the group
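As a rough illustration of this two-level structure (a sketch only; the types and names below are invented for this example and are not part of the formal model), each state offers several groups of weighted transitions: the scheduler picks one group nondeterministically, and the process then samples a transition inside the chosen group according to its probabilities.

    import java.util.List;
    import java.util.Random;

    // A single weighted transition: label, probability, and target state.
    record Transition(String label, double prob, Object target) {}

    // A group of transitions; the probabilities within one group sum to 1.
    record Group(List<Transition> transitions) {}

    final class ProbabilisticStep {
        private static final Random rng = new Random();

        // Nondeterministic part: the scheduler picks one of the available groups.
        // Any policy is allowed; here we simply take the first group.
        static Group schedule(List<Group> groups) {
            return groups.get(0);
        }

        // Probabilistic part: the process samples one transition within the group.
        static Transition execute(Group group) {
            double u = rng.nextDouble();
            double acc = 0.0;
            for (Transition t : group.transitions()) {
                acc += t.prob();
                if (u < acc) return t;
            }
            return group.transitions().get(group.transitions().size() - 1);
        }
    }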


The operational semantics of πpa

Representation of a group of transitions:

P { --gi--> pi Pi }i

Rules

Choice    Σi pi gi . Pi { --gi--> pi Pi }i

          P { --gi--> pi Pi }i
Par       _____________________________
          Q | P { --gi--> pi Q | Pi }i


The operational semantics of πpa

Rules (continued)

          P { --xi(yi)--> pi Pi }i        Q { --x^z--> 1 Q' }
Com       ______________________________________________________________________
          P | Q  { --τ--> pi Pi[z/yi] | Q' }xi=x  ∪  { --xi(yi)--> pi Pi | Q }xi≠x

          P { --xi(yi)--> pi Pi }i
Res       ______________________________________   qi renormalized
          (x) P { --xi(yi)--> qi (x) Pi }xi≠x
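As an illustration (writing 0 for the terminated process, i.e. the empty choice): if P = 1/2 x(y).P1 + 1/2 w(v).P2 and Q = x^z, then Com gives P | Q the group { --τ--> 1/2 P1[z/y] | 0 } ∪ { --w(v)--> 1/2 P2 | x^z }: the branch on x synchronizes with the output, while the branch on w keeps its input transition and leaves Q untouched.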


Implementation of πpa

Compilation in Java:  << >> : πpa --> Java

Distributed:     << P | Q >> = << P >>.start(); << Q >>.start();
Compositional:   << P op Q >> = << P >> jop << Q >> for every operator op
Channels are one-position buffers with test-and-set (synchronized) methods for input and output
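A minimal sketch of that idea (the class and method names are invented for this illustration and are not taken from the actual compiler): the channel holds at most one message, and both operations are synchronized so that testing and updating the buffer happen as a single atomic step.

    // Illustrative one-position buffer; names are invented for this sketch.
    final class OnePlaceChannel {
        private Object message;         // the single buffer position
        private boolean full = false;   // whether the position is occupied

        // Output: deposit a message if the buffer is empty.
        // Test and set happen atomically under the channel's lock.
        synchronized boolean trySend(Object msg) {
            if (full) return false;
            message = msg;
            full = true;
            return true;
        }

        // Input: take the message if one is present, otherwise report failure
        // so the caller can retry or suspend.
        synchronized Object tryReceive() {
            if (!full) return null;
            full = false;
            return message;
        }
    }

The compiled components of a parallel composition then simply run as separate threads sharing such channels, in the spirit of << P | Q >> = << P >>.start(); << Q >>.start().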


Encoding π into πpa

[[ ]] : π --> πpa

Fully distributed:   [[ P | Q ]] = [[ P ]] | [[ Q ]]

Preserves the communication structure:   [[ Pσ ]] = [[ P ]]σ

Compositional:   [[ P op Q ]] = Cop[ [[ P ]] , [[ Q ]] ]

Correct wrt a notion of probabilistic testing semantics:

P must O  iff  [[ P ]] must [[ O ]] with probability 1


Encoding π into πpa

Idea (based on an idea in [Nestmann '97]):

Every mixed choice is translated into a parallel composition of processes corresponding to the branches, plus a lock f
The input processes compete for acquiring both their own lock and the lock of the partner
The input process which succeeds first establishes the communication; the other alternatives are discarded

The problem is reduced to a generalized dining philosophers problem, where each fork (lock) can be adjacent to more than two philosophers

Further, we can reduce the generalized DP to the classic case, and then apply the randomized algorithm of Lehmann and Rabin for the dining philosophers

(Diagram: processes P, Q, R, S translated into branch processes Pi, Qi, Ri, R'i, Si, each associated with a lock f.)


Dining Philosophers: classic case

Each fork is shared by exactly two philosophers


The algorithm of Lehmann and Rabin

1. think;
2. choose probabilistically first_fork in {left, right};
3. if not taken(first_fork) then take(first_fork) else goto 3;
4. if not taken(second_fork) then take(second_fork) else { release(first_fork); goto 2 };
5. eat;
6. release(second_fork);
7. release(first_fork);
8. goto 1
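A minimal thread-based sketch of one philosopher's loop, following the eight instructions above (the Fork class and the tryTake/release names are invented for this illustration):

    import java.util.Random;

    // One fork; tryTake and release are atomic thanks to synchronization.
    final class Fork {
        private boolean taken = false;
        synchronized boolean tryTake() { if (taken) return false; taken = true; return true; }
        synchronized void release()    { taken = false; }
    }

    final class Philosopher implements Runnable {
        private final Fork left, right;
        private final Random rng = new Random();

        Philosopher(Fork left, Fork right) { this.left = left; this.right = right; }

        public void run() {
            while (true) {
                think();                                               // 1. think
                boolean eaten = false;
                while (!eaten) {
                    Fork first = rng.nextBoolean() ? left : right;     // 2. probabilistic choice
                    Fork second = (first == left) ? right : left;
                    while (!first.tryTake()) { /* busy-wait */ }       // 3. retry until the first fork is free
                    if (second.tryTake()) {                            // 4. second fork free?
                        eat();                                         // 5. eat
                        second.release();                              // 6.
                        first.release();                               // 7.
                        eaten = true;                                  // 8. back to thinking
                    } else {
                        first.release();                               //    else release and go back to 2
                    }
                }
            }
        }

        private void think() {}
        private void eat()   {}
    }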


Dining Philosophers: generalized case

•Each fork can be shared by more than two philosophers

•The classical algorithm of Lehmann and Rabin, as it is, does not work in the generalized case. However, we can transform it into the classic case.

Transformation into the classic case: each fork is initially associated with a token. Each philosopher needs to acquire a token in order to participate in the competition. The competing philosophers determine a set of subgraphs in which each subgraph contains at most one cycle.


Generalized philosophers

Another problem we had to face: the solution of Lehmann and Rabin works only for fair schedulers, while πpa does not provide any guarantee of fairness

Fortunately, it turns out that fairness is required only in order to avoid a busy-waiting livelock at instruction 3. If we replace busy-waiting with suspension, then the algorithm works for any scheduler

This result was also achieved independently by [Duflot, Fribourg, Picaronny 02].


The algorithm of Lehmann and Rabin, modified so as to avoid the need for fairness (only instruction 3 changes: busy-waiting is replaced by suspension):

1. think;
2. choose probabilistically first_fork in {left, right};
3. if not taken(first_fork) then take(first_fork) else wait;
4. if not taken(second_fork) then take(second_fork) else { release(first_fork); goto 2 };
5. eat;
6. release(second_fork);
7. release(first_fork);
8. goto 1
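In a thread-based setting the change at instruction 3 amounts to suspending on the fork's monitor instead of spinning; a sketch of the modified fork (again with invented names, to be read together with the philosopher sketch above):

    // Fork with suspension: a philosopher blocked at instruction 3 waits on the
    // fork's monitor and is woken up when the fork is released.
    final class SuspendingFork {
        private boolean taken = false;

        synchronized void take() throws InterruptedException {
            while (taken) wait();        // 3. suspend instead of busy-waiting
            taken = true;
        }

        synchronized boolean tryTake() { // still used for the second fork (instruction 4)
            if (taken) return false;
            taken = true;
            return true;
        }

        synchronized void release() {
            taken = false;
            notifyAll();                 // wake up any philosopher suspended at instruction 3
        }
    }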


Conclusion

We have provided an encoding of the π-calculus into a probabilistic version of its asynchronous fragment which is
fully distributed
compositional
correct wrt a notion of testing semantics

Advantages: high-level solutions to distributed algorithms, which are easier to prove correct (no reasoning about randomization required)