Slide 1: Control flow structures in Distributed programs
Philippe Demaecker
Programming Technology Lab (PROG), Departement Informatica (DINF), Vrije Universiteit Brussel (VUB)
Slide 2: About distributed programs
- Wide area networks → programs on remote machines can't be altered
- Component-wise (an application consists of interconnected components)
- Components are active
Slide 3: About
- Examples
  - a metacrawler → several search engines
  - dispatching within HTTPD daemons (to CGI scripts)
  - ...
- These kinds of control flow handlers are needed more often than anticipated
Slide 4: Goal
- Event handlers are often needed
  - control flow code is hardcoded in the program
  - components written by other programmers must be inserted
- Which control flow structures are needed to write distributed programs more easily?
- No AOP (you don't possess the remote code)
- The purpose is to go a level higher than concurrency primitives
Slide 5: Working method
- Study the control flow of some simple programs to extract primitives (e.g. a pipeline)
- Programs are written in Cborg
Slide 6: Cborg
- Asynchronous sends in Cborg syntax:
  - <agent>.<callee>(<pars>)
- But no result is returned this way
- This can be solved using callbacks
Slide 7: Callbacks in Cborg
(Diagram: agent tecra/ses1 calls agent tecra/ses2 and receives the answer via a callback)

  Caller (tecra/ses1):
    a: remotedict(tecra/ses2)
    answer(b): display(b)
    a.calcdet(1,8,5,6, agentself())

  Callee (tecra/ses2):
    calcdet(matrx, cb): cb.answer(matrx)
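The callback idiom on this slide can be sketched in Python (the class names `Calculator` and `Caller` and the use of threads are my assumptions, not part of Cborg): the caller passes itself along with the request, so the callee can deliver the result asynchronously by invoking `answer` on it, just as `agentself()` is passed to `calcdet` above.

```python
import threading

class Calculator:
    """Callee agent: computes and sends the result back via the callback."""
    def calcdet(self, a, b, c, d, cb):
        # Asynchronous send: do the work on another thread, then
        # invoke the caller's answer() with the result.
        def work():
            det = a * d - b * c  # determinant of [[a, b], [c, d]]
            cb.answer(det)
        threading.Thread(target=work).start()

class Caller:
    """Caller agent: fires the request and handles the reply."""
    def __init__(self, remote):
        self.remote = remote
        self.done = threading.Event()
        self.result = None

    def answer(self, b):
        # The callback the callee will invoke (display(b) on the slide).
        self.result = b
        self.done.set()

    def run(self):
        # Pass ourselves as the callback target (agentself() in Cborg).
        self.remote.calcdet(1, 8, 5, 6, self)
        self.done.wait()
        return self.result

caller = Caller(Calculator())
print(caller.run())  # determinant of [[1, 8], [5, 6]] = 1*6 - 8*5 = -34
```

The point is that the asynchronous send itself returns nothing; the result only comes back because the callee knows a method name (`answer`) to call on the agent it was handed.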
Slide 8: Example

  SieveComp(next):
  { prime: 0;
    Receive(aNumber):
      if(prime = 0,
         { SetPrime(aNumber);
           display(prime) },
         if(not((aNumber \\ prime) = 0),
            next.Receive(aNumber),
            void));
    agentclone(clone()) }
Slide 9: Example (ctd)

  c: Collector()
  s3: SieveComp(c)
  s2: SieveComp(s3)
  s1: SieveComp(s2)
  g: Generator(s1)
  g.Start()

Problem: what do we do if we want to extend the functionality (e.g. a display between components)?
- Write a component that understands Receive, with the same parameter structure.
- The creation structure must be altered.
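The sieve chain on these two slides can be sketched in Python (class and variable names follow the slides; the generator loop is my stand-in for `Generator`): each `SieveComp` keeps the first number it sees as its prime and forwards non-multiples, and the creation structure is wired explicitly, which is exactly the problem the slide points out.

```python
class Collector:
    """End of the chain: gathers whatever survives all sieves."""
    def __init__(self):
        self.primes = []
    def Receive(self, n):
        self.primes.append(n)

class SieveComp:
    """Keeps the first number it sees as its prime; forwards non-multiples."""
    def __init__(self, next_comp):
        self.next = next_comp
        self.prime = 0
    def Receive(self, n):
        if self.prime == 0:
            self.prime = n          # SetPrime(aNumber)
        elif n % self.prime != 0:
            self.next.Receive(n)    # hand over to the next stage

# Explicit creation structure, exactly as on the slide:
c = Collector()
s3 = SieveComp(c)
s2 = SieveComp(s3)
s1 = SieveComp(s2)
for n in range(2, 20):              # the Generator's role
    s1.Receive(n)

print([s1.prime, s2.prime, s3.prime])  # [2, 3, 5]
print(c.primes)                        # [7, 11, 13, 17, 19]
```

Inserting a display component between two stages means writing a class with the same `Receive` signature *and* rewriting the four construction lines — both the protocol and the wiring are hardcoded.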
Slide 10: Problems
- Components have to understand Receive(...)
- Building the structure is too explicit
- The Collector should be a control flow component
- The parameter passing convention is hardcoded → the control flow is too explicit
Slide 11: Example (with pipeline)

  PrimeCheck():
  { prime: 0;
    Receive(aNumber, primeTo, nonPrimeTo):
      if(prime = 0,
         { SetLocalPrime(aNumber);
           primeTo.GetResult(prime) },
         if(not((aNumber \\ prime) = 0),
            nonPrimeTo.GetResult(aNumber),
            void));
    agentclone(clone()) }
Slide 12: Example (ctd)

  NumberGenerator(from, to, name):
  { next: 0;
    SendInfo(target, info): target.GetResult(info);
    Start(target):
      for(t: from, t: t+1, t < to, SendInfo(target, t));
    agentclone(clone(), name) }
Slide 13: Example (proposed solution)

  components: [g, s1, s2, s3]
  p: Pipeline(components)

  components: [g, disp, s1, disp, s2, disp, s3, disp]
  p: Pipeline(components)

  >> p.Start(console, Receive, GetResult)
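The proposed `Pipeline` takes the component list and the names of the receive/forward methods, so no component knows its neighbours. A Python sketch of that idea (the `Pipeline` API below is my simplification: one method name plus a `forward` callback, instead of the slide's separate `Receive`/`GetResult` pair):

```python
class Pipeline:
    """Wires components into a chain without them knowing their neighbours.

    Each component must offer a method named `recv_name` taking
    (value, forward); the pipeline supplies `forward`, so the creation
    structure is no longer hardcoded in the components themselves.
    """
    def __init__(self, components):
        self.components = components

    def start(self, values, recv_name):
        for v in values:
            self._push(0, v, recv_name)

    def _push(self, i, value, recv_name):
        if i < len(self.components):
            comp = self.components[i]
            # forward(v) hands the value to the next stage of the pipeline
            forward = lambda v: self._push(i + 1, v, recv_name)
            getattr(comp, recv_name)(value, forward)

class PrimeCheck:
    def __init__(self):
        self.prime = 0
    def receive(self, n, forward):
        if self.prime == 0:
            self.prime = n
        elif n % self.prime != 0:
            forward(n)

class Display:
    def __init__(self):
        self.seen = []
    def receive(self, n, forward):
        self.seen.append(n)   # observe...
        forward(n)            # ...and pass through unchanged

# Interleaving displays now only changes the component list:
stages = [PrimeCheck(), Display(), PrimeCheck(), Display(), PrimeCheck()]
p = Pipeline(stages)
p.start(range(2, 20), "receive")
print([s.prime for s in stages if isinstance(s, PrimeCheck)])  # [2, 3, 5]
```

Note how the `disp` components are inserted by editing one list, not by rewiring every constructor call.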
Slide 14: Pipeline structure
Slide 15: FFT using the same pipeline

  CalcFFT(name, stageNr):
  { Calc(coeffs, displayTo, resultTo):
      { <compute coefficients>;
        resultTo.SendToNext(info) };
    agentclone(clone(), name) }

  components: [g, FFT1, FFT2, ...]
  Pipeline(components, Calc, SendToNext)

- The same pipeline structure!
Slide 16: Addendum
- Good use of migration
- Assume the components are in Zimbabwe:
  1. You can declare the pipeline locally, and
  2. send the pipes to Zimbabwe.
- → better performance
Slide 17: Future Work: Thesis (1)
- Study other applications and designs
  - HTTPD
  - MVC
  - Distributed Namespaces
  - Whiteboards
  - Design Patterns
  - Message Queues
  - Event Dispatchers
Slide 18: Future Work: Thesis (2)
- Checking out what happens when working with other natives
  - synchronisation natives (sync, send, recv)
- The main goal is to detect appropriate control flow structures
  - star structure (many-to-one)
  - multicasting (one-to-many)
  - others (many-to-many, ...)
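The one-to-many (multicast) structure mentioned here can be sketched the same way as the pipeline, as a control flow component parameterised by a method name (all names below are hypothetical, for illustration only):

```python
class Multicast:
    """One-to-many control flow: forwards each message to every subscriber."""
    def __init__(self, targets, recv_name):
        self.targets = targets
        self.recv_name = recv_name   # the method to invoke on each target
    def send(self, value):
        for t in self.targets:
            getattr(t, self.recv_name)(value)

class Sink:
    def __init__(self):
        self.got = []
    def receive(self, v):
        self.got.append(v)

a, b = Sink(), Sink()
m = Multicast([a, b], "receive")
m.send(42)
print(a.got, b.got)  # [42] [42]
```

A star (many-to-one) structure is the mirror image: many senders hold a reference to one such component, which funnels every message to a single target.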
Slide 19: Pipe (internals)

  Pipe(n, posPipe, nextAgent, nextNegPipe):
  { SendResultToNext@args:
      { info: args[1];
        mss: read("rcv." n "()");
        mss[1]: nextAgent;
        app: mss[2];
        app[2]: [info, posPipe, nextNegPipe];
        eval(mss) };
    cst: make(16);
    cst[1]: read(callbackName "@args");
    cst[2]: SendResultToNext[3];
    eval(cst);
    agentclone(clone(), name) }
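The essence of the pipe internals is reflective dispatch: the pipe builds the forwarding message from the method name `n` at run time (via Cborg's `read`/`eval`) rather than hardcoding a call. A minimal Python analogue of that mechanism, using `getattr` in place of `read`/`eval` (the `Doubler` class and method names are my assumptions; the slide's `posPipe`/`nextNegPipe` plumbing is omitted):

```python
class Pipe:
    """A pipe stage that forwards results to the next agent by method name."""
    def __init__(self, n, next_agent):
        self.n = n                    # name of the method to invoke downstream
        self.next_agent = next_agent

    def send_result_to_next(self, *args):
        # Analogue of building read("rcv." n "()") and eval-ing it:
        # look the method up by name at send time and apply the arguments.
        method = getattr(self.next_agent, self.n)
        return method(*args)

class Doubler:
    """A stand-in downstream agent."""
    def get_result(self, x):
        return 2 * x

pipe = Pipe("get_result", Doubler())
print(pipe.send_result_to_next(21))  # 42
```

Because the method name is data, the same `Pipe` works for the sieve's `GetResult` and the FFT's `SendToNext` without modification, which is what makes the pipeline reusable across both examples.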