1
Languages and Compilers (SProg og Oversættere)
Lecture 14: Concurrency and distribution
  • Bent Thomsen
  • Department of Computer Science
  • Aalborg University

With acknowledgement to John Mitchell whose
slides this lecture is based on.
2
Concurrency, distributed computing, the Internet
  • Traditional view
  • Let the OS deal with this
  • => It is not a programming language issue!
  • End of Lecture
  • Wait-a-minute
  • Maybe the traditional view is getting out of
    date?

3
Languages with concurrency constructs
  • Maybe the traditional view was always out of
    date?
  • Simula
  • Modula3
  • Occam
  • Concurrent Pascal
  • ADA
  • Linda
  • CML
  • Facile
  • Jo-Caml
  • Java
  • C#
  • Fortress

4
Categories of Concurrency
  • Physical concurrency - Multiple independent
    processors
  • Uni-processor with I/O channels
  • (multi-programming)
  • Multiple CPU
  • (parallel programming)
  • Network of uni- or multi- CPU machines
  • (distributed programming)
  • Logical concurrency - The appearance of physical
    concurrency is presented by time-sharing one
    processor (software can be designed as if there
    were multiple threads of control)
  • Concurrency as a programming abstraction
  • Def: A thread of control in a program is the
    sequence of program points reached as control
    flows through the program

5
Introduction
  • Reasons to Study Concurrency
  • 1. It involves a different way of designing software
    that can be very useful; many real-world
    situations involve concurrency
  • Control programs
  • Simulations
  • Client/Servers
  • Mobile computing
  • Games
  • 2. Computers capable of physical concurrency are
    now widely used
  • High-end servers
  • Grid computing
  • Game consoles
  • Dual-core CPUs, quad-core CPUs, 32 cores in 3 years

6
Compilers and More--What to Do With All Those
Cores?HPC Wire (04/06/07) Vol. 16, No. 14,
Wolfe, Michael
One of the PPoPP attendees, Prof. Rudolf
Eigenmann (Purdue Univ.), issued an indictment,
saying that we in the parallel programming
research community should be ashamed of
ourselves. Single-processor systems have run out
of steam, something the parallel programming
community has been predicting since I was a
college student. Now is the time to step up and
reap the benefits of all our past work. We've had
30 years to study this problem and come up with a
solution, but what's the end result? Surprise! We
still have no well-accepted method to generate
parallel applications.
Dr. Andrew Chien (Intel), one of the PPoPP
keynote speakers, took issue with Eigenmann's
criticism. Chien said that in fact we've had a
great deal of success in parallel programming;
just look at all the massively parallel systems
and the applications that run on them. However,
halfway through his talk was the slide "Wanted:
Breakthrough Innovations in Parallel
Programming." I asked how he could claim past
success, then state that breakthrough innovations
are needed; it sounded like a typical manager:
"good job, now get back to work." He replied that
in the past, parallel programming meant high
performance. Now, parallel programming means
spreadsheets, games, email, and applications on
your laptop. It's a different target environment,
with a different class of programmer, and
different expectations.
Compilers and More -- What To Do With All Those
Cores?
7
The promise of concurrency
  • Speed
  • If a task takes time t on one processor,
    shouldn't it take time t/n on n processors?
  • Availability
  • If one processor is busy, another may be ready to
    help
  • Distribution
  • Processors in different locations can collaborate
    to solve a problem or work together
  • Humans do it, so why can't computers?
  • Vision, cognition appear to be highly parallel
    activities

8
Challenges
  • Concurrent programs are harder to get right
  • Folklore: Need an order of magnitude speedup (or
    more) to be worth the effort
  • Some problems are inherently sequential
  • Theory: circuit evaluation is P-complete
  • Practice: many problems need coordination and
    communication among sub-problems
  • Specific issues
  • Communication: send or receive information
  • Synchronization: wait for another process to act
  • Atomicity: do not stop in the middle and leave a
    mess

9
Why is concurrent programming hard?
  • Nondeterminism
  • Deterministic: two executions on the same input
    will always produce the same output
  • Nondeterministic: two executions on the same
    input may produce different output
  • Why does this cause difficulty?
  • May be many possible executions of one system
  • Hard to think of all the possibilities
  • Hard to test program since some cases may occur
    infrequently

10
Traditional C Library for concurrency
System calls: fork(), wait(), pipe(), write(), read().
Examples follow.
11
Process Creation
fork()
NAME
    fork() - create a new process
SYNOPSIS
    #include <sys/types.h>
    #include <unistd.h>
    pid_t fork(void);
RETURN VALUE
    success: the parent gets the child's pid, the child gets 0
    failure: -1
12
fork() - program structure

    #include <sys/types.h>
    #include <unistd.h>
    #include <stdio.h>

    main() {
        pid_t pid;
        if ((pid = fork()) > 0)  { /* parent */ }
        else if (pid == 0)       { /* child */ }
        else                     { /* cannot fork */ }
        exit(0);
    }
13
Wait() system call
wait() - wait for a child process to finish executing
SYNOPSIS
    #include <sys/types.h>
    #include <sys/wait.h>
    pid_t wait(int *stat_loc);
RETURN VALUE
    success: the pid of the terminated child
    failure: -1, and errno is set
14
wait() - program structure

    #include <sys/types.h>
    #include <unistd.h>
    #include <stdlib.h>
    #include <stdio.h>

    main(int argc, char *argv[]) {
        pid_t childPID;
        if ((childPID = fork()) == 0) { /* child */ }
        else                          { /* parent */ wait(0); }
        exit(0);
    }
15
Pipe() system call
pipe() - create a read-write pipe that may later be used
to communicate with a process we'll fork off.
SYNOPSIS
    int pipe(pfd)
    int pfd[2];
PARAMETER
    pfd is an array of 2 integers, which will be used to save
    the two file descriptors used to access the pipe
RETURN VALUE
    0 on success, -1 on error.
16
Pipe() - structure
    /* first, define an array to store the two file
       descriptors */
    int pipes[2];

    /* now, create the pipe */
    int rc = pipe(pipes);
    if (rc == -1) {
        /* pipe() failed */
        perror("pipe");
        exit(1);
    }

If the call to pipe() succeeded, a pipe will be created;
pipes[0] will contain the number of its read file descriptor,
and pipes[1] will contain the number of its write file
descriptor.
17
Write() system call
write() - write data to a file or other object identified
by a file descriptor.
SYNOPSIS
    #include <sys/types.h>
    ssize_t write(int fildes, const void *buf, size_t nbyte);
PARAMETERS
    fildes is the file descriptor, buf is the base address
    of the area of memory that data is copied from,
    nbyte is the amount of data to copy
RETURN VALUE
    The return value is the actual amount of data written;
    if this differs from nbyte then something has gone wrong.
18
Read() system call
read() - read data from a file or other object identified
by a file descriptor
SYNOPSIS
    #include <sys/types.h>
    ssize_t read(int fildes, void *buf, size_t nbyte);
ARGUMENTS
    fildes is the file descriptor, buf is the base address
    of the memory area into which the data is read,
    nbyte is the maximum amount of data to read.
RETURN VALUE
    The actual amount of data read from the file. The file
    pointer is advanced by the amount of data read.
19
Solaris 2 Synchronization
  • Implements a variety of locks to support
    multitasking, multithreading (including real-time
    threads), and multiprocessing.
  • Uses adaptive mutexes for efficiency when
    protecting data from short code segments.
  • Uses condition variables and readers-writers
    locks when longer sections of code need access to
    data.
  • Uses turnstiles to order the list of threads
    waiting to acquire either an adaptive mutex or
    reader-writer lock.

20
Windows 2000 Synchronization
  • Uses interrupt masks to protect access to global
    resources on uniprocessor systems.
  • Uses spinlocks on multiprocessor systems.
  • Also provides dispatcher objects which may act as
    either mutexes or semaphores.
  • Dispatcher objects may also provide events. An
    event acts much like a condition variable.

21
Basic question
  • Maybe the library approach is not such a good
    idea?
  • How can programming languages make concurrent and
    distributed programming easier?

22
Language support for concurrency
  • Help promote good software engineering
  • Allowing the programmer to express solutions more
    closely to the problem domain
  • No need to juggle several programming models
    (hardware, OS, library, ...)
  • Make invariants and intentions more apparent
    (part of the interface and/or type system)
  • Allows the compiler much more freedom to choose
    different implementations
  • Base the programming language constructs on a
    well-understood formal model => formal reasoning
    may be less hard and the use of tools may be
    possible

23
What could languages provide?
  • Abstract model of system
  • abstract machine => abstract system
  • Example high-level constructs
  • Communication abstractions
  • Synchronous communication
  • Buffered asynchronous channels that preserve msg
    order
  • Mutual exclusion, atomicity primitives
  • Most concurrent languages provide some form of
    locking
  • Atomicity is more complicated, less commonly
    provided
  • Process as the value of an expression
  • Pass processes to functions
  • Create processes as the result of function calls

24
Design Issues for Concurrency
  • How is cooperation synchronization provided?
  • How is competition synchronization provided?
  • How and when do tasks begin and end execution?
  • Are tasks statically or dynamically created?
  • Are there any syntactic constructs in the
    language?
  • Are concurrency constructs reflected in the type
    system?
  • How to generate code for concurrency constructs?
  • How is the run-time system affected?

25
Run-time system for concurrency
  • Processes versus Threads

[Diagram: a process contains threads, which may in turn
contain fibres; threads are managed by a thread library on
top of the operating system. Fibres are sometimes called
green threads.]
26
Multithreading in Java: multithreading models
Many-to-One model: Green threads in Solaris
[Diagram: a Java application using green threads runs inside
the JVM in user space; all its threads are mapped onto a
single LWP in kernel space, which the kernel schedules on
the CPU.]
27
Multithreading in Java: multithreading models
Many-to-One: Green threads in Solaris
  • Multiple ULTs map to one KLT
  • The threads library is provided by the Java
    Development Kit (JDK).

A thread library is a package of code for user-level thread
management, i.e. scheduling thread execution, saving thread
contexts, etc. In Solaris this threads library is called
green threads.
  • Disadvantages
  • If one thread blocks, all threads are blocked
  • Cannot run in parallel on multiprocessors

28
Multithreading in Java: multithreading models
One-to-One model in Windows NT
[Diagram: each thread of the Java application (JVM) in user
space is mapped to its own LWP in kernel space, which the
kernel schedules on the CPU.]
29
Multithreading in Java: multithreading models
One-to-One model in Windows NT
  • One ULT maps to one KLT
  • Realized by the Windows NT threads package.
  • The kernel maintains context information for the
    process and for each individual thread.
  • Disadvantage
  • Switching from one thread to another at kernel
    level takes much longer than at user level.

30
Multithreading in Java: multithreading models
Many-to-Many model: Native threads in Solaris
[Diagram: the Java application (native threads) in user
space is mapped onto several LWPs in kernel space, which the
kernel schedules on the CPU.]
31
Multithreading in Java: multithreading models
Many-to-Many model: Native threads in Solaris
  • Two-level (combined) model of ULTs and KLTs
  • In the Solaris operating system, the native threads
    library can be selected by setting THREADS_FLAG in
    the JDK to "native".
  • A user-level threads library (native threads),
    provided by the JDK, can schedule user-level threads
    above kernel-level threads.
  • The kernel only needs to manage the threads that are
    currently active.
  • Solves the problems of the two models above

32
Synchronization
  • Kinds of synchronization
  • 1. Cooperation
  • Task A must wait for task B to complete some
    specific activity before task A can continue its
    execution, e.g. the producer-consumer problem
  • 2. Competition
  • When two or more tasks must use some resource
    that cannot be used simultaneously, e.g. a shared
    counter
  • Competition is usually handled by mutually
    exclusive access (approaches are discussed
    later)

33
Basic issue conflict between processes
  • Critical section
  • Two processes may access shared resource(s)
  • Inconsistent behaviour if two actions are
    interleaved
  • Allow only one process in critical section
  • Deadlock
  • Process may hold some locks while awaiting others
  • Deadlock occurs when no process can proceed

34
Concurrent Pascal cobegin/coend
  • Limited concurrency primitive
  • Example

      x := 0;
      cobegin
        begin x := 1; x := x+1 end;
        begin x := 2; x := x+1 end
      coend;
      print(x);

cobegin/coend executes the sequential blocks in parallel.
[Diagram: x := 0 forks into the two branches x := 1; x := x+1
and x := 2; x := x+1, which join again before print(x).]
Atomicity is at the level of the assignment statement.
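For comparison, here is a minimal Java sketch of my own (not from the slides) of the same kind of race: two threads update a shared variable without synchronization, so the printed value depends on the interleaving and may be 2, 3 or 4.

    // Two unsynchronized updates of a shared variable: the result is nondeterministic.
    public class Race {
        static int x = 0;

        public static void main(String[] args) throws InterruptedException {
            Thread a = new Thread(() -> { x = 1; x = x + 1; });
            Thread b = new Thread(() -> { x = 2; x = x + 1; });
            a.start();
            b.start();
            a.join();
            b.join();
            System.out.println(x);   // 2, 3 or 4, depending on the interleaving
        }
    }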
35
Mutual exclusion
  • Sample action

      procedure sign_up(person);
      begin
        number := number + 1;
        list[number] := person;
      end;

  • Problem with parallel execution

      cobegin
        sign_up(fred);
        sign_up(bill);
      end;

[Diagram: both calls may increment number and write into the
list concurrently, so one entry can overwrite the other.]
36
Locks and Waiting
  • <initialize concurrency control>
  • cobegin
  • begin
  • <wait>
  • sign_up(fred) // critical section
  • <signal>
  • end
  • begin
  • <wait>
  • sign_up(bill) // critical section
  • <signal>
  • end
  • end

Need atomic operations to implement wait
37
Mutual exclusion primitives
  • Atomic test-and-set
  • Instruction atomically reads and writes some
    location
  • Common hardware instruction
  • Combine with busy-waiting loop to implement mutex
  • Semaphore
  • Avoid busy-waiting loop
  • Keep queue of waiting processes
  • Scheduler has access to semaphore; process sleeps
  • Disable interrupts during semaphore operations
  • OK since operations are short
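As a hedged illustration (the names are mine, not from the slides), the sign_up critical section from the earlier slide could be protected with a semaphore in Java, using java.util.concurrent.Semaphore so that waiting threads are queued instead of busy-waiting:

    import java.util.concurrent.Semaphore;

    class SignUpList {
        private final Semaphore mutex = new Semaphore(1);  // binary semaphore
        private final String[] list = new String[100];
        private int number = 0;

        void signUp(String person) throws InterruptedException {
            mutex.acquire();            // wait: blocks if another thread holds the semaphore
            try {
                number = number + 1;    // critical section
                list[number] = person;
            } finally {
                mutex.release();        // signal: wakes one queued thread, if any
            }
        }
    }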

38
Monitor (Brinch-Hansen, Dahl, Dijkstra, Hoare)
  • Synchronized access to private data. Combines
  • private data
  • set of procedures (methods)
  • synchronization policy
  • At most one process may execute a monitor
    procedure at a time; this process is said to be
    in the monitor.
  • If one process is in the monitor, any other
    process that calls a monitor procedure will be
    delayed.
  • Modern terminology: synchronized object

39
Java Concurrency
  • Threads
  • Create process by creating thread object
  • Communication
  • Shared variables
  • Method calls
  • Mutual exclusion and synchronization
  • Every object has a lock (inherited from class
    Object)
  • synchronized methods and blocks
  • Synchronization operations (inherited from class
    Object)
  • wait: pause the current thread until another thread
    calls notify
  • notify: wake up a waiting thread
  • notifyAll: wake up all waiting threads (a sketch
    using wait/notifyAll follows)
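A small sketch of my own (assuming a one-slot buffer; this is not code from the slides) showing how wait and notifyAll combine with synchronized methods:

    class OneSlotBuffer {
        private String slot;            // null means empty

        public synchronized void put(String s) throws InterruptedException {
            while (slot != null)        // wait while the buffer is full
                wait();
            slot = s;
            notifyAll();                // wake threads blocked in get()
        }

        public synchronized String get() throws InterruptedException {
            while (slot == null)        // wait while the buffer is empty
                wait();
            String s = slot;
            slot = null;
            notifyAll();                // wake threads blocked in put()
            return s;
        }
    }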

40
Java Threads
  • Thread
  • Set of instructions to be executed one at a time,
    in a specified order
  • Java thread objects
  • Object of class Thread
  • Methods inherited from Thread
  • start: method called to spawn a new thread of
    control; causes the VM to call the run method
  • (suspend: freeze execution)
  • (interrupt: freeze execution and throw an exception
    to the thread)
  • (stop: forcibly cause the thread to halt)
  • Objects can implement the Runnable interface and
    be passed to a thread (see the sketch below)
  • public interface Runnable { public void run(); }
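A minimal sketch (mine, not from the slides) of spawning a thread by passing a Runnable to a Thread object and calling start():

    public class HelloThread {
        public static void main(String[] args) throws InterruptedException {
            Runnable task = new Runnable() {
                public void run() {                 // called by the VM in the new thread
                    System.out.println("child thread running");
                }
            };
            Thread t = new Thread(task);
            t.start();                              // spawns a new thread of control
            t.join();                               // wait for it to finish
            System.out.println("parent thread done");
        }
    }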

41
Interaction between threads
  • Shared variables
  • Two threads may assign/read the same variable
  • Programmer responsibility
  • Avoid race conditions by explicit
    synchronization!!
  • Method calls
  • Two threads may call methods on the same object
  • Synchronization primitives
  • Each object has internal lock, inherited from
    Object
  • Synchronization primitives based on object
    locking

42
Synchronization example
  • Objects may have synchronized methods
  • Can be used for mutual exclusion
  • Two threads may share an object.
  • If one calls a synchronized method, this locks
    the object.
  • If the other calls a synchronized method on the
    same object, this thread blocks until the object
    is unlocked.

43
Synchronized methods
  • Marked by keyword
  • public synchronized void commitTransaction()
  • Provides mutual exclusion
  • At most one synchronized method can be active
  • Unsynchronized methods can still be called
  • Programmer must be careful
  • Not part of method signature
  • A sync method is equivalent to an unsync method
    whose body consists of a synchronized block (see
    the sketch below)
  • A subclass may replace a synchronized method with
    an unsynchronized method
  • This problem is known as the inheritance anomaly
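To make the equivalence concrete, a sketch with invented names (Account and deposit are illustrative only):

    class Account {
        private int balance;

        // synchronized method: locks 'this' for the duration of the call
        public synchronized void deposit(int amount) {
            balance += amount;
        }

        // equivalent unsynchronized method whose body is a synchronized block
        public void depositEquivalent(int amount) {
            synchronized (this) {
                balance += amount;
            }
        }
    }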

44
Aspects of Java Threads
  • Portable since part of language
  • Easier to use in basic libraries than C system
    calls
  • Example: the garbage collector is a separate thread
  • General difficulty: combining serial/concurrent code
  • Serial to concurrent
  • Code written for serial execution may not work in a
    concurrent system
  • Concurrent to serial
  • Code with synchronization may be inefficient in
    serial programs (10-20% unnecessary overhead)
  • Abstract memory model
  • Shared variables can be problematic on some
    implementations
  • Java 1.5 has expanded the definition of the
    memory model

45
C# Threads
  • Basic thread operations
  • Any method can run in its own thread, i.e. no
    need to pass a class implementing a run method
  • A thread is created by creating a Thread object
  • The Thread class is sealed, thus no inheritance
    from it
  • Creating a thread does not start its concurrent
    execution; it must be requested through the
    Start method
  • A thread can be made to wait for another thread
    to finish with Join
  • A thread can be suspended with Sleep
  • A thread can be terminated with Abort

46
C# Threads
  • Synchronizing threads
  • The Interlocked class
  • The lock statement
  • The Monitor class
  • Evaluation
  • An advance over Java threads, e.g. any method
    can run in its own thread
  • Thread termination cleaner than in Java
  • Synchronization is more sophisticated

47
Polyphonic C#
  • An extension of the C# language with new
    concurrency constructs
  • Based on the join calculus
  • A foundational process calculus like the
    π-calculus but better suited to asynchronous,
    distributed systems
  • A single model which works both for
  • local concurrency (multiple threads on a single
    machine)
  • distributed concurrency (asynchronous messaging
    over LAN or WAN)
  • It is different
  • But it's also simple: if Mort can do any kind of
    concurrency, he can do this

48
In one slide
  • Objects have both synchronous and asynchronous
    methods.
  • Values are passed by ordinary method calls
  • If the method is synchronous, the caller blocks
    until the method returns some result (as usual).
  • If the method is async, the call completes at
    once and returns void.
  • A class defines a collection of chords
    (synchronization patterns), which define what
    happens once a particular set of methods has been
    invoked. One method may appear in several chords.
  • When pending method calls match a pattern, its
    body runs.
  • If there is no match, the invocations are queued
    up.
  • If there are several matches, an unspecified
    pattern is selected.
  • If a pattern containing only async methods fires,
    the body runs in a new thread.

49
Extending C# with chords
  • Classes can declare methods using generalized
    chord-declarations instead of method-declarations.

    chord-declaration ::= method-header [ & method-header ]* body
    method-header     ::= attributes modifiers (return-type | async) name (parms)

  • Interesting well-formedness conditions
  • At most one header can have a return type (i.e.
    be synchronous).
  • Inheritance restriction.
  • ref and out parameters cannot appear in async
    headers.

50
A Simple Buffer
  • class Buffer {
        String get() & async put(String s) {
            return s;
        }
    }
  • Calls to put() return immediately (but are
    internally queued if there's no waiting get()).
  • Calls to get() block until/unless there's a
    matching put().
  • When there's a match the body runs, returning the
    argument of the put() to the caller of get().
  • Exactly which pairs of calls are matched up is
    unspecified.

51
OCCAM
  • Program consists of processes and channels
  • Process is code containing channel operations
  • Channel is a data object
  • All synchronization is via channels
  • Formal foundation based on CSP

52
Channel Operations in OCCAM
  • Read data item D from channel C
  • D ? C
  • Write data item Q to channel C
  • Q ! C
  • If reader accesses channel first, wait for
    writer, and then both proceed after transfer.
  • If writer accesses channel first, wait for
    reader, and both proceed after transfer.

53
Concurrent ML
  • Threads
  • New type of entity
  • Communication
  • Synchronous channels
  • Synchronization
  • Channels
  • Events
  • Atomicity
  • No specific language support

54
Threads
  • Thread creation
  • spawn : (unit -> unit) -> thread_id
  • Example code
  • CIO.print "begin parent\n"
  • spawn (fn () gt (CIO.print "child
    1\n"))
  • spawn (fn () gt (CIO.print "child
    2\n"))
  • CIO.print "end parent\n
  • Result (one possible interleaving)

begin parent
child 1
child 2
end parent
55
Channels
  • Channel creation
  • channel : unit -> 'a chan
  • Communication
  • recv : 'a chan -> 'a
  • send : ('a chan * 'a) -> unit
  • Example
  • val ch = channel();
    spawn (fn () => ( <A>; send(ch, 0); <B> ));
    spawn (fn () => ( <C>; recv ch; <D> ))
  • Result

[Diagram: <A> and <C> run first, independently; the send and
recv then synchronize with each other; afterwards <B> and
<D> proceed.]
56
CML programming
  • Functions
  • Can write functions: channels -> threads
  • Build concurrent system by declaring channels and
    wiring together sets of threads
  • Events
  • Delayed action that can be used for
    synchronization
  • Powerful concept for concurrent programming
  • Sample Application
  • eXene concurrent uniprocessor window system

57
A CML implementation (simplified)
  • Use queues with side-effecting functions
  • datatype 'a queue = Q of {front : 'a list ref,
                              rear : 'a list ref}
  • fun queueIns (Q{...}) = (* insert into queue *)
  • fun queueRem (Q{...}) = (* remove from queue *)
  • And continuations
  • val enqueue = queueIns rdyQ
  • fun dispatch () = throw (queueRem rdyQ) ()
  • fun spawn f = callcc (fn parent_k =>
        (enqueue parent_k; f (); dispatch()))

Source: Appel, Reppy
58
Fortress
  • Fortress STM

59
Fortress Atomic blocks
60
Software Transactional Memory
  • Locks are hard to get right
  • Programmability vs scalability
  • Transactional memory is appealing alternative
  • Simpler programming model
  • Stronger guarantees
  • Atomicity, Consistency, Isolation
  • Deadlock avoidance
  • Closer to programmer intent
  • Scalable implementations
  • Questions
  • How to lower TM overheads particularly in
    software?
  • How to balance granularity / scalability?
  • How to co-exist with other concurrency
    constructs?

61
Language issues in client/server programming
  • Communication mechanisms
  • RPC, Remote Objects, SOAP
  • Data representation languages
  • XDR, ASN.1, XML
  • Parsing and deparsing between internal and
    external representation
  • Stub generation

62
Client/server example
A major task of most clients is to interact
with a human user and a remote server.
  • The basic organization of the X Window System

63
Client-Side Software for Distribution Transparency
  • A possible approach to transparent replication of
    a remote object using a client-side solution.

64
The Stub Generation Process

[Diagram: an interface specification is fed to a stub
generator, which produces a client stub, a server stub, and
a common header. The client stub is compiled and linked with
the client source and the RPC library to produce the client
program; the server stub is compiled and linked with the
server source and the RPC library to produce the server
program.]
65
RPC and the OSI Reference Model
66
Representation
  • Data must be represented in a meaningful format.
  • Methods
  • Sender or receiver makes right.
  • Network Data Representation (NDR).
  • Transmit architecture tag with data.
  • Represent data in a canonical (or standard) form
  • XDR
  • ASN.1
  • Note: these are languages, but traditional DS
    programmers don't like programming languages,
    except C

67
XDR - eXternal Data Representation
  • XDR is a universally used standard from Sun
    Microsystems used to represent data in a network
    canonical (standard) form.
  • A set of conversion functions is used to encode
    and decode data; for example, xdr_int() is used
    to encode and decode integers.
  • Conversion functions exist for all standard data
    types
  • Integers, chars, arrays, ...
  • For complex structures, RPCGEN can be used to
    generate conversion routines.

68
RPC Example
69
XDR Example
      #include <rpc/xdr.h>
      ...
      XDR sptr;              /* XDR stream */
      XDR *xdrs;             /* pointer to XDR stream */
      char buf[BUFSIZE];     /* buffer to hold XDR data */

      xdrs = &sptr;
      xdrmem_create(xdrs, buf, BUFSIZE, XDR_ENCODE);
      ...
      int i = 256;
      xdr_int(xdrs, &i);
      printf("position: %d\n", xdr_getpos(xdrs));

70
Abstract Syntax Notation 1 (ASN.1)
  • ASN.1 is a formal language that has two features
  • a notation used in documents that humans read
  • a compact encoded representation of the same
    information used in communication protocols.
  • ASN.1 uses a tagged message format
  • < tag (data type), data length, data value >
  • Simple Network Management Protocol (SNMP)
    messages are encoded using ASN.1.

71
Distributed Objects
  • CORBA
  • Java RMI
  • SOAP and XML

72
Distributed ObjectsProxy and Skeleton in Remote
Method Invocation
73
CORBA
  • Common Object Request Broker Architecture
  • An industry standard developed by OMG to help in
    distributed programming
  • A specification for creating and using
    distributed objects
  • A tool for enabling multi-language,
    multi-platform communication
  • A CORBA-based system is a collection of objects
    that isolates the requestors of services
    (clients) from the providers of services
    (servers) by an encapsulating interface

74
CORBA objects
  • They are different from typical programming
    objects in three ways
  • CORBA objects can run on any platform
  • CORBA objects can be located anywhere on the
    network
  • CORBA objects can be written in any language that
    has IDL mapping.

75
[Diagram: a request from a client to an object
implementation within a network. The client and the object
implementation each connect to their ORB through IDL
interfaces, and the two ORBs communicate over the network.]
76
IDL (Interface Definition Language)
  • CORBA objects have to be specified with
    interfaces (as with RMI) defined in a special
    definition language, IDL.
  • The IDL defines the types of objects by defining
    their interfaces and describes interfaces only,
    not implementations.
  • From IDL definitions an object implementation
    tells its clients what operations are available
    and how they should be invoked.
  • Some programming languages have an IDL mapping
    (C, C++, Smalltalk, Java, Lisp)

77
[Diagram: an IDL file is fed to the IDL compiler, which
generates a client stub file and a server skeleton file. The
client implementation uses the stub and the object
implementation uses the skeleton; both communicate through
the ORB.]
78
The IDL compiler
  • It will accept as input an IDL file written using
    any text editor (fileName.idl)
  • It generates the stub and the skeleton code in
    the target programming language (e.g. a Java stub
    and a C++ skeleton)
  • The stub is given to the client as a tool to
    describe the server functionality, the skeleton
    file is implemented at the server.

79
IDL Example
module katytrail {
  module weather {
    struct WeatherData {
      float temp;
      string wind_direction_and_speed;
      float rain_expected;
      float humidity;
    };
    typedef sequence<WeatherData> WeatherDataSeq;
    interface WeatherInfo {
      WeatherData get_weather(in string site);
      WeatherDataSeq find_by_temp(in float temperature);
    };
80
IDL Example Cont.
    interface WeatherCenter {
      void register_weather_for_site(in string site,
                                     in WeatherData site_data);
    };
  };
};

Both interfaces will have Object
Implementations. A different type of Client will
talk to each of the interfaces. The Object
Implementations can be done in one of two ways:
through inheritance or through a Tie.
81
Stubs and Skeletons
  • In terms of CORBA development, the stubs and
    skeleton files are standard in terms of their
    target language.
  • Each file exposes the same operations specified
    in the IDL file.
  • Invoking an operation on the stub file will cause
    the method to be executed in the skeleton file
  • The stub file allows the client to manipulate the
    remote object with the same ease with which a
    local object is manipulated

82
Java RMI
  • Overview
  • Supports remote invocation of Java objects
  • Key: Java Object Serialization (stream objects
    over the wire)
  • Language specific
  • History
  • Goal: RPC for Java
  • First release in JDK 1.0.2, used in Netscape 3.01
  • Full support in JDK 1.1, intended for applets
  • JDK 1.2 added persistent reference, custom
    protocols, more support for user control.

83
Java RMI
  • Advantages
  • True object-orientation: objects as arguments
    and values
  • Mobile behavior: returned objects can execute on
    the caller
  • Integrated security
  • Built-in concurrency (through Java threads)
  • Disadvantages
  • Java only
  • Advertises support for non-Java
  • But this is external to RMI; it requires Java on
    both sides

84
Java RMI Components
  • Base RMI classes
  • Extend these to get RMI functionality
  • Java compiler: javac
  • Recognizes RMI as an integral part of the language
  • Interface compiler: rmic
  • Generates stubs from class files
  • RMI registry: rmiregistry
  • Directory service
  • RMI run-time activation system: rmid
  • Supports activatable objects that run only on
    demand

85
RMI Implementation
[Diagram: the stub resides on the client host and the
skeleton on the server host; calls pass from the stub to the
skeleton across the network.]
86
Java RMI Object Serialization
  • Java can send an object to be invoked at a remote
    site
  • Allows objects as arguments/results
  • Mechanism: Object Serialization
  • The object passed must implement Serializable
  • Provides methods to translate the object to/from a
    byte stream
  • Security issues
  • Ensure the object is not tampered with during
    transmission
  • Solution: class-specific serialization (throw it
    on the programmer)

87
Building a Java RMI Application
  • Define remote interface
  • Extend java.rmi.Remote
  • Create server code
  • Implements interface
  • Creates security manager, registers with registry
  • Create client code
  • Define object as instance of interface
  • Lookup object in registry
  • Call object
  • Compile and run
  • Run rmic on compiled classes to create stubs
  • Start registry
  • Run the server, then the client (a condensed
    sketch of these steps follows)
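A condensed sketch of these steps in one place; the interface and class names (Hello, HelloImpl, HelloClient) and the registry URL are invented for illustration:

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.Naming;
    import java.rmi.server.UnicastRemoteObject;

    // 1. Remote interface: extends java.rmi.Remote, methods throw RemoteException
    interface Hello extends Remote {
        String sayHello() throws RemoteException;
    }

    // 2. Server: implements the interface and registers with the registry
    class HelloImpl extends UnicastRemoteObject implements Hello {
        HelloImpl() throws RemoteException { super(); }
        public String sayHello() { return "Hello, world"; }

        public static void main(String[] args) throws Exception {
            Naming.rebind("//localhost/Hello", new HelloImpl());
        }
    }

    // 3. Client: looks the object up in the registry and calls it
    class HelloClient {
        public static void main(String[] args) throws Exception {
            Hello h = (Hello) Naming.lookup("//localhost/Hello");
            System.out.println(h.sayHello());
        }
    }

Compile with javac, run rmic on HelloImpl to create the stub, start rmiregistry, then run the server and the client.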

88
Parameter Passing
  • Primitive types
  • call-by-value
  • Remote objects
  • call-by-reference
  • Non-remote objects
  • call-by-value
  • use Java Object Serialization

89
Java Serialization
  • Writes object as a sequence of bytes
  • Writes it to a Stream
  • Recreates it on the other end
  • Creates a brand new object with the old data
  • Objects can be transmitted using any byte stream
    (including sockets and TCP).
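A minimal sketch of this write/recreate cycle (the class and file name are invented for illustration):

    import java.io.*;

    class Student implements Serializable {
        String fullName;
        Student(String n) { fullName = n; }
    }

    class SerializationDemo {
        public static void main(String[] args) throws Exception {
            // write the object as a sequence of bytes to a stream (here a file)
            ObjectOutputStream out =
                new ObjectOutputStream(new FileOutputStream("student.ser"));
            out.writeObject(new Student("Bhavin Parikh"));
            out.close();

            // recreate a brand new object with the old data on the other end
            ObjectInputStream in =
                new ObjectInputStream(new FileInputStream("student.ser"));
            Student copy = (Student) in.readObject();
            in.close();
            System.out.println(copy.fullName);
        }
    }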

90
Codebase Property
  • Stub classpaths can be confusing
  • 3 VMs, each with its own classpath
  • Server vs. Registry vs. Client
  • The RMI class loader always loads stubs from the
    CLASSPATH first
  • Next, it tries downloading classes from a web
    server
  • (but only if a security manager is in force)
  • java.rmi.server.codebase specifies which web
    server

91
CORBA vs. RMI
  • CORBA was designed for language independence
    whereas RMI was designed for a single language
    where objects run in a homogeneous environment
  • CORBA interfaces are defined in IDL, while RMI
    interfaces are defined in Java
  • CORBA objects are not garbage collected, because
    they are language independent and have to be
    consistent with languages that do not support
    garbage collection; RMI objects, on the other
    hand, are garbage collected automatically

92
SOAP Introduction
  • SOAP is a simple, lightweight, text-based protocol
  • SOAP is an XML-based protocol (XML encoding)
  • SOAP is a remote procedure call protocol; it is
    not completely object-oriented
  • SOAP can be wired over any protocol
  • SOAP is a simple lightweight protocol with a
    minimum set of rules for invoking remote services,
    using XML for data representation and HTTP as the
    wire protocol.
  • Main goal of the SOAP protocol: interoperability
  • SOAP does not specify any advanced distributed
    services.

93
Why SOAP? What's wrong with existing distributed
technologies?
  • Platform- and vendor-dependent solutions
    (DCOM: Windows; CORBA: ORB vendors; RMI: Java)
  • Different data representation schemes
    (CDR vs NDR)
  • Complex client-side deployment
  • Difficulties with firewalls

    Firewalls allow only specific ports (e.g. port 80),
    but DCOM and CORBA assign port numbers
    dynamically.
  • In short, these distributed technologies do not
    communicate easily with each other because of a
    lack of standards between them.

94
Base Technologies HTTP and XML
  • SOAP uses the existing technologies, invents no
    new technology.
  • XML and HTTP are accepted and deployed in all
    platforms.
  • Hypertext Transfer Protocol (HTTP)
  • HTTP is a very simple, text-based protocol.
  • HTTP layers request/response communication over
    TCP/IP. HTTP supports a fixed set of methods like
    GET and POST.
  • Client / server interaction
  • The client requests to open a connection to the
    server on the default port number
  • The server accepts the connection
  • The client sends a request message to the server
  • The server processes the request
  • The server sends a reply message to the client
  • The connection is closed
  • HTTP servers are scalable, reliable and easy to
    administer.
  • SOAP can bind to any protocol: HTTP, SMTP, FTP
    (a sketch of the HTTP interaction follows)
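As an illustration only (the URL is a placeholder), the client/server interaction listed above can be driven from Java with HttpURLConnection:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    class HttpGetDemo {
        public static void main(String[] args) throws Exception {
            // open a connection to the server (placeholder URL) on the default port
            URL url = new URL("http://www.example.com/index.html");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");          // one of HTTP's fixed set of methods

            // the server processes the request and sends a reply message
            System.out.println("status: " + conn.getResponseCode());
            BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()));
            String line;
            while ((line = in.readLine()) != null)
                System.out.println(line);
            in.close();
            conn.disconnect();                     // the connection is closed
        }
    }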

95
Extensible Markup Language (XML)
  • XML is a platform-neutral data representation
    format.
  • HTML combines data and presentation, but XML
    contains just structured data.
  • XML has no fixed set of tags; users can build
    their own customized tags.

      <student>
        <full_name>Bhavin Parikh</full_name>
        <email>bgp4@psu.edu</email>
      </student>

  • XML is platform and language independent.
  • XML is text-based, easy to handle, and can be
    easily extended.

96
Architecture diagram
97
Parsing XML Documents
  • Remember: XML is just text
  • Simple API for XML (SAX) parsing
  • SAX is typically most efficient
  • No in-memory representation is built
  • Left to the developer
  • Document Object Model (DOM) parsing
  • Parsing is not the fundamental emphasis.
  • A DOM object is a representation of the XML
    document in a tree format.

98
Parsing Examples
  • SaxParseExample
  • Callback functions to process Nodes
  • DomParseExample
  • Use of JAXP (Java API for XML Parsing)
  • Implementations can be swapped, such as
    replacing Apache Xerces with Sun Crimson.
  • JAXP does not include some advanced features
    that may be useful.
  • SAX used behind the scenes to create object model
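A sketch of DOM parsing through JAXP; the file name is a placeholder, and the element names match the student example from the XML slide:

    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    class DomParseSketch {
        public static void main(String[] args) throws Exception {
            // JAXP hides the parser implementation (e.g. Xerces or Crimson)
            DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
            DocumentBuilder builder = factory.newDocumentBuilder();

            // build the whole document as an in-memory DOM tree
            Document doc = builder.parse("students.xml");   // placeholder file name
            NodeList students = doc.getElementsByTagName("student");
            for (int i = 0; i < students.getLength(); i++) {
                Element student = (Element) students.item(i);
                String name = student.getElementsByTagName("full_name")
                                     .item(0).getTextContent();
                System.out.println(name);
            }
        }
    }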

99
Web-based applications today
Presentation: HTML, CSS, JavaScript, Flash, Java applets,
ActiveX controls
Application server / web server / content management system
Business logic: C#, Java, VB, PHP, Perl, Python, Ruby;
beans, servlets, CGI, ASP.NET, ...
Operating system
Database: SQL, file system
Sockets, HTTP, email, SMS, XML, SOAP, REST, Rails,
reliable messaging, AJAX, ...
Replication, distribution, load-balancing, security,
concurrency
100
Languages for distributed computing
  • Motivation
  • Why all the fuss about language and platform
    independence?
  • It is extremely inefficient to parse/deparse
    to/from external/internal representation
  • 95% of all computers run Windows anyway
  • There is a JVM for almost any processor you can
    think of
  • Few programmers master more than one programming
    language anyway
  • Develop a coherent programming model for all
    aspects of an application

101
Facile Programming Language
  • Integration of Multiple Paradigms
  • Functions
  • Types/complex data types
  • Concurrency
  • Distribution/soft real-time
  • Dynamic connectivity
  • Implemented as extension to SML
  • Syntax for concurrency similar to CML

102
(No Transcript)
103
Facile implementation
  • Pre-emptive scheduler implemented at the lowest
    level
  • Exploiting CPS translation => state characterised
    by the set of registers
  • Garbage collector used for linearizing data
    structures
  • Lambda level code used as intermediate language
    when shipping data (including code) in
    heterogeneous networks
  • Native representation is shipped when possible
  • i.e. same architecture and within same trust
    domain
  • Possibility to mix between interpretation or JIT
    depending on usage

104
Conclusion
  • Concurrency may be an order of magnitude more
    difficult to handle
  • Programming language support for concurrency may
    help make the task easier
  • Which concurrency constructs to add to the
    language is still a very active research area
  • If you add concurrency constructs, be sure you
    base them on a formal model!

105
The guiding principle
Put important features in the language itself,
rather than in libraries
  • Provide better level of abstraction
  • Make invariants and intentions more apparent
  • Part of the language syntax
  • Part of the type system
  • Part of the interface
  • Give stronger compile-time guarantees (types)
  • Enable different implementations and
    optimizations
  • Expose structure for other tools to exploit (e.g.
    static analysis)