Lecture 2 Addendum: Software Platforms

1
Lecture 2 Addendum: Software Platforms
  • Anish Arora
  • CIS788.11J
  • Introduction to Wireless Sensor Networks
  • Lecture uses slides from tutorials prepared by
    authors of these platforms

2
Outline
  • Platforms (contd.)
  • SOS: slides from UCLA
  • Virtual machines (Maté): slides from UCB
  • Contiki: slides from Uppsala

3
References
  • SOS: MobiSys paper
  • SOS webpage
  • Maté: A Virtual Machine for Sensor Networks
    (ASPLOS)
  • Maté webpage
  • Contiki: EmNetS paper
  • Contiki webpage

4
SOS Motivation and Key Feature
  • Post-deployment software updates are necessary to
  • customize the system to the environment
  • upgrade features
  • remove bugs
  • re-task the system
  • Remote reprogramming is desirable
  • Approach: remotely insert binary modules into the
    running kernel
  • software reconfiguration without interrupting
    system operation
  • no stop and reboot, unlike differential patching
  • Performance should be superior to virtual
    machines

5
Architecture Overview
  • Static Kernel
  • Provides hardware abstraction & common services
  • Maintains data structures to enable module
    loading
  • Costly to modify after deployment
  • Dynamic Modules
  • Drivers, protocols, and applications
  • Inexpensive to modify after deployment
  • Position independent

6
SOS Kernel
  • Hardware Abstraction Layer (HAL)
  • Clock, UART, ADC, SPI, etc.
  • Low layer device drivers interface with HAL
  • Timer, serial framer, communications stack, etc.
  • Kernel services
  • Dynamic memory management
  • Scheduling
  • Function control blocks

7
Kernel Services Memory Management
  • Fixed-partition dynamic memory allocation
  • Constant allocation time
  • Low overhead
  • Memory management features
  • Guard bytes for run-time memory overflow checks
  • Ownership tracking
  • Garbage collection on completion
  • pkt = (uint8_t *)ker_malloc(hdr_size +
    sizeof(SurgeMsg), SURGE_MOD_PID);
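
SOS's allocator internals are not shown in these slides; the
following sketch (illustrative names, not SOS source) shows why
fixed-partition allocation gives constant allocation time: alloc
and free are just free-list pop and push.

#include <stdint.h>
#include <stddef.h>

#define BLOCK_SIZE 32    /* illustrative partition size */
#define NUM_BLOCKS 16    /* illustrative pool size      */

typedef union block {
  union block *next;              /* link, valid while free */
  uint8_t      data[BLOCK_SIZE];
} block_t;

static block_t  pool[NUM_BLOCKS];
static block_t *free_list;

void pool_init(void) {
  int i;
  for (i = 0; i < NUM_BLOCKS - 1; i++)
    pool[i].next = &pool[i + 1];  /* chain all blocks        */
  pool[NUM_BLOCKS - 1].next = NULL;
  free_list = &pool[0];
}

void *pool_alloc(void) {          /* O(1): pop the list head */
  block_t *b = free_list;
  if (b != NULL)
    free_list = b->next;
  return b;
}

void pool_free(void *p) {         /* O(1): push the block    */
  block_t *b = (block_t *)p;
  b->next = free_list;
  free_list = b;
}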

8
Kernel Services Scheduling
  • SOS implements non-preemptive priority scheduling
    via priority queues
  • Event served when there is no higher priority
    event
  • Low priority queue for scheduling most events
  • High priority queue for time critical events,
    e.g., h/w interrupts & sensitive timers
  • Prevents execution in interrupt contexts
  • post_long(TREE_ROUTING_PID, SURGE_MOD_PID,
    MSG_SEND_PACKET, hdr_size + sizeof(SurgeMsg),
    (void *)packet, SOS_MSG_DYM_MANAGED);

9
Modules
  • Each module is uniquely identified by its ID or
    pid
  • Has private state
  • Represented by a message handler with the
    prototype:
  • int8_t handler(void *private_state, Message *msg)
  • Return value follows the errno convention:
  • SOS_OK for success; -EINVAL, -ENOMEM, etc. for
    failure
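
A skeletal handler following this prototype and return convention
(the complete Blink module appears on slide 17):

static int8_t handler(void *private_state, Message *msg)
{
  app_state *s = (app_state *)private_state; /* module's state */
  switch (msg->type) {
  case MSG_INIT:        /* delivered when the module is loaded */
    s->pid = msg->did;
    return SOS_OK;
  case MSG_FINAL:       /* delivered before the module unloads */
    return SOS_OK;
  default:
    return -EINVAL;     /* errno-style failure code            */
  }
}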

10
Kernel Services Module Linking
  • Orthogonal to module distribution protocol
  • Kernel stores new module in free block located in
    program memory
  • and critical information about module in the
    module table
  • Kernel calls initialization routine for module
  • Publish functions for other parts of the system
    to use
  • char tmp_string[] = {'C', 'v', 'v', 0};
  • ker_register_fn(TREE_ROUTING_PID,
    MOD_GET_HDR_SIZE, tmp_string,
    (fn_ptr_t)tr_get_header_size);
  • Subscribe to functions supplied by other modules
  • char tmp_string[] = {'C', 'v', 'v', 0};
  • s->get_hdr_size = (func_u8_t *)ker_get_handle(
    TREE_ROUTING_PID, MOD_GET_HDR_SIZE, tmp_string);
  • Set initial timers and schedule events

11
Module-to-Kernel Communication
High Priority Message Buffer
  • Kernel provides system services and access to
    hardware
  • ker_timer_start(s->pid, 0, TIMER_REPEAT, 500);
  • ker_led(LED_YELLOW_OFF);
  • Kernel jump table re-directs system calls to
    handlers
  • upgrade kernel independently of the modules
  • Interrupts & messages from the kernel are
    dispatched via a high priority message buffer
  • low latency
  • concurrency safe operation
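
The jump-table indirection can be pictured as an array of function
pointers consulted on every system call; this is an illustrative
sketch with made-up names, not SOS source:

typedef void (*syscall_fn)(void);

enum { SYS_LED_TOGGLE, SYS_NUM_CALLS };

/* The table sits at a fixed, well-known flash address; modules
   always call through it rather than into the kernel directly. */
static syscall_fn jump_table[SYS_NUM_CALLS];

static inline void ker_led_toggle(void) {
  jump_table[SYS_LED_TOGGLE]();  /* redirect to current handler */
}

/* A kernel upgrade only rewrites the jump_table entries, so
   already-installed modules keep working unmodified. */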

12
Inter-Module Communication
  • Inter-Module Message Passing
  • Asynchronous communication
  • Messages dispatched by a two-level priority
    scheduler
  • Suited for services with long latency
  • Type safe binding through publish / subscribe
    interface
  • Inter-Module Function Calls
  • Synchronous communication
  • Kernel stores pointers to functions registered by
    modules
  • Blocking calls with low latency
  • Type-safe runtime function binding

13
Synchronous Communication
  • Module can register function for low latency
    blocking call (1)
  • Modules that need such a function can subscribe
    to it by getting a function pointer pointer
    (i.e., **func) (2)
  • When service is needed, module dereferences the
    function pointer pointer (3)
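
Combining the subscription code from slide 10 with step (3), a
call site looks roughly as follows (a sketch: it assumes the
slides' func_u8_t typedef and an app_state field caching the
handle):

/* (2) Subscribe once, e.g., on MSG_INIT: the kernel returns a
   pointer to its stored function pointer. */
char tmp_string[] = {'C', 'v', 'v', 0};
s->get_hdr_size = (func_u8_t *)ker_get_handle(TREE_ROUTING_PID,
                        MOD_GET_HDR_SIZE, tmp_string);

/* (3) When the service is needed, dereference the function
   pointer pointer and make the blocking, low-latency call. */
uint8_t hdr_size = (*(s->get_hdr_size))();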

14
Asynchronous Communication
  • Module is active when it is handling the message
    (2)(4)
  • Message handling runs to completion and can only
    be interrupted by hardware interrupts
  • Module can send message to another module (3) or
    send message to the network (5)
  • Message can come from both network (1) and local
    host (3)

15
Module Safety
  • Problem Modules can be remotely added, removed,
    modified on deployed nodes
  • Accessing a module
  • If a module doesn't exist, the kernel catches
    messages sent to it & handles dynamically
    allocated memory
  • If a module exists but can't handle the message,
    the module's default handler gets the message &
    the kernel handles dynamically allocated memory
  • Subscribing to a module's function
  • Publishing a function includes a type description
    that is stored in a function control block (FCB)
    table
  • Subscription attempts include type checks against
    corresponding FCB
  • Type changes/removal of published functions
    result in subscribers being redirected to system
    stub handler function specific to that type
  • Updates to functions w/ the same type are assumed
    to have the same semantics

16
Module Library
  • Some applications created by combining already
    written and tested modules
  • SOS kernel facilitates loosely coupled modules
  • Passing of memory ownership
  • Efficient function and messaging interfaces

(Figure: Surge application with debugging, built from library
modules.)
17
Module Design
#include <module.h>

typedef struct { uint8_t pid; uint8_t led_on; } app_state;
DECL_MOD_STATE(app_state);
DECL_MOD_ID(BLINK_ID);

int8_t module(void *state, Message *msg) {
  app_state *s = (app_state *)state;
  switch (msg->type) {
  case MSG_INIT:
    s->pid = msg->did;
    s->led_on = 0;
    ker_timer_start(s->pid, 0, TIMER_REPEAT, 500);
    break;
  case MSG_FINAL:
    ker_timer_stop(s->pid, 0);
    break;
  case MSG_TIMER_TIMEOUT:
    if (s->led_on == 1) ker_led(LED_YELLOW_ON);
    else ker_led(LED_YELLOW_OFF);
    s->led_on++;
    if (s->led_on > 1) s->led_on = 0;
    break;
  default:
    return -EINVAL;
  }
  return SOS_OK;
}
  • Uses standard C
  • Programs created by wiring modules together


18
Sensor Manager
  • Enables sharing of sensor data between multiple
    modules
  • Presents uniform data access API to diverse
    sensors
  • Underlying device specific drivers register with
    the sensor manager
  • Device specific sensor drivers control
  • Calibration
  • Data interpolation
  • Sensor drivers are loadable, which enables
  • post-deployment configuration of sensors
  • hot-swapping of sensors on a running node

19
Application Level Performance
(Figures: Surge forwarding delay, Surge tree formation latency,
and Surge packet delivery ratio, comparing application
performance in SOS, TinyOS, and the Maté VM; memory footprint for
the base operating system with the ability to distribute and
update node programs; CPU active time for the Surge application.)
20
Reconfiguration Performance
(Figures: energy cost of a light sensor driver update; module
size and energy profile for installing Surge under SOS; energy
cost of a Surge application update.)
  • Energy trade-offs
  • SOS has slightly higher base operating cost
  • TinyOS has significantly higher update cost
  • SOS is more energy efficient when the system is
    updated one or more times a week

21
Platform Support
  • Supported micro controllers
  • Atmel ATmega128
  • 4 KB RAM
  • 128 KB FLASH
  • Oki ARM
  • 32 KB RAM
  • 256 KB FLASH
  • Supported radio stacks
  • Chipcon CC1000
  • BMAC
  • Chipcon CC2420
  • IEEE 802.15.4 MAC (NDA required)

22
Simulation Support
  • Source code level network simulation
  • Pthread simulates hardware concurrency
  • UDP simulates perfect radio channel
  • Supports user-defined topology & heterogeneous
    software configurations
  • Useful for verifying functional correctness
  • Instruction level simulation with Avrora
  • Instruction cycle accurate simulation
  • Simple perfect radio channel
  • Useful for verifying timing information
  • See http://compilers.cs.ucla.edu/avrora/
  • EmStar integration under development

23
Network Capable Messages
  • typedef struct {
  •   sos_pid_t did;    // destination module ID
  •   sos_pid_t sid;    // source module ID
  •   uint16_t daddr;   // destination node
  •   uint16_t saddr;   // source node
  •   uint8_t type;     // message type
  •   uint8_t len;      // message length
  •   uint8_t *data;    // payload
  •   uint8_t flag;     // options
  • } Message;
  • Messages are best-effort by default
  • No send-done notification, and low priority
  • Can be changed via flag at runtime
  • Messages are filtered when received
  • CRC check & non-promiscuous mode
  • Filtering can be turned off at runtime
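
As a hedged sketch of overriding the defaults: only post_long and
SOS_MSG_DYM_MANAGED appear in these slides, so post_net and
SOS_MSG_HIGH_PRIORITY below are assumed names that may differ in
the SOS sources.

post_net(TREE_ROUTING_PID,        /* did: destination module    */
         SURGE_MOD_PID,           /* sid: source module         */
         MSG_SEND_PACKET,         /* type                       */
         sizeof(SurgeMsg),        /* len                        */
         (void *)packet,          /* data                       */
         SOS_MSG_HIGH_PRIORITY,   /* flag: override low default */
         dest_node_addr);         /* daddr: destination node    */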

24
Maté: A Virtual Machine for Sensor Networks
  • Why VM?
  • Large number (100s to 1000s) of nodes in a
    coverage area
  • Some nodes will fail during operation
  • Change of function during the mission
  • Related Work
  • PicoJava
  • assumes Java bytecode execution hardware
  • K Virtual Machine
  • requires 160 to 512 KB of memory
  • XML
  • too complex and not enough RAM
  • Scylla
  • VM for mobile embedded system

25
Maté features
  • Small (16KB instruction memory, 1KB RAM)
  • Concise (limited memory bandwidth)
  • Resilience (memory protection)
  • Efficient (bandwidth)
  • Tailorable (user defined instructions)

26
Maté in a Nutshell
  • Stack architecture
  • Three concurrent execution contexts
  • Execution triggered by predefined events
  • Tiny code capsules self-propagate into network
  • Built in communication and sensing instructions

27
When is Maté Preferable?
  • For small number of executions
  • GDI example
  • Bytecode version is preferable for a program
    running less than 5 days
  • In energy constrained domains
  • Use a Maté capsule as a general RPC engine

28
Maté Architecture
  • Stack based architecture
  • Single shared variable
  • gets/sets
  • Three events
  • Clock timer
  • Message reception
  • Message send
  • Hides asynchrony
  • Simplifies programming
  • Less prone to bugs

29
Instruction Set
  • One byte per instruction
  • Three classes: basic, s-type, x-type
  • basic: arithmetic, halting, LED operation
  • s-type: messaging system
  • x-type: pushc, blez
  • 8 instructions reserved for users to define
  • Instruction polymorphism
  • e.g., add(data, message, sensing)

30
Code Example (1)
  • Display Counter to LED
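
The code image itself did not survive the transcript; the
counter-to-LEDs clock capsule from the Maté ASPLOS paper reads
approximately as follows (treat as indicative):

gets      # push shared counter variable
pushc 1   # push constant 1
add       # increment the counter
copy      # duplicate the new value
sets      # store it back to the shared variable
pushc 7   # push constant 7
and       # keep the low three bits
putled    # display them on the LEDs
halt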

31
Code Capsules
  • One capsule = 24 instructions
  • Fits into a single TOS packet
  • Atomic reception
  • Code Capsule
  • Type and version information
  • Types: send, receive, timer, subroutine

32
Viral Code
  • Capsule transmission: forw
  • Forwarding another installed capsule: forwo (use
    within clock capsule)
  • Maté checks the version number on reception of a
    capsule
  • → if it is newer, install it
  • Versioning: 32-bit counter
  • Disseminates new code over the network
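
The version test amounts to compare-and-install on every capsule
reception; a schematic sketch (types and field names are
illustrative):

typedef struct {
  uint8_t  type;       /* send, receive, timer, subroutine */
  uint32_t version;    /* 32-bit version counter           */
  uint8_t  code[24];   /* up to 24 one-byte instructions   */
} capsule_t;

void on_capsule_received(capsule_t *installed, const capsule_t *rx)
{
  if (rx->version > installed->version) {
    *installed = *rx;  /* newer version wins: install it   */
    /* ...then self-forward it (forw), spreading virally   */
  }
}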

33
Component Breakdown
  • Maté runs on Mica with 7286 bytes of code and
    603 bytes of RAM

34
Network Infection Rate
  • 42-node network in a 3 × 14 grid
  • Radio transmission: 3-hop network
  • Cell size: 15 to 30 motes
  • Every mote runs its clock capsule every 20
    seconds
  • Self-forwarding clock capsule

35
Bytecodes vs. Native Code
  • Maté IPS ≈ 10,000
  • Overhead: every instruction is executed as a
    separate TOS task

36
Installation Costs
  • Bytecodes have computational overhead
  • But this can be compensated for, to some extent,
    by using small packets on upload

37
Customizing Maté
  • Maté is a general architecture: users can build
    customized VMs
  • Users can select bytecodes and execution events
  • Issues
  • Flexibility vs. efficiency
  • Customizing increases efficiency, at the cost of
    adapting to changing requirements
  • Java's solution
  • General computational VM + class libraries
  • Maté's approach
  • More customizable solution → let the user decide

38
How to
  • Select a language
  • → defines VM bytecodes
  • Select execution events
  • → execution context, code image
  • Select primitives
  • → beyond language functionality

39
Constructing a Maté VM
  • This generates a set of files, which are used to
    build the TOS application and to configure the
    scripter program

40
Compiling and Running a Program
Write programs in the scripter → compile to VM-specific binary
code → send it over the network to a VM
41
Bombilla Architecture
  • Once context: performs operations that need only
    a single execution
  • 16-word heap shared among the contexts
  • setvar, getvar
  • Buffers hold up to ten values
  • bhead, byank, bsorta

42
Bombilla Instruction Set
  • basic: arithmetic, halt, sensing
  • m-class: access message header
  • v-class: 16-word heap access
  • j-class: two jump instructions
  • x-class: pushc

43
Enhanced Features of Bombilla
  • Capsule Injector: programming environment
  • Synchronization: 16-word shared heap; locking
    scheme
  • Provides a synchronization model: handlers,
    invocations, resources, scheduling points,
    sequences
  • Resource management: prevents deadlock
  • Random and selective capsule forwarding
  • Error state

44
Discussion
  • Compared to the traditional VM concept, is Maté
    platform independent? Can we have it run on
    heterogeneous hardware?
  • Security issues
  • How can we trust the received capsule? Is there
    a way to prevent a version-number race with an
    adversary?
  • In viral programming, is there a way to forward
    messages other than flooding? After a certain
    number of nodes are infected by a new version of a
    capsule, can we forward based on need?
  • Bombilla has some sophisticated OS features. What
    is the size of the program? Does a sensor node
    need all those features?

45
Contiki
  • Dynamic loading of programs (vs. static)
  • Multi-threaded concurrency & managed execution
    (in addition to event-driven)
  • Available on MSP430, AVR, HC12, Z80, 6502, x86,
    ...
  • Simulation environment available for
    BSD/Linux/Windows

46
Key ideas
  • Dynamic loading of programs
  • Selective reprogramming
  • Static vs dynamic linking
  • Concurrency management mechanisms
  • Events and threads
  • Trade-offs: preemption, size

47
Contiki size (bytes)
  Module                    Code (AVR)   Code (MSP430)   RAM
  Kernel                    1044         810             10 + e + p
  Program loader            -            658             8
  Multi-threading library   678          582             8 + s
  Timer library             90           60              0
  Memory manager            226          170             0
  Event log replicator      1934         1656            200
  µIP TCP/IP stack          5218         4146            18 + b

  (e, p, s, and b denote RAM that grows with the event queue,
  process table, per-thread stacks, and packet buffer,
  respectively.)
48
Loadable programs
  • One-way dependencies
  • Core resident in memory
  • Language run-time, communication
  • Programs know the core
  • Statically linked against core
  • Individual programs can be loaded/unloaded

49
Loadable programs (contd.)
  • Programs can be loaded from anywhere
  • Radio (multi-hop, single-hop), EEPROM, etc.
  • During software development, usually change only
    one module
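
For concreteness, a loadable program is an ordinary process
compiled against the core; the canonical Contiki hello-world has
this shape (process macros as in the public Contiki sources,
which postdate parts of these slides):

#include "contiki.h"
#include <stdio.h>

PROCESS(hello_world_process, "Hello world");
AUTOSTART_PROCESSES(&hello_world_process);

PROCESS_THREAD(hello_world_process, ev, data)
{
  PROCESS_BEGIN();
  printf("Hello, world\n");
  PROCESS_END();
}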

50
How well does it work?
  • Works well
  • Program typically much smaller than the entire
    system image (1-10%)
  • Much quicker to transfer over the radio
  • Reprogramming takes seconds
  • Static linking can be a problem
  • Small differences in the core mean a module
    cannot be run
  • We are implementing a dynamic linker

51
Revisiting Multi-threaded Computation
(Figure: several threads running on top of the kernel.)
  • Threads blocked, waiting for events
  • Kernel unblocks threads when event occurs
  • Thread runs until next blocking statement
  • Each thread requires its own stack
  • Larger memory usage

52
Event-driven vs multi-threaded
  • Multi-threaded
  • + wait() statements
  • + Preemption possible
  • + Sequential code flow
  • - Larger code overhead
  • - Locking problematic
  • - Larger memory requirements
  • Event-driven
  • - No wait() statements
  • - No preemption
  • - State machines
  • + Compact code
  • + Locking less of a problem
  • + Memory efficient

How to combine them?
53
Contiki event-based kernel with threads
  • Kernel is event-based
  • Most programs run directly on top of the kernel
  • Multi-threading implemented as a library
  • Threads only used if explicitly needed
  • Long running computations, ...
  • Preemption possible
  • Responsive system even while computations run
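
A sketch using the multi-threading library's mt_start, mt_exec,
and mt_yield calls (names as in the Contiki sources; treat the
details as indicative):

#include "sys/mt.h"

static struct mt_thread worker;

/* A long-running computation in its own thread keeps the event
   kernel responsive; mt_yield() is a scheduling point. */
static void crunch(void *data)
{
  long i;
  for (i = 0; i < 1000000; i++) {
    /* ... heavy work ... */
    if ((i & 1023) == 0)
      mt_yield();
  }
}

void start_worker(void)   /* called from an event handler */
{
  mt_start(&worker, crunch, NULL);
  mt_exec(&worker);       /* run until it yields or exits */
}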

54
Responsiveness
55
Threads implemented atop an event-based kernel
(Figure: events and preemptive threads dispatched by the
event-based kernel.)
56
Implementing preemptive threads 1
(Figure: thread and event handler.)
57
Implementing preemptive threads 2
(Figure: event handler and yield().)
58
Memory management
  • Memory allocated when module is loaded
  • Both ROM and RAM
  • Fixed block memory allocator
  • Code relocation made by module loader
  • Exercises flash ROM evenly

59
Protothreads: light-weight stackless threads
  • Protothreads: a mixture between event-driven and
    threaded
  • A third concurrency mechanism
  • Allows blocked waiting
  • Requires no per-thread stack
  • Each protothread runs inside a single C function
  • 2 bytes of per-protothread state
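
A minimal example in the style of the protothreads distribution;
struct pt holds the two bytes of state (a stored continuation
point):

#include "pt.h"

static struct pt consumer_pt;  /* 2 bytes of protothread state */
static int counter;

/* Runs inside one C function; locals do not survive a block, so
   persistent data must live in static or global storage. */
static PT_THREAD(consumer(struct pt *pt))
{
  PT_BEGIN(pt);
  while (1) {
    PT_WAIT_UNTIL(pt, counter > 0);  /* blocked waiting, no stack */
    counter--;
  }
  PT_END(pt);
}

void poll_consumer(void)  /* driven repeatedly from the event loop */
{
  consumer(&consumer_pt);
}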