Dynamically Controllable Applications for Wireless Sensor Networks


1
Dynamically Controllable Applications for
Wireless Sensor Networks
Sriram Rajan Masters Thesis Presentation Advisor
Dr. Georgios Lazarou Department of Electrical
and Computer Engineering December 9th, 2005
2
Agenda
  • Introduction to Wireless Sensor Networks
  • Requirements for Designing Dynamically
    Controllable Applications
  • Sensor Network Programming
  • Related Work
  • Research Contribution
  • Results, Conclusion, and Future work

3
  • Introduction to Wireless Sensor Networks (Slide 1
    of 3)
  • Why Sensor Networks?
  • Autonomous early-warning systems [1] for
    tsunamis can be developed
  • Small size, can be deployed right out of the box
  • Sensor node or Mote refers to a combination
    of the following items
  • Sensor (reports events)
  • Processor (stores the Mote ID, processes
    readings, provides encryption to sensor messages)
  • Radio (Send, Receive Messages)
  • Split-phase operations are used for data events
  • Request event
  • Continue executing other events or tasks until
    data is received
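The split-phase pattern above can be sketched in plain Python (illustrative only, not TinyOS/nesC code; the class and method names are assumptions): a read request returns immediately, and the data arrives later through a separate completion event.

```python
# Sketch of split-phase sensing: request_read() returns at once,
# and the dataReady-style callback fires later when data is ready.
class SplitPhaseSensor:
    def __init__(self):
        self.pending = []          # queued completion callbacks

    def request_read(self, callback):
        # Phase 1: issue the request and return immediately (no blocking)
        self.pending.append(callback)
        return "SUCCESS"

    def hardware_tick(self, value):
        # Phase 2: the hardware finishes; signal the completion event
        if self.pending:
            self.pending.pop(0)(value)

readings = []
sensor = SplitPhaseSensor()
sensor.request_read(readings.append)   # returns immediately
# ... the mote keeps executing other events or tasks here ...
sensor.hardware_tick(42)               # later: the data event delivers 42
```

Because a mote runs a single thread of control, this non-blocking style lets it service other events while the slow sensor hardware completes.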

4
  • Introduction to Wireless Sensor Networks (Slide 2
    of 3)
  • Base Station
  • Has external power supply
  • Can program the base node
  • Used with base node to communicate with the
    Wireless Sensor Network (WSN)
  • Base Node
  • Node connected to Base Station
  • Provides processing capability to the Base
    Station
  • Can send/receive radio signals to/from the WSN

5
  • Introduction to Wireless Sensor Networks (Slide 3
    of 3)
  • Communication Interface
  • Link between Host Computer and Base Station
  • Links are possible via LAN, Internet and Serial
    mediums
  • Host Computer
  • Communicates with the WSN via the communication
    interface.
  • Sensor Node
  • Robust sensor devices
  • Capable of multi-hop communication

6
Problem Statement
  • The main requirements for implementing
    dynamically controllable applications are [7]
  • Propagation: the data/control messages must
    reach all of the nodes in the network
  • Lifetime of the network: the data traffic in the
    sensor network must be minimal for effective
    communication within the sensor network
  • Efficient power consumption: any implementation
    must consider the limited power availability and
    must restrict the number of transmissions for the
    protocol

7
  • Motivation: Why work on application
    optimization?

Efficient sensor network applications can be
designed [6,9] that have better fault tolerance
to sensor application failures
  • The approach of executing multiple operations
    dynamically can be exploited to design
    controllable applications
  • Low cost of nodes and low maintenance
    requirements allow for easy implementation and
    rapid deployment

8
  • Summary of This Work
  • Designed and developed a novel scheme for TinyOS
    application design that attains the following
    objectives
  • Interchangeable modules at run-time
  • Controllable frequency of sensor operations, such
    as transmitting readings or performing actions at
    regular intervals
  • Ability to control the mote operation by
    terminating or resetting a mote to a particular
    state during operation
  • Our scheme has been verified to work in TinyOS
    and offers significant time and performance
    improvements.

9
  • TinyOS Elements
  • Why TinyOS for WSN?
  • Modular nature
  • Extensive support of platforms
  • Large user base
  • Example: the Calamari application
  • TinyOS Components
  • A component stores the state of the system and
    consists of interfaces and events
  • Events are triggered by commands, signals, or
    other events. Example event: Timer.fired()
  • Sending messages is achieved using split-phase
    operations or using an event that calls an
    appropriate task.

Sample TinyOS Component [2]
10
  • Application framework
  • Application
  • Combination of software and hardware; can also
    provide routing information. Example: send a
    message only if light intensity > 4
  • Messaging component
  • Messaging functions to upper level components
  • Event handlers for lower-level components
  • Packet, Byte, Bit
  • Software/hardware components that deal with
    packet-level, byte-level, and bit-level processing

Sample Application Framework [3]
11
  • TinyOS Concurrency Model [2]
  • Concurrency implemented by using separate
    procedures
  • Events (hardware interrupt handlers) and tasks
  • Events preempt other events and tasks
  • Statements marked atomic and events declared
    with the keyword async are never interrupted
  • Event failures are a common occurrence when the
    sensor is busy with another event that cannot be
    interrupted
  • The scheduler component runs these tasks in FIFO
    order, unless interrupted.
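A minimal Python sketch of this concurrency model (illustrative names, not TinyOS code): an event runs to completion and posts follow-up work as a task, and the scheduler then drains the task queue in FIFO order.

```python
# Sketch of the TinyOS-style scheduler: events (interrupt handlers)
# run first and post tasks; posted tasks run later in FIFO order.
from collections import deque

task_queue = deque()
log = []

def post(task):
    task_queue.append(task)            # tasks are queued, not run at once

def event_timer_fired():
    log.append("event")                # the event handler runs to completion
    post(lambda: log.append("task"))   # deferred work is posted as a task

def run_scheduler():
    while task_queue:
        task_queue.popleft()()         # one task at a time, FIFO order

event_timer_fired()
run_scheduler()
```

The event's entry is logged before the task's, mirroring how events preempt and tasks wait their turn in the queue.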

12
  • Related Work
  • The Simplex architecture has also been used for
    reliable upgrade of communication software [4]
  • An arbiter is used to switch between two
    working module implementations
  • First, the reliable module and the experimental
    module are executed in parallel
  • If the error is within the acceptable threshold,
    the experimental module's output is used
  • Otherwise, the reliable module's output is used

Simplex Architecture [4, 5]
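The arbiter's decision rule can be sketched as follows (a simplified illustration of the Simplex idea in [4, 5]; the threshold value and function name are assumptions, not taken from the thesis):

```python
# Sketch of a Simplex-style arbiter: both modules run in parallel and
# produce an output; the experimental output is used only while its
# deviation from the reliable output stays within a threshold.
THRESHOLD = 0.5   # illustrative acceptance threshold

def arbiter(reliable_out, experimental_out):
    error = abs(experimental_out - reliable_out)
    if error <= THRESHOLD:
        return experimental_out   # experimental module is trusted
    return reliable_out           # fall back to the reliable module
```

The reliable module thus acts as a safety net: the system can always fall back to a known-good implementation if the experimental one misbehaves.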
13
  • Limitations of related work and solutions

Limitations
  • Processor overload as a result of the upgrade
  • Upgrade protocols do not meet the static
    programming requirement in TinyOS
  • Upgrades result in significant sensor
    application downtime

Solutions from this research
  • Reduce the overload involved in the upgrade
    [9, 10]
  • Dynamically Controllable Application (DCA), a
    technique designed in TinyOS to achieve dynamic
  • State changes
  • Control of the frequency of transmission and
    other events (variable changes)
  • Functionality changes
  • Sensor operation remains unaffected in DCA

14
  • Our Dynamically Controllable Application (DCA)
    Scheme (Slide 1 of 4)
  • Triggers an event after receiving a message from
    the base mote
  • There is a high probability of the event
    interrupting the application's operation
    (otherwise, the event is triggered again)
  • State control
  • Identical to frequency control, but with
    additional variable settings and declarations

Block Diagram of the DCA scheme
15
  • Our Dynamically Controllable Application (DCA)
    Scheme (Slide 2 of 4)
  • Frequency Control
  • Sensor applications typically use periodic
    logging or other sensor data reporting functions.
  • Module Control
  • The hypothesis that a module change could be
    achieved using DCA needed to be verified
  • A module control component has therefore been
    included in the DCA mechanism

Block Diagram of the DCA scheme
16
  • Our Dynamically Controllable Application (DCA)
    Scheme (Slide 3 of 4)
  • Component and Messaging Interface
  • Uses message types provided by TinyOS to trigger
    appropriate events
  • Sensor Hardware and Radio Interface
  • Provides a medium to communicate

Block Diagram of the DCA scheme
17
  • Our Dynamically Controllable Application (DCA)
    Scheme (Slide 4 of 4)

DCA Message Structure
DCA Application Framework
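As a rough illustration of what a DCA command message might carry (the actual field layout appears in the slide's figure; these field names and command strings are assumptions):

```python
# Sketch of a DCA command message sent from the base mote to a node.
from dataclasses import dataclass

@dataclass
class DCAMessage:
    dest_id: int    # target mote id
    command: str    # e.g. "SET_FREQ", "RESET", "SWITCH_MODULE" (assumed names)
    value: int      # command argument, e.g. a new period in milliseconds

# A base-station command asking mote 1 to change its reporting frequency
msg = DCAMessage(dest_id=1, command="SET_FREQ", value=500)
```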
18
  • Working of DCA Scheme (Slide 1 of 3)
  • Frequency and State changes
  • Application Design scheme that allows for
  • Frequency changes
  • Resets
  • Changes are achieved when events are triggered
  • An event is triggered when command messages are
    received
  • Bypasses the static redesign requirement of the
    original application

Flow chart for DCA frequency/reset change
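The frequency/reset flow can be sketched in Python (illustrative only, not the thesis implementation; command and attribute names are assumptions): a received command message triggers an event that rewrites the timer period or resets the mote's state, with no recompilation.

```python
# Sketch of the DCA frequency/reset mechanism on a single mote.
class DCAMote:
    def __init__(self):
        self.period_ms = 1000   # default reporting frequency
        self.state = 1          # State 1 = initial/reset state

    def receive(self, msg):
        # Event triggered when a command message arrives from the base mote
        if msg["cmd"] == "SET_FREQ":
            self.period_ms = msg["value"]   # frequency change, no redesign
        elif msg["cmd"] == "RESET":
            self.state = 1                  # shift back to State 1

mote = DCAMote()
mote.receive({"cmd": "SET_FREQ", "value": 250})   # command from base mote
```

The same dispatch point handles all command types, which is what lets the application bypass the static redesign that an ordinary TinyOS change would require.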
19
  • Working of DCA Scheme (Slide 2 of 3)
  • Module Changes
  • Implemented using the DCA scheme
  • DCA allows direct communication between the
    modules and radio/messaging components
  • Uses start/stop commands to control the active
    module
  • Original Implementation (Without DCA)
  • Separate modules would need to be declared and
    used simultaneously

20
Flow chart for DCA Module change
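A Python sketch of the module-change idea (illustrative; the thesis implements this in TinyOS): both COTS modules are wired in at design time, and start/stop commands select which one is active at run time.

```python
# Sketch of a DCA module change using start/stop control.
class Module:
    def __init__(self, name):
        self.name = name
        self.running = False

    def start(self):
        self.running = True

    def stop(self):
        self.running = False

# Both modules are included at design time, as DCA requires
radio = Module("COTS_Module_A")   # e.g. send sensor output over radio
led = Module("COTS_Module_B")     # e.g. direct output to the LEDs

active = radio
active.start()

def switch_module(current, replacement):
    # Stop the active module before starting its replacement, so no
    # stray signals are transmitted during the transition
    current.stop()
    replacement.start()
    return replacement

active = switch_module(radio, led)   # run-time module change
```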
21
  • Example for designing DCA applications

Original Application
Applications would need to be redesigned,
recompiled, and loaded to the sensor nodes in
order to implement this change in functionality.
Application using DCA
Applications that use the DCA scheme can easily
change from one Common Off the Shelf (COTS)
component (COTS_Module_A) to another
(COTS_Module_B) provided these were included in
the design phase.
22
  • Experimental Setup
  • Deployment
  • Final stage of application implementation on
    Field motes
  • Performed after repeated tests on the simulator
    and hardware to meet the loss threshold
  • Design or Development
  • Failures must be anticipated during the design of
    the application and debug messages provided for
    evaluation.
  • Several areas of the code can be tested
    simultaneously.
  • Experimentation
  • Experimentation is done with Hardware motes and
    Simulation iteratively to isolate errors
  • Testing is performed to determine if application
    is suitable for deployment

23
  • Comparison of Hardware and Software Experimental
    setup

Hardware
  • Requires the MessageCenter or similar
    application to communicate with base node.
  • Uses the base node to communicate with other
    nodes
  • Metrics used are primarily verification and time
    to completion

Software
  • The MessageCenter application can be used,
    though communication is also possible directly
    via TOSSIM [8]
  • The simulator can directly inject messages to any
    node
  • Verification, Time to completion, Code
    comparisons and Losses can be determined.

24
  • Validation and Testing Metrics
  • Time to Completion
  • Verification of operation
  • Comparison of results with related work
  • Testing for fault conformance

25
  • Cases for Testing and Validation

26
  • Case 1 Software reset implementation (Slide 1 of
    2)

Setup
  • 10 sets of experiments were performed on the
    hardware setup with varying node startup
    sequences.
  • A final set of experiments was performed in
    simulation.
  • Motes are numbered (Mote id) 1 to 4
  • Initially all motes have their respective yellow
    LEDs blinking in State 1.
  • Upon receiving the sync message, each node waits
    for a random amount of time based on its node id,
    then the traffic light operation is initiated.
  • The reset signal shifts all nodes to State 1 for
    re-synchronization.

27
  • Case 1 Software reset implementation (Slide 2 of
    2)

Results
Verification of Hardware/ Simulation operation
  • Hardware and software experiments have verified
    the operation and have obtained the time for
    completion of operation.
  • In a few cases of the hardware setup,
    synchronization was not achieved without the
    reset signal.

28
  • Case 2 Frequency change application comparison

Setup
  • A readily available application that transmits
    sensor data via radio (RFM) was used
  • Application functionality was verified in both
    hardware and software implementations
  • Two sets of experiments were performed, on
    hardware and in software simulation separately
  • Initially the node is started with its default
    frequency
  • The frequency is later changed by sending a
    message from the base node during its operation.

Results
Comparison of Case 2 with related mechanisms
29
  • Case 3 Module change implementation (Slide 1 of
    2)

Setup
  • An identical setup was used for the hardware
    and simulation experiments.
  • The nodes were started with one module (radio for
    sending sensor output signals) and the output was
    directed to another component (led) after a
    random wait.
  • The network was monitored for any transmission of
    stray or unwanted signals from the nodes to
    determine any failures in module transition.

Hypothesis
  • Two or more modules can be dynamically replaced
    in an application
  • This operation can be easily accomplished by
    combining two readily available components
    (Common Off the Shelf or COTS)

30
  • Case 3 Module change implementation (Slide 2 of
    2)

Results
  • The change in hardware functionality from one
    module to another was observed instantly.
  • The simulation (TOSSIM) readings also verified
    the change of functionality.
  • The power requirements were linear and increased
    steadily with time across all sensor components.

Energy consumption in Millijoules for
implementing and completing Module change
operation
31
  • Case 3 Module change operation comparison
  • The upgrade protocol in [9, 10] was evaluated
    for the following metrics
  • Time to complete operation
  • Varies with the size of the network.
  • Average of 1 minute.
  • In contrast, the entire DCA setup took only
    20 seconds.
  • Downtime
  • Based on experimental observation, the sensor
    is out of normal operation for up to 10 seconds
    during the upgrade process.
  • However, the DCA application was successful in
    implementing the module change without affecting
    the sensor operation.

32
  • Discussion of Results
  • A DCA scheme can be effectively implemented and
    tested using any TinyOS application.
  • The energy consumption is linear and increases
    steadily with time.
  • The operation of the sensor application remains
    unaffected.
  • The time to completion for DCA is less than
    that of the upgrade protocols [9, 10].

33
  • Conclusions
  • A new scheme (DCA) has been developed
  • Allows the user to bypass the static programming
    requirement in TinyOS platform
  • Allows control over the application features,
    during the operation of the sensor network
  • The results indicate that the DCA mechanism
    saves significant time
  • on the order of a few milliseconds
  • compared to the several seconds to minutes
    required for implementing similar changes via
    upgrade protocols

34
  • Future Work
  • In this thesis, losses were constant, depending
    on the network size and irrespective of node
    spacing (2-20 feet), in the experiments (this
    research and in [10])
  • Efficient node loss determination with lossy
    network setup
  • Simulations could be performed around the maximum
    sensor range of 500 feet
  • Could be combined with upgrades to analyze the
    overall performance in terms of losses, time to
    completion and sensor downtime

35
  • References (Slide 1 of 4)

[1] P.E. Ross, "Waiting and Waiting for the Next
Killer Wave", IEEE Spectrum, Volume 42, Issue 3,
Page 17.
[2] TinyOS Tutorial. Available:
http://www.tinyos.net/tinyos-1.x/doc/tutorial/.
[3] Anna Hac, Wireless Sensor Network
Designs, John Wiley and Sons Ltd., 2003, Pages
325-352.
[4] Lui Sha, "Using Simplicity to
Control Complexity", IEEE Software,
July/August 2001.
36
  • References (Slide 2 of 4)

[5] P.V. Krishnan, L. Sha, K. Mechitov, "Reliable
Upgrade of Group Communication Software in Sensor
Networks", Proceedings of the First IEEE
International Workshop on Sensor Network
Protocols and Applications, May 2003.
[6] Chalermek Intanagonwiwat, Ramesh Govindan,
Deborah Estrin, John Heidemann, and Fabio Silva,
"Directed Diffusion for Wireless Sensor
Networking", IEEE/ACM Transactions on Networking,
Vol. 11, No. 1, February 2003.
[7] Thanos Stathopoulos, John Heidemann, Deborah
Estrin, "A Remote Code Update Mechanism for
Wireless Sensor Networks". Available:
http://lecs.cs.ucla.edu/thanos/moap-draft.pdf
37
  • References (Slide 3 of 4)

[8] Philip Levis, Nelson Lee, Matt Welsh, David
Culler, "TOSSIM: Accurate and Scalable Simulation
of Entire TinyOS Applications", Proceedings of
the 1st International Conference on Embedded
Networked Sensor Systems, November 2003.
[9] Jaein Jeong and David Culler, "Incremental
Network Programming for Wireless Sensors", IEEE
SECON 2004: 1st Annual IEEE Communications Society
Conference on Sensor and Ad Hoc Communications
and Networks, October 2004, Pages 25-33.
38
  • References (Slide 4 of 4)

[10] Adam Chlipala, Jonathan Hui and Gilman
Tolle, "Deluge: Data Dissemination in Multi-Hop
Sensor Networks", UC Berkeley CS294-1 Project
Report, December 2003. Available:
http://www.cs.berkeley.edu/jwhui/research/projects/deluge/deluge_poster.ppt
39
  • Acknowledgements
  • Dr. Lazarou for motivating me to pursue research
    and training me to work aggressively on research
    issues
  • Mr. Hannigan for his encouragement, support, and
    motivation both as an employer and as a friend
  • My thesis committee members Dr. Chu and Dr.
    Philip for patiently reviewing my thesis in a
    short time. They also encouraged me in my
    course work and motivated me to attain higher
    grades
  • Still searching for words to thank Arun
    Ramakrishnan, Ashwini Mani, and Gaurav Marwah.
    Their assistance in patiently reviewing several
    revisions of my thesis helped me a lot
  • Thanks to Sai Bushan for helping me get started
    and keep going with Python scripting
  • Thanks to Shivakumar, Sridhar, Marshall Crocker,
    Sanjay Patil, Sai Bushan for assisting and
    motivating me towards my defense
  • A special thanks to Aravindh Ravichandran, Ezhil
    Nachiappan and Ekta Mathur for their
    understanding and assistance.
  • Worldwide TinyOS community for their timely
    support in various programming issues

40
  • Questions?

41
  • Appendix
  • To purchase hardware for programming sensor
    networks, several options are available
  • http://www.xbow.com
  • http://www.zigbee.net
  • Links to sample TinyOS code
  • Calamari Application
  • http://www.cs.berkeley.edu/kamin/localization.html
  • TinyOS overview
  • http://shamir.eas.asu.edu/mcn/cse494sp05/TinyOS.ppt

42
  • Program of Study