A Framework to Evaluate Intelligent Environments

Transcript and Presenter's Notes

1
A Framework to Evaluate Intelligent Environments
  • Chao Chen

Supervisor: Dr. Sumi Helal
Mobile and Pervasive Computing Lab, CISE Department
April 21, 2007
2
Motivation
  • Mark Weiser's Vision
  • "The most profound technologies are those that
    disappear. They weave themselves into the fabric
    of everyday life until they are indistinguishable
    from it." (Scientific American, 1991)
  • An increasing number of deployments in the past 16
    years
  • Lab: Gaia, Gator Tech Smart House, Aware Home,
    etc.
  • Real world: iHospital
  • The Big Question
  • Are we there yet?
  • Our research community needs a ruler
  • quantitative metrics, a benchmark (suite), a
    common set of scenarios...

3
Conventional Performance Evaluation
  • Performance evaluation is not a new idea
  • Evaluation parameters
  • System throughput, transmission rate, response
    time, ...
  • Evaluation approaches
  • Test bed
  • Simulation / Emulation
  • Theoretical model (queueing theory, Petri nets,
    Markov chains, Monte Carlo simulation); see the
    sketch below
  • Evaluation tools
  • Performance monitoring: MetaSim Tracer (memory),
    PAPI, HPCToolkit, Sigma (memory), DPOMP
    (OpenMP), mpiP, gprof, psrun, ...
  • Modeling/analysis/prediction: MetaSim Convolver
    (memory), DIMEMAS (network), SvPablo
    (scalability), Paradyn, Sigma, ...
  • Runtime adaptation: ActiveHarmony, SALSA
  • Simulation: ns-2 (network), netwiser (network), ...
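To make the "theoretical model" approach concrete, here is a minimal sketch, not part of the original slides, that computes the mean response time of a single service using the closed-form M/M/1 queueing formula and cross-checks it with a small Monte Carlo simulation; the arrival and service rates are made-up values.

```python
import random

# M/M/1 queue: Poisson arrivals at rate lam (req/s), exponential service at rate mu.
# Closed-form mean response time: W = 1 / (mu - lam), valid only when lam < mu.
def mm1_mean_response(lam: float, mu: float) -> float:
    assert lam < mu, "system must be stable (utilization < 1)"
    return 1.0 / (mu - lam)

# Monte Carlo cross-check: push n jobs through a single FIFO server.
def simulate_mm1(lam: float, mu: float, n: int = 100_000, seed: int = 1) -> float:
    rng = random.Random(seed)
    arrival = 0.0          # arrival time of the current job
    server_free = 0.0      # time at which the server becomes idle
    total_response = 0.0
    for _ in range(n):
        arrival += rng.expovariate(lam)            # exponential inter-arrival times
        start = max(arrival, server_free)          # wait if the server is busy
        server_free = start + rng.expovariate(mu)  # exponential service time
        total_response += server_free - arrival    # response = completion - arrival
    return total_response / n

if __name__ == "__main__":
    lam, mu = 8.0, 10.0   # hypothetical request and service rates
    print("analytical mean response:", mm1_mean_response(lam, mu))  # 0.5 s
    print("simulated  mean response:", simulate_mm1(lam, mu))       # ~0.5 s
```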

4
All déjà vu again?
  • When it comes to pervasive computing, questions
    emerge
  • The same set of parameters?
  • Are conventional tools sufficient?
  • I have tons of performance data; now what?
  • It is not feasible to bluntly apply conventional
    evaluation methods for hardware, databases, or
    distributed systems to pervasive computing
    systems.
  • Pervasive computing systems are heterogeneous,
    dynamic, and heavily context-dependent.
    Evaluating PerCom systems requires new thinking.

5
Related work
  • Performance evaluations in related areas
  • Atlas, University of Florida. Metric:
    scalability (memory usage / number of sensors)
  • one.world, University of Washington. Metric:
    throughput (tuples / time, tuples / senders)
  • PICO, University of Texas at Arlington. Metric:
    latency (webcast latency / duration)

We are measuring different things, applying
different metrics, and evaluating systems of
different architectures (see the sketch below).
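To show how differently shaped these metrics are, the sketch below computes each one from raw measurements; the function names and input values are this transcript's own illustrations, not taken from Atlas, one.world, or PICO.

```python
from typing import Sequence

# Atlas-style scalability: average memory growth per added sensor.
def memory_per_sensor(mem_bytes: Sequence[float], sensors: Sequence[int]) -> float:
    return (mem_bytes[-1] - mem_bytes[0]) / (sensors[-1] - sensors[0])

# one.world-style throughput: tuples delivered per unit time.
def tuples_per_second(tuples: int, seconds: float) -> float:
    return tuples / seconds

# PICO-style latency: webcast latency relative to session duration.
def relative_latency(latency_s: float, duration_s: float) -> float:
    return latency_s / duration_s

print(memory_per_sensor([10e6, 18e6, 26e6], [10, 20, 30]))  # bytes per sensor
print(tuples_per_second(120_000, 60.0))                     # tuples per second
print(relative_latency(2.5, 300.0))                         # dimensionless ratio
```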
6
Challenges
  • Pervasive computing systems are diverse.
  • Performance metrics: a panacea for all?
  • Taxonomy: a classification of PerCom systems.

7
Taxonomy
Performance Factors
  • Scalability
  • Heterogeneity
  • Consistency / Coherency
  • Communication cost / performance
  • Resource constraints
  • Energy
  • Size / Weight
  • Responsiveness
  • Throughput
  • Transmission rate
  • Failure rate
  • Availability
  • Safety
  • Privacy / Trust
  • Context Sentience
  • Quality of context
  • User intention prediction

Systems Perspective
  • Centralized / Distributed
  • Stationary / Mobile
  • Mission-critical / Auxiliary / Remedial
    (application domain)

Users Perspective
  • Proactive / Reactive (user-interactivity)
  • Body-area / Building / Urban computing
    (geographic span)
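One way to make the taxonomy operational is to encode each axis as an enumeration and classify a system as a point in that space. The sketch below is purely illustrative: the enum names, field names, and the example classification are assumptions made for this transcript, not the authors' definitions.

```python
from dataclasses import dataclass
from enum import Enum

# One Enum per taxonomy axis (names chosen for this sketch).
Topology      = Enum("Topology", "CENTRALIZED DISTRIBUTED")
Mobility      = Enum("Mobility", "STATIONARY MOBILE")
Domain        = Enum("Domain", "MISSION_CRITICAL AUXILIARY REMEDIAL")
Interactivity = Enum("Interactivity", "PROACTIVE REACTIVE")
Span          = Enum("Span", "BODY_AREA BUILDING URBAN")

@dataclass(frozen=True)
class PerComSystem:
    """A pervasive computing system located along the taxonomy's axes."""
    name: str
    topology: Topology
    mobility: Mobility
    domain: Domain
    interactivity: Interactivity
    span: Span

# Purely illustrative classification of one deployment named earlier.
example = PerComSystem("Gator Tech Smart House",
                       Topology.CENTRALIZED, Mobility.STATIONARY,
                       Domain.AUXILIARY, Interactivity.PROACTIVE,
                       Span.BUILDING)
print(example.name, "->", example.span.name)
```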
8
Outline
  • Taxonomy
  • Common Set of Scenarios
  • Evaluation Metrics

9
A Common Set of Scenarios
  • Re-defining research goals
  • There is a variety of understandings and
    interpretations of pervasive computing
  • What researchers design may not be exactly what
    users expect
  • Evaluating pervasive computing systems is a
    two-step process
  • Are we building the right thing? (Validation)
  • Are we building things right? (Verification)
  • A common set of scenarios defines
  • the capacities a PerCom system should have
  • the parameters to be examined when evaluating how
    well these capacities are achieved.

10
Common Set of Scenarios
  • Setting: Smart House
  • Scenario
  • Plasma display burnt out
  • System capabilities
  • Service composability
  • Fault resilience
  • Heterogeneity compliance
  • Performance parameters
  • Failure rate
  • Availability
  • Recovery time

11
Common Set of Scenarios
  • Setting: Smart Office
  • Scenario
  • Real-time location tracking
  • System overload
  • Location prediction
  • System capabilities
  • Adaptivity
  • Proactivity
  • Context sentience
  • Performance parameters
  • Scalability
  • Quality of Context (freshness, precision)
  • Prediction rate
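Both scenario slides follow the same template (setting, scenario events, system capabilities, performance parameters), so the common set of scenarios could be written down as machine-readable benchmark entries. The sketch below shows one possible encoding; the class and field names are invented for illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Scenario:
    """One entry of the common set of scenarios (field names are illustrative)."""
    setting: str
    events: List[str]
    capabilities: List[str]
    parameters: List[str]

SCENARIOS = [
    Scenario(setting="Smart House",
             events=["plasma display burns out"],
             capabilities=["service composability", "fault resilience",
                           "heterogeneity compliance"],
             parameters=["failure rate", "availability", "recovery time"]),
    Scenario(setting="Smart Office",
             events=["real-time location tracking", "system overload",
                     "location prediction"],
             capabilities=["adaptivity", "proactivity", "context sentience"],
             parameters=["scalability", "quality of context (freshness, precision)",
                         "prediction rate"]),
]

for s in SCENARIOS:
    print(f"{s.setting}: exercise {', '.join(s.capabilities)}; "
          f"measure {', '.join(s.parameters)}")
```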

12
Parameters
  • The taxonomy and the common set of scenarios
    enable us to identify performance parameters.
  • Observations
  • Quantifiable vs. non-quantifiable parameters
  • Parameters do not contribute equally to overall
    performance
  • Performance metrics
  • Quantifiable parameters: measurement
  • Non-quantifiable parameters: analysis / testing
  • Parameters may carry different weights (see the
    worked example below).
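As a worked illustration of the weighting idea, the sketch below folds normalized quantifiable parameters into a single weighted score; the parameter choices, normalized values, and weights are all made up for the example.

```python
# Hypothetical normalized scores in [0, 1] (1 = best) and illustrative weights.
scores  = {"responsiveness": 0.8, "availability": 0.95, "scalability": 0.6}
weights = {"responsiveness": 0.5, "availability": 0.3, "scalability": 0.2}

# Overall score = sum_i w_i * s_i, with the weights summing to 1.
overall = sum(weights[p] * scores[p] for p in scores)
print(f"overall weighted score: {overall:.3f}")  # 0.5*0.8 + 0.3*0.95 + 0.2*0.6 = 0.805
```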

13
(No Transcript)
14
Conclusion and Future Work
  • Contributions
  • developed a taxonomy of existing pervasive
    computing systems
  • proposed a set of common scenarios as an
    evaluation benchmark
  • identified the evaluation metrics (a set of
    parameters) for pervasive computing systems.
  • With the performance parameters listed, can we
    evaluate/measure them? How?
  • Test bed (empirical)
  • + real measurements
  • - expensive, difficult to set up/maintain,
    difficult to replay
  • Simulation / Emulation (empirical)
  • + reduced cost, quick set-up, consistent replay,
    safe
  • - not reality; needs modeling and validation
  • Theoretical model (analytical): abstraction of the
    pervasive space at a higher level
15
Thank you!