Agent approaches to Security, Trust and Privacy in Pervasive Computing (presentation transcript)

1
Agent approaches to Security, Trust and Privacy
in Pervasive Computing
  • Anupam Joshi
  • joshi@cs.umbc.edu
  • http://www.cs.umbc.edu/~joshi/

2
The Vision
  • Pervasive Computing: a natural extension of the
    present human computing lifestyle
  • Using computing technologies will be as natural
    as using other non-computing technologies (e.g.,
    pen, paper, and cups)
  • Computing services will be available anytime and
    anywhere.

3
Pervasive Computing
  • "The most profound technologies are those that
    disappear. They weave themselves into the fabric
    of everyday life until they are indistinguishable
    from it." -- Mark Weiser
  • Think writing, central heating, electric
    lighting, ...
  • Not taking your laptop to the beach, or
    immersing yourself in a virtual reality

4
Today: Life is Good.
5
Tomorrow: We Got Problems!
6
Yesterday: Gadget Rules
7
Today: Communication Rules
8
Tomorrow: Services Will Rule
Thank God! Pervasive Computing is here
9
The Brave New World
  • Devices: increasingly more powerful, smaller,
    cheaper
  • People interact daily with hundreds of computing
    devices (many of them mobile)
  • Cars
  • Desktops/Laptops
  • Cell phones
  • PDAs
  • MP3 players
  • Transportation passes
  • ⇒ Computing is becoming pervasive

10
Securing Data Services
  • Security is critical because in many pervasive
    applications, we interact with agents that are
    not in our home or office environment.
  • Much of the work in security for distributed
    systems is not directly applicable to pervasive
    environments
  • Need to build analogs to trust and reputation
    relationships in human societies
  • Need to worry about privacy!

11
Security Challenges
  • Example 1

[Figure: ABC Industries Inc., with offices in New York, LA, and Baltimore]
What if someone from the New York office visits
the LA office? How are his rights to access
resources in the LA office decided?
A company-wide directory? It needs dynamic updating
by a sysadmin, violates the minimality principle of
security, and is not scalable.
12
Security Challenges
  • Example 2

[Figure: a visitor from XYZ Inc. (Seattle) requests access rights at ABC Industries Inc. (Baltimore)]
How does the ABC system decide what rights to
give a consultant from XYZ Inc.?
How does the ABC system decide what rights to
give a manager from XYZ Inc.?
13
Security Challenges
  • Example 2, continued
  • A company directory cannot be used
  • Cross-organizational roles may be meaningless
  • Issues specific to pervasive environments
  • Central access control is not scalable
  • Foreign users or visitors
  • Not possible to store their individual access
    rights
  • The role of policies (see the sketch below)
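A minimal sketch of policy-based access control for visitors, in Python. The credential fields, roles, and granted rights below are illustrative assumptions, not ABC's actual policy:

```python
# Hypothetical sketch: rights are derived from credential attributes via
# policy rules, so no per-visitor ACL entry or directory update is needed.
from dataclasses import dataclass

@dataclass
class Credential:
    issuer: str           # e.g. "XYZ Inc."
    role: str             # e.g. "consultant" or "manager"
    issuer_trusted: bool  # does ABC trust the issuing organization?

# Policy rules: a predicate over the credential, plus the rights it grants.
POLICIES = [
    (lambda c: c.issuer_trusted and c.role == "manager",
     {"printing", "wifi", "meeting-rooms"}),
    (lambda c: c.issuer_trusted and c.role == "consultant",
     {"printing", "wifi"}),
]

def rights_for(cred):
    granted = set()
    for applies, rights in POLICIES:
        if applies(cred):
            granted |= rights
    return granted

print(rights_for(Credential("XYZ Inc.", "consultant", True)))
# {'printing', 'wifi'}
```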

14
An early policy for agents
  • 1 A robot may not injure a human being, or,
    through inaction, allow a human being to come
    to harm.
  • 2 A robot must obey the orders given it by human
    beings except where such orders would conflict
    with the First Law.
  • 3 A robot must protect its own existence as long
    as such protection does not conflict with the
    First or Second Law.
  • -- Handbook of Robotics, 56th Edition, 2058 A.D.

15
On policies, rules and laws
  • The interesting thing about Asimov's laws is
    that robots did not always strictly follow them.
  • This is a point of departure from more
    traditional hard-coded rules like DB access
    control and OS file permissions.
  • For autonomous agents, we need policies that
    describe norms of behavior that they should
    follow to be good citizens.
  • So, it's natural to worry about issues like:
  • When an agent is governed by multiple policies,
    how does it resolve conflicts among them?
  • How can we define penalties when agents don't
    fulfill their obligations?
  • How can we relate notions of trust and reputation
    to policies?

16
The Role of Ontologies
  • We will require shared ontologies to support this
    framework
  • A common ontology to represent basic concepts:
    agents, actions, permissions, obligations,
    prohibitions, delegations, credentials, etc.
  • Appropriate shared ontologies to describe
    classes, properties and roles of people and
    agents, e.g.,
  • any device owned by TimFinin
  • any request from a faculty member at ETZ
  • Ontologies to encode policy rules

17
ad-hoc networking technologies
  • Ad-hoc networking technologies (e.g. Bluetooth)
  • Main characteristics
  • Short range
  • Spontaneous connectivity
  • Free, at least for now
  • Mobile devices
  • Aware of their neighborhood
  • Can discover others in their vicinity
  • Interact with peers in their neighborhood
  • inter-operate and cooperate as needed and as
    desired
  • Both information consumers and providers
  • ⇒ Ad-hoc mobile technology challenges the
    traditional client/server information access model

18
pervasive environment paradigm
  • Pervasive Computing Environment
  • Ad-Hoc mobile connectivity
  • Spontaneous interaction
  • Peers
  • Service/Information consumers and providers
  • Autonomous, adaptive, and proactive
  • Data-intensive, deeply networked environment
  • Everyone can exchange information
  • Data-centric model
  • Some sources generate streams of data, e.g.
    sensors
  • ⇒ Pervasive Computing Environments

19
motivation conference scenario
  • Smart-room infrastructure and personal devices
    can assist an ongoing meeting: data exchange,
    schedulers, etc.

20
imperfect world
  • In a perfect world
  • everything available and done automatically
  • In the real world
  • Limited resources
  • Battery, memory, computation, connection,
    bandwidth
  • Must live with less than perfect results
  • Dumb devices
  • Must explicitly be told What, When, and How
  • Foreign entities and unknown peers
  • So, we really want
  • Smart, autonomous, dynamic, adaptive, and
    proactive methods to handle data and services

21
Securing Ad-Hoc Networks
  • MANETs underlie much of pervasive computing
  • They bring to the fore interesting problems
    related to:
  • Open
  • Dynamic
  • Distributed systems
  • Each node is an independent, autonomous router
  • Has to interact with other nodes, some never seen
    before.
  • How do you detect the bad guys?

22
Network Level Good Neighbor
  • Ad hoc network
  • Node A sends a packet destined for E through B.
  • B and C make a snoop entry (A, E, Ck, B, D, E).
  • B and C check for the snoop entry when the packet
    is forwarded.
  • Detect a misroute (see the sketch below)

[Figure: ad hoc network with nodes A, B, C, D, and E]
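A minimal sketch of the snooping idea, in Python, assuming a simplified packet model (source, destination, checksum, remaining route); the names and fields are illustrative, not the actual protocol state:

```python
# Hypothetical sketch: neighbors overhearing a packet record a snoop entry,
# then verify that the forwarder relays it toward the route's next hop.
snoop_table = {}  # (src, dst, checksum) -> remaining route, e.g. ["B","D","E"]

def on_overheard(src, dst, checksum, route):
    """Record a snoop entry the first time a packet is overheard."""
    snoop_table[(src, dst, checksum)] = list(route)

def on_forwarded(src, dst, checksum, forwarder, next_hop):
    """Check a later transmission of the same packet against the entry."""
    route = snoop_table.get((src, dst, checksum))
    if route is None or forwarder not in route[:-1]:
        return None  # nothing recorded, or forwarder not on the route
    expected = route[route.index(forwarder) + 1]
    if next_hop != expected:
        return f"{forwarder} misrouted: sent to {next_hop}, expected {expected}"
    return None

# Example: A sends to E via B and D; C overhears, then catches B misrouting.
on_overheard("A", "E", 0xC4, ["B", "D", "E"])
print(on_forwarded("A", "E", 0xC4, forwarder="B", next_hop="C"))
```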
23
Good Neighbor
  • No Broadcast
  • Hidden terminal
  • Exposed terminal
  • DSR vs. AODV
  • GLOMOSIM

24
Intrusion Detection
  • Behaviors
  • Selfish
  • Malicious
  • Detection vs. Reactions
  • Shunning bad nodes
  • Cluster voting (see the sketch below)
  • Incentives (Game Theoretic)
  • Colluding nodes
  • Forgiveness
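A minimal sketch of cluster voting with forgiveness, in Python; the quorum, penalty, and decay values are illustrative assumptions, not the simulation's parameters:

```python
# Hypothetical sketch: cluster members vote on a suspect node; suspicion
# decays over time ("forgiveness") so transient faults are not punished
# forever.
suspicion = {}  # node id -> accumulated suspicion score

def observe(node, misbehaved, penalty=1.0, forgiveness=0.25):
    """Raise suspicion on misbehavior, decay it on good behavior."""
    score = suspicion.get(node, 0.0)
    suspicion[node] = score + penalty if misbehaved else max(0.0, score - forgiveness)

def cluster_vote(votes, quorum=0.5):
    """Shun the suspect if more than `quorum` of the cluster flags it."""
    return sum(votes) / len(votes) > quorum

observe("N7", misbehaved=True)
print(cluster_vote([True, True, False]))  # True: majority flags N7, shun it
```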

25
Simulation in GlomoSim
  • Passive Intrusion Detection
  • Individual determination
  • No results forwarding
  • Active Intrusion Detection
  • Cluster Scheme
  • Voting
  • Result flooding

26
GlomoSim Setup
  • 16 nodes in communication
  • 4 nodes as sources for 2 CBR streams
  • 2 node pairs for the CBR streams
  • Mobility: 0-20 meters/sec
  • Pause time: 0-15 s
  • No bad nodes

27
Simulation Results
28
Preliminary Results
  • Passive
  • False alarm rate > 50%
  • Throughput rate decrease < 3% additional
  • Active
  • False alarm rate < 30%
  • Throughput rate decrease 25% additional

29
challenges: is that all? (1)
  • Spatio-temporal variation of data and data
    sources
  • All devices in the neighborhood are potential
    information providers
  • Nothing is fixed
  • No global catalog
  • No global routing table
  • No centralized control
  • However, each entity can interact with its
    neighbors
  • By advertising / registering its service
  • By collecting / registering services of others

30
challenges: is that all? (2)
  • Query may be explicit or implicit, but is often
    known up-front
  • Users sometimes ask explicitly
  • e.g., "tell me the nearest restaurant that has
    vegetarian menu items"
  • The system can guess likely queries based on
    declarative information or past behavior
  • e.g., "the user always wants to know the price of
    IBM stock"

31
challenges: is that all? (3)
  • Since information sources are not known a priori,
    schema translations cannot be done beforehand
  • Resource-limited devices
  • ⇒ so hope for common, domain-specific ontologies?
  • Different modes
  • Device could interact with only such providers
    whose schemas it understands
  • Device could interact with anyone, and cache the
    information in hopes of a translation in the
    future.
  • Device could always try to do the translation
    itself
  • Prior work in Schema Translation, Ongoing work in
    Ontology Mapping.

32
challenges: is that all? (4)
  • Cooperation amongst information sources cannot be
    guaranteed
  • Device has reliable information, but makes it
    inaccessible
  • Device provides information which is
    unreliable
  • Once device shares information, it needs the
    capability to protect future propagation and
    changes to that information

33
challenges: is that all? (5)
  • Need to avoid humans in the loop
  • Devices must dynamically "predict" data
    importance and utility based on the current
    context
  • The key insight: declarative (or inferred)
    descriptions help
  • Information needs
  • Information capability
  • Constraints
  • Resources
  • Data
  • Answer fidelity
  • Expressive Profiles can capture such descriptions

34
4. our data management architecture
  • MoGATU
  • Design and implementation consist of
  • Data
  • Metadata
  • Profiles
  • Entities
  • Communication interfaces
  • Information Providers
  • Information Consumers
  • Information Managers

35
MoGATU metadata
  • Metadata representation
  • To provide information about
  • Information providers and consumers,
  • Data objects, and
  • Queries and answers
  • To describe relationships
  • To describe restrictions
  • To reason over the information
  • ⇒ Semantic language
  • DAML+OIL / DAML-S
  • http://mogatu.umbc.edu/ont/

36
MoGATU profile
  • Profile
  • User: preferences, schedule, requirements
  • Device: constraints, providers, consumers
  • Data: ownership, restrictions, requirements,
    process model
  • Profiles based on BDI models
  • Beliefs are facts
  • about user or environment/context
  • Desires and Intentions
  • higher level expressions of beliefs and goals
  • Devices reason over the BDI profiles (a sketch
    follows below)
  • Generate domains of interest and utility
    functions
  • Change domains and utility functions based on
    context
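A minimal sketch of a BDI-style profile, in Python. MoGATU's actual profiles are expressed in a semantic language, so the fields and the toy utility function here are purely illustrative:

```python
# Hypothetical sketch: beliefs, desires, and intentions drive a utility
# score that a device could use to rank data and services by domain.
from dataclasses import dataclass, field

@dataclass
class BDIProfile:
    beliefs: dict = field(default_factory=dict)     # facts about user/context
    desires: list = field(default_factory=list)     # higher-level goals
    intentions: list = field(default_factory=list)  # goals committed to

    def utility(self, domain):
        """Toy utility: committed domains score highest, desired ones next."""
        if domain in self.intentions:
            return 1.0
        if domain in self.desires:
            return 0.5
        return 0.1

p = BDIProfile(beliefs={"location": "conference room"},
               desires=["restaurants"], intentions=["meeting-slides"])
print(p.utility("meeting-slides"))  # 1.0
```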

37
MoGATU information manager (8)
  • Problems
  • Not all sources and data are
    correct/accurate/reliable
  • No common sense
  • Person can evaluate a web site based on how it
    looks, a computer cannot
  • No centralized party that could verify peer
    reliability or reliability of its data
  • Device is reliable, malicious, ignorant or
    uncooperative
  • Distributed Belief
  • Need to depend on other peers
  • Evaluate integrity of peers and data based on
    peer distributed belief
  • Detect which peer and what data is accurate
  • Detect malicious peers
  • Incentive model: if A is malicious, it will be
    excluded from the network

38
MoGATU information manager (9)
  • Distributed Belief Model
  • Device sends a query to multiple peers
  • Asks its vicinity for the reputation of untrusted
    peers that responded to the query
  • Trusts a device only if it was trusted before or
    if enough trusted peers trust it
  • Uses answers from (recommended to be) trusted
    peers to determine the final answer
  • Updates the reputation/trust level for all devices
    that responded
  • The trust level increases for devices whose
    responses agree with the final answer
  • The trust level decreases for devices that
    responded in a conflicting way
  • Each device builds a ring of trust (see the
    sketch below)
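A minimal sketch of the distributed belief update, in Python; the trust scale, threshold, and step sizes are illustrative assumptions, not the talk's actual functions:

```python
# Hypothetical sketch: trust a responder if it was trusted before, or if
# peers we already trust collectively vouch for it; then adjust trust by
# agreement with the final answer.
trust = {"D": 0.9, "E": 0.7, "F": 0.6}  # A's current ring of trust

def is_trusted(peer, recommendations, threshold=0.5):
    """recommendations: {recommender: its trust in `peer`, in [-1, 1]}."""
    if trust.get(peer, 0.0) >= threshold:
        return True  # trusted before
    # Weight each recommendation by our trust in the recommender.
    weighted = [trust[r] * v for r, v in recommendations.items() if r in trust]
    return bool(weighted) and sum(weighted) / len(weighted) > 0

def update(peer, agreed, step=0.1):
    """Raise trust for agreement with the final answer, lower it otherwise."""
    t = trust.get(peer, 0.0)
    trust[peer] = min(1.0, t + step) if agreed else max(-1.0, t - step)

# B is unknown to A, but D, E, and F weakly vouch for it (C is not trusted,
# so its opinion is ignored):
print(is_trusted("B", {"D": 0.3, "E": 0.2, "F": 0.0, "C": -0.5}))  # True
update("B", agreed=True)   # B's answer matched the final answer
update("C", agreed=False)  # C's answer conflicted
```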

39
A → D: Where is Bob?
A → C: Where is Bob?
A → B: Where is Bob?
40
C → A: Bob is at work.
B → A: Bob is home.
D → A: Bob is home.
41
A: B said Bob is at home, C said Bob is at work, D said Bob is at home.
A: I have enough trust in D. What about B and C?
42
A: Do you trust C?
B: I am not sure.
C: I always do.
D: I don't.
E: I don't.
F: I do.
A: I don't care what C says. I don't know enough
about B, but I trust D, E, and F. Together, they
don't trust C, so neither will I.
43
A: Do you trust B?
B: I do.
C: I never do.
D: I am not sure.
E: I do.
F: I am not sure.
A: I don't care what B says. I don't trust C,
but I trust D, E, and F. Together, they trust B
a little, so will I.
44
A: I trust B and D; both say Bob is home.
A: Increase trust in D.
A: Decrease trust in C.
A: Increase trust in B.
A: Bob is home!
45
MoGATU information manager (10)
  • Distributed Belief Model
  • Initial Trust Function
  • Positive, negative, undecided
  • Trust Learning Function
  • Blindly +, Blindly -, F+/S-, S+/F-, F+/F-, S+/S-,
    Exp
  • Trust Weighting Function
  • Multiplication, cosine
  • Accuracy Merging Function
  • Max, min, average (see the sketch below)
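A minimal sketch of the weighting and merging steps, in Python. The responses, the cosine damping, and the scoring are illustrative readings of "multiplication, cosine" and "max, min, avg", not the exact functions from the talk:

```python
# Hypothetical sketch: weight each answer's claimed accuracy by the
# responder's trust, merge per-answer scores, and pick the best answer.
import math
from collections import defaultdict

def weight(accuracy, trust, scheme="multiplication"):
    if scheme == "multiplication":
        return accuracy * trust
    # "cosine": damp the accuracy as trust falls away from 1
    return accuracy * math.cos((1.0 - trust) * math.pi / 2)

def merge(scores, scheme="avg"):
    return {"max": max, "min": min,
            "avg": lambda xs: sum(xs) / len(xs)}[scheme](scores)

def final_answer(responses, weighting="multiplication", merging="avg"):
    """responses: list of (answer, claimed accuracy, responder trust)."""
    per_answer = defaultdict(list)
    for answer, accuracy, trust in responses:
        per_answer[answer].append(weight(accuracy, trust, weighting))
    return max(per_answer, key=lambda a: merge(per_answer[a], merging))

print(final_answer([("home", 0.9, 0.8), ("work", 0.9, 0.3),
                    ("home", 0.7, 0.6)]))  # home
```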

46
experiments
  • Primary goal of distributed belief
  • Improve query processing accuracy by using
    trusted sources and trusted data
  • Problems
  • Not all sources and data are
    correct/accurate/reliable
  • No centralized party that could verify peer
    reliability or reliability of its data
  • Need to depend on other peers
  • No common sense
  • Person can evaluate a web site based on how it
    looks, a computer cannot
  • Solution
  • Evaluate integrity of peers and data based on
    peer distributed belief
  • Detect which peer and what data is accurate
  • Detect malicious peers
  • Incentive model: if A is malicious, it will be
    excluded from the network

47
experiments
  • Devices
  • Reliable (Share reliable data only)
  • Malicious (Try to share unreliable data as
    reliable)
  • Ignorant (Have unreliable data but believe they
    are reliable)
  • Uncooperative (Have reliable data, will not
    share)
  • Model
  • Device sends a query to multiple peers
  • Ask its vicinity for reputation of untrusted
    peers that responded to the query
  • Trust a device only if trusted before or if
    enough trusted peers trust it
  • Use answers from (recommended to be) trusted
    peers to determine answer
  • Update reputation/trust level for all devices
    that responded
  • A trust level increases for devices whose
    responses agree with the final answer
  • A trust level decreases for devices that
    responded in a conflicting way

48
experimental environment
  • How
  • MoGATU and GloMoSim
  • Spatio-temporal environment
  • 150 × 150 m² field
  • 50 nodes
  • Random way-point mobility
  • AODV
  • Cache holds 50% of global knowledge
  • Trust-based LRU
  • 50 minutes per simulation run
  • 800 question tuples
  • Each device: 100 random unique questions
  • Each device: 100 random unique answers, not
    matching its own questions
  • Each device initially trusts 3-5 other devices

49
experimental environment (2)
  • Level of Dishonesty
  • 0% to 100%
  • Dishonest device
  • Never provides an honest answer
  • Honest device
  • Best effort
  • Initial Trust Function
  • Positive, negative, undecided
  • Trust Learning Function
  • Blindly +, Blindly -, F+/S-, S+/F-, F+/F-, S+/S-,
    Exp
  • Trust Weighting Function
  • Multiplication, cosine
  • Accuracy Merging Function
  • Max, min, avg
  • Trust and Distrust Convergence
  • How soon are dishonest devices detected?

50
results
  • Answer Accuracy vs. Trust Learning Functions
  • Answer Accuracy vs. Accuracy Merging Functions
  • Distrust Convergence vs. Dishonesty Level

51
Answer Accuracy vs. Trust Learning Functions
  • The effects of trust learning functions with
    initial optimistic trust, for environments with
    varying levels of dishonesty.
  • The results are shown for the seven trust
    learning functions listed earlier (Blindly +,
    Blindly -, F+/S-, S+/F-, F+/F-, S+/S-, and Exp).

52
Answer Accuracy vs. Trust Learning Functions (2)
  • The effects of trust learning functions with
    initial pessimistic trust, for environments with
    varying levels of dishonesty.
  • The results are shown for the same seven trust
    learning functions (Blindly +, Blindly -, F+/S-,
    S+/F-, F+/F-, S+/S-, and Exp).

53
Answer Accuracy vs. Accuracy Merging Functions
  • The effects of accuracy merging functions for
    environments with varying levels of dishonesty.
    The results are shown for
  • (a) MIN using the only-one (OO) final answer
    approach
  • (b) MIN using the highest-one (HO) final answer
    approach
  • (c) MAX OO, (d) MAX HO, (e) AVG OO, and (f)
    AVG HO.

54
Distrust Convergence vs. Dishonesty Level
  • Average distrust convergence period in seconds
    for environments with varying levels of
    dishonesty.
  • Results (a-d) are shown for four of the trust
    learning functions with an initial optimistic
    trust strategy, and results (e-h) for the same
    functions using an undecided initial trust
    strategy, respectively.

55
http://ebiquity.umbc.edu/